2.0 years
10 Lacs
Chennai
Remote
Seeking a Sigma Developer to build dashboards, optimize SQL, integrate with JS frameworks, connect to cloud warehouses, ensure BI security, and support CI/CD. Must excel in Sigma, data modeling, and cross-team collaboration for data-driven insights. Required candidate profile: Bachelor's in a CS/Data field, 2+ yrs in Sigma/BI tools, SQL expert, experience with embedding, cloud warehouses (Snowflake/BigQuery), data modeling, BI security, and building responsive dashboards.
Posted 1 day ago
7.0 - 10.0 years
20 - 25 Lacs
Pune
Work from Office
Role & Responsibilities:
- Architect, build, and tune Snowflake data warehouses and ELT pipelines (SQL, Streams, Tasks, UDFs, stored procedures) to meet complex commercial-analytics workloads (the Streams/Tasks pattern is sketched after this listing).
- Integrate diverse pharma data sources (Veeva, Salesforce, IQVIA, Symphony, RWD, patient-services feeds) via Fivetran, ADF, or Python-based frameworks, ensuring end-to-end data quality.
- Establish robust data models (star, snowflake, Data Vault) optimized for sales reporting, market-share analytics, and AI/ML use cases.
- Drive governance and compliance (HIPAA, GDPR, GxP) through fine-grained access controls, masking, lineage, and metadata management.
- Lead code reviews, mentor engineers, and resolve performance bottlenecks while right-sizing compute for cost efficiency.
- Partner with business stakeholders to translate commercial objectives into scalable data solutions and actionable insights.
Skills & Qualifications (Must-Have):
- 7+ yrs data-engineering/warehousing experience, incl. 4+ yrs hands-on Snowflake design and development.
- Expert-level SQL plus strong data modeling (Dimensional, Data Vault) and ETL/ELT optimisation skills.
- Proficiency in Python (or similar) for automation, API integrations, and orchestration.
- Proven governance/security acumen within regulated industries (HIPAA, GDPR, PII).
- Bachelor's in Computer Science, Engineering, or Information Systems (Master's preferred).
- Strong client-facing communication and problem-solving ability in fast-paced, agile environments.
Preferred:
- Direct experience with pharma commercial datasets (sales, CRM, claims, MDM, adherence KPIs).
- Cloud-platform depth (AWS, Azure, or GCP) and familiarity with tools such as Matillion/dbt/Airflow and Git.
- Snowflake certifications (SnowPro Core/Advanced) plus Tableau, Power BI, or Qlik connectivity know-how.
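For illustration, the Streams/Tasks ELT pattern this listing names can be sketched as below. This is a minimal sketch using the snowflake-connector-python client; the connection settings and all object names (RAW_SALES, FACT_SALES, ETL_WH) are hypothetical placeholders, not details from the posting.

```python
# Minimal sketch of a Snowflake Streams/Tasks ELT step.
# Credentials and object names are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",   # hypothetical
    user="etl_user",        # hypothetical
    password="...",         # supply via a secrets manager in practice
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="COMMERCIAL",
)
cur = conn.cursor()

# A stream captures inserts/updates on the raw table (change data capture).
cur.execute("CREATE OR REPLACE STREAM RAW_SALES_STREAM ON TABLE RAW_SALES")

# A task drains the stream on a schedule and merges changes into the model.
cur.execute("""
CREATE OR REPLACE TASK LOAD_FACT_SALES
  WAREHOUSE = ETL_WH
  SCHEDULE = '15 MINUTE'
WHEN SYSTEM$STREAM_HAS_DATA('RAW_SALES_STREAM')
AS
  MERGE INTO FACT_SALES f
  USING RAW_SALES_STREAM s ON f.ORDER_ID = s.ORDER_ID
  WHEN MATCHED THEN UPDATE SET f.AMOUNT = s.AMOUNT
  WHEN NOT MATCHED THEN INSERT (ORDER_ID, AMOUNT) VALUES (s.ORDER_ID, s.AMOUNT)
""")
cur.execute("ALTER TASK LOAD_FACT_SALES RESUME")  # tasks are created suspended
```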
Posted 1 day ago
6.0 - 11.0 years
13 - 23 Lacs
Bengaluru
Work from Office
Greetings for the day! Scouting for a BI Engineer to be associated with an IT services (SaaS) organization.
Designation: BI Architect. Location: Bangalore. Mode: Hybrid. Shift: 1 PM to 10 PM.
Role & responsibilities:
- Engineer self-service BI solutions: design and implement robust, intuitive Power BI models that empower superb reporting and strategic decisions.
- Data warehouse development and integration: leverage Snowflake and dbt to design and develop data warehouse tables, ensuring they are seamlessly incorporated into the Power BI ecosystem for comprehensive reporting.
- Simplify complexity: turn complex data and processes into intuitive, actionable assets; design solutions and processes that enable data engineers, BI, and analysts to accelerate delivery of high-quality data assets.
- Performance optimization: ensure optimal performance of Power BI solutions.
- Stakeholder collaboration: deliver consistently on internal partner requests.
- Mentorship and support: mentor coworkers and empower business partners to build their own reports using our Power BI and Snowflake models.
- Continuous improvement: stay on the cutting edge of tools and tech to continuously enhance our Power BI/Snowflake capabilities.
- Technical documentation: for internal development alignment and stakeholder enablement.
Additional qualifications:
- Power BI version control using Azure DevOps or Git for scalable development.
- Analytical thinking: strong analytical skills to interpret data and model for actionable insights.
- SaaS experience: preferred experience with subscription data in a SaaS environment.
- Python and Fivetran experience: preferred, to automate processes and tap into the Power BI API.
Interested candidates, kindly share your updated resume to james.lobo@mappyresources.com
Posted 1 week ago
0.0 years
0 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose - the relentless pursuit of a world that works better for people - we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.
Inviting applications for the role of Lead Consultant - ML Engineer! In this role, we are looking for candidates who have relevant years of experience in designing and developing machine learning and deep learning systems, who have professional software development experience, who are hands-on in running machine learning tests and experiments, and who can implement appropriate ML algorithms.
Responsibilities:
- Drive the vision for a modern data and analytics platform to deliver well-architected and engineered data and analytics products leveraging the cloud tech stack and third-party products.
- Close the gap between ML research and production to create ground-breaking new products and features and solve problems for our customers.
- Design, develop, test, and deploy data pipelines, machine learning infrastructure, and client-facing products and services.
- Build and implement machine learning models and prototype solutions for proof-of-concept.
- Scale existing ML models into production on a variety of cloud platforms.
- Analyze and resolve architectural problems, working closely with engineering, data science, and operations teams.
Minimum Qualifications / Skills:
- Relevant years of experience.
- Bachelor's degree in computer science engineering or information technology, or BSc in Computer Science, Mathematics, or a similar field; Master's degree is a plus.
- Integration - APIs, microservices, and ETL/ELT patterns.
- DevOps (good to have) - Ansible, Jenkins, ELK.
- Containerization - Docker, Kubernetes, etc.
- Orchestration - Google Cloud Composer.
- Languages and scripting - Python, Scala, Java, etc.
- Cloud services - GCP, Snowflake.
- Analytics and ML tooling - SageMaker, ML Studio.
- Execution paradigm - low latency/streaming, batch.
Preferred Qualifications / Skills:
- Data platforms - dbt, Fivetran, and data warehouses (Teradata, Redshift, BigQuery, Snowflake, etc.).
- Visualization tools - Power BI, Tableau.
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
Posted 1 week ago
0.0 years
0 Lacs
Hyderabad / Secunderabad, Telangana, Telangana, India
On-site
Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose - the relentless pursuit of a world that works better for people - we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.
Inviting applications for the role of Lead Consultant - ML Engineer! In this role, we are looking for candidates who have relevant years of experience in designing and developing machine learning and deep learning systems, who have professional software development experience, who are hands-on in running machine learning tests and experiments, and who can implement appropriate ML algorithms.
Responsibilities:
- Drive the vision for a modern data and analytics platform to deliver well-architected and engineered data and analytics products leveraging the cloud tech stack and third-party products.
- Close the gap between ML research and production to create ground-breaking new products and features and solve problems for our customers.
- Design, develop, test, and deploy data pipelines, machine learning infrastructure, and client-facing products and services.
- Build and implement machine learning models and prototype solutions for proof-of-concept.
- Scale existing ML models into production on a variety of cloud platforms.
- Analyze and resolve architectural problems, working closely with engineering, data science, and operations teams.
Minimum Qualifications / Skills:
- Relevant years of experience.
- Bachelor's degree in computer science engineering or information technology, or BSc in Computer Science, Mathematics, or a similar field; Master's degree is a plus.
- Integration - APIs, microservices, and ETL/ELT patterns.
- DevOps (good to have) - Ansible, Jenkins, ELK.
- Containerization - Docker, Kubernetes, etc.
- Orchestration - Google Cloud Composer.
- Languages and scripting - Python, Scala, Java, etc.
- Cloud services - GCP, Snowflake.
- Analytics and ML tooling - SageMaker, ML Studio.
- Execution paradigm - low latency/streaming, batch.
Preferred Qualifications / Skills:
- Data platforms - dbt, Fivetran, and data warehouses (Teradata, Redshift, BigQuery, Snowflake, etc.).
- Visualization tools - Power BI, Tableau.
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
Posted 1 week ago
5.0 - 10.0 years
17 - 30 Lacs
Hyderabad
Remote
At Mitratech, we are a team of technocrats focused on building world-class products that simplify operations in the Legal, Risk, Compliance, and HR functions of Fortune 100 companies. We are a close-knit, globally dispersed team that thrives in an ecosystem that supports individual excellence and takes pride in its diverse and inclusive work culture centered around great people practices, learning opportunities, and having fun! Our culture is the ideal blend of entrepreneurial spirit and enterprise investment, enabling the chance to move at a rapid pace with some of the most complex, leading-edge technologies available. Given our continued growth, we always have room for more intellect, energy, and enthusiasm - join our global team and see why it's so special to be a part of Mitratech!
Job Description: We are seeking a highly motivated and skilled Analytics Engineer to join our dynamic data team. The ideal candidate will possess a strong background in data engineering and analytics, with hands-on experience in modern analytics tools such as Airbyte, Fivetran, dbt, Snowflake, and Airflow. This role will be pivotal in transforming raw data into valuable insights, ensuring data integrity, and optimizing our data infrastructure to support the organization's data platform.
Essential Duties & Responsibilities:
- Data integration and ETL processes: design, implement, and manage ETL pipelines using tools like Airbyte and Fivetran to ensure efficient and accurate data flow from various sources into our Snowflake data warehouse; maintain and optimize existing data integration workflows to improve performance and scalability.
- Data modeling and transformation: develop and maintain data models using dbt / dbt Cloud to transform raw data into structured, high-quality datasets that meet business requirements; ensure data consistency and integrity across datasets and implement data quality checks (illustrated in the sketch after this listing).
- Data warehousing: manage and optimize our Redshift / Snowflake data warehouses, ensuring they meet performance, storage, and security requirements; implement best practices for data warehouse management, including partitioning, clustering, and indexing.
- Collaboration and communication: work closely with data analysts, data scientists, and business stakeholders to understand data requirements and deliver solutions that meet their needs; communicate complex technical concepts to non-technical stakeholders clearly and concisely.
- Continuous improvement: stay updated with the latest developments in data engineering and analytics tools and evaluate their potential to enhance our data infrastructure; identify and implement opportunities for process improvement, automation, and optimization within the data pipeline.
Requirements & Skills:
- Education and experience: Bachelor's degree in Computer Science, Information Systems, Data Science, or a related field; 3-5 years of experience in data engineering or analytics engineering roles; experience in AWS and DevOps is a plus.
- Technical skills: proficiency with modern ETL tools such as Airbyte and Fivetran; must have experience with dbt for data modeling and transformation; extensive experience working with Snowflake or similar cloud data warehouses; solid understanding of SQL and experience writing complex queries for data extraction and manipulation; familiarity with Python or other programming languages used for data engineering tasks.
- Analytical skills: strong problem-solving skills and the ability to troubleshoot data-related issues; ability to understand business requirements and translate them into technical specifications.
- Soft skills: excellent communication and collaboration skills; strong organizational skills and the ability to manage multiple projects simultaneously; detail-oriented with a focus on data quality and accuracy.
We are an equal-opportunity employer that values diversity at all levels. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, national origin, age, sexual orientation, gender identity, disability, or veteran status.
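As an illustration of the dbt-plus-quality-checks workflow this listing mentions, the sketch below runs a dbt build for a model subset via the dbt CLI and then asserts a simple invariant directly in Snowflake. All model, table, and connection names (stg_orders, ANALYTICS.STAGING, etc.) are hypothetical placeholders.

```python
# Hypothetical sketch: run dbt transformations, then assert a simple
# data-quality invariant in Snowflake. Names are placeholders.
import subprocess

import snowflake.connector

# dbt CLI: build only the staging model and its downstream dependents.
subprocess.run(["dbt", "run", "--select", "stg_orders+"], check=True)

conn = snowflake.connector.connect(
    account="my_account", user="analytics_user", password="...",
    warehouse="TRANSFORM_WH", database="ANALYTICS", schema="STAGING",
)
cur = conn.cursor()

# Quality check: no NULL primary keys and no duplicate order ids.
cur.execute("""
    SELECT COUNT(*) - COUNT(DISTINCT ORDER_ID) AS dupes,
           SUM(IFF(ORDER_ID IS NULL, 1, 0))    AS null_keys
    FROM STG_ORDERS
""")
dupes, null_keys = cur.fetchone()
assert dupes == 0 and null_keys == 0, f"quality check failed: {dupes=} {null_keys=}"
```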
Posted 2 weeks ago
5.0 - 10.0 years
7 - 12 Lacs
Mumbai, Hyderabad, Bengaluru
Hybrid
Your day at NTT DATA: The Software Applications Development Engineer is a seasoned subject matter expert, responsible for developing new applications and improving upon existing applications based on the needs of the internal organization and/or external clients.
What you'll be doing (Data Engineer, 5 years of experience):
- Work closely with the Lead Data Engineer to understand business requirements, then analyse and translate these requirements into technical specifications and solution design.
- Work closely with the data modeller to ensure data models support the solution design.
- Develop, test, and fix ETL code using Snowflake, Fivetran, SQL, and stored procedures (see the sketch below).
- Analyse data and ETL for defects and service tickets raised against solutions in production.
- Develop documentation and artefacts to support projects.
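For context, below is a minimal sketch of the Snowflake stored-procedure work this listing describes, created and called through the Python connector. The procedure logic and all object names (ARCHIVE_STAGE, STAGE_ORDERS) are hypothetical, not taken from the posting.

```python
# Hypothetical sketch: create and call a Snowflake SQL-scripting procedure.
# Object names are illustrative placeholders only.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="...",
    warehouse="DEV_WH", database="EDW", schema="STAGE",
)
cur = conn.cursor()

# A procedure that archives processed rows, then clears the staging table.
cur.execute("""
CREATE OR REPLACE PROCEDURE ARCHIVE_STAGE()
RETURNS VARCHAR
LANGUAGE SQL
AS
$$
BEGIN
  INSERT INTO STAGE_ARCHIVE SELECT * FROM STAGE_ORDERS WHERE PROCESSED = TRUE;
  DELETE FROM STAGE_ORDERS WHERE PROCESSED = TRUE;
  RETURN 'archived';
END;
$$
""")
cur.execute("CALL ARCHIVE_STAGE()")
print(cur.fetchone()[0])  # -> 'archived'
```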
Posted 2 weeks ago
3.0 - 8.0 years
5 - 10 Lacs
Bengaluru
Work from Office
Okta is looking for a Sr. Marketing Data Operations Analyst to join the Marketing Data Operations & Technology team. Reporting into the Sr. Manager, Marketing Technology, this new role will support the management and optimisation of Okta's marketing data across our core marketing technology stack. Okta has a large marketing technology and data estate spanning an audience of millions, which includes inputs from a range of systems: Okta's CRM system (Salesforce), marketing automation platform (Adobe Marketo Engage), and connected infrastructure including tools spanning sales outreach (Outreach), ABM (6sense, Folloze), and data enrichment (Clay, Clearbit). The Sr. Marketing Data Operations Analyst will contribute to a number of critical areas supporting Okta's drive towards operational excellence across its marketing technology estate. This includes driving overall database health and improving data quality, managing integrations in the data operations function, and conducting ongoing data maintenance as well as the processes to support these efforts. The role is integral to delivering a program of technical efficiency, operational excellence, and a supporting framework of data-driven insights from within the Marketing Data Operations & Technology team. This role requires strong analytical skills, attention to detail, and the ability to collaborate with cross-functional teams. As such, the successful candidate will be able to demonstrate a data-driven marketing mindset and have demonstrable experience of working within a data operations function or role to support and drive marketing performance.
Job Duties and Responsibilities:
- Manage and drive data cleansing initiatives and enrichment processes to improve data accuracy and completeness.
- Administer and maintain key data enrichment marketing technology tools, with a focus on Clay and Clearbit, ensuring optimal configuration and utilization.
- Partner closely with key Marketing stakeholders to create and manage new use cases and workflows within our data enrichment tools - creating business requirement docs and technical architectural flows, and monitoring/measuring business impact.
- Partner closely with the greater Marketing Operations team to manage data updates, maintenance, and logic within 6sense to support effective ABM strategies.
- Identify data gaps, discrepancies, and issues and, where appropriate, own the design and implementation of processes to improve them.
- Assist with manual data load fixes across various platforms (Salesforce, Marketo, etc.), ensuring data integrity and resolving data discrepancies.
- Provide miscellaneous data operations tool support and fulfill tool provisioning requests, ensuring users have the necessary access and functionality.
- Drive and collaborate on the creation of a Marketing Data Ops data dictionary, ensuring data governance and clarity across tools and systems.
Skills & Experience:
- Required: 3+ years of experience working in a data operations function or role supporting go-to-market teams.
- Required: Experience of working with Salesforce (preference for candidates who have worked directly with systems integrations with Salesforce; Salesforce certifications are a plus). Candidates should be comfortable with the core Salesforce object models.
- Required: Experience of working with business stakeholders to understand existing workflows and business requirements and translate this into solution design and delivery.
- Required: Strong critical thinker and problem solver, with an eye for detail.
- Preferred: Knowledge of SQL (for analytics) and comfortable querying data warehouses (such as Snowflake) for analytical purposes.
- Preferred: Candidates with experience of integrating with analytics and data orchestration platforms (Openprise, Fivetran, Tableau, Datorama, Google Data Studio/Looker Studio).
- Preferred: Candidates with exposure to a range of marketing technology applications, for example: sales outreach platforms such as Outreach/SalesLoft; ABM platforms such as 6sense/Folloze; optimization/personalization platforms such as Intellimize/Optimizely; data enrichment tools such as Leadspace/Clay/ZoomInfo/Clearbit.
This role requires in-person onboarding and travel to our Bengaluru, IN office during the first week of employment.
Posted 3 weeks ago
5.0 - 10.0 years
25 - 35 Lacs
Chennai
Work from Office
Job Summary: We are seeking an experienced Manager - Data Engineer to join our dynamic team. In this role, you will be responsible for designing, implementing, and maintaining data infrastructure on Azure, with an extensive focus on Azure Databricks. You will work hand in hand with our analytics team to support data-driven decision making across different external clients in a variety of industries.
Scope of Work:
- Design, build, and maintain scalable data pipelines using Azure Data Factory (ADF), Fivetran, and other Azure services (a PySpark sketch follows this listing).
- Administer, monitor, and troubleshoot SQL Server databases, ensuring high performance and availability.
- Develop and optimize SQL queries and stored procedures to support data transformation and retrieval.
- Implement and maintain data storage solutions in Azure, including Azure Databricks, Azure SQL Database, Azure Blob Storage, and data lakes.
- Collaborate with business analysts, clients, and stakeholders to deliver insightful reports and dashboards using Power BI.
- Develop scripts to automate data processing tasks using languages such as Python, PowerShell, or similar.
- Ensure data security and compliance with industry standards and organizational policies.
- Stay updated with the latest technologies and trends in Azure cloud services and data engineering.
- Desired: experience in healthcare data analytics (including familiarity with encounter-based or claims-focused data models), manufacturing data analytics, or utility analytics.
Ideal Candidate Profile:
- Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field.
- At least 5-8 years of experience in data engineering with a strong focus on Microsoft Azure and Azure Databricks.
- Proven expertise in SQL Server database administration and development.
- Experience in building and optimizing data pipelines, architectures, and data sets on Azure.
- Experience with dbt and Fivetran.
- Familiarity with Azure AI and LLMs, including Azure OpenAI.
- Proficiency in Power BI for creating reports and dashboards.
- Strong scripting skills in Python, PowerShell, or other relevant languages.
- Familiarity with other Azure data services (e.g., Azure Synapse Analytics, Azure Blob Storage, etc.).
- Knowledge of data modeling, ETL processes, and data warehousing concepts.
- Excellent problem-solving skills and the ability to work independently or as part of a team.
- Strong communication and interpersonal skills to collaborate effectively with various teams and understand business requirements.
- Certifications in Azure Data Engineering or related fields.
- Experience with machine learning and data science projects (huge plus).
- Knowledge of additional BI tools and data integration platforms.
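Below is a minimal sketch of the kind of Databricks pipeline step this listing describes: read raw files from Azure Blob Storage with PySpark, apply a light quality gate, and write a Delta table. The storage path, column names, and table name are hypothetical placeholders.

```python
# Hypothetical Databricks/PySpark sketch: raw CSVs -> cleaned Delta table.
# Paths, columns, and table names are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("encounters_etl").getOrCreate()

raw = (
    spark.read.option("header", True)
    .csv("abfss://raw@myaccount.dfs.core.windows.net/encounters/")  # hypothetical
)

cleaned = (
    raw.withColumn("admit_date", F.to_date("admit_date", "yyyy-MM-dd"))
       .filter(F.col("patient_id").isNotNull())   # basic quality gate
       .dropDuplicates(["encounter_id"])
)

# Delta is Databricks' default table format; overwrite keeps the sketch simple.
cleaned.write.format("delta").mode("overwrite").saveAsTable("curated.encounters")
```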
Posted 3 weeks ago
7.0 - 12.0 years
25 - 30 Lacs
Coimbatore
Remote
Role & responsibilities
Summary: The Data Engineer will be responsible for ETL and documentation in building data warehouse and analytics capabilities. Additionally, they will maintain existing systems/processes and develop new features, along with reviewing, presenting, and implementing performance improvements.
Duties and Responsibilities:
- Build ETL (extract, transform, and load) jobs using Fivetran and dbt for our internal projects and for customers that use various platforms like Azure, Salesforce, and AWS technologies.
- Monitor active ETL jobs in production.
- Build out data lineage artifacts to ensure all current and future systems are properly documented.
- Assist with the build-out of design/mapping documentation to ensure development is clear and testable for QA and UAT purposes.
- Assess current and future data transformation needs to recommend, develop, and train new data integration tool technologies.
- Discover efficiencies with shared data processes and batch schedules to help ensure no redundancy and smooth operations.
- Assist the Data Quality Analyst to implement checks and balances across all jobs to ensure data quality throughout the entire environment for current and future batch jobs.
- Hands-on experience in developing and implementing large-scale data warehouses, Business Intelligence, and MDM solutions, including Data Lakes/Data Vaults.
Required Skills:
- This job has no supervisory responsibilities.
- Bachelor's degree in Computer Science, Math, Software Engineering, Computer Engineering, or a related field AND 6+ years of experience in business analytics, data science, software development, data modeling, or data engineering work.
- 5+ years of experience with strong SQL query/development skills.
- Develop ETL routines that manipulate and transfer large volumes of data and perform quality checks.
- Hands-on experience with ETL tools (e.g., Informatica, Talend, dbt, Azure Data Factory).
- Experience working in the healthcare industry with PHI/PII.
- Creative, lateral, and critical thinker; excellent communicator; well-developed interpersonal skills.
- Good at prioritizing tasks and time management.
- Ability to describe, create, and implement new solutions.
- Experience with related or complementary open source software platforms and languages (e.g., Java, Linux, Apache, Perl/Python/PHP, Chef).
- Knowledge of / hands-on experience with BI tools and reporting software (e.g., Cognos, Power BI, Tableau).
Posted 3 weeks ago
7.0 - 12.0 years
15 - 30 Lacs
Hyderabad
Remote
Lead Data Engineer with Health Care Domain
Role & responsibilities
Position: Lead Data Engineer. Experience: 7+ years. Location: Hyderabad | Chennai | Remote.
Summary: The Data Engineer will be responsible for ETL and documentation in building data warehouse and analytics capabilities. Additionally, they will maintain existing systems/processes and develop new features, along with reviewing, presenting, and implementing performance improvements.
Duties and Responsibilities:
- Build ETL (extract, transform, and load) jobs using Fivetran and dbt for our internal projects and for customers that use various platforms like Azure, Salesforce, and AWS technologies.
- Monitor active ETL jobs in production.
- Build out data lineage artifacts to ensure all current and future systems are properly documented.
- Assist with the build-out of design/mapping documentation to ensure development is clear and testable for QA and UAT purposes.
- Assess current and future data transformation needs to recommend, develop, and train new data integration tool technologies.
- Discover efficiencies with shared data processes and batch schedules to help ensure no redundancy and smooth operations.
- Assist the Data Quality Analyst to implement checks and balances across all jobs to ensure data quality throughout the entire environment for current and future batch jobs.
- Hands-on experience in developing and implementing large-scale data warehouses, Business Intelligence, and MDM solutions, including Data Lakes/Data Vaults.
Required Skills:
- This job has no supervisory responsibilities.
- Strong experience with Snowflake and Azure Data Factory (ADF) is required.
- Bachelor's degree in Computer Science, Math, Software Engineering, Computer Engineering, or a related field AND 6+ years of experience in business analytics, data science, software development, data modeling, or data engineering work.
- 5+ years of experience with strong SQL query/development skills.
- Develop ETL routines that manipulate and transfer large volumes of data and perform quality checks.
- Hands-on experience with ETL tools (e.g., Informatica, Talend, dbt, Azure Data Factory).
- Experience working in the healthcare industry with PHI/PII.
- Creative, lateral, and critical thinker; excellent communicator; well-developed interpersonal skills.
- Good at prioritizing tasks and time management.
- Ability to describe, create, and implement new solutions.
- Experience with related or complementary open source software platforms and languages (e.g., Java, Linux, Apache, Perl/Python/PHP, Chef).
- Knowledge of / hands-on experience with BI tools and reporting software (e.g., Cognos, Power BI, Tableau).
- Big Data stack (e.g., Snowflake (Snowpark), Spark, MapReduce, Hadoop, Sqoop, Pig, HBase, Hive, Flume).
Posted 3 weeks ago
5.0 - 10.0 years
4 - 9 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Role & responsibilities:
- Advanced understanding of AWS services.
- Understanding of cloud-based services (GitHub, ServiceNow, Orca, Datadog, Broadcom, Fivetran).
- Hands-on experience with release management and deployment.
- Advanced understanding of Linux administration (log files, command line, system services, custom and managed package installations).
- Knowledge of network protocols, security, and compliance.
- Strong knowledge of scripting (Python, PHP, Bash).
- Knowledge of application integration technologies (API, middleware, webhooks).
Posted 4 weeks ago
6.0 - 11.0 years
20 - 25 Lacs
Noida, Mumbai
Work from Office
Responsibilities:
- Act as data domain expert for Snowflake in a collaborative environment, providing a demonstrated understanding of data management best practices and patterns.
- Design and implement robust data architectures to meet and support business requirements, leveraging Snowflake platform capabilities.
- Develop and enforce data modeling standards and best practices for Snowflake environments.
- Develop, optimize, and maintain Snowflake data warehouses.
- Leverage Snowflake features such as clustering, materialized views, and semi-structured data processing to enhance data solutions.
- Ensure data architecture solutions meet performance, security, and scalability requirements.
- Stay current with the latest developments and features in Snowflake and related technologies, continually enhancing our data capabilities.
- Collaborate with cross-functional teams to gather business requirements, translate them into effective data solutions in Snowflake, and provide data-driven insights.
- Stay updated with the latest trends and advancements in data architecture and Snowflake technologies.
- Provide mentorship and guidance to junior data engineers and architects.
- Troubleshoot and resolve data architecture-related issues effectively.
Skills Requirement:
- 5+ years of proven experience as a Data Engineer, with 3+ years as a Data Architect.
- Proficiency in Snowflake, with hands-on experience with features such as clustering, materialized views, and semi-structured data processing.
- Experience in designing and building manual or auto-ingestion data pipelines using Snowpipe (sketched after this listing).
- Design and develop automated monitoring processes on Snowflake using a combination of Python, PySpark, and Bash with SnowSQL; experience in developing stored procedures and writing queries to analyze and transform data.
- Working experience with ETL tools like Fivetran, dbt Labs, and MuleSoft.
- Expertise in Snowflake concepts like setting up resource monitors, RBAC controls, scalable virtual warehouses, SQL performance tuning, zero-copy clone, and time travel, and automating them.
- Excellent problem-solving skills and attention to detail.
- Effective communication and collaboration abilities.
- Relevant certifications (e.g., SnowPro Core / Advanced) are a must-have.
- Must have expertise in AWS, Azure, and the Salesforce Platform as a Service (PaaS) model and its integration with Snowflake to load/unload data.
- Strong communication; an exceptional team player with effective problem-solving skills.
Educational Qualification Required: Master's degree in Business Management (MBA / PGDM) / Bachelor's degree in Computer Science, Information Technology, or a related field.
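For illustration, the Snowpipe auto-ingestion and zero-copy-clone features named above can be sketched as below. The stage, pipe, table names, and S3 location are hypothetical placeholders; real credentials should come from a storage integration or secrets manager, not inline literals.

```python
# Hypothetical sketch of a Snowpipe auto-ingest setup plus a zero-copy clone.
# All object names and the bucket URL are illustrative placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="arch_user", password="...",
    warehouse="INGEST_WH", database="EDW", schema="LANDING",
)
cur = conn.cursor()

# External stage over the cloud bucket that receives new files.
cur.execute("""
CREATE OR REPLACE STAGE ORDERS_STAGE
  URL = 's3://example-bucket/orders/'
  CREDENTIALS = (AWS_KEY_ID = '...' AWS_SECRET_KEY = '...')
  FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
""")

# AUTO_INGEST = TRUE lets bucket event notifications trigger loads continuously.
cur.execute("""
CREATE OR REPLACE PIPE ORDERS_PIPE AUTO_INGEST = TRUE AS
  COPY INTO RAW_ORDERS FROM @ORDERS_STAGE
""")

# Zero-copy clone: an instant, storage-free copy for testing the pipeline.
cur.execute("CREATE OR REPLACE TABLE RAW_ORDERS_DEV CLONE RAW_ORDERS")
```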
Posted 4 weeks ago
8.0 - 13.0 years
25 - 35 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Looking for a Cloud Engineering and Operations Specialist with a deep understanding of AWS services and cloud-based services (GitHub, ServiceNow, Orca, Datadog, Broadcom, Fivetran), plus hands-on experience with release management and deployment.
Posted 4 weeks ago
3.0 - 8.0 years
7 - 17 Lacs
Pune
Work from Office
vConstruct, a Pune-based construction technology company, is seeking a Data Engineer for its Data Science and Analytics team, a close-knit group of analysts and engineers supporting all data aspects of the business. You will be responsible for designing, developing, and maintaining our data infrastructure, ensuring data integrity, and supporting various data-driven projects. You will work closely with cross-functional teams to integrate, process, and manage data from various sources, enabling business insights and enhancing operational efficiency.
Responsibilities:
- Design, develop, and maintain robust, scalable data pipelines and ETL/ELT processes to efficiently ingest, transform, and store data from diverse sources.
- Collaborate with cross-functional teams to design, implement, and sustain data-driven solutions that optimize data flow and system integration.
- Develop and maintain pipelines to move data in real-time (streaming), on-demand, and batch modes - whether inbound to a central data warehouse, outbound to other systems, or point-to-point - focusing on security, reusability, and data quality.
- Implement pipelines with comprehensive error-handling mechanisms that are visible to both technical and functional teams.
- Ensure optimized pipeline performance with timely data delivery, including appropriate alerts and notifications.
- Adhere to data engineering best practices for code management and automated deployments, incorporating validation and test automation across all data engineering efforts.
- Perform debugging, application issue resolution, and root cause analysis, and assist in proactive/preventive maintenance.
- Collaborate with the extended data team to define and enforce standards, guidelines, and data models that ensure data quality and promote best practices.
- Write and execute complete testing plans, protocols, and documentation for assigned portions of the data system or components; identify defects and create solutions for issues with code and integration into the data system architecture.
- Work closely with data analysts, business users, and developers to ensure the accuracy, reliability, and performance of data solutions.
- Monitor data performance, troubleshoot issues, and optimize existing solutions.
- Create and maintain technical documentation related to data architecture, integration flows, and processes.
- Organize and lead discussions with business and operational data stakeholders to understand requirements and deliver solutions.
- Partner with analysts, developers, and business users to build data solutions that are scalable, maintainable, and aligned with business objectives.
Qualifications:
- 3 to 6 years of experience as a Data Engineer, with a focus on building scalable data solutions.
- Over 3 years of experience in scripting languages such as Python for data processing, automation, and ETL development.
- 3+ years of hands-on experience working with Snowflake.
- 3+ years of experience with data integration tools such as Azure Data Factory, Fivetran, or Matillion.
- Strong experience in writing complex, highly optimized SQL queries on large datasets (3+ years); deep expertise in SQL, with a focus on database performance tuning and optimization.
- Experience working with data platforms like Snowflake, Azure Synapse, or Microsoft Fabric.
- Proven experience integrating APIs and handling diverse data sources; ability to understand, consume, and utilize APIs, JSON, and web services for building data pipelines (see the sketch below).
- Experience designing and implementing data pipelines using cloud platforms such as Azure or AWS.
- Familiarity with orchestration tools like Apache Airflow or equivalent.
- Experience with CI/CD practices and automation in data engineering workflows.
- Knowledge of dbt or similar tools for data transformation is a plus.
- Familiarity with Power BI or other data visualization tools is a plus.
- Strong problem-solving skills with the ability to troubleshoot complex data issues.
- Excellent communication skills and a collaborative mindset to work effectively in team environments.
Education: Bachelor's or Master's degree in Computer Science / Information Technology or a related field. Equivalent academic and work experience can be considered.
About vConstruct: vConstruct specializes in providing high-quality Building Information Modeling and construction technology services geared towards construction projects. vConstruct is a wholly owned subsidiary of DPR Construction. For more information, please visit www.vconstruct.com.
About DPR Construction: DPR Construction is a national commercial general contractor and construction manager specializing in technically challenging and sustainable projects for the advanced technology, biopharmaceutical, corporate office, higher education, and healthcare markets. With the purpose of building great things - great teams, great buildings, great relationships - DPR is a truly great company. For more information, please visit www.dpr.com.
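The sketch below illustrates the API-to-warehouse batch pattern referenced in the qualifications: pull JSON from a REST endpoint and bulk-insert it into a raw Snowflake table. The endpoint, payload shape, and table are hypothetical placeholders.

```python
# Hypothetical sketch of an API-to-Snowflake batch load.
# The REST endpoint, JSON shape, and table name are placeholders.
import requests
import snowflake.connector

resp = requests.get("https://api.example.com/v1/projects", timeout=30)
resp.raise_for_status()
rows = [(p["id"], p["name"], p["status"]) for p in resp.json()["projects"]]

conn = snowflake.connector.connect(
    account="my_account", user="pipeline_user", password="...",
    warehouse="LOAD_WH", database="ANALYTICS", schema="RAW",
)
cur = conn.cursor()
cur.execute("""
    CREATE TABLE IF NOT EXISTS RAW_PROJECTS (
        ID VARCHAR, NAME VARCHAR, STATUS VARCHAR
    )
""")
# Parameterized bulk insert; failures surface to the scheduler for alerting.
cur.executemany("INSERT INTO RAW_PROJECTS VALUES (%s, %s, %s)", rows)
conn.commit()
```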
Posted 4 weeks ago
10.0 - 12.0 years
1 - 1 Lacs
Hyderabad
Hybrid
Role: Lead Data Engineer. Experience: 10+ years. Contract: 6+ months.
Job Summary: We are seeking an experienced and results-oriented Lead Data Engineer to drive the design, development, and optimization of enterprise data solutions. This onsite role requires deep expertise in Fivetran, Snowflake, SQL, Python, and data modeling, as well as a demonstrated ability to lead teams and mentor both Data Engineers and BI Engineers. The role will play a critical part in shaping the data architecture, improving analytics readiness, and enabling self-service business intelligence through scalable star schema designs.
Key Responsibilities:
- Lead end-to-end data engineering efforts, including architecture, ingestion, transformation, and delivery.
- Architect and implement Fivetran-based ingestion pipelines and Snowflake data models.
- Create optimized star schemas to support analytics, self-service BI, and KPI reporting (see the sketch after this listing).
- Analyze and interpret existing report documentation and KPIs to guide modeling and transformation strategies.
- Design and implement efficient, scalable data workflows using SQL and Python.
- Review and extend existing reusable data engineering templates and frameworks.
- Provide technical leadership and mentorship to Data Engineers and BI Engineers, ensuring best practices in coding, modeling, performance tuning, and documentation.
- Collaborate with business stakeholders to gather requirements and translate them into scalable data solutions.
- Work closely with BI teams to enable robust reporting and dashboarding capabilities.
Required Skills:
- 7+ years of hands-on data engineering experience, with 2+ years in a technical leadership or lead role.
- Deep expertise in Fivetran, Snowflake, and SQL development.
- Proficiency in Python for data transformation and orchestration.
- Strong understanding of data warehousing principles, including star schema design and dimensional modeling.
- Experience in analysing business KPIs and reports to influence data model design.
- Demonstrated ability to mentor both Data Engineers and BI Engineers and provide architectural guidance.
- Excellent problem-solving, communication, and stakeholder management skills.
Share CV to: Careers@rwavesoftech.com
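A minimal star-schema sketch of the kind this listing references: one fact table at order-line grain keyed to two dimensions. All table and column names are hypothetical, executed here through the Python connector for consistency with the role's stack.

```python
# Hypothetical star schema: FACT_ORDERS keyed to DIM_CUSTOMER and DIM_DATE.
# Names are placeholders; Snowflake records but does not enforce these keys.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="modeler", password="...",
    warehouse="MODEL_WH", database="ANALYTICS", schema="MART",
)
cur = conn.cursor()

cur.execute("""
CREATE TABLE IF NOT EXISTS DIM_CUSTOMER (
    CUSTOMER_KEY INTEGER AUTOINCREMENT PRIMARY KEY,
    CUSTOMER_ID  VARCHAR,   -- natural key from the source system
    SEGMENT      VARCHAR
)
""")
cur.execute("""
CREATE TABLE IF NOT EXISTS DIM_DATE (
    DATE_KEY  INTEGER PRIMARY KEY,   -- e.g. 20240131
    FULL_DATE DATE, YEAR INTEGER, MONTH INTEGER
)
""")
# The fact table stores measures at the grain of one order line.
cur.execute("""
CREATE TABLE IF NOT EXISTS FACT_ORDERS (
    CUSTOMER_KEY INTEGER REFERENCES DIM_CUSTOMER (CUSTOMER_KEY),
    DATE_KEY     INTEGER REFERENCES DIM_DATE (DATE_KEY),
    ORDER_AMOUNT NUMBER(12, 2),
    QUANTITY     INTEGER
)
""")
```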
Posted 4 weeks ago
5.0 - 10.0 years
5 - 10 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Role & responsibilities:
- Design, develop, and optimize scalable data pipelines for ETL/ELT processes.
- Develop and maintain Python-based data processing scripts and automation tools.
- Write and optimize complex SQL queries (preferably in Snowflake) for data transformation and analytics.
- Experience with Jenkins or other CI/CD tools.
- Experience developing with Snowflake as the data platform.
- Experience with ETL/ELT tools (preferably Fivetran, dbt).
- Implement version control best practices using Git or other tools to manage code changes.
- Collaborate with cross-functional teams (analysts, product managers, and engineers) to understand business needs and translate them into technical data solutions.
- Ensure data integrity, security, and governance across multiple data sources.
- Optimize query performance and database architecture for efficiency and scalability.
- Lead troubleshooting and debugging efforts for data-related issues.
- Document data workflows, architectures, and best practices to ensure maintainability and knowledge sharing.
Preferred candidate profile:
- 5+ years of experience in Data Engineering, Software Engineering, or a related field.
- Bachelor's or Master's degree in Computer Science, Computer Engineering, or a related discipline.
- High proficiency in SQL (preferably Snowflake) for data modeling, performance tuning, and optimization.
- Strong expertise in Python for data processing and automation.
- Experience with Git or other version control tools in a collaborative development environment.
- Strong communication skills and ability to collaborate with cross-functional teams for requirements gathering and solution design.
- Experience working with large-scale, distributed data systems and cloud data warehouses.
Posted 1 month ago
5 - 7 years
15 - 25 Lacs
Pune, Mumbai (All Areas)
Hybrid
DUTIES AND RESPONSIBILITIES:
- Build ETL (extract, transform, and load) jobs using Fivetran and dbt for our internal projects and for customers that use various platforms like Azure, Salesforce, and AWS technologies.
- Monitor active ETL jobs in production.
- Build out data lineage artifacts to ensure all current and future systems are properly documented.
- Assist with the build-out of design/mapping documentation to ensure development is clear and testable for QA and UAT purposes.
- Assess current and future data transformation needs to recommend, develop, and train new data integration tool technologies.
- Discover efficiencies with shared data processes and batch schedules to help ensure no redundancy and smooth operations.
- Assist the Data Quality Analyst to implement checks and balances across all jobs to ensure data quality throughout the entire environment for current and future batch jobs.
- Hands-on experience in developing and implementing large-scale data warehouses, Business Intelligence, and MDM solutions, including Data Lakes/Data Vaults.
SUPERVISORY RESPONSIBILITIES: This job has no supervisory responsibilities.
QUALIFICATIONS:
- Bachelor's degree in Computer Science, Math, Software Engineering, Computer Engineering, or a related field AND 6+ years of experience in business analytics, data science, software development, data modeling, or data engineering work.
- 3-5 years of experience with strong SQL query/development skills.
- Develop ETL routines that manipulate and transfer large volumes of data and perform quality checks.
- Hands-on experience with ETL tools (e.g., Informatica, Talend, dbt, Azure Data Factory).
- Experience working in the healthcare industry with PHI/PII.
- Creative, lateral, and critical thinker; excellent communicator; well-developed interpersonal skills.
- Good at prioritizing tasks and time management.
- Ability to describe, create, and implement new solutions.
- Experience with related or complementary open source software platforms and languages (e.g., Java, Linux, Apache, Perl/Python/PHP, Chef).
- Knowledge of / hands-on experience with BI tools and reporting software (e.g., Cognos, Power BI, Tableau).
- Big Data stack (e.g., Snowflake (Snowpark), Spark, MapReduce, Hadoop, Sqoop, Pig, HBase, Hive, Flume).
Posted 1 month ago
8 - 13 years
12 - 22 Lacs
Hyderabad, Bengaluru, Mumbai (All Areas)
Work from Office
Greetings of the day! We have an urgent on-rolls opening for the position of Snowflake Architect at one of our reputed clients, working from home.
Name of the company: confidential. Rolls: on-rolls. Mode of employment: FTE / sub-con / contract. Job location: remote. Work timings: night shift, 06:00 pm to 03:00 am IST. Nature of work: work from home. Working days: 5 days weekly. Educational qualification: Bachelor's degree in computer science, BCA, engineering, or a related field. Salary: maximum CTC would be 23 LPA (salary and benefits package will be commensurate with experience and qualifications; PF and medical insurance cover available). Languages known: English, Hindi, and the local language. Experience: 9+ years of relevant experience in the same domain.
Job Summary: We are seeking a highly skilled and experienced Snowflake Architect to lead the design, development, and implementation of scalable, secure, and high-performance data warehousing solutions on the Snowflake platform. The ideal candidate will possess deep expertise in data modelling, cloud architecture, and modern ELT frameworks. You will be responsible for architecting robust data pipelines, optimizing query performance, and ensuring enterprise-grade data governance and security. In this role, you will collaborate with data engineers, analysts, and business stakeholders to deliver efficient data solutions that drive informed decision-making across the organization.
Key Responsibilities:
- Manage and maintain the Snowflake platform to ensure optimal performance and reliability.
- Collaborate with data engineers and analysts to design and implement data pipelines.
- Develop and optimize SQL queries for efficient data retrieval and manipulation.
- Create custom scripts and functions using JavaScript and Python to automate platform tasks.
- Troubleshoot platform issues and provide timely resolutions.
- Implement security best practices to protect data within the Snowflake platform.
- Stay updated on the latest Snowflake features and best practices to continuously improve platform performance.
Required Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Minimum of nine years of experience in managing any database platform.
- Proficiency in SQL for data querying and manipulation.
- Strong programming skills in JavaScript and Python.
- Experience in optimizing and tuning Snowflake for performance.
Preferred skills: technical expertise, cloud and integration, performance and optimization, security and governance, soft skills.
The candidate should be willing to join within 07-10 days or be an immediate joiner. Interested candidates, please share your updated resume to executivehr@monalisammllp.com; you can also call or WhatsApp us at 9029895581, mentioning:
- Current/last net in hand (salary will be offered based on the interview/technical evaluation process)
- Notice period and LWD (was/will be)
- Reason for changing the job
- Total years of experience in the specific field
- The location you are from
- Whether you hold any offer from any other association
Regards,
Monalisa Group of Services, HR Department
9029895581 - Call / WhatsApp
executivehr@monalisammllp.com
Posted 1 month ago
5 - 8 years
0 - 1 Lacs
Hyderabad
Hybrid
Job Title: Sr. Data Engineer (Fivetran SDK Connector / Hightouch Developer). Work Location: Hyderabad. Years of Experience: 5 to 8 years. Shift Timings: 3 PM to 12 AM.
Skill Set:
- Fivetran and Fivetran SDK development.
- Expertise in Python for connector development.
- Understanding of Hightouch.
Roles & Responsibilities:
- Design, build, and maintain custom connectors using the Fivetran SDK (a hedged sketch follows this listing).
- Develop and manage Reverse ETL pipelines using Hightouch.
- Integrate data from diverse APIs and source systems into cloud data warehouses.
- Ensure data reliability, quality, and performance across pipelines.
- Optimize SQL transformations and data workflows.
- Collaborate with data engineers, analysts, and stakeholders to deliver high-quality data solutions.
- Monitor and troubleshoot connector issues, ensuring robust logging and error handling.
Other Specifications:
- 3 years of hands-on experience with Fivetran and the Fivetran SDK.
- Strong proficiency in Python, especially for SDK-based connector development.
- Advanced SQL skills for data manipulation and transformation.
- Practical experience with Hightouch for Reverse ETL use cases.
- Experience with cloud data warehouses: Snowflake, BigQuery, or Redshift.
- Strong understanding of REST APIs, webhooks, and authentication mechanisms.
- Solid knowledge of ETL/ELT pipelines, data modeling, and data syncing.
- Excellent problem-solving, debugging, and documentation skills.
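Below is a hedged sketch of a custom connector in the style of Fivetran's Python Connector SDK. The import path, the Connector/Operations names, and the schema()/update() contract are assumptions based on the SDK's documented pattern and should be verified against the current fivetran_connector_sdk documentation; the source API and field names are entirely hypothetical.

```python
# Hedged sketch of a Fivetran SDK custom connector. The SDK surface used here
# (Connector, Operations, upsert/checkpoint, debug) is an assumption based on
# the SDK's documented pattern - verify against current docs before use.
import requests
from fivetran_connector_sdk import Connector, Operations as op  # assumed API


def schema(configuration: dict):
    # Declare the destination tables and their primary keys.
    return [{"table": "tickets", "primary_key": ["id"]}]


def update(configuration: dict, state: dict):
    # Incremental sync: pull rows changed since the last checkpoint.
    since = state.get("cursor", "1970-01-01T00:00:00Z")
    resp = requests.get(
        "https://api.example.com/tickets",   # hypothetical source API
        params={"updated_since": since},
        headers={"Authorization": f"Bearer {configuration['api_key']}"},
        timeout=30,
    )
    resp.raise_for_status()
    for row in resp.json()["tickets"]:
        yield op.upsert(table="tickets", data=row)
        since = max(since, row["updated_at"])
    # Persist the cursor so the next run resumes where this one stopped.
    yield op.checkpoint({"cursor": since})


connector = Connector(update=update, schema=schema)

if __name__ == "__main__":
    connector.debug()  # local test run, per the SDK's documented workflow
```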
Posted 1 month ago
1 - 4 years
3 - 6 Lacs
Pune
Work from Office
The Data Integration Engineer will play a key role in designing, building, and maintaining data integrations between core business systems such as Salesforce and SAP and our enterprise data warehouse on Snowflake. This position is ideal for an early-career professional (1 to 4 years of experience) eager to contribute to transformative data integration initiatives and learn in a collaborative, fast-paced environment.
Duties & Responsibilities:
- Collaborate with cross-functional teams to understand business requirements and translate them into data integration solutions.
- Develop and maintain ETL/ELT pipelines using modern tools like Informatica IDMC to connect source systems to Snowflake.
- Ensure data accuracy, consistency, and security in all integration workflows.
- Monitor, troubleshoot, and optimize data integration processes to meet performance and scalability goals.
- Support ongoing integration projects, including Salesforce and SAP data pipelines, while adhering to best practices in data governance.
- Document integration designs, workflows, and operational processes for effective knowledge sharing.
- Assist in implementing and improving data quality controls at the start of processes to ensure reliable outcomes.
- Stay informed about the latest developments in integration technologies and contribute to team learning and improvement.
Qualifications - Required Skills and Experience:
- 5+ years of hands-on experience in data integration, ETL/ELT development, or data engineering.
- Proficiency in SQL and experience working with relational databases such as Snowflake, PostgreSQL, or SQL Server.
- Familiarity with data integration tools such as Fivetran, Informatica Intelligent Data Management Cloud (IDMC), or similar platforms.
- Basic understanding of cloud platforms like AWS, Azure, or GCP.
- Experience working with structured and unstructured data in varying formats (e.g., JSON, XML, CSV).
- Strong problem-solving skills and the ability to troubleshoot data integration issues effectively.
- Excellent verbal and written communication skills, with the ability to document technical solutions clearly.
Preferred Skills and Experience:
- Exposure to integrating business systems such as Salesforce or SAP into data platforms.
- Knowledge of data warehousing concepts and hands-on experience with Snowflake.
- Familiarity with APIs, event-driven pipelines, and automation workflows.
- Understanding of data governance principles and data quality best practices.
Education: Bachelor's degree in Computer Science, Data Engineering, or a related field, or equivalent practical experience.
Posted 1 month ago