6.0 - 10.0 years
0 - 0 Lacs
hyderabad, telangana
On-site
You will be joining QTek Digital, a leading data solutions provider known for its expertise in custom data management, data warehouse, and data science solutions. Our team of dedicated data professionals, including data scientists, data analysts, and data engineers, collaborates to address present-day challenges and pave the way for future innovations. At QTek Digital, we value our employees and focus on fostering engagement, empowerment, and continuous growth opportunities. As a BI ETL Engineer at QTek Digital, you will be taking on a full-time remote position. Your primary responsibilities will revolve around tasks such as data modeling, applying analytical skills, implementing data warehouse solutions, and managing Extract, Transform, Load (ETL) processes. This role demands strong problem-solving capabilities and the capacity to work autonomously. To excel in this role, you should ideally possess: - 6-9 years of hands-on experience in ETL and ELT pipeline development using tools like Pentaho, SSIS, FiveTran, Airbyte, or similar platforms. - 6-8 years of practical experience in SQL and other data manipulation languages. - Proficiency in Data Modeling, Dashboard creation, and Analytics. - Sound knowledge of data warehousing principles, particularly Kimball design. - Bonus points for familiarity with Pentaho and Airbyte administration. - Demonstrated expertise in Data Modeling, Dashboard design, Analytics, Data Warehousing, and ETL procedures. - Strong troubleshooting and problem-solving skills. - Effective communication and collaboration abilities. - Capability to operate both independently and as part of a team. - A Bachelor's degree in Computer Science, Information Systems, or a related field. This position is based in our Hyderabad office, offering an attractive compensation package ranging from INR 5-19 Lakhs, depending on various factors such as your skills and prior experience. Join us at QTek Digital and be part of a dynamic team dedicated to shaping the future of data solutions.,
Posted 1 day ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
YASH Technologies is a leading technology integrator specializing in helping clients reimagine operating models, enhance competitiveness, optimize costs, foster exceptional stakeholder experiences, and drive business transformation. You will be part of a team of highly skilled professionals working with cutting-edge technologies. Our purpose is to bring real positive changes in an increasingly virtual world, transcending generational gaps and disruptions of the future. We are seeking AWS Glue Professionals with the following qualifications: - 3 or more years of experience in AWS Glue, Redshift, and Python - 3+ years of experience in engineering with expertise in ETL work with cloud databases - Proficiency in data management and data structures, including writing code for data reading, transformation, and storage - Experience in launching spark jobs in client mode and cluster mode, with knowledge of spark job property settings and their impact on performance - Proficiency with source code control systems like Git - Experience in developing ELT/ETL processes for loading data from enterprise-sized RDBMS systems such as Oracle, DB2, MySQL, etc. - Coding proficiency in Python or expertise in high-level languages like Java, C, Scala - Experience in using REST APIs - Expertise in SQL for manipulating database data, familiarity with views, functions, stored procedures, and exception handling - General knowledge of AWS Stack (EC2, S3, EBS), IT Process Compliance, SDLC experience, and formalized change controls - Working in DevOps teams based on Agile principles (e.g., Scrum) - ITIL knowledge, especially in incident, problem, and change management - Proficiency in PySpark for distributed computation - Familiarity with Postgres and ElasticSearch At YASH, you will have the opportunity to build a career in an inclusive team environment. We offer career-oriented skilling models and leverage technology for continuous learning, unlearning, and relearning at a rapid pace and scale. Our workplace is grounded in four principles: - Flexible work arrangements, free spirit, and emotional positivity - Agile self-determination, trust, transparency, and open collaboration - Support for the realization of business goals - Stable employment with a great atmosphere and ethical corporate culture.,
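For illustration only: a minimal PySpark sketch of the kind of Spark job property tuning and ETL flow this posting describes. The property values, JDBC endpoint, credentials, and S3 path are placeholder assumptions, not requirements of the role.

```python
# Minimal PySpark sketch: illustrative Spark job property settings for an ETL job.
# All values are hypothetical and should be tuned per workload.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("glue-style-etl-example")
    # Fewer, larger shuffle partitions can cut task overhead on small clusters.
    .config("spark.sql.shuffle.partitions", "64")
    # Executor sizing directly affects parallelism and memory pressure.
    .config("spark.executor.memory", "4g")
    .config("spark.executor.cores", "2")
    .getOrCreate()
)

# Read from a relational source via JDBC, clean it, and write curated Parquet to S3.
df = (
    spark.read.format("jdbc")
    .option("url", "jdbc:mysql://example-host:3306/sales")   # placeholder endpoint
    .option("dbtable", "orders")
    .option("user", "etl_user")
    .option("password", "***")
    .load()
)

cleaned = df.dropDuplicates(["order_id"]).filter("order_total > 0")
cleaned.write.mode("overwrite").parquet("s3://example-bucket/curated/orders/")
```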
Posted 1 day ago
5.0 - 9.0 years
0 Lacs
maharashtra
On-site
You are a strategic thinker passionate about driving solutions in Data Domain. You have found the right team. As a Data Domain Modeler in Transformation & Innovation team, you will lead the design and implementation of end-to-end data models starting from raw data to the semantic layer that makes our data more accessible and understandable for different personas ranging from finance users, data analysts, automation, quantitative research, and machine learning teams. Being part of an influential and data-centric team focused on data accessibility, you will work on designing new data models for domains such as headcount, contractors, financials, forecasting models, markets, and macro-economic scenarios. You will also represent the data domains in the overall information architecture strategy to optimize data models for end-user consumption, identify data homogenization opportunities, and optimize data pipelines in our data lake-house. You will lead the engagement and partner with product owners, business users (both technical and non-technical), data providers, and technology teams across the entire finance function to design and deliver data products. Work on some of the most complex and highly visible data problems in finance, at the intersection of finance and technology. Design and build a new cloud-based data lakehouse for the P&A community, leveraged by Analysts to CFO for their day-to-day reporting. Work on a wide range of data sets and use cases to support different Planning & Analysis processes, and personally lead and drive the design of them. Create solutions for key data challenges and implement innovative technology-based solutions at the bank such as an enterprise data catalog and AI-enabled conversational analytics. Partner with other high-performing teams within JPM to inspire innovation and champion change throughout the bank. Required qualifications, capabilities, and skills: - Strong analytical and problem-solving skills with attention to detail to formulate effective data models to address user consumption pain points and lead their delivery. - Curious mind to dig deep into the business and data to understand the context: Inquisitive and analytical mindset, challenges the status quo, and strives for excellence. - 5+ years of relevant experience designing and implementing data models and analytic solutions using dimensional and relational data models. - Hands-on and flexible approach to creating solutions aligned with the tools and skills of the client user. Strong communication skills to present data products and educate data consumers. - Strong knowledge and experience using SQL & Python for data analysis, data engineering, and transformation to answer business questions. - Experience with ETL / ELT process and architecture to move data across pipelines in a lake. - Experience building analytics dashboards or models suited for interactive dashboard consumption. - Experience with cloud-based data lake platforms such as AWS, Azure, or Google Cloud. - Bachelor's degree in computer science, data science, information systems, business analytics, or a related discipline.,
Posted 1 day ago
3.0 - 7.0 years
0 Lacs
pune, maharashtra
On-site
Agivant is seeking a talented and passionate Senior Data Engineer to join our growing data team. In this role, you will play a key part in building and scaling our data infrastructure, enabling data-driven decision-making across the organization. You will be responsible for designing, developing, and maintaining efficient and reliable data pipelines for both ELT (Extract, Load, Transform) and ETL (Extract, Transform, Load) processes. Responsibilities: Design, develop, and maintain robust and scalable data pipelines for ELT and ETL processes, ensuring data accuracy, completeness, and timeliness. Work with stakeholders to understand data requirements and translate them into efficient data models and pipelines. Build and optimize data pipelines using a variety of technologies, including Elasticsearch, AWS S3, Snowflake, and NFS. Develop and maintain data warehouse schemas and ETL/ELT processes to support business intelligence and analytics needs. Implement data quality checks and monitoring to ensure data integrity and identify potential issues. Collaborate with data scientists and analysts to ensure data accessibility and usability for various analytical purposes. Stay current with industry best practices, CI/CD/DevSecFinOps, Scrum, and emerging technologies in data engineering. Contribute to the development and enhancement of our data warehouse architecture. Requirements: - Bachelor's degree in Computer Science, Engineering, or a related field. - 5+ years of experience as a Data Engineer with a strong focus on ELT/ETL processes. - At least 3+ years of experience in Snowflake data warehousing technologies. - At least 3+ years of experience in creating and maintaining Airflow ETL pipelines. - Minimum 3+ years of professional-level experience with Python for data manipulation and automation. - Working experience with Elasticsearch and its application in data pipelines. - Proficiency in SQL and experience with data modeling techniques. - Strong understanding of cloud-based data storage solutions such as AWS S3. - Experience working with NFS and other file storage systems. - Excellent problem-solving and analytical skills. - Strong communication and collaboration skills.
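For illustration only: a minimal Airflow DAG sketch of an ELT flow of the kind this role maintains (extract to S3, then load and transform in Snowflake). The DAG id, schedule, connection details, and task bodies are placeholder assumptions.

```python
# A minimal Airflow DAG sketch for an ELT flow: stage raw data in S3, then
# load and transform in Snowflake. All names and paths are placeholders.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_to_s3(**context):
    # Pull from the source system and stage raw files in S3 (details omitted).
    print("extracted raw files to s3://example-bucket/raw/")

def load_and_transform(**context):
    # Run COPY INTO / MERGE statements against Snowflake (details omitted).
    print("loaded staged files and applied transformations in Snowflake")

with DAG(
    dag_id="example_elt_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_to_s3", python_callable=extract_to_s3)
    transform = PythonOperator(task_id="load_and_transform", python_callable=load_and_transform)
    extract >> transform
```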
Posted 1 day ago
4.0 - 8.0 years
0 Lacs
navi mumbai, maharashtra
On-site
You will be part of a data analytics services company that specializes in creating and managing scalable data platforms for a diverse client base. Leveraging cutting-edge technologies, you will provide actionable insights and value through modern data stack solutions. Your responsibilities will include designing, building, and managing customer data platforms independently using Snowflake, dbt, Fivetran, and SQL. Collaborating with clients and internal teams to gather business requirements and translating them into reliable data solutions will be a key aspect of your role. You will also develop and maintain ELT pipelines with Fivetran and dbt for automating data ingestion, transformation, and delivery. Optimizing SQL code and data models for scalability, performance, and cost efficiency in Snowflake will be crucial. Additionally, ensuring data platform reliability, monitoring, and data quality maintenance will be part of your responsibilities. You will also provide technical mentorship and guidance to junior engineers and maintain comprehensive documentation of engineering processes and architecture. The required skills and qualifications for this role include proven hands-on experience with Snowflake, dbt, Fivetran, and SQL. You should have a strong understanding of data warehousing concepts, ETL/ELT best practices, and modern data stack architectures. Experience in working independently and owning project deliverables end-to-end is essential. Familiarity with version control systems like Git and workflow automation tools, along with solid communication and documentation skills, is necessary. You should also be able to interact directly with clients and understand their business requirements. Preferred skills that would be beneficial for this role include exposure to cloud platforms like AWS, GCP, and Azure, knowledge of Python or other scripting languages for data pipelines, and experience with BI/analytics tools such as Tableau, Power BI, and Looker. In return, you will have the opportunity to lead the implementation of state-of-the-art data platforms for global clients in a dynamic, growth-oriented work environment with flexible working arrangements and a competitive compensation package. If you are interested in this opportunity, please submit your resume and a short cover letter detailing your experience with Snowflake, dbt, Fivetran, and SQL.,
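For illustration only: a minimal Python sketch of the kind of Snowflake transformation step such a Fivetran/dbt-style pipeline automates, using the snowflake-connector-python package. The account, credentials, and table names are placeholder assumptions.

```python
# A minimal sketch of an incremental upsert in Snowflake driven from Python,
# of the kind modern ELT pipelines schedule after ingestion. Placeholders only.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",
    user="etl_user",
    password="***",
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="STAGING",
)

merge_sql = """
MERGE INTO analytics.marts.dim_customer AS tgt
USING analytics.staging.stg_customer AS src
  ON tgt.customer_id = src.customer_id
WHEN MATCHED THEN UPDATE SET tgt.email = src.email, tgt.updated_at = src.updated_at
WHEN NOT MATCHED THEN INSERT (customer_id, email, updated_at)
  VALUES (src.customer_id, src.email, src.updated_at)
"""

try:
    cur = conn.cursor()
    cur.execute(merge_sql)          # incremental upsert from staging into the mart
    print(f"rows affected: {cur.rowcount}")
finally:
    conn.close()
```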
Posted 1 day ago
5.0 - 9.0 years
0 Lacs
hyderabad, telangana
On-site
You strive to be an essential member of a diverse team of visionaries dedicated to making a lasting impact. Don't pass up this opportunity to collaborate with some of the brightest minds in the field and deliver best-in-class solutions to the industry. As a Senior Lead Data Architect at JPMorgan Chase within the Consumer and Community Banking Data Technology, you are an integral part of a team that works to develop high-quality data architecture solutions for various software applications, platform, and data products. Drive significant business impact and help shape the global target state architecture through your capabilities in multiple data architecture domains. Represents the data architecture team at technical governance bodies and provides feedback regarding proposed improvements regarding data architecture governance practices. Evaluates new and current technologies using existing data architecture standards and frameworks. Regularly provides technical guidance and direction to support the business and its technical teams, contractors, and vendors. Design secure, high-quality, scalable solutions and reviews architecture solutions designed by others. Drives data architecture decisions that impact data product & platform design, application functionality, and technical operations and processes. Serves as a function-wide subject matter expert in one or more areas of focus. Actively contributes to the data engineering community as an advocate of firmwide data frameworks, tools, and practices in the Software Development Life Cycle. Influences peers and project decision-makers to consider the use and application of leading-edge technologies. Advises junior architects and technologists. Required qualifications, capabilities, and skills: - Formal training or certification on software engineering concepts and 5+ years of applied experience. - Advanced knowledge of architecture, applications, and technical processes with considerable in-depth knowledge in data architecture discipline and solutions (e.g., data modeling, native cloud data services, business intelligence, artificial intelligence, machine learning, data domain driven design, etc.). - Practical cloud-based data architecture and deployment experience, preferably AWS. - Practical SQL development experiences in cloud-native relational databases, e.g. Snowflake, Athena, Postgres. - Ability to deliver various types of data models with multiple deployment targets, e.g. conceptual, logical, and physical data models deployed as operational vs. analytical data stores. - Advanced in one or more data engineering disciplines, e.g. streaming, ELT, event processing. - Ability to tackle design and functionality problems independently with little to no oversight. - Ability to evaluate current and emerging technologies to select or recommend the best solutions for the future state data architecture. Preferred qualifications, capabilities, and skills: - Financial services experience, card and banking a big plus. - Practical experience in modern data processing technologies, e.g., Kafka streaming, DBT, Spark, Airflow, etc. - Practical experience in data mesh and/or data lake. - Practical experience in machine learning/AI with Python development a big plus. - Practical experience in graph and semantic technologies, e.g. RDF, LPG, Neo4j, Gremlin. - Knowledge of architecture assessments frameworks, e.g. Architecture Trade-off Analysis.,
Posted 2 days ago
8.0 - 12.0 years
0 Lacs
hyderabad, telangana
On-site
The role is seeking a dynamic individual to join the M&R Sales Tech team, bringing expertise in software development of ETL and ELT jobs for the data warehouse software development team. This position plays a crucial role in defining the Design and Architecture during the migration from legacy SSIS technology to cutting-edge cloud technologies such as Azure, Databricks, and Snowflake. The ideal candidate will possess a robust background in Software Architecture, data engineering, and cloud technologies. Key Responsibilities: Architectural Design: Design and implement data architectures of ETL, including creating algorithms, developing data models and schemas, and setting up data pipelines. Technical Leadership: Provide technical leadership to the software development team to ensure alignment of data solutions with business objectives and overall IT strategy. Data Strategy and Management: Define data strategy and oversee data management within the organization, focusing on data governance, quality, privacy, and security using Databricks and Snowflake technologies. Implementation of Machine Learning Models: Utilize Databricks for implementing machine learning models, conducting data analysis, and deriving insights. Data Migration and Integration: Transfer data from on-premise or other cloud platforms to Snowflake, integrating Snowflake and Databricks with other systems for seamless data flow. Performance Tuning: Optimize database performance by fine-tuning queries, enhancing processing speed, and improving data storage and retrieval mechanisms. Troubleshooting and Problem Solving: Identify and resolve issues related to Database, data migration, data pipelines, and other ETL processes, addressing concerns like data quality, system performance, and data security. Stakeholder Communication: Effectively communicate with stakeholders to grasp requirements and deliver solutions that meet business needs. Requirement Qualifications: Education: Bachelor's degree in Computer Science, Engineering, or related field, or equivalent experience. Experience: Minimum of 8 years of experience in software development and Architecture role. Technical Skills: Proficiency in ETL/ELT processes and tools, particularly SSIS; 5+ years of experience with large data warehousing applications; solid experience with reporting tools like Power BI and Tableau; familiarity with creating batch and real-time jobs with Databricks and Snowflake, and working with streaming platforms like Kafka and Airflow. Soft Skills: Strong leadership and team management skills, problem-solving abilities, and effective communication and interpersonal skills. Preferred Qualifications: Experience with Agile development methodologies. Certification in relevant cloud technologies (e.g., Azure, Databricks, Snowflake). Primary Skills: Azure, Snowflake, Databricks Secondary Skills: SSIS, Power BI, Tableau Role Purpose: The purpose of the role is to create exceptional architectural solution design and thought leadership, enabling delivery teams to provide exceptional client engagement and satisfaction. Key Roles and Responsibilities: Develop architectural solutions for new deals/major change requests, ensuring scalability, reliability, and manageability of systems. Provide solutioning of RFPs from clients, ensuring overall design assurance. Manage the portfolio of to-be-solutions to align with business outcomes, analyzing technology environment, client requirements, and enterprise specifics. 
Offer technical leadership in designing, developing, and implementing custom solutions using modern technology. Define current and target state solutions, articulate architectural targets, recommendations, and propose investment roadmaps. Evaluate and recommend solutions for integration with the technology ecosystem. Collaborate with IT groups to ensure task transition, performance, and issue resolution. Enable Delivery Teams by providing optimal delivery solutions, building relationships with stakeholders, and developing relevant metrics to drive results. Manage multiple projects, identify risks, ensure quality assurance, and recommend tools for reuse and automation. Support pre-sales teams in presenting solution designs to clients, negotiate requirements, and demonstrate thought leadership. Competency Building and Branding: Develop PoCs, case studies, and white papers, attain market recognition, and mentor team members for career development. Team Management: Resourcing, Talent Management, Performance Management, Employee Satisfaction and Engagement. Join us at Wipro, a business driven by purpose and reinvention, where your ambitions can be realized through constant evolution and empowerment. Applications from individuals with disabilities are encouraged.,
Posted 2 days ago
2.0 - 7.0 years
30 - 35 Lacs
Bengaluru
Work from Office
Senior Data Engineer Date: 27 Jul 2025 Location: Bangalore, IN Company: kmartaustr A place you can belong: We celebrate the rich diversity of the communities in which we operate and are committed to creating inclusive and safe environments where all our team members can contribute and succeed. We believe that all team members should feel valued, respected, and safe irrespective of your gender, ethnicity, indigeneity, religious beliefs, education, age, disability, family responsibilities, sexual orientation and gender identity and we encourage applications from all candidates. Job Description: 5-7 years of experience as a Data Engineer. 3+ years with AWS services such as IAM, API Gateway, EC2, S3. 2+ years of experience creating and deploying containers on Kubernetes. 2+ years of experience with CI/CD pipelines such as Jenkins and GitHub. 2+ years of experience with Snowflake data warehousing. 5-7 years with the ETL/ELT paradigm. 5-7 years with big data technologies such as Spark and Kafka. Strong skills in Python, Java, or Scala.
Posted 3 days ago
3.0 - 8.0 years
7 - 12 Lacs
Bengaluru
Work from Office
Primary Responsibilities: Be a team player in an agile team within a release team / value stream. Develop and automate business solutions by creating new and modifying existing software applications. Be technically hands-on and excellent in design, coding and testing end to end, ensuring product quality. Participate and contribute to Sprint Ceremonies. Promote and develop the culture of collaboration, accountability, and quality. Provide technical support to the team and help the team in resolving technical issues. Closely work with Tech Lead, onshore partners, deployment, and infrastructure teams. Basic, structured, standard approach to work. Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so. Required Qualifications: Graduate degree or equivalent experience. 3+ years of experience working in Data Warehousing and Data Mart platforms. 3+ years of working experience in the warehousing ecosystem: design & development, scheduling jobs using Airflow, and running and monitoring refreshes. 3+ years of working experience in Big Data technologies around Spark or PySpark and Databricks. 3+ years of working experience in an Agile team. 2+ years of working experience in cloud and DevOps technologies, preferably on Azure: Docker/Kubernetes/Terraform/Chef. Working experience in CI/CD pipelines (test, build, deployment and monitoring automation). Knowledge of software configuration management and packaging. Demonstrates excellent problem-solving skills. Preferred Qualification: 3+ years of working experience in ELT/ETL design & development and solid experience in SQL on Teradata and Snowflake. At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.
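For illustration only: a minimal PySpark sketch of the kind of scheduled data-mart refresh this warehousing role runs (e.g. triggered from Airflow). Paths and column names are placeholder assumptions.

```python
# A minimal PySpark sketch of a data-mart refresh: read curated data, aggregate
# it into a daily summary, and overwrite the mart table. Placeholders only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("claims-mart-refresh").getOrCreate()

claims = spark.read.parquet("/mnt/datalake/curated/claims/")   # placeholder path

daily_summary = (
    claims
    .withColumn("claim_date", F.to_date("claim_timestamp"))
    .groupBy("claim_date", "plan_code")
    .agg(
        F.count("*").alias("claim_count"),
        F.sum("claim_amount").alias("total_amount"),
    )
)

# Overwrite the mart table partitioned by date so downstream reports stay current.
(
    daily_summary.write
    .mode("overwrite")
    .partitionBy("claim_date")
    .parquet("/mnt/datalake/marts/claims_daily_summary/")
)
```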
Posted 3 days ago
5.0 - 7.0 years
5 - 9 Lacs
Gurugram
Work from Office
Assist in building out the backlog of Power BI dashboards, ensuring they meet business requirements and provide actionable insights. Collect and maintain a firmwide inventory of existing reports, identifying those that need to be converted to Power BI. Collaborate with the team to contract and integrate Snowflake, ensuring seamless data flow and accessibility for reporting and analytics. Desired Skills and experience Candidates should have a B.E./B.Tech/MCA/MBA in Information Systems, Computer Science or a related field 3+ years strong experience in developing and managing Power BI dashboards and reports, preferably within the financial services industry. Experience required in Data Warehousing, SQL, and hands-on expertise in ETL/ELT processes. Familiarity with Snowflake data warehousing solutions and integration. Proficiency in data integration from various sources including APIs and databases. Proficient in SQL for querying and manipulating data. Strong understanding of data warehousing concepts and practices. Experience with deploying and managing dashboards on a Power BI server to service a large number of users. Familiarity with other BI tools and platforms. Experience with financial datasets and understanding Private equity metrics. Knowledge of cloud platforms, particularly Azure, Snowflake, and Databricks. Excellent problem-solving skills and attention to detail. Strong communication skills, both written and oral, with a business and technical aptitude Must possess good verbal and written communication and interpersonal skills Key Responsibilities Create and maintain interactive and visually appealing Power BI dashboards to visualize data insights. Assist in building out the backlog of Power BI dashboards, ensuring they meet business requirements and provide actionable insights. Integrate data from various sources including APIs, databases, and cloud storage solutions such as Azure, Snowflake, and Databricks. Collect and maintain a firmwide inventory of existing reports, identifying those that need to be converted to Power BI. Collaborate with the team to contract and integrate Snowflake, ensuring seamless data flow and accessibility for reporting and analytics. Continuously refine and improve the user interface of dashboards based on ongoing input and feedback. Monitor and optimize the performance of dashboards to handle large volumes of data efficiently. Work closely with stakeholders to understand their reporting needs and translate them into effective Power BI solutions. Ensure the accuracy and reliability of data within Power BI dashboards and reports. Deploy dashboards onto a Power BI server to be serviced to a large number of users, ensuring high availability and performance. Ensure that dashboards provide self-service capabilities and are interactive for end-users. Create detailed documentation of BI processes and provide training to internal teams and clients on Power BI usage Stay updated with the latest Power BI and Snowflake features and best practices to continuously improve reporting capabilities. 
Behavioral Competencies Effectively communicate with business and technology partners, peers and stakeholders Ability to deliver results under demanding timelines to real-world business problems Ability to work independently and multi-task effectively Identify and communicate areas for improvement Demonstrate high attention to detail, should work in a dynamic environment whilst maintaining high quality standards, a natural aptitude to develop good internal working relationships and a flexible work ethic Responsible for Quality Checks and adhering to the agreed Service Level Agreement (SLA) / Turn Around Time (TAT)
Posted 3 days ago
7.0 - 12.0 years
9 - 14 Lacs
Pune, Bengaluru
Work from Office
What You'll Do: Design & implement an enterprise data management strategy aligned with business processes, focusing on data model designs, database development standards and data management frameworks. Develop and maintain data management and governance frameworks to ensure data quality, consistency and compliance for different Discovery domains such as multi-omics, in vivo, ex vivo and in vitro datasets. Design and develop scalable cloud-based (AWS or Azure) solutions following enterprise standards. Design robust data models for semi-structured/structured datasets by following various modelling techniques. Design & implement complex ETL data pipelines to handle various semi-structured/structured datasets coming from labs and scientific platforms. Work with lab ecosystems (ELNs, LIMS, CDS, etc.) to build integration & data solutions around them. Collaborate with various stakeholders, including data scientists, researchers, and IT, to optimize data utilization and align data strategies with organizational goals. Stay abreast of the latest trends in data management technologies and introduce innovative approaches to data analysis and pipeline development. Lead projects from conception to completion, ensuring alignment with enterprise goals and standards. Communicate complex technical details effectively to both technical and non-technical stakeholders. What You'll Bring: Minimum of 7+ years of hands-on experience in developing data management solutions solving problems in the Discovery/Research domain. Advanced knowledge of data management tools and frameworks, such as SQL/NoSQL, ETL/ELT tools, and data visualization tools across various private clouds. Strong experience in the following: cloud-based DBMS/data warehouse offerings (AWS Redshift, AWS RDS/Aurora, Snowflake, Databricks); ETL tools (cloud-based tools). Well versed with different cloud computing offerings in AWS and Azure. Well aware of industry-followed data security and governance norms. Building API integration layers between multiple systems. Hands-on experience with data platform technologies like Databricks, AWS, Snowflake, HPC (certifications will be a plus). Strong programming skills in languages such as Python and R. Strong organizational and leadership skills. Bachelor's or Master's degree in Computational Biology, Computer Science, or a related field; a Ph.D. is a plus. Preferred/Good To Have: MLOps expertise leveraging ML platforms like Dataiku, Databricks, SageMaker. Experience with other technologies like data sharing (e.g. Starburst), data virtualization (Denodo), API management (MuleSoft, etc.). Cloud Solution Architect certification (such as AWS SA Professional or others).
Posted 3 days ago
4.0 - 7.0 years
6 - 9 Lacs
Bengaluru
Work from Office
Job Description: Vontier's Data & Analytics Hub is seeking an experienced Snowflake Data Engineer to join our team. The Data Engineer will have a key role in designing, developing, and maintaining data pipelines and data models using the Snowflake cloud data platform. The ideal candidate will have considerable hands-on experience with the Snowflake data warehouse, including data ingestion, transformation, and optimization. Responsibilities: Design, develop, and maintain data pipelines and data models using the Snowflake cloud data platform. Implement best practices for data quality, security, and performance. Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements and deliver solutions. Provide technical guidance and mentorship to junior data engineers. Stay updated with the latest trends and technologies in the data engineering domain. Requirements: Bachelor's degree in Computer Science, Engineering, or a related field. At least 5 years of experience in data engineering, preferably in a cloud environment. Cloud Platforms: Familiarity with any cloud environment such as AWS or Azure. Proficient in SQL and Python, and familiar with other programming languages such as Java, Scala, or R. Hands-on experience with the Snowflake data warehouse, including data ingestion, transformation, and optimization. ETL/ELT processes & methodologies for data integration: collaborate with data engineers and developers to seamlessly integrate diverse data sources into the Snowflake data platform. This task requires proficiency in ETL/ELT processes, ensuring data is accurately and efficiently ingested, transformed, and stored. Successful application of native Snowflake ingestion components such as snowpipes, streams, use of external tables and incremental processing (a minimal sketch follows this posting). Successful application of orchestration and materialization techniques native to Snowflake such as dynamic tables and complex DAGs. Experience with Kafka streaming pipes and Kafka forwarders. Experience with data integration tools such as Matillion. Data modelling to design and implement data models and schemas. Data Warehousing Concepts: Understanding of data warehousing concepts and practices, including data normalization, denormalization, and partitioning. Implement best practices for data quality, security, and performance. Performance Tuning: Skills in optimizing Snowflake performance, including query optimization and resource management. Strong analytical and problem-solving skills, and attention to detail. Excellent communication and collaboration skills, and ability to work in a fast-paced environment. Good to have: Certifications in Snowflake are preferred. Experience with data visualization tools such as Power BI. Familiarity with finance, procurement, and manufacturing processes, including budgeting, cost management, and supply chain optimization, to support business efficiency and strategic decision-making is a plus. MLOps: Demonstrated familiarity with decision sciences applications and approaches utilizing Snowpark compute, Snowflake Model registries, Snowflake Feature registries, and the ongoing MLOps support of Snowflake ML workflow practices. Who Is Vontier: Vontier (NYSE: VNT) is a global industrial technology company uniting productivity, automation and multi-energy technologies to meet the needs of a rapidly evolving, more connected mobility ecosystem. Leveraging leading market positions, decades of domain expertise and unparalleled portfolio breadth, Vontier enables the way the world moves, delivering smart, safe and sustainable solutions to our customers and the planet. Vontier has a culture of continuous improvement and innovation built upon the foundation of the Vontier Business System and embraced by colleagues worldwide. Additional information about Vontier is available on the Company's website. At Vontier, we empower you to steer your career in the direction of success with a dynamic, innovative, and inclusive environment. Our commitment to personal growth, work-life balance, and collaboration fuels a culture where your contributions drive meaningful change. We provide the roadmap for continuous learning, allowing creativity to flourish and ideas to accelerate into impactful solutions that contribute to a sustainable future. Join our community of passionate people who work together to navigate challenges and seize opportunities. At Vontier, you are not on this journey alone; we are dedicated to equipping you with the tools and support needed to fuel your innovation, lead with impact, and thrive both personally and professionally. Together, let's enable the way the world moves!
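For illustration only (the sketch referenced in the posting above): a minimal Snowpark Python example of stream-based incremental processing on a Snowpipe-loaded landing table. Connection parameters, object names, and columns are placeholder assumptions.

```python
# A minimal Snowpark sketch: a stream captures changes on a landing table and
# only the new rows are appended to a curated table. Placeholders throughout.
from snowflake.snowpark import Session

session = Session.builder.configs({
    "account": "example_account",
    "user": "etl_user",
    "password": "***",
    "warehouse": "INGEST_WH",
    "database": "RAW",
    "schema": "SALES",
}).create()

# A stream tracks changes on the landing table loaded by Snowpipe.
session.sql("""
    CREATE STREAM IF NOT EXISTS raw.sales.orders_stream
    ON TABLE raw.sales.orders_landing
""").collect()

# Consume only newly inserted rows and append them to the curated table.
changes = session.table("raw.sales.orders_stream").filter("METADATA$ACTION = 'INSERT'")
changes.select("order_id", "customer_id", "order_total").write.mode("append") \
    .save_as_table("curated.sales.orders")

session.close()
```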
Posted 3 days ago
6.0 - 11.0 years
12 - 17 Lacs
Pune
Work from Office
Roles and Responsibilities: The Senior Tech Lead - Databricks leads the design, development, and implementation of advanced data solutions. The role requires extensive experience in Databricks, cloud platforms, and data engineering, with a proven ability to lead teams and deliver complex projects. Responsibilities: Lead the design and implementation of Databricks-based data solutions. Architect and optimize data pipelines for batch and streaming data. Provide technical leadership and mentorship to a team of data engineers. Collaborate with stakeholders to define project requirements and deliverables. Ensure best practices in data security, governance, and compliance. Troubleshoot and resolve complex technical issues in Databricks environments. Stay updated on the latest Databricks features and industry trends. Key Technical Skills & Responsibilities: Experience in data engineering using Databricks or Apache Spark-based platforms. Proven track record of building and optimizing ETL/ELT pipelines for batch and streaming data ingestion. Hands-on experience with Azure services such as Azure Data Factory, Azure Data Lake Storage, Azure Databricks, Azure Synapse Analytics, or Azure SQL Data Warehouse. Proficiency in programming languages such as Python, Scala, and SQL for data processing and transformation. Expertise in Spark (PySpark, Spark SQL, or Scala) and Databricks notebooks for large-scale data processing. Familiarity with Delta Lake, Delta Live Tables, and the medallion architecture for data lakehouse implementations. Experience with orchestration tools like Azure Data Factory or Databricks Jobs for scheduling and automation. Design and implement Azure Key Vault and scoped credentials. Knowledge of Git for source control and CI/CD integration for Databricks workflows, cost optimization, and performance tuning. Familiarity with Unity Catalog, RBAC, or enterprise-level Databricks setups. Ability to create reusable components, templates, and documentation to standardize data engineering workflows is a plus. Ability to define best practices, support multiple projects, and sometimes mentor junior engineers is a plus. Must have experience working with streaming data sources and Kafka (preferred). Eligibility Criteria: Bachelor's degree in Computer Science, Data Engineering, or a related field. Extensive experience with Databricks, Delta Lake, PySpark, and SQL. Databricks certification (e.g., Certified Data Engineer Professional). Experience with machine learning and AI integration in Databricks. Strong understanding of cloud platforms (AWS, Azure, or GCP). Proven leadership experience in managing technical teams. Excellent problem-solving and communication skills. Our Offering: Global cutting-edge IT projects that shape the future of digital and have a positive impact on the environment. Wellbeing programs & work-life balance - integration and passion-sharing events. Attractive salary and company initiative benefits. Courses and conferences. Hybrid work culture.
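For illustration only: a minimal PySpark sketch of a bronze-to-silver step in a medallion-style Delta Lake pipeline of the kind this lead role designs. Paths, schema, and quality rules are placeholder assumptions; Delta support is assumed to be available as on Databricks.

```python
# A minimal PySpark sketch of a bronze-to-silver Delta Lake step. Placeholders only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze-to-silver").getOrCreate()

# Bronze: raw events landed as-is (e.g. from Kafka or Auto Loader).
bronze = spark.read.format("delta").load("/mnt/lake/bronze/events")

# Silver: deduplicated, typed, and filtered for basic quality.
silver = (
    bronze
    .dropDuplicates(["event_id"])
    .withColumn("event_ts", F.to_timestamp("event_time"))
    .filter(F.col("event_ts").isNotNull())
)

(
    silver.write
    .format("delta")
    .mode("overwrite")
    .option("overwriteSchema", "true")
    .save("/mnt/lake/silver/events")
)
```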
Posted 3 days ago
12.0 - 17.0 years
12 - 17 Lacs
Pune
Work from Office
Role Overview: The Technical Architect specializes in Traditional ETL tools such as Informatica Intelligent Cloud Services (IICS), and similar technologies. The jobholder designs, implements, and oversees robust ETL solutions to support our organization's data integration and transformation needs. Responsibilities: Design and develop scalable ETL architectures using tools like IICS, and other traditional ETL platforms. Collaborate with stakeholders to gather requirements and translate them into technical solutions. Ensure data quality, integrity, and security throughout the ETL processes. Optimize ETL workflows for performance and reliability. Provide technical leadership and mentorship to development teams. Troubleshoot and resolve complex technical issues related to ETL processes. Document architectural designs and decisions for future reference. Stay updated with emerging trends and technologies in ETL and data integration. Key Technical Skills & Responsibilities 12+ years of experience in data integration and ETL development, with at least 3 years in an Informatica architecture role. Extensive expertise in Informatica PowerCenter, IICS, and related tools (Data Quality, EDC, MDM). Proven track record of designing ETL solutions for enterprise-scale data environments Advanced proficiency in Informatica PowerCenter and IICS for ETL/ELT design and optimization. Strong knowledge of SQL, Python, or Java for custom transformations and scripting. Experience with data warehousing platforms (Snowflake, Redshift, Azure Synapse) and data lakes. Familiarity with cloud platforms (AWS, Azure, GCP) and their integration services. Expertise in data modeling, schema design, and integration patterns. Knowledge of CI/CD, Git, and infrastructure-as-code (e.g., Terraform) Experience of working on proposals, customer workshops, assessments etc is preferred Must have good communication and presentation skills Primary Skills: Informatica,IICS Data Lineage and Metadata Management Data Modeling Data Governance Data integration architectures Informatica data quality Eligibility Criteria: Bachelor's degree in Computer Science, Information Technology, or a related field. Proven experience in ETL architecture and development using tools like IICS, etc. Strong understanding of data integration, transformation, and warehousing concepts. Proficiency in SQL and scripting languages. Experience with cloud-based ETL solutions is a plus. Familiarity with Agile development methodologies. Excellent problem-solving and analytical skills. Strong communication and leadership abilities. Knowledge of data governance and compliance standards. Ability to work in a fast-paced environment and manage multiple priorities.
Posted 3 days ago
2.0 - 8.0 years
0 Lacs
karnataka
On-site
As a Data Analyst with 2 to 8 years of experience, you will be based in Bangalore and will have the following responsibilities: - Demonstrating strong proficiency in SQL, Python, and Excel, along with the ability to adapt and learn other analytics tools as required. - Building and optimizing data pipelines on popular cloud platforms. - Automating data extraction and insertion processes to Management Information Systems (MIS) using Python. - Extracting actionable insights from analysis results. - Possessing attention to detail, a strong data orientation, and a commitment to creating clean and reproducible code. - Showing a genuine passion for collaboration, along with exceptional interpersonal and communication skills. Good to have skills: - Experience with tools like Google BigQuery, Metabase, CleverTap, Google Data Studio, Firebase/Google Analytics. - Proficiency in visualization tools such as Google Data Studio, Tableau, etc. - Hands-on experience with ETL/ELT pipelines and data flow setups.
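For illustration only: a minimal Python sketch of automating an MIS refresh from Google BigQuery, as mentioned in the responsibilities. The project, dataset, query, and output file names are placeholder assumptions.

```python
# A minimal sketch of an automated MIS export: query BigQuery and write the
# result to a CSV for reporting. Placeholders only; requires pandas and db-dtypes.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

sql = """
    SELECT order_date, COUNT(*) AS orders, SUM(order_total) AS revenue
    FROM `example-project.sales.orders`
    WHERE order_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY)
    GROUP BY order_date
    ORDER BY order_date
"""

df = client.query(sql).to_dataframe()
df.to_csv("weekly_sales_mis.csv", index=False)
print(f"exported {len(df)} rows to weekly_sales_mis.csv")
```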
Posted 3 days ago
5.0 - 9.0 years
0 Lacs
madurai, tamil nadu
On-site
You are an experienced Java architect responsible for designing and implementing sophisticated Java-based software solutions. Your role involves overseeing system architecture, selecting appropriate technologies, ensuring scalability and performance, collaborating with cross-functional teams, mentoring junior developers, and staying updated on emerging Java technologies, focusing on areas such as microservices, cloud computing, and high-availability systems. **Key Responsibilities:** **Architecture Design:** - Define overall system architecture for large-scale Java applications, including component design, data flow, and integration patterns. - Select appropriate Java frameworks and libraries based on project requirements. - Design for scalability, performance, and security considerations. - Implement microservices architecture where applicable. **Technology Evaluation and Selection:** - Research and evaluate new Java technologies, frameworks, and tools. - Stay updated on cloud platforms like AWS, Azure, and GCP for potential integration. - Make informed technology decisions based on project needs. **Development Leadership:** - Guide development teams on technical best practices and design patterns. - Provide code reviews and mentor junior developers. - Troubleshoot complex technical issues and design flaws. **Collaboration and Stakeholder Management:** - Work closely with product managers, business analysts, and other stakeholders to understand requirements. - Communicate technical concepts effectively to non-technical audiences. - Collaborate with DevOps teams to ensure smooth deployment and monitoring. **Performance Optimization:** - Identify performance bottlenecks and implement optimization strategies. - Monitor system health and performance metrics. **Essential skills for a Java architect:** - Deep expertise in Java Core concepts: Object-oriented programming, Collections, Concurrency, JVM internals. - Advanced Java frameworks: Spring Boot, Spring MVC, Hibernate, JPA. - Architectural patterns: Microservices, Event-driven architecture, RESTful APIs. - Database design and SQL: Proficiency in relational databases and SQL optimization, Proficiency in NO SQL (ElasticSearch/Opensearch). - Cloud computing knowledge: AWS, Azure, GCP. - Hands-on Experience in ETL, ELT. - Knowledge of Python, Pyspark would be an added advantage. - Strong communication and leadership skills. **Minimum Qualifications:** - Bachelor's degree in Computer Science, Information Technology, or a related field. - Deep expertise in Java Core concepts, Advanced Java frameworks, Architectural patterns, Database design and SQL, Cloud computing knowledge, Hands-on Experience in ETL, ELT, Knowledge of Python, Pyspark. - Strong communication and leadership skills. This is a full-time job for the position of Principal Consultant based in India-Madurai. If you possess the required qualifications and skills, we invite you to apply for this role.,
Posted 3 days ago
4.0 - 8.0 years
0 Lacs
karnataka
On-site
As a Senior Platform Engineer at Kenvue Data Platforms, you will have an exciting opportunity to be part of our growing Data & Analytics product line team. Your role involves collaborating closely with various teams such as Business partners, Product Owners, Data Strategy, Data Platform, Data Science, and Machine Learning (MLOps) to drive innovative data products for end users. You will play a key role in shaping the overall solution and data platforms, ensuring their stability, responsiveness, and alignment with business and cloud computing needs. Your expertise will be crucial in optimizing business outcomes and contributing to the growth and success of the organization. Your responsibilities will include providing leadership for data platforms in partnership with architecture teams, conducting proof of concepts to deliver secure and scalable platforms, staying updated on emerging technologies, mentoring other platform engineers, and focusing on the execution and delivery of reliable data platforms. You will work closely with Business Analytics leaders to understand business needs and create value through technology. Additionally, you will lead data platforms operations, build next-generation data and analytics capabilities, and drive the adoption and scaling of data products within the organization. To be successful in this role, you should have an undergraduate degree in Technology, Computer Science, applied data sciences, or related fields, with an advanced degree being preferred. You should possess strong analytical skills, effective communication abilities, and a proven track record in developing and maintaining data platforms. Experience with cloud platforms such as Azure, GCP, AWS, cloud-based databases, data streaming platforms, and Agile methodology will be essential. Your ability to define platforms tech stack, prioritize work items, and work effectively in a diverse and inclusive company culture will be critical to your success in this role. If you are passionate about leveraging data and technology to drive business growth, make a positive impact on personal health, and shape the future of data platforms, then this role at Kenvue Data Platforms is the perfect opportunity for you. Join us in our mission to empower millions of people every day through insights, innovation, and care. We look forward to welcoming you to our team! Location: Asia Pacific-India-Karnataka-Bangalore Function: Digital Product Development Qualifications: - Undergraduate degree in Technology, Computer Science, applied data sciences or related fields; advanced degree preferred - Strong interpersonal and communication skills, ability to explain digital concepts to business leaders and vice versa - 4 years of data platforms experience in Consumer/Healthcare Goods companies - 6 years of progressive experience in developing and maintaining data platforms - Minimum 5 years hands-on experience with Cloud Platforms and cloud-based databases - Experience with data streaming platforms, microservices, and data integration - Proficiency in Agile methodology within DevSecOps model - Ability to define platforms tech stack to address data challenges - Proven track record of delivering high-profile projects within defined resources - Commitment to diversity, inclusion, and equal opportunity employment,
Posted 3 days ago
2.0 - 6.0 years
0 Lacs
kochi, kerala
On-site
You will be responsible for big data development and support for production deployed applications, analyzing business and functional requirements for completeness, and developing code with minimum supervision. Working collaboratively with team members, you will ensure accurate and timely communication and delivery of assigned tasks to guarantee the end-products" performance upon release to production. Handling software defects or issues within production timelines and SLA is a key aspect of the role. Your responsibilities will include authoring test cases within a defined testing strategy, participating in test strategy development for Configuration and Custom reports, creating test data, assisting in code merge peer reviews, reporting status and progress to stakeholders, and providing risk assessment throughout development cycles. You should have a strong understanding of system and big data strategies/approaches adopted by IQVIA, stay updated on software applications development industry knowledge, and be open to production support roles within the project. To excel in this role, you should have 5-8 years of overall experience, with at least 2-3 years in Big Data, proficiency in Big Data Technologies such as HDFS, Hive, Pig, Sqoop, HBase, and Oozie, strong experience in SQL Queries and Airflow, familiarity with PSql, CI-CD, Jenkins, and UNIX commands, excellent communication skills, comprehensive skills, good confidence level, proven analytical, logical, and problem-solving techniques. Experience in Spark Application Development, ETL, and ELT tools is preferred. Possessing fine-tuned analytical skills, attention to detail, and the ability to work effectively with colleagues from diverse backgrounds is essential. The minimum educational requirement for this position is a Bachelor's Degree in Information Technology or a related field, along with 5-8 years of development experience or an equivalent combination of education, training, and experience. IQVIA is a leading global provider of clinical research services, commercial insights, and healthcare intelligence, facilitating the acceleration of innovative medical treatments" development and commercialization to enhance patient outcomes and population health worldwide. To learn more, visit https://jobs.iqvia.com.,
Posted 3 days ago
2.0 - 6.0 years
0 Lacs
karnataka
On-site
NTT DATA is looking for a Databricks Developer to join their team in Bangalore, Karnataka, India. As a Databricks Developer, your responsibilities will include pushing data domains into a massive repository and building a large data lake by highly leveraging Databricks. To be considered for this role, you should have at least 3 years of experience in a Data Engineer or Software Engineer role. An undergraduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field is required, while a graduate degree is preferred. You should also have experience with data pipeline and workflow management tools, advanced working SQL knowledge, and familiarity with relational databases. Additionally, an understanding of Datawarehouse (DWH) systems, ELT and ETL patterns, data models, and transforming data into various models is essential. You should be able to build processes supporting data transformation, data structures, metadata, dependency and workload management. Experience with message queuing, stream processing, and highly scalable big data data stores is also necessary. Preferred qualifications include experience with Azure cloud services such as ADLS, ADF, ADLA, and AAS. The role also requires a minimum of 2 years of experience in relevant skills. NTT DATA is a trusted global innovator of business and technology services with a commitment to helping clients innovate, optimize, and transform for long-term success. They serve 75% of the Fortune Global 100 and have a diverse team of experts in more than 50 countries. As a Global Top Employer, NTT DATA offers services in business and technology consulting, data and artificial intelligence, industry solutions, and the development, implementation, and management of applications, infrastructure, and connectivity. They are known for providing digital and AI infrastructure solutions and are part of the NTT Group, investing over $3.6 billion each year in R&D to support organizations and society in moving confidently into the digital future. Visit their website at us.nttdata.com for more information.,
Posted 3 days ago
5.0 - 8.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Project Role: Application Lead. Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact. Must have skills: Palantir Foundry. Good to have skills: NA. Minimum 5 year(s) of experience is required. Educational Qualification: 15 years full time education. Project Role: Lead Data Engineer. Project Role Description: Design, build and enhance applications to meet business process and requirements in Palantir Foundry. Work experience: Minimum 6 years. Must have Skills: Palantir Foundry, PySpark. Good to Have Skills: Experience in PySpark, Python and SQL; knowledge of Big Data tools & technologies; organizational and project management experience. Job Requirements & Key Responsibilities: Responsible for designing, developing, testing, and supporting data pipelines and applications on Palantir Foundry. Configure and customize Workshop to design and implement workflows and ontologies. Collaborate with data engineers and stakeholders to ensure successful deployment and operation of Palantir Foundry applications. Work with stakeholders including the product owner, data, and design teams to assist with data-related technical issues, understand the requirements and design the data pipeline. Work independently, troubleshoot issues and optimize performance. Communicate design processes, ideas, and solutions clearly and effectively to team and client. Assist junior team members in improving efficiency and productivity. Technical Experience: Proficiency in PySpark, Python and SQL with demonstrable ability to write & optimize SQL and Spark jobs. Hands-on experience with Palantir Foundry related services like Data Connection, Code Repository, Contour, Data Lineage & Health Checks. Good to have working experience with Workshop, Ontology, Slate. Hands-on experience in data engineering and building data pipelines (code/no code) for ELT/ETL data migration, data refinement and data quality checks on Palantir Foundry. Experience in ingesting data from different external source systems using data connections and syncs. Good knowledge of Spark architecture and hands-on experience with performance tuning & code optimization. Proficient in managing both structured and unstructured data, with expertise in handling various file formats such as CSV, JSON, Parquet, and ORC. Experience in developing and managing scalable architecture & managing large data sets. Good understanding of data loading mechanisms and the ability to implement strategies for capturing CDC. Nice to have: test-driven development and CI/CD workflows. Experience in version control software such as Git and working with major hosting services (e.g. Azure DevOps, GitHub, Bitbucket, GitLab). Implementing code best practices involves adhering to guidelines that enhance code readability, maintainability, and overall quality. Educational Qualification: 15 years of full-time education.
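For illustration only: a minimal sketch of a Palantir Foundry Python transform of the kind built in the Code Repository service mentioned above. It is assumed to run inside a Foundry code repository rather than standalone, and the dataset paths and columns are placeholder assumptions.

```python
# A minimal Foundry transforms sketch: refine a raw dataset into a clean one.
# Dataset paths and column names are placeholders; runs within Foundry only.
from transforms.api import transform_df, Input, Output
from pyspark.sql import functions as F

@transform_df(
    Output("/Example/datasets/orders_clean"),      # placeholder output dataset
    source=Input("/Example/datasets/orders_raw"),  # placeholder input dataset
)
def compute(source):
    # Basic refinement: deduplicate and standardize a timestamp column.
    return (
        source
        .dropDuplicates(["order_id"])
        .withColumn("order_ts", F.to_timestamp("order_time"))
        .filter(F.col("order_ts").isNotNull())
    )
```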
Posted 4 days ago
12.0 - 17.0 years
12 - 17 Lacs
Pune
Work from Office
Role Overview: The Technical Architect specializing in Data Governance and Master Data Management (MDM) designs, implements, and optimizes enterprise data solutions. The jobholder has expertise in tools like Collibra, Informatica, InfoSphere, Reltio, and other MDM platforms, ensuring data quality, compliance, and governance across the organization.

Responsibilities:
- Design and implement data governance frameworks and MDM solutions using tools like Collibra, Informatica, InfoSphere, and Reltio.
- Architect, develop, and optimize strategies for data quality, metadata management, and data stewardship.
- Collaborate with cross-functional teams to integrate MDM solutions with existing systems.
- Establish best practices for data governance, security, and compliance.
- Monitor and troubleshoot MDM environments for performance and reliability.
- Provide technical leadership and guidance to data teams.
- Stay updated on advancements in data governance and MDM technologies.

Key Technical Skills & Responsibilities:
- 12+ years of overall experience, with 10+ years working on DG/MDM projects.
- Strong grasp of Data Governance concepts; hands-on with different DG tools and services.
- Hands-on experience with reference data and taxonomy.
- Strong understanding of data governance, data quality, data profiling, data standards, regulations, and security.
- Match-and-merge strategy; design and implement MDM architecture and data models (a conceptual sketch follows this listing).
- Use of Spark capabilities and statistics to derive meaning from enterprise-scale data.
- Familiarity with data visualization approaches for analyzing large data sets.
- Good to have: knowledge of Python, R, or Scala.
- Experience with DG both on-premises and in the cloud.
- Understanding of the MDM Customer, Product, and Vendor domains and related artifacts.
- Experience working on proposals, customer workshops, and assessments is preferred.
- Good communication and presentation skills.
- Technology stack: Collibra, IBM MDM, Reltio, InfoSphere.

Eligibility Criteria:
- Bachelor's degree in Computer Science, Data Management, or a related field.
- Proven experience as a Technical Architect in Data Governance and MDM.
- Certifications in relevant MDM tools (e.g., Collibra Data Governance, Informatica/InfoSphere/Reltio MDM).
- Experience with cloud platforms such as AWS, Azure, or GCP.
- Proficiency in tools like Collibra, Informatica, InfoSphere, Reltio, and similar platforms.
- Strong understanding of data modeling, ETL/ELT processes, and cloud integration.
- Excellent problem-solving and communication skills.
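The conceptual sketch referenced above illustrates, in plain Python, what a match-and-merge (survivorship) step does; MDM platforms such as Informatica, Reltio, or InfoSphere implement this with far richer matching and governance controls. All field names, thresholds, and records here are illustrative assumptions.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Crude name similarity; real MDM tools use phonetic, token, and ML-based matching."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def is_match(rec_a: dict, rec_b: dict) -> bool:
    # Deterministic rule first (exact tax id), then a fuzzy fallback on name + city.
    if rec_a.get("tax_id") and rec_a["tax_id"] == rec_b.get("tax_id"):
        return True
    return similarity(rec_a["name"], rec_b["name"]) > 0.9 and rec_a.get("city") == rec_b.get("city")

def merge(records: list[dict]) -> dict:
    # Survivorship: the most recently updated source wins per attribute, ignoring empty values.
    golden = {}
    for rec in sorted(records, key=lambda r: r["updated_at"]):
        golden.update({k: v for k, v in rec.items() if v})
    return golden

a = {"name": "Acme Corp", "city": "Pune", "tax_id": "X1", "updated_at": "2024-01-01"}
b = {"name": "ACME Corporation", "city": "Pune", "tax_id": "X1", "updated_at": "2024-06-01", "phone": "123"}
if is_match(a, b):
    print(merge([a, b]))  # golden record keeping the newest non-empty values
```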
Posted 4 days ago
5.0 - 8.0 years
6 - 10 Lacs
Chandigarh
Work from Office
Design, develop & optimize complex databases. Create, deploy & maintain interactive Power BI reports & dashboards to meet business intelligence requirements. Develop SQL queries, stored procedures, and functions to extract, manipulate & analyse data. Required Candidate profile: Power BI Developer with 5+ years of experience in building dynamic dashboards, interactive reports & data models. Microsoft Certified Power BI Data Analyst Associate. Strong knowledge of SQL, T-SQL & DMS.
Posted 6 days ago
5.0 - 9.0 years
14 - 24 Lacs
Hyderabad
Hybrid
Experience Required:
- Bachelor's degree in computer science or engineering.
- 7+ years of experience with data analytics, data modeling, and database design.
- 5+ years of experience with Vertica.
- 2+ years of coding, scripting (Python, Java, Scala), and design experience.
- 2+ years of experience with Airflow (a minimal DAG sketch follows this listing).
- Experience with ELT methodologies and tools.
- Experience with GitHub.
- Expertise in tuning and troubleshooting SQL.
- Strong data integrity, analytical, and multitasking skills.
- Excellent communication, problem-solving, organizational, and analytical skills.
- Able to work independently.

Additional/preferred skills:
- Familiarity with the agile project delivery process.
- Knowledge of SQL and its use in data access and analysis.
- Ability to manage diverse projects impacting multiple roles and processes.
- Able to troubleshoot problem areas and identify data gaps and issues.
- Ability to adapt to a fast-changing environment.
- Experience designing and implementing automated ETL processes.
- Experience with the MicroStrategy reporting tool.
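The Airflow sketch referenced above: a minimal two-task DAG that stages a daily extract and then loads it into Vertica, with both steps left as placeholders rather than a specific driver or COPY implementation. The DAG id, schedule, and task names are assumptions, and the `schedule` argument uses Airflow 2.4+ style scheduling.

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Placeholder: pull the day's extract from the source system (API, S3, etc.).
    print("extracting source data")

def load_to_vertica():
    # Placeholder: in practice this would run a Vertica COPY/INSERT via a DB connection.
    print("loading staged files into Vertica")

with DAG(
    dag_id="vertica_daily_elt",        # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load_to_vertica", python_callable=load_to_vertica)

    extract_task >> load_task          # simple linear dependency
```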
Posted 6 days ago
7.0 - 12.0 years
22 - 27 Lacs
Hyderabad, Pune, Mumbai (All Areas)
Work from Office
Job Description - Snowflake Developer
Experience: 7+ years | Location: India, Hybrid | Employment Type: Full-time

Job Summary: We are looking for a Snowflake Developer with 7+ years of experience to design, develop, and maintain our Snowflake data platform. The ideal candidate will have strong expertise in Snowflake SQL, data modeling, and ETL/ELT processes to build efficient and scalable data solutions.

Key Responsibilities:
1. Snowflake Development & Implementation
- Design and develop Snowflake databases, schemas, tables, and views
- Write and optimize complex SQL queries, stored procedures, and UDFs
- Implement Snowflake features (Time Travel, Zero-Copy Cloning, Streams & Tasks); a brief stream-driven load sketch follows this listing
- Manage virtual warehouses, resource monitors, and cost optimization
2. Data Pipeline & Integration
- Build and maintain ETL/ELT pipelines using Snowflake and tools like Snowpark, Python, or Spark
- Integrate Snowflake with cloud storage (S3, Blob Storage) and data sources (APIs)
- Develop data ingestion processes (batch and real-time) using Snowpipe
3. Performance Tuning & Optimization
- Optimize query performance through clustering, partitioning, and indexing
- Monitor and troubleshoot data pipelines and warehouse performance
- Implement caching strategies and materialized views for faster analytics
4. Data Modeling & Governance
- Design star schema, snowflake schema, and normalized data models
- Implement data security (RBAC, dynamic data masking, row-level security)
- Ensure data quality, documentation, and metadata management
5. Collaboration & Support
- Work with analysts, BI teams, and business users to deliver data solutions
- Document technical specifications and data flows
- Provide support and troubleshooting for Snowflake-related issues

Required Skills & Qualifications:
- 7+ years in database development, data warehousing, or ETL
- 3+ years of hands-on Snowflake development experience
- Strong SQL and scripting (Python, Bash) skills
- Experience with Snowflake utilities (SnowSQL, Snowsight)
- Knowledge of cloud platforms (AWS, Azure) and data integration tools
- SnowPro Core Certification (preferred but not required)
- Experience with Coalesce, dbt, Airflow, or other data orchestration tools
- Familiarity with CI/CD pipelines and DevOps practices
- Knowledge of data visualization tools (Power BI, Tableau)
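The stream-driven load sketch referenced above shows one way an incremental load might look using the Snowflake Python connector: a MERGE that upserts changed rows captured by a stream into a core table. The account settings, warehouse, database, schema, table, and stream names are all hypothetical.

```python
import os
import snowflake.connector

# Connection parameters are assumed to come from the environment; names are illustrative.
conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="STAGING",
)

merge_sql = """
MERGE INTO analytics.core.orders AS tgt
USING analytics.staging.orders_stream AS src   -- stream capturing new/changed rows
    ON tgt.order_id = src.order_id
WHEN MATCHED THEN UPDATE SET tgt.status = src.status, tgt.updated_at = src.updated_at
WHEN NOT MATCHED THEN INSERT (order_id, status, updated_at)
    VALUES (src.order_id, src.status, src.updated_at)
"""

try:
    cur = conn.cursor()
    cur.execute(merge_sql)   # incremental upsert driven by a stream/Snowpipe feed
    print(cur.fetchone())    # MERGE returns inserted/updated row counts
finally:
    conn.close()
```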
Posted 6 days ago
6.0 - 11.0 years
7 - 10 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
We are seeking a Sr. Data Engineer to join our Data Engineering team within our Enterprise Data Insights organization to build data solutions, design and implement ETL/ELT processes, and manage our data platform to enable our cross-functional stakeholders. As part of our Corporate Engineering division, our vision is to spearhead technology and data-led solutions and experiences to drive growth and innovation at scale. The ideal candidate will have a strong data engineering background, advanced Python knowledge, and experience with cloud services and SQL/NoSQL databases. You will work closely with our cross-functional stakeholders in Product, Finance, and GTM, along with Business and Enterprise Technology teams.

As a Senior Data Engineer, you will:
- Collaborate closely with various stakeholders to prioritize requests, identify improvements, and offer recommendations.
- Take the lead in analyzing, designing, and implementing data solutions, including constructing and designing data models and ETL processes (a small extraction and validation sketch follows this listing).
- Cultivate collaboration with corporate engineering, product teams, and other engineering groups.
- Lead and mentor engineering discussions, advocating for best practices.
- Actively participate in design and code reviews.
- Access and explore third-party data APIs to determine the data required to meet business needs.
- Ensure data quality and integrity across different sources and systems.
- Manage data pipelines for both analytics and operational purposes.
- Continuously enhance processes and policies to improve SLA and SOX compliance.

You'll be a great addition to the team if you:
- Hold a B.S., M.S., or Ph.D. in Computer Science or a related technical field.
- Possess over 5 years of experience in Data Engineering, focusing on building and maintaining data environments.
- Have at least 5 years of experience designing and constructing ETL/ELT processes and managing data solutions within an SLA-driven environment.
- Have a strong background in developing data products and APIs and maintaining testing, monitoring, isolation, and SLA processes.
- Possess advanced knowledge of SQL/NoSQL databases (such as Snowflake, Redshift, MongoDB).
- Are proficient in programming with Python or other scripting languages.
- Are familiar with columnar OLAP databases and data modeling.
- Have experience building ELT/ETL processes using tools like dbt, Airflow, and Fivetran, CI/CD using GitHub, and reporting in Tableau.
- Possess excellent communication and interpersonal skills to effectively collaborate with various business stakeholders and translate requirements.

Added bonus if you also have:
- A good understanding of Salesforce and NetSuite systems
- Experience in SaaS environments
- Designed and deployed ML models
- Experience with events and streaming data

Location: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote
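The extraction and validation sketch referenced above: a small Python example of pulling records from a third-party API and applying a lightweight data-quality gate before staging, the kind of step that would sit upstream of dbt or Fivetran in an ELT flow. The endpoint, field names, and checks are illustrative assumptions.

```python
import requests

API_URL = "https://api.example.com/v1/invoices"   # hypothetical third-party endpoint

def extract(page: int) -> list[dict]:
    # Pull one page from the partner API; production code would add auth and retries.
    resp = requests.get(API_URL, params={"page": page, "page_size": 100}, timeout=30)
    resp.raise_for_status()
    return resp.json()["results"]

def validate(rows: list[dict]) -> list[dict]:
    # Lightweight data-quality gate before loading into the warehouse staging area.
    bad = [r for r in rows if not r.get("invoice_id") or r.get("amount") is None]
    if bad:
        raise ValueError(f"{len(bad)} rows failed basic integrity checks")
    return rows

if __name__ == "__main__":
    rows = validate(extract(page=1))
    print(f"{len(rows)} rows ready to load into staging")  # load/transform handled downstream
```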
Posted 1 week ago