
235 Snowflake Jobs - Page 3

JobPe aggregates job listings for easy access, but you apply directly on the original job portal.

0.0 years

0 Lacs

Gurgaon / Gurugram, Haryana, India

On-site

VP - Data & Analytics Role Summary This leadership role is at the forefront of driving enterprise-wide data-driven decision-making, strategic analytics, and innovation. The position blends technical depth, business acumen, stakeholder influence, and people leadership to transform analytics into a powerful catalyst for growth, operational excellence, and customer-centric outcomes. Key Responsibilities Strategic Analytics & Decision Support Develop and execute an enterprise analytics strategy aligned with business goals. Create and maintain a long-term analytics roadmap to support strategic priorities. Translate complex business problems into data-driven solutions and actionable insights. Contribute to large-scale digital transformation and finance modernization initiatives. Data Analysis, Modeling & Quality Assurance Build, deploy, and monitor advanced statistical models and machine learning algorithms. Ensure data quality and governance by defining standards, controls, and escalation paths. Develop key performance indicators (KPIs) and track the impact of analytics initiatives. Leadership & Team Development Lead high-performing analytics teams Build and mentor talent, driving team development, empowerment, and succession planning. Foster a culture of accountability, collaboration, and continuous improvement. Stakeholder Engagement & Influence Manage client interactions and ensure the delivery of high-quality analytics solutions. Serve as a strategic partner and Lead cross-functional collaboration efforts and analytics governance forums. Engage executive stakeholders, articulating analytics value clearly to both technical and non-technical audiences. Business & Domain Expertise Apply strong domain expertise in areas like financial services, banking, or marketing analytics. Understand customer behavior, market dynamics, and product performance to drive insights. Stay informed on compliance, privacy, and regulatory trends impacting analytics. Data Infrastructure & Tools Enablement Partner with Data Engineering to ensure scalable, efficient analytics infrastructure. Utilize modern tools and platforms such as SQL, Python, Tableau, Power BI, Snowflake, Hadoop, and Hive. Promote data platform automation and drive adoption of advanced analytics tools. Project & Change Leadership Lead strategic change management initiatives including Agile transformations and M&A integrations. Prioritize and manage complex, high-impact projects across global, matrixed organizations. Design and implement analytics programs that align with evolving business needs. Communication & Storytelling Simplify complex data into business-relevant insights using compelling storytelling and visualizations. Deliver executive-level presentations that drive decisions and inspire action. Champion a culture of insight-led decision-making and knowledge sharing. Compliance & Governance Uphold strong data ethics, privacy, and governance standards across all analytics activities. Ensure alignment with internal controls and evolving regulatory requirements. Promote a compliance-first mindset and mitigate risks through effective oversight. Go-to-Market Offerings Define and lead the creation of data and analytics-driven go-to-market offerings and accelerators. Collaborate with business development, product, and strategy teams to shape client-facing solutions. Develop thought leadership, sales enablement materials, and proposals for new analytics-driven services. 
Establish strategic partnerships with technology vendors, startups, and academic institutions to enhance offerings and innovation. Preferred Qualifications & Certifications Bachelor's or Master's degree in Engineering, Mathematics, Statistics, Finance, Economics, or a related quantitative discipline. Key Competencies & Attributes Strategic and innovative thinking with strong commercial acumen. Resilience and adaptability in dynamic business environments. High emotional intelligence, integrity, and commitment to building inclusive teams. Strong project management and organizational skills with attention to execution.

Posted 1 week ago

Apply

0.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Ready to shape the future of work? At Genpact, we don't just adapt to change, we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's industry-first accelerator is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to breakthrough solutions, we tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment.

Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.

Inviting applications for the role of Assistant Vice President - Data Engineering Legacy - Snowflake Architect!

In this role, the Snowflake Architect is responsible for providing technical direction and leading a group of one or more developers to address a goal.

Responsibilities
- Strong experience building and designing data warehouses, data lakes, and data marts, with end-to-end implementation experience focused on large enterprise-scale Snowflake implementations on any of the hyperscalers.
- Strong understanding of Snowflake architecture.
- Able to create designs and data modelling independently.
- Able to create high-level and low-level design documents based on requirements.
- Strong experience building productionized data ingestion and data pipelines in Snowflake.
- Prior experience as an architect interacting with customers, team/delivery leaders, and vendor teams.
- Strong experience in migration/greenfield projects in Snowflake.
- Experience implementing Snowflake best practices for network policies, storage integration, data governance, cost optimization, resource monitoring, data ingestion, transformation, and consumption layers.
- Good experience with Snowflake RBAC and data security.
- Good experience implementing strategies for CDC or SCD Type 2.
- Strong experience with Snowflake features, including new Snowflake features.
- Good experience in Python.
- Experience with AWS services (S3, Glue, Lambda, Secrets Manager, DMS) and some Azure services (Blob Storage, ADLS, ADF).
- Experience with DBT.
- Must have Snowflake SnowPro Core or SnowPro Advanced Architect certification.
- Experience with or knowledge of orchestration and scheduling tools.
- Good understanding of ETL processes and ETL tools.
- Good understanding of Agile methodology.
- Good to have some understanding of GenAI.
- Good to have exposure to other databases such as Redshift, Databricks, SQL Server, Oracle, PostgreSQL, etc.
- Able to create POCs, roadmaps, solution architecture, estimations, and implementation plans.
- Experience with Snowflake integrations with other data processing systems.

Qualifications we seek in you!
Minimum qualifications: B.E./Master's in Computer Science, Information Technology, or Computer Engineering, or any equivalent degree with good IT experience and strong experience as a Snowflake Architect.

Skill matrix: Snowflake, Python, AWS/Azure, data modeling, design patterns, DBT, ETL processes, and data warehousing concepts.

Why join Genpact?
- Be a transformation leader - work at the cutting edge of AI, automation, and digital innovation
- Make an impact - drive change for global enterprises and solve business challenges that matter
- Accelerate your career - get hands-on experience, mentorship, and continuous learning opportunities
- Work with the best - join 140,000+ bold thinkers and problem-solvers who push boundaries every day
- Thrive in a values-driven culture - our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress

Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
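For candidates brushing up on the CDC/SCD Type 2 strategy this listing calls out, here is a minimal sketch of the classic two-step pattern run through the snowflake-connector-python package. The connection parameters, the dim_customer and stg_customer tables, and the attributes_hash change-detection column are all hypothetical, not part of the listing.

```python
import snowflake.connector

# Placeholder credentials; replace with your own account details.
conn = snowflake.connector.connect(
    account="<account>", user="<user>", password="<password>",
    warehouse="<warehouse>", database="<database>", schema="<schema>",
)
cur = conn.cursor()

# Step 1: close out current dimension rows whose attributes changed in staging.
cur.execute("""
    UPDATE dim_customer
    SET is_current = FALSE, valid_to = CURRENT_TIMESTAMP()
    WHERE is_current
      AND EXISTS (
        SELECT 1 FROM stg_customer s
        WHERE s.customer_id = dim_customer.customer_id
          AND s.attributes_hash <> dim_customer.attributes_hash
      )
""")

# Step 2: insert fresh versions for changed keys and brand-new keys.
# After step 1, changed keys no longer have a current row, so the
# anti-join picks up both cases.
cur.execute("""
    INSERT INTO dim_customer
        (customer_id, name, segment, attributes_hash, valid_from, valid_to, is_current)
    SELECT s.customer_id, s.name, s.segment, s.attributes_hash,
           CURRENT_TIMESTAMP(), NULL, TRUE
    FROM stg_customer s
    LEFT JOIN dim_customer d
      ON d.customer_id = s.customer_id AND d.is_current
    WHERE d.customer_id IS NULL
""")
conn.close()
```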

Posted 1 week ago

Apply

3.0 - 5.0 years

0 Lacs

, India

On-site

Job Description: YASH Technologies is a leading technology integrator specializing in helping clients reimagine operating models, enhance competitiveness, optimize costs, foster exceptional stakeholder experiences, and drive business transformation. At YASH, we're a cluster of the brightest stars working with cutting-edge technologies. Our purpose is anchored in a single truth: bringing real positive change in an increasingly virtual world, and it drives us beyond the generational gaps and disruptions of the future.

We are looking forward to hiring Snowflake professionals in the following areas: Snowflake, SnowSQL, PL/SQL, and any ETL tool.

Job Description:
- 3+ years of IT experience in analysis, design, development, and unit testing of data warehousing applications using industry-accepted methodologies and procedures.
- Write complex SQL queries to implement ETL (Extract, Transform, Load) processes and for Business Intelligence reporting.
- Strong experience with Snowpipe execution and the Snowflake data warehouse, with a deep understanding of Snowflake architecture and processing; creating and managing automated data pipelines for both batch and streaming data using DBT.
- Designing and implementing data models and schemas to support data warehousing and analytics within Snowflake.
- Writing and optimizing SQL queries for efficient data retrieval and analysis.
- Deliver robust solutions through query optimization, ensuring data quality.
- Experience writing functions and stored procedures.
- Strong understanding of data warehouse principles using fact tables, dimension tables, and star and snowflake schema modelling.
- Analyse and translate functional specifications and user stories into technical specifications.
- Good to have: experience in design/development in any ETL tool.
- Good interpersonal skills and experience handling communication and interactions between different teams.

At YASH, you are empowered to create a career that will take you where you want to go while working in an inclusive team environment. We leverage career-oriented skilling models and optimize our collective intelligence, aided with technology, for continuous learning, unlearning, and relearning at a rapid pace and scale. Our Hyperlearning workplace is grounded upon four principles:
- Flexible work arrangements, free spirit, and emotional positivity
- Agile self-determination, trust, transparency, and open collaboration
- All support needed for the realization of business goals
- Stable employment with a great atmosphere and ethical corporate culture
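Since the listing above emphasizes Snowpipe execution, here is a minimal sketch of defining an auto-ingest pipe through snowflake-connector-python. The stage, table, and pipe names are hypothetical, and a cloud storage event notification plus storage integration are assumed to be configured already.

```python
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account>", user="<user>", password="<password>",
    warehouse="<warehouse>", database="<database>", schema="RAW",
)
cur = conn.cursor()

# A pipe wraps a COPY statement; with AUTO_INGEST, Snowflake loads new
# files as soon as the cloud storage event notification arrives.
cur.execute("""
    CREATE OR REPLACE PIPE events_pipe AUTO_INGEST = TRUE AS
    COPY INTO raw_events
    FROM @s3_events_stage
    FILE_FORMAT = (TYPE = 'JSON')
""")

# Inspect pipe health (executionState, pendingFileCount, etc.).
cur.execute("SELECT SYSTEM$PIPE_STATUS('events_pipe')")
print(cur.fetchone()[0])
conn.close()
```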

Posted 1 week ago

Apply

6.0 - 8.0 years

0 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Job Description: YASH Technologies is a leading technology integrator specializing in helping clients reimagine operating models, enhance competitiveness, optimize costs, foster exceptional stakeholder experiences, and drive business transformation. At YASH, we're a cluster of the brightest stars working with cutting-edge technologies. Our purpose is anchored in a single truth: bringing real positive change in an increasingly virtual world, and it drives us beyond the generational gaps and disruptions of the future.

We are looking forward to hiring Snowflake professionals in the following areas: Snowflake, SnowSQL, PL/SQL, and any ETL tool.

Job Description:
- Minimum 6-7 years of IT experience in analysis, design, development, and unit testing of data warehousing applications using industry-accepted methodologies and procedures.
- Write complex SQL queries to implement ETL (Extract, Transform, Load) processes and for Business Intelligence reporting.
- Strong experience with Snowpipe execution and the Snowflake data warehouse, with a deep understanding of Snowflake architecture and processing; creating and managing automated data pipelines for both batch and streaming data using DBT.
- Designing and implementing data models and schemas to support data warehousing and analytics within Snowflake.
- Writing and optimizing SQL queries for efficient data retrieval and analysis.
- Deliver robust solutions through query optimization, ensuring data quality.
- Experience writing functions and stored procedures.
- Strong understanding of data warehouse principles using fact tables, dimension tables, and star and snowflake schema modelling.
- Analyse and translate functional specifications and user stories into technical specifications.
- Good to have: experience in design/development in any ETL tool such as SnapLogic, Informatica, or DataStage.
- Good interpersonal skills and experience handling communication and interactions between different teams.

At YASH, you are empowered to create a career that will take you where you want to go while working in an inclusive team environment. We leverage career-oriented skilling models and optimize our collective intelligence, aided with technology, for continuous learning, unlearning, and relearning at a rapid pace and scale. Our Hyperlearning workplace is grounded upon four principles:
- Flexible work arrangements, free spirit, and emotional positivity
- Agile self-determination, trust, transparency, and open collaboration
- All support needed for the realization of business goals
- Stable employment with a great atmosphere and ethical corporate culture
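This listing calls out star and snowflake schema modelling with fact and dimension tables. For illustration, a minimal sketch of a typical fact-to-dimension aggregation run through snowflake-connector-python; all table and column names are hypothetical.

```python
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account>", user="<user>", password="<password>",
    warehouse="<warehouse>", database="<database>", schema="<schema>",
)
cur = conn.cursor()

# Classic star-schema aggregation: the fact table carries surrogate keys
# and measures; descriptive attributes live on the dimension tables.
cur.execute("""
    SELECT d.calendar_month,
           c.customer_segment,
           SUM(f.sales_amount) AS total_sales
    FROM fact_sales f
    JOIN dim_date d     ON f.date_key = d.date_key
    JOIN dim_customer c ON f.customer_key = c.customer_key
    GROUP BY d.calendar_month, c.customer_segment
    ORDER BY d.calendar_month
""")
for month, segment, total in cur.fetchall():
    print(month, segment, total)
conn.close()
```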

Posted 1 week ago

Apply

0.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose - the relentless pursuit of a world that works better for people - we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

Inviting applications for the role of Principal Consultant - Data Engineer (DBT+Snowflake)!

As a Data Engineer, the candidate should possess strong expertise in data analysis, data integration, data transformation, and the ETL/ELT skills required to perform the role. The candidate should also have relevant domain experience in Investment Banking and exposure to the cloud, preferably AWS.

Responsibilities:
1. Assist in the design and implementation of a Snowflake-based analytics solution (data lake and data warehouse) on AWS.
2. Requirements definition, source data analysis and profiling, the logical and physical design of the data lake and data warehouse, and the design of data integration and publication pipelines.
3. Develop Snowflake deployment and usage best practices.
4. Help educate the rest of the team on the capabilities and limitations of Snowflake.
5. Build and maintain data pipelines adhering to suggested enterprise architecture principles and guidelines.
6. Design, build, test, and maintain data management systems.
7. Work in sync with internal and external team members such as data architects, data scientists, and data analysts to handle all sorts of technical issues.
8. Act as a technical leader within the team.
9. Work in an Agile/Lean model.
10. Deliver quality deliverables on time.
11. Translate complex functional requirements into technical solutions.
12. Create and maintain optimal data pipeline architecture for ingestion and processing of data.
13. Create the necessary infrastructure for ETL jobs from a wide range of data sources using Talend, DBT, S3, and Snowflake.
14. Experience in data storage technologies such as Amazon S3, SQL, and NoSQL.
15. Technical awareness of data modeling.
16. Experience working with stakeholders in different time zones.

Qualifications we seek in you!
Minimum qualifications: Bachelor's degree (Computer Science, Mathematics, or Statistics) with relevant experience.

Preferred qualifications/skills - we appreciate candidates with knowledge of the following:
1. AWS data services development experience.
2. Working knowledge of Big Data technologies.
3. Experience collaborating with data quality and data governance teams.
4. Exposure to reporting tools like Tableau.
5. Apache Airflow, Apache Kafka (nice to have).
6. Payments domain knowledge.
7. In-depth understanding of CRM, accounting, etc.
8. Regulatory reporting exposure.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values diversity and inclusion, respect and integrity, customer focus, and innovation. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.
Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
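The role above centers on ETL jobs from S3 into Snowflake. As a rough sketch of one ingestion step via snowflake-connector-python: an external stage over an S3 prefix plus a bulk COPY. The bucket, storage integration, and table names are hypothetical, and the storage integration is assumed to exist.

```python
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account>", user="<user>", password="<password>",
    warehouse="<warehouse>", database="<database>", schema="RAW",
)
cur = conn.cursor()

# External stage pointing at an S3 prefix; assumes a storage integration
# named s3_int was created in advance by an account administrator.
cur.execute("""
    CREATE OR REPLACE STAGE trades_stage
    URL = 's3://example-bucket/trades/'
    STORAGE_INTEGRATION = s3_int
    FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)
""")

# Bulk-load the staged files; ON_ERROR = 'CONTINUE' skips bad rows
# instead of failing the whole load.
cur.execute("COPY INTO raw_trades FROM @trades_stage ON_ERROR = 'CONTINUE'")
print(cur.fetchall())  # per-file load results
conn.close()
```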

Posted 1 week ago

Apply

0.0 years

0 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose - the relentless pursuit of a world that works better for people - we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

Inviting applications for the role of Principal Consultant, DB ETL Developer.

In this role, you will be responsible for coding, testing, and delivering high-quality deliverables, and should be willing to learn new technologies.

Responsibilities:
- Design, code, and maintain databases, ensuring their stability, reliability, and performance.
- Research and suggest new database products, services, and protocols.
- Ensure all database programs meet company and performance requirements.
- Collaborate with other database teams and owners of different applications.
- Modify databases according to requests and perform tests.
- Maintain and own databases in all environments.

Qualifications we seek in you!
Minimum qualifications:
- BE/B.Tech/MCA
- Excellent written and verbal communication skills

Preferred qualifications/skills:
- A bachelor's degree in Computer Science or a related field.
- Hands-on developer in Sybase, DB2, and ETL technologies.
- Worked extensively on data integration, designing, and developing reusable interfaces.
- Advanced experience in Sybase, shell scripting, Unix, database design and modelling, and ETL technologies such as Informatica.
- Hands-on experience with Snowflake or Informatica, including:
  - Expertise in Snowflake data modelling and ELT using Snowflake SQL, implementing complex stored procedures and best practices with data warehouse and ETL concepts.
  - Designing, implementing, and testing cloud computing solutions using Snowflake technology.
  - Expertise in Snowflake advanced concepts such as setting up resource monitors, RBAC controls, virtual warehouse sizing, query performance tuning, zero-copy clone, and Time Travel, and an understanding of how to use these features.
  - Creating, monitoring, and optimizing ETL/ELT processes (Talend, Informatica), migrating solutions from on-premises to public cloud platforms.
- Expert-level understanding of data warehouses, core database concepts, and relational database design.
- Skilled at writing and editing large, complicated SQL statements.
- Experience writing stored procedures, optimization, and performance tuning.
- Strong technology acumen and a deep strategic mindset.
- Proven track record of delivering results.
- Proven analytical skills and experience making decisions based on hard and soft data.
- A desire and openness to learning and continuous improvement, of both yourself and your team members.
- Exposure to SDLC tools such as JIRA, Confluence, SVN, TeamCity, Jenkins, Nolio, and Crucible.
- Experience with DevOps, CI/CD, and Agile methodology.
- Good to have: experience with Business Intelligence tools.
- Familiarity with Postgres and Python is a plus.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. For more information, visit genpact.com. Follow us on LinkedIn, X, YouTube, and Facebook.
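The Snowflake bullets above mention resource monitors, zero-copy clone, and Time Travel. A minimal sketch of all three via snowflake-connector-python follows; the object names are hypothetical, and creating resource monitors assumes ACCOUNTADMIN privileges.

```python
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account>", user="<user>", password="<password>",
    warehouse="<warehouse>", database="<database>", schema="<schema>",
)
cur = conn.cursor()

# Time Travel: query the table as it looked one hour (3600 s) ago.
cur.execute("SELECT COUNT(*) FROM positions AT(OFFSET => -3600)")
print(cur.fetchone()[0])

# Zero-copy clone: instant, storage-efficient point-in-time copy for testing.
cur.execute("CREATE TABLE positions_test CLONE positions AT(OFFSET => -3600)")

# Resource monitor: cap monthly credit burn and suspend at the quota.
cur.execute("""
    CREATE RESOURCE MONITOR rm_etl WITH CREDIT_QUOTA = 100
    TRIGGERS ON 90 PERCENT DO NOTIFY
             ON 100 PERCENT DO SUSPEND
""")
conn.close()
```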

Posted 1 week ago

Apply

6.0 - 12.0 years

15 - 25 Lacs

Pune, Maharashtra, India

On-site

Immediately hiring for Snowflake Developer (Snowflake + Python)

SPN Globe is a specialized IT recruitment and staffing firm. We are the sourcing partner of various software companies for permanent and contract-to-hire demands, working with top-notch (CMMI Level 5) clients, PAN-India clients, and clients in major cities such as Hyderabad, Bengaluru, Pune, Mumbai, Nagpur, and NCR.

Position details:
Role: Snowflake Developer (Snowflake + Python)
Type: Permanent (no third-party payroll)
Location: Pune
Joining: 0 to 30 days / immediate joiners
Experience: 6+ years
Mandatory skills: Snowflake + Python
Work mode: Hybrid

Job Description: We're looking for a candidate with a strong background in data technologies such as SQL Server, Snowflake, and similar platforms. In addition, they should bring experience in at least one other programming language, with proficiency in Python being a key requirement. The ideal candidate should also have:
- Exposure to DevOps pipelines within a data engineering context
- At least a high-level understanding of AWS services and how they fit into modern data architectures
- A proactive mindset: someone who is motivated to take initiative and contribute beyond assigned tasks

Apply immediately. Email your resume at [HIDDEN TEXT], and please refer this opportunity to your friends.
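For a role pairing Snowflake with Python, a minimal Snowpark sketch is a reasonable warm-up: it pushes the transformation down into Snowflake rather than pulling data out. The connection parameters and the ORDERS table below are hypothetical.

```python
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col, sum as sum_

# Placeholder connection details.
connection_parameters = {
    "account": "<account>", "user": "<user>", "password": "<password>",
    "warehouse": "<warehouse>", "database": "<database>", "schema": "<schema>",
}
session = Session.builder.configs(connection_parameters).create()

# Lazily-evaluated DataFrame: the aggregation compiles to SQL and runs
# inside Snowflake when the save_as_table action is triggered.
orders = session.table("ORDERS")
daily_totals = (
    orders.filter(col("STATUS") == "SHIPPED")
          .group_by("ORDER_DATE")
          .agg(sum_(col("AMOUNT")).alias("TOTAL_AMOUNT"))
)
daily_totals.write.mode("overwrite").save_as_table("DAILY_SHIPPED_TOTALS")
session.close()
```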

Posted 1 week ago

Apply

10.0 - 15.0 years

10 - 15 Lacs

Ahmedabad, Gujarat, India

On-site

Key Responsibilities:
- Design, develop, and implement end-to-end data architecture solutions.
- Provide technical leadership in Azure, Databricks, Snowflake, and Microsoft Fabric.
- Architect scalable, secure, and high-performing data solutions.
- Work on data strategy, governance, and optimization.
- Implement and optimize Power BI dashboards and SQL-based analytics.
- Collaborate with cross-functional teams to deliver robust data solutions.

Primary Skills Required:
- Data architecture and solutioning
- Azure cloud (data services, storage, Synapse, etc.)
- Databricks and Snowflake (data engineering and warehousing)
- Power BI (visualization and reporting)
- Microsoft Fabric (data and AI integration)
- SQL (advanced querying and optimization)

Posted 1 week ago

Apply

7.0 - 12.0 years

7 - 12 Lacs

Pune, Maharashtra, India

On-site

At ZS, we honor the visible and invisible elements of our identities, personal experiences, and belief systems - the ones that comprise us as individuals, shape who we are, and make us unique. We believe your personal interests, identities, and desire to learn are part of your success here. Learn more about our diversity, equity, and inclusion efforts and the networks ZS supports to assist our ZSers in cultivating community spaces, obtaining the resources they need to thrive, and sharing the messages they are passionate about.

What You'll Do:
- Design and implement an enterprise data management strategy aligned with business processes, focusing on data model design, database development standards, and data management frameworks.
- Develop and maintain data management and governance frameworks to ensure data quality, consistency, and compliance for Discovery domains such as multi-omics, in vivo, ex vivo, and in vitro datasets.
- Design and develop scalable cloud-based (AWS or Azure) solutions following enterprise standards.
- Design robust data models for semi-structured/structured datasets using various modeling techniques.
- Design and implement complex ETL data pipelines to handle various semi-structured/structured datasets coming from labs and scientific platforms.
- Work with lab ecosystems (ELNs, LIMS, CDS, etc.) to build integration and data solutions around them.
- Collaborate with various stakeholders, including data scientists, researchers, and IT, to optimize data utilization and align data strategies with organizational goals.
- Stay abreast of the latest trends in data management technologies and introduce innovative approaches to data analysis and pipeline development.
- Lead projects from conception to completion, ensuring alignment with enterprise goals and standards.
- Communicate complex technical details effectively to both technical and non-technical stakeholders.

What You'll Bring:
- Minimum of 7+ years of hands-on experience developing data management solutions that solve problems in the Discovery/Research domain.
- Advanced knowledge of data management tools and frameworks, such as SQL/NoSQL, ETL/ELT tools, and data visualization tools across various private clouds.
- Strong experience with cloud-based DBMS/data warehouse offerings (AWS Redshift, AWS RDS/Aurora, Snowflake, Databricks) and cloud-based ETL tools.
- Well versed in the different cloud computing offerings in AWS and Azure.
- Well aware of industry-followed data security and governance norms.
- Building API integration layers between multiple systems.
- Hands-on experience with data platform technologies like Databricks, AWS, Snowflake, and HPC (certifications are a plus).
- Strong programming skills in languages such as Python and R.
- Strong organizational and leadership skills.
- Bachelor's or Master's degree in Computational Biology, Computer Science, or a related field; a Ph.D. is a plus.

Preferred/Good to Have:
- MLOps expertise leveraging ML platforms like Dataiku, Databricks, and SageMaker.
- Experience with other technologies such as data sharing (e.g., Starburst), data virtualization (Denodo), and API management (MuleSoft, etc.).
- Cloud Solution Architect certification (such as AWS SA Professional or others).

Posted 1 week ago

Apply

5.0 - 10.0 years

5 - 10 Lacs

Hyderabad / Secunderabad, Telangana, Telangana, India

On-site

We are seeking a highly capable Data Engineer to join our data engineering and integration team. The ideal candidate will bring expertise in Snowflake, Cortex AI, MuleSoft, and Automate by Fortra (RPA). This role requires a strong understanding of data architecture, integration pipelines, and automation, with a focus on enabling seamless product integrations and intelligent data automation across systems.

Key Responsibilities:
- Design, develop, and maintain scalable ETL/ELT pipelines using Snowflake and MuleSoft.
- Build and manage integrations between internal platforms and third-party systems to support product integration needs.
- Leverage Automate (Fortra) to build and orchestrate RPA bots for process automation, data extraction, and system interoperability.
- Collaborate with AI/ML teams to operationalize models using Cortex AI, ensuring accurate and timely data delivery.
- Develop and maintain APIs and microservices using the MuleSoft Anypoint Platform for seamless data integration.
- Create and manage data workflows, ensuring data quality, lineage, and compliance.
- Work with product teams to understand integration requirements and support feature rollouts involving data exchange.
- Monitor the performance and reliability of data pipelines and automation tasks.
- Document system architecture, data flow diagrams, and integration strategies.

Required Qualifications:
- 5+ years of experience in data engineering or related roles.
- Strong hands-on experience with MuleSoft for system and API integration.
- Proficiency with Automate by Fortra (RPA) for building and managing automation workflows.
- Proven experience with Snowflake and building optimized SQL-based data transformations.
- Familiarity with Cortex AI or similar AI platforms for model development, deployment, and data flow integration.
- Solid programming skills (Python, JavaScript, or similar scripting languages).
- Experience with cloud platforms (AWS).
- Strong problem-solving skills and the ability to work cross-functionally with engineering and product teams.

Our Perks and Benefits: Our benefits and rewards program has been thoughtfully designed to recognize your skills and contributions, elevate your learning/upskilling experience, and provide care and support for you and your loved ones. As an Apexon Associate, you get continuous skill-based development, opportunities for career advancement, and access to comprehensive health and well-being benefits and assistance. We also offer:
- Group health insurance covering a family of 4
- Term insurance and accident insurance
- Paid holidays and earned leave
- Paid parental leave
- Learning and career development
- Employee wellness

Job location: Hyderabad, India

Posted 1 week ago

Apply

1.0 - 4.0 years

1 - 4 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

- Drive adoption of cloud technology for data processing and warehousing
- Drive SRE strategy for some of GS's largest platforms, including Lakehouse and Data Lake
- Engage with data consumers and producers to match reliability and cost requirements
- Drive strategy with data

Relevant technologies: Snowflake, AWS, Grafana, PromQL, Python, Java, OpenTelemetry, GitLab

Basic Qualifications:
- A Bachelor's or Master's degree in a computational field (Computer Science, Applied Mathematics, Engineering, or a related quantitative discipline)
- 1-4+ years of relevant work experience in a team-focused environment
- 1-2 years of hands-on developer experience at some point in your career
- Understanding and experience of DevOps and SRE principles and automation, managing technical and operational risk
- Experience with cloud infrastructure (AWS, Azure, or GCP)
- Proven experience in driving strategy with data
- Deep understanding of the multi-dimensionality of data, data curation, and data quality, such as traceability, security, performance latency, and correctness across supply and demand processes
- In-depth knowledge of relational and columnar SQL databases, including database design
- Expertise in data warehousing concepts (e.g., star schema, entitlement implementations, SQL vs. NoSQL modelling, milestoning, indexing, partitioning)
- Excellent communication skills and the ability to work with subject matter experts to extract critical business concepts
- Independent thinker, willing to engage, challenge, or learn
- Ability to stay commercially focused and to always push for quantifiable commercial impact
- Strong work ethic, a sense of ownership, and urgency
- Strong analytical and problem-solving skills
- Ability to build trusted partnerships with key contacts and users across business and engineering teams

Preferred Qualifications:
- Understanding of Data Lake / Lakehouse technologies, including Apache Iceberg
- Experience with cloud databases (e.g., Snowflake, BigQuery)
- Understanding of data modelling concepts
- Working knowledge of tools such as AWS Lambda and the open-source Prometheus
- Experience coding in Java or Python

Posted 1 week ago

Apply

1.0 - 3.0 years

1 - 3 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Overview: We are seeking a skilled Platform Support Engineer to join our team. The ideal candidate will have a strong background in Kubernetes, ETL processes, data integration tools, Linux, and the cloud. Familiarity with AWS is preferred, though experience with any other cloud provider is also acceptable. We are looking for someone with the enthusiasm to develop new skills and grow within our dynamic environment.

Key Qualifications:
- Experience with Kubernetes.
- Experience with ETL processes.
- Proficiency with at least one data integration tool (e.g., Informatica, StreamSets, etc.).
- Experience with Snowflake or any other database.
- Proficient in Linux.
- Cloud experience (AWS preferred, but any other cloud provider is acceptable).
- Python skills are a significant plus.
- Ability to develop additional technical skills.

Desired Attributes:
- Minimum of 2-3 years of relevant experience.
- Strong problem-solving abilities and attention to detail.
- A collaborative mindset and willingness to learn and grow within the role.

Posted 1 week ago

Apply

1.0 - 4.0 years

2 - 5 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

As an Engineer in the Risk Engineering organization, you will have the opportunity to impact one or more aspects of risk management. You will work with a team of talented engineers to drive the build and adoption of common tools, platforms, and applications. The team builds solutions that are offered as software products or as hosted services. We are a dynamic team of talented developers and architects who partner with business areas and other technology teams to deliver high-profile projects using a raft of fit-for-purpose technologies (Java, Python, Snowflake, cloud computing (AWS), S3, Spark, ReactJS, among many others). A glimpse of the interesting problems we engineer solutions for includes acquiring high-quality data, storing it, performing risk computations in a limited amount of time using distributed computing, and making data available to enable actionable risk insights through analytical and response user interfaces.

Basic Qualifications:
- Bachelor's degree in Computer Science, Mathematics, Electrical Engineering, or a related technical discipline, with 6 months to 2 years of working experience
- Willingness to learn new things and work with a diverse group of people across multiple geographical locations
- An eagerness to grow as a Software Engineer
- Proficiency in Java, Python, or another object-oriented programming language
- A clear understanding of data structures, algorithms, software design, and core programming concepts

Preferred Qualifications:
- Experience with one or more major relational/object databases, including cloud databases like Snowflake
- Interact with business users to resolve issues with applications
- Performance-tune applications to improve memory and CPU utilization

Posted 1 week ago

Apply

6.0 - 8.0 years

6 - 8 Lacs

Navi Mumbai, Maharashtra, India

On-site

We are looking for a Senior Big Data Engineer with deep experience in building scalable, high-performance data processing pipelines using Snowflake (Snowpark) and the Hadoop ecosystem. You'll design and implement batch and streaming data workflows, transform complex datasets, and optimize infrastructure to power analytics and data science solutions.

Key Responsibilities:
- Design, develop, and maintain end-to-end scalable data pipelines for high-volume batch and real-time use cases.
- Implement advanced data transformations using Spark, Snowpark, Pig, and Sqoop.
- Process large-scale datasets from varied sources using tools across the Hadoop ecosystem.
- Optimize data storage and retrieval in HBase, Hive, and other NoSQL stores.
- Collaborate closely with data scientists, analysts, and business stakeholders to enable data-driven decision-making.
- Ensure data quality, integrity, and compliance with enterprise security and governance standards.
- Tune and troubleshoot distributed data applications for performance and efficiency.

Must-Have Skills:
- 5+ years in Data Engineering or Big Data roles
- Expertise in: Snowflake (Snowpark), Apache Spark, MapReduce, Hadoop, Sqoop, Pig, HBase
- Strong knowledge of: ETL/ELT pipeline design, distributed computing principles, Big Data architecture and performance tuning
- Proven experience handling large-scale data ingestion, processing, and transformation

Nice-to-Have Skills:
- Workflow orchestration with Apache Airflow or Oozie
- Cloud experience: AWS, Azure, or GCP
- Proficiency in Python or Scala
- Familiarity with CI/CD pipelines, Git, and DevOps environments

Soft Skills:
- Strong problem-solving and analytical mindset
- Excellent communication and documentation abilities
- Ability to work independently and within cross-functional Agile teams
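Given the Spark-plus-Snowpark emphasis above, here is a rough PySpark sketch that aggregates Hadoop-resident data and lands the result in Snowflake through the Spark-Snowflake connector. The HDFS path, credentials, and table name are hypothetical, and the connector jars are assumed to be on the Spark classpath.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("daily_event_counts").getOrCreate()

# Batch-read raw events from HDFS (hypothetical path).
events = spark.read.parquet("hdfs:///data/raw/events/")
daily = events.groupBy("event_date", "event_type").count()

# Placeholder Snowflake options for the Spark-Snowflake connector.
sf_options = {
    "sfURL": "<account>.snowflakecomputing.com",
    "sfUser": "<user>", "sfPassword": "<password>",
    "sfDatabase": "<database>", "sfSchema": "<schema>",
    "sfWarehouse": "<warehouse>",
}

# Write the aggregate into a Snowflake table, replacing prior contents.
(daily.write
      .format("net.snowflake.spark.snowflake")
      .options(**sf_options)
      .option("dbtable", "DAILY_EVENT_COUNTS")
      .mode("overwrite")
      .save())
spark.stop()
```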

Posted 1 week ago

Apply

6.0 - 8.0 years

6 - 8 Lacs

Delhi, India

On-site

We are looking for a Senior Big Data Engineer with deep experience in building scalable, high-performance data processing pipelines using Snowflake (Snowpark) and the Hadoop ecosystem. You'll design and implement batch and streaming data workflows, transform complex datasets, and optimize infrastructure to power analytics and data science solutions.

Key Responsibilities:
- Design, develop, and maintain end-to-end scalable data pipelines for high-volume batch and real-time use cases.
- Implement advanced data transformations using Spark, Snowpark, Pig, and Sqoop.
- Process large-scale datasets from varied sources using tools across the Hadoop ecosystem.
- Optimize data storage and retrieval in HBase, Hive, and other NoSQL stores.
- Collaborate closely with data scientists, analysts, and business stakeholders to enable data-driven decision-making.
- Ensure data quality, integrity, and compliance with enterprise security and governance standards.
- Tune and troubleshoot distributed data applications for performance and efficiency.

Must-Have Skills:
- 5+ years in Data Engineering or Big Data roles
- Expertise in: Snowflake (Snowpark), Apache Spark, MapReduce, Hadoop, Sqoop, Pig, HBase
- Strong knowledge of: ETL/ELT pipeline design, distributed computing principles, Big Data architecture and performance tuning
- Proven experience handling large-scale data ingestion, processing, and transformation

Nice-to-Have Skills:
- Workflow orchestration with Apache Airflow or Oozie
- Cloud experience: AWS, Azure, or GCP
- Proficiency in Python or Scala
- Familiarity with CI/CD pipelines, Git, and DevOps environments

Soft Skills:
- Strong problem-solving and analytical mindset
- Excellent communication and documentation abilities
- Ability to work independently and within cross-functional Agile teams

Posted 1 week ago

Apply

6.0 - 8.0 years

6 - 8 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

We are looking for a Senior Big Data Engineer with deep experience in building scalable, high-performance data processing pipelines using Snowflake (Snowpark) and the Hadoop ecosystem. You'll design and implement batch and streaming data workflows, transform complex datasets, and optimize infrastructure to power analytics and data science solutions.

Key Responsibilities:
- Design, develop, and maintain end-to-end scalable data pipelines for high-volume batch and real-time use cases.
- Implement advanced data transformations using Spark, Snowpark, Pig, and Sqoop.
- Process large-scale datasets from varied sources using tools across the Hadoop ecosystem.
- Optimize data storage and retrieval in HBase, Hive, and other NoSQL stores.
- Collaborate closely with data scientists, analysts, and business stakeholders to enable data-driven decision-making.
- Ensure data quality, integrity, and compliance with enterprise security and governance standards.
- Tune and troubleshoot distributed data applications for performance and efficiency.

Must-Have Skills:
- 5+ years in Data Engineering or Big Data roles
- Expertise in: Snowflake (Snowpark), Apache Spark, MapReduce, Hadoop, Sqoop, Pig, HBase
- Strong knowledge of: ETL/ELT pipeline design, distributed computing principles, Big Data architecture and performance tuning
- Proven experience handling large-scale data ingestion, processing, and transformation

Nice-to-Have Skills:
- Workflow orchestration with Apache Airflow or Oozie
- Cloud experience: AWS, Azure, or GCP
- Proficiency in Python or Scala
- Familiarity with CI/CD pipelines, Git, and DevOps environments

Soft Skills:
- Strong problem-solving and analytical mindset
- Excellent communication and documentation abilities
- Ability to work independently and within cross-functional Agile teams

Posted 1 week ago

Apply

8.0 - 10.0 years

8 - 10 Lacs

Navi Mumbai, Maharashtra, India

On-site

We are seeking an experienced Big Data Engineer to design and maintain scalable data processing systems and pipelines across large-scale, distributed environments. This role requires deep expertise in tools such as Snowflake (Snowpark), Spark, Hadoop, Sqoop, Pig, and HBase. You will work closely with data scientists and stakeholders to transform raw data into actionable intelligence and power analytics platforms.

Key Responsibilities:
- Design and develop high-performance, scalable data pipelines for batch and streaming processing.
- Implement data transformations and ETL workflows using Spark, Snowflake (Snowpark), Pig, Sqoop, and related tools.
- Manage large-scale data ingestion from various structured and unstructured data sources.
- Work with Hadoop ecosystem components including MapReduce, HBase, Hive, and HDFS.
- Optimize storage and query performance for high-throughput, low-latency systems.
- Collaborate with data scientists, analysts, and product teams to define and implement end-to-end data solutions.
- Ensure data integrity, quality, governance, and security across all systems.
- Monitor, troubleshoot, and fine-tune the performance of distributed systems and jobs.

Must-Have Skills:
- Strong hands-on experience with: Snowflake and Snowpark, Apache Spark, Hadoop, MapReduce, Pig, Sqoop, HBase, Hive
- Expertise in data ingestion, transformation, and pipeline orchestration
- In-depth knowledge of distributed computing and big data architecture
- Experience in data modeling, storage optimization, and query performance tuning
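One concrete flavour of the storage and query-performance tuning this listing mentions is Snowflake clustering. A minimal sketch via snowflake-connector-python, with a hypothetical fact table and column:

```python
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account>", user="<user>", password="<password>",
    warehouse="<warehouse>", database="<database>", schema="<schema>",
)
cur = conn.cursor()

# Clustering key: co-locate micro-partitions by event_date so that
# date-range filters prune partitions instead of scanning the table.
cur.execute("ALTER TABLE fact_events CLUSTER BY (event_date)")

# Report how well the table is currently clustered on that key.
cur.execute(
    "SELECT SYSTEM$CLUSTERING_INFORMATION('fact_events', '(event_date)')"
)
print(cur.fetchone()[0])
conn.close()
```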

Posted 1 week ago

Apply

8.0 - 10.0 years

8 - 10 Lacs

Delhi, India

On-site

We are seeking an experienced Big Data Engineer to design and maintain scalable data processing systems and pipelines across large-scale, distributed environments. This role requires deep expertise in tools such as Snowflake (Snowpark), Spark, Hadoop, Sqoop, Pig, and HBase. You will work closely with data scientists and stakeholders to transform raw data into actionable intelligence and power analytics platforms.

Key Responsibilities:
- Design and develop high-performance, scalable data pipelines for batch and streaming processing.
- Implement data transformations and ETL workflows using Spark, Snowflake (Snowpark), Pig, Sqoop, and related tools.
- Manage large-scale data ingestion from various structured and unstructured data sources.
- Work with Hadoop ecosystem components including MapReduce, HBase, Hive, and HDFS.
- Optimize storage and query performance for high-throughput, low-latency systems.
- Collaborate with data scientists, analysts, and product teams to define and implement end-to-end data solutions.
- Ensure data integrity, quality, governance, and security across all systems.
- Monitor, troubleshoot, and fine-tune the performance of distributed systems and jobs.

Must-Have Skills:
- Strong hands-on experience with: Snowflake and Snowpark, Apache Spark, Hadoop, MapReduce, Pig, Sqoop, HBase, Hive
- Expertise in data ingestion, transformation, and pipeline orchestration
- In-depth knowledge of distributed computing and big data architecture
- Experience in data modeling, storage optimization, and query performance tuning

Posted 1 week ago

Apply

8.0 - 10.0 years

8 - 10 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

We are seeking an experienced Big Data Engineer to design and maintain scalable data processing systems and pipelines across large-scale, distributed environments. This role requires deep expertise in tools such as Snowflake (Snowpark), Spark, Hadoop, Sqoop, Pig, and HBase. You will work closely with data scientists and stakeholders to transform raw data into actionable intelligence and power analytics platforms.

Key Responsibilities:
- Design and develop high-performance, scalable data pipelines for batch and streaming processing.
- Implement data transformations and ETL workflows using Spark, Snowflake (Snowpark), Pig, Sqoop, and related tools.
- Manage large-scale data ingestion from various structured and unstructured data sources.
- Work with Hadoop ecosystem components including MapReduce, HBase, Hive, and HDFS.
- Optimize storage and query performance for high-throughput, low-latency systems.
- Collaborate with data scientists, analysts, and product teams to define and implement end-to-end data solutions.
- Ensure data integrity, quality, governance, and security across all systems.
- Monitor, troubleshoot, and fine-tune the performance of distributed systems and jobs.

Must-Have Skills:
- Strong hands-on experience with: Snowflake and Snowpark, Apache Spark, Hadoop, MapReduce, Pig, Sqoop, HBase, Hive
- Expertise in data ingestion, transformation, and pipeline orchestration
- In-depth knowledge of distributed computing and big data architecture
- Experience in data modeling, storage optimization, and query performance tuning

Posted 1 week ago

Apply

11.0 - 16.0 years

11 - 16 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

We are looking for a highly experienced and strategic Data Engineer to drive the design, development, and optimization of our enterprise data platform. This role requires deep technical expertise in AWS, StreamSets, and Snowflake, along with solid experience in Kubernetes, Apache Airflow, and unit testing. The ideal candidate will lead a team of data engineers and play a key role in delivering scalable, secure, and high-performance data solutions for both historical and incremental data loads.

Key Responsibilities:
- Lead the architecture, design, and implementation of end-to-end data pipelines using StreamSets and Snowflake.
- Oversee the development of scalable ETL/ELT processes for historical data migration and incremental data ingestion.
- Guide the team in leveraging AWS services (S3, Lambda, Glue, IAM, etc.) to build cloud-native data solutions.
- Provide technical leadership in deploying and managing containerized applications using Kubernetes.
- Define and implement workflow orchestration strategies using Apache Airflow.
- Establish best practices for unit testing, code quality, and data validation.
- Collaborate with data architects, analysts, and business stakeholders to align data solutions with business goals.
- Mentor junior engineers and foster a culture of continuous improvement and innovation.
- Monitor and optimize data workflows for performance, scalability, and cost-efficiency.

Required Skills & Qualifications:
- High proficiency in AWS, including hands-on experience with core services (S3, Lambda, Glue, IAM, CloudWatch).
- Expert-level experience with StreamSets, including Data Collector, Transformer, and Control Hub.
- Strong Snowflake expertise, including data modeling, SnowSQL, and performance tuning.
- Medium-level experience with Kubernetes, including container orchestration and deployment.
- Working knowledge of Apache Airflow for workflow scheduling and monitoring.
- Experience with unit testing frameworks and practices in data engineering.
- Proven experience in building and managing ETL pipelines for both batch and real-time data.
- Strong command of SQL and scripting languages such as Python or Shell.
- Experience with CI/CD pipelines and version control tools (e.g., Git, Jenkins).

Preferred Qualifications:
- AWS certification (e.g., AWS Certified Data Analytics, Solutions Architect).
- Experience with data governance, security, and compliance frameworks.
- Familiarity with Agile methodologies and tools like Jira and Confluence.
- Prior experience in a leadership or mentoring role within a data engineering team.

Our Perks and Benefits: Our benefits and rewards program has been thoughtfully designed to recognize your skills and contributions, elevate your learning/upskilling experience, and provide care and support for you and your loved ones. As an Apexon Associate, you get continuous skill-based development, opportunities for career advancement, and access to comprehensive health and well-being benefits and assistance. We also offer:
- Group health insurance covering a family of 4
- Term insurance and accident insurance
- Paid holidays and earned leave
- Paid parental leave
- Learning and career development
- Employee wellness

Job location: Bengaluru, India
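For the Airflow orchestration aspect above, a minimal DAG sketch (assuming Airflow 2.x) chaining an extract step into a Snowflake load; the two callables are hypothetical stubs standing in for the StreamSets/Snowflake logic the role describes.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_batch():
    # Hypothetical: pull the day's files to a staging location.
    print("extracting batch")


def load_to_snowflake():
    # Hypothetical: issue COPY INTO / MERGE statements against Snowflake.
    print("loading to Snowflake")


with DAG(
    dag_id="daily_snowflake_load",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_batch)
    load = PythonOperator(task_id="load", python_callable=load_to_snowflake)
    extract >> load  # load runs only after extract succeeds
```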

Posted 1 week ago

Apply

3.0 - 7.0 years

3 - 7 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

We are looking for a highly experienced and strategic Senior Data Engineer to drive the design, development, and optimization of our enterprise data platform. This role requires deep technical expertise in AWS, StreamSets, and Snowflake, along with solid experience in Kubernetes, Apache Airflow, and unit testing. The ideal candidate will lead a team of data engineers and play a key role in delivering scalable, secure, and high-performance data solutions for both historical and incremental data loads.

Key Responsibilities:
- Lead the architecture, design, and implementation of end-to-end data pipelines using StreamSets and Snowflake.
- Oversee the development of scalable ETL/ELT processes for historical data migration and incremental data ingestion.
- Guide the team in leveraging AWS services (S3, Lambda, Glue, IAM, etc.) to build cloud-native data solutions.
- Provide technical leadership in deploying and managing containerized applications using Kubernetes.
- Define and implement workflow orchestration strategies using Apache Airflow.
- Establish best practices for unit testing, code quality, and data validation.
- Collaborate with data architects, analysts, and business stakeholders to align data solutions with business goals.
- Mentor junior engineers and foster a culture of continuous improvement and innovation.
- Monitor and optimize data workflows for performance, scalability, and cost-efficiency.

Required Skills & Qualifications:
- High proficiency in AWS, including hands-on experience with core services (S3, Lambda, Glue, IAM, CloudWatch).
- Expert-level experience with StreamSets, including Data Collector, Transformer, and Control Hub.
- Strong Snowflake expertise, including data modeling, SnowSQL, and performance tuning.
- Medium-level experience with Kubernetes, including container orchestration and deployment.
- Working knowledge of Apache Airflow for workflow scheduling and monitoring.
- Experience with unit testing frameworks and practices in data engineering.
- Proven experience in building and managing ETL pipelines for both batch and real-time data.
- Strong command of SQL and scripting languages such as Python or Shell.
- Experience with CI/CD pipelines and version control tools (e.g., Git, Jenkins).

Preferred Qualifications:
- AWS certification (e.g., AWS Certified Data Analytics, Solutions Architect).
- Experience with data governance, security, and compliance frameworks.
- Familiarity with Agile methodologies and tools like Jira and Confluence.
- Prior experience in a leadership or mentoring role within a data engineering team.

Our Perks and Benefits: Our benefits and rewards program has been thoughtfully designed to recognize your skills and contributions, elevate your learning/upskilling experience, and provide care and support for you and your loved ones. As an Apexon Associate, you get continuous skill-based development, opportunities for career advancement, and access to comprehensive health and well-being benefits and assistance. We also offer:
- Group health insurance covering a family of 4
- Term insurance and accident insurance
- Paid holidays and earned leave
- Paid parental leave
- Learning and career development
- Employee wellness
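The unit-testing requirement above translates naturally to pytest over pure transformation functions. A minimal sketch, with a hypothetical dedupe helper of the kind an incremental-load pipeline might use (run with `pytest`):

```python
def dedupe_latest(rows):
    """Keep only the most recent record per id, by updated_at (hypothetical helper)."""
    latest = {}
    for row in rows:
        key = row["id"]
        if key not in latest or row["updated_at"] > latest[key]["updated_at"]:
            latest[key] = row
    return list(latest.values())


def test_dedupe_latest_keeps_newest_record():
    rows = [
        {"id": 1, "updated_at": "2024-01-01", "value": "old"},
        {"id": 1, "updated_at": "2024-02-01", "value": "new"},
        {"id": 2, "updated_at": "2024-01-15", "value": "only"},
    ]
    result = {r["id"]: r["value"] for r in dedupe_latest(rows)}
    assert result == {1: "new", 2: "only"}
```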

Posted 1 week ago

Apply

4.0 - 7.0 years

4 - 7 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Design and deliver scalable, data-driven solutions and dashboards that empower self-service analytics and enable business decision-making across the organization.

Key Responsibilities:
- Design, develop, and implement data-driven solutions that promote data democratization.
- Translate business needs into actionable user stories and semantic models.
- Develop scalable BI assets using tools like Power BI, MicroStrategy, Oracle Analytics Cloud, or similar.
- Collaborate with stakeholders across departments to understand their data needs and translate them into impactful dashboard elements.
- Build effective and intuitive dashboards and reports for actionable insights.
- Implement and maintain best practices in data governance, quality, and security.
- Document data models, pipelines, and dashboard logic clearly to enable self-service analytics.
- Promote a culture of data enablement through user training, process mapping, and UX optimization.
- Collaborate with data engineers, analysts, and product owners to ensure project success.
- Stay updated on emerging analytics trends and proactively introduce innovative BI solutions.

Qualifications:
- 3+ years of experience in an Analytics Engineer or related data-focused role.
- Strong understanding of semantic modeling and data modeling principles.
- Proficient in SQL and familiar with data warehousing platforms like Snowflake.
- Hands-on experience building reports with Power BI, MicroStrategy, Oracle Analytics Cloud, etc.
- Strong ability to explain complex data concepts to both technical and non-technical stakeholders.
- A collaborative problem-solver with a user-first mindset.
- High attention to detail and a strong commitment to data accuracy and quality.
- Passion for using data to drive business outcomes.

Top Tools & Skills: SQL, Snowflake, Power BI, MicroStrategy, Oracle Analytics Cloud, semantic modeling, data governance, dashboard development, data storytelling, stakeholder collaboration, documentation and enablement

Posted 1 week ago

Apply

9.0 - 13.0 years

15 - 40 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

This role is for one of Weekday's clients.
Salary range: Rs 1500000 - Rs 4000000 (i.e., INR 15-40 LPA)
Minimum experience: 9 years
Location: Bengaluru
Job type: Full-time

Our client seeks a Data Engineer for their Data & Insights department. This role will focus on developing and supporting data integration and analytics solutions to drive key business functions such as Marketing, Sales, Product Development, and Content Management. Ideal candidates are passionate about data and enjoy deriving insights to influence strategic decisions. Hands-on experience with cloud computing and data warehousing technologies is essential.

Responsibilities:
- Implement data ingestion routines for both real-time and batch processes using best practices in data modeling, ETL/ELT processes, and cloud technologies.
- Gather business and functional requirements and translate them into robust, scalable, and adaptable data architecture solutions.
- Proactively identify, troubleshoot, and resolve strategic issues that may impact the team's ability to meet technical or strategic goals.

Requirements:
- Undergraduate degree in Computer Science.
- Solid understanding of RDBMS concepts such as relational algebra and normalization.
- Basic programming skills in popular languages (Python).
- Strong understanding of data structures and algorithms (DSA) and the ability to recommend optimal data structures for various use cases.
- 5+ years of experience with SQL.
- 5+ years of experience with Snowflake.
- 5+ years of experience with reporting tools such as ThoughtSpot or Tableau.
- 5+ years of experience with GCP or AWS.
- Enthusiastic, fast learner with strong communication skills and attention to detail.

Posted 1 week ago

Apply

6.0 - 8.0 years

6 - 8 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

- 6+ years of total IT experience in development projects.
- 4+ years of experience in cloud-based solutions.
- 4+ years of solid hands-on Snowflake development.
- Hands-on experience designing and building data pipelines on cloud-based infrastructure, having worked extensively on AWS and Snowflake, including end-to-end builds covering ingestion, transformation, and extract generation in Snowflake.
- Strong hands-on experience writing complex SQL queries.
- Good understanding of and experience with Azure cloud services.
- Optimize and tune Snowflake performance, including query optimization, with experience in scaling strategies.
- Address data issues, root cause analysis, and production support.
- Experience working in the financial industry.
- Understanding of Agile methodologies.
- Certifications in Snowflake and Azure are an added advantage.
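On the scaling strategies this listing mentions: in Snowflake you scale up (a bigger warehouse for heavy queries) or scale out (multi-cluster for concurrency). A rough sketch via snowflake-connector-python; the warehouse name is hypothetical, and multi-cluster warehouses assume the Enterprise edition.

```python
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account>", user="<user>", password="<password>",
)
cur = conn.cursor()

# Scale up for a heavy batch window, then back down to control credits.
cur.execute("ALTER WAREHOUSE etl_wh SET WAREHOUSE_SIZE = 'LARGE'")
# ... run the heavy transformations here ...
cur.execute("ALTER WAREHOUSE etl_wh SET WAREHOUSE_SIZE = 'XSMALL'")

# Scale out: let Snowflake add clusters automatically under concurrency.
cur.execute(
    "ALTER WAREHOUSE etl_wh SET MIN_CLUSTER_COUNT = 1 MAX_CLUSTER_COUNT = 3"
)
conn.close()
```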

Posted 1 week ago

Apply

6.0 - 10.0 years

15 - 30 Lacs

Chennai, Tamil Nadu, India

On-site

We are seeking an experienced AWS, Snowflake, and DBT professional to join our team in India. The ideal candidate will have a strong background in cloud technologies and data management, with a focus on delivering high-quality data solutions that drive business insights.

Responsibilities:
- Design, implement, and manage AWS cloud solutions to optimize performance and cost-efficiency.
- Utilize Snowflake for data warehousing and analytics, ensuring data integrity and security.
- Develop and maintain ETL pipelines using DBT to transform raw data into actionable insights.
- Collaborate with cross-functional teams to gather requirements and deliver data solutions.
- Monitor and troubleshoot cloud-based applications and databases to ensure reliability and performance.
- Stay updated on the latest trends and advancements in cloud technologies and data management.

Skills and Qualifications:
- 6-10 years of experience in cloud technologies, specifically AWS.
- Strong proficiency in Snowflake, including data modeling, ETL processes, and performance tuning.
- Experience with DBT for data transformation and analytics.
- In-depth knowledge of SQL and database management.
- Familiarity with data governance and data quality practices.
- Ability to work collaboratively in a team environment and communicate effectively with stakeholders.

Posted 1 week ago

Apply