
2740 Snowflake Jobs - Page 23

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

2.0 - 6.0 years

0 Lacs

Karnataka

On-site

At PwC, our managed services team focuses on providing outsourced solutions and support to clients across various functions. We help organizations streamline operations, reduce costs, and enhance efficiency by managing key processes and functions on their behalf. Our team is skilled in project management, technology, and process optimization to deliver high-quality services to clients. Those in managed service management and strategy at PwC are responsible for transitioning and running services, managing delivery teams, programs, commercials, performance, and delivery risk. Your role will involve continuous improvement and optimization of managed services processes, tools, and services.

As an Associate at PwC, you will work as part of a team of problem solvers, assisting in solving complex business issues from strategy to execution. Professional skills and responsibilities at this level include using feedback and reflection to develop self-awareness, demonstrating critical thinking, and bringing order to unstructured problems. You will be involved in ticket quality review, status reporting for projects, adherence to SLAs, incident management, change management, and problem management. Additionally, you will seek opportunities for exposure to different situations, environments, and perspectives, uphold the firm's code of ethics, demonstrate leadership capabilities, and work in a team environment that includes client interactions and cross-team collaboration.

Required Skills:
- AWS Cloud Engineer
- Minimum 2 years of hands-on experience in building advanced data warehousing solutions on leading cloud platforms
- Minimum 1-3 years of Operate/Managed Services/Production Support experience
- Extensive experience in developing scalable, repeatable, and secure data structures and pipelines
- Designing and implementing data pipelines for data ingestion, processing, and transformation in AWS
- Building efficient ETL/ELT processes using industry-leading tools like AWS, PySpark, SQL, Python, etc.
- Implementing data validation and cleansing procedures
- Monitoring and troubleshooting data pipelines
- Implementing and maintaining data security and privacy measures
- Strong communication, problem-solving, quantitative, and analytical abilities

Nice To Have:
- AWS certification

In our Managed Services platform, we deliver integrated services and solutions grounded in deep industry experience and powered by talent. Our team provides scalable solutions that add value to our clients' enterprise through technology and human-enabled experiences. We focus on empowering clients to navigate and capture the value of their Data & Analytics portfolio while cost-effectively operating and protecting their solutions. As a member of our Data, Analytics & Insights Managed Service team, you will work on critical offerings, help desk support, enhancement, optimization work, and strategic roadmap and advisory level work. Your contribution will be crucial in supporting customer engagements both technically and relationally.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

You should have a minimum of 3 years of experience in a similar role and be proficient in the Java and Python programming languages. A strong understanding of, and working experience with, Solidatus is required, along with a solid understanding of XML and JSON data formats. Knowledge of relational SQL and NoSQL databases such as Oracle, MSSQL, and Snowflake is essential. The ideal candidate will have a Bachelor's degree in Computer Science, Engineering, or a related field. Preferred qualifications include exposure to NLP and LLM technologies and approaches, experience with machine learning and data mining techniques, familiarity with data security and privacy concerns, knowledge of data warehousing and business intelligence concepts, and an advanced degree in Computer Science, Engineering, or a related field.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a Data Engineer specializing in Databricks, your primary responsibility will be to develop, support, and drive end-to-end business intelligence solutions using Databricks. You will collaborate with business analysts and data architects to transform requirements into technical implementations. Your role will involve designing, developing, implementing, and maintaining PySpark code through the Databricks UI to facilitate data and analytics use cases for the client. Additionally, you will code, test, and document new or enhanced data systems to build robust and scalable applications for data analytics. You will also dig into performance, scalability, capacity, and reliability issues to identify and address any arising challenges, and engage in research projects and proofs of concept to enhance data processing capabilities.

Key Requirements:
- 3+ years of hands-on experience with Databricks and PySpark.
- Proficiency in SQL and strong data manipulation skills.
- Sound understanding of data warehousing concepts and technologies.
- Familiarity with Google Pub/Sub, Kafka, or MongoDB is a plus.
- Knowledge of ETL processes and tools for data extraction, transformation, and loading would be beneficial.
- Experience with cloud platforms such as Databricks, Snowflake, or Google Cloud.
- Understanding of data governance and data quality best practices.

Qualifications:
- Bachelor's degree in computer science, engineering, or a related field.
- Continuous learning demonstrated through technical certifications or related methods.
- 3+ years of relevant experience in Data Analytics, preferably within the Retail domain.

Desired Qualities:
- Self-motivated and dedicated to achieving outcomes for a rapidly growing team and organization.
- Effective communication skills through verbal, written, and client presentations.

Location: India
Years of Experience: 3 to 5 years

In this role, your expertise in Databricks and data engineering will play a crucial part in driving impactful business intelligence solutions and contributing to the growth and success of the organization.
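To give a feel for the PySpark work this posting describes, here is a minimal, hypothetical sketch of a Databricks-style ingestion step: it reads raw JSON, applies a simple cleansing transformation, and writes a Delta table. The paths and column names are illustrative assumptions, and it presumes a Databricks runtime where a `spark` session and the Delta format are already available.

```python
from pyspark.sql import functions as F

# Hypothetical locations; replace with real mount points or Unity Catalog paths.
raw_path = "/mnt/raw/events/"          # assumed landing zone for JSON events
delta_path = "/mnt/curated/events/"    # assumed curated Delta location

# Read raw JSON events (schema inferred here for brevity; production code would pin a schema).
events = spark.read.json(raw_path)

# Basic cleansing: drop rows missing key fields and derive an event date.
curated = (
    events
    .dropna(subset=["event_id", "event_ts"])
    .withColumn("event_date", F.to_date(F.col("event_ts")))
)

# Write as a Delta table partitioned by date for downstream analytics.
(
    curated.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("event_date")
    .save(delta_path)
)
```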

Posted 1 week ago

Apply

7.0 - 11.0 years

0 Lacs

Thiruvananthapuram, Kerala

On-site

As a skilled Data Engineer with 7-10 years of experience, you will be a valuable addition to our dynamic team in India. Your primary focus will involve designing and optimizing data pipelines to efficiently handle large datasets and extract valuable business insights.

Your responsibilities will include designing, building, and maintaining scalable data pipelines and architecture. You will be expected to develop and enhance ETL processes for data ingestion and transformation, collaborating closely with data scientists and analysts to meet data requirements and deliver effective solutions. Monitoring data integrity through data quality checks and ensuring compliance with data governance and security policies will also be part of your role. Leveraging cloud-based data technologies and services for storage and processing will be crucial to your success in this position.

To excel in this role, you should hold a Bachelor's or Master's degree in Computer Science, Engineering, or a related field. Proficiency in SQL and practical experience with databases such as MySQL, PostgreSQL, or Oracle is essential. Your expertise in programming languages like Python, Java, or Scala will be highly valuable, along with hands-on experience in big data technologies like Hadoop, Spark, or Kafka. Familiarity with cloud platforms such as AWS, Azure, or Google Cloud is preferred. Understanding of data warehousing concepts and tools such as Redshift and Snowflake, coupled with experience in data modeling and architecture design, will further strengthen your candidacy.

Posted 1 week ago

Apply

7.0 - 11.0 years

0 Lacs

Karnataka

On-site

You will be working as a Data Engineer with expertise in Python and PySpark programming. You should have a strong background in utilizing cloud services such as Snowflake, Databricks, Informatica, Azure, AWS, and GCP, as well as proficiency in reporting technologies like Power BI, Tableau, Spotfire, Alteryx, and MicroStrategy. Your responsibilities will include developing and maintaining data pipelines, optimizing data workflows, and ensuring the efficiency and reliability of data integration processes.

You are expected to possess strong programming skills in Python and PySpark, along with a deep understanding of SQL, and hands-on experience with Snowflake, Databricks, Power BI, MicroStrategy, Tableau, and Spotfire. Familiarity with Informatica and Azure/AWS services would be advantageous. The interview process will be conducted virtually, and the work model for this position is remote. If you have 7-10 years of experience in this field and are available to start within 15 days, please apply by sending your resume to netra.s@twsol.com.

Posted 1 week ago

Apply

12.0 - 16.0 years

0 Lacs

Hyderabad, Telangana

On-site

You are a highly skilled Architect with expertise in Snowflake Data Modeling and Cloud Data solutions. With over 12 years of experience in Data Modeling/Data Warehousing and 5+ years specifically in Snowflake, you will lead Snowflake optimizations at warehouse and database levels. Your role involves setting up, configuring, and deploying Snowflake components efficiently for various projects.

You will work with a passionate team of engineers at ValueMomentum's Engineering Center, focused on transforming the P&C insurance value chain through innovative solutions. The team specializes in Cloud Engineering, Application Engineering, Data Engineering, Core Engineering, Quality Engineering, and domain expertise. As part of the team, you will have opportunities for role-specific skill development and contribute to impactful projects.

As an Architect, you will be responsible for optimizing Snowflake at both warehouse and database levels, setting up and configuring Snowflake components, and implementing cloud management frameworks. Proficiency in Python, PySpark, and SQL, and experience with cloud platforms such as AWS, Azure, and GCP, are essential for this role.

Key Responsibilities:
- Work on Snowflake optimizations at warehouse and database levels.
- Set up, configure, and deploy Snowflake components such as databases, warehouses, and roles.
- Set up and monitor data shares and Snowpipes for Snowflake projects.
- Implement Snowflake cloud management frameworks for monitoring, alerting, governance, budgets, change management, and cost optimization.
- Develop cloud usage reporting for cost-related insights, metrics, and KPIs.
- Build and enhance Snowflake forecasting processes and explore cloud spend trends.

Requirements:
- 12+ years of experience in Data Modeling/Data Warehousing.
- 5+ years of experience in Snowflake Data Modeling and Architecture, including expertise in cloning, data sharing, and search optimization.
- Proficiency in Python, PySpark, and complex SQL for analysis.
- Experience with cloud platforms like AWS, Azure, and GCP.
- Knowledge of Snowflake performance management and cloud-based database role management.

ValueMomentum is a leading solutions provider for the global property and casualty insurance industry. It focuses on helping insurers achieve sustained growth, high performance, and stakeholder value. The company has served over 100 insurers and is dedicated to fostering resilient societies.

Benefits at ValueMomentum include a competitive compensation package, career advancement opportunities through coaching and mentoring programs, comprehensive training and certification programs, and performance management with goal setting, continuous feedback, and rewards for exceptional performers.
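As a purely illustrative sketch of the Snowflake component setup this role mentions (warehouse, database, role, and an auto-ingest pipe), the snippet below scripts the DDL with the Snowflake Python connector. The object names, stage, and connection details are hypothetical assumptions, not ValueMomentum's actual environment.

```python
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",   # assumed account identifier
    user="deploy_user",     # assumed service user
    password="***",         # use a secrets manager in practice
    role="SYSADMIN",
)

ddl_statements = [
    "CREATE WAREHOUSE IF NOT EXISTS ANALYTICS_WH WAREHOUSE_SIZE = 'XSMALL' AUTO_SUSPEND = 60",
    "CREATE DATABASE IF NOT EXISTS ANALYTICS_DB",
    "CREATE ROLE IF NOT EXISTS ANALYTICS_READER",
    "GRANT USAGE ON DATABASE ANALYTICS_DB TO ROLE ANALYTICS_READER",
    # Auto-ingest pipe that continuously copies staged files into a target table;
    # @ANALYTICS_DB.PUBLIC.RAW_STAGE is an assumed external stage.
    """
    CREATE PIPE IF NOT EXISTS ANALYTICS_DB.PUBLIC.RAW_EVENTS_PIPE AUTO_INGEST = TRUE AS
      COPY INTO ANALYTICS_DB.PUBLIC.RAW_EVENTS
      FROM @ANALYTICS_DB.PUBLIC.RAW_STAGE
      FILE_FORMAT = (TYPE = 'JSON')
    """,
]

with conn.cursor() as cur:
    for ddl in ddl_statements:
        cur.execute(ddl)

conn.close()
```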

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Karnataka

On-site

As a Data Product Analyst at Wells Fargo, you will be responsible for participating in low to moderate complexity data product initiatives. Your role will involve identifying opportunities for data roadmap improvements within your scope of responsibilities to drive data enablement and capabilities across platforms and utilities. You will review and analyze basic business, operational, or technical assignments that require research and evaluation to drive data enablement strategies. Additionally, you will present recommendations for resolving data product situations, collaborate with stakeholders to understand business requirements, and manage datasets focusing on consumer needs and data governance standards. Moreover, you will participate in the creation and maintenance of data product roadmaps, gather data requirements, and communicate data problems and initiatives effectively to all audiences.

Required qualifications include 2+ years of data product or data management experience, or equivalent demonstrated expertise in maintaining and improving data quality across the organization. Your responsibilities will also involve participating in analysis to identify and remediate data quality issues, adhering to data governance standards, and designing data governance and data quality policies. Furthermore, you will support regulatory analysis and reporting requirements, work with business and technology partners to document metadata about systems, and assess the current state of data quality.

Desired qualifications for this role include experience in large enterprise data initiatives, managing data entry processes, resolving data quality issues, banking business or technology experience, and familiarity with BI tools and cloud concepts. In addition, knowledge of T-SQL, databases, data warehousing, ETL concepts, BI solutions, and Agile principles, along with various technical skills, is preferred for this position. As a Data Product Analyst, you are expected to assist in implementing data processes, monitor data flows, ensure consistent data definition across systems, collaborate with data engineers, and resolve data quality issues.

The posting end date for this job is 17 Jul 2025, with the possibility of early closure due to the volume of applicants. Wells Fargo values equal opportunity and encourages applications from all qualified candidates. The company maintains a drug-free workplace and requires candidates to represent their own experiences during the recruiting and hiring process. If you require a medical accommodation during the application or interview process, you can visit Disability Inclusion at Wells Fargo for assistance. Third-party recordings are prohibited unless authorized by Wells Fargo, and candidates should adhere to the company's recruitment and hiring requirements.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Maharashtra

On-site

As a Senior Data Scientist in the Global Data Science & Advanced Analytics team at Colgate-Palmolive, your role will involve leading projects within the Analytics Continuum. You will be responsible for conceptualizing and developing machine learning, predictive modeling, simulations, and optimization solutions to address business questions with clear dollar objectives. Your work will have a significant impact on revenue growth management, price elasticity, promotion analytics, and marketing mix modeling.

Your responsibilities will include:
- Conceptualizing and building predictive modeling solutions to address business use cases
- Applying machine learning and AI algorithms to develop scalable solutions for business deployment
- Developing end-to-end business solutions from data extraction to statistical modeling
- Conducting model validations and continuous improvement of algorithms
- Deploying models using Airflow and Docker on Google Cloud Platform
- Leading pricing, promotion, and marketing mix initiatives from scoping to delivery
- Studying large datasets to discover trends and patterns
- Presenting insights in a clear and interpretable manner to business teams
- Developing visualizations using frameworks like Looker, PyDash, Flask, Plotly, and Streamlit
- Collaborating closely with business partners across different geographies

To qualify for this position, you should have:
- A degree in Computer Science, Information Technology, Business Analytics, Data Science, Economics, or Statistics
- 5+ years of experience in building statistical models and deriving insights
- Proficiency in Python and SQL for coding and statistical modeling
- Hands-on experience with statistical models such as linear regression, random forest, SVM, logistic regression, clustering, and Bayesian regression
- Knowledge of GitHub, Airflow, and visualization frameworks
- Understanding of Google Cloud and related services like Kubernetes and Cloud Build

Preferred qualifications include experience with revenue growth management, pricing, marketing mix models, and third-party data. Knowledge of machine learning techniques and Google Cloud products will be advantageous for this role.

Colgate-Palmolive is committed to fostering an inclusive environment where diversity is valued, and every individual is treated with respect. As an Equal Opportunity Employer, we encourage applications from candidates with diverse backgrounds and perspectives. If you require accommodation during the application process due to a disability, please complete the request form provided. Join us in building a brighter, healthier future for all.
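As a rough illustration of the price-elasticity modeling this posting refers to (not Colgate-Palmolive's actual methodology), a log-log linear regression is a common approach: the fitted coefficient on log price approximates the percentage change in demand for a 1% change in price. The data below is synthetic.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic weekly data: price and units sold, with a true elasticity near -1.5.
price = rng.uniform(2.0, 5.0, size=200)
units = np.exp(8.0 - 1.5 * np.log(price) + rng.normal(0, 0.1, size=200))

# Log-log regression: log(units) = a + b * log(price); b estimates the price elasticity.
X = np.log(price).reshape(-1, 1)
y = np.log(units)

model = LinearRegression().fit(X, y)
print(f"Estimated price elasticity: {model.coef_[0]:.2f}")  # expected to be close to -1.5
```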

Posted 1 week ago

Apply

0.0 - 4.0 years

0 Lacs

Pune, Maharashtra

On-site

Our world is currently undergoing transformation, with PTC at the forefront of this evolution. Our software serves as a bridge between the physical and digital realms, empowering companies to enhance their operations, develop superior products, and equip individuals across all facets of their business.

The driving force behind our success is our talented workforce. Presently, we are a diverse global team of nearly 7,000 individuals. Our primary goal is to provide our team members with opportunities to expand their horizons, acquire new knowledge, and foster personal growth. We value the realization of their ideas and embrace the unique qualities that define us, enabling us to achieve our collective objectives.

Life at PTC goes beyond using cutting-edge technologies to transform the physical world. It means embracing your true self and collaborating with some of the industry's foremost experts to effect positive change in the world. If you share our passion for solving challenges through innovation, you are likely to find the PTC experience as rewarding as we do. Are you prepared to embark on your next career endeavor with us?

We hold individual privacy rights in high regard and are dedicated to managing Personal Information ethically and in compliance with all relevant privacy and data protection regulations. Please refer to our Privacy Policy for further details.

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

You are a highly skilled Salesforce Developer with over 3 years of experience and comprehensive end-to-end business process knowledge. Your role involves working on enhancement and support projects. Your key responsibilities include managing the data migration process, developing best practices and protocols, evaluating different source systems, coordinating with clients to understand their data needs, establishing testing procedures, providing technical support for the data migration process, and creating documentation of the migration process for future projects.

To be successful in this role, you must have a minimum of 2 years of experience in data migration, expertise in Snowflake, knowledge of ETL processes and data deduplication, proficiency in SQL, XML, and JSON, experience with REST APIs and SOAP, strong problem-solving skills, attention to detail, and excellent communication and coordination skills. Knowledge of sales processes such as quoting and Opportunity management in Salesforce is an added advantage.

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Maharashtra

On-site

As an Associate in the Data Transfer, Integration & Quality II role at BNY, you will have the opportunity to be a key part of the Wealth Management Data Governance team. Based in Pune, MH (hybrid), you will play a crucial role in the transformation of data within Wealth Management, ensuring information is accessible and actionable for our business partners. Your responsibilities will include learning industry best practices for data management and data quality, refining data wrangling skills, and providing data-driven insights to our front-line business partners.

Your main focus will be on implementing a collaborative data platform to streamline the movement, transformation, analysis, and communication of information. This will involve building relationships with key stakeholders, understanding the data needs of internal clients, and collaborating with IT to deliver data solutions. You will utilize tools such as Collibra, CDQ, and Dataiku to perform your duties effectively, including connecting to databases, using SQL functions, and creating reports upon request.

To excel in this role, we are looking for candidates with a B Tech/BE/BS degree, with a preference for statistics, math, or engineering degrees. You should have at least 3 years of experience in Data Quality and Data Management, along with 2+ years of experience with Collibra and CDQ. Strong interpersonal skills, SQL proficiency, knowledge of Snowflake, and a passion for helping others and learning new skills are essential. Experience with Dataiku, the financial industry, Excel, and Agile methodologies would be advantageous.

Joining BNY means becoming part of a culture that has been recognized with numerous awards, including being named America's Most Innovative Company and one of the World's Most Admired Companies by Fortune. We are committed to diversity and inclusivity, as demonstrated by our high scores in the Corporate Equality Index and Best Places to Work for Disability Inclusion. Additionally, we have been recognized for our sustainability efforts and gender equality initiatives. At BNY, you will enjoy equal employment opportunities and affirmative action, with a focus on supporting underrepresented groups, females, individuals with disabilities, and protected veterans.

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

Haryana

On-site

You will be responsible for leading and managing the delivery of projects as well as achieving project and team goals. Your tasks will include building and supporting data ingestion and processing pipelines, designing and maintaining machine learning infrastructure, and leading client engagement on technical projects. You will define project scopes, track progress, and allocate work to the team. It will be essential to stay updated on big data technologies and conduct pilots to design scalable data architecture. Collaboration with software engineering teams to drive multi-functional projects to completion will also be a key aspect of your role.

To excel in this position, we expect you to have a minimum of 6 years of experience in data engineering, with at least 2 years in a leadership role. Experience working with global teams and remote clients is required. Hands-on experience in building data pipelines across various infrastructures, knowledge of statistical and machine learning techniques, and the ability to integrate machine learning into data pipelines are essential. Proficiency in advanced SQL, data warehousing concepts, and DataMart design is necessary. Strong familiarity with modern data platform components like Spark and Python, as well as experience with data warehouses (e.g., Google BigQuery, Redshift, Snowflake) and data lakes (e.g., GCS, AWS S3), is expected. Experience in setting up and maintaining data pipelines with AWS Glue, Azure Data Factory, and Google Dataflow, along with relational SQL and NoSQL databases, is also required. Excellent problem-solving and communication skills are essential for this role.

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

Maharashtra

On-site

The role of Staff Engineer - Data in SonyLIV's Digital Business is to lead the data engineering strategy, architect scalable data infrastructure, drive innovation in data processing, ensure operational excellence, and build a high-performance team to enable data-driven insights for OTT content and user engagement. This position is based in Mumbai and requires a minimum of 8 years of experience in the field.

Responsibilities include defining the technical vision for scalable data infrastructure using modern technologies like Spark, Kafka, Snowflake, and cloud services; leading innovation in data processing and architecture through real-time data processing and streaming analytics; ensuring operational excellence in data systems by setting and enforcing standards for data reliability and privacy; building and mentoring a high-caliber data engineering team; collaborating with cross-functional teams; and driving data quality and business insights through automated quality frameworks and BI dashboards.

The successful candidate should have 8+ years of experience in data engineering, business intelligence, and data warehousing, with expertise in high-volume, real-time data environments. They should possess a proven track record in building and managing large data engineering teams, designing and implementing scalable data architectures, proficiency in SQL, experience with object-oriented programming languages, and knowledge of A/B testing methodologies and statistical analysis. Preferred qualifications include a degree in a related technical field, experience managing the end-to-end data engineering lifecycle, working with large-scale infrastructure, familiarity with automated data lineage and auditing tools, and expertise with BI and visualization tools and advanced processing frameworks.

Joining SonyLIV offers the opportunity to drive the future of data-driven entertainment by collaborating with industry professionals, working with comprehensive data sets, leveraging cutting-edge technology, and making a tangible impact on product delivery and user engagement. The ideal candidate will bring a strong foundation in data infrastructure, experience in leading and scaling data teams, and a focus on operational excellence to enhance efficiency.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

NTT DATA is looking for an Informatica Admin to join their team in Bangalore, Karnataka, India. As an Informatica Admin, you will be responsible for designing and implementing scalable ETL solutions using Informatica PowerCenter/IICS/IDMC for structured and semi-structured data. You will define, implement, and monitor data quality rules and scorecards using Informatica Data Quality (IDQ), and work on data governance by implementing data policies, lineage, and metadata management using Axon and Enterprise Data Catalog (EDC) or the cloud metadata catalog/data governance offering.

You will be involved in integrating on-prem and cloud-based applications through Informatica Application Integration and Cloud Services (IICS), and in designing and consuming REST/SOAP APIs within Informatica Cloud for real-time data exchange. Additionally, you will translate business rules into technical specifications, implement profiling, manage rule-based transformation logic, and optimize ETL workflows and data mappings for performance and scalability.

Key Responsibilities:
- ETL development using Informatica PowerCenter/IICS/IDMC
- Data quality implementation using IDQ
- Data governance with Axon and EDC/MCC
- Application integration with IICS
- API and web services design and consumption
- Rule specifications and occurrence handling
- Performance tuning for ETL workflows

Technical Skills:
- Informatica tools: PowerCenter, IICS/IDMC, IDQ, Axon/Cloud, EDC/MCC
- Integration: REST/SOAP API development, Informatica Cloud Application Integration, JSON/XML transformations
- Database and data warehousing: strong knowledge of SQL and PL/SQL; experience with Oracle, SQL Server, Snowflake, or similar DW platforms
- Data governance: understanding of data stewardship, lineage, and metadata standards; exposure to frameworks like DAMA DMBOK is a plus
- Other tools/technologies: Git, Jira, ServiceNow, Unix/Linux scripting
- Cloud platforms: AWS, Azure, or GCP preferred

NTT DATA is a global innovator of business and technology services, serving 75% of the Fortune Global 100. They are committed to helping clients innovate, optimize, and transform for long-term success. With diverse experts in more than 50 countries and a robust partner ecosystem, NTT DATA provides business and technology consulting, data and artificial intelligence, industry solutions, as well as development, implementation, and management of applications, infrastructure, and connectivity. NTT DATA is a leading provider of digital and AI infrastructure and is part of NTT Group, investing over $3.6 billion each year in R&D to help organizations and society move confidently into the digital future. Visit us at us.nttdata.com.

Posted 1 week ago

Apply

1.0 - 5.0 years

0 Lacs

Hyderabad, Telangana

On-site

The ideal candidate for this role should have at least 4 years of experience as an ETL/Informatica developer, including a minimum of 1 year of experience working with Snowflake and 1 year of experience with IICS. It is essential that you have hands-on experience developing specifications, test scripts, and code coverage for all integrations, and that you can support the migration of integration code from lower to higher environments, such as production.

In this role, you will be responsible for full and incremental ETL using Informatica PowerCenter. Your expertise in developing ETL/Informatica for data warehouse integration from various data sources will be valuable. You should also have experience supporting integration configurations with iPaaS through connected apps or web services. Being able to work within an Agile framework is a must for this position, and you should be willing to be on call for selected off-shift hours.

If you meet the requirements and are interested in this onsite position located in Hyderabad, please share your resume with bhavana@ketsoftware.com or contact 91828 22519.

Posted 1 week ago

Apply

5.0 - 8.0 years

6 - 10 Lacs

Pune

Hybrid

Position: Cloud Data Engineer
Mandatory Skills: Cloud-PaaS-GCP-Google Cloud Platform
Essential Skills: Proficiency in Cloud-PaaS-GCP-Google Cloud Platform
Experience Required: 5-8 years
Additional Experience: 8-13 years
Work Location: Wipro, PAN India
Work Arrangement: Hybrid model with 3 days per week in a Wipro office

Job Description:
- Strong expertise in SQL.
- Proficient in Python.
- Excellent knowledge of any cloud technology (AWS, Azure, GCP, etc.), with a preference for GCP.
- Familiarity with PySpark is preferred.

Posted 1 week ago

Apply

7.0 - 12.0 years

10 - 14 Lacs

Gurugram

Work from Office

Company Overview:
Incedo is a US-based consulting, data science, and technology services firm with over 3,000 people helping clients from our six offices across the US, Mexico, and India. We help our clients achieve competitive advantage through end-to-end digital transformation. Our uniqueness lies in bringing together strong engineering, data science, and design capabilities coupled with deep domain understanding. We combine services and products to maximize business impact for our clients in the telecom, banking, wealth management, product engineering, and life science & healthcare industries.

Working at Incedo will provide you an opportunity to work with industry-leading client organizations, deep technology and domain experts, and global teams. Incedo University, our learning platform, provides ample learning opportunities, starting with a structured onboarding program and carrying through various stages of your career. A variety of fun activities is also an integral part of our friendly work environment. Our flexible career paths allow you to grow into a program manager, a technical architect, or a domain expert based on your skills and interests.

Our mission is to enable our clients to maximize business impact from technology by harnessing the transformational impact of emerging technologies and bridging the gap between business and technology.

Role Description:
As an AWS Data Engineer, your role will be to design, develop, and maintain scalable data pipelines on AWS. You will work closely with technical analysts, client stakeholders, data scientists, and other team members to ensure data quality and integrity while optimizing data storage solutions for performance and cost-efficiency. This role requires leveraging AWS native technologies and Databricks for data transformations and scalable data processing.

Responsibilities:
- Lead and support the delivery of data platform modernization projects.
- Design and develop robust and scalable data pipelines leveraging AWS native services.
- Optimize ETL processes, ensuring efficient data transformation.
- Migrate workflows from on-premise to the AWS cloud, ensuring data quality and consistency.
- Design automations and integrations to resolve data inconsistencies and quality issues.
- Perform system testing and validation to ensure successful integration and functionality.
- Implement security and compliance controls in the cloud environment.
- Ensure data quality pre- and post-migration through validation checks, addressing issues regarding completeness, consistency, and accuracy of data sets.
- Collaborate with data architects and lead developers to identify and document manual data movement workflows and design automation strategies.

Qualifications:
- 7+ years of experience with a core data engineering skillset leveraging AWS native technologies (AWS Glue, Python, Snowflake, S3, Redshift).
- Experience in the design and development of robust and scalable data pipelines leveraging AWS native services.
- Proficiency in leveraging Snowflake for data transformations, optimization of ETL pipelines, and scalable data processing.
- Experience with streaming and batch data pipeline/engineering architectures.
- Familiarity with DataOps concepts and tooling for source control and setting up CI/CD pipelines on AWS.
- Hands-on experience with Databricks and a willingness to grow capabilities.
- Experience with data engineering and storage solutions (AWS Glue, EMR, Lambda, Redshift, S3).
- Strong problem-solving and analytical skills.
- Knowledge of Dataiku is needed.
- Graduate/Post-Graduate degree in Computer Science or a related field.

Posted 1 week ago

Apply

5.0 - 10.0 years

7 - 11 Lacs

Bengaluru

Work from Office

Senior Software Engineer - Data

We're seeking a Senior Software Engineer or a Lead Software Engineer to join one of our Data Layer teams. As the name implies, the Data Layer is at the core of all things data at Zeta. Our responsibilities include:
- Developing and maintaining the Zeta Identity Graph platform, which collects billions of behavioural, demographic, location, and transactional signals to power people-based marketing.
- Ingesting vast amounts of identity and event data from our customers and partners.
- Facilitating data transfers across systems.
- Ensuring the integrity and health of our datasets.
- And much more.

As a member of this team, the data engineer will be responsible for designing and expanding our existing data infrastructure, enabling easy access to data, supporting complex data analyses, and automating optimization workflows for business and marketing operations.

Essential Responsibilities:
As a Senior Software Engineer or a Lead Software Engineer, your responsibilities will include:
- Building, refining, tuning, and maintaining our real-time and batch data infrastructure
- Daily use of technologies such as Spark, Airflow, Snowflake, Hive, Scylla, Django, FastAPI, etc.
- Maintaining data quality and accuracy across production data systems
- Working with Data Engineers to optimize data models and workflows
- Working with Data Analysts to develop ETL processes for analysis and reporting
- Working with Product Managers to design and build data products
- Working with our DevOps team to scale and optimize our data infrastructure
- Participating in architecture discussions, influencing the road map, and taking ownership of and responsibility for new projects
- Participating in the on-call rotation in their respective time zones (being available by phone or email in case something goes wrong)

Desired Characteristics:
- Minimum 5-10 years of software engineering experience.
- Proven long-term experience with, and enthusiasm for, distributed data processing at scale, and eagerness to learn new things.
- Expertise in designing and architecting distributed, low-latency, and scalable solutions in either cloud or on-premises environments.
- Exposure to the whole software development lifecycle from inception to production and monitoring.
- Fluency in Python or solid experience in Scala or Java.
- Proficiency with relational databases and advanced SQL.
- Expertise in the usage of services like Spark and Hive.
- Experience with web frameworks such as Flask or Django.
- Experience with a scheduler such as Apache Airflow, Apache Luigi, Chronos, etc.
- Experience with Kafka or other stream message processing solutions.
- Experience using cloud services (AWS) at scale.
- Experience with agile software development processes.
- Excellent interpersonal and communication skills.

Nice to have:
- Experience with large-scale / multi-tenant distributed systems.
- Experience with columnar / NoSQL databases: Vertica, Snowflake, HBase, Scylla, Couchbase.
- Experience with real-time streaming frameworks: Flink, Storm.
- Experience with open table formats such as Iceberg, Hudi, or Delta Lake.
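As a rough, hypothetical sketch of the real-time infrastructure work listed above (not Zeta's actual codebase), the snippet below shows Spark Structured Streaming consuming events from a Kafka topic and appending them to a parquet sink; broker addresses, topic name, and paths are assumptions, and the Spark-Kafka connector package is assumed to be on the classpath.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Requires the spark-sql-kafka connector package to be available to the cluster.
spark = (
    SparkSession.builder
    .appName("identity-events-stream")  # hypothetical job name
    .getOrCreate()
)

# Read a stream of identity/event signals from Kafka (assumed brokers and topic).
raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092,broker2:9092")
    .option("subscribe", "identity-events")
    .load()
)

# Kafka delivers bytes; cast the value to a string and tag with processing time.
events = (
    raw.selectExpr("CAST(value AS STRING) AS payload")
    .withColumn("ingested_at", F.current_timestamp())
)

# Append micro-batches to a parquet sink with checkpointing for fault tolerance.
query = (
    events.writeStream
    .format("parquet")
    .option("path", "/data/identity_events/")              # assumed output path
    .option("checkpointLocation", "/checkpoints/events/")  # assumed checkpoint path
    .outputMode("append")
    .start()
)

query.awaitTermination()
```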

Posted 1 week ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Bengaluru

Remote

Snowflake Developer | 5+ years | Bangalore/Hyderabad (Hybrid) | 1-year contract (extendable)

Required Candidate Profile:
- AWS data services, e.g., S3, Glue, Lambda (AWS Glue is a must)
- Snowpipe
- At least 2-3 core projects in Snowflake development (SQL coding must be strong)
- NumPy or Pandas
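For illustration only, and assuming the snowflake-connector-python package with its pandas extras is installed, the sketch below bulk-loads a small Pandas DataFrame into a Snowflake table with write_pandas; the connection details, warehouse, and table name are hypothetical.

```python
import pandas as pd
import snowflake.connector
from snowflake.connector.pandas_tools import write_pandas

# Hypothetical frame standing in for data staged from S3 or produced by a Glue job.
df = pd.DataFrame({"ORDER_ID": [1, 2, 3], "AMOUNT": [120.5, 89.0, 42.25]})

conn = snowflake.connector.connect(
    account="my_account",   # assumed account identifier
    user="etl_user",        # assumed user
    password="***",         # use a secrets manager in practice
    warehouse="LOAD_WH",
    database="SALES_DB",
    schema="PUBLIC",
)

# write_pandas bulk-loads the DataFrame into the target table via an internal stage.
success, nchunks, nrows, _ = write_pandas(
    conn, df, table_name="ORDERS", auto_create_table=True
)
print(f"Loaded {nrows} rows in {nchunks} chunk(s); success={success}")

conn.close()
```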

Posted 1 week ago

Apply

6.0 - 11.0 years

15 - 30 Lacs

Navi Mumbai, Pune, Bengaluru

Work from Office

Dear Candidate,

Hope you are doing well. Greetings from NAM Info Inc.

NAM Info Inc. is a technology-forward talent management organization dedicated to bridging the gap between industry leaders and exceptional human resources. They pride themselves on delivering quality candidates, deep industry coverage, and knowledge-based training for consultants. Their commitment to long-term partnerships, rooted in ethical practices and trust, positions them as a preferred partner for many industries. Learn more about their vision, achievements, and services on their website at www.nam-it.com.

We have an open position for a Data Engineer role with our company for the Bangalore, Pune, and Mumbai locations.

Job Description
Position: Sr/Lead Data Engineer
Location: Bangalore, Pune, and Mumbai
Experience: 5+ years
Required Skills: Azure, data warehouse, Python, Spark, PySpark, Snowflake/Databricks, any RDBMS, any ETL tool, SQL, Unix scripting, GitHub; strong experience in Azure/AWS/GCP
Employment: Permanent with NAM Info Pvt Ltd
Work Location: Bangalore, Pune, and Mumbai
Working time: 12 PM to 9 PM or 2 PM to 11 PM
Work mode: 5 days work from office, Monday to Friday
Interviews: L1 interview virtual; L2 face to face at the Banashankari office (for Bangalore candidates)
Notice period: immediate to 15 days

If you are fine with the above job details, please share your resume with ananya.das@nam-it.com.

Regards,
Recruitment Team
NAM Info Inc.

Posted 1 week ago

Apply

5.0 - 8.0 years

7 - 10 Lacs

Bengaluru

Work from Office

Role Purpose:
The purpose of the role is to support process delivery by ensuring daily performance of the Production Specialists, resolving technical escalations, and developing technical capability within the Production Specialists.

Responsibilities:
- Oversee and support the process by reviewing daily transactions on performance parameters
- Review the performance dashboard and the scores for the team
- Support the team in improving performance parameters by providing technical support and process guidance
- Record, track, and document all queries received, problem-solving steps taken, and total successful and unsuccessful resolutions
- Ensure standard processes and procedures are followed to resolve all client queries
- Resolve client queries as per the SLAs defined in the contract
- Develop an understanding of the process/product for the team members to facilitate better client interaction and troubleshooting
- Document and analyze call logs to spot the most frequently occurring trends and prevent future problems
- Identify red flags and escalate serious client issues to the team leader in cases of untimely resolution
- Ensure all product information and disclosures are given to clients before and after the call/email requests
- Avoid legal challenges by monitoring compliance with service agreements
- Handle technical escalations through effective diagnosis and troubleshooting of client queries
- Manage and resolve technical roadblocks/escalations as per SLA and quality requirements
- If unable to resolve the issues, escalate them to TA & SES in a timely manner
- Provide product support and resolution to clients by performing question diagnosis and guiding users through step-by-step solutions
- Troubleshoot all client queries in a user-friendly, courteous, and professional manner
- Offer alternative solutions to clients (where appropriate) with the objective of retaining customers' and clients' business
- Organize ideas and effectively communicate oral messages appropriate to listeners and situations
- Follow up and make scheduled call-backs to customers to record feedback and ensure compliance with contract SLAs
- Build people capability to ensure operational excellence and maintain superior customer service levels for the existing account/client
- Mentor and guide Production Specialists on improving technical knowledge
- Collate trainings to be conducted as triage to bridge the skill gaps identified through interviews with the Production Specialists
- Develop and conduct trainings (triages) within products for Production Specialists as per target
- Inform the client about the triages being conducted
- Undertake product trainings to stay current with product features, changes, and updates
- Enroll in product-specific and any other trainings per client requirements/recommendations
- Identify and document the most common problems and recommend appropriate resolutions to the team
- Update job knowledge by participating in self-learning opportunities and maintaining personal networks

Mandatory Skills: Snowflake
Experience: 5-8 years

Posted 1 week ago

Apply

3.0 - 8.0 years

4 - 8 Lacs

Bengaluru

Work from Office

Job Role: Customer Success Engineer
Function: Level 2 Escalation Support Engineer
Location: Bangalore
Shift: Rotational, primarily US time zones (EST/PST support coverage)

Job Summary:
We are looking for a highly motivated and technically adept Customer Success Engineer (CSE) to serve as a key escalation point for the Zeta Marketing Platform (ZMP). This role will interface directly with enterprise customers and internal teams to resolve complex technical issues, provide proactive guidance, and contribute to the continuous improvement of our customer experience.

Key Responsibilities:
- Handle escalated customer tickets (L2), perform in-depth root cause analysis, and drive timely resolution.
- Communicate with customers primarily via e-mail, and also through Slack, MS Teams, and phone as needed.
- Collaborate cross-functionally with Product, Engineering, QA, Design, and DevOps teams to investigate and resolve platform-level issues.
- Apply a structured and data-driven approach to debugging issues in areas such as API integration, campaign workflows, user interface, and data syncing.
- Provide technical walkthroughs and consultative guidance to customers on platform capabilities and best practices.
- Document solutions thoroughly in ticketing systems and contribute to the knowledge base for internal and customer use.
- Identify trends and proactively suggest product or documentation improvements based on recurring customer pain points.
- Participate in post-incident reviews, RCA documentation, and follow-ups with impacted customers.
- Provide support during product upgrades or critical incidents, including weekend or holiday coverage on a rotational basis.

Required Skills & Experience:
- 3+ years of experience in a technical support or product support role in a SaaS or MarTech environment.
- Demonstrated ownership of L2+ escalation issues with strong analytical thinking and troubleshooting depth.
- Strong written and verbal communication skills with the ability to simplify complex technical concepts.
- Hands-on experience with web technologies: APIs (REST), HTML, CSS, JavaScript, SQL, JSON, and browser dev tools.
- Comfortable using tools like Postman, Grafana, Jira, Confluence, or similar systems.
- Prior experience supporting US-based customers and working US time zone hours (minimum 1 year).
- Customer-first mindset with excellent consultative and advocacy skills.
- Ability to manage multiple priorities and deliver under pressure in a fast-paced support environment.
- Experience in writing or reviewing runbooks, playbooks, and RCA documents.

Preferred Qualifications:
- Exposure to marketing automation platforms, customer data platforms (CDPs), or personalization engines.
- Experience with SQL-based investigation and understanding of event/data pipelines.
- Familiarity with tools like Honeycomb, AWS, Snowflake, or similar platforms is a plus.
- Experience in incident management or working with on-call rotations using PagerDuty.
- Experience with GenAI tools like OpenAI, MS Copilot, or DeepSeek.

Soft Skills:
- Self-starter who can work independently with minimal supervision.
- Strong collaboration skills and a positive attitude in cross-team environments.
- Detail-oriented with a passion for problem-solving and continuous learning.

Posted 1 week ago

Apply

2.0 - 4.0 years

6 - 11 Lacs

Bengaluru

Work from Office

Zeta Global is looking for an experienced Machine Learning Engineer with industry-proven, hands-on experience of delivering machine learning models to production to solve business problems.

To be a good fit to join our AI/ML team, you should ideally:
- Be a thought leader who can work with cross-functional partners to foster a data-driven organisation.
- Be a strong team player, with experience contributing to a large project as part of a collaborative team effort.
- Have extensive knowledge and expertise with machine learning engineering best practices and industry standards.
- Empower the product and engineering teams to make data-driven decisions.

What you need to succeed:
- 2 to 4 years of proven experience as a Machine Learning Engineer in a professional setting.
- Proficiency in any programming language (Python preferable).
- Prior experience in building and deploying machine learning systems.
- Experience with containerization: Docker and Kubernetes.
- Experience with AWS cloud services like EKS, ECS, EMR, Lambda, and others.
- Fluency with workflow management tools like Airflow or dbt.
- Familiarity with distributed batch compute technologies such as Spark.
- Experience with modern data warehouses like Snowflake or BigQuery.
- Knowledge of MLflow, Feast, and Terraform is a plus.
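As a purely illustrative sketch of the experiment-tracking tooling named above (assuming MLflow and scikit-learn are installed; the experiment name and data are made up, not Zeta's), the snippet below trains a small model and logs its parameters, metric, and artifact with MLflow.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a real business dataset.
X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

mlflow.set_experiment("demo-churn-model")  # hypothetical experiment name

with mlflow.start_run():
    params = {"n_estimators": 100, "max_depth": 5}
    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)

    accuracy = accuracy_score(y_test, model.predict(X_test))

    # Record everything needed to reproduce and later deploy this run.
    mlflow.log_params(params)
    mlflow.log_metric("accuracy", accuracy)
    mlflow.sklearn.log_model(model, artifact_path="model")

    print(f"Logged run with accuracy={accuracy:.3f}")
```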

Posted 1 week ago

Apply

5.0 - 10.0 years

9 - 13 Lacs

Bengaluru

Work from Office

Ajmera Infotech builds planet-scale software for NYSE-listed clients, driving decisions that can't afford to fail. Our 120-engineer team specializes in highly regulated domains (HIPAA, FDA, SOC 2) and delivers production-grade systems that turn data into strategic advantage.

Why You'll Love It:
- End-to-end impact: build full-stack analytics from lakehouse pipelines to real-time dashboards.
- Fail-safe engineering: TDD, CI/CD, DAX optimization, Unity Catalog, cluster tuning.
- Modern stack: Databricks, PySpark, Delta Lake, Power BI, Airflow.
- Mentorship culture: lead code reviews, share best practices, grow as a domain expert.
- Mission-critical context: help enterprises migrate legacy analytics into cloud-native, governed platforms.
- Compliance-first mindset: work in HIPAA-aligned environments where precision matters.

Key Responsibilities:
- Build scalable pipelines using SQL, PySpark, and Delta Live Tables on Databricks.
- Orchestrate workflows with Databricks Workflows or Airflow; implement SLA-backed retries and alerting.
- Design dimensional models (star/snowflake) with Unity Catalog and Great Expectations validation.
- Deliver robust Power BI solutions: dashboards, semantic layers, paginated reports, DAX.
- Migrate legacy SSRS reports to Power BI with zero loss of logic or governance.
- Optimize compute and cost through cache tuning, partitioning, and capacity monitoring.
- Document everything, from pipeline logic to RLS rules, in Git-controlled formats.
- Collaborate cross-functionally to convert product analytics needs into resilient BI assets.
- Champion mentorship by reviewing notebooks and dashboards and sharing platform standards.

Must-Have Skills:
- 5+ years in analytics engineering, with 3+ in production Databricks/Spark contexts.
- Advanced SQL (incl. windowing), expert PySpark, Delta Lake, Unity Catalog.
- Power BI mastery: DAX optimization, security rules, paginated reports.
- SSRS-to-Power BI migration experience (RDL logic replication).
- Strong Git and CI/CD familiarity, and cloud platform know-how (Azure/AWS).
- Communication skills to bridge technical and business audiences.

Nice-to-Have Skills:
- Databricks Data Engineer Associate certification.
- Streaming pipeline experience (Kafka, Structured Streaming).
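To illustrate the "SLA-backed retries and alerting" orchestration item above, here is a minimal, hypothetical Airflow DAG (not Ajmera's actual pipeline); the task name, schedule, and e-mail address are assumptions.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

default_args = {
    "owner": "analytics",
    "retries": 2,                          # retry failed tasks twice
    "retry_delay": timedelta(minutes=5),   # wait 5 minutes between retries
    "sla": timedelta(hours=1),             # flag the task if it runs past 1 hour
    "email": ["data-alerts@example.com"],  # hypothetical alert recipient
    "email_on_failure": True,
}

def refresh_sales_mart():
    # Placeholder for the real transformation, e.g., triggering a Databricks job
    # or a Delta Live Tables update.
    print("Refreshing sales mart...")

with DAG(
    dag_id="sales_mart_refresh",           # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="0 2 * * *",         # daily at 02:00
    catchup=False,
    default_args=default_args,
) as dag:
    refresh = PythonOperator(
        task_id="refresh_sales_mart",
        python_callable=refresh_sales_mart,
    )
```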

Posted 1 week ago

Apply

5.0 - 10.0 years

9 - 13 Lacs

Hyderabad

Work from Office

- End-to-end impact: build full-stack analytics from lakehouse pipelines to real-time dashboards.
- Fail-safe engineering: TDD, CI/CD, DAX optimization, Unity Catalog, cluster tuning.
- Modern stack: Databricks, PySpark, Delta Lake, Power BI, Airflow.
- Mentorship culture: lead code reviews, share best practices, grow as a domain expert.
- Mission-critical context: help enterprises migrate legacy analytics into cloud-native, governed platforms.
- Compliance-first mindset: work in HIPAA-aligned environments where precision matters.

Key Responsibilities:
- Build scalable pipelines using SQL, PySpark, and Delta Live Tables on Databricks.
- Orchestrate workflows with Databricks Workflows or Airflow; implement SLA-backed retries and alerting.
- Design dimensional models (star/snowflake) with Unity Catalog and Great Expectations validation.
- Deliver robust Power BI solutions: dashboards, semantic layers, paginated reports, DAX.
- Migrate legacy SSRS reports to Power BI with zero loss of logic or governance.
- Optimize compute and cost through cache tuning, partitioning, and capacity monitoring.
- Document everything, from pipeline logic to RLS rules, in Git-controlled formats.
- Collaborate cross-functionally to convert product analytics needs into resilient BI assets.
- Champion mentorship by reviewing notebooks and dashboards and sharing platform standards.

Must-Have Skills:
- 5+ years in analytics engineering, with 3+ in production Databricks/Spark contexts.
- Advanced SQL (incl. windowing), expert PySpark, Delta Lake, Unity Catalog.
- Power BI mastery: DAX optimization, security rules, paginated reports.
- SSRS-to-Power BI migration experience (RDL logic replication).
- Strong Git and CI/CD familiarity, and cloud platform know-how (Azure/AWS).
- Communication skills to bridge technical and business audiences.

Nice-to-Have Skills:
- Databricks Data Engineer Associate certification.
- Streaming pipeline experience (Kafka, Structured Streaming).
- dbt, Great Expectations, or similar data quality frameworks.
- BI diversity: experience with Tableau, Looker, or similar platforms.

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
