5.0 - 8.0 years
7 - 10 Lacs
Bengaluru
Work from Office
About the Opportunity
Job Type: Permanent | Application Deadline: 31 July 2025
Title: Senior Test Analyst | Department: Corporate Enablers Technology | Location: Gurgaon / Bengaluru (Bangalore), India | Reports To: Project Manager | Level: 3
About Fidelity International
We're proud to have been helping our clients build better financial futures for over 50 years. How have we achieved this? By working together, and supporting each other, all over the world. So, join our Corporate Enablers Technology team and feel like you're part of something bigger.
About your team
Corporate Enablers Technology encompasses four distinct portfolios: Chief Finance Officer (CFO), Chief People Officer (CPO), General Counsel (GC), and Corporate Property Services (CPS). We provide the underlying technology for managing, for example, Fidelity's core finance, HR, procurement, legal, risk and compliance, statutory reporting, real estate management, global security and business continuity processes.
About your role
This is a role for an experienced functional and automation tester who understands the overall architecture, the systems involved, and the integration points. The candidate will work closely with the Test Lead and the development team to write functional test scripts and identify the areas that are critical from an end-to-end testing perspective. We need a senior tester who has worked on medium-scale projects and programmes in the past and can therefore add value to the overall programme. The candidate will also work extensively on automating test solutions and should therefore have excellent knowledge of the technologies and tools in the area of test automation.
Key Responsibilities
Design, develop, and execute automated test scripts to improve efficiency and coverage. Perform functional, UI, and database testing to ensure the software meets the specified requirements. Identify, document, and track software defects and ensure their resolution. Collaborate with cross-functional teams to ensure quality throughout the software development lifecycle. Develop and maintain automation frameworks and tools. Continuously improve QA processes and methodologies to enhance product quality. Participate in code reviews and provide feedback on testability and quality. Ensure compliance with industry standards and best practices.
Essential Skills
Hands-on experience in API automation testing (SOAP and REST); a short illustrative sketch follows this posting. Experience in mobile testing (iOS and Android) using Appium or a similar tool. Good hands-on experience in programming languages like Java and Python. Experience in UI automation using Java-Selenium or equivalent technologies. Familiarity with continuous integration and continuous deployment (CI/CD) pipelines. Proven experience as an ETL Tester or Quality Assurance Engineer with a focus on Snowflake. Solid understanding of SQL for data validation and querying within Snowflake. Experience in job scheduling with tools like Control-M. Experience in UNIX, with strong knowledge of Unix commands and tools like PuTTY.
Soft Skills
Sound analytical and debugging skills. Innovative and enthusiastic about technology and using it appropriately to solve problems. A can-do attitude and a natural curiosity to find better solutions. Can work as part of a small team in an independent role. Proven ability to work well under pressure and in a team environment. Ability to work in a fast-paced, dynamic environment. Attention to detail and a commitment to quality.
For starters, we'll offer you a comprehensive benefits package. We'll value your wellbeing and support your development. And we'll be as flexible as we can about where and when you work, finding a balance that works for all of us. It's all part of our commitment to making you feel motivated by the work you do and happy to be part of our team.
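By way of illustration only, here is a minimal sketch of the kind of REST API automation test described in the Essential Skills above, using Python's requests library in a pytest-style test; the base URL, endpoint, payload, and response fields are hypothetical placeholders, not Fidelity systems.

# Minimal REST API test sketch using requests in a pytest-style test.
# The base URL, endpoint and expected response fields are illustrative placeholders.
import requests

BASE_URL = "https://api.example.com"  # hypothetical service under test

def test_create_and_fetch_order():
    # Create a resource via POST and verify the response contract.
    payload = {"account_id": "ACC-001", "amount": 250.0, "currency": "GBP"}
    created = requests.post(f"{BASE_URL}/orders", json=payload, timeout=10)
    assert created.status_code == 201
    order_id = created.json()["id"]

    # Read it back via GET and check field-level consistency.
    fetched = requests.get(f"{BASE_URL}/orders/{order_id}", timeout=10)
    assert fetched.status_code == 200
    body = fetched.json()
    assert body["amount"] == payload["amount"]
    assert body["currency"] == payload["currency"]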
Posted 2 weeks ago
6.0 - 9.0 years
12 - 18 Lacs
Bengaluru
Work from Office
Role & responsibilities: Proficiency in Talend development, Snowflake, and SQL.
Posted 2 weeks ago
1.0 - 4.0 years
2 - 5 Lacs
Gurugram
Work from Office
Location: Bangalore / Hyderabad / Pune
Experience level: 8+ years
About the Role
We are looking for a technical and hands-on Lead Data Engineer to help drive the modernization of our data transformation workflows. We currently rely on legacy SQL scripts orchestrated via Airflow, and we are transitioning to a modular, scalable, CI/CD-driven DBT-based data platform. The ideal candidate has deep experience with DBT and modern data stack design, and has previously led similar migrations, improving code quality, lineage visibility, performance, and engineering best practices.
Key Responsibilities
Lead the migration of legacy SQL-based ETL logic to DBT-based transformations. Design and implement a scalable, modular DBT architecture (models, macros, packages). Audit and refactor legacy SQL for clarity, efficiency, and modularity. Improve CI/CD pipelines for DBT: automated testing, deployment, and code quality enforcement. Collaborate with data analysts, platform engineers, and business stakeholders to understand current gaps and define future data pipelines. Own the Airflow orchestration redesign where needed (e.g., DBT Cloud/API hooks or airflow-dbt integration; see the illustrative sketch at the end of this posting). Define and enforce coding standards, review processes, and documentation practices. Coach junior data engineers on DBT and SQL best practices. Provide lineage and impact analysis improvements using DBT's built-in tools and metadata.
Must-Have Qualifications
8+ years of experience in data engineering. Proven success in migrating legacy SQL to DBT, with visible results. Deep understanding of DBT best practices, including model layering, Jinja templating, testing, and packages. Proficient in SQL performance tuning, modular SQL design, and query optimization. Experience with Airflow (Composer, MWAA), including DAG refactoring and task orchestration. Hands-on experience with modern data stacks (e.g., Snowflake, BigQuery). Familiarity with data testing and CI/CD for analytics workflows. Strong communication and leadership skills; comfortable working cross-functionally.
Nice-to-Have
Experience with DBT Cloud or DBT Core integrations with Airflow. Familiarity with data governance and lineage tools (e.g., dbt docs, Alation). Exposure to Python (for custom Airflow operators/macros or utilities). Previous experience mentoring teams through modern data stack transitions.
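Purely as an illustration of the airflow-dbt integration pattern mentioned in the responsibilities, here is a minimal Airflow DAG sketch that runs DBT Core via BashOperator; the dag_id, project path, target, and schedule are assumptions, not this team's actual configuration.

# Sketch of an Airflow DAG that runs dbt transformations and tests in sequence.
# Paths, schedule and dag_id are illustrative assumptions.
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

DBT_DIR = "/opt/airflow/dbt/analytics"  # hypothetical dbt project location

with DAG(
    dag_id="dbt_daily_transformations",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command=f"cd {DBT_DIR} && dbt run --target prod",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command=f"cd {DBT_DIR} && dbt test --target prod",
    )
    dbt_run >> dbt_test  # run models first, then execute schema/data tests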
Posted 2 weeks ago
5.0 - 10.0 years
7 - 12 Lacs
Bengaluru
Work from Office
We're looking for a Product Engineer who thrives at the intersection of engineering and storytelling. In this role, you'll be responsible for helping data engineers and developers deeply understand what makes Firebolt unique and how to use it to build sub-second analytics experiences at scale. You'll bring a strong technical foundation and real-world experience building analytics or data-intensive applications. You'll use that expertise to craft high-quality content and experiences that resonate with a deeply technical audience, across formats like blog posts, demos, videos, documentation, and conference talks. This role is ideal for someone who wants to stay close to the product and technology while shaping how it is experienced and understood by the outside world. You'll work cross-functionally with product, engineering, marketing, and customer-facing teams to translate technical capabilities into clear, compelling narratives.
Requirements
5+ years in engineering, solutions engineering, or solution architect roles. Proven experience building production-grade analytics systems, data pipelines, or data applications. Strong understanding of modern data infrastructure, with hands-on experience using cloud data warehouses and/or data lakes. Fluent in SQL, and comfortable with performance optimization and data modeling. Excellent written and verbal communication skills, with the ability to translate complex technical topics into engaging content. Experience creating developer-facing content such as technical blogs, demo apps, product tutorials, or internal enablement. Self-starter with strong project management skills and the ability to lead initiatives from concept to execution. Collaborative team player who enjoys working across disciplines and contributing to shared goals. Curious and connected: you know what's happening in the industry, what users are building, and what tools they love (or hate).
Bonus if you have
Prior experience working in startups or fast-paced product organizations. Background in AI, machine learning, or dev tools. Experience speaking at industry conferences, running webinars, or building video tutorials. Contributions to open-source projects or active participation in developer/data communities.
Posted 2 weeks ago
6.0 - 11.0 years
5 - 15 Lacs
Tirupati
Work from Office
About the Role
We are seeking an experienced and driven Technical Project Manager / Technical Delivery Manager to lead complex, high-impact data analytics and data science projects for global clients. This role demands a unique blend of project management expertise, technical depth in cloud and data technologies, and the ability to collaborate across cross-functional teams. You will be responsible for ensuring the successful delivery of data platforms, data products, and enterprise analytics solutions that drive business value.
Key Responsibilities
Project & Delivery Management: Lead the full project lifecycle for enterprise-scale data platforms, including requirement gathering, development, testing, deployment, and post-production support. Own the delivery of Data Warehousing and Data Lakehouse solutions on cloud platforms (Azure, AWS, or GCP). Prepare and maintain detailed project plans (Microsoft Project Plan), and align them with the Statement of Work (SOW) and client expectations. Utilize hybrid project methodologies (Agile + Waterfall) for managing scope, budget, and timelines. Monitor key project KPIs (e.g., SLA, MTTR, MTTA, MTBF) and ensure adherence using tools like ServiceNow.
Data Platform & Architecture Oversight: Collaborate with data engineers and architects to guide the implementation of scalable Data Warehouses (e.g., Redshift, Synapse) and Data Lakehouse architectures (e.g., Databricks, Delta Lake). Ensure data platform solutions meet performance, security, and governance standards. Understand and help manage data integration pipelines, ETL/ELT processes, and BI/reporting requirements.
Client Engagement & Stakeholder Management: Serve as the primary liaison for US/UK clients; manage regular status updates, escalation paths, and expectations across stakeholders. Conduct WSRs, MSRs, and QBRs with clients and internal teams to drive transparency and performance reviews. Facilitate team meetings, highlight risks or blockers, and ensure consistent stakeholder alignment.
Technical Leadership & Troubleshooting: Provide hands-on support and guidance in data infrastructure troubleshooting using tools like Splunk, AppDynamics, and Azure Monitor. Lead incident, problem, and change management processes with data platform operations in mind. Identify automation opportunities and propose technical process improvements across data pipelines and workflows.
Governance, Documentation & Compliance: Create and maintain SOPs, runbooks, implementation documents, and architecture diagrams. Manage project compliance related to data privacy, security, and internal/external audits. Initiate and track Change Requests (CRs) and look for revenue expansion opportunities with clients.
Continuous Improvement & Innovation: Participate in and lead at least three internal process optimization or innovation initiatives annually. Work with engineering, analytics, and DevOps teams to improve CI/CD pipelines and data delivery workflows. Monitor production environments to reduce deployment issues and improve time-to-insight.
Must-Have Qualifications
10+ years of experience in technical project delivery, with a strong focus on data analytics, BI, and cloud data platforms. Strong hands-on experience with SQL and data warehouse technologies like Snowflake, Synapse, Redshift, BigQuery, etc. Proven experience delivering Data Warehouse and Data Lakehouse solutions. Familiarity with tools such as Redshift, Synapse, BigQuery, Databricks, and Delta Lake. Strong cloud knowledge with Azure, AWS, or GCP.
Proficiency in project management tools like Microsoft Project Plan (MPP), JIRA, Confluence, and ServiceNow. Expertise in Agile project methodologies. Excellent communication skills, both verbal and written, with no MTI or grammatical errors. Hands-on experience working with global delivery models (onshore/offshore).
Preferred Qualifications
PMP or Scrum Master certification. Understanding of ITIL processes and DataOps practices. Experience managing end-to-end cloud data transformation projects. Experience in project estimation, proposal writing, and RFP handling.
Desired Skills & Competencies
Deep understanding of SDLC, data architecture, and data governance principles. Strong leadership, decision-making, and conflict-resolution abilities. High attention to detail and accuracy in documentation and reporting. Ability to handle multiple concurrent projects in a fast-paced, data-driven environment. A passion for data-driven innovation and business impact.
Why Join Us?
Be part of a collaborative and agile team driving cutting-edge AI and data engineering solutions. Work on impactful projects that make a difference across industries. Opportunities for professional growth and continuous learning. Competitive salary and benefits package.
Posted 2 weeks ago
12.0 - 16.0 years
40 - 45 Lacs
Pune
Remote
What You'll Do
Job Summary: We are looking for a Senior Technical Lead with expertise in designing SaaS applications and integrations for scale, and in full-stack development, to join our globally distributed Electronic Invoicing & Live Reporting (ELR) team and help us become a global leader in the e-invoicing market, and part of every transaction in the world! We have a phenomenal team working in an open, collaborative environment that makes taxes and compliance less taxing to deal with. It will be up to you and the team to convert the product vision and requirements into a finished product. You will report to a Senior Engineering Manager. You will work as an individual contributor and will not have managerial responsibilities. You will work remotely in India.
What Your Responsibilities Will Be
Avalara e-Invoicing and Platforms: Dig into our multi-patented, cloud-native Avalara product suite. We are building a flexible platform that can handle any opportunity to create and submit electronic invoices and live reporting processes for any industry in any geography. Work with your team to create that while maximizing performance, scalability, and reliability, and while making it 'oh-so-simple' to operate. Design with the vision in mind. Code, review, commit. Create industry-leading products. Automation vs. people power: computers are great for process automation, but there's a limit to what they can do; you and the team will tackle the unique challenges at the intersection of software. Provide technical guidance and mentor the engineers in the team.
What You'll Need To Be Successful
Qualifications: You have experience delivering high-quality features to production with expertise in service-oriented architectures, microservices, and web application development. You understand system performance trade-offs, load balancing, and high-availability engineering. We're looking for a full-stack developer with expertise in Java, Node.js, and Python, so adaptability is valued. Experience with Java, React, microservices, web services, and REST APIs. We also use MySQL and PostgreSQL as our primary transactional RDBMS. We're expanding our cloud tech stack with Redis, Snowflake, Prometheus, Kafka, Kinesis, and Grafana. We use Docker for containerization, Kubernetes for orchestration, and AWS, though Azure and GCP backgrounds are welcomed. Collaborate with other teams to solve challenges and improve code to improve application efficiency. Prior experience working in e-invoicing. A Bachelor's in Computer Science, Engineering, or a related field is desirable. 12+ years of work experience required.
Posted 2 weeks ago
3.0 - 5.0 years
5 - 8 Lacs
Bengaluru
Remote
As a Senior Azure Data Engineer, your responsibilities will include: Building scalable data pipelines using Databricks and PySpark. Transforming raw data into usable business insights. Integrating Azure services like Blob Storage, Data Lake, and Synapse Analytics. Deploying and maintaining machine learning models using MLlib or TensorFlow. Executing large-scale Spark jobs with performance tuning on Spark Pools. Leveraging Databricks Notebooks and managing workflows with MLflow.
Qualifications: Bachelor's/Master's in Computer Science, Data Science, or equivalent. 7+ years in Data Engineering, with 3+ years in Azure Databricks. Strong hands-on skills in: PySpark, Spark SQL, RDDs, Pandas, NumPy, Delta Lake. Azure ecosystem: Data Lake, Blob Storage, Synapse Analytics.
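As a hedged sketch of the pipeline work described above, assuming a hypothetical storage account, container layout, and column names: a small PySpark job that reads raw data from Azure Data Lake Storage, applies a couple of transformations, and writes a Delta table.

# PySpark sketch: raw ADLS data -> cleaned Delta table (illustrative paths and columns).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_curation").getOrCreate()

raw_path = "abfss://raw@examplelake.dfs.core.windows.net/sales/orders/"        # hypothetical
curated_path = "abfss://curated@examplelake.dfs.core.windows.net/sales/orders_clean/"

orders = spark.read.format("json").load(raw_path)

cleaned = (
    orders
    .dropDuplicates(["order_id"])
    .withColumn("order_date", F.to_date("order_timestamp"))
    .withColumn("net_amount", F.col("gross_amount") - F.col("discount"))
    .filter(F.col("net_amount") >= 0)
)

# Delta format (available by default on Databricks) gives ACID writes and time travel.
cleaned.write.format("delta").mode("overwrite").save(curated_path)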
Posted 2 weeks ago
3.0 - 5.0 years
4 - 6 Lacs
Chennai, Bengaluru
Work from Office
Job Overview: We are seeking a highly skilled Technical Data Analyst for a remote contract position (6 to 12 months) to help build a single source of truth for our high-volume direct-to-consumer accounting and financial data warehouse. You will work closely with Finance & Accounting teams and play a pivotal role in dashboard creation, data transformation, and migration from Snowflake to Databricks.
Key Responsibilities:
1. Data Analysis & Reporting: Develop month-end accounting and tax dashboards using SQL in Snowflake (Snowsight). Migrate and transition reports/dashboards to Databricks. Gather, analyze, and transform business requirements from finance/accounting stakeholders into data products.
2. Data Transformation & Aggregation: Build transformation pipelines in Databricks to support balance sheet look-forward views. Maintain data accuracy and consistency throughout the Snowflake-to-Databricks migration. Partner with Data Engineering to optimize pipeline performance.
3. ERP & Data Integration: Support integration of financial data with NetSuite ERP. Validate transformed data to ensure correct ingestion and mapping into ERP systems.
4. Ingestion & Data Ops: Work with Fivetran for ingestion and resolve any pipeline or data accuracy issues. Monitor data workflows and collaborate with engineering teams on troubleshooting.
Required Skills & Qualifications: 5+ years of experience as a Data Analyst (preferably in the Finance/Accounting domain). Strong in SQL, with proven experience in Snowflake and Databricks. Experience in building financial dashboards (month-end close, tax reporting, balance sheets). Understanding of financial/accounting data: GL, journal entries, balance sheet, income statements. Familiarity with Fivetran or similar data ingestion tools. Experience with data transformation in a cloud environment. Strong communication and stakeholder management skills.
Nice to have: Experience working with NetSuite ERP.
Location: Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad
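For illustration, a minimal sketch of the sort of month-end aggregation behind such dashboards, issued from Python against Snowflake; the connection parameters, table, and columns are assumptions, and fetch_pandas_all() requires the connector's pandas/pyarrow extras.

# Sketch: pull a month-end close aggregate from Snowflake into pandas.
# Connection parameters, table and column names are illustrative assumptions.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345",        # hypothetical account identifier
    user="ANALYST_USER",
    password="***",
    warehouse="REPORTING_WH",
    database="FINANCE",
    schema="ACCOUNTING",
)

query = """
    SELECT date_trunc('month', posting_date) AS close_month,
           gl_account,
           SUM(amount)                       AS period_total
    FROM journal_entries
    WHERE posting_date >= dateadd(month, -12, current_date())
    GROUP BY 1, 2
    ORDER BY 1, 2
"""

cur = conn.cursor()
try:
    df = cur.execute(query).fetch_pandas_all()  # needs the pandas extra installed
finally:
    cur.close()
    conn.close()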
Posted 2 weeks ago
5.0 - 10.0 years
22 - 27 Lacs
Navi Mumbai
Work from Office
As an Architect at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Architect, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.
In this role, your responsibilities may include: Implementing and validating predictive models as well as creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques. Designing and implementing various enterprise search applications such as Elasticsearch and Splunk for client requirements. Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours.
Required education: Bachelor's Degree. Preferred education: Master's Degree.
Required technical and professional expertise: Strong understanding of data lake approaches, industry standards and industry best practices. Detailed understanding of the Hadoop framework and ecosystem, MapReduce, and data on containers (data in OpenShift). Applies individual experience/competency and the IBM architecting structured thinking model to analyzing client IT systems. Experience with relational SQL, Big Data, etc. Experience with cloud-native platforms such as AWS, Azure, Google, IBM Cloud, or cloud-native data platforms like Snowflake.
Preferred technical and professional experience: Knowledge of MS Azure Cloud. Experience in Unix shell scripting and Python.
Posted 2 weeks ago
3.0 - 8.0 years
12 - 22 Lacs
Pune, Gurugram, Bengaluru
Work from Office
Life Science Market Data Specialist
Job Description: We are seeking an analytical and business-oriented Market Data Specialist to join the Commercial Data & Analytics team within the Pharma and Life Science industry. This role will be responsible for delivering actionable market insights through data analysis and supporting strategic and operational decision-making across global, regional, or business unit levels. The successful candidate will drive a culture of data-driven decisions by leveraging internal and external data sources.
Key Responsibilities: Conduct comprehensive market sales data analyses using internal and external data (IQVIA, Nielsen, etc.) to deliver key business insights. Collaborate with global, regional, and BU stakeholders to support leadership reporting and decision-making. Interpret large datasets and identify trends, correlations, and insights relevant to business strategies. Promote data-driven decision-making by proactively communicating analytical findings across the organization. Support operational analytics, including market definition management and POS (point of sale) data integration. Design, update, and manage recurring and ad-hoc reports and dashboards to reflect key business metrics. Develop and maintain visual reports and dashboards using tools like Power BI and Snowflake. Coordinate and consolidate data collection from countries across the Prescription, Consumer & Aesthetics portfolios. Ensure timely execution and delivery of data reports from structured databases. Assist in building and optimizing data models and workflows to enhance reporting efficiency.
Minimum Requirements: Bachelor's Degree in Economics, Business Administration, Technology, or comparable. 5+ years' experience as a Market / Sales Data Specialist, preferably supporting international pharma companies. Proficient with Excel and SQL. Proficient in analyzing data and trends, identifying patterns, and arriving at root cause analysis. Hands-on experience with Snowflake and Power BI preferred. Successful track record in preparing data and querying databases. Strong visual and project management skills to represent analyses and findings to the business in a compelling way. Good communication skills to present analyses and findings to the business. Ability to multi-task, with strong management and stakeholder coordination skills.
Interested candidates please share your resume at shalini.kanwar@wns.com
Posted 2 weeks ago
3.0 - 8.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Educational: Bachelor of Engineering. Service Line: Data & Analytics Unit.
Responsibilities: Managing large machine learning applications and designing and implementing new frameworks to build scalable and efficient data processing workflows and machine learning pipelines. Build the tightly integrated pipeline that optimizes and compiles models and then orchestrates their execution. Collaborate with CPU, GPU, and Neural Engine hardware backends to push inference performance and efficiency. Work closely with feature teams to facilitate and debug the integration of increasingly sophisticated models, including large language models. Automate data processing and extraction. Engage with the sales team to find opportunities, understand requirements, and translate those requirements into technical solutions. Develop reusable ML models and assets into production.
Technical and Professional: Excellent Python programming and debugging skills (refer to the Python JD given below). Proficiency with SQL, relational databases, and non-relational databases. Passion for API design and software architecture. Strong communication skills and the ability to naturally explain difficult technical topics to everyone from data scientists to engineers to business partners. Experience with modern neural-network architectures and deep learning libraries (Keras, TensorFlow, PyTorch). Experience with unsupervised ML algorithms. Experience in time-series models and anomaly detection problems. Experience with modern large language models (ChatGPT/BERT) and applications. Expertise with performance optimization. Experience or knowledge in public cloud AWS services: S3, Lambda. Familiarity with distributed databases, such as Snowflake and Oracle. Experience with containerization and orchestration technologies, such as Docker and Kubernetes.
Preferred Skills: Technology-Big Data-Data Processing-Spark; Technology-Machine Learning-R; Technology-Machine Learning-Python
Posted 2 weeks ago
3.0 - 8.0 years
3 - 7 Lacs
Bengaluru
Work from Office
Educational: Bachelor of Engineering, BCS, BBA, BCom, MCA, MSc. Service Line: Data & Analytics Unit.
Responsibilities: Has good knowledge of Snowflake architecture: understanding virtual warehouses (multi-cluster warehouses, autoscaling); metadata and system objects (query history, grants to users, grants to roles, users); micro-partitions; table clustering and auto-reclustering; materialized views and their benefits; data protection with Time Travel in Snowflake (extremely important); analyzing queries using Query Profile, i.e. the explain plan (extremely important); cache architecture; virtual warehouses (VW); named stages; direct loading; Snowpipe; data sharing; streams; JavaScript procedures and tasks. Strong ability to design and develop workflows in Snowflake in at least one cloud technology (preferably AWS). Apply Snowflake programming and ETL experience to write Snowflake SQL and maintain a complex, internally developed reporting system. Preferably, knowledge of ETL activities like data processing from multiple source systems. Extensive knowledge of query performance tuning. Apply knowledge of BI tools. Manage time effectively. Accurately estimate effort for tasks and meet agreed-upon deadlines. Effectively juggle ad-hoc requests and longer-term projects.
Snowflake performance specialist: Familiar with zero-copy cloning and using Time Travel features to clone a table. Familiar with reading a Snowflake query profile, what each step does, and identifying performance bottlenecks from the query profile. Understanding of when a table needs to be clustered. Choosing the right cluster key as part of table design to help query optimization. Working with materialized views and the benefits vs. cost scenario. How Snowflake micro-partitions are maintained and the performance implications with respect to micro-partitions and pruning. Horizontal vs. vertical scaling, and when to do what; the concept of multi-cluster warehouses and autoscaling. Advanced SQL knowledge including window functions and recursive queries, and the ability to understand and rewrite complex SQL as part of performance optimization.
Additional Responsibilities: Domain: Data Warehousing, Business Intelligence. Precise Work Location: Bhubaneswar, Bangalore, Hyderabad, Pune.
Technical and Professional: Mandatory skills: Snowflake. Desired skills: Teradata/Python (not mandatory).
Preferred Skills: Cloud Platform-Snowflake; Technology-OpenSystem-Python-OpenSystem
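To make the zero-copy cloning and Time Travel items concrete, here is a small illustrative sketch, with assumed object names and credentials, that clones a table as it existed an hour ago and then inspects recent query history from Python.

# Sketch: zero-copy clone of a table at a point one hour in the past (Time Travel),
# then a look at recent query history. Object names and credentials are illustrative.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345", user="DEV_USER", password="***",
    warehouse="DEV_WH", database="ANALYTICS", schema="PUBLIC",
)
cur = conn.cursor()
try:
    # Zero-copy clone: no data is physically copied, only micro-partition metadata.
    cur.execute(
        "CREATE OR REPLACE TABLE orders_restore "
        "CLONE orders AT(OFFSET => -3600)"  # table state 3600 seconds ago
    )
    # Metadata / system objects: recent query history for the current session's user.
    cur.execute(
        "SELECT query_id, total_elapsed_time "
        "FROM table(information_schema.query_history()) "
        "ORDER BY start_time DESC LIMIT 10"
    )
    for query_id, elapsed_ms in cur.fetchall():
        print(query_id, elapsed_ms)
finally:
    cur.close()
    conn.close()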
Posted 2 weeks ago
2.0 - 5.0 years
5 - 9 Lacs
Gurugram
Work from Office
Educational: Bachelor of Engineering. Service Line: Data & Analytics Unit.
Responsibilities: A day in the life of an Infoscion. As part of the Infosys delivery team, your primary role would be to interface with the client for quality assurance, issue resolution, and ensuring high customer satisfaction. You will understand requirements, create and review designs, validate the architecture, and ensure high levels of service offerings to clients in the technology domain. You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, and perform code reviews and unit test plan reviews. You will lead and guide your teams towards developing optimized, high-quality code deliverables, continual knowledge management, and adherence to organizational guidelines and processes. You would be a key contributor to building efficient programs/systems, and if you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!
Technical and Professional: Primary skills: Technology-Machine Learning-Python.
Preferred Skills: Technology-Machine Learning-Python
Posted 2 weeks ago
5.0 - 7.0 years
20 - 25 Lacs
Hyderabad, Bengaluru, Greater Noida
Work from Office
Develop, implement, and optimize end-to-end data pipelines on the Snowflake platform. Design and maintain ETL workflows to enable seamless data processing across systems.
Data Transformation with PySpark: Leverage PySpark for data transformations within the Snowflake environment. Implement complex data cleansing, enrichment, and validation processes using PySpark to ensure the highest data quality.
Collaboration: Work closely with cross-functional teams to design data solutions aligned with business requirements. Engage with stakeholders to understand business needs and translate them into technical solutions.
Optimization: Continuously monitor and optimize data storage, processing, and retrieval performance in Snowflake. Leverage Snowflake's capabilities for scalable data storage and processing to ensure efficient performance.
Preferred candidate profile: 5 to 7 years of experience as a Data Engineer, with a strong emphasis on Snowflake. Proven experience in designing, implementing, and optimizing data warehouses on the Snowflake platform. Expertise in PySpark for data processing and analytics.
Posted 2 weeks ago
10.0 - 13.0 years
30 - 32 Lacs
Bengaluru
Remote
Role & responsibilities Title: Team Lead Data Integration & Management Job Overview: The Team Lead – Data Integration & Management is responsible for designing and implementing complex database architectures to support enterprise applications, creating and maintaining data models, schemas, and documentation, optimizing SQL queries and database performance, developing database security standards and implementing data protection measures, collaborating with development teams to ensure proper database integration, providing technical leadership and mentorship to junior database professionals, establishing data governance procedures and best practices, troubleshooting and resolving complex data issues, and evaluating and recommending database technologies and tools. Employer Name: Bridgetree Experience Required: 10 - 13 years Must Haves: - 10+ years of experience with SQL and relational database management systems - Expert knowledge of SQL Server, PostgreSQL, or other major RDBMS - Strong understanding of database design principles and normalization - Experience with database performance tuning and query optimization - Proficiency in writing complex SQL queries, stored procedures, and functions - Knowledge of data warehousing concepts and ETL processes - Knowledge on any ETL process (SSIS, Informatica, RPDM) - Strong analytical and problem-solving abilities - Excellent communication and documentation skills - Ability to translate business requirements into technical specifications - Experience working in Agile development environments - Experience working in high demand projects Overall Skills Needed: - SQL - Relational database management systems - Database design and normalization - Database performance tuning and query optimization - SQL queries, stored procedures, and functions - Data warehousing concepts and ETL processes - ETL tools (SSIS, Informatica, RPDM) - Analytical and problem-solving skills - Communication and documentation skills - Translating business requirements into technical specifications - Agile development experience - High-demand project experience Key Responsibilities: - Design and implement complex database architectures to support enterprise applications - Create and maintain data models, schemas, and documentation - Optimize SQL queries and database performance - Develop database security standards and implement data protection measures - Collaborate with development teams to ensure proper database integration - Provide technical leadership and mentorship to junior database professionals - Establish data governance procedures and best practices - Troubleshoot and resolve complex data issues - Evaluate and recommend database technologies and tools Experience with Volume Scale of Operations: High-demand project Additional Position Requirements: - PostgreSQL (Preferred): Basic working knowledge is desirable. Not mandatory but would be an added advantage. - Strong Data Warehousing Concepts: A solid understanding of data warehouse architecture is essential. - ETL Tool Experience: Proficiency in at least one ETL tool is a must — preferably SSIS, Snowflake, or Redpoint Data Management (RPDM). - Work Flexibility (EST Overlap): As this role supports a high-demand project, the candidate should be comfortable with working under pressure and extending their availability to match EST hours as needed. Recruitment Process: The recruitment process will consist of 2 technical rounds. 
Probation Period: 3 months Engagement Type: full_time Job Type: remote Employer Industry: Marketing and Advertising Employer Website: https://bridgetree.com/ Employer Description: Bridgetree is a marketing analytics company founded in 1995 and headquartered in Fort Mill, South Carolina. It offers services that deliver actionable insights and develop enterprise planning to improve efficiency, accuracy, and speed of marketing planning and execution processes. With over 28 years of success, Bridgetree builds bridges between data and customer engagement to deliver meaningful marketing outcomes. Preferred candidate profile
Posted 2 weeks ago
6.0 - 10.0 years
40 - 45 Lacs
Gurugram
Work from Office
About the Role
We're looking for a skilled Site Reliability Engineer (SRE) with a strong foundation in Java or Python development, infrastructure automation, and application monitoring. You'll be embedded within engineering teams to drive reliability, scalability, and performance across our systems. If you have a product-first mindset, enjoy solving real-world problems at scale, and love diving into code and systems alike, we'd love to talk to you.
What You'll Work On
Enhancing service reliability and availability by implementing robust SLI/SLO-based monitoring and alerting systems. Collaborating with developers to optimize service performance and reliability in Java/Spring Boot applications. Building infrastructure as code with Terraform and automating provisioning pipelines. Conducting chaos testing, capacity planning, and failure analysis. Working with cloud-native observability stacks (e.g., CloudWatch, Prometheus, Victoria Metrics). Reporting with Snowflake and Sigma for operational insights. Supporting scalable and resilient database operations across RDS and NoSQL systems.
What We're Looking For
6-10 years of experience. Strong backend coding skills in Java (preferred) or Python (not just scripting). Experience with monitoring tools: CloudWatch, Prometheus, Victoria Metrics. Familiarity with Snowflake and Sigma reporting (preferred). Terraform experience for IaC. Strong database skills: RDS and any major NoSQL platform. Deep understanding of SLIs/SLOs, alerting, capacity planning, and chaos testing. Application/service-oriented mindset, aligned with an embedded SRE approach.
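As a rough sketch of the SLI instrumentation this role describes, using the Python prometheus_client library; the metric names and the simulated request handler are assumptions rather than this team's actual services.

# Sketch: expose request-level SLIs (throughput, errors, latency) for SLO-based alerting.
# Metric names and the simulated handler are illustrative.
import random
import time
from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("checkout_requests_total", "Requests served", ["status"])
LATENCY = Histogram("checkout_request_seconds", "Request latency in seconds")

def handle_request():
    # Stand-in for real application work.
    with LATENCY.time():
        time.sleep(random.uniform(0.01, 0.2))
    status = "500" if random.random() < 0.01 else "200"
    REQUESTS.labels(status=status).inc()

if __name__ == "__main__":
    start_http_server(8000)  # Prometheus scrapes /metrics on this port
    while True:
        handle_request()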
Posted 2 weeks ago
3.0 - 8.0 years
0 - 1 Lacs
Bengaluru
Hybrid
Role & responsibilities
Strong SQL proficiency: Expert knowledge of SQL syntax, query optimization techniques, and data manipulation.
Snowflake platform expertise: In-depth knowledge of Snowflake features including stored procedures, SnowSQL, Snowpipe, stages, data sharing, governance, and security configurations.
Data warehousing concepts: Understanding of data warehousing principles, data pipelines, and ETL; working with an existing CI/CD pipeline to deploy.
Security and compliance: Implement data security measures within Snowflake.
Experience with BI tools like SAP BOBJ and Looker.
Posted 2 weeks ago
6.0 - 11.0 years
8 - 12 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
We are seeking a skilled Lead Data Engineer with extensive experience in Snowflake, ADF, SQL, and other relevant data technologies to join our team. As a key member of our data engineering team, you will play an instrumental role in designing, developing, and managing data pipelines, working closely with cross-functional teams to drive the success of our data initiatives. Key Responsibilities: Design, implement, and maintain data solutions using Snowflake, ADF, and SQL Server to ensure data integrity, scalability, and high performance. Lead and contribute to the development of data pipelines, ETL processes, and data integration solutions, ensuring the smooth extraction, transformation, and loading of data from diverse sources. Work with MSBI, SSIS, and Azure Data Lake Storage to optimize data flows and storage solutions. Collaborate with business and technical teams to identify project needs, estimate tasks, and set intermediate milestones to achieve final outcomes. Implement industry best practices related to Business Intelligence and Data Management, ensuring adherence to usability, design, and development standards. Perform in-depth data analysis to resolve data issues and improve overall data quality. Mentor and guide junior data engineers, providing technical expertise and supporting the development of their skills. Effectively collaborate with geographically distributed teams to ensure project goals are met in a timely manner. Required Technical Skills: T-SQL, SQL Server, MSBI (SQL Server Integration Services, Reporting Services), Snowflake, Azure Data Factory (ADF), SSIS, Azure Data Lake Storage. Proficient in designing and developing data pipelines, data integration, and data management workflows. Strong understanding of Cloud Data Solutions, with a focus on Azure-based tools and technologies. Nice to Have: Experience with Power BI for data visualization and reporting. Familiarity with Azure Databricks for data processing and advanced analytics.
Posted 2 weeks ago
7.0 - 12.0 years
16 - 31 Lacs
Greater Noida
Work from Office
Role: Snowflake Data Engineer
Location: Greater Noida
Note: 5 days work from office
8+ years of total experience, with DBT design skills. Good experience in Snowflake, AWS, and Airflow. Good communication skills. Good client interaction skills. Knowledge of Agile methodology.
Posted 2 weeks ago
5.0 - 10.0 years
20 - 32 Lacs
Bengaluru
Work from Office
SUMMARY OF ROLE
As a Data Engineer, you'll focus on analyzing our centralized financial data and implementing machine learning models that drive actionable insights. While maintaining data infrastructure is part of the role, your primary focus will be on extracting value from data through analysis and building ML models that enhance our financial intelligence platform. You'll bridge the gap between data architecture and practical applications, turning complex financial data into predictive models and analytical tools.
JOB RESPONSIBILITIES
• Analyze financial datasets to identify patterns, trends, and relationships that can drive machine learning applications • Design and implement ML models for financial forecasting, anomaly detection, and predictive analytics • Transform raw financial data into structured formats optimized for analysis and model training • Perform exploratory data analysis to uncover insights and opportunities for new analytical products • Develop and optimize data pipelines in Snowflake to support analytical workloads and ML model training • Create and maintain data models that enable effective cross-client benchmarking and comparative analytics • Implement data validation processes to ensure data quality for analysis and model training • Collaborate with product teams to translate business requirements into technical solutions • Document methodologies, analyses, and modeling approaches • Monitor and improve model performance through continuous evaluation and refinement
QUALIFICATIONS
• Bachelor's or Master's degree in Computer Science, Data Science, Statistics, or a related field • 5+ years of experience in data engineering with a strong focus on data analysis and machine learning implementation • Proven experience analyzing complex datasets and implementing ML models based on findings • Expert-level proficiency in Python and its data analysis/ML libraries (pandas, NumPy, scikit-learn, TensorFlow/PyTorch) • Strong SQL skills and experience with Snowflake or similar cloud data warehouse technologies • Experience with ETL/ELT processes and data pipeline development • Familiarity with data lake architectures and best practices • Experience in implementation of a modern data warehouse with Star Schema, facts and dimensions, Snowflake Schema, and Data Vault • Proven expertise in building scalable, auditable, state-of-the-art solutions using a modern data stack • Understanding of statistical analysis techniques and their application to financial data • Experience implementing and optimizing machine learning models for production environments • Ability to communicate complex technical concepts to non-technical stakeholders
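A minimal sketch, not the team's actual model, of the anomaly-detection workflow described above: financial entries scored with scikit-learn's IsolationForest; the example frame and feature columns are placeholders for data that would normally come from the Snowflake warehouse.

# Sketch: flag anomalous journal entries with an Isolation Forest.
# The input DataFrame and feature columns are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import IsolationForest

# In practice this frame would be queried from the warehouse rather than hard-coded.
entries = pd.DataFrame({
    "amount": [120.0, 95.5, 130.2, 110.0, 25000.0, 101.3],
    "line_count": [2, 2, 3, 2, 40, 2],
})

model = IsolationForest(contamination=0.05, random_state=42)
model.fit(entries[["amount", "line_count"]])

# predict() returns -1 for observations the model considers anomalous, 1 for inliers.
entries["anomaly"] = model.predict(entries[["amount", "line_count"]])
print(entries[entries["anomaly"] == -1])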
Posted 2 weeks ago
8.0 - 13.0 years
20 - 30 Lacs
Chennai
Hybrid
Role: Power BI Architect
Experience: 10+ Years
Location: Chennai (willing to relocate; Tamil Nadu region is fine)
Looking for immediate joiners only.
Job Description: Bachelor's degree in computer science, information systems, or a related field (or equivalent experience). At least 7+ years of proven experience in developing Power BI solutions, including data modeling and ETL processes. Designed or architected solutions using Power BI connected to Snowflake or a data lake. Experience with performance tuning, data modeling, and DAX optimization in that context. Exposure to enterprise-level reporting, preferably with large datasets and cloud data platforms. Strong proficiency in DAX and Power Query. Experience with SQL and relational databases. Understanding of data warehousing and dimensional modeling concepts. Experience with data integration tools and techniques. Strong problem-solving and analytical skills. Excellent communication and collaboration skills. Ability to work independently and as part of a team. Experience with Azure services (e.g., Azure Data Factory, Azure SQL Database, Azure Databricks) is a plus. Experience with version control systems (e.g., Git) is a plus.
Posted 2 weeks ago
5.0 - 10.0 years
7 - 17 Lacs
Pune
Hybrid
Senior Data engineer: At Acxiom, our vision is to transform data into value for everyone. Our data products and analytical services enable marketers to recognize, better understand, and then deliver highly applicable messages to consumers across any available channel. Our solutions enable true people-based marketing with identity resolution and rich descriptive and predictive audience segmentation. We are seeking an experienced Data Engineer with a versatile skill set to undertake data engineering efforts to build the next-generation ML infrastructure for Acxioms business. As part of the Data Science and Analytics Team, the Sr. Data engineer will partner with Data Scientists and work hands-on with Big Data technologies and build a scalable infrastructure to support development of machine learning based Audience Propensity models and solutions for our domestic and global businesses. The Sr. Data engineer’s responsibilities include collaborating with internal and external stakeholders to identify data ingestion, processing, ETL, data warehousing requirements and develop appropriate solutions using modern data engineering tools in cloud. We want this person to help us build a scalable data lake and EDW using modern tech stack from the ground up. Success in this role comes from combining a strong data engineering background with product and business acumen to deliver scalable data pipeline and database solutions that can enable & support a high performant, large scale modeling infrastructure at Acxiom. The Sr. Data Engineer will be a champion of the latest Cloud database technologies & data engineering tools and will lead by example in influencing adoption and migration to the new stack. What you will do: Partner with ML Architects and data scientists to drive POCs to build a scalable, next generation model development, model management and governance infrastructure in Cloud Be a thought leader and champion for adoption of new cloud-based database technologies and enable migration to new cloud-based modeling stack Collaborate with other data scientists and team leads to define project requirements & build the next generation data source ingestion, ETL, data pipelining, data warehousing solutions in Cloud Build data-engineering solutions by developing strong understanding of business and product data needs Manage environment security permissions and enforce role based compliance Build expert knowledge of the various data sources brought together for audience propensities solutions – survey/panel data, 3rd-party data (demographics, psychographics, lifestyle segments), media content activity (TV, Digital, Mobile), and product purchase or transaction data and develop solutions for seamless ingestion and process of the data Resolve defects/bugs during QA testing, pre-production, production, and post-release patches Contribute to the design and architecture of services across the data landscape Participation in development of the integration team, contributing to reviews of methodologies, standards, and processes Contribute to comprehensive internal documentation of designs and service components Required Skills: Background in data pipelining, warehousing, ETL development solutions for data science and other Big Data applications Experience with distributed, columnar and/or analytic oriented databases or distributed data processing frameworks Minimum of 4 years of experience with Cloud databases –Snowflake, Azure SQL database, AWS Redshift, Google Cloud SQL or similar. 
Experience with NoSQL databases such as mongoDB, Cassandra or similar is nice to have Snowflake and/or Databricks certification preferred Minimum of 3 years of experience in developing data ingestion, data processing and analytical pipelines for big data, relational databases, data lake and data warehouse solutions Minimum of 3 years of hands-on experience in Big Data technologies such as Hadoop, Spark, PySpark, Spark/SparkSQL, Hive, Pig, Oozie and streaming technologies such as Kafka, Spark Streaming Ingestion API, Unix shell/Perl scripting etc. Strong programming skills using Java, Python, PySpark, Scala or similar Experience with public cloud architectures, pros/cons, and migration considerations Experience with container-based application deployment frameworks (Kubernetes, Docker, ECS/EKS or similar) Experience with Data Visualization tools such as Tableau, Looker or similar Outstanding troubleshooting, attention to detail, and communication skills (verbal/written) in a fast paced setting Bachelor's Degree in Computer Science or relevant discipline or 7+ years of relevant work experience. Solid communication skills: Demonstrate ability to explain complex technical issues to technical and non- technical audiences. Strong understanding of the software design/architecture process Experience with unit testing and data quality checks Building Infrastructure-as-code for Public Cloud using Terraform Experience in a Dev Ops engineering or equivalent role. Experience developing, enhancing, and maintaining CI/CD automation and configuration management using tools such as Jenkins, Snyk, and GitHub What will set you apart: Preferred Skills: Ability to work in white space and be able to develop solutions independently. Experience building ETL pipelines with health claims data will be a plus Prior experience with Cloud based ETL tools such as AWS Glue, AWS Data pipeline or similar Experience with building real-time and streaming data pipelines a plus Experience with MLOps tools such as Apache MLFlow/KubeFlow is a plus Exposure to E2E ML platform such as AWS Sagemaker, Azure ML studio, Google AI/ML, Datarobot, Databricks or similar a plus Experience with ingestion, processing and management of 3rd party data
Posted 2 weeks ago
14.0 - 24.0 years
30 - 40 Lacs
Pune, Bengaluru
Hybrid
Job Brief: Data Engineering PM
Overview: We are looking for a strong and dynamic Data Engineering Project Manager with solid experience in production support, people management in a support environment, and mid- to senior-level experience. This is an exciting opportunity to be a part of our transformation program, migrating current functionality (including SSAS) from our Microsoft Data Warehouse to our Cloud Data Platform, Snowflake.
Responsibilities: Oversee the entire Data Management support function. Responsible for strategy, planning, resourcing, and stakeholder management. Point of contact for escalations and cross-functional coordination. Client relationship management, shift management, and escalation handling. Works with architecture teams. Technical discussions with cross-functional teams. Technical leadership; guide and suggest best practices. Proactive issue resolution.
Requirements and Experience: Technical expertise in AWS Data Services, Python, Scala, and SQL. Manage data pipeline maintenance and resolve production issues/tickets. Data pipeline monitoring and alerting. Strong knowledge of, or ability to rapidly adopt, our core languages for data engineering: Python, SQL, and Terraform. Knowledge of analytics platforms like Snowflake, data transformation tools like dbt, Scala, AWS Lambda, and Fivetran. A good understanding of CI/CD and experience with one of the CI/CD tools: Azure DevOps, GitHub, GitLab, or Jenkins. Sufficient familiarity with SQL Server, SSIS, and SSAS to facilitate understanding of the current system. Strong knowledge of the ITIL/ITSM framework.
Location and Duration: The location will be offshore (India), primarily Bangalore or Pune.
Posted 2 weeks ago
5.0 - 9.0 years
10 - 20 Lacs
Pune, Chennai, Bengaluru
Work from Office
Key Responsibilities
Design, build, and maintain robust, scalable, and efficient ETL/ELT pipelines. Implement data ingestion processes using Fivetran and integrate various structured and unstructured data sources into GCP-based environments. Develop data models and transformation workflows using DBT and manage version-controlled pipelines. Build and manage data storage solutions using Snowflake, optimizing for cost, performance, and scalability. Orchestrate workflows and pipeline dependencies using Apache Airflow. Design and support Data Lake architecture for raw and curated data zones. Collaborate with Data Analysts, Scientists, and Product teams to ensure availability and quality of data. Monitor data pipeline performance, ensure data integrity, and handle error recovery mechanisms. Follow best practices in CI/CD, testing, data governance, and security standards.
Required Skills
5-7 years of professional experience in data engineering roles. Hands-on experience with GCP services: BigQuery, Cloud Storage, Pub/Sub, Dataflow, Composer, etc. Proficient in writing modular SQL transformations and data modeling using DBT. Deep understanding of Snowflake warehousing: performance tuning, cost optimization, and security. Experience with Airflow for pipeline orchestration and DAG management. Familiarity with designing and implementing Data Lake solutions. Proficient in Python and/or SQL.
Send profiles to payal.kumari@nam-it.com
Regards,
Payal Kumari
Senior Executive Staffing
NAM Info Pvt Ltd, 29/2B-01, 1st Floor, K.R. Road, Banashankari 2nd Stage, Bangalore - 560070.
Email: payal.kumari@nam-it.com | Website: www.nam-it.com
USA | CANADA | INDIA
Posted 2 weeks ago
6.0 - 10.0 years
11 - 19 Lacs
Noida
Hybrid
QA Automation Engineer
As a Senior QA Automation Engineer specializing in Data Warehousing, you will play a critical role in ensuring that our data solutions are of the highest quality. You will work closely with data engineers and analysts to develop, implement, and maintain automated testing frameworks for data validation, ETL processes, data quality, and integration. Your work will ensure that data is accurate, consistent, and performs optimally across our data warehouse systems.
Responsibilities
Develop and Implement Automation Frameworks: Design, build, and maintain scalable test automation frameworks tailored for data warehousing environments.
Test Strategy and Execution: Define and execute automated test strategies for ETL processes, data pipelines, and database integration across a variety of data sources.
Data Validation: Implement automated tests to validate data consistency, accuracy, completeness, and transformation logic.
Performance Testing: Ensure that the data warehouse systems meet performance benchmarks through automation tools and load testing strategies.
Collaborate with Teams: Work closely with data engineers, software developers, and data analysts to understand business requirements and design tests accordingly.
Continuous Integration: Integrate automated tests into the CI/CD pipelines, ensuring that testing is part of the deployment process.
Defect Tracking and Reporting: Use defect-tracking tools (e.g., JIRA) to log and track issues found during automated testing, ensuring that defects are resolved in a timely manner.
Test Data Management: Develop strategies for handling large volumes of test data while maintaining data security and privacy.
Tool and Technology Evaluation: Stay current with emerging trends in automation testing for data warehousing and recommend tools, frameworks, and best practices.
Job Qualifications: Requirements and skills
• At least 6+ years of experience • Solid understanding of data warehousing concepts (ETL, OLAP, data marts, data vault, star/snowflake schemas, etc.) • Proven experience in building and maintaining automation frameworks using tools like Python, Java, or similar, with a focus on database and ETL testing • Strong knowledge of SQL for writing complex queries to validate data, test data pipelines, and check transformations • Experience with ETL tools (e.g., Matillion, Qlik Replicate) and their testing processes • Performance testing • Experience with version control systems like Git • Strong analytical and problem-solving skills, with the ability to troubleshoot complex data issues • Strong communication and collaboration skills • Attention to detail and a passion for delivering high-quality solutions • Ability to work in a fast-paced environment and manage multiple priorities • Enthusiastic about learning new technologies and frameworks
Experience with the following tools and technologies is desired: Qlik Replicate, Matillion ETL, Snowflake, Data Vault warehouse design, Power BI, Azure Cloud (including Logic Apps, Azure Functions, ADF).
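Purely as an illustration of the automated data-validation tests described above, a small pytest sketch that checks row counts and a key total between a staging source and its warehouse target; sqlite3 stands in here for the real source and target connections, and the table names are assumptions.

# Sketch: pytest checks that an ETL load preserved row counts and a key total.
# sqlite3 stands in for the real source and warehouse connections in this example.
import sqlite3
import pytest

@pytest.fixture
def source_conn():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
    conn.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 10.0), (2, 20.5)])
    return conn

@pytest.fixture
def target_conn():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE fact_orders (id INTEGER, amount REAL)")
    conn.executemany("INSERT INTO fact_orders VALUES (?, ?)", [(1, 10.0), (2, 20.5)])
    return conn

def scalar(conn, sql):
    # Execute a query expected to return a single value.
    return conn.execute(sql).fetchone()[0]

def test_row_counts_match(source_conn, target_conn):
    assert scalar(source_conn, "SELECT COUNT(*) FROM orders") == \
           scalar(target_conn, "SELECT COUNT(*) FROM fact_orders")

def test_amount_total_matches(source_conn, target_conn):
    assert scalar(source_conn, "SELECT SUM(amount) FROM orders") == \
           pytest.approx(scalar(target_conn, "SELECT SUM(amount) FROM fact_orders"))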
Posted 2 weeks ago