5.0 - 10.0 years
20 - 32 Lacs
Bengaluru
Work from Office
SUMMARY OF ROLE
As a Data Engineer, you'll focus on analyzing our centralized financial data and implementing machine learning models that drive actionable insights. While maintaining data infrastructure is part of the role, your primary focus will be on extracting value from data through analysis and building ML models that enhance our financial intelligence platform. You'll bridge the gap between data architecture and practical applications, turning complex financial data into predictive models and analytical tools.
JOB RESPONSIBILITIES
• Analyze financial datasets to identify patterns, trends, and relationships that can drive machine learning applications
• Design and implement ML models for financial forecasting, anomaly detection, and predictive analytics
• Transform raw financial data into structured formats optimized for analysis and model training
• Perform exploratory data analysis to uncover insights and opportunities for new analytical products
• Develop and optimize data pipelines in Snowflake to support analytical workloads and ML model training
• Create and maintain data models that enable effective cross-client benchmarking and comparative analytics
• Implement data validation processes to ensure data quality for analysis and model training
• Collaborate with product teams to translate business requirements into technical solutions
• Document methodologies, analyses, and modeling approaches
• Monitor and improve model performance through continuous evaluation and refinement
QUALIFICATIONS
• Bachelor's or Master's degree in Computer Science, Data Science, Statistics, or a related field
• 5+ years of experience in data engineering with a strong focus on data analysis and machine learning implementation
• Proven experience analyzing complex datasets and implementing ML models based on findings
• Expert-level proficiency in Python and its data analysis/ML libraries (pandas, NumPy, scikit-learn, TensorFlow/PyTorch)
• Strong SQL skills and experience with Snowflake or similar cloud data warehouse technologies
• Experience with ETL/ELT processes and data pipeline development
• Familiarity with data lake architectures and best practices
• Experience implementing modern data warehouses with Star Schema, facts and dimensions, Snowflake Schema, and Data Vault
• Proven expertise in building scalable, auditable, state-of-the-art solutions using the modern data stack
• Understanding of statistical analysis techniques and their application to financial data
• Experience implementing and optimizing machine learning models for production environments
• Ability to communicate complex technical concepts to non-technical stakeholders
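For context on the anomaly-detection work this posting describes, a task like this is typically prototyped with pandas and scikit-learn on tabular financial data. A minimal, hypothetical sketch follows; the input file and column names are illustrative assumptions, not details from the posting:

```python
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical input: one row per transaction; column names are assumptions
df = pd.read_csv("transactions.csv", parse_dates=["posted_at"])

features = df[["amount", "daily_txn_count", "merchant_risk_score"]].fillna(0)

# Isolation Forest flags observations that are easy to isolate, i.e. likely anomalies
model = IsolationForest(n_estimators=200, contamination=0.01, random_state=42)
df["anomaly_flag"] = model.fit_predict(features)  # -1 = anomaly, 1 = normal

print(df.loc[df["anomaly_flag"] == -1, ["posted_at", "amount"]].head())
```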
Posted 6 days ago
8.0 - 13.0 years
20 - 30 Lacs
Chennai
Hybrid
Role: Power BI Architect
Experience: 10+ Years
Location: Chennai (willing to relocate - Tamil Nadu region is fine)
Looking for immediate joiners only.
Job Description:
Bachelor's degree in computer science, information systems, or a related field (or equivalent experience).
At least 7+ years of proven experience in developing Power BI solutions, including data modeling and ETL processes.
Designed or architected solutions using Power BI connected to Snowflake or a data lake.
Experience with performance tuning, data modeling, and DAX optimization in that context.
Exposure to enterprise-level reporting, preferably with large datasets and cloud data platforms.
Strong proficiency in DAX and Power Query.
Experience with SQL and relational databases.
Understanding of data warehousing and dimensional modeling concepts.
Experience with data integration tools and techniques.
Strong problem-solving and analytical skills.
Excellent communication and collaboration skills.
Ability to work independently and as part of a team.
Experience with Azure services (e.g., Azure Data Factory, Azure SQL Database, Azure Databricks) is a plus.
Experience with version control systems (e.g., Git) is a plus.
Posted 6 days ago
5.0 - 10.0 years
7 - 17 Lacs
Pune
Hybrid
Senior Data Engineer: At Acxiom, our vision is to transform data into value for everyone. Our data products and analytical services enable marketers to recognize, better understand, and then deliver highly applicable messages to consumers across any available channel. Our solutions enable true people-based marketing with identity resolution and rich descriptive and predictive audience segmentation. We are seeking an experienced Data Engineer with a versatile skill set to undertake data engineering efforts to build the next-generation ML infrastructure for Acxiom's business. As part of the Data Science and Analytics Team, the Sr. Data Engineer will partner with Data Scientists, work hands-on with Big Data technologies, and build a scalable infrastructure to support development of machine learning based Audience Propensity models and solutions for our domestic and global businesses. The Sr. Data Engineer's responsibilities include collaborating with internal and external stakeholders to identify data ingestion, processing, ETL, and data warehousing requirements and developing appropriate solutions using modern data engineering tools in the cloud. We want this person to help us build a scalable data lake and EDW using a modern tech stack from the ground up. Success in this role comes from combining a strong data engineering background with product and business acumen to deliver scalable data pipeline and database solutions that can enable and support a high-performance, large-scale modeling infrastructure at Acxiom. The Sr. Data Engineer will be a champion of the latest cloud database technologies and data engineering tools and will lead by example in influencing adoption and migration to the new stack.
What you will do:
Partner with ML Architects and data scientists to drive POCs to build a scalable, next-generation model development, model management, and governance infrastructure in the cloud
Be a thought leader and champion for adoption of new cloud-based database technologies and enable migration to the new cloud-based modeling stack
Collaborate with other data scientists and team leads to define project requirements and build the next-generation data source ingestion, ETL, data pipelining, and data warehousing solutions in the cloud
Build data-engineering solutions by developing a strong understanding of business and product data needs
Manage environment security permissions and enforce role-based compliance
Build expert knowledge of the various data sources brought together for audience propensity solutions – survey/panel data, 3rd-party data (demographics, psychographics, lifestyle segments), media content activity (TV, Digital, Mobile), and product purchase or transaction data – and develop solutions for seamless ingestion and processing of the data
Resolve defects/bugs during QA testing, pre-production, production, and post-release patches
Contribute to the design and architecture of services across the data landscape
Participate in development of the integration team, contributing to reviews of methodologies, standards, and processes
Contribute to comprehensive internal documentation of designs and service components
Required Skills:
Background in data pipelining, warehousing, and ETL development solutions for data science and other Big Data applications
Experience with distributed, columnar, and/or analytics-oriented databases or distributed data processing frameworks
Minimum of 4 years of experience with cloud databases – Snowflake, Azure SQL Database, AWS Redshift, Google Cloud SQL, or similar.
Experience with NoSQL databases such as MongoDB, Cassandra, or similar is nice to have
Snowflake and/or Databricks certification preferred
Minimum of 3 years of experience in developing data ingestion, data processing, and analytical pipelines for big data, relational database, data lake, and data warehouse solutions
Minimum of 3 years of hands-on experience in Big Data technologies such as Hadoop, Spark, PySpark, Spark/SparkSQL, Hive, Pig, Oozie, and streaming technologies such as Kafka, Spark Streaming Ingestion API, Unix shell/Perl scripting, etc.
Strong programming skills using Java, Python, PySpark, Scala, or similar
Experience with public cloud architectures, pros/cons, and migration considerations
Experience with container-based application deployment frameworks (Kubernetes, Docker, ECS/EKS, or similar)
Experience with data visualization tools such as Tableau, Looker, or similar
Outstanding troubleshooting, attention to detail, and communication skills (verbal/written) in a fast-paced setting
Bachelor's Degree in Computer Science or a relevant discipline, or 7+ years of relevant work experience
Solid communication skills: demonstrated ability to explain complex technical issues to technical and non-technical audiences
Strong understanding of the software design/architecture process
Experience with unit testing and data quality checks
Building Infrastructure-as-Code for public cloud using Terraform
Experience in a DevOps engineering or equivalent role
Experience developing, enhancing, and maintaining CI/CD automation and configuration management using tools such as Jenkins, Snyk, and GitHub
What will set you apart (Preferred Skills):
Ability to work in white space and develop solutions independently
Experience building ETL pipelines with health claims data will be a plus
Prior experience with cloud-based ETL tools such as AWS Glue, AWS Data Pipeline, or similar
Experience building real-time and streaming data pipelines is a plus
Experience with MLOps tools such as Apache MLflow/Kubeflow is a plus
Exposure to E2E ML platforms such as AWS SageMaker, Azure ML Studio, Google AI/ML, DataRobot, Databricks, or similar is a plus
Experience with ingestion, processing, and management of 3rd-party data
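As a point of reference for the Kafka and Spark Streaming skills listed above, here is a minimal PySpark Structured Streaming sketch; the broker address, topic name, and event schema are illustrative assumptions, and the Spark-Kafka connector package is assumed to be on the cluster.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import StructType, StringType, DoubleType

spark = SparkSession.builder.appName("audience-events-stream").getOrCreate()

# Assumed event schema; real payloads would be defined by the upstream producer
schema = (StructType()
          .add("user_id", StringType())
          .add("channel", StringType())
          .add("spend", DoubleType()))

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")   # assumed broker
          .option("subscribe", "audience-events")             # assumed topic
          .load()
          .select(from_json(col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

# Aggregate spend per channel and write to the console for demonstration
query = (events.groupBy("channel").sum("spend")
         .writeStream.outputMode("complete")
         .format("console")
         .start())
query.awaitTermination()
```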
Posted 6 days ago
14.0 - 24.0 years
30 - 40 Lacs
Pune, Bengaluru
Hybrid
Job Brief: Data Engineering PM
Overview
We are looking for a strong and dynamic Data Engineering Project Manager with strong experience in production support, people management skills in a support environment, and mid- to senior-level experience. This is an exciting opportunity to be part of our transformation program, managing the migration of current functionality (including SSAS) from our Microsoft Data Warehouse to our Cloud Data Platform, Snowflake.
Responsibilities
Oversee the entire Data Management support function
Responsible for strategy, planning, resourcing, and stakeholder management
Point of contact for escalations and cross-functional coordination
Client relationship management, shift management, and escalation handling
Work with architecture teams; hold technical discussions with cross-functional teams
Provide technical leadership, guidance, and best-practice recommendations
Proactive issue resolution
Requirements and Experience
Technical expertise in AWS Data Services, Python, Scala, SQL
Manage data pipeline maintenance; resolve production issues/tickets
Data pipeline monitoring and alerting
Strong knowledge of, or ability to rapidly adopt, our core languages for data engineering: Python, SQL, and Terraform
Knowledge of analytics platforms like Snowflake and data transformation tools like dbt, Scala, AWS Lambda, Fivetran
A good understanding of CI/CD and experience with one of the CI/CD tools – Azure DevOps, GitHub, GitLab, or Jenkins
Sufficient familiarity with SQL Server, SSIS, and SSAS to facilitate understanding of the current system
Strong knowledge of the ITIL/ITSM framework
Location and Duration
The location will be offshore (India), primarily Bangalore or Pune.
Posted 6 days ago
5.0 - 9.0 years
10 - 20 Lacs
Pune, Chennai, Bengaluru
Work from Office
Key Responsibilities
Design, build, and maintain robust, scalable, and efficient ETL/ELT pipelines.
Implement data ingestion processes using Fivetran and integrate various structured and unstructured data sources into GCP-based environments.
Develop data models and transformation workflows using DBT and manage version-controlled pipelines.
Build and manage data storage solutions using Snowflake, optimizing for cost, performance, and scalability.
Orchestrate workflows and pipeline dependencies using Apache Airflow.
Design and support Data Lake architecture for raw and curated data zones.
Collaborate with Data Analysts, Scientists, and Product teams to ensure availability and quality of data.
Monitor data pipeline performance, ensure data integrity, and handle error recovery mechanisms.
Follow best practices in CI/CD, testing, data governance, and security standards.
Required Skills
5 - 7 years of professional experience in data engineering roles.
Hands-on experience with GCP services: BigQuery, Cloud Storage, Pub/Sub, Dataflow, Composer, etc.
Proficient in writing modular SQL transformations and data modeling using DBT.
Deep understanding of Snowflake warehousing: performance tuning, cost optimization, security.
Experience with Airflow for pipeline orchestration and DAG management.
Familiarity with designing and implementing Data Lake solutions.
Proficient in Python and/or SQL.
Send profiles to payal.kumari@nam-it.com
Regards,
Payal Kumari
Senior Executive Staffing
NAM Info Pvt Ltd, 29/2B-01, 1st Floor, K.R. Road, Banashankari 2nd Stage, Bangalore - 560070.
Email: payal.kumari@nam-it.com
Website: www.nam-it.com
USA | CANADA | INDIA
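To give a feel for the Airflow-orchestrated DBT workflow this posting describes, here is a minimal DAG sketch; the project path, schedule, and dag_id are assumptions for illustration, not details from the posting.

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

# Assumed dbt project location inside the Composer/Airflow environment
DBT_DIR = "/opt/airflow/dbt/analytics"

with DAG(
    dag_id="daily_dbt_snowflake",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Run the transformations, then run dbt's built-in tests against the results
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command=f"dbt run --project-dir {DBT_DIR} --profiles-dir {DBT_DIR}",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command=f"dbt test --project-dir {DBT_DIR} --profiles-dir {DBT_DIR}",
    )
    dbt_run >> dbt_test
```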
Posted 6 days ago
6.0 - 10.0 years
11 - 19 Lacs
Noida
Hybrid
QA Automation Engineer
As a Senior QA Automation Engineer specializing in Data Warehousing, you will play a critical role in ensuring that our data solutions are of the highest quality. You will work closely with data engineers and analysts to develop, implement, and maintain automated testing frameworks for data validation, ETL processes, data quality, and integration. Your work will ensure that data is accurate, consistent, and performs optimally across our data warehouse systems.
Responsibilities
Develop and Implement Automation Frameworks: Design, build, and maintain scalable test automation frameworks tailored for data warehousing environments.
Test Strategy and Execution: Define and execute automated test strategies for ETL processes, data pipelines, and database integration across a variety of data sources.
Data Validation: Implement automated tests to validate data consistency, accuracy, completeness, and transformation logic.
Performance Testing: Ensure that the data warehouse systems meet performance benchmarks through automation tools and load testing strategies.
Collaborate with Teams: Work closely with data engineers, software developers, and data analysts to understand business requirements and design tests accordingly.
Continuous Integration: Integrate automated tests into the CI/CD pipelines, ensuring that testing is part of the deployment process.
Defect Tracking and Reporting: Use defect-tracking tools (e.g., JIRA) to log and track issues found during automated testing, ensuring that defects are resolved in a timely manner.
Test Data Management: Develop strategies for handling large volumes of test data while maintaining data security and privacy.
Tool and Technology Evaluation: Stay current with emerging trends in automation testing for data warehousing and recommend tools, frameworks, and best practices.
Requirements and Skills
• At least 6+ years of experience and a solid understanding of data warehousing concepts (ETL, OLAP, data marts, data vault, star/snowflake schemas, etc.).
• Proven experience in building and maintaining automation frameworks using tools like Python, Java, or similar, with a focus on database and ETL testing.
• Strong knowledge of SQL for writing complex queries to validate data, test data pipelines, and check transformations.
• Experience with ETL tools (e.g., Matillion, Qlik Replicate) and their testing processes.
• Performance testing experience.
• Experience with version control systems like Git.
• Strong analytical and problem-solving skills, with the ability to troubleshoot complex data issues.
• Strong communication and collaboration skills.
• Attention to detail and a passion for delivering high-quality solutions.
• Ability to work in a fast-paced environment and manage multiple priorities.
• Enthusiastic about learning new technologies and frameworks.
Experience with the following tools and technologies is desired: Qlik Replicate, Matillion ETL, Snowflake, Data Vault warehouse design, Power BI, Azure Cloud (including Logic Apps, Azure Functions, ADF).
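For a sense of the automated data-validation tests described above, here is a small pytest-style sketch against Snowflake; connection parameters, table names, and the checks themselves are illustrative assumptions only.

```python
import pytest
import snowflake.connector  # assumes the snowflake-connector-python package

# Connection parameters would normally come from a secrets manager; placeholders here
CONN_PARAMS = dict(account="xy12345", user="qa_bot", password="***",
                   warehouse="QA_WH", database="EDW", schema="SALES")

@pytest.fixture(scope="module")
def conn():
    con = snowflake.connector.connect(**CONN_PARAMS)
    yield con
    con.close()

def test_row_counts_match(conn):
    """Staging and target row counts should agree after the nightly load."""
    cur = conn.cursor()
    src = cur.execute("SELECT COUNT(*) FROM STG_ORDERS").fetchone()[0]
    tgt = cur.execute("SELECT COUNT(*) FROM FACT_ORDERS").fetchone()[0]
    assert src == tgt

def test_no_null_business_keys(conn):
    """Business keys must never be NULL in the target table."""
    cur = conn.cursor()
    nulls = cur.execute(
        "SELECT COUNT(*) FROM FACT_ORDERS WHERE ORDER_ID IS NULL").fetchone()[0]
    assert nulls == 0
```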
Posted 1 week ago
4.0 - 8.0 years
12 - 22 Lacs
Pune
Work from Office
Key Responsibilities
Oversight & Optimisation of Data Lakehouse & Architecture, Data Engineering & Pipelines
Understand lakehouse architectures that unify structured and semi-structured data at scale
Strong experience implementing and monitoring job scheduling and orchestration using Airflow, Azure Data Factory, and CI/CD triggers, and with Azure Dataflows, Databricks, and Delta Lake for real-time/batch processing
Managing schema evolution, data versioning (e.g., Delta Lake), and pipeline adaptability
Pipeline performance tuning for latency, resource usage, and throughput optimization
Cloud Infrastructure & Automation
Infra automation using Terraform, Azure Bicep, and AWS CDK
Setting up scalable cloud storage (Data Lake Gen2, S3, Blob, RDS, etc.)
Administering RBAC, secure key vault access, and compliance-driven access controls
Tuning infrastructure and services for cost efficiency and compute optimization
Full-Stack Cloud Data Platform Design
Designing end-to-end Azure/AWS data platforms including ingestion, transformation, storage, and serving layers
Interfacing with BI/AI teams to ensure data readiness, semantic modeling, and ML enablement
Familiarity with metadata management, lineage tracking, and data catalog integration
Enterprise Readiness & Delivery
Experience working with MNCs and large enterprises with strict processes, approvals, and data governance
Capable of evaluating alternative tools/services across clouds for architecture flexibility and cost-performance balance
Hands-on with CI/CD, monitoring, and security best practices in regulated environments (BFSI, Pharma, Manufacturing)
Lead cost-performance optimization across Azure and hybrid cloud environments
Design modular, scalable infrastructure using Terraform/CDK/Bicep with a DevSecOps mindset
Explore alternative cloud tools/services across compute, storage, identity, and monitoring to propose optimal solutions
Drive RBAC, approval workflows, and governance controls in line with typical enterprise/MNC deployment security protocols
Support BI/data teams with infra tuning, pipeline stability, and client demo readiness
Collaborate with client-side architects, procurement, and finance teams for approvals and architectural alignment
Ideal Profile
4-7 years of experience in cloud infrastructure and platform engineering
Strong hold on Microsoft Azure, with hands-on exposure to AWS/GCP/Snowflake acceptable
Skilled in IaC tools (Terraform, CDK), CI/CD, monitoring (Grafana, Prometheus), and cost optimization tools
Comfortable proposing innovative, multi-vendor architectures that balance cost, performance, and compliance
Prior experience working with large global clients or regulated environments (e.g., BFSI, Pharma, Manufacturing)
Preferred Certifications
Microsoft Azure Administrator / Architect (Associate/Expert)
AWS Solutions Architect / FinOps Certified
Bonus: Snowflake, DevOps Professional, or Data Platform certifications
Posted 1 week ago
5.0 - 8.0 years
10 - 15 Lacs
Bengaluru
Hybrid
Role: Snowflake Developer
Experience: 5 years - 8 years
Expert in Python, Snowflake, SQL, and DBT
Experience in Dagster or Airflow is a must
Should be able to grasp the landscape quickly to test and approve merge requests from Data Engineers
Data modelling and architecture-level knowledge is needed
Should be able to establish connectivity from different source systems like SAP Beeline to the existing setup and take ownership of it
Posted 1 week ago
8.0 - 11.0 years
20 - 35 Lacs
Bengaluru
Work from Office
• 8+ years of experience in designing and developing enterprise data solutions.
• 3+ years of hands-on experience with Snowflake.
• 3+ years of experience in Python development.
• Strong expertise in SQL and Python for data processing and transformation.
• Experience with Spark, Scala, and Python in production environments.
• Hands-on experience with data orchestration tools (e.g., Airflow, Informatica, Automic).
• Knowledge of metadata management and data lineage.
• Strong problem-solving skills with an ability to work in a fast-paced, agile environment.
• Excellent communication and collaboration skills.
Posted 1 week ago
7.0 - 12.0 years
9 - 14 Lacs
Bengaluru
Work from Office
Location: Bangalore/Hyderabad/Pune
Experience level: 7+ Years
About the Role
We are seeking a highly skilled Snowflake Developer to join our team in Bangalore. The ideal candidate will have extensive experience in designing, implementing, and managing Snowflake-based data solutions. This role involves developing data architectures and ensuring the effective use of Snowflake to drive business insights and innovation.
Key Responsibilities:
Design and implement scalable, efficient, and secure Snowflake solutions to meet business requirements.
Develop data architecture frameworks, standards, and principles, including modeling, metadata, security, and reference data.
Implement Snowflake-based data warehouses, data lakes, and data integration solutions.
Manage data ingestion, transformation, and loading processes to ensure data quality and performance.
Collaborate with business stakeholders and IT teams to develop data strategies and ensure alignment with business goals.
Drive continuous improvement by leveraging the latest Snowflake features and industry trends.
Qualifications:
Bachelor's or Master's degree in Computer Science, Information Technology, Data Science, or a related field.
8+ years of experience in data architecture, data engineering, or a related field.
Extensive experience with Snowflake, including designing and implementing Snowflake-based solutions.
Must have exposure to working with Airflow.
Proven track record of contributing to data projects and working in complex environments.
Familiarity with cloud platforms (e.g., AWS, GCP) and their data services.
Snowflake certification (e.g., SnowPro Core, SnowPro Advanced) is a plus.
Posted 1 week ago
8.0 - 10.0 years
10 - 12 Lacs
Bengaluru
Work from Office
Location: Bangalore/Hyderabad/Pune
Experience level: 8+ Years
About the Role
We are looking for a technical and hands-on Lead Data Engineer to help drive the modernization of our data transformation workflows. We currently rely on legacy SQL scripts orchestrated via Airflow, and we are transitioning to a modular, scalable, CI/CD-driven DBT-based data platform. The ideal candidate has deep experience with DBT and modern data stack design, and has previously led similar migrations improving code quality, lineage visibility, performance, and engineering best practices.
Key Responsibilities
Lead the migration of legacy SQL-based ETL logic to DBT-based transformations
Design and implement a scalable, modular DBT architecture (models, macros, packages)
Audit and refactor legacy SQL for clarity, efficiency, and modularity
Improve CI/CD pipelines for DBT: automated testing, deployment, and code quality enforcement
Collaborate with data analysts, platform engineers, and business stakeholders to understand current gaps and define future data pipelines
Own Airflow orchestration redesign where needed (e.g., DBT Cloud/API hooks or airflow-dbt integration)
Define and enforce coding standards, review processes, and documentation practices
Coach junior data engineers on DBT and SQL best practices
Provide lineage and impact analysis improvements using DBT's built-in tools and metadata
Must-Have Qualifications
8+ years of experience in data engineering
Proven success in migrating legacy SQL to DBT, with visible results
Deep understanding of DBT best practices, including model layering, Jinja templating, testing, and packages
Proficient in SQL performance tuning, modular SQL design, and query optimization
Experience with Airflow (Composer, MWAA), including DAG refactoring and task orchestration
Hands-on experience with modern data stacks (e.g., Snowflake, BigQuery)
Familiarity with data testing and CI/CD for analytics workflows
Strong communication and leadership skills; comfortable working cross-functionally
Nice-to-Have
Experience with DBT Cloud or DBT Core integrations with Airflow
Familiarity with data governance and lineage tools (e.g., dbt docs, Alation)
Exposure to Python (for custom Airflow operators/macros or utilities)
Previous experience mentoring teams through modern data stack transitions
Posted 1 week ago
4.0 - 6.0 years
6 - 8 Lacs
Bengaluru
Work from Office
Role: Snowflake Developer with DBT
Location: Bangalore/Hyderabad/Pune
About the Role:
We are seeking a Snowflake Developer with a deep understanding of DBT (data build tool) to help us design, build, and maintain scalable data pipelines. The ideal candidate will have hands-on experience working with Snowflake and DBT, and a passion for optimizing data processes for performance and efficiency.
Responsibilities:
Design, develop, and optimize Snowflake data models and DBT transformations.
Build and maintain CI/CD pipelines for automated DBT workflows.
Implement best practices for data pipeline performance, scalability, and efficiency in Snowflake.
Contribute to the DBT community or develop internal tools/plugins to enhance the workflow.
Troubleshoot and resolve complex data pipeline issues using DBT and Snowflake.
Qualifications:
Must have a minimum of 4+ years of experience with Snowflake.
Must have at least 1 year of experience with DBT.
Extensive experience with DBT, including setting up CI/CD pipelines, optimizing performance, and contributing to the DBT community or plugins.
Must be strong in SQL, data modelling, and ELT pipelines.
Excellent problem-solving skills and the ability to collaborate effectively in a team environment.
Posted 1 week ago
4.0 - 6.0 years
6 - 8 Lacs
Bengaluru
Work from Office
About the Role:
We are seeking a skilled and detail-oriented Data Migration Specialist with hands-on experience in Alteryx and Snowflake. The ideal candidate will be responsible for analyzing existing Alteryx workflows, documenting the logic and data transformation steps, and converting them into optimized, scalable SQL queries and processes in Snowflake. The ideal candidate should have solid SQL expertise and a strong understanding of data warehousing concepts. This role plays a critical part in our cloud modernization and data platform transformation initiatives.
Key Responsibilities:
Analyze and interpret complex Alteryx workflows to identify data sources, transformations, joins, filters, aggregations, and output steps.
Document the logical flow of each Alteryx workflow, including inputs, business logic, and outputs.
Translate Alteryx logic into equivalent SQL scripts optimized for Snowflake, ensuring accuracy and performance.
Write advanced SQL queries and stored procedures, and use Snowflake-specific features like Streams, Tasks, Cloning, Time Travel, and Zero-Copy Cloning.
Implement data ingestion strategies using Snowpipe, stages, and external tables.
Optimize Snowflake performance through query tuning, partitioning, clustering, and caching strategies.
Collaborate with data analysts, engineers, and stakeholders to validate transformed logic against expected results.
Handle data cleansing, enrichment, aggregation, and business logic implementation within Snowflake.
Suggest improvements and automation opportunities during migration.
Conduct unit testing and support UAT (User Acceptance Testing) for migrated workflows.
Maintain version control, documentation, and an audit trail for all converted workflows.
Required Skills:
Bachelor's or Master's degree in Computer Science, Information Technology, Data Science, or a related field.
Must have at least 4 years of hands-on experience in designing and developing scalable data solutions using the Snowflake Data Cloud platform.
Extensive experience with Snowflake, including designing and implementing Snowflake-based solutions.
1+ years of experience with Alteryx Designer, including advanced workflow development and debugging.
Strong proficiency in SQL, with 3+ years specifically working with Snowflake or other cloud data warehouses.
Python programming experience focused on data engineering.
Experience with data APIs and batch/stream processing.
Solid understanding of data transformation logic like joins, unions, filters, formulas, aggregations, pivots, and transpositions.
Experience in performance tuning and optimization of SQL queries in Snowflake.
Familiarity with Snowflake features like CTEs, Window Functions, Tasks, Streams, Stages, and External Tables.
Exposure to migration or modernization projects from ETL tools (like Alteryx/Informatica) to SQL-based cloud platforms.
Strong documentation skills and attention to detail.
Experience working in Agile/Scrum development environments.
Good communication and collaboration skills.
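As a rough illustration of the Snowflake Streams and Tasks mentioned above (often used when re-implementing incremental Alteryx logic), the sketch below issues the relevant DDL through the Python connector; all object names, credentials, and the schedule are assumptions for illustration.

```python
import snowflake.connector

con = snowflake.connector.connect(account="xy12345", user="etl_user",
                                  password="***", warehouse="ETL_WH",
                                  database="ANALYTICS", schema="STAGING")
cur = con.cursor()

# A stream captures inserts/updates/deletes on the source table since the last read
cur.execute("CREATE OR REPLACE STREAM ORDERS_STREAM ON TABLE RAW_ORDERS")

# A task periodically merges those changes into the curated table
cur.execute("""
CREATE OR REPLACE TASK MERGE_ORDERS
  WAREHOUSE = ETL_WH
  SCHEDULE = '15 MINUTE'
WHEN SYSTEM$STREAM_HAS_DATA('ORDERS_STREAM')
AS
  MERGE INTO CURATED.ORDERS t
  USING ORDERS_STREAM s ON t.ORDER_ID = s.ORDER_ID
  WHEN MATCHED THEN UPDATE SET t.AMOUNT = s.AMOUNT
  WHEN NOT MATCHED THEN INSERT (ORDER_ID, AMOUNT) VALUES (s.ORDER_ID, s.AMOUNT)
""")

cur.execute("ALTER TASK MERGE_ORDERS RESUME")  # tasks are created suspended
con.close()
```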
Posted 1 week ago
7.0 - 11.0 years
30 - 35 Lacs
Bengaluru
Hybrid
Lead Data Engineer
We're Hiring: Lead Data Engineer | Bangalore | 7 - 11 Years Experience
Location: Bangalore (Hybrid)
Position Type: Permanent
Mode of Interview: Face to Face
Experience: 7 - 11 years
Skills: Snowflake, ETL tools (Informatica/BODS/DataStage), Scripting (Python/PowerShell/Shell), SQL, Data Warehousing
Candidates who are available for a face-to-face discussion can apply. Interested? Send your updated CV to: radhika@theedgepartnership.com
Do connect with me on LinkedIn: https://www.linkedin.com/in/radhika-gm-00b20a254/
Skills and Qualification (Functional and Technical Skills)
Functional Skills:
Team Player: Support peers, team, and department management.
Communication: Excellent verbal, written, and interpersonal communication skills.
Problem Solving: Excellent problem-solving skills, incident management, root cause analysis, and proactive solutions to improve quality.
Partnership and Collaboration: Develop and maintain partnerships with business and IT stakeholders.
Attention to Detail: Ensure accuracy and thoroughness in all tasks.
Technical/Business Skills:
Data Engineering: Experience in designing and building data warehouses and data lakes. Good knowledge of data warehouse principles and concepts. Technical expertise working in large-scale data warehousing applications and databases such as Oracle, Netezza, Teradata, and SQL Server. Experience with public cloud-based data platforms, especially Snowflake and AWS.
Data integration skills: Expertise in design and development of complex data pipeline solutions using any industry-leading ETL tools such as SAP Business Objects Data Services (BODS), Informatica Cloud Data Integration Services (IICS), IBM DataStage. Experience with ELT tools such as DBT, Fivetran, and AWS Glue.
Expert in SQL, with development experience in at least one scripting language (Python etc.); adept in tracing and resolving data integrity issues.
Strong knowledge of data architecture, data design patterns, modeling, and cloud data solutions (Snowflake, AWS Redshift, Google BigQuery).
Data Model: Expertise in logical and physical data models using relational or dimensional modeling practices and high-volume ETL/ELT processes.
Performance tuning of data pipelines and DB objects to deliver optimal performance.
Experience in GitLab version control and CI/CD processes.
Experience working in the financial industry is a plus.
Posted 1 week ago
15.0 - 20.0 years
20 - 30 Lacs
Noida, Gurugram
Hybrid
Design architectures using Microsoft SQL Server and MongoDB. Develop ETL processes and data lakes. Integrate reporting tools like Power BI, Qlik, and Crystal Reports into the data strategy. Implement AWS cloud services, PaaS, SaaS, IaaS, SQL and NoSQL databases, and data integration.
Posted 1 week ago
5.0 - 8.0 years
7 - 10 Lacs
Bengaluru
Work from Office
Skill required: Delivery - Adobe Analytics
Designation: I&F Decision Sci Practitioner Sr Analyst
Qualifications: Any Graduation
Years of Experience: 5 to 8 years
About Accenture
Combining unmatched experience and specialized skills across more than 40 industries, we offer Strategy and Consulting, Technology and Operations services, and Accenture Song, all powered by the world's largest network of Advanced Technology and Intelligent Operations centers. Our 699,000 people deliver on the promise of technology and human ingenuity every day, serving clients in more than 120 countries. Visit us at www.accenture.com
What would you do
So how do organizations sustain themselves? The key is a new operating model, one that is anchored around the customer and propelled by intelligence to deliver outstanding experiences across the enterprise at speed and at scale. Adobe Analytics is a solution for applying real-time analytics and detailed segmentation across all marketing channels. It gathers structured and unstructured customer data from online and offline sources, applies real-time analytics, and shares insights across an organization. It provides the capabilities of Web & Mobile Analytics, Marketing Analytics, and Predictive Analytics.
What are we looking for
Adobe Analytics & Looker
Minimum 4 years of experience in digital analytics. Must have a good understanding of the digital marketing landscape and relevant tools
Extensive working experience in Adobe Analytics & Looker products: Reporting & Analytics, Ad hoc Analysis, Workspace
Experience in SQL and Snowflake/GCP
Ability to translate business trends into simple, actionable visuals
Experience in Beauty/CPG/Retail
Adaptable and flexible
Ability to work well in a team
Agility for quick learning
Commitment to quality
Written and verbal communication
Roles and Responsibilities:
Identify relevant data sources and determine effective methods for data analysis
Transform raw data from data sources into aggregate/consumable data for the visualization tool and build the appropriate schema/relations
Design, develop, and maintain user-friendly data visualizations that align with project objectives
Collaborate with cross-functional teams to understand project requirements and establish data-related project briefs
Deploy dashboards to production and track them for regular refresh
Set up alerts and appropriate access controls for dashboards
Track and report where data is not captured appropriately, especially when there are tagging gaps
Update, create, and support databases and reports, incorporating key metrics critical for guiding strategic decisions
Ensure data accuracy, completeness, and reliability in all reporting activities
Continuously refine and improve existing dashboards for optimal performance and user experience
Implement user feedback and conduct usability testing to enhance dashboard effectiveness
Stay abreast of industry developments and incorporate innovative techniques into visualization strategies
Drive adoption of reports
Qualification: Any Graduation
Posted 1 week ago
7.0 - 12.0 years
9 - 14 Lacs
Mumbai, New Delhi, Bengaluru
Work from Office
We are seeking a highly skilled Database Tester with extensive experience in ETL QA Testing, SQL, and Python scripting to join our team. The ideal candidate must have a strong background in Snowflake and Banking domain testing. This is a remote role that requires an analytical mindset and excellent problem-solving skills.
Skills: Python, SQL, Database Testing, ETL QA Testing, Snowflake, Memcache, Redis
Location: Remote
Posted 1 week ago
5.0 - 10.0 years
10 - 15 Lacs
New Delhi, Chennai, Bengaluru
Work from Office
We are looking for an experienced Data Engineer with a strong background in data engineering, storage, and cloud technologies. The role involves designing, building, and optimizing scalable data pipelines, ETL/ELT workflows, and data models for efficient analytics and reporting. The ideal candidate must have strong SQL expertise, including complex joins, stored procedures, and certificate-auth-based queries. Experience with NoSQL databases such as Firestore, DynamoDB, or MongoDB is required, along with proficiency in data modeling and warehousing solutions like BigQuery (preferred), Redshift, or Snowflake. The candidate should have hands-on experience working with ETL/ELT pipelines using Airflow, dbt, Kafka, or Spark. Proficiency in scripting languages such as PySpark, Python, or Scala is essential. Strong hands-on experience with Google Cloud Platform (GCP) is a must. Additionally, experience with visualization tools such as Google Looker Studio, LookML, Power BI, or Tableau is preferred. Good-to-have skills include exposure to Master Data Management (MDM) systems and an interest in Web3 data and blockchain analytics.
Posted 1 week ago
7.0 - 12.0 years
30 - 45 Lacs
Hyderabad, Gurugram, Bengaluru
Work from Office
Senior Data Modeller - Telecom Domain
Job Location: Anywhere in India (Preferred locations - Gurugram, Noida, Hyderabad, Bangalore)
Experience: 7+ Years
Domain: Telecommunications
Job Summary: We are hiring a Senior Data Modeller with strong telecom domain expertise. You will design and standardize enterprise-wide data models across domains like Customer, Product, Billing, and Network, ensuring alignment with TM Forum standards (SID, eTOM). You'll collaborate with cross-functional teams to translate business needs into scalable, governed data structures, supporting analytics, ML, and digital transformation.
Key Responsibilities:
Design logical/physical data models for telecom domains
Align models with TM Forum SID, eTOM, ODA, and data mesh principles
Develop schemas (normalized, Star, Snowflake) based on business needs
Maintain data lineage, metadata, and version control
Collaborate with engineering teams on Azure, Databricks implementations
Tag data for privacy, compliance (GDPR), and data quality
Required Skills:
7+ years in data modelling, 3+ years in the telecom domain
Proficient in TM Forum standards and telecom business processes
Hands-on with data modeling tools (SSAS, dbt, Informatica)
Expertise in SQL, metadata documentation, schema design
Cloud experience: Azure Synapse, Databricks, Snowflake
Experience in CRM, billing, network usage, campaign data models
Familiar with data mesh, domain-driven design, and regulatory frameworks
Education: Bachelor's or Master's in CS, Telecom Engineering, or a related field
Please go through the JD and, if you are interested, kindly share your updated resume along with the following details: a few bullet points on current CTC (fixed plus variable), offer in hand (fixed plus variable), expected CTC, notice period, and a few points on relevant skills and experience. Email: sp@intellisearchonline.net
Posted 1 week ago
3.0 - 8.0 years
9 - 19 Lacs
Pune, Gurugram, Bengaluru
Hybrid
Snowflake Developer
Posted 1 week ago
2.0 - 7.0 years
11 - 16 Lacs
Gurugram
Work from Office
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.
Primary Responsibilities:
Conducting data analysis using SQL and Python to extract insights from large data sets
Conducting exploratory data analysis to identify trends, patterns, and insights from data
Developing AI/ML models and algorithms to automate and optimize business processes
Staying up-to-date with the latest advancements in AI/ML techniques and tools and identifying opportunities to apply them to enhance existing solutions
Documenting and communicating findings, methodologies, and insights to technical and non-technical stakeholders
Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions
Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so
Required Qualifications:
Bachelor's degree in Computer Science, Statistics, or a related field
2+ years of experience in SQL, Python, and Snowflake
Experience with exploratory data analysis and generating insights from data
Knowledge of machine learning algorithms and techniques
Proven solid problem-solving skills and attention to detail
Proven excellent communication and collaboration skills
Proven ability to work in a fast-paced environment and manage multiple projects simultaneously
At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission. #Nic
Posted 1 week ago
4.0 - 9.0 years
9 - 14 Lacs
Hyderabad
Work from Office
As part of our strategic initiative to build a centralized capability around data and cloud engineering, we are establishing a dedicated Azure Cloud Data Engineering practice. This team will be at the forefront of designing, developing, and deploying scalable data solutions on cloud, primarily using the Microsoft Azure platform. The practice will serve as a centralized team, driving innovation, standardization, and best practices across cloud-based data initiatives. New hires will play a pivotal role in shaping the future of our data landscape, collaborating with cross-functional teams, clients, and stakeholders to deliver impactful, end-to-end solutions.
Primary Responsibilities:
Ingest data from multiple on-prem and cloud data sources using various tools and capabilities in Azure
Design and develop Azure Databricks processes using PySpark/Spark-SQL
Design and develop orchestration jobs using ADF, Databricks Workflows
Analyze data engineering processes being developed and act as an SME to troubleshoot performance issues and suggest solutions to improve them
Develop and maintain CI/CD processes using Jenkins, GitHub, GitHub Actions, etc.
Build a test framework for the Databricks notebook jobs for automated testing before code deployment
Design and build POCs to validate new ideas, tools, and architectures in Azure
Continuously explore new Azure services and capabilities; assess their applicability to business needs
Create detailed documentation for cloud processes, architecture, and implementation patterns
Work with the data & analytics team to build and deploy efficient data engineering processes and jobs on Azure cloud
Prepare case studies and technical write-ups to showcase successful implementations and lessons learned
Work closely with clients, business stakeholders, and internal teams to gather requirements and translate them into technical solutions using best practices and appropriate architecture
Contribute to full lifecycle project implementations, from design and development to deployment and monitoring
Ensure solutions adhere to security, compliance, and governance standards
Monitor and optimize data pipelines and cloud resources for cost and performance efficiency
Identify solutions to non-standard requests and problems
Support and maintain the self-service BI warehouse
Mentor and support existing on-prem developers for the cloud environment
Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so
Required Qualifications:
Undergraduate degree or equivalent experience
4+ years of overall experience in Data & Analytics engineering
4+ years of experience working with Azure, Databricks, ADF, and Data Lake
Solid experience working with data platforms and products using PySpark and Spark-SQL
Solid experience with CI/CD tools such as Jenkins, GitHub, GitHub Actions, Maven, etc.
In-depth understanding of Azure architecture and the ability to come up with efficient designs and solutions
Highly proficient in Python and SQL
Proven excellent communication skills
Preferred Qualifications:
Snowflake, Airflow experience
Power BI development experience
Experience or knowledge of health care concepts – E&I, M&R, C&S LOBs, Claims, Members, Provider, Payers, Underwriting
At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission. #NIC
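For a sense of the Databricks/PySpark work this role describes, here is a minimal notebook-style batch-transformation sketch; the ADLS storage paths, table columns, and aggregation are illustrative assumptions only.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()  # a session already exists in a Databricks notebook

# Assumed ADLS Gen2 paths; in practice these come from job parameters or ADF pipeline settings
raw_path = "abfss://raw@mystorageacct.dfs.core.windows.net/claims/"
curated_path = "abfss://curated@mystorageacct.dfs.core.windows.net/claims_daily/"

claims = spark.read.format("parquet").load(raw_path)

daily = (claims
         .withColumn("service_date", F.to_date("service_ts"))
         .groupBy("service_date", "plan_code")
         .agg(F.count("*").alias("claim_count"),
              F.sum("paid_amount").alias("total_paid")))

# Delta format supports schema evolution and time travel on the curated layer
(daily.write.format("delta")
      .mode("overwrite")
      .partitionBy("service_date")
      .save(curated_path))
```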
Posted 1 week ago
4.0 - 9.0 years
15 - 21 Lacs
Bengaluru
Work from Office
About Zscaler Serving thousands of enterprise customers around the world including 45% of Fortune 500 companies, Zscaler (NASDAQ: ZS) was founded in 2007 with a mission to make the cloud a safe place to do business and a more enjoyable experience for enterprise users. As the operator of the world’s largest security cloud, Zscaler accelerates digital transformation so enterprises can be more agile, efficient, resilient, and secure. The pioneering, AI-powered Zscaler Zero Trust Exchange™ platform, which is found in our SASE and SSE offerings, protects thousands of enterprise customers from cyberattacks and data loss by securely connecting users, devices, and applications in any location. Named a Best Workplace in Technology by Fortune and others, Zscaler fosters an inclusive and supportive culture that is home to some of the brightest minds in the industry. If you thrive in an environment that is fast-paced and collaborative, and you are passionate about building and innovating for the greater good, come make your next move with Zscaler. We built the Zscaler architecture from the ground up as a platform that could extend to new features and services. Our Product Management team takes hold of this massive opportunity to deliver our customers a growing portfolio of never-before-seen capabilities in threat prevention, visibility, scalability, and business enablement. Our product managers are champions of innovation with a shared vision for Zscaler and the limitless possibilities of cloud security. Join us to make your mark on the planning and product roadmap at the forefront of the world's cloud security leader. We are looking for an Education Operations Specialist with analytics experience who will be reporting into Platform Training Operations Manager. You will be supporting various cross-functional teams within Zscaler, such as the Partner Technical Enablement Team, Demo & Labs Team, and other key stakeholders. 
In this role you will be responsible for:
Operating as part of the global Platform Training and Certification team and contributing to the tier-1 support of the Partner Academy Program and our demo platform, requiring adaptable hours to US time zones
Analyzing data to answer key questions for stakeholders or yourself, with an eye on what drives business performance, and investigating and communicating which areas need improvement in efficiency and productivity
Assisting with and creating rich interactive visualizations through data interpretation and analysis, with reporting components from multiple data sources
Providing critical operations support for Technical Management, Business Development, Training, and Curriculum Development functions
Assisting in the developmental operations processes as well as maintenance for new and existing initiatives to drive growth and certifications, and contributing to an expanded operations role
What We're Looking for (Minimum Qualifications)
Bachelor's Degree in business, information technology, or similar
Experience with project management
3+ years of experience mining data as a data analyst
Experience with SQL, with an aptitude for learning other analytics tools
Experience with project management focused on delivering strategic solutions, coordinating with teams to improve processes in a scaling environment
What Will Make You Stand Out (Preferred Qualifications)
Proficiency with business productivity tools like GSuite, Asana, Tableau, Jira, Confluence, ServiceNow, and Salesforce
Experience managing Asana or other work management platforms
Experience with Salesforce data, Snowflake, database and model design, and segmentation techniques
#LI-Hybrid #LI-KM8
At Zscaler, we are committed to building a team that reflects the communities we serve and the customers we work with. We foster an inclusive environment that values all backgrounds and perspectives, emphasizing collaboration and belonging. Join us in our mission to make doing business seamless and secure. Our Benefits program is one of the most important ways we support our employees. Zscaler proudly offers comprehensive and inclusive benefits to meet the diverse needs of our employees and their families throughout their life stages, including: various health plans; time off plans for vacation and sick time; parental leave options; retirement options; education reimbursement; in-office perks, and more! By applying for this role, you adhere to applicable laws, regulations, and Zscaler policies, including those related to security and privacy standards and guidelines. Zscaler is committed to providing equal employment opportunities to all individuals. We strive to create a workplace where employees are treated with respect and have the chance to succeed. All qualified applicants will be considered for employment without regard to race, color, religion, sex (including pregnancy or related medical conditions), age, national origin, sexual orientation, gender identity or expression, genetic information, disability status, protected veteran status, or any other characteristic protected by federal, state, or local laws. See more information by clicking on the Know Your Rights: Workplace Discrimination is Illegal link. Pay Transparency: Zscaler complies with all applicable federal, state, and local pay transparency rules.
Zscaler is committed to providing reasonable support (called accommodations or adjustments) in our recruiting processes for candidates who are differently abled, have long term conditions, mental health conditions or sincerely held religious beliefs, or who are neurodivergent or require pregnancy-related support.
Posted 1 week ago
3.0 - 7.0 years
4 - 7 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
About the Role: We're hiring 2 Cloud & Data Engineering Specialists to join our fast-paced, agile team. These roles are focused on designing, developing, and scaling modern, cloud-based data engineering solutions using tools like Azure, AWS, GCP, Databricks, Kafka, PySpark, SQL, Snowflake, and ADF.
Position 1: Cloud & Data Engineering Specialist (Resource 1)
Key Responsibilities:
Develop and manage cloud-native solutions on Azure or AWS
Build real-time streaming apps with Kafka
Engineer services using Java and Python
Deploy and manage Kubernetes-based containerized applications
Process big data using Databricks
Administer SQL Server and Snowflake databases, write advanced SQL
Utilize Unix/Linux for system operations
Must-Have Skills:
Azure or AWS cloud experience
Kafka, Java, Python, Kubernetes
Databricks, SQL Server, Snowflake
Unix/Linux commands
Location: Remote, Delhi NCR, Bengaluru, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad
Posted 1 week ago
5.0 - 10.0 years
20 - 25 Lacs
Mumbai
Work from Office
Entity: Accenture Strategy & Consulting
Team: Strategy & Consulting Global Network
Practice: Marketing Analytics
Title: Data Science Manager
Job location: Gurgaon
About S&C - Global Network:
The Accenture Applied Intelligence practice helps our clients grow their business in entirely new ways. Analytics enables our clients to achieve high performance through insights from data - insights that inform better decisions and strengthen customer relationships. From strategy to execution, Accenture works with organizations to develop analytic capabilities - from accessing and reporting on data to predictive modelling - to outperform the competition.
WHAT'S IN IT FOR YOU
As part of our Analytics practice, you will join a worldwide network of over 20,000 smart and driven colleagues experienced in leading statistical tools, methods and applications. From data to analytics and insights to actions, our forward-thinking consultants provide analytically informed, issue-based insights at scale to help our clients improve outcomes and achieve high performance. Accenture will continually invest in your learning and growth. You'll work with MMM experts, and Accenture will support you in growing your own tech stack and certifications. In Applied Intelligence you will understand the importance of sound analytical decision-making and the relationship of tasks to the overall project, and execute projects in the context of a business performance improvement initiative.
What you would do in this role
Working through the phases of the project
Define data requirements for creating a model and understand the business problem
Clean, aggregate, analyze, and interpret data, and carry out quality analysis of it
5+ years of advanced experience in Market Mix Modeling and related concepts of optimizing promotional channels and budget allocation
Experience in working with non-linear optimization techniques
Proficiency in statistical and probabilistic methods such as SVM, Decision Trees, Bagging and Boosting techniques, Clustering
Hands-on experience in Python data-science and math packages such as NumPy, Pandas, Sklearn, Seaborn, PyCaret, Matplotlib
Development of AI/ML models
Develop and manage data pipelines
Develop and manage data within different layers of Azure/Snowflake
Aware of common design patterns for scalable machine learning architectures, as well as tools for deploying and maintaining machine learning models in production
Knowledge of cloud platforms and their usage for pipelining, deploying, and scaling marketing mix models
Working knowledge of the MMM optimizer and its intricacies
Awareness of MMM application development and backend engine integration will be preferred
Working along with the team and consultant/manager
Well versed with creating insights presentations and client-ready decks
Should be able to mentor and guide a team of 10-15 people
Manage client relationships and expectations, and communicate insights and recommendations effectively
Capability building and thought leadership
Logical Thinking: Able to think analytically, use a systematic and logical approach to analyze data, problems, and situations. Notices discrepancies and inconsistencies in information and materials.
Task Management: Advanced level of task management knowledge and experience. Should be able to plan own tasks, discuss and work on priorities, and track and report progress.
Qualification
Who we are looking for
5+ years of work experience in consulting/analytics with a reputed organization is desirable.
Master's degree in Statistics/Econometrics/Economics, or B.Tech/M.Tech or Master's/M.Tech in Computer Science, or M.Phil/Ph.D in statistics/econometrics or a related field from a reputed college
Must have knowledge of SQL and Python and at least one cloud-based technology (Azure, AWS, GCP)
Must have good knowledge of market mix modeling techniques and optimization algorithms and their applicability to industry data
Must have data migration experience from cloud to Snowflake (Azure, GCP, AWS)
Managing sets of XML, JSON, and CSV from disparate sources
Manage documentation of data models, architecture, and maintenance processes
Have an understanding of econometric/statistical modeling and analysis techniques such as regression analysis, hypothesis testing, multivariate statistical analysis, time series techniques, and optimization techniques, and statistical packages such as R, Python, Java, SQL, Spark, etc.
Working knowledge of Machine Learning algorithms like Random Forest, Gradient Boosting, Neural Networks, etc.
Proficient in Excel, MS Word, PowerPoint, etc.
Strong client and team management and planning of large-scale projects with risk assessment
Accenture is an equal opportunities employer and welcomes applications from all sections of society and does not discriminate on grounds of race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, or any other basis as protected by applicable law.
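As context for the Market Mix Modeling skills listed above, a toy media-mix regression is sketched below; the channel names, adstock decay, and synthetic data are purely illustrative assumptions and not part of the posting.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
weeks = 104
# Hypothetical weekly media spend by channel (arbitrary units)
df = pd.DataFrame({
    "tv": rng.gamma(2.0, 50, weeks),
    "digital": rng.gamma(2.0, 30, weeks),
    "print": rng.gamma(2.0, 10, weeks),
})

def adstock(x, decay=0.5):
    """Carry a fraction of last week's media effect into the current week."""
    out = np.zeros(len(x))
    for t in range(len(x)):
        out[t] = x.iloc[t] + (decay * out[t - 1] if t > 0 else 0.0)
    return out

X = df.apply(adstock)
# Synthetic sales: baseline + channel contributions + noise
y = 500 + 2.0 * X["tv"] + 3.5 * X["digital"] + 1.0 * X["print"] + rng.normal(0, 50, weeks)

model = Ridge(alpha=1.0).fit(X, y)
print(dict(zip(X.columns, model.coef_.round(2))))  # estimated channel effects
```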
Posted 1 week ago