13.0 - 18.0 years
14 - 19 Lacs
Pune
Hybrid
So, what's the role all about?
We are seeking a highly skilled and motivated Engineering Tech Manager to join our SURVEIL-X team, focused on building scalable compliance solutions for financial markets. You'll drive R&D delivery, technical excellence, and quality; manage a high-performing team; and ensure delivery of robust surveillance systems aligned with regulatory requirements.

How will you make an impact?
Lead and mentor a team of software engineers in building scalable surveillance systems.
Drive the design, development, and maintenance of applications.
Collaborate with cross-functional teams including App Ops, DevOps, Professional Services, and Product.
Own project delivery timelines, code quality, and system architecture.
Ensure best practices in software engineering, including CI/CD, code reviews, and testing.

Have you got what it takes?
Key Technical Skills:
Strong expertise in Python: architecture, development, and optimization.
Strong expertise in building data and ETL pipelines.
Strong expertise in message-oriented applications.
Technical know-how of AWS services and cloud-native development.
Technical know-how of NoSQL and object storage.
Good knowledge of RDBMS: MS SQL, PostgreSQL.
Technical experience with indexing/search technologies (preferably Elasticsearch).
Experience with containerization.

Good to Have:
Experience in the financial markets compliance domain.
Experience with Helm and Kubernetes container orchestration.

Qualifications:
Bachelor's or Master's degree in Computer Science or a related field.
13-15 years of total experience with at least 2-3 years in a leadership or managerial role.

What's in it for you?
Join an ever-growing, market-disrupting, global company where the teams, comprised of the best of the best, work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NiCE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NiCEr!

Enjoy NiCE-FLEX!
At NiCE, we work according to the NiCE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.

Requisition ID: 7672
Reporting into: Director
Role Type: Manager
Posted 1 week ago
3.0 - 6.0 years
20 - 25 Lacs
Bengaluru
Hybrid
Join us as a Data Engineer II in Bengaluru! Build scalable data pipelines using Python, SQL, AWS, Airflow, and Kafka. Drive real-time and batch data systems across analytics, ML, and product teams. A hybrid work option is available.
Required candidate profile: 3+ years in data engineering with strong Python, SQL, AWS, Airflow, Spark, Kafka, Debezium, Redshift, ETL, and CDC experience. Must know data lakes, warehousing, and orchestration tools.
Posted 1 week ago
3.0 - 5.0 years
5 - 13 Lacs
Chennai
Hybrid
Job Title: Quality Engineer
Location: Chennai
Reports To: Senior AI Engineer / Data Architect

Job Summary: The Quality Engineer will be responsible for ensuring the reliability, functionality, and performance of software products, particularly in AI/data-driven applications. The role includes test planning, automation, and manual validation of data and models.

Key Responsibilities:
Design and execute test plans and test cases for web, backend, and AI systems.
Develop automated tests for data pipelines, APIs, and model outputs.
Design and execute comprehensive test plans for AI-driven ETL workflows, data pipelines, and orchestration agents.
Collaborate with AI engineers and data architects to define measurable acceptance criteria for agent behavior.
Monitor performance benchmarks, provide feedback for optimization, and collaborate with developers, AI engineers, and DevOps teams.

Required Qualifications:
Bachelor's degree in Computer Science or a related field.
3+ years of experience in software quality assurance or test automation.
Strong understanding of data processing, ETL pipelines, and data quality validation.
Proficient in Python, with experience writing automated tests using PyTest or similar frameworks.
Experience with data testing tools (e.g., dbt tests).
Familiarity with cloud-based data platforms (Azure preferred) and orchestration tools.
Exposure to AI or ML systems, especially those involving autonomous workflows or agent-based reasoning.
Posted 1 week ago
6.0 - 11.0 years
6 - 11 Lacs
Delhi, India
On-site
Developing ETL pipelines involving big data. Developing data processing/analytics applications primarily using PySpark. Experience of developing applications on cloud (AWS), mostly using services related to storage, compute, ETL, DWH, analytics, and streaming. Clear understanding of, and ability to implement, distributed storage, processing, and scalable applications. Experience of working with SQL and NoSQL databases. Ability to write and analyze SQL, HQL, and other query languages for NoSQL databases. Proficiency in writing distributed, scalable data processing code using PySpark, Python, and related libraries.

Data Engineer AEP Competency:
Experience of developing applications that consume services exposed as REST APIs.

Special consideration given for:
Experience of working with container-orchestration systems like Kubernetes.
Experience of working with any enterprise-grade ETL tools.
Experience/knowledge with Adobe Experience Cloud solutions.
Experience/knowledge with Web Analytics or Digital Marketing.
Experience/knowledge with Google Cloud platforms.
Experience/knowledge with Data Science, ML/AI, R, or Jupyter.
Posted 1 week ago
6.0 - 11.0 years
25 - 37 Lacs
Hyderabad, Bengaluru, Delhi / NCR
Work from Office
Azure Expertise: Proven experience with Azure Cloud services, especially Azure Data Factory, Azure SQL Database, and Azure Databricks. Expert in PySpark data processing and analytics. Strong background in building and optimizing data pipelines and workflows.
Required candidate profile: Solid experience with data modeling, ETL processes, and data warehousing. Performance tuning: ability to optimize data pipelines and jobs to ensure scalability and performance, including troubleshooting and resolving performance issues.
Posted 1 week ago
7.0 - 12.0 years
20 - 35 Lacs
Pune
Hybrid
Job Duties and Responsibilities:
We are looking for a self-starter to join our Data Engineering team. You will work in a fast-paced environment where you will get an opportunity to build and contribute to the full lifecycle development and maintenance of the data engineering platform.

With the Data Engineering team you will get an opportunity to:
Design and implement data engineering solutions that are scalable, reliable, and secure on the cloud environment.
Understand and translate business needs into data engineering solutions.
Build large-scale data pipelines that can handle big data sets using distributed data processing techniques that support the efforts of the data science and data application teams.
Partner with cross-functional stakeholders including product managers, architects, data quality engineers, and application and quantitative science end users to deliver engineering solutions.
Contribute to defining data governance across the data platform.

Basic Requirements:
A minimum of a BS degree in computer science, software engineering, or a related scientific discipline.
3+ years of work experience in building scalable and robust data engineering solutions.
Strong understanding of object-oriented programming and proficiency in Python (TDD) and PySpark to build scalable algorithms.
3+ years of experience in distributed computing and big data processing using the Apache Spark framework, including Spark optimization techniques.
2+ years of experience with Databricks, Delta tables, Unity Catalog, Delta Sharing, Delta Live Tables (DLT), and incremental data processing.
Experience with Delta Lake and Unity Catalog.
Advanced SQL coding and query optimization experience, including the ability to write analytical and nested queries.
3+ years of experience in building scalable ETL/ELT data pipelines on Databricks and AWS (EMR).
2+ years of experience orchestrating data pipelines using Apache Airflow/MWAA.
Understanding and experience of AWS services including ADX, EC2, and S3.
3+ years of experience with data modeling techniques for structured/unstructured datasets.
Experience with relational/columnar databases (Redshift, RDS) and interactive querying services (Athena/Redshift Spectrum).
Passion for healthcare and improving patient outcomes.
Analytical thinking with strong problem-solving skills.
Stays on top of emerging technologies and possesses a willingness to learn.

Bonus Experience (optional):
Experience with an Agile environment.
Experience operating in a CI/CD environment.
Experience building HTTP/REST APIs using popular frameworks.
Healthcare experience.
Posted 1 week ago
2.0 - 6.0 years
0 - 1 Lacs
Pune
Work from Office
As Lead Data Engineer, you'll design and manage scalable ETL pipelines and clean, structured data flows for real-time retail analytics. You'll work closely with ML engineers and business teams to deliver high-quality, ML-ready datasets.
Responsibilities:
Develop and optimize large-scale ETL pipelines.
Design schema-aware data flows and dashboard-ready datasets.
Manage data pipelines on AWS (S3, Glue, Redshift).
Work with transactional and retail data for real-time insights.
Posted 1 week ago
3.0 - 6.0 years
4 - 8 Lacs
Hyderabad
Work from Office
Roles & Responsibilities:
Experience level: 10 years
Analyzing raw data
Developing and maintaining datasets
Improving data quality and efficiency
Interpreting trends and patterns
Conducting complex data analysis and reporting on results
Preparing data for prescriptive and predictive modeling
Building algorithms and prototypes
Combining raw information from different sources
Exploring ways to enhance data quality and reliability
Identifying opportunities for data acquisition
Developing analytical tools and programs
Collaborating with data scientists and architects on several projects

Technical Skills:
Implementing data governance with monitoring, alerting, and reporting
Technical writing capability: documenting standards, templates, and procedures
Databricks: knowledge of patterns for scaling ETL pipelines effectively
Orchestrating data analytics workloads: Databricks jobs and workflows
Integrating Azure DevOps CI/CD practices with data pipeline development
ETL modernization, data modelling
Strong exposure to Azure Data services, Synapse, data orchestration, and visualization
Data warehousing and data lakehouse architectures
Data streaming and real-time analytics
Python: PySpark library, Pandas
Azure Data Factory: data orchestration
Azure SQL: scripting, querying, stored procedures
Posted 1 week ago
5.0 - 8.0 years
7 - 15 Lacs
Hyderabad
Work from Office
Required skills:
Over 5 years of experience with DevOps methodologies and tools, including CI/CD pipelines (e.g., Jenkins, Git) and application migrations.
Hands-on experience with system management scripting languages (Python, Shell, Bash).
Hands-on experience with the AWS and Azure/GCP DevOps stacks.
AWS services (EC2, VPC, ELB, S3, CloudFormation, CloudTrail, Route 53, RDS, SQS, etc.).
Deep knowledge of cloud security and application security concepts, with prior experience in toolchains.
Good hands-on experience with tools such as Ansible, Docker, Kubernetes, Apache Kafka, SonarQube, etc.
Advanced knowledge of operating systems (Windows and various Linux distributions), virtualization, networking, monitoring, storage, and security.
Experience in multiple platform integrations.
Working experience with DNS management.
Strong fundamentals in IT infrastructure-related configurations.
Understanding across AWS/Azure and cloud infrastructure components (server, storage, network, database, and applications) to deliver end-to-end cloud infrastructure engagements that include assessment, design, deployment, and migration.
Strong knowledge of relational databases, NoSQL databases, and ETL operations.
Experience in performing secure architecture reviews, DevSecOps pipeline builds, and penetration testing.
Knowledge of security of data at rest and data in transit.
Experience in setting up web server configurations for different use cases.
Posted 1 week ago
2.0 - 5.0 years
4 - 7 Lacs
Navi Mumbai
Work from Office
As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, and provide regular support/guidance to project teams on complex coding, issue resolution, and execution. Your primary responsibilities include:
Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements.
Strive for continuous improvements by testing the built solution and working under an agile framework.
Discover and implement the latest technology trends to maximize and build creative solutions.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
Experience with Apache Spark (PySpark): in-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing.
Big Data Technologies: familiarity with Hadoop, HDFS, Kafka, and other big data tools.
Data Engineering Skills: strong understanding of ETL pipelines, data modeling, and data warehousing concepts.
Strong proficiency in Python: expertise in Python programming with a focus on data processing and manipulation.
Data Processing Frameworks: knowledge of data processing libraries such as Pandas and NumPy.
SQL Proficiency: experience writing optimized SQL queries for large-scale data analysis and transformation.
Cloud Platforms: experience working with cloud platforms like AWS, Azure, or GCP, including using cloud storage systems.

Preferred technical and professional experience:
Define, drive, and implement an architecture strategy and standards for end-to-end monitoring.
Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering.
Good to have: experience with detection and prevention tools for company products, platform, and customer-facing systems.
Posted 1 week ago
2.0 - 5.0 years
4 - 7 Lacs
Mumbai
Work from Office
As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, and provide regular support/guidance to project teams on complex coding, issue resolution, and execution. Your primary responsibilities include:
Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements.
Strive for continuous improvements by testing the built solution and working under an agile framework.
Discover and implement the latest technology trends to maximize and build creative solutions.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
Experience with Apache Spark (PySpark): in-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing.
Big Data Technologies: familiarity with Hadoop, HDFS, Kafka, and other big data tools.
Data Engineering Skills: strong understanding of ETL pipelines, data modelling, and data warehousing concepts.
Strong proficiency in Python: expertise in Python programming with a focus on data processing and manipulation.
Data Processing Frameworks: knowledge of data processing libraries such as Pandas and NumPy.
SQL Proficiency: experience writing optimized SQL queries for large-scale data analysis and transformation.
Cloud Platforms: experience working with cloud platforms like AWS, Azure, or GCP, including using cloud storage systems.

Preferred technical and professional experience:
Define, drive, and implement an architecture strategy and standards for end-to-end monitoring.
Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering.
Good to have: experience with detection and prevention tools for company products, platform, and customer-facing systems.
Posted 1 week ago
2.0 - 4.0 years
7 - 9 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
POSITION: Senior Data Engineer / Data Engineer
LOCATION: Bangalore/Mumbai/Kolkata/Gurugram/Hyderabad/Pune/Chennai
EXPERIENCE: 2+ Years
JOB TITLE: Senior Data Engineer / Data Engineer

OVERVIEW OF THE ROLE:
As a Data Engineer or Senior Data Engineer, you will be hands-on in architecting, building, and optimizing robust, efficient, and secure data pipelines and platforms that power business-critical analytics and applications. You will play a central role in the implementation and automation of scalable batch and streaming data workflows using modern big data and cloud technologies. Working within cross-functional teams, you will deliver well-engineered, high-quality code and data models, and drive best practices for data reliability, lineage, quality, and security.

Mandatory Skills:
Hands-on software coding or scripting for a minimum of 3 years.
Experience in product management for at least 2 years.
Stakeholder management experience for at least 3 years.
Experience in one of the GCP, AWS, or Azure cloud platforms.

Key Responsibilities:
Design, build, and optimize scalable data pipelines and ETL/ELT workflows using Spark (Scala/Python), SQL, and orchestration tools (e.g., Apache Airflow, Prefect, Luigi).
Implement efficient solutions for high-volume, batch, real-time streaming, and event-driven data processing, leveraging best-in-class patterns and frameworks.
Build and maintain data warehouse and lakehouse architectures (e.g., Snowflake, Databricks, Delta Lake, BigQuery, Redshift) to support analytics, data science, and BI workloads.
Develop, automate, and monitor Airflow DAGs/jobs on cloud or Kubernetes, following robust deployment and operational practices (CI/CD, containerization, infra-as-code).
Write performant, production-grade SQL for complex data aggregation, transformation, and analytics tasks.
Ensure data quality, consistency, and governance across the stack, implementing processes for validation, cleansing, anomaly detection, and reconciliation.
Collaborate with data scientists, analysts, and DevOps engineers to ingest, structure, and expose structured, semi-structured, and unstructured data for diverse use cases.
Contribute to data modeling, schema design, and data partitioning strategies, and ensure adherence to best practices for performance and cost optimization.
Implement, document, and extend data lineage, cataloging, and observability through tools such as AWS Glue, Azure Purview, Amundsen, or open-source technologies.
Apply and enforce data security, privacy, and compliance requirements (e.g., access control, data masking, retention policies, GDPR/CCPA).
Take ownership of the end-to-end data pipeline lifecycle: design, development, code reviews, testing, deployment, operational monitoring, and maintenance/troubleshooting.
Contribute to frameworks, reusable modules, and automation to improve development efficiency and maintainability of the codebase.
Stay abreast of industry trends and emerging technologies, participating in code reviews, technical discussions, and peer mentoring as needed.

Skills & Experience:
Proficiency with Spark (Python or Scala), SQL, and data pipeline orchestration (Airflow, Prefect, Luigi, or similar).
Experience with cloud data ecosystems (AWS, GCP, Azure) and cloud-native services for data processing (Glue, Dataflow, Dataproc, EMR, HDInsight, Synapse, etc.).
Hands-on development skills in at least one programming language (Python, Scala, or Java preferred); solid knowledge of software engineering best practices (version control, testing, modularity).
Deep understanding of batch and streaming architectures (Kafka, Kinesis, Pub/Sub, Flink, Structured Streaming, Spark Streaming).
Expertise in data warehouse/lakehouse solutions (Snowflake, Databricks, Delta Lake, BigQuery, Redshift, Synapse) and storage formats (Parquet, ORC, Delta, Iceberg, Avro).
Strong SQL development skills for ETL, analytics, and performance optimization.
Familiarity with Kubernetes (K8s), containerization (Docker), and deploying data pipelines in distributed/cloud-native environments.
Experience with data quality frameworks (Great Expectations, Deequ, or custom validation), monitoring/observability tools, and automated testing.
Working knowledge of data modeling (star/snowflake, normalized, denormalized) and metadata/catalog management.
Understanding of data security, privacy, and regulatory compliance (access management, PII masking, auditing, GDPR/CCPA/HIPAA).
Familiarity with BI or visualization tools (PowerBI, Tableau, Looker, etc.) is an advantage but not core.
Previous experience with data migrations, modernization, or refactoring legacy ETL processes to modern cloud architectures is a strong plus.
Bonus: exposure to open-source data tools (dbt, Delta Lake, Apache Iceberg, Amundsen, Great Expectations, etc.) and knowledge of DevOps/MLOps processes.

Professional Attributes:
Strong analytical and problem-solving skills; attention to detail and commitment to code quality and documentation.
Ability to communicate technical designs and issues effectively with team members and stakeholders.
Proven self-starter, fast learner, and collaborative team player who thrives in dynamic, fast-paced environments.
Passion for mentoring, sharing knowledge, and raising the technical bar for data engineering practices.

Desirable Experience:
Contributions to open-source data engineering/tools communities.
Implementing data cataloging, stewardship, and data democratization initiatives.
Hands-on work with DataOps/DevOps pipelines for code and data.
Knowledge of ML pipeline integration (feature stores, model serving, lineage/monitoring integration) is beneficial.

EDUCATIONAL QUALIFICATIONS:
Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field (or equivalent experience).
Certifications in cloud platforms (AWS, GCP, Azure) and/or data engineering (AWS Data Analytics, GCP Data Engineer, Databricks).
Experience working in an Agile environment with exposure to CI/CD, Git, Jira, Confluence, and code review processes.
Prior work in highly regulated or large-scale enterprise data environments (finance, healthcare, or similar) is a plus.
Posted 1 week ago
8.0 - 13.0 years
8 - 13 Lacs
Chennai, Tamil Nadu, India
On-site
Job Summary:
ETL Experience: Build and optimize ETL pipelines, integrate data from multiple sources, and handle data transformations (Informatica IDQ/Cloud).
Python Expertise: Advanced knowledge of Python, including libraries like pandas and requests.
API Development: SOAP/REST APIs.
Database Management: Proficient in SQL and data warehousing.
Posted 1 week ago
1.0 - 3.0 years
14 - 20 Lacs
Bengaluru
Work from Office
We're looking for a backend software engineer who is passionate about building robust systems around data, not just analyzing it. You'll be part of a small, high-impact team working directly with the founders to drive business value through reliable data infrastructure, web automation, and scalable backend services. This is not a data-science-only role; strong computer science fundamentals and real-world engineering experience are essential.

Key Responsibilities:
You have two key responsibilities in this role:
New Products & Features: Collaborate directly with the founding team to build products where data is the core feature, powering everything from enterprise insights to AI-enhanced automation. Design and implement backend services that extract, clean, and structure messy real-world data for downstream use in analytical or AI systems.
Ensure System Reliability & Scale: Maintain high uptime and performance of our backend services used by thousands of customers across industries. Evolve systems architecture to support more data, faster pipelines, and future AI features, all without compromising maintainability.

Other responsibilities include:
Build and maintain Python-based backend systems optimized for speed, scale, and extensibility.
Model, optimize, and query PostgreSQL databases for large, evolving datasets.
Design data pipelines in Python to ingest and process web and API data for real-time and batch systems.
Implement web scraping and reverse engineering strategies using tools like Selenium to access hard-to-reach data sources.
Support AI workflows by preparing clean, structured datasets for model consumption (internal or third-party tools).
Deploy and manage backend services on Linux-based cloud servers, configuring tools like Nginx, uWSGI, and monitoring systems.
Occasionally build internal data visualizations and dashboards to validate and expose AI/data insights.

Qualifications:
1+ years of backend software engineering experience, preferably in data-heavy environments.
B.Tech in Computer Science or a related field.
Strong in Python and data engineering solutions, specifically ETL pipelines.
Exposure to AI workflows (e.g., data pre-processing for ML, model APIs, or vector databases), even if not hands-on ML.
Proficiency in PostgreSQL (or a similar RDBMS), including query optimization and schema design.
Experience with scraping, data ingestion, and external API integration.
Bonus: familiarity with Python-based web backends, Linux server environments, and cloud deployment.
Posted 1 week ago
3.0 - 5.0 years
2 - 7 Lacs
Bengaluru
Remote
Requirements:
3 years of work experience in performance marketing tech, data analytics, technical solutions architecture, or other technical positions.
BS in Engineering, Computer Science, Math, Economics, Statistics, Advertising, Marketing, or other disciplines with demonstrated technical exposure.
Excellent quantitative skills and comfort with tools such as SQL, Excel, Python, Hive, Spark, etc.
Proficiency with Tableau or other dashboard-building tools and JIRA.
Excellent organizational skills and attention to detail.

Preferred Qualifications:
Support or service desk experience.
Outstanding communication skills, particularly in conveying technical concepts in a manner that is easy for marketers and non-technical partners to understand, and vice versa.
Basic understanding of app and web tracking data for performance marketing.
Background in AdTech or ad platforms.
Strong experience with SQL and ETL pipeline building.
Posted 1 week ago
5.0 - 10.0 years
6 - 10 Lacs
Delhi, India
On-site
Responsibilities:
Data Analytics & Insight Generation (30%): Analyze marketing, digital, and campaign data to uncover patterns and deliver actionable insights. Support performance measurement, experimentation, and strategic decision-making across the marketing funnel. Translate business questions into structured analyses and data-driven narratives.
Data Infrastructure & Engineering (30%): Design and maintain scalable data pipelines and workflows using SQL, Python, and Databricks. Build and evolve a marketing data lake, integrating APIs and data from multiple platforms and tools. Work across cloud environments (Azure, AWS) to support analytics-ready data at scale.
Project & Delivery Ownership (25%): Serve as project lead or scrum owner across analytics initiatives, planning sprints, managing delivery, and driving alignment. Use tools like JIRA to manage work in an agile environment and ensure timely execution. Collaborate with cross-functional teams to align priorities and execute on roadmap initiatives.
Visualization & Platform Enablement (15%): Build high-impact dashboards and data products using Tableau, with a focus on usability, scalability, and performance. Enable stakeholder self-service through clean data architecture and visualization best practices. Experiment with emerging tools and capabilities, including GenAI for assisted analytics.

Experience:
5+ years of experience in data analytics, digital analytics, or data engineering, ideally in a marketing or commercial context.
Hands-on experience with SQL, Python, and tools such as Databricks, Azure, or AWS.
Proven track record of building and managing data lakes, ETL pipelines, and API integrations.
Strong proficiency in Tableau; experience with Tableau Prep is a plus.
Familiarity with Google Analytics (GA4), GTM, and social media analytics platforms.
Experience working in agile teams, with comfort using JIRA for sprint planning and delivery.
Exposure to predictive analytics, modeling, and GenAI applications is a plus.
Strong communication and storytelling skills; able to lead high-stakes meetings and deliver clear insights to senior stakeholders.
Excellent organizational and project management skills; confident in managing competing priorities.
High attention to detail, ownership mindset, and a collaborative, delivery-focused approach.
Posted 1 week ago
5.0 - 9.0 years
7 - 17 Lacs
Pune
Work from Office
Job Overview:
Diacto is looking for a highly capable Data Architect with 5 to 9 years of experience to lead cloud data platform initiatives with a primary focus on Snowflake and Azure Data Hub. This individual will play a key role in defining the data architecture strategy, implementing robust data pipelines, and enabling enterprise-grade analytics solutions. This is an on-site role based in our Baner, Pune office.

Qualifications:
B.E./B.Tech in Computer Science, IT, or a related discipline; MCS/MCA or equivalent preferred.

Key Responsibilities:
Design and implement enterprise-level data architecture with a strong focus on Snowflake and Azure Data Hub.
Define standards and best practices for data ingestion, transformation, and storage.
Collaborate with cross-functional teams to develop scalable, secure, and high-performance data pipelines.
Lead Snowflake environment setup, configuration, performance tuning, and optimization.
Integrate Azure Data Services with Snowflake to support diverse business use cases.
Implement governance, metadata management, and security policies.
Mentor junior developers and data engineers on cloud data technologies and best practices.

Experience and Skills Required:
5 to 9 years of overall experience in data architecture or data engineering roles.
Strong, hands-on expertise in Snowflake, including design, development, and performance tuning.
Solid experience with Azure Data Hub and Azure Data Services (Data Lake, Synapse, etc.).
Understanding of cloud data integration techniques and ELT/ETL frameworks.
Familiarity with data orchestration tools such as DBT, Airflow, or Azure Data Factory.
Proven ability to handle structured, semi-structured, and unstructured data.
Strong analytical, problem-solving, and communication skills.

Nice to Have:
Certifications in Snowflake and/or Microsoft Azure.
Experience with CI/CD tools like GitHub for code versioning and deployment.
Familiarity with real-time or near-real-time data ingestion.

Why Join Diacto Technologies?
Work with a cutting-edge tech stack and cloud-native architectures. Be part of a data-driven culture with opportunities for continuous learning. Collaborate with industry experts and build transformative data solutions. Competitive salary and benefits with a collaborative work environment in Baner, Pune.

How to Apply:
Option 1 (Preferred): Copy and paste the following link in your browser and submit your application for the automated interview process: https://app.candidhr.ai/app/candidate/gAAAAABoRrcIhRQqJKDXiCEfrQG8Rtsk46Etg4-K8eiwqJ_GELL6ewSC9vl4BjaTwUAHzXZTE3nOtgaiQLCso_vWzieLkoV9Nw==/
Option 2:
1. Visit our website's career section at https://www.diacto.com/careers/
2. Scroll down to the "Who are we looking for?" section
3. Find the listing for "Data Architect (Snowflake)"
4. Proceed with the virtual interview by clicking on "Apply Now."
Posted 2 weeks ago
2.0 - 5.0 years
14 - 17 Lacs
Mumbai
Work from Office
As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, and provide regular support/guidance to project teams on complex coding, issue resolution, and execution. Your primary responsibilities include:
Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements.
Strive for continuous improvements by testing the built solution and working under an agile framework.
Discover and implement the latest technology trends to maximize and build creative solutions.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
Experience with Apache Spark (PySpark): in-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing.
Big Data Technologies: familiarity with Hadoop, HDFS, Kafka, and other big data tools.
Data Engineering Skills: strong understanding of ETL pipelines, data modeling, and data warehousing concepts.
Strong proficiency in Python: expertise in Python programming with a focus on data processing and manipulation.
Data Processing Frameworks: knowledge of data processing libraries such as Pandas and NumPy.
SQL Proficiency: experience writing optimized SQL queries for large-scale data analysis and transformation.
Cloud Platforms: experience working with cloud platforms like AWS, Azure, or GCP, including using cloud storage systems.

Preferred technical and professional experience:
Define, drive, and implement an architecture strategy and standards for end-to-end monitoring.
Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering.
Good to have: experience with detection and prevention tools for company products, platform, and customer-facing systems.
Posted 2 weeks ago
2.0 - 5.0 years
14 - 17 Lacs
Pune
Work from Office
As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, and provide regular support/guidance to project teams on complex coding, issue resolution, and execution. Your primary responsibilities include:
Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements.
Strive for continuous improvements by testing the built solution and working under an agile framework.
Discover and implement the latest technology trends to maximize and build creative solutions.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
Experience with Apache Spark (PySpark): in-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing.
Big Data Technologies: familiarity with Hadoop, HDFS, Kafka, and other big data tools.
Data Engineering Skills: strong understanding of ETL pipelines, data modeling, and data warehousing concepts.
Strong proficiency in Python: expertise in Python programming with a focus on data processing and manipulation.
Data Processing Frameworks: knowledge of data processing libraries such as Pandas and NumPy.
SQL Proficiency: experience writing optimized SQL queries for large-scale data analysis and transformation.
Cloud Platforms: experience working with cloud platforms like AWS, Azure, or GCP, including using cloud storage systems.

Preferred technical and professional experience:
Define, drive, and implement an architecture strategy and standards for end-to-end monitoring.
Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering.
Good to have: experience with detection and prevention tools for company products, platform, and customer-facing systems.
Posted 2 weeks ago
2.0 - 5.0 years
14 - 17 Lacs
Navi Mumbai
Work from Office
As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, and provide regular support/guidance to project teams on complex coding, issue resolution, and execution. Your primary responsibilities include:
Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements.
Strive for continuous improvements by testing the built solution and working under an agile framework.
Discover and implement the latest technology trends to maximize and build creative solutions.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
Experience with Apache Spark (PySpark): in-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing.
Big Data Technologies: familiarity with Hadoop, HDFS, Kafka, and other big data tools.
Data Engineering Skills: strong understanding of ETL pipelines, data modeling, and data warehousing concepts.
Strong proficiency in Python: expertise in Python programming with a focus on data processing and manipulation.
Data Processing Frameworks: knowledge of data processing libraries such as Pandas and NumPy.
SQL Proficiency: experience writing optimized SQL queries for large-scale data analysis and transformation.
Cloud Platforms: experience working with cloud platforms like AWS, Azure, or GCP, including using cloud storage systems.

Preferred technical and professional experience:
Define, drive, and implement an architecture strategy and standards for end-to-end monitoring.
Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering.
Good to have: experience with detection and prevention tools for company products, platform, and customer-facing systems.
Posted 2 weeks ago
2.0 - 5.0 years
14 - 17 Lacs
Bengaluru
Work from Office
As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, and provide regular support/guidance to project teams on complex coding, issue resolution, and execution. Your primary responsibilities include:
Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements.
Strive for continuous improvements by testing the built solution and working under an agile framework.
Discover and implement the latest technology trends to maximize and build creative solutions.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
Experience with Apache Spark (PySpark): in-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing.
Big Data Technologies: familiarity with Hadoop, HDFS, Kafka, and other big data tools.
Data Engineering Skills: strong understanding of ETL pipelines, data modeling, and data warehousing concepts.
Strong proficiency in Python: expertise in Python programming with a focus on data processing and manipulation.
Data Processing Frameworks: knowledge of data processing libraries such as Pandas and NumPy.
SQL Proficiency: experience writing optimized SQL queries for large-scale data analysis and transformation.
Cloud Platforms: experience working with cloud platforms like AWS, Azure, or GCP, including using cloud storage systems.

Preferred technical and professional experience:
Define, drive, and implement an architecture strategy and standards for end-to-end monitoring.
Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering.
Good to have: experience with detection and prevention tools for company products, platform, and customer-facing systems.
Posted 2 weeks ago
4.0 - 9.0 years
15 - 30 Lacs
Gurugram, Chennai
Work from Office
Role & responsibilities:
• Assume ownership of Data Engineering projects from inception to completion.
• Implement fully operational Unified Data Platform solutions in production environments using technologies like Databricks, Snowflake, Azure Synapse, etc.
• Showcase proficiency in data modelling and data architecture.
• Utilize modern data transformation tools such as DBT (Data Build Tool) to streamline and automate data pipelines (nice to have).
• Implement DevOps practices for continuous integration and deployment (CI/CD) to ensure robust and scalable data solutions (nice to have).
• Maintain code versioning and collaborate effectively within a version-controlled environment.
• Familiarity with data ingestion and orchestration tools such as Azure Data Factory, Azure Synapse, AWS Glue, etc.
• Set up processes for data management and templatized analytical modules/deliverables.
• Continuously improve processes with a focus on automation and partner with different teams to develop system capability.
• Proactively seek opportunities to help and mentor team members by sharing knowledge and expanding skills.
• Communicate effectively with internal and external stakeholders.
• Coordinate with cross-functional team members to ensure high quality in deliverables with no impact on timelines.

Preferred candidate profile:
• Expertise in computer programming languages such as Python and advanced SQL.
• Working knowledge of data warehousing, data marts, and business intelligence, with hands-on experience implementing fully operational data warehouse solutions in production environments.
• 3+ years of working knowledge of big data tools (Hive, Spark) along with ETL tools and cloud platforms.
• 3+ years of relevant experience in either Snowflake or Databricks; certification in Snowflake or Databricks is highly recommended.
• Proficient in data modelling and ELT techniques.
• Experienced with any of the ETL/data pipeline orchestration tools such as Azure Data Factory, AWS Glue, Azure Synapse, Airflow, etc.
• Experience ingesting data from different data sources such as RDBMS, ERP systems, APIs, etc.
• Knowledge of modern data transformation tools, particularly DBT (Data Build Tool), for streamlined and automated data pipelines (nice to have).
• Experience in implementing DevOps practices for CI/CD to ensure robust and scalable data solutions (nice to have).
• Proficient in maintaining code versioning and effective collaboration within a version-controlled environment.
• Ability to work effectively as an individual contributor and in small teams; experience mentoring junior team members.
• Excellent problem-solving and troubleshooting ability, with experience supporting and working with cross-functional teams in a dynamic environment.
• Strong verbal and written communication skills, with the ability to communicate effectively and articulate results and issues to internal and client teams.
Posted 2 weeks ago
5.0 - 7.0 years
9 - 11 Lacs
Hyderabad
Work from Office
Role: PySpark Developer
Locations: Multiple
Work Mode: Hybrid
Interview Mode: Virtual (2 rounds)
Type: Contract-to-Hire (C2H)

Job Summary:
We are looking for a skilled PySpark Developer with hands-on experience in building scalable data pipelines and processing large datasets. The ideal candidate will have deep expertise in Apache Spark, Python, and modern data engineering tools in cloud environments such as AWS.

Key Skills & Responsibilities:
Strong expertise in PySpark and Apache Spark for batch and real-time data processing.
Experience in designing and implementing ETL pipelines, including data ingestion, transformation, and validation.
Proficiency in Python for scripting, automation, and building reusable components.
Hands-on experience with scheduling tools like Airflow or Control-M to orchestrate workflows.
Familiarity with the AWS ecosystem, especially S3 and related file system operations.
Strong understanding of Unix/Linux environments and shell scripting.
Experience with Hadoop, Hive, and platforms like Cloudera or Hortonworks.
Ability to handle CDC (Change Data Capture) operations on large datasets.
Experience in performance tuning, optimizing Spark jobs, and troubleshooting.
Strong knowledge of data modeling, data validation, and writing unit test cases.
Exposure to real-time and batch integration with downstream/upstream systems.
Working knowledge of Jupyter Notebook, Zeppelin, or PyCharm for development and debugging.
Understanding of Agile methodologies, with experience in CI/CD tools (e.g., Jenkins, Git).

Preferred Skills:
Experience in building or integrating APIs for data provisioning.
Exposure to ETL or reporting tools such as Informatica, Tableau, Jasper, or QlikView.
Familiarity with AI/ML model development using PySpark in cloud environments.
Posted 2 weeks ago
4.0 - 9.0 years
4 - 9 Lacs
Gurgaon / Gurugram, Haryana, India
On-site
Responsibilities:
Data Pipeline Engineers are expected to be involved from the inception of projects: understand requirements, and architect, develop, deploy, and maintain data pipelines (ETL/ELT). Typically, they work in a multi-disciplinary squad (we follow Agile!), which involves partnering with program and product managers to expand the product offering based on business demands. Design is an iterative process, whether for UX, services, or infrastructure. Our goal is to drive up user engagement and adoption of the platform while constantly working towards modernizing and improving platform performance and scalability. Deployment and maintenance require close interaction with various teams. This requires maintaining a positive and collaborative working relationship with teams within DOE as well as with the wider Aladdin developer community. Production support for applications is usually required for issues that cannot be resolved by the operations team. Creative and inventive problem-solving skills for reduced turnaround times are highly valued. Preparing user documentation to maintain both development and operations continuity is integral to the role.

An ideal candidate would have:
At least 4+ years of experience as a data engineer.
Experience in SQL, Sybase, and Linux is a must.
Experience coding in two of these languages for server-side/data processing: Java, Python, C++.
2+ years of experience using a modern data stack (Spark, Snowflake, BigQuery, etc.) on cloud platforms (Azure, GCP, AWS).
Experience building ETL/ELT pipelines for complex data engineering projects (using Airflow, dbt, or Great Expectations would be a plus).
Experience with database modeling and normalization techniques.
Experience with object-oriented design patterns.
Experience with DevOps tools like Git, Maven, Jenkins, GitLab CI, and Azure DevOps.
Experience with Agile development concepts and related tools.
Ability to troubleshoot and fix performance issues across the codebase and database queries.
Excellent written and verbal communication skills.
Ability to operate in a fast-paced environment.
Strong interpersonal skills with a can-do attitude under challenging circumstances.
BA/BS or equivalent practical experience.

Skills that would be a plus:
Perl, ETL tools (Informatica, Talend, dbt, etc.).
Experience with Snowflake or other cloud data warehousing products.
Exposure to workflow management tools such as Airflow.
Exposure to messaging platforms such as Kafka.
Exposure to NoSQL platforms such as Cassandra and MongoDB.
Building and delivering REST APIs.
Posted 2 weeks ago
11.0 - 17.0 years
20 - 35 Lacs
Indore, Hyderabad
Work from Office
Greetings of the day!
We have a job opening for Microsoft Fabric + ADF with one of our clients. If you are interested in this position, please share an updated resume at this email id: shaswati.m@bct-consulting.com

Primary Skill: Microsoft Fabric
Secondary Skill 1: Azure Data Factory (ADF)

12+ years of experience in Microsoft Azure data engineering for analytical projects.
Proven expertise in designing, developing, and deploying high-volume, end-to-end ETL pipelines for complex models, including batch and real-time data integration frameworks using Azure, Microsoft Fabric, and Databricks.
Extensive hands-on experience with Azure Data Factory, Databricks (with Unity Catalog), Azure Functions, Synapse Analytics, Data Lake, Delta Lake, and Azure SQL Database for managing and processing large-scale data integrations.
Experience in Databricks cluster optimization and workflow management to ensure cost-effective and high-performance processing.
Sound knowledge of data modelling, data governance, data quality management, and data modernization processes.
Develop architecture blueprints and technical design documentation for Azure-based data solutions.
Provide technical leadership and guidance on cloud architecture best practices, ensuring scalable and secure solutions.
Keep abreast of emerging Azure technologies and recommend enhancements to existing systems.
Lead proofs of concept (PoCs) and adopt agile delivery methodologies for solution development and delivery.
Posted 2 weeks ago