5.0 - 8.0 years
10 - 17 Lacs
Chennai
Work from Office
Data Engineer, Chennai, India.
About the job: The Data Engineer is a cornerstone of Vendasta's R&D team, driving the efficient processing, organization, and delivery of clean, structured data in support of business intelligence and decision-making. By developing and maintaining scalable ELT pipelines, they ensure data reliability and scalability, adhering to Vendasta's commitment to delivering data solutions aligned with evolving business needs.
Your Impact:
- Design, implement, and maintain scalable ELT pipelines within a Kimball Architecture data warehouse.
- Ensure robustness against failures and data entry errors, managing data conformation, de-duplication, survivorship, and coercion.
- Manage historical and hierarchical data structures, ensuring usability for the Business Intelligence (BI) team and scalability for future growth.
- Partner with BI teams to prioritize and deliver data solutions while maintaining alignment with business objectives.
- Work closely with source system owners to extract, clean, and integrate data into the data warehouse. Advocate for and influence improvements in source data integrity.
- Champion best practices in data engineering, including governance, lineage tracking, and quality assurance.
- Collaborate with Site Reliability Engineering (SRE) teams to optimize cloud infrastructure usage.
- Operate within an Agile framework, contributing to team backlogs via Kanban or Scrum processes as appropriate.
- Balance short-term deliverables with long-term technical investments in collaboration with BI and engineering management.
What you bring to the table:
- 5-8 years of proficiency in ETL and SQL, plus experience with cloud-based platforms such as Google Cloud (BigQuery, DBT, Looker).
- In-depth understanding of Kimball data warehousing principles, including the 34 subsystems of ETL.
- Strong problem-solving skills for diagnosing and resolving data quality issues.
- Ability to engage with BI teams and source system owners to prioritize and deliver data solutions effectively.
- Eagerness to advocate for data integrity improvements while respecting the boundaries of data mesh principles.
- Ability to balance immediate needs with long-term technical investments.
- Understanding of cloud infrastructure for effective resource management in partnership with SRE teams.
About Vendasta: So what do we actually do? Vendasta is a SaaS company composed of global brands including MatchCraft, Yesware, and Broadly, that builds and sells software and services to help small businesses operate more efficiently as a team, meet more client needs, and provide incredible client experiences. We have offices in Saskatoon, Saskatchewan; Boston and Boca Raton, Florida; and Chennai, India.
Perks:
- Health insurance benefits
- Paid time off
- Training & Career Development: professional development plans, leadership workshops, mentorship programs, and more!
- Free snacks, hot beverages, and catered lunches on Fridays
- Culture comprised of our core values: Drive, Innovation, Respect, and Agility
- Night shift premium
- Provident Fund
Posted 9 hours ago
6.0 - 10.0 years
1 - 1 Lacs
Bengaluru
Remote
We are looking for a highly skilled Senior ETL Consultant with strong expertise in Informatica Intelligent Data Management Cloud (IDMC) components such as IICS, CDI, CDQ, IDQ, CAI, along with proven experience in Databricks.
Posted 12 hours ago
2.0 - 6.0 years
5 - 9 Lacs
Pune
Work from Office
Data Engineer (Data Conversion Resources - 4 openings). Common skills: SQL, GCP BigQuery, ETL pipelines using Python/Airflow, experience with Spark/Hive/HDFS, and data modeling for data conversion. Prior experience working on a conversion/migration HR project is an additional skill needed along with the skills mentioned above. The Data Engineer should also bring HR domain knowledge; all other functional-area requirements are given by the customer, Uber.
Posted 1 day ago
8.0 - 13.0 years
14 - 24 Lacs
Hyderabad
Hybrid
Key Responsibilities:
• Designing and building a scalable data warehouse using Azure Data Factory (ADF), Azure Synapse Pipelines, and SSIS.
• Creating visually appealing BI dashboards using Power BI and other reporting tools to deliver data-driven insights.
• Collaborating with cross-functional teams, communicating complex concepts, and ensuring data governance and quality standards.
Basic Qualifications:
• 9-12 years of strong Business Intelligence/Business Analytics experience or equivalency is preferred.
• B.Tech/B.E. - Any Specialization, M.E/M.Tech - Any Specialization
• Strong proficiency in SQL and experience with database technologies (e.g., SQL Server).
• Solid understanding of data modeling, data warehousing, and ETL concepts.
• Excellent analytical and problem-solving skills, with the ability to translate complex business requirements into practical solutions.
• Strong communication and collaboration skills, with the ability to effectively interact with stakeholders at all levels of the organization.
• Proven ability to work independently and manage multiple priorities in a fast-paced environment.
• Must have worked on ingesting data into an Enterprise Data Warehouse.
• Good experience in Business Intelligence and Reporting, including but not limited to on-prem and cloud technologies.
• Must have exposure to the complete MSBI stack, including Power BI, and the ability to deliver end-to-end BI solutions independently.
• Must have technical expertise in creating data pipelines/data integration strategy using SSIS/ADF/Synapse Pipelines.
Preferred Qualifications:
• Hands-on experience with DBT and Fabric will be preferred.
• Proficiency in programming languages such as Python or R is a plus.
Posted 1 day ago
3.0 - 8.0 years
5 - 11 Lacs
Pune, Mumbai (All Areas)
Hybrid
Overview: TresVista is looking to hire an Associate in its Data Intelligence Group team, who will be primarily responsible for managing clients as well as monitoring/executing projects for both clients and internal teams. The Associate may directly manage a team of up to 3-4 Data Engineers & Analysts across multiple data engineering efforts for our clients with varied technologies. They would be joining the current team of 70+ members, which is a mix of Data Engineers, Data Visualization Experts, and Data Scientists.
Roles and Responsibilities:
- Interacting with the client (internal or external) to understand their problems and work on solutions that address their needs
- Driving projects and working closely with a team of individuals to ensure proper requirements are identified, useful user stories are created, and work is planned logically and efficiently to deliver solutions that support changing business requirements
- Managing the various activities within the team, strategizing how to approach tasks, creating timelines and goals, and distributing information/tasks to the various team members
- Conducting meetings, documenting, and communicating findings effectively to clients, management, and cross-functional teams
- Creating ad-hoc reports for multiple internal requests across departments
- Automating processes using data transformation tools
Prerequisites:
- Strong analytical, problem-solving, interpersonal, and communication skills
- Advanced knowledge of DBMS and data modelling, along with advanced querying capabilities using SQL
- Working experience in cloud technologies (GCP/AWS/Azure/Snowflake)
- Prior experience in building and deploying ETL/ELT pipelines using CI/CD and orchestration tools such as Apache Airflow, GCP Workflows, etc.
- Proficiency in Python for building ETL/ELT processes and data modeling
- Proficiency in reporting and dashboard creation using Power BI/Tableau
- Knowledge of building ML models and leveraging Gen AI for modern architectures
- Experience working with version control platforms like GitHub
- Familiarity with IaC tools like Terraform and Ansible is good to have
- Stakeholder management and client communication experience would be preferred
- Experience in the Financial Services domain will be an added plus
- Experience with Machine Learning tools and techniques will be good to have
Experience: 3-7 years
Education: BTech/MTech/BE/ME/MBA in Analytics
Compensation: The compensation structure will be as per industry standards
Posted 1 day ago
8.0 - 13.0 years
15 - 30 Lacs
Bengaluru
Work from Office
Role: Senior Data Engineer
Location: Bangalore - Hybrid
Experience: 10+ Years
Job Requirements:
- ETL & Data Pipelines: Experience building and maintaining ETL pipelines with large data sets using AWS Glue, EMR, Kinesis, Kafka, and CloudWatch
- Programming & Data Processing: Strong Python development experience with proficiency in Spark or PySpark; experience in using APIs
- Database Management: Strong skills in writing SQL queries and performance tuning in AWS Redshift; proficient with other industry-leading RDBMS such as MS SQL Server and PostgreSQL
- AWS Services: Proficient in working with AWS services including AWS Lambda, EventBridge, Step Functions, SNS, SQS, S3, and ML models
Interested candidates can share their resume at Neesha1@damcogroup.com
Posted 1 day ago
2.0 - 4.0 years
10 - 18 Lacs
Bengaluru
Work from Office
Role & responsibilities:
- Design and Build Data Infrastructure: Develop scalable data pipelines and data lake/warehouse solutions for real-time and batch data using cloud and open-source tools.
- Develop & Automate Data Workflows: Create Python-based ETL/ELT processes for data ingestion, validation, integration, and transformation across multiple sources.
- Ensure Data Quality & Governance: Implement monitoring systems, resolve data quality issues, and enforce data governance and security best practices.
- Collaborate & Mentor: Work with cross-functional teams to deliver data solutions, and mentor junior engineers as the team grows.
- Explore New Tech: Research and implement emerging tools and technologies to improve system performance and scalability.
Posted 1 day ago
4.0 - 9.0 years
10 - 14 Lacs
Pune
Work from Office
Job Title: Sales Excellence - Client Success - Data Engineering Specialist - CF
Management Level: ML9
Location: Open
Must have skills: GCP, SQL, Data Engineering, Python
Good to have skills: Managing ETL pipelines
Job Summary:
We are: Sales Excellence. Sales Excellence at Accenture empowers our people to compete, win, and grow. We provide everything they need to grow their client portfolios, optimize their deals, and enable their sales talent, all driven by sales intelligence. The team will be aligned to Client Success, which is a new function to support Accenture's approach to putting client value and client experience at the heart of everything we do to foster client love. Our ambition is that every client loves working with Accenture and believes we're the ideal partner to help them create and realize their vision for the future beyond their expectations.
You are: A builder at heart, curious about new tools and their usefulness, eager to create prototypes, and adaptable to changing paths. You enjoy sharing your experiments with a small team and are responsive to the needs of your clients.
The work: The Center of Excellence (COE) enables Sales Excellence to deliver best-in-class service offerings to Accenture leaders, practitioners, and sales teams. As a member of the COE Analytics Tools & Reporting team, you will help build and enhance the data foundation for reporting and analytics tools to provide insights on underlying trends and key drivers of the business.
Roles & Responsibilities:
- Collaborate with the Client Success, Analytics COE, CIO Engineering/DevOps teams, and stakeholders to build and enhance the Client Success data lake.
- Write complex SQL scripts to transform data for the creation of dashboards or reports, and validate the accuracy and completeness of the data.
- Build automated solutions to support any business operation or data transfer.
- Document and build efficient data models for reporting and analytics use cases.
- Assure Data Lake data accuracy, consistency, and timeliness while ensuring user acceptance and satisfaction.
- Work with Client Success, Sales Excellence COE members, the CIO Engineering/DevOps team, and Analytics Leads to standardize data in the data lake.
Professional & Technical Skills:
- Bachelor's degree or equivalent experience in Data Engineering, analytics, or a similar field.
- At least 4 years of professional experience in developing and managing ETL pipelines.
- A minimum of 2 years of GCP experience.
- Ability to write complex SQL and prepare data for dashboarding.
- Experience in managing and documenting data models.
- Understanding of data governance and policies.
- Proficiency in Python and SQL scripting languages.
- Ability to translate business requirements into technical specifications for the engineering team.
- Curiosity, creativity, a collaborative attitude, and attention to detail.
- Ability to explain technical information to technical as well as non-technical users.
- Ability to work remotely with minimal supervision in a global environment.
- Proficiency with Microsoft Office tools.
Additional Information:
- Master's degree in analytics or a similar field.
- Data visualization or reporting using text data as well as sales, pricing, and finance data.
- Ability to prioritize workload and manage downstream stakeholders.
Qualification: Bachelor's degree or equivalent experience in Data Engineering, analytics, or a similar field.
Experience: Minimum 5+ years of experience is required.
Posted 1 day ago
3.0 - 7.0 years
10 - 18 Lacs
Pune
Work from Office
Roles and Responsibilities:
- Design, develop, and maintain automated testing frameworks using AWS services such as Glue, Lambda, Step Functions, etc.
- Develop data pipelines using Delta Lake and ETL processes to extract insights from large datasets.
- Collaborate with cross-functional teams to identify requirements for test cases and create comprehensive test plans.
- Ensure high-quality deliverables by executing thorough testing procedures and reporting defects.
Desired Candidate Profile:
- 3-7 years of experience in QA Automation with a focus on the AWS native stack (Glue, Lambda).
- Strong understanding of SQL concepts and ability to write complex queries.
- Experience working with big data technologies like Hadoop/Hive/PySpark is an added advantage.
Posted 1 day ago
4.0 - 8.0 years
10 - 18 Lacs
Hyderabad
Hybrid
About the Role: We are looking for a skilled and motivated Data Engineer with strong experience in Python programming and Google Cloud Platform (GCP) to join our data engineering team. The ideal candidate will be responsible for designing, developing, and maintaining robust and scalable ETL (Extract, Transform, Load) data pipelines. The role involves working with various GCP services, implementing data ingestion and transformation logic, and ensuring data quality and consistency across systems.
Key Responsibilities:
- Design, develop, test, and maintain scalable ETL data pipelines using Python.
- Work extensively with Google Cloud Platform (GCP) services such as: Dataflow for real-time and batch data processing; Cloud Functions for lightweight serverless compute; BigQuery for data warehousing and analytics; Cloud Composer for orchestration of data workflows (based on Apache Airflow); Google Cloud Storage (GCS) for managing data at scale; IAM for access control and security; Cloud Run for containerized applications.
- Perform data ingestion from various sources and apply transformation and cleansing logic to ensure high-quality data delivery.
- Implement and enforce data quality checks, validation rules, and monitoring.
- Collaborate with data scientists, analysts, and other engineering teams to understand data needs and deliver efficient data solutions.
- Manage version control using GitHub and participate in CI/CD pipeline deployments for data projects.
- Write complex SQL queries for data extraction and validation from relational databases such as SQL Server, Oracle, or PostgreSQL.
- Document pipeline designs, data flow diagrams, and operational support procedures.
Required Skills:
- 4-6 years of hands-on experience in Python for backend or data engineering projects.
- Strong understanding and working experience with GCP cloud services (especially Dataflow, BigQuery, Cloud Functions, Cloud Composer, etc.).
- Solid understanding of data pipeline architecture, data integration, and transformation techniques.
- Experience with version control systems like GitHub and knowledge of CI/CD practices.
- Strong experience in SQL with at least one enterprise database (SQL Server, Oracle, PostgreSQL, etc.).
Good to Have (Optional Skills):
- Experience working with the Snowflake cloud data platform.
- Hands-on knowledge of Databricks for big data processing and analytics.
- Familiarity with Azure Data Factory (ADF) and other Azure data engineering tools.
Posted 2 days ago
8.0 - 12.0 years
5 - 10 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Teradata to Snowflake and Databricks migrations on Azure Cloud; data migration projects, including complex migrations to Databricks; strong expertise in ETL pipeline design and optimization, particularly for cloud environments and large-scale data migration.
Posted 2 days ago
2.0 - 5.0 years
6 - 10 Lacs
Chennai
Work from Office
Job Title: Junior AI Engineer / Data Engineer
Location: Chennai
Reports To: Senior AI Engineer / Data Architect
Job Summary: This role is ideal for an early-career engineer eager to develop robust data pipelines and support the development of AI/ML models. The Junior AI Engineer will primarily focus on data preparation, transformation, and infrastructure to support scalable AI systems.
Key Responsibilities:
- Build and maintain ETL pipelines for AI applications.
- Assist in data wrangling, cleaning, and feature engineering.
- Support data scientists and AI engineers with curated, high-quality datasets.
- Contribute to data governance and documentation.
- Collaborate on proof-of-concepts and prototypes of AI solutions.
Required Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 2+ years of experience in data engineering.
- Proficiency in Python and SQL; exposure to the Azure platform is a plus.
- Basic understanding of AI/ML concepts.
Posted 2 days ago
6.0 - 8.0 years
6 - 8 Lacs
Pune, Maharashtra, India
On-site
Role Description: You will be joining the Anti-Financial Crime (AFC) Technology team and will work as part of a multi-skilled agile squad, specializing in designing, developing, and testing engineering solutions, as well as troubleshooting and resolving technical issues, to enable the Transaction Monitoring (TM) systems to identify money laundering or terrorism financing. You will have the opportunity to work on challenging problems with large, complex datasets and play a crucial role in managing and optimizing the data flows within Transaction Monitoring. You will work across Cloud and Big Data technologies, building high-performance systems to process large volumes of data using the latest technologies, optimizing the performance of existing data pipelines, and designing and creating new ETL frameworks and solutions.
Deutsche Bank's Corporate Bank division is a leading provider of cash management, trade finance, and securities finance. We complete green-field projects that deliver the best Corporate Bank - Securities Services products in the world. Our team is diverse, international, and driven by a shared focus on clean code and valued delivery. At every level, agile minds are rewarded with competitive pay, support, and opportunities to excel.
You will work as part of a cross-functional agile delivery team. You will bring an innovative approach to software development, focusing on using the latest technologies and practices, as part of a relentless focus on business value. You will be someone who sees engineering as a team activity, with a predisposition to open code, open discussion, and creating a supportive, collaborative environment. You will be ready to contribute to all stages of software delivery, from initial analysis right through to production support.
What we'll offer you: As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
- Best-in-class leave policy
- Gender-neutral parental leaves
- 100% reimbursement under the childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Employee Assistance Program for you and your family members
- Comprehensive hospitalization insurance for you and your dependents
- Accident and term life insurance
- Complementary health screening for those 35 yrs. and above
Your key responsibilities: As a Vice President, your role will include management and leadership responsibilities, such as:
- Leading by example, by creating efficient ETL workflows to extract data from multiple sources, transform it according to business requirements, and load it into the TM systems.
- Implementing data validation and cleansing techniques to maintain high data quality, and detective controls to ensure the integrity and completeness of data being prepared through our data pipelines.
- Working closely with other developers and architects to design and implement solutions that meet business needs while ensuring that solutions are scalable, supportable, and sustainable.
- Ensuring that all engineering work complies with industry and DB standards, regulations, and best practices.
Your skills and experience:
- Good analytical problem-solving capabilities with excellent communication skills, written and oral, enabling the authoring of documents that will support a technical team in performing development work.
- Experience in Google Cloud Platform is preferred, but other cloud solutions such as AWS would be considered.
- 5+ years' experience in Oracle, Control-M, Linux, and Agile methodology, and prior experience working in an environment using internally engineered components (database, operating system, etc.).
- 5+ years' experience in Hadoop, Hive, Oracle, Control-M, and Java development is required, while experience in OpenShift and PySpark is preferred.
- Strong understanding of designing and delivering complex ETL pipelines in a regulatory space.
Posted 2 days ago
3.0 - 5.0 years
10 - 15 Lacs
Ahmedabad
Work from Office
Design, deliver & maintain the appropriate data solution to provide the correct data for analytical development to address key issues within the organization. Gather detailed data requirements with a cross-functional team to deliver quality results.
Required Candidate profile: Strong experience with cloud services within Azure, AWS, or GCP platforms (preferably Azure). Strong experience with analytical tools (preferably SQL, dbt, Snowflake, BigQuery, Tableau).
Posted 4 days ago
5.0 - 7.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose - the relentless pursuit of a world that works better for people - we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.
Inviting applications for the role of Principal Consultant - AWS Developer. We are seeking an experienced developer with expertise in AWS-based big data solutions, particularly leveraging Apache Spark on AWS EMR, along with strong backend development skills in Java and Spring. The ideal candidate will also possess a solid background in data warehousing, ETL pipelines, and large-scale data processing systems.
Responsibilities:
- Design and implement scalable data processing solutions using Apache Spark on AWS EMR.
- Develop microservices and backend components using Java and the Spring framework.
- Build, optimize, and maintain ETL pipelines for structured and unstructured data.
- Integrate data pipelines with AWS services such as S3, Lambda, Glue, Redshift, and Athena.
- Collaborate with data architects, analysts, and DevOps teams to support data warehousing initiatives.
- Write efficient, reusable, and reliable code following best practices.
- Ensure data quality, governance, and lineage across the architecture.
- Troubleshoot and optimize Spark jobs and cloud-based processing workflows.
- Participate in code reviews, testing, and deployments in Agile environments.
Qualifications we seek in you!
Minimum Qualifications:
- Bachelor's degree
Preferred Qualifications/Skills:
- Strong experience with Apache Spark and AWS EMR in production environments.
- Solid understanding of the AWS ecosystem, including services like S3, Lambda, Glue, Redshift, and CloudWatch.
- Proven experience in designing and managing large-scale data warehousing systems.
- Expertise in building and maintaining ETL pipelines and data transformation workflows.
- Strong SQL skills and familiarity with performance tuning for analytical queries.
- Experience working in Agile development environments using tools such as Git, JIRA, and CI/CD pipelines.
- Familiarity with data modeling concepts and tools (e.g., Star Schema, Snowflake Schema).
- Knowledge of data governance tools and metadata management.
- Experience with containerization (Docker, Kubernetes) and serverless architectures.
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. For more information, visit www.genpact.com. Follow us on Twitter, Facebook, LinkedIn, and YouTube. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
Posted 5 days ago
8.0 - 12.0 years
16 - 27 Lacs
Chennai, Bengaluru
Work from Office
Role & responsibilities:
- Design, develop, and optimize scalable ETL pipelines using PySpark and AWS data services
- Work with structured and semi-structured data from various sources and formats (CSV, JSON, Parquet)
- Build reusable data transformations using Spark DataFrames, RDDs, and Spark SQL
- Implement data validation and quality checks, and ensure schema evolution across data sources
- Manage deployment and monitoring of Spark jobs using AWS EMR, Glue, Lambda, and CloudWatch
- Collaborate with product owners, architects, and data scientists to deliver robust data workflows
- Tune job performance, manage partitioning strategies, and reduce job latency/cost
- Contribute to version control, CI/CD processes, and production support
Preferred candidate profile:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field
- 5+ years of experience in PySpark, Spark SQL, RDDs, UDFs, and Spark optimization
- Strong experience in building ETL workflows for large-scale data processing
- Solid understanding of the AWS cloud ecosystem, especially S3, EMR, Glue, Lambda, Athena
- Proficiency in Python, SQL, and shell scripting
- Experience with data lakes, partitioning strategies, and file formats (e.g., Parquet, ORC)
- Familiarity with Git, Jenkins, and automated testing frameworks (e.g., PyTest)
- Experience with Redshift, Snowflake, or other DW platforms
- Exposure to data governance, cataloging, or DQ frameworks
- Terraform or infrastructure-as-code experience
- Understanding of Spark internals, DAGs, and caching strategies
Posted 5 days ago
3.0 - 6.0 years
10 - 17 Lacs
Pune
Hybrid
Software Engineer - Baner, Pune, Maharashtra
Department: Software & Automation
Employee Type: Permanent
Experience Range: 3 - 6 Years
Qualification: Bachelor's or Master's degree in Computer Science, IT, or a related field.
Roles & Responsibilities:
- Facilitate Agile ceremonies and lead Scrum practices.
- Support the Product Owner in backlog management and team organization.
- Promote Agile best practices (Scrum, SAFe) and continuous delivery improvements.
- Develop and maintain scalable data pipelines using AWS and Databricks (secondary focus).
- Collaborate with architects and contribute to solution design (support role).
- Occasionally travel for global team collaboration.
Requirements:
- Scrum Master or Agile team facilitation experience.
- Familiarity with Python and Databricks (PySpark, SQL).
- Good AWS cloud exposure (S3, EC2 basics).
Good to Have:
- Certified Scrum Master (CSM) or equivalent.
- Experience with ETL pipelines or data engineering concepts.
- Multi-cultural team collaboration experience.
Software Skills: JIRA, Confluence, Python (basic to intermediate), Databricks (basic)
Posted 5 days ago
13.0 - 18.0 years
14 - 19 Lacs
Pune
Hybrid
So, what's the role all about? We are seeking a highly skilled and motivated Engineering Tech Manager to be a part of our SURVEIL-X team, focused on building scalable compliance solutions for financial markets. You'll drive R&D delivery, technical excellence, and quality; manage a high-performing team; and ensure delivery of robust surveillance systems aligned with regulatory requirements.
How will you make an impact?
- Lead and mentor a team of software engineers in building scalable surveillance systems.
- Drive the design, development, and maintenance of applications.
- Collaborate with cross-functional teams including App Ops, DevOps, Professional Services, and Product.
- Own project delivery timelines, code quality, and system architecture.
- Ensure best practices in software engineering, including CI/CD, code reviews, and testing.
Have you got what it takes? Key Technical Skills:
- Strong expertise in Python - architecture, development, and optimization.
- Strong expertise in building data and ETL pipelines.
- Strong expertise in message-oriented applications.
- Technical know-how of AWS services and cloud-native development.
- Technical know-how of NoSQL and object storage.
- Good knowledge of RDBMS - MS SQL, PostgreSQL.
- Technical experience with indexing/search technologies (preferably Elasticsearch).
- Experience with containerization.
Good to Have:
- Experience in a financial markets compliance domain.
- Experience with Helm and Kubernetes container orchestration.
Qualifications:
- Bachelor's or Master's degree in Computer Science or a related field.
- 13-15 years of total experience with at least 2-3 years in a leadership or managerial role.
What's in it for you? Join an ever-growing, market-disrupting, global company where the teams - comprised of the best of the best - work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NiCE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NiCEr!
Enjoy NiCE-FLEX! At NiCE, we work according to the NiCE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.
Requisition ID: 7672
Reporting into: Director
Role Type: Manager
Posted 6 days ago
3.0 - 6.0 years
20 - 25 Lacs
Bengaluru
Hybrid
Join us as a Data Engineer II in Bengaluru! Build scalable data pipelines using Python, SQL, AWS, Airflow, and Kafka. Drive real-time and batch data systems across analytics, ML, and product teams. A hybrid work option is available.
Required Candidate profile: 3+ yrs in data engineering with strong Python, SQL, AWS, Airflow, Spark, Kafka, Debezium, Redshift, ETL & CDC experience. Must know data lakes, warehousing, and orchestration tools.
Posted 6 days ago
3.0 - 5.0 years
5 - 13 Lacs
Chennai
Hybrid
Job Title: Quality Engineer
Location: Chennai
Reports To: Senior AI Engineer / Data Architect
Job Summary: The Quality Engineer will be responsible for ensuring the reliability, functionality, and performance of software products, particularly in AI/data-driven applications. The role includes test planning, automation, and manual validation of data and models.
Key Responsibilities:
- Design and execute test plans and test cases for web, backend, and AI systems.
- Develop automated tests for data pipelines, APIs, and model outputs.
- Design and execute comprehensive test plans for AI-driven ETL workflows, data pipelines, and orchestration agents.
- Collaborate with AI engineers and data architects to define measurable acceptance criteria for agent behavior.
- Monitor performance benchmarks, provide feedback for optimization, and collaborate with developers, AI engineers, and DevOps teams.
Required Qualifications:
- Bachelor's degree in Computer Science or a related field
- 3+ years of experience in software quality assurance or test automation
- Strong understanding of data processing, ETL pipelines, and data quality validation
- Proficient in Python, with experience writing automated tests using PyTest or similar frameworks
- Experience with data testing tools (e.g., dbt tests)
- Familiarity with cloud-based data platforms (Azure preferred) and orchestration tools
- Exposure to AI or ML systems, especially those involving autonomous workflows or agent-based reasoning
Posted 6 days ago
6.0 - 11.0 years
6 - 11 Lacs
Delhi, India
On-site
- Developing ETL pipelines involving big data.
- Developing data processing/analytics applications primarily using PySpark.
- Experience of developing applications on cloud (AWS), mostly using services related to storage, compute, ETL, DWH, analytics, and streaming.
- Clear understanding and ability to implement distributed storage, processing, and scalable applications.
- Experience of working with SQL and NoSQL databases.
- Ability to write and analyze SQL, HQL, and other query languages for NoSQL databases.
- Proficiency in writing distributed, scalable data processing code using PySpark, Python, and related libraries.
Data Engineer AEP Competency:
- Experience of developing applications that consume services exposed as REST APIs.
Special consideration given for:
- Experience of working with container-orchestration systems like Kubernetes.
- Experience of working with any enterprise-grade ETL tools.
- Experience/knowledge with Adobe Experience Cloud solutions.
- Experience/knowledge with Web Analytics or Digital Marketing.
- Experience/knowledge with Google Cloud platforms.
- Experience/knowledge with Data Science, ML/AI, R, or Jupyter.
Posted 6 days ago
6.0 - 11.0 years
25 - 37 Lacs
Hyderabad, Bengaluru, Delhi / NCR
Work from Office
Azure expertise: proven experience with Azure Cloud services, especially Azure Data Factory, Azure SQL Database & Azure Databricks. Expert in PySpark data processing & analytics. Strong background in building and optimizing data pipelines and workflows.
Required Candidate profile: Solid experience with data modeling, ETL processes & data warehousing. Performance tuning: ability to optimize data pipelines & jobs to ensure scalability & performance, troubleshooting & resolving performance issues.
Posted 1 week ago
7.0 - 12.0 years
20 - 35 Lacs
Pune
Hybrid
Job Duties and Responsibilities: We are looking for a self-starter to join our Data Engineering team. You will work in a fast-paced environment where you will get an opportunity to build and contribute to the full lifecycle development and maintenance of the data engineering platform. With the Data Engineering team you will get an opportunity to:
- Design and implement data engineering solutions that are scalable, reliable, and secure in the cloud environment
- Understand and translate business needs into data engineering solutions
- Build large-scale data pipelines that can handle big data sets using distributed data processing techniques that support the efforts of the data science and data application teams
- Partner with cross-functional stakeholders including product managers, architects, data quality engineers, and application and quantitative science end users to deliver engineering solutions
- Contribute to defining data governance across the data platform
Basic Requirements:
- A minimum of a BS degree in computer science, software engineering, or a related scientific discipline is desired
- 3+ years of work experience in building scalable and robust data engineering solutions
- Strong understanding of object-oriented programming and proficiency with programming in Python (TDD) and PySpark to build scalable algorithms
- 3+ years of experience in distributed computing and big data processing using the Apache Spark framework, including Spark optimization techniques
- 2+ years of experience with Databricks, Delta tables, Unity Catalog, Delta Sharing, Delta Live Tables (DLT), and incremental data processing
- Experience with Delta Lake and Unity Catalog
- Advanced SQL coding and query optimization experience, including the ability to write analytical and nested queries
- 3+ years of experience in building scalable ETL/ELT data pipelines on Databricks and AWS (EMR)
- 2+ years of experience orchestrating data pipelines using Apache Airflow/MWAA
- Understanding and experience of AWS services including ADX, EC2, S3
- 3+ years of experience with data modeling techniques for structured/unstructured datasets
- Experience with relational/columnar databases (Redshift, RDS) and interactive querying services (Athena/Redshift Spectrum)
- Passion for healthcare and improving patient outcomes
- Analytical thinking with strong problem-solving skills
- Staying on top of emerging technologies and a willingness to learn
Bonus Experience (optional):
- Experience with Agile environments
- Experience operating in a CI/CD environment
- Experience building HTTP/REST APIs using popular frameworks
- Healthcare experience
Posted 1 week ago
2.0 - 6.0 years
0 - 1 Lacs
Pune
Work from Office
As Lead Data Engineer, you'll design and manage scalable ETL pipelines and clean, structured data flows for real-time retail analytics. You'll work closely with ML engineers and business teams to deliver high-quality, ML-ready datasets.
Responsibilities:
- Develop and optimize large-scale ETL pipelines
- Design schema-aware data flows and dashboard-ready datasets
- Manage data pipelines on AWS (S3, Glue, Redshift)
- Work with transactional and retail data for real-time insights
Posted 1 week ago
3.0 - 6.0 years
4 - 8 Lacs
Hyderabad
Work from Office
Roles & Responsibilities (experience level: 10 years):
- Analyzing raw data
- Developing and maintaining datasets
- Improving data quality and efficiency
- Interpreting trends and patterns
- Conducting complex data analysis and reporting on results
- Preparing data for prescriptive and predictive modeling
- Building algorithms and prototypes
- Combining raw information from different sources
- Exploring ways to enhance data quality and reliability
- Identifying opportunities for data acquisition
- Developing analytical tools and programs
- Collaborating with data scientists and architects on several projects
Technical Skills:
- Implementing data governance with monitoring, alerting, and reporting
- Technical writing capability: documenting standards, templates, and procedures
- Databricks: knowledge of patterns for scaling ETL pipelines effectively
- Orchestrating data analytics workloads - Databricks jobs and workflows
- Integrating Azure DevOps CI/CD practices with data pipeline development
- ETL modernization, data modelling
- Strong exposure to Azure Data services, Synapse, data orchestration & visualization
- Data warehousing & data lakehouse architectures
- Data streaming & real-time analytics
- Python: PySpark library, Pandas
- Azure Data Factory: data orchestration
- Azure SQL: scripting, querying, stored procedures
Posted 1 week ago