
241 Oozie Jobs

JobPe aggregates listings for convenient browsing, but applications are submitted directly on the original job portal.

2.0 - 6.0 years

0 Lacs

Kochi, Kerala

On-site

You will be responsible for big data development and support for production-deployed applications, analyzing business and functional requirements for completeness, and developing code with minimal supervision. Working collaboratively with team members, you will ensure accurate and timely communication and delivery of assigned tasks so that the end products perform as expected upon release to production. Handling software defects and issues within production timelines and SLAs is a key aspect of the role.

Your responsibilities will include authoring test cases within a defined testing strategy, participating in test strategy development for configuration and custom reports, creating test data, assisting in code-merge peer reviews, reporting status and progress to stakeholders, and providing risk assessments throughout development cycles. You should have a strong understanding of the system and big data strategies adopted by IQVIA, stay current with developments in the software applications industry, and be open to production support roles within the project.

To excel in this role, you should have 5-8 years of overall experience, with at least 2-3 years in Big Data; proficiency in Big Data technologies such as HDFS, Hive, Pig, Sqoop, HBase, and Oozie; strong experience with SQL queries and Airflow; familiarity with PostgreSQL, CI/CD, Jenkins, and UNIX commands; excellent communication skills and a confident demeanor; and proven analytical, logical, and problem-solving abilities. Experience with Spark application development and ETL/ELT tools is preferred. Fine-tuned analytical skills, attention to detail, and the ability to work effectively with colleagues from diverse backgrounds are essential.

The minimum educational requirement is a Bachelor's degree in Information Technology or a related field, along with 5-8 years of development experience or an equivalent combination of education, training, and experience.

IQVIA is a leading global provider of clinical research services, commercial insights, and healthcare intelligence, helping accelerate the development and commercialization of innovative medical treatments to improve patient outcomes and population health worldwide. To learn more, visit https://jobs.iqvia.com.
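Both Oozie and Airflow appear in the requirements above; for candidates mapping one onto the other, here is a minimal, illustrative Airflow DAG that chains a Sqoop import into a Hive script. The DAG id, connection string, paths, and script names are hypothetical, not IQVIA specifics.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    # A minimal daily pipeline: ingest with Sqoop, then aggregate in Hive,
    # mirroring the action graph an Oozie workflow would express in XML.
    with DAG(
        dag_id="daily_orders_ingest",          # hypothetical name
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        sqoop_import = BashOperator(
            task_id="sqoop_import",
            # Pull the day's rows from an RDBMS into HDFS (placeholder connection).
            bash_command=(
                "sqoop import --connect jdbc:postgresql://db-host/sales "
                "--table orders --target-dir /data/raw/orders/{{ ds }}"
            ),
        )
        hive_aggregate = BashOperator(
            task_id="hive_aggregate",
            # Run a Hive script that builds the daily aggregate (placeholder script).
            bash_command="hive -f /scripts/aggregate_orders.hql -hivevar run_date={{ ds }}",
        )
        sqoop_import >> hive_aggregate

The same two-step dependency would be an Oozie workflow with a sqoop action followed by a hive action.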

Posted 17 hours ago

Apply

3.0 - 8.0 years

0 Lacs

Pune, Maharashtra

On-site

You should have strong experience in PySpark, Python, Unix scripting, Spark SQL, and Hive, be proficient in writing SQL queries and creating views, and possess excellent oral and written communication skills. Prior experience in the insurance domain would be beneficial.

A good understanding of the Hadoop ecosystem, including HDFS, MapReduce, Pig, Hive, Oozie, and YARN, is required, as is knowledge of AWS services such as Glue, S3, Lambda, Step Functions, and EC2. Experience migrating data from platforms like Hive/S3 to Databricks is a plus. You should be able to prioritize, plan, organize, and manage multiple tasks efficiently while delivering high-quality work.

As a candidate, you should have 6-8 years of technical experience in PySpark and AWS (Glue, EMR, Lambda, Step Functions, S3), with at least 3 years in Big Data/ETL using Python, Spark, and Hive, and 3+ years in AWS. Primary key skills: PySpark; AWS (Glue, EMR, Lambda, Step Functions, S3); Big Data with Python, Spark, and Hive; exposure to Big Data migration. Secondary key skills that would be beneficial: Informatica BDM/PowerCenter, Databricks, and MongoDB.
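As a rough sketch of the Hive/S3-to-Databricks migration mentioned above, the following PySpark snippet reads a Hive table and rewrites it as partitioned Parquet on S3, a common staging step before Databricks ingestion. The database, table, filter, partition column, and bucket are assumptions for illustration only.

    from pyspark.sql import SparkSession

    # enableHiveSupport lets Spark resolve managed Hive tables by name.
    spark = (
        SparkSession.builder
        .appName("hive-to-s3-migration")       # hypothetical job name
        .enableHiveSupport()
        .getOrCreate()
    )

    # Read the source Hive table and keep only current records.
    policies = spark.table("insurance.policies").where("status = 'ACTIVE'")

    # Write partitioned Parquet to S3; Databricks can then read or convert this location.
    (
        policies.write
        .mode("overwrite")
        .partitionBy("policy_year")
        .parquet("s3a://example-bucket/migrated/policies/")   # placeholder bucket
    )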

Posted 2 days ago

Apply

10.0 - 14.0 years

0 Lacs

Karnataka

On-site

The ideal candidate should possess strong expertise in programming/scripting languages and a proven ability to debug issues across operating systems. A certification in the relevant specialization is required, along with proficiency in design and automation tools and excellent knowledge of CI and agile frameworks.

The successful candidate must demonstrate strong communication, negotiation, networking, and influencing skills; stakeholder management and conflict management skills are also essential. The candidate should be proficient in setting up tools and infrastructure, defect metrics, and traceability metrics, and should promote a strategic mindset to ensure the right tools are used, coaching and mentoring the team to follow best practices. Expertise in the Big Data and Hadoop ecosystems is required, along with the ability to build real-time stream-processing systems over large-scale data and proficiency in data ingestion frameworks, data sources, and data structures.

The profile required includes 10+ years of hands-on expertise in Spark with Scala and Big Data technologies; good working experience in Scala and object-oriented concepts as well as HDFS, Spark, Hive, and Oozie; technical expertise with data models, data mining, and partitioning techniques; hands-on experience with SQL databases; and a good understanding of CI/CD tools such as Maven, Git, Jenkins, and Sonar. Knowledge of Kafka and the ELK stack is a plus, and familiarity with data visualization tools like Power BI is an added advantage. Strong communication and coordination skills with multiple stakeholders are essential, along with the ability to assess existing situations, propose improvements, and follow up on action plans.

In conclusion, the ideal candidate has a professional attitude and is self-motivated, a fast learner, and a team player. The ability to work in international, intercultural environments and interact with onsite stakeholders is crucial. If you want to be directly involved, grow in a stimulating and caring environment, feel useful on a daily basis, and develop or strengthen your expertise, you will find a perfect fit in this position.
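The partitioning techniques called out above are language-agnostic even though this role centres on Scala; as one hedged illustration in PySpark, the sketch below repartitions both sides of a wide join on the join key and writes date-partitioned output so downstream queries can prune directories. All paths, column names, and the partition count are invented.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("partitioning-demo").getOrCreate()

    events = spark.read.parquet("/data/events")   # placeholder inputs
    users = spark.read.parquet("/data/users")

    # Repartition both sides on the join key so shuffle partitions line up,
    # limiting data movement during the wide join.
    joined = (
        events.repartition(200, "user_id")
        .join(users.repartition(200, "user_id"), "user_id")
    )

    # Partition the output by date so queries filtering on event_date
    # read only the matching directories.
    (
        joined.write
        .mode("overwrite")
        .partitionBy("event_date")
        .parquet("/data/enriched_events")
    )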

Posted 2 days ago

Apply

2.0 - 4.0 years

25 - 30 Lacs

Pune

Work from Office

Rapid7 is looking for a Data Engineer to join our dynamic team and embark on a rewarding career journey. Responsibilities include: liaising with coworkers and clients to elucidate the requirements for each task; conceptualizing and building infrastructure that allows big data to be accessed and analyzed; reformulating existing frameworks to optimize their functioning; testing such structures to ensure they are fit for use; preparing raw data for manipulation by data scientists; detecting and correcting errors in your work; ensuring your work remains backed up and readily accessible to relevant coworkers; and staying up to date with industry standards and technological advancements that will improve the quality of your outputs.

Posted 3 days ago

Apply

5.0 - 8.0 years

5 - 8 Lacs

Bengaluru

Work from Office

Skills desired: strong SQL (multi-pyramid SQL joins); Python (FastAPI or Flask framework); PySpark; commitment to working overlapping hours; GCP knowledge (BigQuery, Dataproc, and Dataflow). Amex experience is preferred (not mandatory); Power BI is preferred (not mandatory). Key skills: Flask, PySpark, Python, SQL.
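Since the stack pairs Python web frameworks with SQL, here is a minimal FastAPI sketch exposing a parameterized query endpoint. SQLite stands in for the real warehouse (which in this role would more likely be BigQuery), and the table and route are hypothetical.

    import sqlite3

    from fastapi import FastAPI

    app = FastAPI()

    @app.get("/orders/{customer_id}")
    def get_orders(customer_id: int):
        # Placeholder local database; swap in the real client for production.
        conn = sqlite3.connect("demo.db")
        rows = conn.execute(
            "SELECT id, total FROM orders WHERE customer_id = ?",  # parameterized query
            (customer_id,),
        ).fetchall()
        conn.close()
        return {
            "customer_id": customer_id,
            "orders": [{"id": r[0], "total": r[1]} for r in rows],
        }

Run locally with "uvicorn main:app --reload" and request /orders/42 to see the JSON response.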

Posted 4 days ago

Apply

5.0 - 8.0 years

27 - 42 Lacs

Bengaluru

Work from Office

Job Summary

As a Software Engineer in NetApp India's R&D division, you will be responsible for the design, development, and validation of software for Big Data engineering across both cloud and on-premises environments. You will be part of a highly skilled technical team named NetApp Active IQ. The Active IQ DataHub platform processes over 10 trillion data points per month, feeding a multi-petabyte data lake. The platform is built using Kafka, a serverless platform running on Kubernetes, Spark, and various NoSQL databases. It enables advanced AI and ML techniques to uncover opportunities to proactively protect and optimize NetApp storage, and then provides the insights and actions to make it happen. We call this "actionable intelligence".

Job Requirements
• Design and build our Big Data platform with an understanding of scale, performance, and fault tolerance.
• Interact with Active IQ engineering teams across geographies to leverage expertise and contribute to the tech community.
• Identify the right tools to deliver product features by performing research, building POCs, and engaging with open-source forums.
• Work on technologies related to NoSQL, SQL, and in-memory databases.
• Conduct code reviews to ensure code quality, consistency, and adherence to best practices.

Technical Skills
• Hands-on Big Data development experience is required.
• Up-to-date expertise in data engineering and complex data pipeline development.
• Design, develop, implement, and tune distributed data processing pipelines that handle large volumes of data, focusing on scalability, low latency, and fault tolerance in every system built.
• Awareness of data governance (data quality, metadata management, security, etc.).
• Experience with one or more of Python, Java, or Scala.
• Knowledge of and experience with Kafka, Storm, Druid, Cassandra, or Presto is an added advantage.

Education
• A minimum of 5 years of experience is required; 5-8 years is preferred.
• A Bachelor of Science degree in Electrical Engineering or Computer Science, a Master's degree, or equivalent experience is required.
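To make the Kafka-plus-Spark combination concrete, here is a short, generic PySpark Structured Streaming sketch; the broker address, topic, and sink paths are invented and are not Active IQ specifics.

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col

    spark = SparkSession.builder.appName("telemetry-stream").getOrCreate()

    # Subscribe to a Kafka topic (placeholder broker and topic names).
    raw = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")
        .option("subscribe", "device-telemetry")
        .load()
    )

    # Kafka delivers bytes; cast the value column to a string for parsing downstream.
    parsed = raw.select(col("value").cast("string").alias("payload"))

    # Checkpointing gives the stream fault tolerance across restarts.
    query = (
        parsed.writeStream.format("parquet")
        .option("path", "/data/telemetry")
        .option("checkpointLocation", "/chk/telemetry")
        .start()
    )
    query.awaitTermination()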

Posted 5 days ago

Apply

8.0 - 13.0 years

7 - 11 Lacs

Pune

Work from Office

Capco, a Wipro company, is a global technology and management consulting firm. It was awarded Consultancy of the Year at the British Bank Awards and ranked among the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With a presence in 32 cities across the globe, we support 100+ clients across the banking, financial services, and energy sectors, and are recognized for our deep transformation execution and delivery.

WHY JOIN CAPCO
You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers, and other key players in the industry, on projects that will transform the financial services industry.

MAKE AN IMPACT
Innovative thinking, delivery excellence, and thought leadership help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services.

#BEYOURSELFATWORK
Capco has a tolerant, open culture that values diversity, inclusivity, and creativity.

CAREER ADVANCEMENT
With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands.

DIVERSITY & INCLUSION
We believe that diversity of people and perspective gives us a competitive advantage.

JOB SUMMARY
Position: Senior Consultant (Data Engineer)
Location: Capco locations (Bengaluru/Chennai/Hyderabad/Pune/Mumbai/Gurugram)
Band: M3/M4 (8 to 14 years)

Responsibilities
Design, build, and optimise data pipelines and ETL processes in Azure Databricks, ensuring high performance, reliability, and scalability.
Implement best practices for data ingestion, transformation, and cleansing to ensure data quality and integrity.
Work within the client's best-practice guidelines as set out by the Data Engineering Lead.
Work with data modellers and testers to ensure pipelines are implemented correctly.
Collaborate as part of a cross-functional team to understand business requirements and translate them into technical solutions.

Role Requirements
Strong data engineer with experience in financial services.
Knowledge of and experience building data pipelines in Azure Databricks.
A continual desire to implement strategic or optimal solutions, avoiding workarounds or short-term tactical solutions where possible.
Ability to work within an Agile team.

Experience/Skillset
8+ years of experience in data engineering.
Good skills in SQL, Python, and PySpark.
Good knowledge of Azure Databricks (understanding of Delta tables, Apache Spark, Unity Catalog).
Experience writing, optimizing, and analyzing SQL and PySpark code, with a robust capability to interpret complex data requirements and architect solutions.
Good knowledge of the SDLC; familiarity with Agile/Scrum ways of working.
Strong verbal and written communication skills; ability to manage multiple priorities and deliver to tight deadlines.

We offer: a work culture focused on innovation and creating lasting value for our clients and employees; ongoing learning opportunities to help you acquire new skills or deepen existing expertise; a flat, non-hierarchical structure that will enable you to work with senior partners and directly with clients; and a diverse, inclusive, meritocratic culture. #LI-Hybrid
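For context on the Databricks pipeline work described above, this is a small hedged PySpark example of a Delta Lake upsert, the kind of idempotent merge step such pipelines often use; the table names, key column, and landing path are invented, not client specifics.

    from delta.tables import DeltaTable
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("delta-upsert").getOrCreate()

    # A new batch of records to apply (placeholder landing path).
    updates = spark.read.parquet("/mnt/landing/trades/")

    # Merge into the curated Delta table keyed on trade_id;
    # matched rows are updated, new rows inserted, so reruns stay idempotent.
    target = DeltaTable.forName(spark, "curated.trades")
    (
        target.alias("t")
        .merge(updates.alias("u"), "t.trade_id = u.trade_id")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute()
    )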

Posted 5 days ago

Apply

7.0 - 12.0 years

7 - 11 Lacs

Pune

Work from Office

Capco, a Wipro company, is a global technology and management consulting firm. It was awarded Consultancy of the Year at the British Bank Awards and ranked among the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With a presence in 32 cities across the globe, we support 100+ clients across the banking, financial services, and energy sectors, and are recognized for our deep transformation execution and delivery.

WHY JOIN CAPCO
You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers, and other key players in the industry, on projects that will transform the financial services industry.

MAKE AN IMPACT
Innovative thinking, delivery excellence, and thought leadership help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services.

#BEYOURSELFATWORK
Capco has a tolerant, open culture that values diversity, inclusivity, and creativity.

CAREER ADVANCEMENT
With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands.

DIVERSITY & INCLUSION
We believe that diversity of people and perspective gives us a competitive advantage.

JOB SUMMARY
Position: Senior Consultant
Location: Pune / Bangalore
Band: M3/M4 (7 to 14 years)

Must-Have Skills
At least 4 years of experience in PySpark and Scala + Spark, with proficient debugging and data analysis skills.
4+ years of Spark experience.
Understanding of the SDLC and the Big Data application life cycle.
Experience with GitHub and Git commands; experience with CI/CD tools such as Jenkins and Ansible is good to have.
Fast problem-solving skills and a self-starter attitude.
Experience using Control-M and ServiceNow (for incident management).
A positive attitude and good written and verbal communication skills, with clear, fluent English.

We offer: a work culture focused on innovation and creating lasting value for our clients and employees; ongoing learning opportunities to help you acquire new skills or deepen existing expertise; a flat, non-hierarchical structure that will enable you to work with senior partners and directly with clients; and a diverse, inclusive, meritocratic culture. #LI-Hybrid

Posted 5 days ago

Apply

2.0 - 5.0 years

14 - 17 Lacs

Mysuru

Work from Office

As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating source-to-target pipelines/workflows and implementing solutions that address the client's needs.

Your primary responsibilities include: designing, building, optimizing, and supporting new and existing data models and ETL processes based on our clients' business requirements; building, deploying, and managing data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization; and coordinating data access and security so that data scientists and analysts can easily access data whenever they need to.

Required education: Bachelor's degree. Preferred education: Master's degree.

Required technical and professional expertise: 5+ years of experience in Big Data with Hadoop, Spark, Scala, Python, HBase, and Hive. Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, and Git. Experience developing Python and PySpark programs for data analysis; good working experience using Python to develop a custom framework for generating rules (like a rules engine); experience developing Python code to gather data from HBase and designing solutions implemented with PySpark; experience applying business transformations with Apache Spark DataFrames/RDDs and using Hive context objects to perform read/write operations.

Preferred technical and professional experience: an understanding of DevOps; experience building scalable end-to-end data ingestion and processing solutions; experience with object-oriented and/or functional programming languages such as Python, Java, and Scala.
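The transformation-and-Hive workflow described above might look like the following minimal PySpark sketch; the "rule" (banding transactions by amount) and every table name are hypothetical, standing in for whatever a custom rules framework would generate.

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, when

    # enableHiveSupport plays the role of the older HiveContext mentioned above.
    spark = (
        SparkSession.builder
        .appName("rules-engine-demo")
        .enableHiveSupport()
        .getOrCreate()
    )

    txns = spark.table("raw.transactions")   # placeholder Hive table

    # A toy business rule: band transactions by amount.
    labelled = txns.withColumn(
        "risk_band",
        when(col("amount") > 100000, "HIGH")
        .when(col("amount") > 10000, "MEDIUM")
        .otherwise("LOW"),
    )

    # Persist the scored records back through Hive.
    labelled.write.mode("overwrite").saveAsTable("curated.transactions_scored")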

Posted 5 days ago

Apply

6.0 - 11.0 years

15 - 25 Lacs

Bengaluru

Work from Office

Hiring a Data Engineer in Bangalore with 6+ years of experience in the skills below.

Must have:
- Big Data technologies: Hadoop, MapReduce, Spark, Kafka, Flink
- Programming languages: Java/Scala/Python
- Cloud: Azure, AWS, Google Cloud
- Docker/Kubernetes

Required candidate profile:
- Strong communication skills
- Experience with relational SQL and NoSQL databases: Postgres and Cassandra
- Experience with the ELK stack
- Ability to join immediately is a plus
- Must be ready to work from the office

Posted 5 days ago

Apply

4.0 - 7.0 years

25 - 30 Lacs

Ahmedabad

Work from Office

ManekTech is looking for a Data Engineer to join our dynamic team and embark on a rewarding career journey. Responsibilities include: liaising with coworkers and clients to elucidate the requirements for each task; conceptualizing and building infrastructure that allows big data to be accessed and analyzed; reformulating existing frameworks to optimize their functioning; testing such structures to ensure they are fit for use; preparing raw data for manipulation by data scientists; detecting and correcting errors in your work; ensuring your work remains backed up and readily accessible to relevant coworkers; and staying up to date with industry standards and technological advancements that will improve the quality of your outputs.

Posted 1 week ago

Apply

8.0 - 11.0 years

35 - 37 Lacs

Kolkata, Ahmedabad, Bengaluru

Work from Office

Dear Candidate,

We are looking for a Big Data Developer to build and maintain scalable data processing systems. The ideal candidate will have experience handling large datasets and working with distributed computing frameworks.

Key Responsibilities:
Design and develop data pipelines using Hadoop, Spark, or Flink.
Optimize big data applications for performance and reliability.
Integrate various structured and unstructured data sources.
Work with data scientists and analysts to prepare datasets.
Ensure data quality, security, and lineage across platforms.

Required Skills & Qualifications:
Experience with the Hadoop ecosystem (HDFS, Hive, Pig) and Apache Spark.
Proficiency in Java, Scala, or Python.
Familiarity with data ingestion tools (Kafka, Sqoop, NiFi), as in the sketch below.
Strong understanding of distributed computing principles.
Knowledge of cloud-based big data services (e.g., EMR, Dataproc, HDInsight).

Note: If interested, please share your updated resume and a preferred time for a discussion. If shortlisted, our HR team will contact you.

Kandi Srinivasa
Delivery Manager, Integra Technologies
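The sketch referenced above: a tiny, hedged example using the kafka-python client to publish JSON records for a downstream Spark or Flink job to consume. The broker address, topic, and event fields are placeholders.

    import json

    from kafka import KafkaProducer   # kafka-python client

    # Serialize dicts to JSON bytes on the way out (placeholder broker address).
    producer = KafkaProducer(
        bootstrap_servers="broker:9092",
        value_serializer=lambda record: json.dumps(record).encode("utf-8"),
    )

    # Publish a sample event for the streaming pipeline to pick up.
    producer.send("raw-events", {"source": "web", "action": "click", "ts": 1700000000})
    producer.flush()   # block until the broker has acknowledged the batch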

Posted 1 week ago

Apply

7.0 - 9.0 years

8 - 14 Lacs

Visakhapatnam

Work from Office

Job Summary: We are looking for a seasoned Tech Anchor with deep expertise in Big Data technologies and Python to lead technical design, development, and mentoring across data-driven projects. This role demands a strong grasp of scalable data architecture, problem-solving capabilities, and hands-on experience with distributed systems and modern data frameworks.

Key Responsibilities:
Provide technical leadership across Big Data and Python-based projects.
Architect, design, and implement scalable data pipelines and processing systems.
Guide teams on best practices in data modeling, ETL/ELT development, and performance optimization.
Collaborate with data scientists, analysts, and stakeholders to ensure effective data solutions.
Conduct code reviews and mentor junior engineers to improve code quality and skills.
Evaluate and implement new tools and frameworks to enhance data capabilities.
Troubleshoot complex data-related issues and support production deployments.
Ensure compliance with data security and governance standards.

Posted 1 week ago

Apply

7.0 - 9.0 years

8 - 14 Lacs

Surat

Work from Office

Job Summary: We are looking for a seasoned Tech Anchor with deep expertise in Big Data technologies and Python to lead technical design, development, and mentoring across data-driven projects. This role demands a strong grasp of scalable data architecture, problem-solving capabilities, and hands-on experience with distributed systems and modern data frameworks.

Key Responsibilities:
Provide technical leadership across Big Data and Python-based projects.
Architect, design, and implement scalable data pipelines and processing systems.
Guide teams on best practices in data modeling, ETL/ELT development, and performance optimization.
Collaborate with data scientists, analysts, and stakeholders to ensure effective data solutions.
Conduct code reviews and mentor junior engineers to improve code quality and skills.
Evaluate and implement new tools and frameworks to enhance data capabilities.
Troubleshoot complex data-related issues and support production deployments.
Ensure compliance with data security and governance standards.

Posted 1 week ago

Apply

7.0 - 9.0 years

8 - 14 Lacs

Varanasi

Work from Office

Job Summary: We are looking for a seasoned Tech Anchor with deep expertise in Big Data technologies and Python to lead technical design, development, and mentoring across data-driven projects. This role demands a strong grasp of scalable data architecture, problem-solving capabilities, and hands-on experience with distributed systems and modern data frameworks.

Key Responsibilities:
Provide technical leadership across Big Data and Python-based projects.
Architect, design, and implement scalable data pipelines and processing systems.
Guide teams on best practices in data modeling, ETL/ELT development, and performance optimization.
Collaborate with data scientists, analysts, and stakeholders to ensure effective data solutions.
Conduct code reviews and mentor junior engineers to improve code quality and skills.
Evaluate and implement new tools and frameworks to enhance data capabilities.
Troubleshoot complex data-related issues and support production deployments.
Ensure compliance with data security and governance standards.

Posted 1 week ago

Apply

7.0 - 9.0 years

8 - 14 Lacs

Ludhiana

Work from Office

Job Summary: We are looking for a seasoned Tech Anchor with deep expertise in Big Data technologies and Python to lead technical design, development, and mentoring across data-driven projects. This role demands a strong grasp of scalable data architecture, problem-solving capabilities, and hands-on experience with distributed systems and modern data frameworks.

Key Responsibilities:
Provide technical leadership across Big Data and Python-based projects.
Architect, design, and implement scalable data pipelines and processing systems.
Guide teams on best practices in data modeling, ETL/ELT development, and performance optimization.
Collaborate with data scientists, analysts, and stakeholders to ensure effective data solutions.
Conduct code reviews and mentor junior engineers to improve code quality and skills.
Evaluate and implement new tools and frameworks to enhance data capabilities.
Troubleshoot complex data-related issues and support production deployments.
Ensure compliance with data security and governance standards.

Posted 1 week ago

Apply

7.0 - 9.0 years

8 - 14 Lacs

Lucknow

Work from Office

Job Summary: We are looking for a seasoned Tech Anchor with deep expertise in Big Data technologies and Python to lead technical design, development, and mentoring across data-driven projects. This role demands a strong grasp of scalable data architecture, problem-solving capabilities, and hands-on experience with distributed systems and modern data frameworks.

Key Responsibilities:
Provide technical leadership across Big Data and Python-based projects.
Architect, design, and implement scalable data pipelines and processing systems.
Guide teams on best practices in data modeling, ETL/ELT development, and performance optimization.
Collaborate with data scientists, analysts, and stakeholders to ensure effective data solutions.
Conduct code reviews and mentor junior engineers to improve code quality and skills.
Evaluate and implement new tools and frameworks to enhance data capabilities.
Troubleshoot complex data-related issues and support production deployments.
Ensure compliance with data security and governance standards.

Posted 1 week ago

Apply

7.0 - 9.0 years

8 - 14 Lacs

Agra

Work from Office

Job Summary: We are looking for a seasoned Tech Anchor with deep expertise in Big Data technologies and Python to lead technical design, development, and mentoring across data-driven projects. This role demands a strong grasp of scalable data architecture, problem-solving capabilities, and hands-on experience with distributed systems and modern data frameworks.

Key Responsibilities:
Provide technical leadership across Big Data and Python-based projects.
Architect, design, and implement scalable data pipelines and processing systems.
Guide teams on best practices in data modeling, ETL/ELT development, and performance optimization.
Collaborate with data scientists, analysts, and stakeholders to ensure effective data solutions.
Conduct code reviews and mentor junior engineers to improve code quality and skills.
Evaluate and implement new tools and frameworks to enhance data capabilities.
Troubleshoot complex data-related issues and support production deployments.
Ensure compliance with data security and governance standards.

Posted 1 week ago

Apply

7.0 - 9.0 years

8 - 14 Lacs

Vadodara

Work from Office

Job Summary: We are looking for a seasoned Tech Anchor with deep expertise in Big Data technologies and Python to lead technical design, development, and mentoring across data-driven projects. This role demands a strong grasp of scalable data architecture, problem-solving capabilities, and hands-on experience with distributed systems and modern data frameworks.

Key Responsibilities:
Provide technical leadership across Big Data and Python-based projects.
Architect, design, and implement scalable data pipelines and processing systems.
Guide teams on best practices in data modeling, ETL/ELT development, and performance optimization.
Collaborate with data scientists, analysts, and stakeholders to ensure effective data solutions.
Conduct code reviews and mentor junior engineers to improve code quality and skills.
Evaluate and implement new tools and frameworks to enhance data capabilities.
Troubleshoot complex data-related issues and support production deployments.
Ensure compliance with data security and governance standards.

Posted 1 week ago

Apply

7.0 - 12.0 years

35 - 50 Lacs

Hyderabad

Work from Office

Job Description: Spark and Java; strong SQL writing skills; data discovery, data profiling, data exploration, and data wrangling skills; Kafka, AWS S3, Lake Formation, Athena, Glue, Autosys or similar scheduling tools, and FastAPI (secondary). Strong SQL skills to support data analysis and business logic embedded in SQL, data profiling, and gap assessment. Collaborate with development and business SMEs within technology to understand data requirements, and perform data analysis to support and validate business logic, data integrity, and data quality rules within a centralized data platform. Experience working within the banking/financial services industry with a solid understanding of financial products and business processes.
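Because the description leans on SQL-driven data profiling and gap assessment, here is one hedged PySpark profiling sketch; the source path, view name, and columns are placeholders, not part of the posting.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("profiling-demo").getOrCreate()

    # Register a placeholder dataset for SQL profiling.
    spark.read.parquet("s3a://example-bucket/positions/").createOrReplaceTempView("positions")

    # Basic profile: row count, null rate, and distinct cardinality of a key column.
    profile = spark.sql("""
        SELECT COUNT(*)                                                        AS row_count,
               SUM(CASE WHEN account_id IS NULL THEN 1 ELSE 0 END) / COUNT(*) AS account_id_null_rate,
               COUNT(DISTINCT account_id)                                      AS account_id_cardinality
        FROM positions
    """)
    profile.show()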

Posted 1 week ago

Apply

7.0 - 9.0 years

8 - 14 Lacs

Faridabad

Work from Office

Job Summary: We are looking for a seasoned Tech Anchor with deep expertise in Big Data technologies and Python to lead technical design, development, and mentoring across data-driven projects. This role demands a strong grasp of scalable data architecture, problem-solving capabilities, and hands-on experience with distributed systems and modern data frameworks.

Key Responsibilities:
Provide technical leadership across Big Data and Python-based projects.
Architect, design, and implement scalable data pipelines and processing systems.
Guide teams on best practices in data modeling, ETL/ELT development, and performance optimization.
Collaborate with data scientists, analysts, and stakeholders to ensure effective data solutions.
Conduct code reviews and mentor junior engineers to improve code quality and skills.
Evaluate and implement new tools and frameworks to enhance data capabilities.
Troubleshoot complex data-related issues and support production deployments.
Ensure compliance with data security and governance standards.

Posted 1 week ago

Apply

7.0 - 9.0 years

8 - 14 Lacs

Jaipur

Work from Office

Job Summary: We are looking for a seasoned Tech Anchor with deep expertise in Big Data technologies and Python to lead technical design, development, and mentoring across data-driven projects. This role demands a strong grasp of scalable data architecture, problem-solving capabilities, and hands-on experience with distributed systems and modern data frameworks.

Key Responsibilities:
Provide technical leadership across Big Data and Python-based projects.
Architect, design, and implement scalable data pipelines and processing systems.
Guide teams on best practices in data modeling, ETL/ELT development, and performance optimization.
Collaborate with data scientists, analysts, and stakeholders to ensure effective data solutions.
Conduct code reviews and mentor junior engineers to improve code quality and skills.
Evaluate and implement new tools and frameworks to enhance data capabilities.
Troubleshoot complex data-related issues and support production deployments.
Ensure compliance with data security and governance standards.

Posted 1 week ago

Apply

7.0 - 9.0 years

8 - 14 Lacs

Nagpur

Work from Office

Job Summary: We are looking for a seasoned Tech Anchor with deep expertise in Big Data technologies and Python to lead technical design, development, and mentoring across data-driven projects. This role demands a strong grasp of scalable data architecture, problem-solving capabilities, and hands-on experience with distributed systems and modern data frameworks.

Key Responsibilities:
Provide technical leadership across Big Data and Python-based projects.
Architect, design, and implement scalable data pipelines and processing systems.
Guide teams on best practices in data modeling, ETL/ELT development, and performance optimization.
Collaborate with data scientists, analysts, and stakeholders to ensure effective data solutions.
Conduct code reviews and mentor junior engineers to improve code quality and skills.
Evaluate and implement new tools and frameworks to enhance data capabilities.
Troubleshoot complex data-related issues and support production deployments.
Ensure compliance with data security and governance standards.

Posted 1 week ago

Apply

5.0 - 9.0 years

12 - 17 Lacs

Noida

Work from Office

Spark/PySpark: hands-on technical experience in data processing. Table design knowledge using Hive, similar to RDBMS knowledge. Database SQL knowledge for data retrieval and transformation queries such as joins (full, left, right), ranking, and group by (see the sketch below). Good communication skills. Additional skills in GitHub, Jenkins, and shell scripting would be an added advantage.

Mandatory competencies: Big Data (PySpark, Spark, Hadoop, Hive); DevOps/Configuration Management (Jenkins; GitLab/GitHub/Bitbucket; basic Bash/shell script writing); Database (SQL programming); Behavioral (communication and collaboration).
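The sketch referenced above combines the named transformation queries (a left join, ranking, and a group by) in Spark SQL; the customers and orders tables are assumed to already exist in Hive and are purely illustrative.

    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("join-rank-demo")
        .enableHiveSupport()   # resolve the placeholder Hive tables by name
        .getOrCreate()
    )

    # Left join orders onto customers, then rank each customer's orders by amount.
    ranked = spark.sql("""
        SELECT c.customer_id,
               o.order_id,
               o.amount,
               RANK() OVER (PARTITION BY c.customer_id ORDER BY o.amount DESC) AS amount_rank
        FROM customers c
        LEFT JOIN orders o
          ON o.customer_id = c.customer_id
    """)

    # A simple group-by over the joined output.
    ranked.groupBy("customer_id").count().show()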

Posted 1 week ago

Apply

4.0 - 6.0 years

13 - 18 Lacs

Bengaluru

Work from Office

Design, development, and testing of components/modules in TOP (Trade Open Platform), involving Spark, Java, Hive, and related big data technologies in a data lake architecture. Contribute to the design, development, and deployment of new features and components in the Azure public cloud. Contribute to the evolution of the REST APIs in TOP through enhancement, development, and testing of new APIs. Ensure the processes in TOP deliver optimal performance, and assist in performance tuning and optimization (a sketch follows below). Deploy using CI/CD practices and tools in the various environments (development, UAT, and production) and follow production processes. Ensure craftsmanship practices are followed. Follow the Agile-at-scale process: participation in PI planning and follow-up, sprint planning, and backlog maintenance in Jira. Organize training sessions on the core platform and related technologies for the tribe/business line so that platform evolution is continuously communicated to relevant stakeholders.

Profile required: around 4-6 years of experience in the IT industry, preferably in the banking domain. Expertise and experience in Java 1.8 (API development, threads, collections, streams, dependency injection/inversion), JUnit, Big Data (Spark, Oozie, Hive), and Azure (AKS, CLI, Event, Key Vault), ideally having been part of digital transformation initiatives, with knowledge of Unix, SQL/RDBMS, and monitoring. Development experience with REST APIs. Experience managing tools such as Git/Bitbucket, Jenkins, NPM, Docker/Kubernetes, Jira, and Sonar. Knowledge of Agile practices and Agile at scale. Good communication and collaboration skills.
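The performance-tuning sketch mentioned above: the role itself is Java/Spark-centric, but the two routine moves shown here (caching a reused dataset and inspecting the physical plan) carry over directly from this PySpark form; the path and column names are invented, not TOP specifics.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("tuning-demo").getOrCreate()

    trades = spark.read.parquet("/data/top/trades")   # placeholder path

    # Cache a dataset that several downstream aggregations reuse,
    # avoiding repeated scans of the source files.
    trades.cache()

    by_desk = trades.groupBy("desk").sum("notional")
    by_book = trades.groupBy("book").count()

    # Inspect the physical plan to confirm scan and shuffle behaviour before release.
    by_desk.explain()

    by_desk.show()
    by_book.show()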

Posted 1 week ago

Apply