25 Bigdata Jobs

JobPe aggregates job listings for easy access, but applications are submitted directly on the original job portal.

5.0 - 8.0 years

0 - 3 Lacs

Pune, Chennai, Bengaluru

Hybrid

Hello Connections, exciting opportunity alert! We're on the hunt for passionate individuals to join our dynamic team as Data Engineers. Job Profile: Data Engineer. Experience: minimum 5 to maximum 8 years. Location: Chennai / Hyderabad / Bangalore / Mumbai / Pune. Mandatory Skills: Big Data | Hadoop | Scala | Spark | Spark SQL | Hive. Qualification: B.Tech / B.E / MCA / computer science background, any specialization. How to apply? Send your CV to sipriyar@sightspectrum.in. Contact number: 6383476138. Don't miss out on this amazing opportunity to accelerate your professional career! #bigdata #dataengineer #hadoop #spark #python #hive #pyspark

Posted 6 days ago

Apply

12.0 - 20.0 years

45 - 65 Lacs

Chennai

Hybrid

Key Skills: Core Java, Java, NLP, Scala, Big Data. Roles and Responsibilities: Develop LLM solutions for querying structured data with natural language, including RAG architectures on enterprise knowledge bases. Build, scale, and optimize data science workloads, applying MLOps best practices for production. Lead the design and development of LLM-based tools to increase data accessibility, focusing on text-to-SQL platforms. Train and fine-tune LLM models to accurately interpret natural language queries and generate SQL queries. Provide technical leadership and mentorship to junior data scientists and developers. Stay updated with advancements in AI and NLP to incorporate best practices and new technologies. Ability to work under pressure and manage deadlines or unexpected changes in expectations or requirements. Skills Required: 12+ years of experience in an apps development or systems analysis role, including Big Data development in very large environments (hundreds of GB to TB/PB scale) along with AI and NLP (Natural Language Processing). Extensive experience in systems analysis and programming of software applications. Experience in managing and implementing successful projects. Expert-level Python for building machine learning models and developing LLM-based applications in a professional environment. Strong SQL skills for data interrogation are a must. Proficiency in enterprise-level application development using Java 8, Scala, Oracle (or a comparable database), and messaging infrastructure such as Solace, Kafka, or TIBCO EMS. Working experience with microservices/Kubernetes. Good knowledge of NoSQL databases such as Redis, Couchbase, and HBase. Experience working with and architecting solutions on Big Data technologies. 5+ years of experience leading small to medium-sized development teams. Consistently demonstrates clear and concise written and verbal communication. Education: Bachelor's or Master's degree in a related field.
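
For illustration only, the sketch below shows the skeleton of a text-to-SQL flow of the kind this role describes: a prompt is assembled from a schema description and a natural-language question, and the model call is left as a stub. The schema, the example question, and the call_llm helper are hypothetical and not part of the posting.

```python
# Illustrative only: a minimal prompt-construction helper for a text-to-SQL flow.
# The model call is a stub (call_llm) because the listing names no specific LLM API;
# the schema and question below are invented examples.

SCHEMA_DOC = (
    "Table customers(customer_id INT, name TEXT, segment TEXT)\n"
    "Table orders(order_id INT, customer_id INT, order_date DATE, amount DECIMAL)"
)

def build_text_to_sql_prompt(question: str, schema_doc: str = SCHEMA_DOC) -> str:
    """Combine a schema description and a natural-language question into one prompt."""
    return (
        "You are a SQL assistant. Using only the tables below, write one ANSI SQL "
        "query that answers the question. Return only the SQL.\n\n"
        f"Schema:\n{schema_doc}\n\n"
        f"Question: {question}\nSQL:"
    )

def call_llm(prompt: str) -> str:
    """Stub for whichever LLM endpoint the team actually uses (hypothetical)."""
    raise NotImplementedError("Wire this up to your model endpoint.")

if __name__ == "__main__":
    prompt = build_text_to_sql_prompt("Total order amount per customer segment in 2024?")
    print(prompt)  # Inspect the prompt; replace print with call_llm(prompt) once wired up.
```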

Posted 6 days ago

Apply

3.0 - 8.0 years

10 - 20 Lacs

Pune, Chennai, Bengaluru

Hybrid

Hello Connections, exciting opportunity alert! We're on the hunt for passionate individuals to join our dynamic team as Data Engineers. Job Profile: Data Engineer. Experience: minimum 3 to maximum 8 years. Location: Chennai / Hyderabad / Bangalore / Gurgaon / Pune. Mandatory Skills: Big Data | Hadoop | Java | Spark | Spark SQL | Hive | Python. Qualification: B.Tech / B.E / MCA / computer science background, any specialization. How to apply? Send your CV to sipriyar@sightspectrum.in. Contact number: 6383476138. Don't miss out on this amazing opportunity to accelerate your professional career! #bigdata #dataengineer #hadoop #spark #python #hive #pyspark

Posted 1 week ago

Apply

3.0 - 8.0 years

5 - 15 Lacs

Hyderabad

Work from Office

Educational Requirements: Bachelor of Engineering. Service Line: Data & Analytics Unit. Responsibilities: A day in the life of an Infoscion: As part of the Infosys consulting team, your primary role will be to actively aid the consulting team in different phases of the project, including problem definition, effort estimation, diagnosis, solution generation, design, and deployment. You will explore alternatives to the recommended solutions based on research that includes literature surveys, information available in the public domain, and vendor evaluation information, and build POCs. You will create requirement specifications from business needs, define the to-be processes, and produce detailed functional designs based on requirements. You will support configuring solution requirements on the products, diagnose the root cause of any issues, seek clarifications, and then identify and shortlist solution alternatives. You will also contribute to unit-level and organizational initiatives with the objective of providing high-quality, value-adding solutions to customers. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you! Additional Responsibilities: Ability to work with clients to identify business challenges and contribute to client deliverables by refining, analyzing, and structuring relevant data. Awareness of the latest technologies and trends. Logical thinking and problem-solving skills, along with an ability to collaborate. Ability to assess current processes, identify improvement areas, and suggest technology solutions. Knowledge of one or two industry domains. Technical and Professional Requirements: Primary skills: Bigdata -> Scala, Bigdata -> Spark, Technology -> Java -> Play Framework, Technology -> Reactive Programming -> Akka. Preferred Skills: Bigdata -> Spark, Bigdata -> Scala, Technology -> Reactive Programming -> Akka, Technology -> Java -> Play Framework.

Posted 1 week ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Bengaluru

Work from Office

Educational Requirements: Bachelor of Engineering. Service Line: Infosys Quality Engineering. Responsibilities: A day in the life of an Infoscion: As part of the Infosys testing team, your primary role will be to develop test plans and prepare effort estimations and schedules for project execution. You will prepare test cases, review test case results, anchor defect prevention activities, and interface with customers for issue resolution. You will ensure effective test execution by reviewing knowledge management activities and adhering to organizational guidelines and processes. Additionally, you will anchor testing requirements, develop test strategies, track and monitor project plans, and prepare solution delivery of projects, along with reviewing test plans, test cases, and test scripts. You will develop project quality plans and validate defect prevention plans. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you! Technical and Professional Requirements: Primary skills: Cloud testing -> AWS Testing, Data Services -> DWT (Data Warehouse Testing) / ETL, Data Services -> TDM (Test Data Management), Data Services -> TDM -> Delphix, Data Services -> TDM -> IBM Optim, Database -> PL/SQL, Package testing -> MDM, Python. Desirable: Bigdata -> Python. Preferred Skills: Technology -> ETL & Data Quality -> ETL & Data Quality - ALL.

Posted 1 week ago

Apply

3.0 - 8.0 years

6 - 10 Lacs

Bengaluru

Work from Office

Educational Requirements: Bachelor of Engineering. Service Line: Data & Analytics Unit. Responsibilities: A day in the life of an Infoscion: As part of the Infosys consulting team, your primary role will be to actively aid the consulting team in different phases of the project, including problem definition, effort estimation, diagnosis, solution generation, design, and deployment. You will explore alternatives to the recommended solutions based on research that includes literature surveys, information available in the public domain, and vendor evaluation information, and build POCs. You will create requirement specifications from business needs, define the to-be processes, and produce detailed functional designs based on requirements. You will support configuring solution requirements on the products, diagnose the root cause of any issues, seek clarifications, and then identify and shortlist solution alternatives. You will also contribute to unit-level and organizational initiatives with the objective of providing high-quality, value-adding solutions to customers. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you! Additional Responsibilities: Ability to work with clients to identify business challenges and contribute to client deliverables by refining, analyzing, and structuring relevant data. Awareness of the latest technologies and trends. Logical thinking and problem-solving skills, along with an ability to collaborate. Ability to assess current processes, identify improvement areas, and suggest technology solutions. Knowledge of one or two industry domains. Technical and Professional Requirements: Primary skills: Bigdata -> Scala, Bigdata -> Spark, Technology -> Java -> Play Framework, Technology -> Reactive Programming -> Akka. Preferred Skills: Bigdata -> Spark, Bigdata -> Scala, Technology -> Reactive Programming -> Akka, Technology -> Java -> Play Framework.

Posted 1 week ago

Apply

5.0 - 10.0 years

25 - 35 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

Job Description: Data Engineer/Lead. Required Minimum Qualifications: Bachelor's degree in computer science, CIS, or a related field; 5-10 years of IT experience in software engineering or a related field; experience on projects involving implementation of the software development life cycle (SDLC). Primary Skills: PySpark, SQL, GCP ecosystem (BigQuery, Cloud Composer, Dataproc). Design and develop data-ingestion frameworks, real-time processing solutions, and data processing and transformation frameworks leveraging open-source tools. Hands-on with technologies such as Kafka, Apache Spark (SQL, Scala, Java), Python, the Hadoop platform, Hive, and Airflow. Experience with GCP Cloud Composer, BigQuery, and Dataproc. Offer system support as part of a support rotation with other team members. Operationalize open-source data-analytic tools for enterprise use. Ensure data governance policies are followed by implementing or validating data lineage, quality checks, and data classification. Understand and follow the company development lifecycle to develop, deploy, and deliver.
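
As an illustration of the kind of ingestion work this role describes, here is a minimal PySpark sketch that reads a table from a relational source and lands it in BigQuery. It assumes a Dataproc-style cluster with the Spark BigQuery connector and a JDBC driver available; the connection details, table names, and bucket are placeholders, not taken from the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical ingestion job: JDBC source -> BigQuery sink (connector assumed on the cluster).
spark = SparkSession.builder.appName("orders_ingest").getOrCreate()

# Read a source table over JDBC (placeholder connection details).
orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://db-host:5432/sales")
    .option("dbtable", "public.orders")
    .option("user", "etl_user")
    .option("password", "etl_password")
    .load()
)

# Light transformation: add an ingestion timestamp and drop obvious bad rows.
cleaned = (
    orders.withColumn("ingested_at", F.current_timestamp())
          .filter(F.col("order_id").isNotNull())
)

# Write to BigQuery via the Spark BigQuery connector (temporary GCS bucket is a placeholder).
(
    cleaned.write.format("bigquery")
    .option("table", "analytics_ds.orders_raw")
    .option("temporaryGcsBucket", "my-etl-temp-bucket")
    .mode("append")
    .save()
)

spark.stop()
```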

Posted 1 week ago

Apply

5.0 - 8.0 years

10 - 20 Lacs

Pune, Chennai, Mumbai (All Areas)

Hybrid

Hello Connections, exciting opportunity alert! We're on the hunt for passionate individuals to join our dynamic team as Data Engineers. Job Profile: Data Engineer. Experience: minimum 6 to maximum 9 years. Location: Chennai / Hyderabad / Bangalore / Gurgaon / Pune. Mandatory Skills: Big Data | Hadoop | PySpark | Spark | Spark SQL | Hive. Qualification: B.Tech / B.E / MCA / computer science background, any specialization. How to apply? Send your CV to sipriyar@sightspectrum.in. Contact number: 6383476138. Don't miss out on this amazing opportunity to accelerate your professional career! #bigdata #dataengineer #hadoop #spark #python #hive #pyspark

Posted 2 weeks ago

Apply

6.0 - 11.0 years

8 - 13 Lacs

Hyderabad

Work from Office

10+ years of software development experience building large-scale distributed data processing systems/applications, data engineering, or large-scale internet systems. At least 4 years of experience developing/leading Big Data solutions at enterprise scale, with at least one end-to-end implementation. Strong experience in programming languages: Java/J2EE/Scala. Good experience with Spark/Hadoop/HDFS architecture, YARN, Confluent Kafka, HBase, Hive, Impala, and NoSQL databases. Experience with batch processing and AutoSys job scheduling and monitoring. Performance analysis, troubleshooting, and resolution (including familiarity with and investigation of Cloudera/Hadoop logs). Work with Cloudera on open issues that would result in cluster configuration changes, then implement as needed. Strong experience with databases such as SQL, Hive, Elasticsearch, HBase, etc. Knowledge of Hadoop security, data management, and governance. Primary Skills: Java/Scala, ETL, Spark, Hadoop, Hive, Impala, Sqoop, HBase, Confluent Kafka, Oracle, Linux, Git, Jenkins CI/CD.

Posted 2 weeks ago

Apply

4.0 - 9.0 years

17 - 27 Lacs

Chennai, Bengaluru

Work from Office

Role & responsibilities • Experience with big data technologies (Hadoop, Spark, Hive) • Proven experience as a development data engineer or similar role, with ETL background. • Experience with data integration / ETL best practices and data quality principles. • Play a crucial role in ensuring the quality and reliability of the data by designing, implementing, and executing comprehensive testing. • By going over the User Stories build the comprehensive code base and business rules for testing and validation of the data. • Knowledge of continuous integration and continuous deployment (CI/CD) pipelines. • Familiarity with Agile/Scrum development methodologies. • Excellent analytical and problem-solving skills. • Strong communication and collaboration skills.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Gurugram, Chennai

Hybrid

We are looking for energetic, high-performing, and highly skilled Java + Big Data Engineers to help shape our technology and product roadmap. You will be part of the fast-paced, entrepreneurial Enterprise Personalization portfolio focused on delivering next-generation global marketing capabilities. This team is responsible for building products that power Merchant Offers personalization for Amex card members. Job Description:
- Demonstrated leadership in designing sustainable software products, setting development standards, automated code review processes, continuous builds, and rigorous testing
- Ability to effectively lead and communicate across third parties, technical and business product managers on solution design
- Primary focus is spent writing code and API specs, conducting code reviews, and testing in ongoing sprints, or doing proofs of concept and automation tools
- Applies visualization and other techniques to fast-track concepts
- Functions as a core member of an Agile team driving user story analysis and elaboration, design and development of software applications, testing, and build automation tools
- Works on a specific platform/product or as part of a dynamic resource pool assigned to projects based on demand and business priority
- Identifies opportunities to adopt innovative technologies
Qualification:
- Bachelor's degree in computer science, computer engineering, another technical discipline, or equivalent work experience
- 7+ years of software development experience
- 3-5 years of experience leading teams of engineers
- Demonstrated experience with Agile or other rapid application development methods
- Demonstrated experience with object-oriented design and coding
- Demonstrated experience with these core technical skills (mandatory): Core Java, Spring Framework, Java EE; Hadoop ecosystem (HBase, Hive, MapReduce, HDFS, Pig, Sqoop, etc.); Spark; relational databases (PostgreSQL / MySQL / DB2, etc.); data serialization techniques (Avro); cloud development (microservices); parallel and distributed (multi-tiered) systems; application design, software development, and automated testing
- Demonstrated experience with these additional technical skills (nice to have): Unix / shell scripting; Python / Scala; message queuing and stream processing (Kafka); Elasticsearch; AJAX tools/frameworks; web services, open API development, and REST concepts
- Experience implementing integrated automated release management using tools/technologies/frameworks such as Maven, Git, code/security review tools, Jenkins, automated testing, and JUnit

Posted 2 weeks ago

Apply

5.0 - 8.0 years

8 - 12 Lacs

Hyderabad

Work from Office

S&P Dow Jones Indices is seeking a Python/Bigdata developer to be a key player in the implementation and support of data Platforms for S&P Dow Jones Indices. This role requires a seasoned technologist who contributes to application development and maintenance. The candidate should actively evaluate new products and technologies to build solutions that streamline business operations. The candidate must be delivery-focused with solid financial applications experience. The candidate will assist in day-to-day support and operations functions, design, development, and unit testing. Responsibilities and Impact: Lead the design and implementation of EMR Spark workloads using Python, including data access from relational databases and cloud storage technologies. Implement new powerful functionalities using Python, Pyspark, AWS and Delta Lake. Independently come up with optimal designs for the business use cases and implement the same using big data technologies. Enhance existing functionalities in Oracle/Postgres procedures, functions. Performance tuning of existing Spark jobs. Respond to technical queries from operations and product management team. Implement new functionalities in Python, Spark, Hive. Enhance existing functionalities in Postgres procedures, functions. Collaborate with cross-functional teams to support data-driven initiatives. Mentor junior team members and promote best practices. Respond to technical queries from the operations and product management team. What Were Looking For: Basic Required Qualifications: Bachelors degree in computer science, Information Systems, or Engineering, or equivalent work experience. 5 - 8 years of IT experience in application support or development. Hands on development experience on writing effective and scalable Python programs. Deep understanding of OOP concepts and development models in Python. Knowledge of popular Python libraries/ORM libraries and frameworks. Exposure to unit testing frameworks like Pytest. Good understanding of spark architecture as the system involves data intensive operations. Good amount of work experience in spark performance tuning. Experience/exposure in Kafka messaging platform. Experience in Build technology like Maven, Pybuilder. Exposure with AWS offerings such as EC2, RDS, EMR, lambda, S3,Redis. Hands on experience in at least one relational database (Oracle, Sybase, SQL Server, PostgreSQL). Hands on experience in SQL queries and writing stored procedures, functions. A strong willingness to learn new technologies. Excellent communication skills, with strong verbal and writing proficiencies. Additional Preferred Qualifications: Proficiency in building data analytics solutions on AWS Cloud. Experience with microservice and serverless architecture implementation.

Posted 2 weeks ago

Apply

6.0 - 9.0 years

20 - 25 Lacs

Hyderabad

Hybrid

Role & re Design, build, and measure complex ELT jobs to process disparate data sources and form a high integrity, high quality, clean data asset. Executes and provides feedback for data modeling policies, procedure, processes, and standards. Assists with capturing and documenting system flow and other pertinent technical information about data, database design, and systems. Develop data quality standards and tools for ensuring accuracy. Work across departments to understand new data patterns. Translate high-level business requirements into technical specs sponsibilities Bachelors degree in computer science or engineering. years of experience with data analytics, data modeling, and database design. years of experience with Vertica. years of coding and scripting (Python, Java, Scala) and design experience. years of experience with Airflow. Experience with ELT methodologies and tools. Experience with GitHub. Expertise in tuning and troubleshooting SQL. Strong data integrity, analytical and multitasking skills. Excellent communication, problem solving, organizational and analytical skills. Able to work independently. Additional / preferred skills: Familiar with agile project delivery process. Knowledge of SQL and use in data access and analysis. Ability to manage diverse projects impacting multiple roles and processes. Able to troubleshoot problem areas and identify data gaps and issues. Ability to adapt to fast changing environment. Experience designing and implementing automated ETL processes. Experience with MicroStrategy reporting tool. Preferred candidate profile

Posted 2 weeks ago

Apply

6.0 - 8.0 years

10 - 15 Lacs

Hyderabad

Hybrid

Mega Walk-in Drive for Senior Software Engineer - Informatica Developer. Your future duties and responsibilities: Job Summary: CGI is seeking a skilled and detail-oriented Informatica Developer to join our data engineering team. The ideal candidate will be responsible for designing, developing, and implementing ETL (Extract, Transform, Load) workflows using Informatica PowerCenter (or Informatica Cloud), as well as optimizing data pipelines and ensuring data quality and integrity across systems. Key Responsibilities: Develop, test, and deploy ETL processes using Informatica PowerCenter or Informatica Cloud. Work with business analysts and data architects to understand data requirements and translate them into technical solutions. Integrate data from various sources, including relational databases, flat files, APIs, and cloud-based platforms. Create and maintain technical documentation for ETL processes and data flows. Optimize existing ETL workflows for performance and scalability. Troubleshoot and resolve ETL and data-related issues in a timely manner. Implement data validation, transformation, and cleansing techniques. Collaborate with QA teams to support data testing and verification. Ensure compliance with data governance and security policies. Required qualifications to be successful in this role: Minimum 6 years of experience with Informatica PowerCenter or Informatica Cloud. Proficiency in SQL and experience with databases such as Oracle, SQL Server, Snowflake, or Teradata. Strong understanding of ETL best practices and data integration concepts. Experience with job scheduling tools such as AutoSys, Control-M, or equivalent. Knowledge of data warehousing concepts and dimensional modeling. Strong problem-solving skills and attention to detail. Excellent communication and teamwork abilities. Good to have: Python or other programming knowledge. Bachelor's degree in computer science, information systems, or a related field. Preferred Qualifications: Experience with cloud platforms such as AWS, Azure, or GCP. Familiarity with Big Data / Hadoop tools (e.g., Spark, Hive) and modern data architectures. Informatica certification is a plus. Experience with Agile methodologies and DevOps practices. Skills: Hadoop, Hive, Informatica, Oracle, Teradata, Unix. Notice Period: 0-45 days. Prerequisites: Aadhaar card copy, PAN card copy, UAN. Disclaimer: Selected candidates will initially be required to work from the office for 8 weeks before transitioning to a hybrid model with 2 days of work from the office each week.

Posted 2 weeks ago

Apply

4.0 - 6.0 years

15 - 25 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

Warm welcome from SP Staffing Services! Reaching out to you regarding a permanent opportunity. Job Description: Experience: 4-6 years. Location: Chennai / Hyderabad / Bangalore / Pune / Bhubaneshwar / Kochi. Skill: PySpark. Implementing data ingestion pipelines from different types of data sources, e.g., databases, S3, and files. Experience in building ETL / data warehouse transformation processes. Developing Big Data and non-Big Data cloud-based enterprise solutions in PySpark and Spark SQL and related frameworks/libraries. Developing scalable, reusable, self-service frameworks for data ingestion and processing. Integrating end-to-end data pipelines to take data from source to target data repositories while ensuring the quality and consistency of data. Processing performance analysis and optimization. Bringing best practices in the following areas: design and analysis, automation (pipelining, IaC), testing, monitoring, and documentation. Experience working with structured and unstructured data. Good to have (knowledge): 1. Experience in cloud-based solutions. 2. Knowledge of data management principles. Interested candidates can share their resume with sangeetha.spstaffing@gmail.com, including the following details inline: Full Name as per PAN, Mobile No, Alt No/WhatsApp No, Total Exp, Relevant Exp in PySpark, Rel Exp in Python, Rel Exp in ETL/Big Data, Current CTC, Expected CTC, Notice Period (Official), Notice Period (Negotiable)/Reason, Date of Birth, PAN Number, Reason for Job Change, Offer in Pipeline (Current Status), Availability for a virtual interview on weekdays between 10 AM and 4 PM (please mention a time), Current Residential Location, Preferred Job Location, whether educational percentages in 10th std, 12th std, and UG are all above 50%, and any gaps in education or career (if so, please mention the duration in months/years).
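
As a purely illustrative companion to the ingestion work this posting describes, the sketch below reads CSV files from S3 with an explicit schema and writes partitioned Parquet to a curated zone. Bucket names, columns, and paths are placeholders, not details from the listing.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

# Hypothetical file-based ingestion: CSV files on S3 -> partitioned Parquet in a curated zone.
spark = SparkSession.builder.appName("s3_csv_ingest").getOrCreate()

# Explicit schema keeps the read deterministic instead of relying on inference.
schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_ts", TimestampType()),
    StructField("country", StringType()),
    StructField("amount", DoubleType()),
])

events = (
    spark.read.option("header", "true")
    .schema(schema)
    .csv("s3a://example-landing-bucket/events/")
)

# Basic quality handling: drop duplicates and rows missing a key, tag the load date.
curated = (
    events.dropDuplicates(["event_id"])
          .filter(F.col("event_id").isNotNull())
          .withColumn("load_date", F.to_date("event_ts"))
)

(
    curated.write.mode("overwrite")
    .partitionBy("load_date")
    .parquet("s3a://example-curated-bucket/events/")
)

spark.stop()
```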

Posted 3 weeks ago

Apply

4.0 - 6.0 years

15 - 25 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

Warm welcome from SP Staffing Services! Reaching out to you regarding a permanent opportunity. Job Description: Experience: 4-6 years. Location: Chennai / Hyderabad / Bangalore / Pune / Bhubaneshwar / Kochi. Skill: PySpark. Implementing data ingestion pipelines from different types of data sources, e.g., databases, S3, and files. Experience in building ETL / data warehouse transformation processes. Developing Big Data and non-Big Data cloud-based enterprise solutions in PySpark and Spark SQL and related frameworks/libraries. Developing scalable, reusable, self-service frameworks for data ingestion and processing. Integrating end-to-end data pipelines to take data from source to target data repositories while ensuring the quality and consistency of data. Processing performance analysis and optimization. Bringing best practices in the following areas: design and analysis, automation (pipelining, IaC), testing, monitoring, and documentation. Experience working with structured and unstructured data. Good to have (knowledge): 1. Experience in cloud-based solutions. 2. Knowledge of data management principles. Interested candidates can share their resume with sangeetha.spstaffing@gmail.com, including the following details inline: Full Name as per PAN, Mobile No, Alt No/WhatsApp No, Total Exp, Relevant Exp in PySpark, Rel Exp in Python, Rel Exp in AWS Glue, Current CTC, Expected CTC, Notice Period (Official), Notice Period (Negotiable)/Reason, Date of Birth, PAN Number, Reason for Job Change, Offer in Pipeline (Current Status), Availability for a virtual interview on weekdays between 10 AM and 4 PM (please mention a time), Current Residential Location, Preferred Job Location, whether educational percentages in 10th std, 12th std, and UG are all above 50%, and any gaps in education or career (if so, please mention the duration in months/years).

Posted 3 weeks ago

Apply

5.0 - 8.0 years

10 - 16 Lacs

Pune, Chennai

Work from Office

Hello Connections, exciting opportunity alert! We're on the hunt for passionate individuals to join our dynamic team as Data Engineers. Job Profile: Data Engineer. Experience: minimum 5 to maximum 8 years. Location: Chennai / Pune. Mandatory Skills: Big Data | Hadoop | PySpark | Spark | Spark SQL | Hive. Qualification: B.Tech / B.E / MCA / computer science background, any specialization. How to apply? Send your CV to sipriyar@sightspectrum.in. Contact number: 6383476138. Don't miss out on this amazing opportunity to accelerate your professional career! #bigdata #dataengineer #hadoop #spark #python #hive #pyspark

Posted 3 weeks ago

Apply

5.0 - 10.0 years

5 - 15 Lacs

Chennai

Hybrid

Role & responsibilities Bigdata, Hadoop, Hive, SQL, Cloudera, Impala, Python, Pyspark Fundamentals of: Big data Cloudera Platform Unix Python Expertise in: SQL/HIVE Pyspark Nice to have: Django/Flask frameworks

Posted 4 weeks ago

Apply

15.0 - 24.0 years

30 - 40 Lacs

Bengaluru

Work from Office

Position Title: Pro Vice-Chancellor (Computer Engineering background). Location: Bengaluru North, Karnataka, India. Role Overview: The Pro Vice-Chancellor (PVC) will play a pivotal role in shaping and leading the academic and research vision of the technology and engineering schools, with a core emphasis on computer science and allied disciplines. The position requires an accomplished academician with strong subject knowledge, technological foresight, and the ability to lead cutting-edge research, foster innovation, and build robust academia-industry linkages. Key Responsibilities: academic and technical leadership; research and innovation leadership; technology incubation and the start-up ecosystem; academic-industry collaboration; digital transformation and smart campus initiatives; internationalization; faculty and talent development; strategic planning and policy implementation. Eligibility Criteria (Mandatory Qualifications): Engineering graduation (B.E. / B.Tech), post-graduation (M.E. / M.Tech), and doctorate (Ph.D.) in any one of the following disciplines only: Computer Science, Information Science, Information Technology, Data Science, or Artificial Intelligence & Machine Learning. Note: Candidates with an engineering graduation in any other specialization will not be considered. Candidates with qualifications such as B.Sc., BCA, MCA, or other non-engineering degrees will also not be eligible. Experience Requirements: Minimum 15 years of academic experience, including teaching, research, and academic administration. Demonstrated leadership in funded research projects, Ph.D. guidance, patents, and high-impact publications. Experience in establishing or leading research labs, innovation centers, or CoEs. Preferred Attributes: Academic qualifications from premier national/international institutions (e.g., IITs, NITs, IIITs, global universities). Strong industry interface with a track record in consulting, technology advisory, or product development. Global exposure through research collaborations, academic visits, or international program management.

Posted 1 month ago

Apply

10.0 - 16.0 years

35 - 60 Lacs

Bengaluru

Hybrid

At least 5 years of experience in a complex business environment or international organisation matrix. Must have experience and knowledge in data governance. Strong IT background, including expertise in big data, cloud technology, monitoring solutions, machine learning (ML), and artificial intelligence (AI). Familiarity with data governance tools such as Collibra (preferred) or similar alternatives. Proven track record in product management, data management, and information technology systems and tools. Experience with the SAFe Agile framework. Knowledge of data analytics/dashboard tools such as Qlik and Microsoft Power BI is a plus. Experience in the travel domain is nice to have.

Posted 1 month ago

Apply

3 - 7 years

3 - 7 Lacs

Bengaluru

Hybrid

Hello everyone, we are hiring for Specialist Software Engineer - Big Data: Snowflake (Snowpark), Scala, Python, and Linux, with 3 to 7 years of experience. If anyone is interested, please share your CV with tjagadishwarachari@primusglobal.com. Thanks & Regards, Thanushree J, Associate - TA, PRIMUS Global Technologies Pvt. Ltd.

Posted 1 month ago

Apply

4 - 7 years

20 - 22 Lacs

Pune, Gurugram

Work from Office

Core skills and competencies:
1. Design, develop, and maintain data pipelines, ETL/ELT processes, and data integrations to support efficient and reliable data ingestion, transformation, and loading.
2. Collaborate with API developers and other stakeholders to understand data requirements and ensure the availability, reliability, and accuracy of the data.
3. Optimize and tune the performance of data processes and workflows to ensure efficient data processing and analysis at scale.
4. Implement data governance practices, including data quality monitoring, data lineage tracking, and metadata management.
5. Work closely with infrastructure and DevOps teams to ensure the scalability, security, and availability of the data platform and data storage systems.
6. Continuously evaluate and recommend new technologies, tools, and frameworks to improve the efficiency and effectiveness of data engineering processes.
7. Collaborate with software engineers to integrate data engineering solutions with other systems and applications.
8. Document and maintain data engineering processes, including data pipeline configurations, job schedules, and monitoring and alerting mechanisms.
9. Stay up to date with industry trends and advancements in data engineering, cloud technologies, and data processing frameworks.
10. Provide mentorship and guidance to junior data engineers, promoting best practices in data engineering and ensuring the growth and development of the team.
11. Able to implement and troubleshoot REST services in Python.

Posted 1 month ago

Apply

4 - 6 years

16 - 30 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

Warm greetings from SP Staffing! Role: PySpark Developer. Experience required: 4 to 6 years. Work location: Hyderabad / Bangalore / Pune / Chennai / Kochi. Required skills: PySpark / Python / Spark SQL / ETL. Interested candidates can send resumes to nandhini.spstaffing@gmail.com.

Posted 1 month ago

Apply

4 - 6 years

15 - 25 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

Warm welcome from SP Staffing Services! Reaching out to you regarding a permanent opportunity. Job Description: Experience: 4-6 years. Location: Chennai / Hyderabad / Bangalore / Pune / Bhubaneshwar / Kochi. Skill: PySpark. Implementing data ingestion pipelines from different types of data sources, e.g., databases, S3, and files. Experience in building ETL / data warehouse transformation processes. Developing Big Data and non-Big Data cloud-based enterprise solutions in PySpark and Spark SQL and related frameworks/libraries. Developing scalable, reusable, self-service frameworks for data ingestion and processing. Integrating end-to-end data pipelines to take data from source to target data repositories while ensuring the quality and consistency of data. Processing performance analysis and optimization. Bringing best practices in the following areas: design and analysis, automation (pipelining, IaC), testing, monitoring, and documentation. Experience working with structured and unstructured data. Good to have (knowledge): 1. Experience in cloud-based solutions. 2. Knowledge of data management principles. Interested candidates can share their resume with sangeetha.spstaffing@gmail.com, including the following details inline: Full Name as per PAN, Mobile No, Alt No/WhatsApp No, Total Exp, Relevant Exp in PySpark, Rel Exp in Python, Rel Exp in ETL/Big Data, Current CTC, Expected CTC, Notice Period (Official), Notice Period (Negotiable)/Reason, Date of Birth, PAN Number, Reason for Job Change, Offer in Pipeline (Current Status), Availability for a virtual interview on weekdays between 10 AM and 4 PM (please mention a time), Current Residential Location, Preferred Job Location, whether educational percentages in 10th std, 12th std, and UG are all above 50%, and any gaps in education or career (if so, please mention the duration in months/years).

Posted 1 month ago

Apply

5 - 8 years

15 - 25 Lacs

Bengaluru

Hybrid

Warm greetings from SP Staffing! Role: PySpark Developer. Experience required: 5 to 8 years. Work location: Bangalore. Required skills: PySpark / SQL. Interested candidates can send resumes to nandhini.s@spstaffing.in.

Posted 1 month ago

Apply
Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
