
8451 Spark Jobs

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 - 8.0 years

8 - 15 Lacs

Pune

Hybrid

Naukri

Role: Developer
Location: Pune (Hybrid)
Excellent communication skills required.
Notice period: immediate joiners to 1 month (only candidates currently serving notice period should apply)
Experience: 3 to 9 years
Mandatory skills (must appear in the roles and responsibilities): Data Platform, Java, Python, Spark, Kafka, cloud technologies (Azure/AWS), Databricks
Interested candidates, share your resume at dipti.bhaisare@in.experis.com

Posted 1 day ago

Apply

1.0 - 4.0 years

1 - 3 Lacs

Salem, Edappadi, Erode

Work from Office

Naukri

We are looking for a highly skilled and experienced Branch Receivable Officer to join our team at Equitas Small Finance Bank. The ideal candidate will have 1-4 years of experience in the BFSI industry, preferably with a background in Assets, Inclusive Banking, SBL, Mortgages, or Receivables.

Roles and Responsibilities:
Manage and oversee branch receivables operations for timely and accurate payments.
Develop and implement strategies to improve receivables management and reduce delinquencies.
Collaborate with cross-functional teams to resolve customer complaints and issues.
Analyze and report on receivables performance metrics to senior management.
Ensure compliance with regulatory requirements and internal policies.
Maintain accurate records and reports of receivables transactions.

Job Requirements:
Strong knowledge of BFSI industry trends and regulations.
Experience in managing assets, inclusive banking, SBL, mortgages, or receivables.
Excellent communication and interpersonal skills.
Ability to work in a fast-paced environment and meet deadlines.
Strong analytical and problem-solving skills.
Proficiency in Microsoft Office and other relevant software applications.

Posted 1 day ago

Apply

1.0 - 4.0 years

1 - 3 Lacs

Puducherry, Mayiladuthurai, Karaikal

Work from Office

Naukri

We are looking for a highly skilled and experienced Branch Receivable Officer to join our team at Equitas Small Finance Bank. The ideal candidate will have 1-4 years of experience in the BFSI industry, preferably with a background in Assets, Inclusive Banking, SBL, Mortgages, or Receivables.

Roles and Responsibilities:
Manage and oversee branch receivables operations for timely and accurate payments.
Develop and implement strategies to improve receivables management and reduce delinquencies.
Collaborate with cross-functional teams to resolve customer complaints and issues.
Analyze and report on receivables performance metrics to senior management.
Ensure compliance with regulatory requirements and internal policies.
Maintain accurate records and reports of receivables transactions.

Job Requirements:
Strong knowledge of BFSI industry trends and regulations.
Experience in managing assets, inclusive banking, SBL, mortgages, or receivables.
Excellent communication and interpersonal skills.
Ability to work in a fast-paced environment and meet deadlines.
Strong analytical and problem-solving skills.
Proficiency in Microsoft Office and other relevant software applications.

Posted 1 day ago

Apply

1.0 - 4.0 years

1 - 3 Lacs

Chidambaram, Mayiladuthurai, Cuddalore

Work from Office

Naukri

We are looking for a highly skilled and experienced Branch Receivable Officer to join our team at Equitas Small Finance Bank. The ideal candidate will have 1-4 years of experience in the BFSI industry, preferably with a background in Assets, Inclusive Banking, SBL, Mortgages, or Receivables.

Roles and Responsibilities:
Manage and oversee branch receivables operations for timely and accurate payments.
Develop and implement strategies to improve receivables management and reduce delinquencies.
Collaborate with cross-functional teams to resolve customer complaints and issues.
Analyze and report on receivables performance metrics to senior management.
Ensure compliance with regulatory requirements and internal policies.
Maintain accurate records and reports of receivables transactions.

Job Requirements:
Strong knowledge of BFSI industry trends and regulations.
Experience in managing assets, inclusive banking, SBL, mortgages, or receivables.
Excellent communication and interpersonal skills.
Ability to work in a fast-paced environment and meet deadlines.
Strong analytical and problem-solving skills.
Proficiency in Microsoft Office and other relevant software applications.

Location: Mayiladuthurai, Chidambaram, Cuddalore, Tittakudi

Posted 1 day ago

Apply

1.0 - 5.0 years

1 - 3 Lacs

Madurai, Dindigul, Oddanchatram

Work from Office

Naukri

We are looking for a highly skilled and experienced Branch Receivable Officer to join our team at Equitas Small Finance Bank. The ideal candidate will have 1-4 years of experience in the BFSI industry, preferably with a background in Assets, Inclusive Banking, SBL, Mortgages, or Receivables.

Roles and Responsibilities:
Manage and oversee branch receivables operations for timely and accurate payments.
Develop and implement strategies to improve receivables management and reduce delinquencies.
Collaborate with cross-functional teams to resolve customer complaints and issues.
Analyze and report on receivables performance metrics to senior management.
Ensure compliance with regulatory requirements and internal policies.
Maintain accurate records and reports of receivables transactions.

Job Requirements:
Strong knowledge of BFSI industry trends and regulations.
Experience in managing assets, inclusive banking, SBL, mortgages, or receivables.
Excellent communication and interpersonal skills.
Ability to work in a fast-paced environment and meet deadlines.
Strong analytical and problem-solving skills.
Proficiency in Microsoft Office and other relevant software applications.

Posted 1 day ago

Apply

9.0 - 14.0 years

32 - 37 Lacs

Bengaluru

Work from Office

Naukri

Shift: (GMT+05:30) Asia/Kolkata (IST)

Must-have skills: Python, Golang, Rust, GCP, Airflow, Docker, containerization, Hadoop, Hive, SQL, Spark, Generative AI, agentic workflows, machine learning (ML)

About the job: Candidates for this position are preferred to be based in Bangalore, India, and will be expected to comply with their team's hybrid work schedule requirements. We are seeking an experienced Senior Machine Learning Manager to lead the Notifications Science team, focused on building intelligent, ML-driven systems for personalized notifications. These systems ensure that we send the right message to the right customer, at the right time, through the right channel (push, email, SMS), and at the right cadence, while balancing incremental revenue with customer engagement health. In this role, you'll be accountable for the technical roadmap, driving innovation to build the next generation of Wayfair's communications ML stack. You'll work closely with a high-performing team of ML scientists and engineers to solve some of Wayfair's most complex challenges in personalization, latency, and scale, with direct impact on customer experience and company revenue.

What You'll Do:
Own the strategy, roadmap, and execution of notification intelligence and automation solutions.
Lead the development of GenAI-powered content generation, send-time optimization, and cross-channel orchestration systems.
Build intelligent systems that drive significant incremental revenue while minimizing customer fatigue and unsubscribes.
Develop and grow technical leadership within the team, modeling a culture of continuous research and innovation.
Collaborate with Engineering and Product teams to scale decisioning systems to millions of notifications daily.
Act as a subject matter expert, providing mentorship and technical guidance across the broader Data Science and Engineering organization.

We Are a Match Because You Have:
Bachelor's or Master's degree in Computer Science, Mathematics, Statistics, or a related field.
9+ years of industry experience, with at least 1-2 years of experience managing teams, and 5+ years as an individual contributor working on production ML systems.
Strategic thinking with a customer-centric mindset and a desire for creative problem-solving, looking to make a big impact in a growing organization.
Demonstrated success influencing senior-level stakeholders on strategic direction, based on recommendations backed by in-depth analysis, and excellent written and verbal communication skills.
Ability to partner cross-functionally to own and shape technical roadmaps and the organizations required to drive them.
Proficiency in one or more programming languages, e.g., Python, Golang, or Rust.

Nice to have:
Experience with GCP, Airflow, and containerization (Docker).
Experience building scalable data processing pipelines with big data tools such as Hadoop, Hive, SQL, Spark, etc.
Experience in Bayesian learning, multi-armed bandits, or reinforcement learning.
Familiarity with Generative AI and agentic workflows.

Posted 1 day ago

Apply

3.0 - 8.0 years

5 - 10 Lacs

Pune

Work from Office

Naukri

Shift: (GMT+05:30) Asia/Kolkata (IST)

Must-have skills: Python, Node.js

Position Overview: We are looking for a highly skilled Senior Backend Developer to join our team. The ideal candidate will bring extensive expertise in backend systems, cloud-native applications, and microservices, along with a strong track record of building scalable systems. If you are passionate about developing robust architectures and driving technical innovation, we'd love to hear from you.

Responsibilities:
Design, develop, and maintain backend systems and cloud-native applications.
Architect and implement scalable microservices using Go, Node.js, or Spring Boot.
Leverage AWS cloud services to build, deploy, and monitor applications.
Optimise systems for high availability, scalability, and performance.
Work with Kafka, Redis, and Spark to manage real-time data pipelines and caching mechanisms.
Design database solutions using MySQL and NoSQL technologies for efficient data storage and retrieval.
Collaborate with cross-functional teams to integrate payment gateways and ensure seamless transaction processing (experience desirable).
Contribute to the architectural design of systems to meet eCommerce and high-scale system demands.
Write and maintain clean, reusable code with Python (desirable but not mandatory).
Drive best practices for CI/CD pipelines and automated deployments.
Mentor junior engineers and actively contribute to the team's technical growth.

Required Qualifications:
3-6 years of experience in software engineering, with a focus on backend development and microservices architecture.
Proficiency in one or more of the following: Go, Node.js, Python, or Spring Boot.
Deep understanding of AWS services (e.g., S3, RDS, Lambda, EC2).
Proven experience in designing systems for scale and high performance.
Hands-on experience with Kafka, Redis, Spark, and other distributed technologies.
Strong knowledge of MySQL and NoSQL databases.
Experience with system architecture design and implementation.
Familiarity with e-commerce platforms is highly desirable.
Experience with payment gateway integration is a plus.
Strong problem-solving skills and the ability to work in fast-paced environments.

Posted 1 day ago

Apply

6.0 - 11.0 years

10 - 20 Lacs

Pune, Gurugram, Bengaluru

Work from Office

Naukri

Job Location: Pune, Bangalore, or Gurugram. Available to join immediately or within a notice period of up to 30 days.

Mandatory skills: Python, Spark, Airflow, SQL, Snowflake

Over 5 years of overall experience in the data engineering and analytics industry.
3+ years of hands-on experience with Python, Apache Spark, and Apache Airflow for building scalable data pipelines and ETL workflows.
Proficient in SQL with strong knowledge of data querying and transformation; experience with Snowflake is a plus.
Solid experience working with both relational (e.g., PostgreSQL, MySQL) and non-relational databases (e.g., MongoDB, Cassandra).
Strong understanding of data modeling principles and the design of both batch and real-time data pipelines.
Proven track record of developing robust, scalable solutions in cloud environments such as AWS, Azure, or GCP.
Well-versed in DevOps practices including CI/CD, infrastructure as code, and containerization.
Experienced in Agile development methodologies with active participation in sprint planning, standups, and retrospectives.

For more information, please share your updated CV at admin@spearheadps.com or contact me via call/WhatsApp at 9899080360.

Posted 1 day ago

Apply

6.0 - 11.0 years

10 - 15 Lacs

Bengaluru

Work from Office

Naukri

Shift: (GMT+05:30) Asia/Kolkata (IST)

Must-have skills: Machine learning (ML), ML architectures and lifecycle, Airflow, Kubeflow, MLflow, Spark, Kubernetes, Docker, Python, SQL, machine learning platforms, BigQuery, GCS, Dataproc, AI Platform, search ranking, deep learning, deep learning frameworks, PyTorch, TensorFlow

About the job: Candidates for this position are preferred to be based in Bangalore, India, and will be expected to comply with their team's hybrid work schedule requirements.

Who We Are: Wayfair's Advertising business is rapidly expanding, adding hundreds of millions of dollars in profits to Wayfair. We are building Sponsored Products, Display, and Video Ad offerings that cater to a variety of advertiser goals while showing highly relevant and engaging ads to millions of customers. We are evolving our Ads Platform to empower advertisers across all sophistication levels to grow their business on Wayfair at a strong, positive ROI, leveraging state-of-the-art machine learning techniques.

What You'll Do:
Provide technical leadership in the development of an automated and intelligent advertising system by advancing the state of the art in machine learning techniques to support recommendations for ad campaigns and other optimizations.
Design, build, deploy, and refine extensible, reusable, large-scale, real-world platforms that optimize our ads experience.
Work cross-functionally with commercial stakeholders to understand business problems or opportunities and develop appropriately scoped machine learning solutions.
Collaborate closely with various engineering, infrastructure, and machine learning platform teams to ensure adoption of best practices in how we build and deploy scalable machine learning services.
Identify new opportunities and insights from the data (where can the models be improved? What is the projected ROI of a proposed modification?).
Research new developments in advertising, sorting, and recommendations research and open-source packages, and incorporate them into our internal packages and systems.
Be obsessed with the customer and maintain a customer-centric lens in how we frame, approach, and ultimately solve every problem we work on.

We Are a Match Because You Have:
Bachelor's or Master's degree in Computer Science, Mathematics, Statistics, or a related field.
6-9 years of industry experience in advanced machine learning and statistical modeling, including hands-on experience designing and building production models at scale.
Strong theoretical understanding of statistical models such as regression and clustering, and of machine learning algorithms such as decision trees, neural networks, etc.
Familiarity with machine learning model development frameworks and with machine learning orchestration and pipelines, with experience in Airflow, Kubeflow, or MLflow, as well as Spark, Kubernetes, Docker, Python, and SQL.
Proficiency in Python or one other high-level programming language.
Solid hands-on expertise deploying machine learning solutions into production.
Strong written and verbal communication skills, the ability to synthesize conclusions for non-experts, and an overall bias towards simplicity.

Nice to have:
Familiarity with machine learning platforms offered by Google Cloud and how to implement them at large scale (e.g., BigQuery, GCS, Dataproc, AI Notebooks).
Experience in computational advertising, bidding algorithms, or search ranking.
Experience with deep learning frameworks like PyTorch, TensorFlow, etc.

Posted 1 day ago

Apply

3.0 - 6.0 years

0 - 0 Lacs

Ahmedabad

Work from Office

Naukri

We are an IT training provider specialising in equipping professionals with cutting-edge skills in data engineering. Our mission is to bridge the talent gap in the tech industry by providing comprehensive training programs that align with current market demands.

Posted 1 day ago

Apply

5.0 - 9.0 years

12 - 14 Lacs

Mumbai, New Delhi, Bengaluru

Work from Office

Naukri

We are seeking a skilled ETL Data Tester to join our dynamic team on a 6-month contract. The ideal candidate will focus on implementing ETL processes, creating comprehensive test suites using Python, and validating data quality through advanced SQL queries. The role involves collaborating with Data Scientists, Engineers, and Software teams to develop and monitor data tools, frameworks, and infrastructure changes. Proficiency in Hive QL, Spark QL, and Big Data concepts is essential. The candidate should also have experience in data testing tools like DBT, iCEDQ, and QuerySurge, along with expertise in Linux/Unix and messaging systems such as Kafka or RabbitMQ. Strong analytical and debugging skills are required, with a focus on continuous automation and integration of data from multiple sources. Location: Remote, Delhi NCR, Bengaluru, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad
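Validating data quality through SQL, as this role describes, often starts with reconciling a target table against its source. A minimal, hypothetical sketch using Python's built-in sqlite3 as a stand-in for a real Hive/Spark warehouse (table and column names are invented for illustration):

```python
import sqlite3

# Hypothetical source and target tables standing in for a real warehouse.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE src_orders (id INTEGER, amount REAL);
    CREATE TABLE tgt_orders (id INTEGER, amount REAL);
    INSERT INTO src_orders VALUES (1, 10.0), (2, 20.5), (3, 7.25);
    INSERT INTO tgt_orders VALUES (1, 10.0), (2, 20.5), (3, 7.25);
""")

def reconcile(conn, src, tgt):
    """Compare row counts and a simple amount checksum between two tables."""
    (src_count,) = conn.execute(f"SELECT COUNT(*) FROM {src}").fetchone()
    (tgt_count,) = conn.execute(f"SELECT COUNT(*) FROM {tgt}").fetchone()
    (src_sum,) = conn.execute(f"SELECT SUM(amount) FROM {src}").fetchone()
    (tgt_sum,) = conn.execute(f"SELECT SUM(amount) FROM {tgt}").fetchone()
    return src_count == tgt_count and abs(src_sum - tgt_sum) < 1e-9

print(reconcile(conn, "src_orders", "tgt_orders"))  # True when source and target agree
```

Tools like iCEDQ and QuerySurge automate this same count-and-checksum pattern across heterogeneous sources.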

Posted 1 day ago

Apply

5.0 - 9.0 years

12 - 16 Lacs

Mumbai, New Delhi, Bengaluru

Work from Office

Naukri

The role involves hands-on experience with data testing, data integration, and supporting data quality in big data environments. Key responsibilities include selecting and integrating data tools and frameworks, providing technical guidance for software engineers, and collaborating with data scientists, data engineers, and other stakeholders. This role requires implementing ETL processes, monitoring performance, advising on infrastructure, and defining data retention policies. Candidates should be proficient in Python, advanced SQL, Hive QL, and Spark QL, with hands-on experience in data testing tools like DBT, iCEDQ, QuerySurge, Denodo, or Informatica. Strong experience with NoSQL, Linux/Unix, and messaging systems (Kafka or RabbitMQ) is also required. Additional responsibilities include troubleshooting, debugging, UAT with business users in Agile environments, and automating tests to increase coverage and efficiency. Location: Chennai, Hyderabad, Pune, Kolkata, Ahmedabad, Remote

Posted Just now

Apply

3.0 - 8.0 years

12 - 14 Lacs

Mumbai, New Delhi, Bengaluru

Work from Office

Naukri

We are seeking a highly skilled Data Engineer with expertise in ETL, PySpark, AWS, and big data technologies. The ideal candidate will have in-depth knowledge of Apache Spark, Python, and Java programming (Java 8 and above, including lambdas, streams, exception handling, collections, etc.). Responsibilities include developing data processing pipelines using PySpark, creating Spark jobs for data transformation and aggregation, and optimizing query performance using file formats like ORC, Parquet, and AVRO. Candidates must also have hands-on experience with Spring Core, Spring MVC, Spring Boot, REST APIs, and cloud services like AWS. This role involves designing scalable pipelines for batch and real-time analytics, performing data enrichment, and integrating with SQL databases. Location: Pan India (Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad)
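As a rough illustration of the transformation-and-aggregation logic such Spark jobs implement, here is a dependency-free Python sketch (record shape and field names are invented; a real job would express this as PySpark DataFrame operations and write ORC/Parquet output):

```python
from collections import defaultdict

# Hypothetical raw events, standing in for rows read from S3/HDFS.
events = [
    {"region": "south", "amount": 120.0},
    {"region": "north", "amount": 75.5},
    {"region": "south", "amount": 30.0},
]

def aggregate_by_region(rows):
    """Equivalent in spirit to df.groupBy("region").sum("amount") in PySpark."""
    totals = defaultdict(float)
    for row in rows:
        totals[row["region"]] += row["amount"]
    return dict(totals)

print(aggregate_by_region(events))  # {'south': 150.0, 'north': 75.5}
```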

Posted Just now

Apply

3.0 - 6.0 years

20 - 30 Lacs

Bengaluru

Work from Office

Naukri

Job Title: Data Engineer II (Python, SQL)
Experience: 3 to 6 years
Location: Bangalore, Karnataka (work from office, 5 days a week)

Role: As a Data Engineer II, you will work on designing, building, and maintaining scalable data pipelines. You'll collaborate across data analytics, marketing, data science, and product teams to drive insights and AI/ML integration using robust and efficient data infrastructure.

Key Responsibilities:
Design, develop, and maintain end-to-end data pipelines (ETL/ELT).
Ingest, clean, transform, and curate data for analytics and ML usage.
Work with orchestration tools like Airflow to schedule and manage workflows.
Implement data extraction using batch, CDC, and real-time tools (e.g., Debezium, Kafka Connect).
Build data models and enable real-time and batch processing using Spark and AWS services.
Collaborate with DevOps and architects for system scalability and performance.
Optimize Redshift-based data solutions for performance and reliability.

Must-Have Skills & Experience:
3+ years in data engineering or data science with strong ETL and pipeline experience.
Expertise in Python and SQL.
Strong experience in data warehousing, data lakes, data modeling, and ingestion.
Working knowledge of Airflow or similar orchestration tools.
Hands-on experience with data extraction techniques such as CDC and batch-based extraction, using Debezium, Kafka Connect, or AWS DMS.
Experience with AWS services: Glue, Redshift, Lambda, EMR, Athena, MWAA, SQS, etc.
Knowledge of Spark or similar distributed systems.
Experience with queuing/messaging systems like SQS, Kinesis, or RabbitMQ.
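The ingest-clean-load flow described above can be sketched end to end. A toy, dependency-free version (function and field names are invented for illustration; a production pipeline would use Airflow tasks, Spark, and a Redshift COPY rather than in-memory lists):

```python
def extract():
    # Stand-in for reading from a source system (API, CDC stream, S3 dump).
    return [{"user_id": "1", "spend": "10.5"}, {"user_id": "2", "spend": "bad"}]

def transform(rows):
    # Clean and type-cast; drop rows that fail validation.
    clean = []
    for row in rows:
        try:
            clean.append({"user_id": int(row["user_id"]), "spend": float(row["spend"])})
        except ValueError:
            continue  # in practice: route to a dead-letter table for inspection
    return clean

def load(rows, warehouse):
    # Stand-in for an INSERT/COPY into the warehouse.
    warehouse.extend(rows)
    return len(rows)

warehouse = []
loaded = load(transform(extract()), warehouse)
print(loaded)  # 1 (the malformed row is dropped)
```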

Posted Just now

Apply

5.0 - 10.0 years

22 - 37 Lacs

Pune, Gurugram, Bengaluru

Hybrid

Naukri

Experience: 5-8 years (Lead, 23 LPA), 8-10 years (Senior Lead, 35 LPA), 10+ years (Architect, 42 LPA) maximum
Location: Bangalore as first preference; Hyderabad, Chennai, Pune, and Gurgaon are also options
Notice: Immediate to a maximum of 15 days
Mode of Work: Hybrid

Job Description: Athena, Step Functions, Spark (PySpark), ETL fundamentals, SQL (basic and advanced), Glue, Python, Lambda, data warehousing, EBS/EFS, AWS EC2, Lake Formation, Aurora, S3, modern data platform fundamentals, PL/SQL, CloudFront

We are looking for an experienced AWS Data Engineer to design, build, and manage robust, scalable, and high-performance data pipelines and data platforms on AWS. The ideal candidate will have a strong foundation in ETL fundamentals, data modeling, and modern data architecture, with hands-on expertise across a broad spectrum of AWS services including Athena, Glue, Step Functions, Lambda, S3, and Lake Formation.

Key Responsibilities:
Design and implement scalable ETL/ELT pipelines using AWS Glue, Spark (PySpark), and Step Functions.
Work with structured and semi-structured data using Athena, S3, and Lake Formation to enable efficient querying and access control.
Develop and deploy serverless data processing solutions using AWS Lambda and integrate them into pipeline orchestration.
Perform advanced SQL and PL/SQL development for data transformation, analysis, and performance tuning.
Build data lakes and data warehouses using S3, Aurora, and Athena.
Implement data governance, security, and access control strategies using AWS tools including Lake Formation, CloudFront, EBS/EFS, and IAM.
Develop and maintain metadata, lineage, and data cataloging capabilities.
Participate in data modeling exercises for both OLTP and OLAP environments.
Work closely with data scientists, analysts, and business stakeholders to understand data requirements and deliver actionable insights.
Monitor, debug, and optimize data pipelines for reliability and performance.

Required Skills & Experience:
Strong experience with AWS data services: Glue, Athena, Step Functions, Lambda, Lake Formation, S3, EC2, Aurora, EBS/EFS, CloudFront.
Proficient in PySpark, Python, SQL (basic and advanced), and PL/SQL.
Solid understanding of ETL/ELT processes and data warehousing concepts.
Familiarity with modern data platform fundamentals and distributed data processing.
Experience in data modeling (conceptual, logical, physical) for analytical and operational use cases.
Experience with orchestration and workflow management tools within AWS.
Strong debugging and performance tuning skills across the data stack.
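A serverless processing step like the ones this listing mentions (a Lambda invoked from Step Functions) is, at its core, just a handler function. A minimal, hypothetical sketch (the event shape and field names are invented, not an actual AWS contract):

```python
import json

def lambda_handler(event, context):
    """Hypothetical Step Functions task: validate a batch manifest and
    report how many records are ready to process."""
    records = event.get("records", [])
    valid = [r for r in records if "s3_key" in r]
    return {
        "statusCode": 200,
        "body": json.dumps({"valid_count": len(valid), "skipped": len(records) - len(valid)}),
    }

# Local invocation with a sample event (the context argument is unused here).
result = lambda_handler({"records": [{"s3_key": "a.parquet"}, {"bad": 1}]}, None)
print(result["body"])  # {"valid_count": 1, "skipped": 1}
```

In a real deployment, Step Functions would pass the state input as `event` and route on the returned payload.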

Posted Just now

Apply

8.0 - 10.0 years

15 - 18 Lacs

Pune

Work from Office

Naukri

Hiring a Solution Architect with 8-10 years of experience in data, AI, and GenAI. Must have strong cloud (AWS/Azure/GCP), LLM, ML (TensorFlow/PyTorch), and full-stack skills. The role involves designing scalable architectures and leading technical teams.

Posted Just now

Apply

7.0 - 12.0 years

12 - 22 Lacs

Pune, Chennai, Bengaluru

Work from Office

Naukri

Dear Candidate, greetings of the day!

Location: Bangalore, Hyderabad, Pune, and Chennai
Experience: 3.5 to 13 years
Key skills: Spark, PySpark, or Scala (any big data skill is fine); all other skills are good to have.

Must-Have:
1. Minimum 3-12 years of experience in the build and deployment of big data applications using SparkSQL and Spark Streaming in Python.
2. Minimum 2 years of extensive experience in the design, build, and deployment of Python-based applications.
3. Design and develop ETL integration patterns using Python on Spark. Develop a framework for converting existing PowerCenter mappings to PySpark (Python and Spark) jobs. Expertise in graph algorithms and advanced recursion techniques. Hands-on experience in generating/parsing XML and JSON documents and REST API requests/responses.

Good-to-Have: Hands-on experience writing complex SQL queries and exporting/importing large amounts of data using utilities.

Posted Just now

Apply

8.0 - 13.0 years

85 - 90 Lacs

Noida

Work from Office

Naukri

About the Role: We are looking for a Staff Engineer to lead the design and development of a scalable, secure, and robust data platform. You will play a key role in building data platform capabilities for data quality, metadata management, lineage tracking, and compliance across all data layers. If you're passionate about building foundational data infrastructure that accelerates innovation in healthcare, we'd love to talk.

A Day in the Life:
Architect, design, and build scalable data governance tools and frameworks.
Collaborate with cross-functional teams to ensure data compliance, security, and usability.
Lead initiatives around metadata management, data lineage, and data cataloging.
Define and evangelize standards and best practices across data engineering teams.
Own the end-to-end lifecycle of governance tooling, from prototyping to production deployment.
Mentor and guide junior engineers and contribute to technical leadership across the organization.
Drive innovation in privacy-by-design, regulatory compliance (e.g., HIPAA), and data observability solutions.

What You Need:
8+ years of experience in software engineering.
Strong experience building distributed systems for metadata management, data lineage, and quality tracking.
Proficiency in backend development (Python, Java, Scala, or Go) and familiarity with RESTful API design.
Expertise in modern data stacks: Kafka, Spark, Airflow, Snowflake, etc.
Experience with open-source data governance frameworks like Apache Atlas, Amundsen, or DataHub is a big plus.
Familiarity with cloud platforms (AWS, Azure, GCP) and their native data governance offerings.
Prior experience building metadata management frameworks at scale.
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
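Lineage tracking of the sort this role describes reduces to maintaining a directed graph from upstream datasets to the datasets derived from them. A toy sketch (dataset names are invented; real systems like DataHub or Apache Atlas model this with far richer metadata):

```python
from collections import defaultdict

class LineageGraph:
    """Minimal upstream -> downstream lineage store."""

    def __init__(self):
        self.downstream = defaultdict(set)

    def add_edge(self, upstream, dataset):
        # Record that `dataset` is derived from `upstream`.
        self.downstream[upstream].add(dataset)

    def impacted_by(self, dataset):
        """All datasets transitively derived from `dataset` (impact analysis)."""
        seen, stack = set(), [dataset]
        while stack:
            for child in self.downstream[stack.pop()]:
                if child not in seen:
                    seen.add(child)
                    stack.append(child)
        return seen

g = LineageGraph()
g.add_edge("raw.events", "staging.events")
g.add_edge("staging.events", "marts.engagement")
print(sorted(g.impacted_by("raw.events")))  # ['marts.engagement', 'staging.events']
```

The `impacted_by` traversal is what powers "what breaks downstream if this table changes?" queries in governance tooling.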

Posted Just now

Apply

7.0 - 12.0 years

22 - 25 Lacs

India

On-site

GlassDoor

TECHNICAL ARCHITECT

Key Responsibilities:
1. Designing technology systems: Plan and design the structure of technology solutions, and work with design and development teams to assist with the process.
2. Communicating: Communicate system requirements to software development teams, explain plans to developers and designers, and communicate the value of a solution to stakeholders and clients.
3. Managing stakeholders: Work with clients and stakeholders to understand their vision for the systems, and manage stakeholder expectations.
4. Architectural oversight: Develop and implement robust architectures for AI/ML and data science solutions, ensuring scalability, security, and performance. Oversee architecture for data-driven web applications and data science projects, providing guidance on best practices in data processing, model deployment, and end-to-end workflows.
5. Problem solving: Identify and troubleshoot technical problems in existing or new systems, and assist with solving technical problems when they arise.
6. Ensuring quality: Ensure systems meet security and quality standards, and monitor systems to ensure they meet both user needs and business goals.
7. Project management: Break down project requirements into manageable pieces of work, and organise the workloads of technical teams.
8. Tool & framework expertise: Utilise relevant tools and technologies, including but not limited to LLMs, TensorFlow, PyTorch, Apache Spark, cloud platforms (AWS, Azure, GCP), web app development frameworks, and DevOps practices.
9. Continuous improvement: Stay current on emerging technologies and methods in AI, ML, data science, and web applications, bringing insights back to the team to foster continuous improvement.

Technical Skills:
1. Proficiency in AI/ML frameworks such as TensorFlow, PyTorch, Keras, and scikit-learn for developing machine learning and deep learning models.
2. Knowledge of or experience working with self-hosted or managed LLMs.
3. Knowledge of or experience with NLP tools and libraries (e.g., spaCy, NLTK, Hugging Face Transformers) and familiarity with computer vision frameworks like OpenCV and related libraries for image processing and object recognition.
4. Experience with or knowledge of back-end frameworks (e.g., Django, Spring Boot, Node.js, Express) and building RESTful and GraphQL APIs.
5. Familiarity with microservices, serverless, and event-driven architectures. Strong understanding of design patterns (e.g., Factory, Singleton, Observer) to ensure code scalability and reusability.
6. Proficiency in modern front-end frameworks such as React, Angular, or Vue.js, with an understanding of responsive design, UX/UI principles, and state management (e.g., Redux).
7. In-depth knowledge of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB, Cassandra), as well as caching solutions (e.g., Redis, Memcached).
8. Expertise in tools such as Apache Spark, Hadoop, Pandas, and Dask for large-scale data processing.
9. Understanding of data warehouses and ETL tools (e.g., Snowflake, BigQuery, Redshift, Airflow) to manage large datasets.
10. Familiarity with visualisation tools (e.g., Tableau, Power BI, Plotly) for building dashboards and conveying insights.
11. Knowledge of deploying models with TensorFlow Serving, Flask, FastAPI, or cloud-native services (e.g., AWS SageMaker, Google AI Platform).
12. Familiarity with MLOps tools and practices for versioning, monitoring, and scaling models (e.g., MLflow, Kubeflow, TFX).
13. Knowledge of or experience in CI/CD, IaC, and cloud-native toolchains.
14. Understanding of security principles, including firewalls, VPCs, IAM, and TLS/SSL for secure communication.
15. Knowledge of API Gateway, service mesh (e.g., Istio), and NGINX for API security, rate limiting, and traffic management.

Experience required: Technical Architect with 7-12 years of experience
Salary: 22-25 LPA (₹2,200,000 - ₹2,500,000 per year)
Job types: Full-time, permanent
Work location: In person
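Of the design patterns the listing names (Factory, Singleton, Observer), Observer is a representative example: components register callbacks with a subject and are notified when an event occurs. A minimal Python sketch (class and method names are generic illustrations, not tied to any framework mentioned above):

```python
class Publisher:
    """Subject: notifies registered observers when an event is published."""

    def __init__(self):
        self._observers = []

    def subscribe(self, callback):
        # Register an observer; any callable accepting the event works.
        self._observers.append(callback)

    def publish(self, event):
        # Notify every observer in registration order.
        for callback in self._observers:
            callback(event)

received = []
bus = Publisher()
bus.subscribe(received.append)                        # observer 1: collect events
bus.subscribe(lambda e: received.append(e.upper()))   # observer 2: transform them
bus.publish("deploy")
print(received)  # ['deploy', 'DEPLOY']
```

The same decoupling idea underlies the event-driven architectures the listing mentions: publishers need not know who consumes their events.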

Posted 2 hours ago

Apply

10.0 years

2 - 8 Lacs

Hyderābād

On-site

GlassDoor

What You'll Do
• Lead the transition to RBAC across Oracle HCM (Core HR, Payroll, Absence, Time, Talent) and downstream systems with complex integrations.
• Architect an end-to-end access governance framework covering application, integration, and data warehouse layers, including Databricks, OAC/OTBI, and third-party data hubs.
• Define and standardize personas, access tiers, and Areas of Responsibility (AOR) with business process owners.
• Partner with data platform and analytics teams to align access policies across structured/unstructured data sources used for reporting, workforce intelligence, and ML.
• Integrate security policies with Okta and identity management tools, ensuring consistent enforcement across apps and data endpoints.
• Enable secure self-service analytics by implementing column- and row-level security within platforms like OTBI and Databricks, ensuring compliance with SOX, GDPR, and HIPAA.
• Manage the security lifecycle for Oracle HCM and connected platforms: provisioning, auditing, change control, and SoD enforcement.
• Serve as the employee and candidate data access security authority, participating in solution design, release planning, and cross-functional governance reviews, consulting with legal, HRBPs, comms, and engineering security where applicable.

Basic Qualifications
• 10+ years of experience in enterprise security, application governance, or architecture roles, with deep expertise in Oracle Fusion HCM and SaaS integration landscapes.
• Proven experience designing and implementing enterprise RBAC frameworks, with hands-on involvement across app and data layers.
• Deep understanding of big data platforms (Databricks, Snowflake, etc.) and how access, classification, and lineage apply in modern data environments.
• Experience with analytics platform security, including OTBI, OAC, and integration with business intelligence tools.
• Familiarity with identity federation and access policy integration via Okta, Azure AD, or similar tools.
• Strong understanding of compliance frameworks (SOX, GDPR, HIPAA) and the ability to translate policies into technical access controls.
• Skilled communicator, capable of aligning technical security strategy with business priorities and presenting to senior leadership.

Preferred Qualifications
• Experience with multi-phase Oracle HCM deployments or Workday-to-Oracle transitions.
• Exposure to data mesh or federated data ownership models.
• Background in data pipeline security and governance, especially in Databricks, Spark, or similar platforms.
• Strong knowledge of RACI, persona-based design, and data domain ownership strategies in global organizations.
• Demonstrated ability to build security into the SDLC, with tools and controls supporting agile SaaS environments.
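The persona and row-level-security model this role describes can be sketched in a few lines. This is a minimal illustration only: the role names, permission strings, and region-based AOR below are hypothetical examples, not Oracle HCM, OTBI, or Okta constructs.

```python
# Minimal sketch of persona-based RBAC with row-level filtering.
# Roles, permissions, and the region-based AOR are hypothetical.

ROLE_PERMISSIONS = {
    "hr_analyst": {"read:employee"},
    "payroll_admin": {"read:employee", "read:salary", "write:salary"},
}

def can(role, action):
    """Return True if the role grants the requested action."""
    return action in ROLE_PERMISSIONS.get(role, set())

def visible_rows(role, aor, rows):
    """Row-level security: restrict results to the user's Area of Responsibility."""
    if not can(role, "read:employee"):
        return []
    return [row for row in rows if row["region"] == aor]

employees = [
    {"id": 1, "region": "EMEA", "salary": 50000},
    {"id": 2, "region": "APAC", "salary": 60000},
]

print(visible_rows("hr_analyst", "EMEA", employees))  # only the EMEA row
print(can("hr_analyst", "read:salary"))               # False: persona lacks salary access
```

In a real deployment the permission map would come from the identity provider and the AOR filter would be enforced in the analytics layer (e.g., as column- and row-level policies), but the check-then-filter shape is the same.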

Posted 2 hours ago

Apply

5.0 - 8.0 years

6 - 9 Lacs

Hyderābād

On-site

GlassDoor logo

About the Role: Grade Level (for internal use): 10

The Team: We seek a highly motivated, enthusiastic, and skilled engineer for our Industry Data Solutions Team. We strive to deliver sector-specific, data-rich, and hyper-targeted solutions for evolving business needs. You will be expected to participate in the design review process, write high-quality code, and work with a dedicated team of QA Analysts and Infrastructure Teams.

The Impact: Enterprise Data Organization is seeking a Software Developer to handle software design, development, and maintenance for data processing applications. This person will be part of a development team that manages and supports the internal and external applications supporting the business portfolio. The role covers data processing and big data application development. Our teams learn to work effectively together while collaborating with the larger group of developers on our platform.

What's in it for you:
• Opportunity to contribute to the development of a world-class Platform Engineering team.
• Engage in a highly technical, hands-on role designed to elevate team capabilities and foster continuous skill enhancement.
• Be part of a fast-paced, agile environment that processes massive volumes of data, ideal for advancing your software development and data engineering expertise while working with a modern tech stack.
• Contribute to the development and support of Tier-1, business-critical applications that are central to operations.
• Gain exposure to and work with cutting-edge technologies, including AWS Cloud and Databricks.
• Grow your career within a globally distributed team, with clear opportunities for advancement and skill development.

Responsibilities:
• Design and develop applications, components, and common services based on development models, languages, and tools, including unit testing, performance testing, monitoring, and implementation.
• Support business and technology teams as necessary during design, development, and delivery to ensure scalable and robust solutions.
• Build data-intensive applications and services to support and enhance fundamental financials in appropriate technologies (C#, .NET Core, Databricks, Spark, Python, Scala, NiFi, SQL).
• Build data models, carry out performance tuning, and apply data architecture concepts.
• Develop applications adhering to secure coding practices and industry-standard coding guidelines, ensuring compliance with security best practices (e.g., OWASP) and internal governance policies.
• Implement and maintain CI/CD pipelines to streamline build, test, and deployment processes; develop comprehensive unit test cases and ensure code quality.
• Provide operations support to resolve issues proactively and with utmost urgency.
• Effectively manage time and multiple tasks.
• Communicate effectively, especially in writing, with the business and other technical groups.

Basic Qualifications:
• Bachelor's/Master's degree in Computer Science, Information Systems, or equivalent.
• Minimum 5 to 8 years of strong hands-on development experience in C#, .NET Core, cloud-native, and MS SQL Server backend development.
• Proficiency with object-oriented programming.
• Nice to have: knowledge of Grafana, Kibana, big data, Kafka, GitHub, EMR, Terraform, AI/ML.
• Advanced SQL programming skills.
• Highly recommended: Databricks, Spark, and Scala.
• Understanding of database performance tuning on large datasets.
• Ability to manage multiple priorities efficiently and effectively within specific timeframes.
• Excellent logical, analytical, and communication skills, with strong verbal and writing proficiencies.
• Knowledge of fundamentals or the financial industry highly preferred.
• Experience in conducting application design and code reviews.
• Proficiency with the following technologies: object-oriented programming, programming languages (C#, .NET Core), cloud computing, database systems (SQL, MS SQL). Nice to have: NoSQL (Databricks, Spark, Scala, Python), scripting (Bash, Scala, Perl, PowerShell).

Preferred Qualifications:
• Hands-on experience with cloud computing platforms including AWS, Azure, or Google Cloud Platform (GCP).
• Proficient in working with Snowflake and Databricks for cloud-based data analytics and processing.

What's In It For You? Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.
Our Values: Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include: Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. 
Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com . S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here . ----------------------------------------------------------- Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf ----------------------------------------------------------- 20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.1 - Middle Professional Tier I (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning) Job ID: 316914 Posted On: 2025-06-23 Location: Hyderabad, Telangana, India

Posted 2 hours ago

Apply

0 years

0 Lacs

India

On-site

GlassDoor logo

Job Description: ARKA ELIIT SCHOOL is looking for a dynamic and knowledgeable Primary GK (General Knowledge) Teacher for Classes 1 to 5 who can spark curiosity and awareness among young learners. The candidate must have excellent English communication skills and the ability to engage children through interactive teaching methods. Telugu-speaking candidates are not preferred for this role. Key Responsibilities: Teach General Knowledge topics in an engaging, age-appropriate, and interactive way Encourage curiosity and awareness of the world, current affairs, environment, and moral values Conduct quizzes, group discussions, and activity-based learning sessions Prepare class materials, worksheets, and assessments Monitor student progress and provide feedback to parents Participate in school assemblies and events focused on current topics and awareness Job Type: Full-time Work Location: In person

Posted 2 hours ago

Apply

8.0 years

6 - 10 Lacs

Hyderābād

On-site

GlassDoor logo

About the Role: Grade Level (for internal use): 11

The Team: Our team is on an exciting journey to build Kensho Spark Assist, S&P Global’s internal conversational AI platform, designed to support colleagues across all departments. We work collaboratively with internal and external partners, using data-driven decisions and continuous improvement to create value. Forward-thinking in nature, we leverage modern generative AI models and cloud services. Our focus is on the creation of scalable systems over customized solutions, all while prioritizing the needs of our stakeholders.

What You Stand to Gain:
• Build a rewarding career with a leading global company in an international team.
• Develop relevant solutions that enhance efficiency and drive innovation across S&P Global's diverse departments.
• Enhance your skills by engaging with enterprise-level products and cutting-edge genAI technologies.
• Work alongside experts in AI and technology, gaining insights and experience that will propel your career forward.

Responsibilities:
• Develop clean, high-quality Python code that is easy to read and maintain.
• Solve complex problems by analyzing and isolating issues efficiently.
• Champion best practices in coding and serve as a subject matter expert.
• Design and implement solutions to support key business needs.
• Engineer components and API functions using Python.
• Produce system design documents and lead technical walkthroughs.
• Collaborate effectively with both technical and non-technical partners to achieve project goals.
• Continuously improve the architecture to enhance system performance and scalability.
• Provide technical guidance and mentorship to team members, fostering a culture of continuous improvement.

Basic Qualifications:
• 8+ years of experience in designing and building solutions using distributed computing.
• Proven experience in implementing and maintaining web applications in large-scale environments.
• Experience working with business stakeholders and users, providing research direction and solution design.
• Experience with CI/CD pipelines to automate the deployment and testing of software.
• Proficient programming skills in high-level languages, particularly Python.
• Solid knowledge of cloud platforms such as Azure and AWS.
• Experience with SQL and NoSQL databases such as Azure Cosmos DB and PostgreSQL.
• Ability to quickly define and prototype solutions with continual iteration within challenging timelines.
• Strong communication and documentation skills for both technical and non-technical audiences.

Preferred Qualifications:
• Generative AI Expertise: Deep understanding of generative AI models, including experience with large language models (LLMs) such as GPT, BERT, and Transformer architectures.
• Embedding Techniques: Proficiency in creating and utilizing embeddings for various applications, including semantic search and recommendation systems.
• Machine Learning and NLP: Experience with machine learning models and natural language processing techniques to enhance AI-driven solutions.
• Vector Search and Retrieval: Familiarity with vector search techniques and embedding models for efficient data retrieval and analysis.
• Cloud Platforms: Knowledge of cloud services such as AWS, Azure, or Google Cloud for deploying and managing AI solutions.
• Collaboration and Leadership: Ability to lead, train, and mentor team members effectively.

20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.2 - Middle Professional Tier II (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning) Job ID: 317163 Posted On: 2025-06-23 Location: Hyderabad, Telangana, India

Posted 2 hours ago

Apply

2.0 years

3 - 15 Lacs

Hyderābād

Remote

GlassDoor logo

About us: At Data Unveil, we believe in delivering the best for our clients (pharma companies). We use the latest technology and tools to aggregate and analyze specialty healthcare data received from various data partners. We provide clear and hassle-free business insights to enhance the client’s vision and drive business success.

Job Description: Position Title: Python Developer Experience: 2 to 5 years Location: Hyderabad, Telangana Hire Type: Full-time, On-site Start Date: Immediate

Job Summary: We are seeking a Python Developer with 2-5 years of experience, skilled in Python, PySpark, Oracle, MySQL, and basic AWS services. The candidate will work on data processing, ETL pipelines, and database integration to support business analytics and reporting.

Key Responsibilities:
• Develop and optimize ETL workflows and data pipelines using Python and PySpark.
• Write and optimize SQL queries for Oracle and MySQL databases.
• Use PySpark for distributed data processing and large-scale transformations.
• Implement data ingestion, transformation, and validation processes.
• Integrate AWS services like S3, Lambda, EC2, RDS, Glue, etc.
• Ensure data integrity, security, and performance optimization.
• Collaborate with data teams to understand requirements.
• Debug and optimize Spark jobs and database queries.
• Use version control (Git) and participate in code reviews.

Required Skills:
• Python
• PySpark
• Linux
• AWS services such as S3, Lambda, EC2, RDS, etc.
• Git and CI/CD practices
• Oracle, MySQL
• Data extraction, transformation, and loading (ETL)

Industry: IT Services and IT Consulting

To know more about us, please visit our website: Data Unveil - Main Page

Job Type: Full-time Pay: ₹340,447.97 - ₹1,511,793.75 per year Benefits: Health insurance, Leave encashment, Life insurance, Provident Fund, Work from home Schedule: Fixed shift, Monday to Friday Supplemental Pay: Yearly bonus

Application Question(s):
• What is your total work experience?
• What is your relevant experience with PySpark?
• How many years of work experience do you have with AWS / AWS services?
• What is your current CTC?
• What is your expected CTC?
• What is your notice period (in days) / LWD?
• What is your current location?
• Are you able to relocate?
• Are you able to work from office?

Work Location: In person
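The extract-transform-load pattern this role centers on can be shown in miniature. The sketch below uses only the standard library (SQLite instead of the Oracle/MySQL/PySpark stack named in the posting), and the record layout is a made-up example; it illustrates the pipeline shape, not the company's implementation.

```python
# Toy ETL pipeline: extract rows, validate/transform them, load into SQLite.
# A stdlib-only stand-in for the PySpark + Oracle/MySQL stack in the posting.
import sqlite3

def extract():
    # Stand-in for reading from a source database or an S3 landing file.
    return [("rx_001", "10.5"), ("rx_002", "n/a"), ("rx_003", "7.25")]

def transform(rows):
    # Validate and convert quantities; drop records that fail parsing.
    out = []
    for rec_id, qty in rows:
        try:
            out.append((rec_id, float(qty)))
        except ValueError:
            continue  # in production, route rejects to an error table instead
    return out

def load(rows, conn):
    conn.execute("CREATE TABLE IF NOT EXISTS claims (id TEXT, qty REAL)")
    conn.executemany("INSERT INTO claims VALUES (?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract()), conn)
print(conn.execute("SELECT COUNT(*) FROM claims").fetchone()[0])  # 2 valid rows loaded
```

In a PySpark version, `transform` would become DataFrame operations distributed across executors, but the ingestion-validation-load stages map one to one.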

Posted 2 hours ago

Apply

0 years

1 - 1 Lacs

India

Remote

GlassDoor logo

Join the Social Media Revolution at Denary Media! Role: Social Media Manager (Fresher) Vibe: Full-Time, In-Office, Start ASAP! Pay: INR 10,000/month (Your creative spark deserves a start!) Where: Denary Media Pvt Ltd, 3rd Floor, Plot No. 15, Rd Number 71, Phase III, Jubilee Hills, Hyderabad, Telangana 500033 Who Are We? Denary Media Pvt Ltd is Hyderabad’s go-to digital marketing powerhouse, crafting scroll-stopping campaigns for brands in medical, gaming, food, and real estate (think KIMS, CARE, and more!). We’re all about bold ideas, viral moments, and epic storytelling. Ready to join our creative crew? What You’ll Do Spark Content Magic: Whip up posts, stories, and reels for Instagram, Facebook, LinkedIn, and Twitter that make followers hit ❤ Ride the Trends: Stay ahead of the curve, from Insta filters to thought pieces, to keep our clients trending. Track the Buzz: Dive into analytics (likes, shares, impressions) to see what’s popping and what’s not. Chat with Fans: Reply to comments and DMs with brand flair, keeping the convo lively. Team Up: Brainstorm with our content and design rockstars to nail client goals. Who You Are Fresh grad (2024/2025) or final-year student in any field (Marketing, Arts, or even Engineering, creativity knows no bounds!). Obsessed with social media (you know Reels > Stories, right?). Words flow like poetry, and your grammar’s on point (English is key). Canva newbie? No worries, we’ll teach you! Photoshop dabbler? Show off! Ready to dive in now and work from our Hyderabad office. Hungry to learn, create, and shine in the digital world. Skills That Pop Content Creation & Caption Wizardry Social Media Savvy (Insta, FB, LinkedIn, Twitter) Basic Analytics (e.g., Instagram Insights) Creativity & Hustle Why Denary? Work with big names like KIMS and MedOne from day one! Grow fast in a fun, creative team that celebrates your ideas. Be part of campaigns that light up Hyderabad’s digital scene. How to Apply: Click on Apply.
P.S.: This isn’t a WFH gig; come vibe with us in-office! Denary Media Pvt Ltd is all about equal vibes. Freshers from every background, bring your spark! Say Hi: HR Crew Email: meghna.hr@denary.agency Website: www.denary.agency Job Types: Full-time, Permanent, Fresher Pay: ₹9,000.00 - ₹10,000.00 per month Benefits: Paid sick time, Paid time off Schedule: Day shift, Fixed shift, Monday to Friday, Weekend availability Ability to commute/relocate: Film Nagar, Hyderabad, Telangana: Reliably commute or planning to relocate before starting work (Preferred) Application Question(s): Are you a fresher? Work Location: In person

Posted 2 hours ago

Apply

Exploring Spark Jobs in India

The demand for professionals with expertise in Spark is on the rise in India. Spark, an open-source distributed computing system, is widely used for big data processing and analytics. Job seekers in India looking to explore opportunities in Spark can find a variety of roles in different industries.

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Chennai
  5. Mumbai

These cities have a high concentration of tech companies and startups actively hiring for Spark roles.

Average Salary Range

The average salary range for Spark professionals in India varies based on experience level:

  • Entry-level: INR 4-6 lakhs per annum
  • Mid-level: INR 8-12 lakhs per annum
  • Experienced: INR 15-25 lakhs per annum

Salaries may vary based on the company, location, and specific job requirements.

Career Path

In the field of Spark, a typical career progression may look like:

  • Junior Developer
  • Senior Developer
  • Tech Lead
  • Architect

Advancing in this career path often requires gaining experience, acquiring additional skills, and taking on more responsibilities.

Related Skills

Apart from proficiency in Spark, professionals in this field are often expected to have knowledge or experience in:

  • Hadoop
  • Java or Scala programming
  • Data processing and analytics
  • SQL databases

Having a combination of these skills can make a candidate more competitive in the job market.

Interview Questions

  • What is Apache Spark and how is it different from Hadoop? (basic)
  • Explain the difference between RDD, DataFrame, and Dataset in Spark. (medium)
  • How does Spark handle fault tolerance? (medium)
  • What is lazy evaluation in Spark? (basic)
  • Explain the concept of transformations and actions in Spark. (basic)
  • What are the different deployment modes in Spark? (medium)
  • How can you optimize the performance of a Spark job? (advanced)
  • What is the role of a Spark executor? (medium)
  • How does Spark handle memory management? (medium)
  • Explain the Spark shuffle operation. (medium)
  • What are the different types of joins in Spark? (medium)
  • How can you debug a Spark application? (medium)
  • Explain the concept of checkpointing in Spark. (medium)
  • What is lineage in Spark? (basic)
  • How can you monitor and manage a Spark application? (medium)
  • What is the significance of the Spark Driver in a Spark application? (medium)
  • How does Spark SQL differ from traditional SQL? (medium)
  • Explain the concept of broadcast variables in Spark. (medium)
  • What is the purpose of the SparkContext in Spark? (basic)
  • How does Spark handle data partitioning? (medium)
  • Explain the concept of window functions in Spark SQL. (advanced)
  • How can you handle skewed data in Spark? (advanced)
  • What is the use of accumulators in Spark? (advanced)
  • How can you schedule Spark jobs using Apache Oozie? (advanced)
  • Explain the process of Spark job submission and execution. (basic)
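Several of the questions above (lazy evaluation, transformations vs. actions, lineage) hinge on one idea: transformations only build an execution plan, and an action triggers the actual work. The sketch below mimics that behaviour with plain Python generators; it is an analogy for interview preparation, not PySpark API code.

```python
# Transformations are lazy (they build a plan); actions trigger execution.
# Plain-Python generator analogy for Spark's evaluation model.

def parallelize(data):
    return iter(data)                      # like sc.parallelize(...)

def map_t(rdd, fn):
    return (fn(x) for x in rdd)            # transformation: lazy

def filter_t(rdd, pred):
    return (x for x in rdd if pred(x))     # transformation: lazy

def collect(rdd):
    return list(rdd)                       # action: forces evaluation

seen = []
def square(x):
    seen.append(x)                         # record when work actually happens
    return x * x

pipeline = filter_t(map_t(parallelize(range(10)), square), lambda x: x % 2 == 0)
print(seen)                 # [] -- only a plan exists, no work done yet
result = collect(pipeline)  # the action evaluates the whole lineage
print(result)               # [0, 4, 16, 36, 64]
```

In real Spark the "plan" is a DAG of RDD/DataFrame lineage that the scheduler optimizes and distributes across executors, which is also what makes fault tolerance possible: a lost partition is recomputed from its lineage.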

Closing Remark

As you explore opportunities in Spark jobs in India, remember to prepare thoroughly for interviews and showcase your expertise confidently. With the right skills and knowledge, you can excel in this growing field and advance your career in the tech industry. Good luck with your job search!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.


Featured Companies