5.0 - 9.0 years
0 Lacs
Karnataka
On-site
You should have 5+ years of experience with Databricks, Delta Lake, PySpark or Scala Spark, and Unity Catalog; Azure/AWS cloud skills are good to have. You will be responsible for ingesting and transforming batch and streaming data on the Databricks Lakehouse Platform. Excellent communication skills are required.

Your responsibilities will span three areas. First, overseeing and supporting processes: reviewing daily transactions and performance dashboards, supporting the team in improving performance parameters, recording and tracking all queries received, ensuring standard processes are followed, resolving client queries within defined SLAs, developing team members' understanding of processes and products, analyzing call logs, identifying red flags, and escalating serious client issues. Second, handling technical escalations through effective diagnosis and troubleshooting: managing and resolving technical roadblocks, escalating issues when necessary, providing product support and resolution to clients, troubleshooting client queries in a user-friendly manner, offering alternative solutions to retain customers, communicating effectively with clients, and following up with customers to record feedback. Third, building people capability to ensure operational excellence: mentoring and guiding Production Specialists, collating and conducting trainings to bridge skill gaps, staying current with product features, enrolling in trainings as per client requirements, identifying common problems and recommending resolutions, and updating job knowledge through self-learning opportunities and personal networks.

Your performance will be measured on process metrics (cases resolved per day, compliance with process and quality standards, SLA adherence, Pulse score, and customer feedback) and team-management metrics (productivity, efficiency, and absenteeism). Capability development will be measured on completed triages and technical test performance.

Join Wipro, a company focused on digital transformation and reinvention. We are looking for individuals who are inspired by reinvention and want to evolve their careers and skills. Wipro is a place that empowers you to design your own reinvention and realize your ambitions. Applications from people with disabilities are explicitly welcome.
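For context on the core platform skill this posting names, the sketch below shows streaming ingestion into a Delta Lake table with Scala Spark on Databricks. It is a minimal illustration, not part of the posting: the source path, schema, checkpoint location, and table name are all hypothetical.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object DeltaIngest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("delta-streaming-ingest")
      .getOrCreate()

    // Stream JSON events from cloud storage; the path and schema are assumed.
    val events = spark.readStream
      .format("json")
      .schema("id STRING, amount DOUBLE, ts TIMESTAMP")
      .load("/mnt/raw/events")

    // Light transformation: derive a date column for partition pruning.
    val transformed = events.withColumn("event_date", to_date(col("ts")))

    // Write continuously to a Delta table; the checkpoint gives restart resilience.
    transformed.writeStream
      .format("delta")
      .option("checkpointLocation", "/mnt/checkpoints/events")
      .partitionBy("event_date")
      .toTable("bronze.events") // table name resolved via the session catalog
      .awaitTermination()
  }
}
```

The same job handles batch ingestion by swapping readStream/writeStream for read/write; Delta's transaction log keeps both paths consistent.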
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
You should have 3-6 years of experience as a developer on the Palantir Foundry platform, along with a strong understanding of data integration, data modeling, and software development principles. Proficiency in Python, PySpark, and Scala Spark is essential, and experience with SQL and relational databases is a must. Your responsibilities will include designing, developing, and deploying models and applications within the Palantir Foundry platform, integrating data from various sources to ensure the robustness and reliability of data pipelines, and customizing and configuring the platform to meet business requirements. The position is at the Consultant level and is based in Hyderabad, Bangalore, Mumbai, Pune, Chennai, Kolkata, or Gurgaon. The notice period for this role is 0-90 days.
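The multi-source integration work described here is, at its core, ordinary Spark transformation logic. Below is a minimal Scala Spark sketch of the kind of join-and-deduplicate step such a pipeline might contain; it deliberately uses plain Spark rather than Foundry's own transforms API, and the dataset and column names are invented for illustration.

```scala
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

object CustomerIntegration {
  // Join CRM records with billing data, then keep only the most recent
  // row per customer so downstream consumers see one record each.
  def integrate(crm: DataFrame, billing: DataFrame): DataFrame = {
    val latestFirst = Window.partitionBy("customer_id").orderBy(col("updated_at").desc)
    crm.join(billing, Seq("customer_id"), "left")
      .withColumn("rn", row_number().over(latestFirst))
      .filter(col("rn") === 1)
      .drop("rn")
  }
}
```

Keeping the logic in a function of DataFrames means the same code runs unchanged whether the inputs come from a Foundry transform wrapper or a plain SparkSession.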
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
Pune, Maharashtra
On-site
As a Data Analytics focused Senior Software Engineer at PubMatic, you will be responsible for developing advanced AI agents to enhance data analytics capabilities. Your expertise in building and optimizing AI agents, along with strong skills in Hadoop, Spark, Scala, Kafka, Spark Streaming, and cloud-based solutions, will play a crucial role in improving data-driven insights and analytical workflows.

Your key responsibilities will include building a highly scalable big data platform that processes terabytes of data, developing backend services using Java, REST APIs, JDBC, and AWS, and building and maintaining Big Data pipelines using technologies such as Spark, Hadoop, Kafka, and Snowflake. You will also design and implement real-time data processing workflows, develop GenAI-powered agents for analytics and data enrichment, and integrate LLMs into existing services for query understanding and decision support. You will work closely with cross-functional teams to enhance the availability and scalability of large data platforms and PubMatic software functionality. Participating in Agile/Scrum processes, discussing software features with product managers, and providing customer support over email or JIRA will also be part of your role.

We are looking for candidates with 3+ years of coding experience in Java and backend development, solid computer science fundamentals, expertise in software engineering best practices, hands-on experience with Big Data tools, and proven expertise in building GenAI applications. The ability to lead feature development, debug distributed systems, and learn new technologies quickly is essential. Strong interpersonal and communication skills, including technical communications, are highly valued. To qualify for this role, you should have a bachelor's degree in engineering (CS/IT) or an equivalent degree from a well-known institute or university.

PubMatic employees globally have returned to our offices on a hybrid work schedule to maximize collaboration, innovation, and productivity. Our benefits package includes paternity/maternity leave, healthcare insurance, broadband reimbursement, and office perks such as healthy snacks, drinks, and catered lunches.

About PubMatic: PubMatic is a leading digital advertising platform that provides transparent advertising solutions to publishers, media buyers, commerce companies, and data owners. Our vision is to enable content creators to run a profitable advertising business and invest back into the multi-screen and multi-format content that consumers demand.
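To make "real-time data processing workflows" concrete, here is a minimal Structured Streaming sketch in Scala that consumes events from Kafka and counts them per publisher per minute. The broker address, topic, and schema are placeholders, and the console sink stands in for a production sink such as Snowflake or Kafka.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

object ImpressionStream {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("impression-stream").getOrCreate()
    import spark.implicits._

    // Consume ad-impression events from Kafka (broker and topic are hypothetical).
    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "impressions")
      .load()

    // Parse the JSON payload, tolerate 5 minutes of lateness, count per minute.
    val schema = StructType(Seq(
      StructField("publisher_id", StringType),
      StructField("ts", TimestampType)))
    val counts = raw
      .select(from_json($"value".cast("string"), schema).as("e"))
      .select("e.*")
      .withWatermark("ts", "5 minutes")
      .groupBy(window($"ts", "1 minute"), $"publisher_id")
      .count()

    counts.writeStream
      .outputMode("update")
      .format("console") // stand-in sink for this sketch
      .start()
      .awaitTermination()
  }
}
```

The watermark bounds state kept for late events, which is what keeps an always-on aggregation like this stable at terabyte scale.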
Posted 1 month ago
7.0 - 11.0 years
7 - 11 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
- Developing Scala Spark pipelines that are resilient, modular, and tested (see the sketch after this list)
- Helping automate and scale governance through technology enablement
- Enabling users to find the right data for the right use case
- Participating in identifying and proposing solutions to data quality issues and data management solutions
- Supporting the technical implementation of solutions through data pipeline development
- Maintaining technical processes and procedures for data management
- Very good understanding of MS Azure Data Lake and associated setups
- ETL knowledge to build semantic layers for reporting
- Creating and modifying pipelines based on source and target systems
- User and access management, and training end users
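As referenced in the first bullet, "modular and tested" in practice means keeping each transformation as a pure DataFrame-to-DataFrame function, so it can be unit tested against a local SparkSession without any cluster. This sketch is illustrative; the column names and the quarantine rule are assumptions, not part of the posting.

```scala
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions._

object QualityChecks {
  // Flag rows whose mandatory columns are null so they can be quarantined
  // for review rather than silently dropped. Assumes `required` is non-empty.
  def flagIncomplete(df: DataFrame, required: Seq[String]): DataFrame = {
    val isIncomplete = required
      .map(c => col(c).isNull)
      .reduce(_ || _)
    df.withColumn("is_incomplete", isIncomplete)
  }
}
```

A test can then build a small in-memory DataFrame, run flagIncomplete, and assert on the flag column, keeping the pipeline logic verifiable independently of Azure Data Lake connectivity.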
Posted 2 months ago