15352 Spark Jobs - Page 50

JobPe aggregates listings for easy access; you apply directly on the original job portal.

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Position: Data Scientist
Location: Chennai, India (Work from Office)
Experience: 2–5 years

About the Opportunity:
Omnihire is seeking a Data Scientist to join a leading AI-driven data-solutions company. As part of the Data Consulting team, you'll collaborate with scientists, IT, and engineering to solve high-impact problems and deliver actionable insights.

Key Responsibilities:
- Analyze large structured and unstructured datasets (SQL, Hadoop/Spark) to extract business-critical insights
- Build and validate statistical models (regression, classification, time-series, segmentation) and machine-learning algorithms (Random Forest, Boosting, SVM, KNN)
- Develop deep-learning solutions (CNN, RNN, LSTM, transfer learning) and apply NLP techniques (tokenization, stemming/lemmatization, NER, LSA)
- Write production-quality code in Python and/or R using libraries such as scikit-learn, TensorFlow/PyTorch, pandas, NumPy, and NLTK/spaCy
- Collaborate with cross-functional teams to scope requirements, propose analytics solutions, and present findings via clear visualizations (Power BI, Matplotlib)
- Own end-to-end ML pipelines: data ingestion → preprocessing → feature engineering → model training → evaluation → deployment (see the sketch below)
- Contribute to solution proposals and maintain documentation for data schemas, model architectures, and experiment tracking (Git, MLflow)

Required Qualifications:
- Bachelor's or Master's in Computer Science, Statistics, Mathematics, Data Science, or a related field
- 2–5 years of hands-on experience as a Data Scientist (or similar) in a data-driven environment
- Proficiency in Python and/or R for statistical modeling and ML
- Strong SQL skills and familiarity with Big Data platforms (e.g., Hadoop, Apache Spark)
- Demonstrated experience building, validating, and deploying ML/DL models in production or staging
- Excellent problem-solving skills, attention to detail, and the ability to communicate technical concepts clearly
- Self-starter who thrives in a collaborative, Agile environment

Nice-to-Have:
- Active GitHub/Kaggle portfolio showcasing personal projects or contributions
- Exposure to cloud-based ML services (Azure ML Studio, AWS SageMaker) and containerization (Docker)
- Familiarity with advanced NLP frameworks (e.g., Hugging Face Transformers) or production monitoring tools (Azure Monitor, Prometheus)

Why Join?
- Work on high-impact AI/ML projects that drive real business value
- Rapid skill development with exposure to cutting-edge technologies
- Collaborative, Agile culture with mentorship from senior data scientists
- Competitive compensation package and comprehensive benefits
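To make the pipeline bullet concrete, here is a minimal sketch of the ingestion → preprocessing → training → evaluation flow in scikit-learn. The CSV path, column names, and model choice are invented for illustration, and features are assumed numeric.

```python
# Minimal sketch of the stages named in the posting:
# ingestion -> preprocessing -> training -> evaluation.
# "customers.csv" and the "label" column are illustrative placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

df = pd.read_csv("customers.csv")            # ingestion
df = df.dropna(subset=["label"])             # basic preprocessing
X, y = df.drop(columns=["label"]), df["label"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = Pipeline([                           # scaling + classifier in one unit
    ("scale", StandardScaler()),
    ("clf", RandomForestClassifier(n_estimators=200, random_state=42)),
])
model.fit(X_train, y_train)                  # training
print(classification_report(y_test, model.predict(X_test)))  # evaluation
```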

Posted 5 days ago

Apply

2.5 years

0 Lacs

Pune, Maharashtra, India

On-site

We're looking for a Software Engineer - .NET. This role is office-based at our Pune office.

As a Software Engineer, you will design and deliver solutions that scale to meet the needs of some of the largest and most innovative organizations in the world. You will work with team members to understand and exceed the expectations of users, constantly pushing the technical envelope and helping Cornerstone deliver great results. Working in an agile software development framework focused on development sprints and regular release cycles, you'll own the complete feature story and mentor juniors.

In this role, you will…
- Design, develop, and enhance .NET applications and services for legacy and cloud platforms, using ASP.NET, C#, .NET, React, and CI/CD tools
- Analyze product and technical user stories and convey technical specifications in a concise and effective manner
- Code and deliver working deliverables with a 'first time right' approach
- Contribute to architectural decisions and participate in designing robust, scalable solutions
- Troubleshoot and resolve complex production issues, deliver detailed root cause analysis (RCA), and collaborate with global Engineering, Product, and Release teams
- Participate in sprint planning and technical design reviews; provide input as appropriate
- Partner with engineers, product managers, and other team members as appropriate
- Continuously expand and maintain deep knowledge of our products and technologies

You've Got What It Takes If You Have…
- Bachelor's/Master's in Computer Science or a related field
- 2.5+ years' hands-on experience with ASP.NET, C#, and .NET
- Basic exposure to generative AI and familiarity with AI tools and their applications
- Strong grounding in OOP and SOLID design principles
- Strong skills in analyzing, debugging, and troubleshooting functional and technical issues
- Proficient experience with relational databases such as Microsoft SQL Server or PostgreSQL; able to optimize designs/queries for scale
- Proven experience developing microservices and RESTful services
- Strong TDD skills, with experience in unit testing frameworks like NUnit or xUnit
- Proficiency with ORMs such as Entity Framework or NHibernate
- Good understanding of secure development practices; proactively codes to avoid security issues and can resolve all security findings
- Excellent analytical, quantitative, and problem-solving abilities
- Conversance with algorithms, software design patterns, and their best usage
- Good understanding of how to deal with concurrency and parallel work streams
- Self-motivation, requiring minimal oversight
- Effective teamwork, strong communication skills, and an ability to manage multiple priorities
- Passion for continuous learning and technology improvement

Good to have:
- Exposure to modern JavaScript frameworks like Angular or React
- Exposure to non-relational DBs like MongoDB
- Experience developing RESTful services, or other SOA development experience (preferably AWS)

Our Culture:
Spark Greatness. Shatter Boundaries. Share Success. Are you ready? Because here, right now, is where the future of work is happening. Where curious disruptors and change innovators like you are helping communities and customers enable everyone, anywhere, to learn, grow, and advance. To be better tomorrow than they are today.

Who We Are:
Cornerstone powers the potential of organizations and their people to thrive in a changing world. Cornerstone Galaxy, the complete AI-powered workforce agility platform, meets organizations where they are. With Galaxy, organizations can identify skills gaps and development opportunities, retain and engage top talent, and provide multimodal learning experiences to meet the diverse needs of the modern workforce. More than 7,000 organizations and 100 million+ users in 180+ countries and nearly 50 languages use Cornerstone Galaxy to build high-performing, future-ready organizations and people today. Check us out on LinkedIn, Comparably, Glassdoor, and Facebook!

Posted 5 days ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Bengaluru, Karnataka

Work from Office

Data Governance

This team is the central data governance team for Kotak Bank, managing metadata platforms, data privacy, data security, data stewardship, and the data quality platform. If you have the right data skills and are ready to build data lake solutions from scratch for high-concurrency systems involving multiple source systems, then this is the team for you.

Your day-to-day role will include:
- Drive business decisions with technical input and lead the team
- Design, implement, and support a data infrastructure from scratch
- Manage AWS resources, including EC2, EMR, S3, Glue, Redshift, and MWAA
- Extract, transform, and load data from various sources using SQL and AWS big data technologies (see the Airflow-style sketch below)
- Explore and learn the latest AWS technologies to enhance capabilities and efficiency
- Collaborate with data scientists and BI engineers to adopt best practices in reporting and analysis
- Improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers
- Build data platforms, data pipelines, or data management and governance tools

BASIC QUALIFICATIONS for Data Engineer:
- Bachelor's degree in Computer Science, Engineering, or a related field
- 3-5 years of experience in data engineering
- Strong understanding of AWS technologies, including S3, Redshift, Glue, and EMR
- Experience with data pipeline tools such as Airflow and Spark
- Experience with data modeling and data quality best practices
- Excellent problem-solving and analytical skills
- Strong communication and teamwork skills
- Experience in at least one modern scripting or programming language, such as Python, Java, or Scala
- Strong advanced SQL skills

PREFERRED QUALIFICATIONS:
- AWS cloud technologies: Redshift, S3, Glue, EMR, Kinesis, Firehose, Lambda, IAM, Airflow
- Prior experience in the Indian banking segment and/or fintech is desired
- Experience with non-relational databases and data stores
- Building and operating highly available, distributed data processing systems for large datasets
- Professional software engineering and best practices for the full software development life cycle
- Designing, developing, and implementing different types of data warehousing layers
- Leading the design, implementation, and successful delivery of large-scale, critical, or complex data solutions
- Building scalable data infrastructure and understanding distributed systems concepts
- SQL, ETL, and data modelling
- Ensuring the accuracy and availability of data to customers
- Proficiency in at least one scripting or programming language for handling large-volume data processing
- Strong presentation and communication skills

For Managers:
- Customer centricity and obsession for the customer
- Ability to manage stakeholders (product owners, business stakeholders, cross-functional teams) and coach agile ways of working
- Ability to structure and organize teams and streamline communication
- Prior work experience executing large-scale data engineering projects
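As a rough illustration of the extract → transform → load orchestration described above, here is a minimal Airflow 2.x-style DAG. The DAG id, schedule, and task bodies are placeholders, not the bank's actual pipeline.

```python
# Illustrative Airflow DAG for a daily extract -> transform -> load flow.
# Task bodies are stubs; a real pipeline would pull from S3, conform records,
# and load into Redshift (all hypothetical here).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    """Placeholder: pull raw files from a source bucket."""

def transform():
    """Placeholder: clean and conform records for the warehouse."""

def load():
    """Placeholder: load conformed records into a staging table."""

with DAG(
    dag_id="daily_governance_etl",   # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_transform >> t_load   # linear dependency chain
```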

Posted 5 days ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Hello,

FCM, part of FTCG, is one of the world's largest travel management companies and a trusted partner for national and multinational companies. With a 24/7 reach in 97 countries, FCM's flexible technology anticipates and solves client needs, supported by experts who provide in-depth local knowledge and duty of care as part of the ultimate personalised business travel experience. As part of the ASX-listed Flight Centre Travel Group, FCM delivers the best market-wide rates, unique added-value benefits, and exclusive solutions. Winner of the World's Leading Travel Management Company Award at the WTM for nine consecutive years (2011-2019), FCM is constantly transforming the business of travel through its empowered and accountable people who deliver 24/7 service and are available online and offline. FCM has won the coveted Great Place to Work certification for the fifth time! FCM Travel India is one of India's Top 100 Great Mid-size Workplaces 2024 and the Best in Professional Services. A leader in the travel tech space, FCM has proprietary client solutions. FCM provides specialist services via FCM Consulting and FCM Meetings & Events.

Key Responsibilities:
- Design and develop AI solutions that address real-world business challenges, ensuring alignment with strategic objectives and measurable outcomes
- Work with large-scale structured and unstructured datasets, leveraging modern data frameworks, tools, and platforms
- Establish and maintain robust standards for data security, privacy, and regulatory compliance across all AI and data workflows
- Collaborate closely with cross-functional teams to gather requirements, share insights, and deliver high-impact solutions
- Monitor and maintain production AI systems to ensure continued accuracy, scalability, and reliability over time
- Stay up to date with the latest advancements in AI, machine learning, and data engineering, and apply them where relevant
- Write clean, well-documented, and maintainable code, and actively contribute to team best practices and technical documentation

You'll Be Perfect For The Role If You Have:
- Bachelor's or Master's degree in Computer Science, Data Science, or a related field
- Strong programming skills in Python (preferred) and experience with AI/ML libraries such as TensorFlow, PyTorch, scikit-learn, or Hugging Face
- Experience designing and deploying machine learning models and AI systems in production environments
- Familiarity with modern data platforms and cloud services (e.g., Azure, AWS, GCP), including AutoML and MLflow
- Proficiency with data processing tools and frameworks (e.g., Spark, Pandas, SQL) and experience working with both structured and unstructured data
- Experience with Generative AI technologies, including prompt engineering, vector databases, and RAG (Retrieval-Augmented Generation) pipelines (see the retrieval sketch below)
- Solid understanding of data security, privacy, and compliance principles, with experience implementing these in real-world projects
- Strong problem-solving skills and the ability to translate complex business problems into technical solutions
- Excellent communication and collaboration skills, with the ability to work effectively across technical and non-technical teams
- Experience with version control (e.g., Git) and agile development practices
- Enthusiasm for learning and applying emerging technologies in AI and machine learning

Work Perks! - What's in it for you:
FCTG is renowned internationally for having amazing perks and an even better culture. We understand that our people are our most valuable asset. It is the passion and dedication of our teams that keep the company on top of the industry ladder. It's also why we offer some great employee benefits and perks outside of the norm. You will be rewarded with a competitive market salary and equipped with relevant training courses and tools to set you up for success, with endless career advancement and job opportunities all over the world.
- Market-aligned remuneration structure and a highly competitive salary
- Fun and energetic culture: at the heart of everything we do at FCM is a desire to have fun and be yourself
- Work-life balance: we believe in "No Leave = No Life", so have your own travel adventures with paid annual leave
- Great place to work: recognized as a top workplace for 5 consecutive years, a testimonial to our commitment to our people
- Wellbeing focus: we take care of our employees with comprehensive medical coverage, accidental insurance, and term insurance
- Paternity leave: we ensure that you can spend quality time with your growing family
- Travel perks: you'll have access to plenty of industry discounts to ensure you continue to broaden your horizons
- A career, not a job: we believe in our people's brightness of future. As a high-growth company, you will have the opportunity to advance your career in any direction you choose, whether that is locally or globally
- Reward & recognition: celebrate the success of yourself and others at our regular Buzz Nights and at the annual Global Gathering - you'll have to experience it to believe it!
- Love for travel: we were founded by people who wanted to travel and want others to do the same. That passion is something you can't miss in our people or service.

We value you... #FCMIN

Flight Centre Travel Group is committed to creating an inclusive and diverse workplace that supports your unique identity to create better, safer experiences for everyone. We encourage you to come as you are, to foster inclusivity and collaboration. We celebrate you.

Who We Are...
Since our beginning, our vision has always been to open up the world for those who want to see. As a global travel retailer, our people come from all different backgrounds, and our connections spread to the far reaches of the globe - 20+ countries and counting! Together, we are a family (we call ourselves Flighties). We offer genuine opportunities for people to grow and evolve. We embrace new experiences, celebrate the wins, seize all opportunities, and empower all of our people to find their Brightness of Future. We encourage you to DREAM BIG through collaboration and innovation, and make sure you are supported to make incredible ideas a reality. Together, we deliver quality, innovative solutions that delight our customers and achieve our strategic priorities. Irreverence. Ownership. Egalitarianism.
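For context on the RAG pipelines mentioned in the requirements, a toy sketch of the retrieval step follows. The embed() function is a deliberately crude stand-in for a real embedding model, and the documents and query are invented.

```python
# Toy sketch of RAG retrieval: embed documents and a query, rank by cosine
# similarity, and assemble top hits into a prompt for a generator model.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder embedding: hash characters into a fixed-size unit vector.
    # A real pipeline would call an embedding model instead.
    vec = np.zeros(64)
    for i, ch in enumerate(text.lower()):
        vec[(i + ord(ch)) % 64] += 1.0
    return vec / (np.linalg.norm(vec) + 1e-9)

docs = [
    "Refund policy for corporate travel",
    "Visa requirements for the UK",
    "How to rebook a cancelled flight",
]
doc_vecs = np.stack([embed(d) for d in docs])

query = "rebooking after cancellation"
scores = doc_vecs @ embed(query)            # cosine similarity (unit vectors)
top = [docs[i] for i in np.argsort(scores)[::-1][:2]]
prompt = f"Context:\n{chr(10).join(top)}\n\nQuestion: {query}"
print(prompt)                               # would be passed to an LLM
```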

Posted 5 days ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

- Experience in SQL and understanding of ETL best practices
- Good hands-on experience in ETL/Big Data development
- Extensive hands-on experience in Scala
- Experience with Spark/YARN and troubleshooting Spark, Linux, and Python
- Setting up a Hadoop cluster; backup, recovery, and maintenance
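A brief sketch of the Spark-on-YARN configuration and troubleshooting theme above. The role is Scala-heavy, but PySpark is used here for brevity; the resource values and path are illustrative, not recommendations.

```python
# Building a SparkSession with explicit resource settings - the kind of knobs
# typically inspected when troubleshooting Spark jobs on YARN.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("etl-troubleshooting-sketch")
    .master("yarn")                                  # submit to a YARN cluster
    .config("spark.executor.memory", "4g")           # illustrative values
    .config("spark.executor.cores", "2")
    .config("spark.sql.shuffle.partitions", "200")
    .getOrCreate()
)

df = spark.read.parquet("/data/raw/events")          # hypothetical HDFS path
df.groupBy("event_type").count().explain()           # inspect the physical plan
```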

Posted 5 days ago

Apply

0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Established in 2004, OLIVER is the world's first and only specialist in designing, building, and running bespoke in-house agencies and marketing ecosystems for brands. We partner with over 300 clients in 40+ countries and counting. Our unique model drives creativity and efficiency, allowing us to deliver tailored solutions that resonate deeply with audiences. As a part of The Brandtech Group, we're at the forefront of leveraging cutting-edge AI technology to revolutionise how we create and deliver work. Our AI solutions enhance efficiency, spark creativity, and drive insightful decision-making, empowering our teams to produce innovative and impactful results.

Role: Account Lead
Location: Mumbai, India

About the role:
Working in true collaboration with our client, we have one goal in mind: "to be the core craft and digital partner of our clients, powering global and regional creative excellence, achieved by elevating our people, expertise and culture to deliver shared success."

The Account Lead is the client's partner to bounce things around with and to own the relationship, and they are the agency leader responsible for projects, the team resource required, and the work to be delivered. They think ahead, spotting problems before they occur, managing client and internal expectations, supporting their team both managing up and down, and contingency planning as they go. They develop close relationships and a sense of partnership with their clients and foster trust based on their ability to deliver. This trust enables them to develop, nurture, and protect the best work possible through the client process. Internally they have strong relationships with all team members and a thorough understanding of how to lead their team to create great work and help problem-solve when needed. They work in partnership with both Strategy and Creative, and are fluent in the strategic debates regarding their brands. They are passionate about our creative product and know what best-in-class looks like, both in terms of creativity and results.

What you will be doing:
- Manage projects with the client from beginning to end
- Provide leadership and expertise to the onsite creative team at Unilever
- Schedule and manage team priorities and deadlines across client projects
- Champion a lasting and strategic partnership that cultivates a client experience to engage and delight
- Financial accountability: stellar project management, fully responsible for financial management of jobs/accounts, including forecasting
- Process development and fulfilment: maintaining ongoing communications, both internal and external, to keep processes and resources streamlined
- Brand guardianship
- Present your work internally and to clients, and manage workloads within agreed timings
- Resource management: working alongside the Creative Director to ensure you have the right people at the right time to deliver to client/project needs

What you need to be great in this role:
- Excellent client engagement skills, with the ability to proactively organise and influence clients and build strong and effective working relationships
- Passion for deep-diving into a client's business to get under the skin of it and fully understand their brands, products, and ways of working
- The ability to manage and filter workflow as well as organise and prioritise workloads to maximise productivity
- A strong understanding of, and experience working with, end-to-end digital creative solutions, particularly across social media, eCommerce, and social commerce
- Knowledge of account management, project management, and invoicing
- Highly creative, with the ability to generate ideas and practically contribute to studio output
- Ambition to push for the best and create award-winning work
- Understanding of how to integrate with a client-side team while maintaining top-tier agency service
- Experience working with major personal skincare clients, as well as beauty or cosmetic brands, a huge bonus
- Embodies the "can-do attitude" and is seen as a constant positive force on the team

Posted 5 days ago

Apply

6.0 years

0 Lacs

Mohali district, India

On-site

Job Title: DevOps/MLOps Expert
Location: Mohali (On-Site)
Employment Type: Full-Time
Experience: 6+ years
Qualification: B.Tech CSE

About the Role
We are seeking a highly skilled DevOps/MLOps Expert to join our rapidly growing AI-based startup building and deploying cutting-edge enterprise AI/ML solutions. This is a critical role that will shape our infrastructure and deployment pipelines and scale our ML operations to serve large-scale enterprise clients. As our DevOps/MLOps Expert, you will be responsible for bridging the gap between our AI/ML development teams and production systems, ensuring seamless deployment, monitoring, and scaling of our ML-powered enterprise applications. You'll work at the intersection of DevOps, machine learning, and data engineering in a fast-paced startup environment with enterprise-grade requirements.

Key Responsibilities

MLOps & Model Deployment
• Design, implement, and maintain end-to-end ML pipelines from model development to production deployment
• Build automated CI/CD pipelines specifically for ML models using tools like MLflow, Kubeflow, and custom solutions
• Implement model versioning, experiment tracking, and model registry systems (see the MLflow sketch below)
• Monitor model performance, detect drift, and implement automated retraining pipelines
• Manage feature stores and data pipelines for real-time and batch inference
• Build scalable ML infrastructure for high-volume data processing and analytics

Enterprise Cloud Infrastructure & DevOps
• Architect and manage cloud-native infrastructure with a focus on scalability, security, and compliance
• Implement Infrastructure as Code (IaC) using Terraform, CloudFormation, or Pulumi
• Design and maintain Kubernetes clusters for containerized ML workloads
• Build and optimize Docker containers for ML applications and microservices
• Implement comprehensive monitoring, logging, and alerting systems
• Manage secrets, security, and enterprise compliance requirements

Data Engineering & Real-time Processing
• Build and maintain large-scale data pipelines using Apache Airflow, Prefect, or similar tools
• Implement real-time data processing and streaming architectures
• Design data storage solutions for structured and unstructured data at scale
• Implement data validation, quality checks, and lineage tracking
• Manage data security, privacy, and enterprise compliance requirements
• Optimize data processing for performance and cost efficiency

Enterprise Platform Operations
• Ensure high availability (99.9%+) and performance of enterprise-grade platforms
• Implement auto-scaling solutions for variable ML workloads
• Manage multi-tenant architecture and data isolation
• Optimize resource utilization and cost management across environments
• Implement disaster recovery and backup strategies
• Build 24x7 monitoring and alerting systems for mission-critical applications

Required Qualifications

Experience & Education
• 4-8 years of experience in DevOps/MLOps, with at least 2+ years focused on enterprise ML systems
• Bachelor's/Master's degree in Computer Science, Engineering, or a related technical field
• Proven experience with enterprise-grade platforms or large-scale SaaS applications
• Experience with high-compliance environments and enterprise security requirements
• Strong background in data-intensive applications and real-time processing systems

Technical Skills

Core MLOps Technologies
• ML Frameworks: TensorFlow, PyTorch, scikit-learn, Keras, XGBoost
• MLOps Tools: MLflow, Kubeflow, Metaflow, DVC, Weights & Biases
• Model Serving: TensorFlow Serving, PyTorch TorchServe, Seldon Core, KFServing
• Experiment Tracking: MLflow, Neptune.ai, Weights & Biases, Comet

DevOps & Cloud Technologies
• Cloud Platforms: AWS, Azure, or GCP, with relevant certifications
• Containerization: Docker, Kubernetes (CKA/CKAD preferred)
• CI/CD: Jenkins, GitLab CI, GitHub Actions, CircleCI
• IaC: Terraform, CloudFormation, Pulumi, Ansible
• Monitoring: Prometheus, Grafana, ELK Stack, Datadog, New Relic

Programming & Scripting
• Python (advanced) as the primary language for ML operations and automation
• Bash/shell scripting for automation and system administration
• YAML/JSON for configuration management and APIs
• SQL for data operations and analytics
• Basic understanding of Go or Java (an advantage)

Data Technologies
• Data Pipeline Tools: Apache Airflow, Prefect, Dagster, Apache NiFi
• Streaming & Real-time: Apache Kafka, Apache Spark, Apache Flink, Redis
• Databases: PostgreSQL, MongoDB, Elasticsearch, ClickHouse
• Data Warehousing: Snowflake, BigQuery, Redshift, Databricks
• Data Versioning: DVC, LakeFS, Pachyderm

Preferred Qualifications

Advanced Technical Skills
• Enterprise Security: experience with enterprise security frameworks and compliance (SOC 2, ISO 27001)
• High-scale Processing: experience with petabyte-scale data processing and real-time analytics
• Performance Optimization: advanced system optimization, distributed computing, caching strategies
• API Development: REST/GraphQL APIs, microservices architecture, API gateways

Enterprise & Domain Experience
• Previous experience with enterprise clients or B2B SaaS platforms
• Experience with compliance-heavy industries (finance, healthcare, government)
• Understanding of data privacy regulations (GDPR, SOX, HIPAA)
• Experience with multi-tenant enterprise architectures

Leadership & Collaboration
• Experience mentoring junior engineers and technical team leadership
• Strong collaboration with data science teams, product managers, and enterprise clients
• Experience with agile methodologies and enterprise project management
• Understanding of business metrics, SLAs, and enterprise ROI

Growth Opportunities
• Career Path: clear progression to Lead DevOps Engineer or Head of Infrastructure
• Technical Growth: work with cutting-edge enterprise AI/ML technologies
• Leadership: opportunity to build and lead the DevOps/Infrastructure team
• Industry Exposure: work with government and MNC enterprise clients and cutting-edge technology stacks

Success Metrics & KPIs

Technical KPIs
• System Uptime: maintain 99.9%+ availability for enterprise clients
• Deployment Frequency: enable daily deployments with zero downtime
• Performance: ensure optimal response times and system performance
• Cost Optimization: achieve 20-30% annual infrastructure cost reduction
• Security: zero security incidents and full compliance adherence

Business Impact
• Time to Market: reduce deployment cycles and improve development velocity
• Client Satisfaction: maintain 95%+ enterprise client satisfaction scores
• Team Productivity: improve engineering team efficiency by 40%+
• Scalability: support rapid client-base growth without infrastructure constraints

Why Join Us
Be part of a forward-thinking, innovation-driven company with a strong engineering culture. Influence high-impact architectural decisions that shape mission-critical systems. Work with cutting-edge technologies and a passionate team of professionals. Competitive compensation, flexible working environment, and continuous learning opportunities.

How to Apply
Please submit your resume and a cover letter outlining your relevant experience and how you can contribute to Aaizel Tech Labs' success. Send your application to hr@aaizeltech.com, bhavik@aaizeltech.com, or anju@aaizeltech.com.
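As a hedged illustration of the experiment-tracking and model-registry responsibilities above, a minimal MLflow run might look like the sketch below. The experiment name, model, and parameters are placeholders, not this team's actual setup.

```python
# Minimal MLflow experiment-tracking sketch: train a model, log its parameters
# and metrics, and log the model artifact (a candidate for a model registry).
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=0)  # synthetic data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

mlflow.set_experiment("demo-classifier")        # hypothetical experiment name
with mlflow.start_run():
    model = GradientBoostingClassifier(n_estimators=100)
    model.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    mlflow.log_param("n_estimators", 100)       # experiment tracking
    mlflow.log_metric("accuracy", acc)
    mlflow.sklearn.log_model(model, "model")    # artifact for the registry
```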

Posted 5 days ago

Apply

2.0 - 6.0 years

3 - 7 Lacs

Gurugram

Work from Office

We are looking for a PySpark Developer who loves solving complex problems across a full spectrum of technologies. You will help ensure our technological infrastructure operates seamlessly in support of our business objectives.

Responsibilities:
- Develop and maintain data pipelines implementing ETL processes (see the sketch below)
- Take responsibility for Hadoop development and implementation
- Work closely with a data science team implementing data analytic pipelines
- Help define data governance policies and support data versioning processes
- Maintain security and data privacy, working closely with the Data Protection Officer internally
- Analyse a vast number of data stores and uncover insights

Skillset Required:
- Ability to design, build, and unit test applications in PySpark
- Experience with Python development and Python data transformations
- Experience with SQL scripting on one or more platforms (Hive, Oracle, PostgreSQL, MySQL, etc.)
- In-depth knowledge of Hadoop, Spark, and similar frameworks
- Strong knowledge of data management principles
- Experience with normalizing/de-normalizing data structures and developing tabular, dimensional, and other data models
- Knowledge of YARN, clusters, executors, and cluster configuration
- Hands-on experience with different file formats such as JSON, Parquet, and CSV
- Experience with the CLI on Linux-based platforms
- Experience analysing current ETL/ELT processes and defining and designing new ones
- Experience analysing business requirements in a BI/analytics context and designing data models to transform raw data into meaningful insights
- Good to have: knowledge of data visualization
- Experience processing large amounts of structured and unstructured data, including integrating data from multiple sources
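A short sketch of the kind of PySpark ETL step the posting describes: CSV in, a simple aggregation, Parquet out. Paths and column names are hypothetical.

```python
# PySpark ETL sketch: read CSV, derive a date column, aggregate revenue by
# day and region, and write partitioned Parquet. Paths/columns are invented.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("pyspark-etl-sketch").getOrCreate()

orders = (
    spark.read
    .option("header", True)
    .option("inferSchema", True)
    .csv("/data/in/orders.csv")
)

daily = (
    orders
    .withColumn("order_date", F.to_date("order_ts"))   # derive partition key
    .groupBy("order_date", "region")
    .agg(F.sum("amount").alias("revenue"))
)

(daily.write
 .mode("overwrite")
 .partitionBy("order_date")
 .parquet("/data/out/daily_revenue"))
```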

Posted 5 days ago

Apply

2.0 - 6.0 years

6 - 10 Lacs

Gurugram

Work from Office

We're looking for a Big Data Engineer who can find creative solutions to tough problems. As a Big Data Engineer, you'll create and manage our data infrastructure and tools, including collecting, storing, processing, and analyzing our data and data systems. You know how to work quickly and accurately, using the best solutions to analyze massive data sets, and you know how to get results. You'll also make this data easily accessible across the company and usable in multiple departments.

Skillset Required:
- Bachelor's degree or higher in Computer Science or a related field
- A solid track record of data management, showing flawless execution and attention to detail
- Strong knowledge of and experience with statistics
- Programming experience, ideally in Python, Spark, Kafka, or Java, and a willingness to learn new programming languages to meet goals and objectives
- Experience in C, Perl, JavaScript, or other programming languages is a plus
- Knowledge of data cleaning, wrangling, visualization, and reporting, with an understanding of the best, most efficient use of the associated tools and applications to complete these tasks
- Experience in MapReduce is a plus
- Deep knowledge of data mining, machine learning, natural language processing, or information retrieval
- Experience processing large amounts of structured and unstructured data, including integrating data from multiple sources
- Experience with machine learning toolkits such as H2O, SparkML, or Mahout
- A willingness to explore new alternatives or options to solve data mining issues, and to utilize a combination of industry best practices, data innovations, and your experience to get the job done
- Experience in production support and troubleshooting
- You find satisfaction in a job well done and thrive on solving head-scratching problems

Posted 5 days ago

Apply

8.0 - 12.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

We are seeking an experienced Lead Data Software Engineer to join our dynamic team and tackle rewarding challenges. As a Lead Engineer, you will be pivotal in creating and implementing data solutions across various projects. The ideal candidate will possess deep experience in data and associated technologies, with a strong emphasis on Apache Spark, Python, Azure, and AWS.

Responsibilities:
- Develop and execute end-to-end data solutions for intricate business needs
- Work alongside cross-functional teams to comprehend project requirements and deliver superior software solutions
- Apply your knowledge of Apache Spark, Python, Azure, and AWS to build scalable and effective data processing systems
- Maintain the performance, security, and scalability of data applications
- Keep abreast of industry trends and advancements in data technologies to enhance our development processes

Requirements:
- 8-12 years of hands-on experience in data and data-related technologies
- Expert-level knowledge and practical experience with Apache Spark
- Strong proficiency with Hadoop and Hive
- Proficiency in Python
- Experience working with native cloud data services, specifically AWS and Azure

Nice to have:
- Background in machine learning algorithms and techniques
- Skills in data visualization tools such as Tableau or Power BI
- Knowledge of real-time data processing frameworks like Apache Flink or Apache Storm
- Understanding of containerization technologies like Docker or Kubernetes

Technologies: Hadoop, Hive

Posted 5 days ago

Apply

0 years

0 Lacs

India

On-site

We are hiring a Lead, Data Engineer to join our team. At Kroll, we are building a strong data practice with artificial intelligence, machine learning, and analytics, and we're looking for you to join our growing portfolio. You will be involved in designing, building, and integrating data from various sources and working with an advanced engineering team and professionals from the world's largest financial institutions, law enforcement, and government agencies.

The day-to-day responsibilities include, but are not limited to:
- Design and build organizational data infrastructure and architecture
- Identify, design, and implement internal process improvements, including re-designing infrastructure for greater scalability, optimizing data delivery, and automating manual processes for data delivery
- Choose the best tools/services/resources to build robust data pipelines for data ingestion, connection, transformation, and distribution
- Design, develop, and manage ELT applications
- Work with global teams to deliver fault-tolerant, high-quality data pipelines

Requirements:
- Advanced experience writing ETL/ELT jobs
- Advanced experience with Azure, AWS, and the Databricks platform (mostly data-related services)
- Advanced experience with Python, the Spark ecosystem (PySpark + Spark SQL), and SQL databases
- Ability to develop REST APIs, Python SDKs or libraries, Spark jobs, etc.
- Proficiency with open-source tools, frameworks, and Python libraries like FastAPI, Pydantic, Polars, Pandas, PySpark, Delta Lake tables, Docker, Kubernetes, etc.
- Experience in Lakehouse and Medallion architecture, data governance, and data pipeline orchestration (see the sketch below)
- Excellent communication skills
- Ability to conduct data profiling, cataloging, and mapping for technical data flows
- Ability to work with an international team

Desired Skills:
- Strong cloud architecture principles: compute, storage, networks, security, cost savings, etc.
- Advanced SQL and Spark query/data pipeline performance tuning skills
- Experience and knowledge of building a Lakehouse using technologies including Azure Databricks, Azure Data Lake, SQL, PySpark, etc.
- Programming paradigms such as OOP, async programming, and batch processing
- Knowledge of CI/CD and Git

About Kroll
In a world of disruption and increasingly complex business challenges, our professionals bring truth into focus with the Kroll Lens. Our sharp analytical skills, paired with the latest technology, allow us to give our clients clarity, not just answers, in all areas of business. We value the diverse backgrounds and perspectives that enable us to think globally. As part of One Team, One Kroll, you'll contribute to a supportive and collaborative work environment that empowers you to excel. Kroll is the premier global valuation and corporate finance advisor, with expertise in complex valuation, disputes and investigations, M&A, restructuring, and compliance and regulatory consulting. Our professionals balance analytical skills, deep market insight, and independence to help our clients make sound decisions. As an organization, we think globally and encourage our people to do the same. Kroll is committed to equal opportunity and diversity, and recruits people based on merit. In order to be considered for a position, you must formally apply via careers.kroll.com.
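To illustrate the Lakehouse/Medallion architecture named in the requirements, here is a rough bronze → silver → gold flow in PySpark with Delta tables. It assumes a Delta Lake runtime (e.g., Databricks); all paths and columns are examples, not Kroll's schema.

```python
# Medallion-architecture sketch: raw landing data flows through bronze
# (as-is), silver (cleaned/typed), and gold (business aggregates).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: land raw JSON unchanged, stamped with ingestion metadata.
raw = spark.read.json("/lake/landing/transactions/")
(raw.withColumn("_ingested_at", F.current_timestamp())
    .write.format("delta").mode("append").save("/lake/bronze/transactions"))

# Silver: deduplicated, typed records suitable for downstream use.
bronze = spark.read.format("delta").load("/lake/bronze/transactions")
silver = (
    bronze.dropDuplicates(["txn_id"])
          .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
)
silver.write.format("delta").mode("overwrite").save("/lake/silver/transactions")

# Gold: a business-level aggregate ready for reporting.
gold = silver.groupBy("counterparty").agg(F.sum("amount").alias("total_amount"))
(gold.write.format("delta").mode("overwrite")
     .save("/lake/gold/exposure_by_counterparty"))
```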

Posted 5 days ago

Apply

1.0 - 3.0 years

9 - 13 Lacs

Pune

Work from Office

Overview
We are hiring an Associate Data Engineer to support our core data pipeline development efforts and gain hands-on experience with industry-grade tools like PySpark, Databricks, and cloud-based data warehouses. The ideal candidate is curious, detail-oriented, and eager to learn from senior engineers while contributing to the development and operationalization of critical data workflows.

Responsibilities:
- Assist in the development and maintenance of ETL/ELT pipelines using PySpark and Databricks under senior guidance
- Support data ingestion, validation, and transformation tasks across Rating Modernization and Regulatory programs
- Collaborate with team members to gather requirements and document technical solutions
- Perform unit testing, data quality checks, and process monitoring activities (see the sketch below)
- Contribute to the creation of stored procedures, functions, and views
- Support troubleshooting of pipeline errors and validation issues

Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related discipline
- 3+ years of experience in data engineering, or internships in data/analytics teams
- Working knowledge of Python and SQL, and ideally PySpark
- Understanding of cloud data platforms (Databricks, BigQuery, Azure/GCP)
- Strong problem-solving skills and eagerness to learn distributed data processing
- Good verbal and written communication skills

What we offer you:
- Transparent compensation schemes and comprehensive employee benefits, tailored to your location, ensuring your financial security, health, and overall wellbeing
- Flexible working arrangements, advanced technology, and collaborative workspaces
- A culture of high performance and innovation, where we experiment with new ideas and take responsibility for achieving results
- A global network of talented colleagues who inspire, support, and share their expertise to innovate and deliver for our clients
- A Global Orientation program to kickstart your journey, followed by access to our Learning@MSCI platform, LinkedIn Learning Pro, and tailored learning opportunities for ongoing skills development
- Multi-directional career paths that offer professional growth and development through new challenges, internal mobility, and expanded roles
- An environment that builds a sense of inclusion, belonging, and connection, including eight Employee Resource Groups: All Abilities, Asian Support Network, Black Leadership Network, Climate Action Network, Hola! MSCI, Pride & Allies, Women in Tech, and Women's Leadership Forum

At MSCI we are passionate about what we do, and we are inspired by our purpose: to power better investment decisions. You'll be part of an industry-leading network of creative, curious, and entrepreneurial pioneers. This is a space where you can challenge yourself, set new standards, and perform beyond expectations for yourself, our clients, and our industry.

MSCI is a leading provider of critical decision support tools and services for the global investment community. With over 50 years of expertise in research, data, and technology, we power better investment decisions by enabling clients to understand and analyze key drivers of risk and return and confidently build more effective portfolios. We create industry-leading research-enhanced solutions that clients use to gain insight into and improve transparency across the investment process.

MSCI Inc. is an equal opportunity employer. It is the policy of the firm to ensure equal employment opportunity without discrimination or harassment on the basis of race, color, religion, creed, age, sex, gender, gender identity, sexual orientation, national origin, citizenship, disability, marital and civil partnership/union status, pregnancy (including unlawful discrimination on the basis of a legally protected parental leave), veteran status, or any other characteristic protected by law. MSCI is also committed to working with and providing reasonable accommodations to individuals with disabilities. If you are an individual with a disability and would like to request a reasonable accommodation for any part of the application process, please email Disability.Assistance@msci.com and indicate the specifics of the assistance needed. Please note, this e-mail is intended only for individuals who are requesting a reasonable workplace accommodation; it is not intended for other inquiries.

To all recruitment agencies: MSCI does not accept unsolicited CVs/Resumes. Please do not forward CVs/Resumes to any MSCI employee, location, or website. MSCI is not responsible for any fees related to unsolicited CVs/Resumes.

Note on recruitment scams: We are aware of recruitment scams where fraudsters impersonating MSCI personnel may try to elicit personal information from job seekers. Read our full note on careers.msci.com.
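A small sketch of the unit-testing and data-quality-check duties mentioned above, written as plain checks over a pandas frame. The rating columns and thresholds are hypothetical.

```python
# Data-quality sketch: flag duplicates, out-of-range scores, and missing
# dates in a ratings frame. Column names and rules are invented examples.
import pandas as pd

def check_ratings(df: pd.DataFrame) -> list[str]:
    issues = []
    if df["rating_id"].duplicated().any():
        issues.append("duplicate rating_id values")
    if not df["score"].between(0, 100).all():
        issues.append("score outside 0-100 range")
    if df["as_of_date"].isna().any():
        issues.append("missing as_of_date")
    return issues

df = pd.DataFrame({
    "rating_id": [1, 2, 2],
    "score": [88, 101, 45],
    "as_of_date": pd.to_datetime(["2024-01-31", None, "2024-01-31"]),
})
print(check_ratings(df))
# -> ['duplicate rating_id values', 'score outside 0-100 range',
#     'missing as_of_date']
```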

Posted 5 days ago

Apply

0 years

0 Lacs

India

On-site

About Netskope
Today, there's more data and users outside the enterprise than inside, causing the network perimeter as we know it to dissolve. We realized a new perimeter was needed, one that is built in the cloud and follows and protects data wherever it goes, so we started Netskope to redefine Cloud, Network, and Data Security. Since 2012, we have built the market-leading cloud security company and an award-winning culture powered by hundreds of employees spread across offices in Santa Clara, St. Louis, Bangalore, London, Paris, Melbourne, Taipei, and Tokyo. Our core values are openness, honesty, and transparency, and we purposely developed our open desk layouts and large meeting spaces to support and promote partnerships, collaboration, and teamwork. From catered lunches and office celebrations to employee recognition events and social professional groups such as the Awesome Women of Netskope (AWON), we strive to keep work fun, supportive, and interactive. Visit us at Netskope Careers. Please follow us on LinkedIn and Twitter @Netskope.

About The Role
Please note, this team is hiring across all levels, and candidates are individually assessed and appropriately leveled based on their skills and experience. The Data Engineering team builds and optimizes systems spanning data ingestion, processing, storage optimization, and more. We work closely with engineers and the product team to build highly scalable systems that tackle real-world data problems and provide our customers with accurate, real-time, fault-tolerant solutions to their ever-growing data needs. We support various OLTP and analytics environments, including our Advanced Analytics and Digital Experience Management products. We are looking for skilled engineers experienced with building and optimizing multi-agent and agentic RAG workflows in production. You will work closely with other engineers and the product team to build highly scalable systems that tackle real-world data problems. This is a hands-on, impactful role that will help build an embedded AI CoPilot across the different products at Netskope.

What's In It For You
- You will be part of a growing team of renowned industry experts in the exciting space of data and cloud analytics
- Your contributions will have a major impact on our global customer base and across the industry through our market-leading products
- You will solve complex, interesting challenges and improve the depth and breadth of your technical and business skills

What You Will Be Doing
- Drive the end-to-end development and deployment of CoPilot, an embedded assistant powered by cutting-edge multi-agent workflows; this involves designing and implementing complex interactions between various AI agents and tools to deliver seamless, context-aware assistance across our product suite
- Architect and implement scalable data pipelines for processing large-scale datasets from logs, network traffic, and cloud environments
- Apply MLOps and LLMOps best practices to deploy and monitor machine learning models and agentic workflows in production
- Implement comprehensive evaluation and observability strategies for the CoPilot
- Lead the design, development, and deployment of AI/ML models for threat detection, anomaly detection, and predictive analytics in cloud and network security
- Collaborate with cloud architects and security analysts to develop cloud-native security solutions across platforms like AWS, Azure, or GCP
- Build and optimize Retrieval-Augmented Generation (RAG) systems by integrating large language models (LLMs) with vector databases for real-time, context-aware applications
- Analyze network traffic, log data, and other telemetry to identify and mitigate cybersecurity threats
- Ensure data quality, integrity, and compliance with GDPR, HIPAA, or SOC 2 standards
- Drive innovation by integrating the latest AI/ML techniques into security products and services
- Mentor junior engineers and provide technical leadership across projects

Required Skills And Experience

AI/ML Expertise
- Has built and deployed a multi-agent or agentic RAG workflow in production
- Expertise in prompt engineering patterns such as chain-of-thought, ReAct, and zero/few-shot
- Experience with LangGraph, AutoGen, AWS Bedrock, Pydantic AI, or Crew AI
- Strong understanding of MLOps practices and tools (e.g., SageMaker, MLflow, Kubeflow, Airflow, Dagster)
- Experience with evaluation and observability tools like Langfuse, Arize Phoenix, or LangSmith

Data Engineering
- Proficiency in working with vector databases such as pgvector, Pinecone, and Weaviate
- Hands-on experience with big data technologies (e.g., Apache Spark, Kafka, Flink)

Software Engineering
- Expertise in Python, with experience in one other language (C++/Java/Go) for data and ML solution development
- Expertise in scalable system design and performance optimization for high-throughput applications
- Experience building and consuming MCP clients and servers
- Experience with asynchronous programming, including WebSockets, FastAPI, and Sanic (see the sketch below)

Good-to-Have Skills And Experience

AI/ML Expertise
- Proficiency in advanced machine learning techniques, including neural networks (e.g., CNNs, Transformers) and anomaly detection
- Experience with AI frameworks like PyTorch, TensorFlow, and scikit-learn

Data Engineering
- Expertise designing and optimizing ETL/ELT pipelines for large-scale data processing
- Proficiency with relational and non-relational databases, including ClickHouse and BigQuery
- Experience with cloud-native data tools like AWS Glue, BigQuery, or Snowflake
- Graph database knowledge is a plus

Cloud and Security Knowledge
- Strong understanding of cloud platforms (AWS, Azure, GCP) and their services
- Experience with network security concepts, extended detection and response, and threat modeling

Netskope is committed to implementing equal employment opportunities for all employees and applicants for employment. Netskope does not discriminate in employment opportunities or practices based on religion, race, color, sex, marital or veteran status, age, national origin, ancestry, physical or mental disability, medical condition, sexual orientation, gender identity/expression, genetic information, pregnancy (including childbirth, lactation, and related medical conditions), or any other characteristic protected by the laws or regulations of any jurisdiction in which we operate. Netskope respects your privacy and is committed to protecting the personal information you share with us; please refer to Netskope's Privacy Policy for more details.
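As a minimal sketch of the asynchronous FastAPI style the posting references, the endpoint below stubs out a retrieval call before answering. The route, request model, and stub are invented for illustration, not Netskope's CoPilot API.

```python
# Async FastAPI sketch: a POST endpoint that awaits a (stubbed) retrieval
# step, the shape a RAG-backed assistant endpoint often takes.
import asyncio

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Query(BaseModel):
    question: str

async def retrieve_context(question: str) -> list[str]:
    await asyncio.sleep(0)  # stand-in for an async vector-DB lookup
    return ["doc snippet 1", "doc snippet 2"]

@app.post("/copilot/ask")
async def ask(q: Query) -> dict:
    context = await retrieve_context(q.question)
    # A real assistant would pass `context` plus the question to an LLM agent.
    return {"question": q.question, "context": context}

# Run locally with: uvicorn app_module:app --reload
```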

Posted 5 days ago

Apply

6.0 - 11.0 years

8 - 12 Lacs

Pune

Work from Office

What You'll Do

The Global Analytics & Insights (GAI) team is looking for a Senior Data Engineer to lead our build of the data infrastructure for Avalara's core data assets, empowering us with accurate data to make data-backed decisions. As a Senior Data Engineer, you will help architect, implement, and maintain our data infrastructure using Snowflake, dbt (Data Build Tool), Python, Terraform, and Airflow. You will immerse yourself in our financial, marketing, and sales data to become an expert on Avalara's domain. You will have deep SQL experience, an understanding of modern data stacks and technology, a desire to build things the right way using modern software principles, and experience with data and all things data related.

What Your Responsibilities Will Be

You will architect repeatable, reusable solutions to keep our technology stack DRY
Conduct technical and architecture reviews with engineers, ensuring all contributions meet quality expectations
You will develop scalable, reliable, and efficient data pipelines using dbt, Python, or other ELT tools (a minimal sketch of this orchestration pattern follows the posting)
Implement and maintain scalable data orchestration and transformation, ensuring data accuracy and consistency
Collaborate with cross-functional teams to understand complex requirements and translate them into technical solutions
Build scalable, complex dbt models
Demonstrate ownership of complex projects and calculations of core financial metrics and processes
Work with Data Engineering teams to define and maintain scalable data pipelines
Promote automation and optimization of reporting processes to improve efficiency
You will report to a Senior Manager

What You'll Need to be Successful

Bachelor's degree in Computer Science or Engineering, or related field
6+ years of experience in the data engineering field, with advanced SQL knowledge
4+ years of working with Git, and demonstrated experience collaborating with other engineers across repositories
4+ years of working with Snowflake
3+ years working with dbt (dbt Core)
3+ years working with Infrastructure as Code (Terraform)
3+ years working with CI/CD, and demonstrated ability to build and operate pipelines
AWS Certified
Terraform Certified
Experience working with complex Salesforce data
Snowflake, dbt certified
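As a rough illustration of the Snowflake + dbt + Airflow stack named above, here is a minimal Airflow DAG that runs and then tests a dbt project. The DAG id, schedule, and project paths are hypothetical assumptions, and the sketch presumes Airflow 2.4+ (for the `schedule` argument) with dbt Core installed on the worker.

```python
# Minimal Airflow DAG sketch orchestrating dbt runs against Snowflake.
# dag_id, schedule, and paths are illustrative, not a real configuration.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="dbt_core_models",          # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                 # Airflow 2.4+ keyword
    catchup=False,
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt --profiles-dir /opt/dbt",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt --profiles-dir /opt/dbt",
    )
    dbt_run >> dbt_test  # only test after the models build successfully
```

Running tests as a downstream task keeps a failed model build from producing a green pipeline, which is the usual reason to split `dbt run` and `dbt test` into separate tasks.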

Posted 5 days ago

Apply

6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

🚨 Urgent Hiring: Java Spring Boot Developer – 6+ Years Experience | Gurgaon (On-site)

📍 Location: Gurgaon, India
Type: Full-Time (preferred)

Job Summary:
We are seeking a highly skilled and motivated Java Spring Boot Developer to join our engineering team. This role focuses on developing and deploying scalable, event-driven applications on OpenShift, with data ingestion from Apache Kafka and transformation logic written in Apache Camel. The ideal candidate should possess a strong understanding of enterprise integration patterns, stream processing, and protocols, and have experience with observability tools and concepts in AI-enhanced applications.

Key Responsibilities:
Design, develop, and deploy Java Spring Boot (must) applications on OpenShift (ready to learn Red Hat OpenShift, or already have Kubernetes experience)
Build robust data pipelines with Apache Kafka (must) for high-throughput ingestion and real-time processing (the consume-transform-produce loop is sketched after the posting)
Implement transformation and routing logic using Apache Camel (basic knowledge and ready to learn) and Enterprise Integration Patterns (EIPs)
Develop components that interface with various protocols including HTTP, JMS, and database systems (SQL/NoSQL)
Utilize Apache Flink or similar tools for complex event and stream processing where necessary
Integrate observability solutions (e.g., Prometheus, Grafana, ELK, OpenTelemetry) to ensure monitoring, logging, and alerting
Collaborate with AI/ML teams to integrate or enable AI-driven capabilities within applications
Write unit and integration tests, participate in code reviews, and support CI/CD practices
Troubleshoot and optimize application performance and data flows in production environments

Required Skills & Qualifications:
5+ years of hands-on experience in Java development with strong proficiency in Spring Boot
Solid experience with Apache Kafka (consumer/producer patterns, schema registry; Kafka Streams is a plus)
Experience with stream processing technologies such as Apache Flink, Kafka Streams, or Spark Streaming
Proficient in Apache Camel and understanding of EIPs (routing, transformation, aggregation, etc.)
Strong grasp of various protocols (HTTP, JMS, TCP) and messaging paradigms
In-depth understanding of database concepts – both relational and NoSQL
Knowledge of observability tools and techniques – logging, metrics, tracing
Exposure to AI concepts (basic understanding of ML model integration, AI-driven decisions, etc.)

⚠️ Important Notes
Only candidates with a notice period of 20 days or less will be considered
A PF account is a must for joining full time
If you have already applied for this job with us, please do not submit a duplicate application
Budget is limited; max CTC is based on years of experience and expertise

📬 How to Apply
Email your resume to career@strive4x.net with the subject line: Java Spring Boot Developer - Gurgaon

Please include the following details:
Full Name
Mobile Number
Current Location
Total Experience (in years)
Current Company
Current CTC
Expected CTC
Notice Period
Are you open to relocating to Gurgaon (Yes/No)?
Do you have a PF account (Yes/No)?
Do you prefer full time, contract, or both?

👉 Know someone who fits the role? Tag or share this with them.

#JavaJobs #SpringBoot #GurgaonJobs #Kafka #ApacheCamel #OpenShift #HiringNow #SoftwareJobs #SeniorDeveloper #Microservices #Strive4X
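The posting is Java/Spring-centric, but the consume-transform-produce loop it describes is language-agnostic; the sketch below outlines it in Python with kafka-python purely for brevity. Topic names, the consumer group, and the enrichment step are hypothetical.

```python
# Outline of the Kafka consume -> transform -> produce loop described
# above. In the real system the transform/routing step would live in
# Apache Camel routes inside a Spring Boot service; this only shows shape.
import json

from kafka import KafkaConsumer, KafkaProducer

consumer = KafkaConsumer(
    "orders.raw",                                   # hypothetical source topic
    bootstrap_servers="localhost:9092",
    group_id="transformer",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

for record in consumer:
    event = record.value
    # Transformation/enrichment step (stand-in for a Camel route).
    event["normalized_amount"] = round(float(event.get("amount", 0)), 2)
    producer.send("orders.enriched", event)         # hypothetical sink topic
```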

Posted 5 days ago

Apply

6.0 - 8.0 years

1 - 4 Lacs

Chennai

Hybrid

Job Title: Snowflake Developer
Experience: 6-8 Years
Location: Chennai - Hybrid

Job Description:
3+ years of experience as a Snowflake Developer or Data Engineer
Strong knowledge of SQL, SnowSQL, and Snowflake schema design
Experience with ETL tools and data pipeline automation
Basic understanding of US healthcare data (claims, eligibility, providers, payers)
Experience working with large-scale datasets and cloud platforms (AWS, Azure, GCP)
Familiarity with data governance, security, and compliance (HIPAA, HITECH)

Posted 5 days ago

Apply

4.0 - 8.0 years

6 - 10 Lacs

Hyderabad, Ahmedabad

Work from Office

The Team: The usage reporting team gathers raw usage data from various products and produces unified datasets across departmental lines within Market Intelligence. We deliver essential intelligence for both public and internal reporting purposes.

The Impact: As the Lead Developer for the usage reporting team, you will play a key role in delivering essential insights for both public and private users of the S&P Global Market Intelligence platforms. Our data provides the foundation for strategy and insights that our team members depend on to deliver critical intelligence for our clients around the world.

What's in it for you:
Work with a variety of subject matter experts to develop and improve data offerings
Gain exposure to a wide range of datasets and stakeholders while tackling daily challenges
Oversee the complete software development lifecycle (SDLC), from initial architecture and design to development and support for data pipelines

Responsibilities:
Produce technical design documents and conduct technical walkthroughs
Build and maintain data pipelines using a variety of programming languages and data processing techniques
Be part of an agile team that designs, develops, and maintains enterprise data systems and related software applications
Participate in design sessions for new product features, data models, and capabilities
Collaborate with key stakeholders to develop system architectures, API specifications, and implementation requirements

What We're Looking For:
4-8 years of experience as a Senior Developer with strong experience in programming languages and data processing techniques
4-10 years of experience with public cloud platforms
Experience with data processing frameworks and orchestration tools
4-10 years of data warehousing experience
A strong self-starter with independent motivation as a software engineer
Strong leadership skills with a proven ability to collaborate effectively with engineering leadership and key stakeholders

About S&P Global Market Intelligence
At S&P Global Market Intelligence, a division of S&P Global, we understand the importance of accurate, deep and insightful information. Our team of experts delivers unrivaled insights and leading data and technology solutions, partnering with customers to expand their perspective, operate with confidence, and make decisions with conviction.

Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence, pinpointing risks and opening possibilities. We Accelerate Progress.

At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

We take care of you, so you can take care of business. We care about our people. That's why we provide everything you and your career need to thrive at S&P Global.
Health & Wellness: Health care coverage designed for the mind and body.
Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.

Posted 5 days ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Location: Bengaluru (Hybrid)

Role Summary
We're seeking a skilled Data Scientist with deep expertise in recommender systems to design and deploy scalable personalization solutions. This role blends research, experimentation, and production-level implementation, with a focus on content-based and multi-modal recommendations using deep learning and cloud-native tools.

Responsibilities
Research, prototype, and implement recommendation models: two-tower, multi-tower, and cross-encoder architectures (a minimal two-tower sketch follows the posting)
Utilize text/image embeddings (CLIP, ViT, BERT) for content-based retrieval and matching
Conduct semantic similarity analysis and deploy vector-based retrieval systems (FAISS, Qdrant, ScaNN)
Perform large-scale data prep and feature engineering with Spark/PySpark and Dataproc
Build ML pipelines using Vertex AI, Kubeflow, and orchestration on GKE
Evaluate models using recommender metrics (nDCG, Recall@K, HitRate, MAP) and offline frameworks
Drive model performance through A/B testing and real-time serving via Cloud Run or Vertex AI
Address cold-start challenges with metadata and multi-modal input
Collaborate with engineering for CI/CD, monitoring, and embedding lifecycle management
Stay current with trends in LLM-powered ranking, hybrid retrieval, and personalization

Required Skills
Python proficiency with pandas, polars, numpy, scikit-learn, TensorFlow, PyTorch, transformers
Hands-on experience with deep learning frameworks for recommender systems
Solid grounding in embedding retrieval strategies and approximate nearest neighbor search
GCP-native workflows: Vertex AI, Dataproc, Dataflow, Pub/Sub, Cloud Functions, Cloud Run
Strong foundation in semantic search, user modeling, and personalization techniques
Familiarity with MLOps best practices: CI/CD, infrastructure automation, monitoring
Experience deploying models in production using containerized environments and Kubernetes

Nice to Have
Ranking models knowledge: DLRM, XGBoost, LightGBM
Multi-modal retrieval experience (text + image + tabular features)
Exposure to LLM-powered personalization or hybrid recommendation systems
Understanding of real-time model updates and streaming ingestion
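For reference, the two-tower architecture named in the responsibilities reduces to a pair of encoders scored by a dot product. The sketch below is a minimal PyTorch illustration; embedding dimensions and vocabulary sizes are arbitrary assumptions, not a production design.

```python
# Minimal two-tower retrieval model sketch: user and item towers embed
# into a shared space, and a dot product scores each (user, item) pair.
import torch
import torch.nn as nn


class TwoTower(nn.Module):
    def __init__(self, n_users: int, n_items: int, dim: int = 64):
        super().__init__()
        # Each tower: an id embedding followed by a small projection.
        self.user_tower = nn.Sequential(nn.Embedding(n_users, dim), nn.Linear(dim, dim))
        self.item_tower = nn.Sequential(nn.Embedding(n_items, dim), nn.Linear(dim, dim))

    def forward(self, user_ids: torch.Tensor, item_ids: torch.Tensor) -> torch.Tensor:
        u = self.user_tower(user_ids)   # (batch, dim)
        v = self.item_tower(item_ids)   # (batch, dim)
        return (u * v).sum(-1)          # dot-product relevance score per pair


model = TwoTower(n_users=10_000, n_items=50_000)
scores = model(torch.tensor([1, 2]), torch.tensor([10, 99]))
print(scores.shape)  # torch.Size([2])
```

In production the item tower's outputs are typically precomputed and served from an approximate-nearest-neighbor index (FAISS, ScaNN, Qdrant), so only the user embedding is computed at request time.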

Posted 5 days ago

Apply

6.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Minimum qualifications:
Bachelor's degree in Computer Science or a related technical field, or equivalent practical experience.
6 years of experience as a technical sales engineer in a cloud computing environment or a customer-facing role.
Experience with Apache Spark and analytic warehouse solutions (e.g., Teradata, Netezza, Vertica, SQL Server, and Big Data technologies).
Experience implementing analytics systems architecture.

Preferred qualifications:
Master's degree in Computer Science or a related technical field.
Experience with technical sales or professional consulting in cloud computing, data, information life-cycle management and Big Data.
Experience in data warehousing, data lakes, batch/real-time processing and Extract, Transform, and Load (ETL) workflows, including architecture design, implementation, tuning and schema design.
Experience with coding languages like Python, JavaScript, C++, Scala, R, or Go.
Knowledge of Linux, Web 2.0 development platforms, solutions, and related technologies like HTTP, Basic/NTLM, sessions, XML/XSLT/XHTML/HTML.
Understanding of DNS, TCP, Firewalls, Proxy Servers, DMZ, Load Balancing, VPN, VPC.

About The Job
The Google Cloud Platform team helps customers transform and build what's next for their business — all with technology built in the cloud. Our products are developed for security, reliability and scalability, running the full stack from infrastructure to applications to devices and hardware. Our teams are dedicated to helping our customers — developers, small and large businesses, educational institutions and government agencies — see the benefits of our technology come to life. As part of an entrepreneurial team in this rapidly growing business, you will play a key role in understanding the needs of our customers and help shape the future of how businesses of all sizes use technology to connect with customers, employees and partners.

Google Cloud accelerates every organization's ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google's cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems.

Responsibilities
Support local sales teams in pursuing business opportunities by engaging customers to address data life-cycle aspects.
Collaborate with business teams to identify business and technical requirements, conduct full technical discovery and architect client solutions.
Lead technical projects, including technology advocacy, bid response support, product briefings, proof-of-concept work and coordinating technical resources.
Leverage Google Cloud Platform products to demonstrate and prototype integrations in customer/partner environments.
Travel for meetings, technical reviews, and on-site delivery activities as needed.
Deliver compelling product messaging to highlight the Google Cloud Platform value proposition through whiteboard and slide presentations, product demonstrations, white papers and Request For Information (RFI) response documents.

Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.

Posted 5 days ago

Apply

6.0 - 10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Role: Senior Associate
Tower: Data, Analytics & Specialist Managed Service
Experience: 6 - 10 years
Key Skills: Azure
Educational Qualification: BE / B Tech / ME / M Tech / MBA
Work Location: Bangalore

Job Description
As a Senior Associate, you will work as part of a team of problem solvers, helping to solve complex business issues from strategy to execution. PwC Professional skills and responsibilities for this management level include but are not limited to:
Use feedback and reflection to develop self-awareness, personal strengths, and address development areas
Be flexible to work in stretch opportunities/assignments
Demonstrate critical thinking and the ability to bring order to unstructured problems
Review ticket quality and deliverables, and provide status reporting for the project
Adhere to SLAs, with experience in incident management, change management and problem management
Seek and embrace opportunities that give exposure to different situations, environments, and perspectives
Use straightforward communication, in a structured way, when influencing and connecting with others
Read situations and modify behavior to build quality relationships
Uphold the firm's code of ethics and business conduct
Demonstrate leadership capabilities by working with clients directly and leading the engagement
Work in a team environment that includes client interactions, workstream management, and cross-team collaboration
Be a good team player, take up cross-competency work and contribute to COE activities
Handle escalation and risk management

Position Requirements

Required Skills: Azure Cloud Engineer

The candidate is expected to demonstrate extensive knowledge and/or a proven record of success in the following areas:
A minimum of 6 years of hands-on experience building advanced data warehousing solutions on leading cloud platforms
3-5 years of Operate/Managed Services/Production Support experience
Extensive experience in developing scalable, repeatable, and secure data structures and pipelines to ingest, store, collect, standardize, and integrate data for downstream consumption by Business Intelligence systems, analytics modelling, data scientists, etc.
Designing and implementing data pipelines to extract, transform, and load (ETL) data from various sources into data storage systems, such as data warehouses or data lakes
Experience building efficient ETL/ELT processes using industry-leading tools like Informatica, Talend, SSIS, AWS, Azure, Spark, SQL, Python, etc.
Hands-on experience with data analytics tools like Informatica, Collibra, Hadoop, Spark, Snowflake, etc.
Design, implement, and maintain data pipelines for data ingestion, processing, and transformation in Azure
Work together with data scientists and analysts to understand data needs and create effective data workflows
Create and maintain data storage solutions including Azure SQL Database, Azure Data Lake, and Azure Blob Storage
Create and maintain ETL (Extract, Transform, Load) operations using Azure Data Factory or comparable technologies
Perform data transformation and processing tasks to prepare data for analysis and reporting, using Azure Databricks or Azure Synapse Analytics for large-scale transformations with tools like Apache Spark (a minimal sketch of such a transformation-and-validation step follows the posting)
Implement data validation and cleansing procedures to ensure the quality, integrity, and dependability of the data
Improve the scalability, efficiency, and cost-effectiveness of data pipelines
Monitor and troubleshoot data pipelines, resolving issues related to data processing, transformation, or storage
Implement and maintain data security and privacy measures, including access controls and encryption, to protect sensitive data
Advanced working SQL knowledge and experience working with relational databases and query authoring (SQL), as well as working familiarity with a variety of databases
Experience building and maintaining Data Governance solutions (data quality, metadata management, lineage, master data management and data security) using industry-leading tools
Experience scaling and optimizing schemas, and performance-tuning SQL and ETL pipelines in data lake and data warehouse environments
Hands-on experience with data analytics tools like Databricks
Experience with ITIL processes like incident management, problem management, knowledge management, release management, Data DevOps, etc.
Strong communication, problem-solving, quantitative and analytical abilities

Nice to have: Azure certification

Managed Services - Data, Analytics & Insights Managed Service
At PwC we relentlessly focus on working with our clients to bring the power of technology and humans together and create simple, yet powerful solutions. We imagine a day when our clients can simply focus on their business, knowing that they have a trusted partner for their IT needs. Every day we are motivated and passionate about making our clients better. Within our Managed Services platform, PwC delivers integrated services and solutions that are grounded in deep industry experience and powered by the talent that you would expect from the PwC brand.

The PwC Managed Services platform delivers scalable solutions that add greater value to our clients' enterprises through technology and human-enabled experiences. Our team of highly skilled and trained global professionals, combined with the use of the latest advancements in technology and process, allows us to provide effective and efficient outcomes. With PwC's Managed Services our clients are able to focus on accelerating their priorities, including optimizing operations and accelerating outcomes. PwC brings a consultative-first approach to operations, leveraging our deep industry insights combined with world-class talent and assets to enable transformational journeys that drive sustained client outcomes. Our clients need flexible access to world-class business and technology capabilities that keep pace with today's dynamic business environment.

Within our global Managed Services platform, we provide Data, Analytics & Insights Managed Service, where we focus on the evolution of our clients' Data and Analytics ecosystem. Our focus is to empower our clients to navigate and capture the value of their Data & Analytics portfolio while cost-effectively operating and protecting their solutions. We do this so that our clients can focus on what matters most to their business: accelerating growth that is dynamic, efficient and cost-effective.

As a member of our Data, Analytics & Insights Managed Service team, we are looking for candidates who thrive working in a high-paced work environment, capable of working on a mix of critical Data, Analytics & Insights offerings and engagements, including help desk support, enhancement and optimization work, as well as strategic roadmap and advisory-level work. You will also help win and support customer engagements from not only a technical perspective, but also a relationship perspective.
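As an illustration of the Databricks/Synapse-style transformation-and-validation step described above, here is a minimal PySpark sketch. The lake paths, column names, and validation rule are hypothetical, and Delta Lake is assumed to be available (as it is on Databricks by default).

```python
# Minimal PySpark transformation-and-validation sketch: read raw events,
# quarantine rows that fail a basic rule, clean and persist the rest.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("ingest_clean").getOrCreate()

# Hypothetical ADLS landing path; any Spark-readable source works the same way.
raw = spark.read.json("abfss://landing@account.dfs.core.windows.net/events/")

# Basic validation: rows without a timestamp are quarantined, not dropped.
valid = raw.filter(F.col("event_ts").isNotNull()).dropDuplicates(["event_id"])
rejected = raw.filter(F.col("event_ts").isNull())

# Cleansing/standardization on the valid rows only.
cleaned = valid.withColumn("amount", F.col("amount").cast("decimal(18,2)"))

cleaned.write.mode("append").format("delta").save("/mnt/curated/events")
rejected.write.mode("append").format("delta").save("/mnt/quarantine/events")
```

Writing rejects to a quarantine table rather than discarding them is what makes the validation auditable, which matters under the governance requirements the posting lists.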

Posted 5 days ago

Apply

4.0 - 5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Description:
As a Data Engineer, you will be responsible for designing, implementing, and maintaining our data infrastructure to support our rapidly growing business needs. The ideal candidate will have expertise in Apache Iceberg, Apache Hive, Apache Hadoop, SparkSQL, YARN, HDFS, MySQL, data modeling, data warehousing, Spark architecture, and SQL query optimization. Experience with Apache Flink, PySpark, automated data quality testing, and data migration is considered a plus. Working knowledge of at least one cloud stack (AWS or Azure) for data engineering is mandatory, in order to create data jobs and workflows and schedule them for automation.

Job Responsibilities & Requirements:
Bachelor's degree in Computer Science, Information Technology, or a related field; Master's degree preferred
4-5 years of experience working as a Data Engineer
Mandatory experience in PySpark development for big data processing (an Iceberg-flavored sketch follows the posting)
Strong proficiency in Apache Iceberg, Apache Hive, Apache Hadoop, SparkSQL, YARN, HDFS, data modeling, and data warehousing
Core PySpark development, SQL query optimization, and performance tuning to ensure optimal data retrieval and processing
Experience with Apache Flink and automated data quality testing is a plus
Working knowledge of at least one cloud stack (AWS or Azure) for data engineering, to create data jobs and workflows and schedule them for automation, is mandatory

Join Xiaomi India Technology and be part of a team that is shaping the future of technology innovation. Apply now and embark on an exciting journey with us!
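To make the Iceberg + SparkSQL combination concrete, here is a small PySpark sketch that writes a partitioned Iceberg table and queries it with a partition-pruning predicate. The catalog configuration, table name, and data are illustrative assumptions; it presumes a Spark 3 session with the Iceberg runtime on the classpath.

```python
# Sketch: write a partitioned Apache Iceberg table via DataFrameWriterV2,
# then query it with SparkSQL. Catalog/table names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder.appName("iceberg_demo")
    # Hadoop-type Iceberg catalog rooted at a local warehouse path.
    .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lake.type", "hadoop")
    .config("spark.sql.catalog.lake.warehouse", "/tmp/warehouse")
    .getOrCreate()
)

# Synthetic events spread across 30 daily partitions.
df = spark.range(1_000).withColumn(
    "day", F.expr("date_add('2024-01-01', cast(id % 30 as int))")
)
df.writeTo("lake.db.events").partitionedBy(F.col("day")).createOrReplace()

# Partition pruning: only the matching day's data files are scanned.
spark.sql("SELECT count(*) FROM lake.db.events WHERE day = DATE'2024-01-15'").show()
```

Partitioning by the query's filter column is the simplest of the SQL-optimization levers the posting asks about; Iceberg's metadata makes the pruning visible in the query plan.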

Posted 5 days ago

Apply

5.0 - 7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Title: Python Developer
Experience Level: 5-7 Years
Location: Hyderabad

Job Description
We are seeking an experienced Lead Python Developer with a proven track record of building scalable and secure applications, specifically in the travel and tourism industry. The ideal candidate should possess in-depth knowledge of Python, modern development frameworks, and expertise in integrating third-party travel APIs. This role demands a leader who can foster innovation while adhering to industry standards for security, scalability, and performance.

Roles and Responsibilities
Application Development: Architect and develop robust, high-performance applications using Python frameworks such as Django, Flask, and FastAPI.
API Integration: Design and implement seamless integration with third-party APIs, including GDS, CRS, OTA, and airline-specific APIs, to enable real-time data retrieval for booking, pricing, and availability (a minimal sketch of this pattern follows the posting).
Data Management: Develop and optimize complex data pipelines to manage structured and unstructured data, utilizing ETL processes, data lakes, and distributed storage solutions.
Microservices Architecture: Build modular applications using microservices principles to ensure scalability, independent deployment, and high availability.
Performance Optimization: Enhance application performance through efficient resource management, load balancing, and faster query handling to deliver an exceptional user experience.
Security and Compliance: Implement secure coding practices, manage data encryption, and ensure compliance with industry standards such as PCI DSS and GDPR.
Automation and Deployment: Leverage CI/CD pipelines, containerization, and orchestration tools to automate testing, deployment, and monitoring processes.
Collaboration: Work closely with front-end developers, product managers, and stakeholders to deliver high-quality, user-centric solutions aligned with business goals.

Requirements
Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Technical Expertise:
At least 4 years of hands-on experience with Python frameworks like Django, Flask, and FastAPI.
Proficiency in RESTful APIs, GraphQL, and asynchronous programming.
Strong knowledge of SQL/NoSQL databases (PostgreSQL, MongoDB) and big data tools (e.g., Spark, Kafka).
Experience with cloud platforms (AWS, Azure, Google Cloud), containerization (Docker, Kubernetes), and CI/CD tools (e.g., Jenkins, GitLab CI).
Familiarity with testing tools such as PyTest, Selenium, and SonarQube.
Expertise in travel APIs, booking flows, and payment gateway integrations.
Soft Skills:
Excellent problem-solving and analytical abilities.
Strong communication, presentation, and teamwork skills.
A proactive attitude with a willingness to take ownership and perform under pressure.
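The third-party API integration pattern in the responsibilities (an async endpoint fanning out to an upstream availability service) can be sketched with FastAPI and httpx as below. The upstream URL, query parameters, and response fields are hypothetical, stand-ins for whatever GDS/OTA API the real system would call.

```python
# Minimal FastAPI sketch of an async third-party travel-API integration:
# a flight-search endpoint that proxies an external availability API and
# normalizes its payload. All upstream details are illustrative.
import httpx
from fastapi import FastAPI, HTTPException

app = FastAPI()
UPSTREAM = "https://api.example-gds.com/v1/availability"  # hypothetical endpoint


@app.get("/flights")
async def search_flights(origin: str, destination: str):
    async with httpx.AsyncClient(timeout=10) as client:
        resp = await client.get(
            UPSTREAM, params={"from": origin, "to": destination}
        )
    if resp.status_code != 200:
        raise HTTPException(status_code=502, detail="upstream availability API failed")
    # Normalize the upstream payload into our own response shape.
    return [
        {"flight": f.get("number"), "price": f.get("fare")}
        for f in resp.json().get("flights", [])
    ]
```

Keeping the upstream call async means one worker can hold many in-flight searches at once, which is the usual motivation for the FastAPI-plus-asyncio stack this posting emphasizes.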

Posted 5 days ago

Apply

5.0 - 10.0 years

22 - 25 Lacs

Hyderabad

Work from Office

Overview

Seeking an Associate Manager, Data Operations, to support our growing data organization. In this role, you will assist in maintaining data pipelines and corresponding platforms (on-prem and cloud) while working closely with global teams on DataOps initiatives.

Support the day-to-day operations of data pipelines, ensuring data governance, reliability, and performance optimization on Microsoft Azure. Hands-on experience with Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and real-time streaming architectures is preferred.
Assist in ensuring the availability, scalability, automation, and governance of enterprise data pipelines supporting analytics, AI/ML, and business intelligence.
Contribute to DataOps programs, aligning with business objectives, data governance standards, and enterprise data strategy.
Help implement real-time data observability, monitoring, and automation frameworks to improve data reliability, quality, and operational efficiency.
Support the development of governance models and execution roadmaps to enhance efficiency across Azure, AWS, GCP, and on-prem environments.
Work on CI/CD integration, data pipeline automation, and self-healing capabilities to improve enterprise-wide DataOps processes.
Collaborate with cross-functional teams to support and maintain next-generation Data & Analytics platforms while promoting an agile and high-performing DataOps culture.
Assist in the adoption of Data & Analytics technology transformations, ensuring automation for proactive issue identification and resolution.
Partner with cross-functional teams to support process improvements, best practices, and operational efficiencies within DataOps.

Responsibilities

Assist in the implementation and optimization of enterprise-scale data pipelines using Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and Azure Stream Analytics.
Support data ingestion, transformation, orchestration, and storage workflows, ensuring data reliability, integrity, and availability.
Help ensure seamless batch, real-time, and streaming data processing, focusing on high availability and fault tolerance.
Contribute to DataOps automation efforts, including CI/CD for data pipelines, automated testing, and version control using Azure DevOps and Terraform.
Collaborate with Data Engineering, Analytics, AI/ML, CloudOps, and Business Intelligence teams to support data-driven decision-making.
Assist in aligning DataOps practices with regulatory and security requirements by working with IT, data stewards, and compliance teams.
Support data operations and sustainment activities, including testing and monitoring processes for global products and projects.
Participate in data capture, storage, integration, governance, and analytics efforts, working alongside cross-functional teams.
Assist in managing day-to-day DataOps activities, ensuring adherence to service-level agreements (SLAs) and business requirements.
Engage with SMEs and business stakeholders to ensure data platform capabilities align with business needs.
Contribute to Agile work intake and execution processes, helping to maintain efficiency in data platform teams.
Help troubleshoot and resolve issues related to cloud infrastructure and data services in collaboration with technical teams.
Support the development and automation of operational policies and procedures, improving efficiency and resilience.
Assist in incident response and root cause analysis, contributing to self-healing mechanisms and mitigation strategies.
Foster a customer-centric approach, advocating for operational excellence and continuous improvement in service delivery.
Help build a collaborative, high-performing team culture, promoting automation and efficiency within DataOps.
Adapt to shifting priorities and support cross-functional teams in maintaining productivity and achieving business goals.
Utilize technical expertise in cloud and data operations to support service reliability and scalability.

Qualifications

5+ years of technology work experience in a large-scale global organization, with CPG industry experience preferred.
5+ years of experience in Data & Analytics roles, with hands-on expertise in data operations and governance.
2+ years of experience working within a cross-functional IT organization, collaborating with multiple teams.
Experience in a lead or senior support role, with a focus on DataOps execution and delivery.
Strong communication skills, with the ability to collaborate with stakeholders and articulate technical concepts to non-technical audiences.
Analytical and problem-solving abilities, with a focus on prioritizing customer needs and operational improvements.
Customer-focused mindset, ensuring high-quality service delivery and operational efficiency.
Growth mindset, with a willingness to learn and adapt to new technologies and methodologies in a fast-paced environment.
Experience supporting data operations in a Microsoft Azure environment, including data pipeline automation.
Familiarity with Site Reliability Engineering (SRE) principles, such as monitoring, automated issue remediation, and scalability improvements.
Understanding of operational excellence in complex, high-availability data environments.
Ability to collaborate across teams, building strong relationships with business and IT stakeholders.
Basic understanding of data management concepts, including master data management, data governance, and analytics.
Knowledge of data acquisition, data catalogs, data standards, and data management tools.
Strong execution and organizational skills, with the ability to follow through on operational plans and drive measurable results.
Adaptability in a dynamic, fast-paced environment, with the ability to shift priorities while maintaining productivity.

Posted 5 days ago

Apply