
155 Sharding Jobs

JobPe aggregates results for easy application access, but you actually apply on the job portal directly.

4.0 years

4 - 9 Lacs

Gurgaon

On-site

About the Team: Join a highly skilled and collaborative team dedicated to ensuring data reliability, performance, and security across our organization’s critical systems. We work closely with developers, architects, and DevOps professionals to deliver seamless and scalable database solutions in a cloud-first environment, leveraging the latest in AWS and open-source technologies. Our team values continuous learning, innovation, and the proactive resolution of database challenges. About the Role: As a Database Administrator specializing in MySQL and Postgres within AWS environments, you will play a key role in architecting, deploying, and supporting the backbone of our data infrastructure. You’ll leverage your expertise to optimize database instances, manage large-scale deployments, and ensure our databases are secure, highly available, and resilient. This is an opportunity to collaborate across teams, stay ahead with emerging technologies, and contribute directly to our business success. Responsibilities: Design, implement, and maintain MySQL and Postgres database instances on AWS, including managing clustering and replication (MongoDB, Postgres solutions). Write, review, and optimize stored procedures, triggers, functions, and scripts for automated database management. Continuously tune, index, and scale database systems to maximize performance and handle rapid growth. Monitor database operations to ensure high availability, robust security, and optimal performance. Develop, execute, and test backup and disaster recovery strategies in line with company policies. Collaborate with development teams to design efficient and effective database schemas aligned with application needs. Troubleshoot and resolve database issues, implementing corrective actions to restore service and prevent recurrence. Enforce and evolve database security best practices, including access controls and compliance measures. Stay updated on new database technologies, AWS advancements, and industry best practices. Plan and perform database migrations across AWS regions or instances. Manage clustering, replication, installation, and sharding for MongoDB, Postgres, and related technologies. Requirements: 4-7 years of experience in database management systems as a Database Engineer. Proven experience as a MySQL/Postgres Database Administrator in high-availability, production environments. Expertise in AWS cloud services, especially EC2, RDS, Aurora, DynamoDB, S3, and Redshift. In-depth knowledge of DR (Disaster Recovery) setups, including active-active and active-passive master configurations. Hands-on experience with MySQL partitioning and AWS Redshift. Strong understanding of database architectures, replication, clustering, and backup strategies (including Postgres replication & backup). Advanced proficiency in optimizing and troubleshooting SQL queries; adept with performance tuning and monitoring tools. Familiarity with scripting languages such as Bash or Python for automation/maintenance. Experience with MongoDB, Postgres clustering, Cassandra, and related NoSQL or distributed database solutions. Ability to provide 24/7 support and participate in on-call rotation schedules. Excellent problem-solving, communication, and collaboration skills. What we offer: A positive, get-things-done workplace; a dynamic, constantly evolving space (change is par for the course – important you are comfortable with this); an inclusive environment that ensures we listen to a diverse range of voices when making decisions.
Ability to learn cutting-edge concepts and innovation in an agile start-up environment at global scale; access to 5000+ training courses accessible anytime/anywhere to support your growth and development (corporate partnerships with top learning providers like Harvard, Coursera, Udacity). About us: At PayU, we are a global fintech investor and our vision is to build a world without financial borders where everyone can prosper. We give people in high-growth markets the financial services and products they need to thrive. Our expertise in 18+ high-growth markets enables us to extend the reach of financial services. This drives everything we do, from investing in technology entrepreneurs to offering credit to underserved individuals, to helping merchants buy, sell, and operate online. Being part of Prosus, one of the largest technology investors in the world, gives us the presence and expertise to make a real impact. Find out more at www.payu.com. Our Commitment to Building a Diverse and Inclusive Workforce: As a global and multi-cultural organization with varied ethnicities thriving across locations, we realize that our responsibility towards fulfilling the D&I commitment is huge. Therefore, we continuously strive to create a diverse, inclusive, and safe environment for all our people, communities, and customers. Our leaders are committed to creating an inclusive work culture which enables transparency, flexibility, and unbiased attention to every PayUneer so they can succeed, irrespective of gender, color, or personal faith. An environment where every person feels they belong, that they are listened to, and where they are empowered to speak up. At PayU we have zero tolerance towards any form of prejudice, whether against a specific race or ethnicity, persons with disabilities, or the LGBTQ communities.
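The requirements above single out MySQL partitioning and scripted maintenance. As a purely illustrative, minimal sketch (the host, credentials, database, and table names are placeholders, not part of the posting), range partitioning plus partition-based retention might look like this with mysql-connector-python:

```python
# Minimal sketch: a RANGE-partitioned MySQL table and partition-based retention,
# as commonly used on RDS/Aurora. All names and credentials are hypothetical.
import mysql.connector

conn = mysql.connector.connect(
    host="example-rds.cluster-xyz.ap-south-1.rds.amazonaws.com",  # placeholder endpoint
    user="admin",
    password="secret",
    database="payments",
)
cur = conn.cursor()

# Partition transaction history by year so old data can be dropped cheaply.
# Note: the partitioning column must be part of every unique key.
cur.execute("""
    CREATE TABLE IF NOT EXISTS txn_history (
        txn_id BIGINT NOT NULL,
        created_at DATE NOT NULL,
        amount DECIMAL(12,2),
        PRIMARY KEY (txn_id, created_at)
    )
    PARTITION BY RANGE (YEAR(created_at)) (
        PARTITION p2023 VALUES LESS THAN (2024),
        PARTITION p2024 VALUES LESS THAN (2025),
        PARTITION pmax  VALUES LESS THAN MAXVALUE
    )
""")

# Dropping an expired partition removes its rows without a long DELETE scan.
cur.execute("ALTER TABLE txn_history DROP PARTITION p2023")
conn.commit()
cur.close()
conn.close()
```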

Posted 20 hours ago

Apply

12.0 years

0 Lacs

Gurugram, Haryana, India

On-site

gStore is GreyOrange’s flagship SaaS platform that transforms physical retail operations through real-time, AI-driven inventory visibility and intelligent in-store task execution. It integrates advanced technologies like RFID, computer vision, and machine learning to deliver 98%+ inventory accuracy with precise spatial mapping. gStore empowers store associates with guided workflows for omnichannel fulfillment (BOPIS, ship-from-store, returns), intelligent task allocation, and real-time replenishment — significantly improving efficiency, reducing shrinkage, and driving in-store conversions. The platform is cloud-native, hardware-agnostic, and built to scale across thousands of stores globally with robust integrations and actionable analytics. Roles & Responsibilities: Define and drive the overall architecture for scalable, secure, and high-performance distributed systems. Write and review code for critical modules and performance-sensitive components to set quality and architectural standards. Collaborate with engineering leads and product managers to align technology strategy with business goals. Evaluate and recommend tools, technologies, and processes to ensure the highest quality product platform. Own and evolve the system design, ensuring modularity, multi-tenancy, and future extensibility. Establish and govern best practices around service design, API development, security, observability, and performance. Review code, designs, and technical documentation, ensuring adherence to architecture and design principles. Lead design discussions and mentor senior and mid-level engineers to improve design thinking and engineering quality. Partner with DevOps to optimise CI/CD, containerization, and infrastructure-as-code. Stay abreast of industry trends and emerging technologies, assessing their relevance and value. Skills: 12+ years of experience in backend development. Strong understanding of data structures and algorithms. Good knowledge of low-level and high-level system designs and best practices. Strong expertise in Java & Spring Boot, with a deep understanding of microservice architectures and design patterns. Good knowledge of databases (both SQL and NoSQL), including schema design, sharding, and performance tuning. Expertise in Kubernetes, Helm, and container orchestration for deploying and managing scalable applications. Advanced knowledge of Kafka for stream processing, event-driven architecture, and data integration. Proficiency in Redis for caching, session management, and pub-sub use cases. Solid understanding of API design (REST/gRPC), authentication (OAuth2/JWT), and security best practices. Strong grasp of system design fundamentals—scalability, reliability, consistency, and observability. Experience with monitoring and logging frameworks (e.g., Datadog, Prometheus, Grafana, ELK, or equivalent). Excellent problem-solving, communication, and cross-functional leadership skills. Prior experience in leading architecture for SaaS or high-scale multi-tenant platforms is highly desirable.
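The skills list above mentions Redis for caching. A minimal cache-aside sketch with redis-py; the key scheme and the stubbed database loader are hypothetical, not taken from the posting:

```python
# Minimal cache-aside sketch: check Redis first, fall back to the source of
# truth, then cache with a TTL. Key names and the loader are placeholders.
import json
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def load_store_inventory_from_db(store_id: str) -> dict:
    # Placeholder for the real database lookup.
    return {"store_id": store_id, "skus_on_hand": 12840}

def get_store_inventory(store_id: str, ttl_seconds: int = 300) -> dict:
    key = f"inventory:{store_id}"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)          # cache hit
    fresh = load_store_inventory_from_db(store_id)
    r.setex(key, ttl_seconds, json.dumps(fresh))  # cache with expiry
    return fresh

print(get_store_inventory("store-42"))
```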

Posted 20 hours ago

Apply

8.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Job Title: Senior DB Developer – Sports/Healthcare Location: Ahmedabad, Gujarat. Job Type: Full-Time. Job Description: We are seeking an exceptional Senior Database Developer with 8+ years of expertise who will play a critical role in design and development of a scalable, configurable, and customizable platform. Our new Senior Database Developer will help with the design and collaborate with cross-functional teams and provide data solutions for delivering high-performance applications. If you are passionate about bringing innovative technology to life, owning and solving problems in an independent, fail fast and highly supportive environment, and working with a creative and dynamic team, we want to hear from you. This role requires a strong understanding of enterprise applications and large-scale data processing platforms. Key Responsibilities: ● Design and architect scalable, efficient, high-availability and secure database solutions to meet business requirements. ● Designing the Schema and ER Diagram for horizontal scalable architecture ● Strong knowledge of NoSQL / MongoDB ● Knowledge of ETL Tools for data migration from source to destination. ● Establish database standards, procedures, and best practices for data modelling, storage, security, and performance. ● Implement data partitioning, sharding, and replication for high-throughput systems. ● Optimize data lake, data warehouse, and NoSQL solutions for fast retrieval. ● Collaborate with developers and data engineers to define data requirements and optimize database performance. ● Implement database security policies ensuring compliance with regulatory standards (e.g., GDPR, HIPAA). ● Optimize and tune databases for performance, scalability, and availability. ● Design disaster recovery and backup solutions to ensure data protection and business continuity. ● Evaluate and implement new database technologies and frameworks as needed. ● Provide expertise in database migration, transformation, and modernization projects. ● Conduct performance analysis and troubleshooting of database-related issues. ● Document database architecture and standards for future reference. Required Skills and Qualifications: ● 8+ years of experience in database architecture, design, and management. ● Experience with AWS (Amazon Web Services) and similar platforms like Azure and GCP (Google Cloud Platform). ● Experience deploying and managing applications, utilizing various cloud services (compute, storage, databases, etc.) ● Experience with specific services like EC2, S3, Lambda (for AWS) ● Proficiency with SQL and NoSQL databases (e.g., PostgreSQL, MySQL, Oracle, MongoDB , Cassandra). ● MongoDB and NoSQL Experience is a big added advantage. ● Expertise in data modelling, schema design, indexing, and partitioning. ● Experience with ETL processes, data warehousing, and big data technologies (e.g. Apache NiFi, Airflow, Redshift, Snowflake, Hadoop). ● Proficiency in database performance tuning, optimization, and monitoring tools. ● Strong knowledge of data security, encryption, and compliance frameworks. ● Excellent analytical, problem-solving, and communication skills. ● Proven experience in database migration and modernization projects. Preferred Qualifications: ● Certifications in cloud platforms (AWS, GCP, Azure) or database technologies. ● Experience with machine learning and AI-driven data solutions. ● Knowledge of graph databases and time-series databases. ● Familiarity with Kubernetes, containerized databases, and microservices architecture. 
Education: ● Bachelor's or Master’s degree in Computer Science, Software Engineering, or a related technical field. Why Join Us? ● Be part of an exciting and dynamic project in the sports/health data domain. ● Work with cutting-edge technologies and large-scale data processing systems. ● Collaborative, fast-paced team environment with opportunities for professional growth. ● Competitive salary, bonus, and benefits package.
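Since the responsibilities above call out partitioning and sharding for high-throughput systems, here is a minimal, illustrative sketch of sharding a MongoDB collection on a hashed key with PyMongo; the mongos endpoint, database, and collection names are hypothetical:

```python
# Minimal sketch: shard a collection on a hashed key via the mongos router.
from pymongo import MongoClient

client = MongoClient("mongodb://mongos.example.internal:27017")  # placeholder mongos router
db = client["sportsdata"]

# The shard key needs a supporting index before a populated collection can be sharded.
db["athlete_events"].create_index([("athlete_id", "hashed")])

# Enable sharding on the database, then shard the collection on a hashed key
# so writes spread evenly across shards.
client.admin.command("enableSharding", "sportsdata")
client.admin.command(
    "shardCollection",
    "sportsdata.athlete_events",
    key={"athlete_id": "hashed"},
)
```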

Posted 1 day ago

Apply

5.0 years

4 - 10 Lacs

India

On-site

About MostEdge MostEdge empowers retailers with smart, trusted, and sustainable solutions to run their stores more efficiently. Through our Inventory Management Service, powered by the StockUPC app , we provide accurate, real-time insights that help stores track inventory, prevent shrink, and make smarter buying decisions. Our mission is to deliver trusted, profitable experiences—empowering retailers, partners and employees to accelerate commerce in a sustainable manner. Role Summary: We are seeking an experienced and highly motivated Database Administrator (DBA) to join our team. The ideal candidate will be responsible for the design, implementation, performance tuning, and maintenance of relational (MSSQL, PostgreSQL) and NoSQL (MongoDB) databases, both on-premises and in cloud environments (AWS, Azure, GCP). You will ensure data integrity, security, availability, and optimal performance across all platforms. Key Responsibilities: Database Management & Optimization · Install, configure, and upgrade database servers (MSSQL, PostgreSQL, MongoDB). · Monitor performance, optimize queries, and tune databases for efficiency. · Implement and manage database clustering, replication, sharding, and high availability. Cloud Database Administration · Manage cloud-based database services (e.g., Amazon RDS, Azure SQL Database, GCP Cloud SQL, MongoDB Atlas). · Automate backup, failover, patching, and scaling in the cloud environment. · Ensure secure access, encryption, and compliance in the cloud. · ETL and Dev Ops experience is desirable. Backup, Recovery & Security · Design and implement robust backup and disaster recovery plans. · Regularly test recovery processes to ensure minimal downtime. · Apply database security best practices (roles, permissions, auditing, encryption). Scripting & Automation · Develop scripts for automation (using PowerShell, Bash, Python, etc.). · Automate repetitive DBA tasks using DevOps/CI-CD tools (Terraform, Ansible, etc.). Collaboration & Support · Work closely with developers, DevOps, and system admins to support application development. · Assist with database design, indexing strategy, schema changes, and query optimization. · Provide 24/7 support for critical production issues (on-call rotation may apply). Key Skills & Qualifications: · Bachelor’s degree in computer science, Information Technology, or related field. · 5+ years of experience as a DBA with production experience in: MSSQL Server (SQL Server 2016 and above) PostgreSQL (including PostGIS, logical/physical replication) MongoDB (including MongoDB Atlas, replica sets, sharding) · Experience with cloud database services (AWS RDS, Azure SQL, GCP Cloud SQL). · Strong understanding of performance tuning, indexing, and query optimization. · Solid grasp of backup and restore strategies, disaster recovery, and HA setups. · Familiarity with monitoring tools (e.g., Prometheus, Datadog, New Relic, Zabbix). · Knowledge of scripting languages (PowerShell, Bash, or Python). · Understanding of DevOps principles, version control (Git), CI/CD pipelines. Preferred Qualifications: · Certification in any cloud platform (AWS/Azure/GCP). · Microsoft Certified: Azure Database Administrator Associate. · Experience with Kubernetes Operators for databases (e.g., Crunchy Postgres Operator). · Experience with Infrastructure as Code (Terraform, CloudFormation). Benefits: · Competitive salary and performance bonus. · Health insurance, paid leaves. · Opportunity to work with cutting-edge cloud and database technologies. 
Job Types: Full-time, Permanent Pay: ₹400,000.00 - ₹1,000,000.00 per year Benefits: Health insurance Life insurance Paid sick time Paid time off Provident Fund Schedule: Evening shift Monday to Friday Morning shift Night shift Rotational shift US shift Weekend availability Supplemental Pay: Performance bonus Quarterly bonus Work Location: In person Application Deadline: 25/07/2025 Expected Start Date: 01/08/2025
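The scripting and automation duties above (scheduled backups pushed to cloud storage) are typically a short script. A minimal sketch, assuming a hypothetical PostgreSQL host, S3 bucket, and pg_dump available on the PATH:

```python
# Minimal backup-automation sketch: take a PostgreSQL dump with pg_dump and
# push it to S3 with boto3. Hostnames, bucket, and credential handling are
# hypothetical placeholders.
import datetime
import subprocess
import boto3

DB_HOST = "pg-primary.example.internal"   # placeholder
DB_NAME = "stockupc"                      # placeholder
BUCKET = "example-db-backups"             # placeholder

stamp = datetime.datetime.utcnow().strftime("%Y%m%dT%H%M%SZ")
dump_path = f"/tmp/{DB_NAME}-{stamp}.dump"

# Custom-format dump so pg_restore can do selective or parallel restores.
subprocess.run(
    ["pg_dump", "-h", DB_HOST, "-U", "backup_user", "-Fc", "-f", dump_path, DB_NAME],
    check=True,
)

boto3.client("s3").upload_file(dump_path, BUCKET, f"postgres/{DB_NAME}/{stamp}.dump")
print(f"uploaded {dump_path} to s3://{BUCKET}/postgres/{DB_NAME}/{stamp}.dump")
```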

Posted 1 day ago

Apply

6.0 years

0 Lacs

Delhi

Remote

Overview WELCOME TO SITA We're the team that keeps airports moving, airlines flying smoothly, and borders open. Our tech and communication innovations are the secret behind the success of the world’s air travel industry. You'll find us at 95% of international hubs. We partner closely with over 2,500 transportation and government clients, each with their own unique needs and challenges. Our goal is to find fresh solutions and cutting-edge tech to make their operations run like clockwork. Want to be a part of something big? Are you ready to love your job? The adventure begins right here, with you, at SITA. ABOUT THE ROLE & TEAM The Senior Software Developer (Database Administrator) will play a pivotal role in the design, development, and maintenance of high-performance and scalable database environments. This individual will ensure seamless integration of various database components, leveraging advanced technologies to support applications and data systems. The candidate should possess expertise in SQL Server and MongoDB; experience with other NoSQL solutions would be a plus. WHAT YOU’LL DO Manage, monitor, and maintain SQL Server databases, both on-premises and in the cloud, across production and non-production environments. Design and implement scalable and reliable database architectures. Develop robust and secure database systems, ensuring high availability and performance. Create and maintain shell scripts for database automation, monitoring, and administrative tasks. Troubleshoot and resolve database issues to ensure system stability and optimal performance. Implement backup, recovery, migration, and disaster recovery strategies. Collaborate with cross-functional teams to understand requirements and deliver database solutions that align with business objectives. Qualifications ABOUT YOUR SKILLS Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field. Over 6 years of experience in database administration, specializing in MongoDB and SQL Server. Proficient in shell scripting (e.g., Bash, PowerShell) for database automation. Expertise in query optimization, database performance tuning, and high-availability setups such as replica sets, sharding, and failover clusters. Familiarity with cloud-based database solutions and DevOps pipelines. Skilled in database security, including role-based access and encryption. Experienced with monitoring tools like mongotop, mongostat, and SQL Profiler. Knowledge of messaging queues (RabbitMQ, IBM MQ, or Solace) is a plus. Strong understanding of database administration best practices, design patterns, and standards. Demonstrates excellent problem-solving skills, attention to detail, and effective communication and teamwork abilities. NICE-TO-HAVE Professional certification is a plus. WHAT WE OFFER We’re all about diversity. We operate in 200 countries and across 60 different languages and cultures. We’re really proud of our inclusive environment. Our offices are comfortable and fun places to work, and we make sure you get to work from home too. Find out what it's like to join our team and take a step closer to your best life ever. Flex Week: Work from home up to 2 days/week (depending on your team’s needs) Flex Day: Make your workday suit your life and plans. Flex Location: Take up to 30 days a year to work from any location in the world. Employee Wellbeing: We’ve got you covered with our Employee Assistance Program (EAP), for you and your dependents 24/7, 365 days/year.
We also offer Champion Health – a personalized platform that supports a range of wellbeing needs. Professional Development : Level up your skills with our training platforms, including LinkedIn Learning! Competitive Benefits : Competitive benefits that make sense with both your local market and employment status. SITA is an Equal Opportunity Employer. We value a diverse workforce. In support of our Employment Equity Program, we encourage women, aboriginal people, members of visible minorities, and/or persons with disabilities to apply and self-identify in the application process.
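Given the monitoring expectations above (mongotop, mongostat, SQL Profiler), here is a minimal Python sketch that pulls comparable counters from MongoDB's serverStatus; the connection URI and the 80% threshold are hypothetical placeholders:

```python
# Minimal monitoring sketch in the spirit of mongostat: read a few serverStatus
# counters over PyMongo and flag unusually high connection usage.
from pymongo import MongoClient

client = MongoClient("mongodb://db-monitor.example.internal:27017")  # placeholder
status = client.admin.command("serverStatus")

conns = status["connections"]
ops = status["opcounters"]

print(f"current connections : {conns['current']} (available: {conns['available']})")
print(f"opcounters          : insert={ops['insert']} query={ops['query']} update={ops['update']}")

# Simple alerting heuristic: warn when most of the connection pool is in use.
usage = conns["current"] / (conns["current"] + conns["available"])
if usage > 0.80:
    print("WARNING: connection usage above 80%, check client pools and ulimits")
```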

Posted 1 day ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About Company: The healthcare industry is the next great frontier of opportunity for software development, and Health Catalyst is one of the most dynamic and influential companies in this space. We are working on solving national-level healthcare problems, and this is your chance to improve the lives of millions of people, including your family and friends. Health Catalyst is a fast-growing company that values smart, hardworking, and humble individuals. Each product team is a small, mission-critical team focused on developing innovative tools to support Catalyst’s mission to improve healthcare performance, cost, and quality. POSITION OVERVIEW: We are looking for a highly skilled Senior Database Engineer & Storage Expert with 5+ years of hands-on experience in managing and optimizing large-scale, high-throughput database systems. The ideal candidate will possess deep expertise in handling complex ingestion pipelines across multiple data stores and a strong understanding of distributed database architecture. The candidate will play a critical technical leadership role in ensuring our data systems are robust, performant, and scalable to support massive datasets ingested from various sources without bottlenecks. You will work closely with data engineers, platform engineers, and infrastructure teams to continuously improve database performance and reliability. KEY RESPONSIBILITIES: • Query Optimization: Design, write, debug, and optimize complex queries for RDS (MySQL/PostgreSQL), MongoDB, Elasticsearch, and Cassandra. • Large-Scale Ingestion: Configure databases to handle high-throughput data ingestion efficiently. • Database Tuning: Optimize database configurations (e.g., memory allocation, connection pooling, indexing) to support large-scale operations. • Schema and Index Design: Develop schemas and indexes to ensure efficient storage and retrieval of large datasets. • Monitoring and Troubleshooting: Analyze and resolve issues such as slow ingestion rates, replication delays, and performance bottlenecks. • Performance Debugging: Analyze and troubleshoot database slowdowns by investigating query execution plans, logs, and metrics. • Log Analysis: Use database logs to diagnose and resolve issues related to query performance, replication, and ingestion bottlenecks. • Data Partitioning and Sharding: Implement partitioning, sharding, and other distributed database techniques to improve scalability. • Batch and Real-Time Processing: Optimize ingestion pipelines for both batch and real-time workloads. • Collaboration: Partner with data engineers and Kafka experts to design and maintain robust ingestion pipelines. • Stay Updated: Stay up to date with the latest advancements in database technologies and recommend improvements. REQUIRED SKILLS AND QUALIFICATIONS: • Database Expertise: Proven experience with MySQL/PostgreSQL (RDS), MongoDB, Elasticsearch, and Cassandra. • High-Volume Operations: Proven experience in configuring and managing databases for large-scale data ingestions. • Performance Tuning: Hands-on experience with query optimization, indexing strategies, and execution plan analysis for large datasets. • Database Internals: Strong understanding of replication, partitioning, sharding, and caching mechanisms. • Data Modeling: Ability to design schemas and data models tailored for high-throughput use cases. • Programming Skills: Proficiency in at least one programming language (e.g., Python, Java, Go) for building data pipelines.
• Debugging Proficiency: Strong ability to debug slowdowns by analyzing database logs, query execution plans, and system metrics. • Log Analysis Tools: Familiarity with database log formats and tools for parsing and analyzing logs. • Monitoring Tools: Experience with monitoring tools such as AWS CloudWatch, Prometheus, and Grafana to track ingestion performance. • Problem-Solving: Analytical skills to diagnose and resolve ingestion-related issues effectively. PREFERRED QUALIFICATIONS: • Certification in any of the mentioned database technologies. • Hands-on experience with cloud platforms such as AWS (preferred), Azure, or GCP. • Knowledge of distributed systems and large-scale data processing. • Familiarity with cloud-based database solutions and infrastructure. • Familiarity with large scale data ingestion tools like Kafka, Spark or Flink. EDUCATIONAL REQUIREMENTS: • Bachelor’s degree in computer science, Information Technology, or a related field. Equivalent work experience will also be considered Equal Employment Opportunity has been, and will continue to be, a fundamental principle at Health Catalyst, where employment is based upon personal capabilities and qualification without discrimination or harassment on the basis of race, color, national origin, religion, sex, sexual orientation, gender identity, age, disability, citizenship status, marital status, creed, genetic predisposition or carrier status, sexual orientation or any other characteristic protected by law.. Health Catalyst is committed to a work environment where all individuals are treated with respect and dignity.
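The performance-debugging duties above center on reading execution plans. A minimal, illustrative sketch of capturing EXPLAIN (ANALYZE, BUFFERS) output for a suspect query over psycopg2; the DSN, table, and query are hypothetical placeholders:

```python
# Minimal execution-plan sketch: run EXPLAIN (ANALYZE, BUFFERS) on a slow query
# and print the plan lines for inspection.
import psycopg2

conn = psycopg2.connect("host=rds-ingest.example.internal dbname=ingest user=dba password=secret")
cur = conn.cursor()

suspect_query = """
    SELECT patient_id, count(*)
    FROM observations
    WHERE observed_at >= now() - interval '1 day'
    GROUP BY patient_id
"""

# EXPLAIN ANALYZE actually executes the statement, so roll the transaction back afterwards.
cur.execute("EXPLAIN (ANALYZE, BUFFERS) " + suspect_query)
for (line,) in cur.fetchall():
    print(line)  # look for sequential scans on large tables, bad row estimates, spilled sorts
conn.rollback()

cur.close()
conn.close()
```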

Posted 1 day ago

Apply

3.0 years

0 Lacs

Greater Chennai Area

On-site

Responsibilities: Participate in requirements definition, analysis, and the design of logical and physical data models for Dimensional Data Model, NoSQL, or Graph Data Model. Lead data discovery discussions with Business in JAD sessions and map the business requirements to logical and physical data modeling solutions. Conduct data model reviews with project team members. Capture technical metadata through data modeling tools. Ensure database designs efficiently support BI and end user requirements. Drive continual improvement and enhancement of existing systems. Collaborate with ETL/Data Engineering teams to create data process pipelines for data ingestion and transformation. Collaborate with Data Architects for data model management, documentation, and version control. Maintain expertise and proficiency in the various application areas. Maintain current knowledge of industry trends and standards. Required Skills: Strong data analysis and data profiling skills. Strong conceptual, logical, and physical data modeling for VLDB Data Warehouse and Graph DB. Hands-on experience with modeling tools such as ERWIN or another industry-standard tool. Fluent in both normalized and dimensional model disciplines and techniques. Minimum of 3 years' experience in Oracle Database. Hands-on experience with Oracle SQL, PL/SQL, or Cypher. Exposure to Databricks Spark, Delta Technologies, Informatica ETL, or other industry-leading tools. Good knowledge or experience with AWS Redshift and Graph DB design and management. Working knowledge of AWS Cloud technologies, mainly on the services of VPC, EC2, S3, DMS, and Glue. Bachelor's degree in Software Engineering, Computer Science, or Information Systems (or equivalent experience). Excellent verbal and written communication skills, including the ability to describe complex technical concepts in relatable terms. Ability to manage and prioritize multiple workstreams with confidence in making decisions about prioritization. Data-driven mentality. Self-motivated, responsible, conscientious, and detail-oriented. Effective oral and written communication skills. Ability to learn and maintain knowledge of multiple application areas. Understanding of industry best practices pertaining to Quality Assurance concepts. Education and Experience Level: Bachelor's degree in Computer Science, Engineering, or relevant fields with 3+ years of experience as a Data and Solution Architect supporting Enterprise Data and Integration Applications or a similar role for large-scale enterprise solutions. 3+ years of experience in Big Data Infrastructure and tuning experience in Lakehouse Data Ecosystem, including Data Lake, Data Warehouses, and Graph DB. AWS Solutions Architect Professional Level certifications. Extensive experience in data analysis on critical enterprise systems like SAP, E1, Mainframe ERP, SFDC, Adobe Platform, and eCommerce systems. Skill Set Required: GCP, Data Modelling (OLTP, OLAP), indexing, DBSchema, CloudSQL, BigQuery. Data Modeller - Hands-on data modelling for OLTP and OLAP systems. In-depth knowledge of Conceptual, Logical and Physical data modelling. Strong understanding of indexing, partitioning, and data sharding, with practical experience of having done the same. Strong understanding of variables impacting database performance for near-real-time reporting and application interaction. Should have working experience on at least one data modelling tool, preferably DBSchema. People with functional knowledge of the mutual fund industry will be a plus.
Good understanding of GCP databases like AlloyDB, CloudSQL, and BigQuery.
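The GCP modelling skills above (partitioning, clustering, BigQuery) translate into table-design choices like the following minimal sketch using the google-cloud-bigquery client; the project, dataset, table, and column names are hypothetical placeholders:

```python
# Minimal sketch: a date-partitioned, clustered BigQuery table defined via DDL.
from google.cloud import bigquery

client = bigquery.Client(project="example-analytics")  # placeholder project

ddl = """
CREATE TABLE IF NOT EXISTS `example-analytics.funds.nav_snapshots`
(
  fund_id STRING,
  nav NUMERIC,
  snapshot_date DATE
)
PARTITION BY snapshot_date          -- prunes scans to the dates a query touches
CLUSTER BY fund_id                  -- co-locates rows for selective fund lookups
"""

client.query(ddl).result()  # wait for the DDL job to finish
print("partitioned and clustered table created")
```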

Posted 2 days ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

Remote

HEROIC Cybersecurity ( HEROIC.com ) is seeking a Senior Data Infrastructure Engineer with deep expertise in DataStax Enterprise (DSE) and Apache Cassandra to help architect, scale, and maintain the data infrastructure that powers our cybersecurity intelligence platforms. You will be responsible for designing and managing fully automated, big data pipelines that ingest, process, and serve hundreds of billions of breached and leaked records sourced from the surface, deep, and dark web. You'll work with DSE Cassandra, Solr, and Spark, helping us move toward a 99% automated pipeline for data ingestion, enrichment, deduplication, and indexing — all built for scale, speed, and reliability. This position is critical in ensuring our systems are fast, reliable, and resilient as we ingest thousands of unique datasets daily from global threat intelligence sources. What you will do: Design, deploy, and maintain high-performance Cassandra clusters using DataStax Enterprise (DSE) Architect and optimize automated data pipelines to ingest, clean, enrich, and store billions of records daily Configure and manage DSE Solr and Spark to support search and distributed processing at scale Automate dataset ingestion workflows from unstructured surface, deep, and dark web sources Cluster management, replication strategy, capacity planning, and performance tuning Ensure data integrity, availability, and security across all distributed systems Write and manage ETL processes, scripts, and APIs to support data flow automation Monitor systems for bottlenecks, optimize queries and indexes, and resolve production issues Research and integrate third-party data tools or AI-based enhancements (e.g., smart data parsing, deduplication, ML-based classification) Collaborate with engineering, data science, and product teams to support HEROIC’s AI-powered cybersecurity platform Requirements Minimum 5 years experience with Cassandra / DataStax Enterprise in production environments Hands-on experience with DSE Cassandra, Solr, Apache Spark, CQL, and data modeling at scale Strong understanding of NoSQL architecture, sharding, replication, and high availability Advanced knowledge of Linux/Unix, shell scripting, and automation tools (e.g., Ansible, Terraform) Proficient in at least one programming language: Python, Java, or Scala Experience building large-scale automated data ingestion systems or ETL workflows Solid grasp of AI-enhanced data processing, including smart cleaning, deduplication, and classification Excellent written and spoken English communication skills Prior experience with cybersecurity or dark web data (preferred but not required) Benefits Position Type: Full-time Location: Pune, India (Remote – Work from anywhere) Compensation: Competitive salary based on experience Benefits: Paid Time Off + Public Holidays Professional Growth: Amazing upward mobility in a rapidly expanding company. Innovative Culture: Fast-paced, innovative, and mission-driven. Be part of a team that leverages AI and cutting-edge technologies. About Us: HEROIC Cybersecurity ( HEROIC.com ) is building the future of cybersecurity. Unlike traditional cybersecurity solutions, HEROIC takes a predictive and proactive approach to intelligently secure our users before an attack or threat occurs. Our work environment is fast-paced, challenging and exciting. At HEROIC, you’ll work with a team of passionate, engaged individuals dedicated to intelligently securing the technology of people all over the world. 
Position Keywords: DataStax Enterprise (DSE), Apache Cassandra, Apache Spark, Apache Solr, AWS, Jira, NoSQL, CQL (Cassandra Query Language), Data Modeling, Data Replication, ETL Pipelines, Data Deduplication, Data Lake, Linux/Unix Administration, Bash, Docker, Kubernetes, CI/CD, Python, Java, Distributed Systems, Cluster Management, Performance Tuning, High Availability, Disaster Recovery, AI-based Automation, Artificial Intelligence, Big Data, Dark Web Data
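The role above leans on Cassandra/DSE data modelling at scale. A minimal, illustrative sketch with the DataStax Python driver; the contact points, keyspace, replication factor, and the query-first table design are all assumptions for illustration:

```python
# Minimal sketch: partition-key-first modelling in Cassandra/DSE.
from cassandra.cluster import Cluster

cluster = Cluster(["10.0.0.11", "10.0.0.12"])   # placeholder contact points
session = cluster.connect()

session.execute("""
    CREATE KEYSPACE IF NOT EXISTS breach_data
    WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 3}
""")

# Model around the query: "all records for a domain, newest first".
# domain is the partition key; leaked_at orders rows inside the partition.
session.execute("""
    CREATE TABLE IF NOT EXISTS breach_data.records_by_domain (
        domain text,
        leaked_at timestamp,
        record_id uuid,
        email text,
        PRIMARY KEY ((domain), leaked_at, record_id)
    ) WITH CLUSTERING ORDER BY (leaked_at DESC, record_id ASC)
""")

session.execute(
    "INSERT INTO breach_data.records_by_domain (domain, leaked_at, record_id, email) "
    "VALUES (%s, toTimestamp(now()), uuid(), %s)",
    ("example.com", "user@example.com"),
)
cluster.shutdown()
```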

Posted 3 days ago

Apply

7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description Role: MongoDB Senior Database Administrator Location: Offshore/India Who are we looking for? We are looking for 7+ years of administrator experience in MongoDB/Cassandra/Snowflake databases. This role is focused on production support, ensuring database performance, availability, and reliability across multiple clusters. The ideal candidate will be responsible for ensuring the availability, performance, and security of our NoSQL database environment. You will provide 24/7 production support, troubleshoot issues, monitor system health, optimize performance, and collaborate with cross-functional teams to maintain a reliable and efficient Snowflake platform. Technical Skills • Proven experience as a MongoDB/Cassandra/Snowflake Databases Administrator or similar role in production support environments. • 7+ years of hands-on experience as a MongoDB DBA supporting production environments. • Strong understanding of MongoDB architecture, including replica sets, sharding, and the aggregation framework. • Proficiency in writing and optimizing complex MongoDB queries and indexes. • Experience with backup and recovery solutions (e.g., mongodump, mongorestore, Ops Manager). • Solid knowledge of Linux/Unix systems and scripting (Shell, Python, or similar). • Experience with monitoring tools like Prometheus, Grafana, DataStax OpsCenter, or similar. • Understanding of distributed systems and high-availability concepts. • Proficiency in troubleshooting cluster issues, performance tuning, and capacity planning. • In-depth understanding of data management (e.g., permissions, recovery, security, and monitoring). • Understanding of ETL/ELT tools and data integration patterns. • Strong troubleshooting and problem-solving skills. • Excellent communication and collaboration abilities. • Ability to work in a 24/7 support rotation and handle urgent production issues. • Strong understanding of relational database concepts. • Experience with database design, modeling, and optimization is good to have. • Familiarity with data security best practices and backup procedures. Responsibilities • Production Support & Incident Management: Provide 24/7 support for MongoDB environments, including on-call rotation. Monitor system health and respond to alerts, incidents, and performance degradation issues. Troubleshoot and resolve production database issues in a timely manner. • Database Administration: Install, configure, and upgrade MongoDB clusters in on-prem or cloud environments. Perform routine maintenance including backups, restores, indexing, and data migration. Monitor and manage replica sets, sharding, and cluster balancing. • Performance Tuning & Optimization: Analyze query and indexing strategies to improve performance. Tune MongoDB server parameters and JVM settings where applicable. Monitor and optimize disk I/O, memory usage, and CPU utilization. • Security & Compliance: Implement and manage access control, roles, and authentication mechanisms (LDAP, x.509, SCRAM). Ensure encryption, auditing, and compliance with data governance and security policies. • Automation & Monitoring: Create and maintain scripts for automation of routine tasks (e.g., backups, health checks). Set up and maintain monitoring tools (e.g., MongoDB Ops Manager, Prometheus/Grafana, MMS). • Documentation & Collaboration: Maintain documentation on architecture, configurations, procedures, and incident reports. Work closely with application and infrastructure teams to support new releases and deployments.
Qualification • Experience with MongoDB Atlas and other cloud-managed MongoDB services. • MongoDB certification (MongoDB Certified DBA Associate/Professional). • Experience with automation tools like Ansible, Terraform, or Puppet. • Understanding of DevOps practices and CI/CD integration. • Familiarity with other NoSQL and RDBMS technologies is a plus. • Education qualification: Any degree from a reputed college. • 7+ years overall IT experience.
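As one concrete example of the routine health checks mentioned above, a minimal sketch that reads replSetGetStatus over PyMongo and reports secondary lag; the connection URI and the 30-second threshold are hypothetical:

```python
# Minimal replication-lag check for a MongoDB replica set.
from pymongo import MongoClient

client = MongoClient("mongodb://primary.example.internal:27017")  # placeholder
status = client.admin.command("replSetGetStatus")

primary_optime = None
for member in status["members"]:
    if member["stateStr"] == "PRIMARY":
        primary_optime = member["optimeDate"]

for member in status["members"]:
    if member["stateStr"] == "SECONDARY" and primary_optime is not None:
        lag = (primary_optime - member["optimeDate"]).total_seconds()
        flag = "  <-- investigate" if lag > 30 else ""
        print(f"{member['name']}: lag {lag:.0f}s{flag}")
```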

Posted 4 days ago

Apply

0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Full-time | Entry-Level | Freshers Welcome (B.Tech Required) Location: Ahmedabad, Gujarat, India ⸻ About the Role We are seeking a detail-oriented and passionate Junior Database Engineer to join our growing infrastructure team at our Hyderabad office. This is an excellent opportunity for fresh graduates who are eager to dive deep into relational database systems, query optimization, and data infrastructure engineering. You will be responsible for maintaining, optimizing, and scaling MySQL-based database systems that power our marketplace platform—supporting real-time, high-availability operations across global trade networks. ⸻ Core Responsibilities • Support the administration and performance tuning of MySQL databases in production and development environments. • Implement database design best practices including normalization, indexing strategies, and query optimization. • Assist with managing master-slave replication, backup & recovery processes, and disaster recovery planning. • Learn and support sharding strategies, data partitioning, and horizontal scaling for large datasets. • Write and optimize complex SQL queries, stored procedures, and triggers. • Monitor database health using monitoring tools and address bottlenecks, slow queries, or deadlocks. • Collaborate with backend engineers and DevOps to ensure database reliability, scalability, and high availability. ⸻ Technical Skills & Requirements • Fresh graduates (B.Tech in Computer Science, IT, or related fields) with academic or project experience in SQL and RDBMS. • Strong understanding of relational database design, ACID principles, and transaction management. • Hands-on experience with MySQL or compatible systems (MariaDB, Percona). • Familiarity with ER modeling, data migration, and schema versioning. • Exposure to concepts like: • Replication (master-slave/master-master) • Sharding & partitioning • Write/read splitting • Backup strategies (mysqldump, Percona XtraBackup) • Connection pooling and resource utilization • Comfortable working in Linux environments and using CLI tools. • Strong analytical skills and a curiosity to explore and solve data-layer challenges. Interview Process 1. Shortlisting – Based on resume and relevant experience 2. Technical Assessment – Practical web development test 3. Final Interview – With the client’s hiring team ⸻ Why Join Us? • Be part of a cutting-edge AI project with global exposure • Work in a professional environment with real growth opportunities • Gain valuable experience in client-facing, production-level development • Strong potential for contract extension or full-time conversion ⸻ Interested in working on impactful web products for the future of AI?
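The replication duties listed above usually begin with a lag check like this minimal sketch; the host and credentials are placeholders, and the column names shown are the MySQL 8.0.22+ SHOW REPLICA STATUS names (older servers use SHOW SLAVE STATUS and Seconds_Behind_Master):

```python
# Minimal sketch: check replica thread health and lag on a MySQL read replica.
import mysql.connector

replica = mysql.connector.connect(host="replica-1.example.internal",
                                  user="monitor", password="secret")
cur = replica.cursor(dictionary=True)
cur.execute("SHOW REPLICA STATUS")
row = cur.fetchone()

if row is None:
    print("not configured as a replica")
else:
    print("IO thread running :", row["Replica_IO_Running"])
    print("SQL thread running:", row["Replica_SQL_Running"])
    print("Lag (seconds)     :", row["Seconds_Behind_Source"])
cur.close()
replica.close()
```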

Posted 4 days ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Dive in and do the best work of your career at DigitalOcean. Journey alongside a strong community of top talent who are relentless in their drive to build the simplest scalable cloud. If you have a growth mindset, naturally like to think big and bold, and are energized by the fast-paced environment of a true industry disruptor, you’ll find your place here. We value winning together—while learning, having fun, and making a profound difference for the dreamers and builders in the world. At DigitalOcean, we're not just simplifying cloud computing - we're revolutionizing it. We serve the developer community and the businesses they build with a relentless pursuit of simplicity. With our customers at the heart of what we do - and powered by a diverse culture that values boldness, speed, simplicity, ownership, and a growth mindset - we are committed to building truly useful products. Come swim with us! Position Overview We are looking for a Software Engineer who is passionate about writing clean, maintainable code and eager to contribute to the success of our platform. As a Software Engineer at DigitalOcean, you will join a dynamic team dedicated to revolutionizing cloud computing.We’re looking for an experienced Software Engineer II to join our growing engineering team. You’ll work on building and maintaining features that directly impact our users, from creating scalable backend systems to improving performance for thousands of customers. What You’ll Do Design, develop, and maintain backend systems and services that power our platform. Collaborate with cross-functional teams to design and implement new features, ensuring the best possible developer experience for our users. Troubleshoot complex technical problems and find efficient solutions in a timely manner. Write high-quality, testable code, and contribute to code reviews to maintain high standards of development practices. Participate in architecture discussions and contribute to the direction of the product’s technical vision. Continuously improve the reliability, scalability, and performance of the platform. Participate in rotating on-call support, providing assistance with production systems when necessary. Mentor and guide junior engineers, helping them grow technically and professionally. What You’ll Add To DigitalOcean A degree in Computer Science, Engineering, or a related field, or equivalent experience. Proficiency in at least one modern programming language (e.g., Go, Python, Ruby, Java, etc.), with a strong understanding of data structures, algorithms, and software design principles. Hands-on experience with cloud computing platforms and infrastructure-as-code practices. Strong knowledge of RESTful API design and web services architecture. Demonstrated ability to build scalable and reliable systems that operate in production at scale. Excellent written and verbal communication skills to effectively collaborate with teams. A deep understanding of testing principles and the ability to write automated tests that ensure the quality of code. A passion for mentoring junior engineers and helping build a culture of learning and improvement. Familiarity with agile methodologies, including sprint planning, continuous integration, and delivery. Knowledge of advanced database concepts such as sharding, indexing, and performance tuning. Exposure to monitoring and observability tools such as Prometheus, Grafana, or ELK Stack. Experience with infrastructure-as-code tools such as Terraform or CloudFormation. 
Familiarity with Kubernetes, Docker, and other containerization/orchestration tools. Why You’ll Like Working for DigitalOcean We innovate with purpose. You’ll be a part of a cutting-edge technology company with an upward trajectory, who are proud to simplify cloud and AI so builders can spend more time creating software that changes the world. As a member of the team, you will be a Shark who thinks big, bold, and scrappy, like an owner with a bias for action and a powerful sense of responsibility for customers, products, employees, and decisions. We prioritize career development. At DO, you’ll do the best work of your career. You will work with some of the smartest and most interesting people in the industry. We are a high-performance organization that will always challenge you to think big. Our organizational development team will provide you with resources to ensure you keep growing. We provide employees with reimbursement for relevant conferences, training, and education. All employees have access to LinkedIn Learning's 10,000+ courses to support their continued growth and development. We care about your well-being. Regardless of your location, we will provide you with a competitive array of benefits to support you from our Employee Assistance Program to Local Employee Meetups to flexible time off policy, to name a few. While the philosophy around our benefits is the same worldwide, specific benefits may vary based on local regulations and preferences. We reward our employees. The salary range for this position is based on market data, relevant years of experience, and skills. You may qualify for a bonus in addition to base salary; bonus amounts are determined based on company and individual performance. We also provide equity compensation to eligible employees, including equity grants upon hire and the option to participate in our Employee Stock Purchase Program. We value diversity and inclusion. We are an equal-opportunity employer, and recognize that diversity of thought and background builds stronger teams and products to serve our customers. We approach diversity and inclusion seriously and thoughtfully. We do not discriminate on the basis of race, religion, color, ancestry, national origin, caste, sex, sexual orientation, gender, gender identity or expression, age, disability, medical condition, pregnancy, genetic makeup, marital status, or military service. This job is located in Hyderabad, India

Posted 4 days ago

Apply

10.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Company Description Celcius Logistics Solutions Pvt Ltd is India's first and only asset-light cold chain marketplace, offering a web and app-based SaaS platform that brings the entire cold chain network online. Our platform enables seamless connections between transporters and manufacturers of perishable products, serving key sectors like dairy, pharmaceuticals, fresh agro produce, and frozen products. We provide comprehensive network monitoring and booking capabilities for reefer vehicle loads and cold storage space across India. With over 3,500 registered reefer trucks and 100+ cold storage facilities, we are revolutionizing the cold chain industry in India. Role Description We are looking for a Senior Database Administrator (DBA) to lead the design, implementation, and management of high-performance, highly available database systems. This role is critical to support real-time data ingestion, processing, and storage for our vehicle telemetry platforms. You will be responsible for ensuring 24/7 database availability, optimizing performance for millions of transactions per day, and enabling scalability for future growth. Key Responsibilities: Design and implement fault-tolerant, highly available database architectures. Manage clustering, replication, and automated failover systems. Ensure zero-downtime during updates, scaling, and maintenance. Monitor and optimize database performance and query efficiency. Tune database configurations for peak performance under load. Implement caching and indexing strategies. Design data models for real-time telemetry ingestion. Implement partitioning, sharding, and retention policies. Ensure data consistency, archival, and lifecycle management. Set up and enforce database access controls and encryption. Perform regular security audits and comply with data regulations. Implement backup, disaster recovery, and restore procedures. Qualifications 10+ years as a hands-on DBA managing production databases Experience handling high-volume, real-time data (ideally telemetry or IoT) Familiarity with microservices-based architectures Proven track record in implementing high-availability and disaster recovery solutions Advanced knowledge of enterprise RDBMS (Oracle, PostgreSQL, MongoDB, etc.) Experience with time-series and geospatial data Hands-on experience with clustering, sharding, and replication Expertise in performance tuning and query optimization Proficiency in database automation and monitoring tools Strong scripting skills (Python, Shell, etc.)
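The partitioning and retention responsibilities above map naturally onto declarative range partitions. A minimal, illustrative sketch for a hypothetical telemetry table in PostgreSQL; the DSN, table, columns, and retention window are placeholders:

```python
# Minimal sketch: time-partitioned telemetry storage with partition-based retention.
import psycopg2

conn = psycopg2.connect("host=telemetry-db.example.internal dbname=fleet user=dba password=secret")
conn.autocommit = True
cur = conn.cursor()

cur.execute("""
    CREATE TABLE IF NOT EXISTS reefer_telemetry (
        vehicle_id  BIGINT NOT NULL,
        recorded_at TIMESTAMPTZ NOT NULL,
        temp_c      NUMERIC(5,2),
        lat         DOUBLE PRECISION,
        lon         DOUBLE PRECISION
    ) PARTITION BY RANGE (recorded_at)
""")

# One partition per month; a scheduler would create the next one ahead of time.
cur.execute("""
    CREATE TABLE IF NOT EXISTS reefer_telemetry_2025_07
    PARTITION OF reefer_telemetry
    FOR VALUES FROM ('2025-07-01') TO ('2025-08-01')
""")

# Retention: dropping an expired partition is instant compared with a bulk DELETE.
cur.execute("DROP TABLE IF EXISTS reefer_telemetry_2024_07")

cur.close()
conn.close()
```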

Posted 5 days ago

Apply

5.0 - 10.0 years

15 - 22 Lacs

Gurugram, Delhi / NCR

Hybrid

about the role Design, develop, and maintain MongoDB databases for high-performance applications. Optimize queries and indexing strategies to improve database performance. Ensure database security, backup, recovery, and disaster recovery planning. Monitor database performance and troubleshoot issues proactively. Implement and manage replication, sharding, and scaling strategies. Collaborate with development teams to optimize data models and queries. Perform regular upgrades, patches, and maintenance of MongoDB clusters. Establish and enforce best practices for database administration and development. Support and automate database operations using scripts and tools. about you Strong expertise in MongoDB development and administration. Experience with database performance tuning and optimization. Hands-on experience with replication, sharding, and indexing. Proficiency in MongoDB query language (Aggregation framework, CRUD operations). Knowledge of database security, authentication, and authorization mechanisms. Experience with backup and recovery strategies. Good to have: Experience with automation tools like Ansible, Shell Scripting, or Python. Good to have: Familiarity with cloud-based MongoDB deployments (MongoDB Atlas, AWS, Azure, GCP). Good to have: Knowledge of any RDBMS, especially Oracle or PostgreSQL. Good to have: Exposure to other NoSQL databases like Cassandra, Redis, or DynamoDB.
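To illustrate the aggregation-framework and indexing work described above, a minimal PyMongo sketch against a hypothetical orders collection; the database, collection, and field names are placeholders, not from the posting:

```python
# Minimal sketch: an aggregation pipeline plus a compound index that supports it.
from datetime import datetime, timedelta
from pymongo import MongoClient, ASCENDING, DESCENDING

client = MongoClient("mongodb://localhost:27017")  # placeholder
orders = client["shop"]["orders"]

# Compound index supporting the $match and $sort below.
orders.create_index([("created_at", DESCENDING), ("status", ASCENDING)])

pipeline = [
    {"$match": {"created_at": {"$gte": datetime.utcnow() - timedelta(days=7)}}},
    {"$group": {"_id": "$status", "count": {"$sum": 1}, "total": {"$sum": "$amount"}}},
    {"$sort": {"count": -1}},
]
for row in orders.aggregate(pipeline):
    print(row["_id"], row["count"], row["total"])
```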

Posted 6 days ago

Apply

5.0 years

0 Lacs

Delhi, India

Remote

About Us HighLevel is an AI powered, all-in-one white-label sales & marketing platform that empowers agencies, entrepreneurs, and businesses to elevate their digital presence and drive growth. We are proud to support a global and growing community of over 2 million businesses, comprised of agencies, consultants, and businesses of all sizes and industries. HighLevel empowers users with all the tools needed to capture, nurture, and close new leads into repeat customers. As of mid 2025, HighLevel processes over 15 billion API hits and handles more than 2.5 billion message events every day. Our platform manages over 470 terabytes of data distributed across five databases, operates with a network of over 250 microservices, and supports over 1 million domain names. Our People With over 1,500 team members across 15+ countries, we operate in a global, remote-first environment. We are building more than software; we are building a global community rooted in creativity, collaboration, and impact. We take pride in cultivating a culture where innovation thrives, ideas are celebrated, and people come first, no matter where they call home. Our Impact As of mid 2025, our platform powers over 1.5 billion messages, helps generate over 200 million leads, and facilitates over 20 million conversations for the more than 2 million businesses we serve each month. Behind those numbers are real people growing their companies, connecting with customers, and making their mark - and we get to help make that happen. About The Role We’re seeking a seasoned Full Stack Developer to join our CRM team — someone who thrives in a fast-paced environment where AI-driven development, intelligent tooling, and high-scale systems are the norm. You’ll work with bleeding-edge tools like Cursor, adopt principles of the Model Context Protocol (MCP), and integrate with third-party marketplaces to help extend the capabilities of our platform Responsibilities: AI-Native Development: Use Cursor, GitHub Copilot, and other AI-enhanced tools to accelerate development workflows Context-Driven Engineering with MCP: Leverage the Model Context Protocol (MCP) to manage model-aware context 3rd-Party Marketplace Integrations: Design and build scalable integrations with external APIs and marketplace ecosystems Front & Backend Development: Build robust CRM features using Vue.js and Node.js Real-Time Systems: Architect event-based applications powered by Kafka, RabbitMQ, ActiveMQ Data Engineering at Scale: Work with ElasticSearch, MongoDB, and related tooling Team Collaboration: Collaborate with cross-functional teams to ship features Developer Excellence: Contribute to a high-quality engineering culture Requirements: 5+ years of full time software development experience Experienced in scaling products from early-stage to high-growth revenue milestones Proven track record of leading engineering teams and driving product development through key inflection points Experience with Node.js and Vue.js Understanding of ElasticSearch, database sharding, autoscaling Experience with Pub-Sub, Kafka, RabbitMQ, ActiveMQ Proficient in MongoDB Comfortable using AI-powered dev tools like Cursor Familiar with Git, CI/CD, agile methodologies Excellent communication and teamwork skills Bachelor's degree or equivalent experience Nice to Have: Experience with third-party marketplaces (e.g., Salesforce, HubSpot, Zapier) Open source/FOSS contributions Familiarity with AI application patterns, LLM context management Exposure to plugin architectures or white-label platforms 
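Among the requirements above, ElasticSearch and sharding come up explicitly. A minimal sketch of creating an index with explicit shard and replica settings, assuming the elasticsearch-py 8.x client; the endpoint, index name, and mapping are hypothetical placeholders:

```python
# Minimal sketch: create an Elasticsearch index with explicit shard/replica counts.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # placeholder endpoint

es.indices.create(
    index="contacts-v1",
    settings={"number_of_shards": 6, "number_of_replicas": 1},
    mappings={
        "properties": {
            "email": {"type": "keyword"},
            "full_name": {"type": "text"},
            "created_at": {"type": "date"},
        }
    },
)

es.index(index="contacts-v1", document={"email": "a@example.com",
                                        "full_name": "Ada Lovelace",
                                        "created_at": "2025-07-01"})
```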
EEO Statement: The company is an Equal Opportunity Employer. As an employer subject to affirmative action regulations, we invite you to voluntarily provide the following demographic information. This information is used solely for compliance with government recordkeeping, reporting, and other legal requirements. Providing this information is voluntary and refusal to do so will not affect your application status. This data will be kept separate from your application and will not be used in the hiring decision.

Posted 6 days ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Dive in and do the best work of your career at DigitalOcean. Journey alongside a strong community of top talent who are relentless in their drive to build the simplest scalable cloud. If you have a growth mindset, naturally like to think big and bold, and are energized by the fast-paced environment of a true industry disruptor, you'll find your place here. We value winning together while learning, having fun, and making a profound difference for the dreamers and builders in the world.

We are looking for an experienced database engineer with an operations background in building sustainable solutions for data storage and streaming platforms. Our team's mission statement is to "provide tools and expertise to solve common operational problems, accelerating and simplifying product development." As part of the team, you'll work with a variety of data-related technologies, including MySQL, Clickhouse, Kafka, and Redis, to turn these technologies into platform services. NOTE: this is not a 'data scientist' role; rather, this role helps design and build datastore-related platforms for internal stakeholder groups within DigitalOcean. See below for role expectations.

This is an opportunity to build the services and systems that will accelerate the development of DigitalOcean's cloud features. These services will provide highly available, operationally elegant solutions that serve as a foundation for a growing product base and a global audience. This is a high-impact role, and you'll work with a large variety of product engineering teams across the company.

What You'll Be Doing
Administration, operations, and performance tuning of Vitess-managed MySQL datastores, with a focus on large-scale, sharded environments
Architecting new Vitess-based MySQL database infrastructure on bare metal
Delivering managed data platform solutions as a service that facilitate adoption and offer operational elegance
Working closely with product engineering and infrastructure teams to drive adoption of services throughout the company
Instrumenting and monitoring the services you develop to ensure operational performance
Creating tooling and automation to reduce operational burdens
Establishing best practices for development, deployment, and operations
Driving adoption of services throughout the company
Interacting with developers and teams to resolve site and database issues

What You'll Add To DigitalOcean
Experience supporting MySQL (ideally with Vitess or another sharding solution) in a production environment, with in-depth knowledge of backups, high availability, sharding, and performance tuning
A distinguished track record of developing and automating platform solutions that serve the needs of other engineering teams
Experience with other data technologies such as Kafka and Redis
Fluency in SQL, Python, Bash, or other scripting languages
Experience with Linux performance troubleshooting
Experience with configuration management tooling such as Chef and Ansible

What We'd Love You To Have
An understanding of ProxySQL and Kubernetes
Familiarity with continuous integration tools such as Concourse and GitHub Actions
Some familiarity with Go
A passion for production engineering done in a resilient fashion
A passion for not repeating yourself (DRY) by way of automation

What Will Not Be Expected From You
Demonstrated expertise as a 'data scientist' - this role has a much stronger production engineering focus
Crunching mundane support tickets day over day - be the Automator
Following a lengthy and strict product roadmap - engineers wear product hats as needed and help define what platform gets built

Why You'll Like Working for DigitalOcean
We innovate with purpose. You'll be a part of a cutting-edge technology company with an upward trajectory, who are proud to simplify cloud and AI so builders can spend more time creating software that changes the world. As a member of the team, you will be a Shark who thinks big, bold, and scrappy, like an owner with a bias for action and a powerful sense of responsibility for customers, products, employees, and decisions.
We prioritize career development. At DO, you'll do the best work of your career. You will work with some of the smartest and most interesting people in the industry. We are a high-performance organization that will always challenge you to think big. Our organizational development team will provide you with resources to ensure you keep growing. We provide employees with reimbursement for relevant conferences, training, and education. All employees have access to LinkedIn Learning's 10,000+ courses to support their continued growth and development.
We care about your well-being. Regardless of your location, we will provide you with a competitive array of benefits to support you, from our Employee Assistance Program to local employee meetups to a flexible time off policy, to name a few. While the philosophy around our benefits is the same worldwide, specific benefits may vary based on local regulations and preferences.
We reward our employees. The salary range for this position is based on market data, relevant years of experience, and skills. You may qualify for a bonus in addition to base salary; bonus amounts are determined based on company and individual performance. We also provide equity compensation to eligible employees, including equity grants upon hire and the option to participate in our Employee Stock Purchase Program.
We value diversity and inclusion. We are an equal-opportunity employer and recognize that diversity of thought and background builds stronger teams and products to serve our customers. We approach diversity and inclusion seriously and thoughtfully. We do not discriminate on the basis of race, religion, color, ancestry, national origin, caste, sex, sexual orientation, gender, gender identity or expression, age, disability, medical condition, pregnancy, genetic makeup, marital status, or military service.

This role is located in Hyderabad, India.
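The Vitess work described in this posting centers on horizontally sharded MySQL. Below is a minimal sketch, not DigitalOcean's actual setup: it assumes a hypothetical `commerce` keyspace whose VSchema defines a hash vindex on `customer_id`, and a vtgate endpoint reachable over the MySQL wire protocol (host, port, and credentials are placeholders). The point is that the application connects to vtgate as if it were a single MySQL server and Vitess routes each statement to the owning shard.

```python
import pymysql

# Connect to vtgate, which speaks the MySQL wire protocol and hides the shards.
# Host, port, and credentials below are illustrative placeholders.
conn = pymysql.connect(
    host="vtgate.internal.example",
    port=15306,
    user="app_user",
    password="app_password",
    database="commerce",  # keyspace name (hypothetical)
)

try:
    with conn.cursor() as cur:
        # Vitess routes this INSERT to the shard owning customer_id=42,
        # assuming the keyspace's VSchema defines a hash vindex on customer_id.
        cur.execute(
            "INSERT INTO orders (customer_id, amount) VALUES (%s, %s)",
            (42, 1999),
        )
        # A query that filters on the sharding key is served by a single shard.
        cur.execute("SELECT * FROM orders WHERE customer_id = %s", (42,))
        print(cur.fetchall())
    conn.commit()
finally:
    conn.close()
```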

Posted 6 days ago

Apply

6.0 - 10.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

You are a Database Performance & Data Modeling Specialist with a primary focus on optimizing schema structures, tuning SQL queries, and ensuring that data models are well prepared for high-volume, real-time systems. Your responsibilities include designing data models that balance performance, flexibility, and scalability; conducting performance benchmarking to identify bottlenecks and propose improvements; analyzing slow queries to recommend indexing, denormalization, or schema revisions; monitoring query plans, memory usage, and caching strategies for cloud databases; and collaborating with developers and analysts to optimize application-to-database workflows.

You must possess strong experience in database performance tuning, especially on GCP platforms such as BigQuery, CloudSQL, and AlloyDB. Proficiency in schema refactoring, partitioning, clustering, and sharding techniques is essential. Familiarity with profiling tools, slow query logs, and GCP monitoring solutions is required, along with SQL optimization skills including query rewriting and execution plan analysis.

Preferred skills include a background in mutual fund or high-frequency financial data modeling, and hands-on experience with relational databases such as PostgreSQL and MySQL, distributed caching, materialized views, and hybrid model structures.

Soft skills that are crucial for this role include being precision-driven with an analytical mindset, communicating clearly with attention to detail, and possessing strong problem-solving and troubleshooting abilities.

By joining this role, you will have the opportunity to shape high-performance data systems from the ground up, play a critical role in system scalability and responsiveness, and work with high-volume data in a cloud-native enterprise setting.
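Much of the schema-level tuning this posting describes (partitioning and clustering for high-volume data on GCP) comes down to how a BigQuery table is declared. Here is a minimal sketch using the google-cloud-bigquery client; the project, dataset, table, and choice of partition/cluster columns are illustrative assumptions, not part of the posting.

```python
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

table_id = "my-project.analytics.fund_transactions"  # hypothetical table

schema = [
    bigquery.SchemaField("txn_id", "STRING", mode="REQUIRED"),
    bigquery.SchemaField("fund_id", "STRING", mode="REQUIRED"),
    bigquery.SchemaField("trade_date", "DATE", mode="REQUIRED"),
    bigquery.SchemaField("amount", "NUMERIC"),
]

table = bigquery.Table(table_id, schema=schema)

# Partition by day on trade_date so queries that filter on a date range
# scan only the relevant partitions instead of the whole table.
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY,
    field="trade_date",
)

# Cluster within each partition by fund_id to further reduce bytes scanned
# for the common "one fund over a date range" access pattern.
table.clustering_fields = ["fund_id"]

client.create_table(table)
```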

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a Data Architect specializing in OLTP & OLAP Systems, you will play a crucial role in designing, optimizing, and governing data models for both OLTP and OLAP environments. Your responsibilities will include architecting end-to-end data models across different layers, defining conceptual, logical, and physical data models, and collaborating closely with stakeholders to capture functional and performance requirements. You will need to optimize database structures for real-time and analytical workloads, enforce data governance, security, and compliance best practices, and enable schema versioning, lineage tracking, and change control. Additionally, you will review query plans and indexing strategies to enhance performance.

To excel in this role, you must possess a deep understanding of OLTP and OLAP systems architecture, along with proven experience in GCP databases such as BigQuery, CloudSQL, and AlloyDB. Your expertise in database tuning, indexing, sharding, and normalization/denormalization will be critical, as well as proficiency in data modeling tools like DBSchema, ERWin, or equivalent. Familiarity with schema evolution, partitioning, and metadata management is also required.

Experience in the BFSI or mutual fund domain, knowledge of near real-time reporting and streaming analytics architectures, and familiarity with CI/CD for database model deployments are preferred skills that will set you apart. Strong communication, stakeholder management, strategic thinking, and the ability to mentor data modelers and engineers are essential soft skills for success in this position.

By joining our team, you will have the opportunity to own the core data architecture for a cloud-first enterprise, bridge business goals with robust data design, and work with modern data platforms and tools. If you are looking to make a significant impact in the field of data architecture, this role is perfect for you.
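The query-plan and indexing reviews mentioned in this role are usually driven by EXPLAIN output. Below is a minimal sketch against a CloudSQL or AlloyDB PostgreSQL instance using psycopg2; the connection details, table, columns, and the chosen index are all hypothetical, and the index shown is just one typical remedy when the plan reveals a sequential scan on a hot OLTP query.

```python
import psycopg2

# Placeholder connection details for a CloudSQL/AlloyDB PostgreSQL instance.
conn = psycopg2.connect(
    host="10.0.0.5", dbname="appdb", user="dba", password="secret"
)
conn.autocommit = True  # required for CREATE INDEX CONCURRENTLY

with conn.cursor() as cur:
    # Inspect the plan for a hot OLTP query before changing anything.
    cur.execute(
        "EXPLAIN (ANALYZE, BUFFERS) "
        "SELECT * FROM transactions "
        "WHERE account_id = %s ORDER BY created_at DESC LIMIT 50",
        (12345,),
    )
    for (line,) in cur.fetchall():
        print(line)

    # If the plan shows a sequential scan, a composite index matching the
    # filter and sort columns is a common fix; CONCURRENTLY avoids blocking writes.
    cur.execute(
        "CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_txn_account_created "
        "ON transactions (account_id, created_at DESC)"
    )

conn.close()
```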

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

We are looking for a Data Modelling Consultant with 6 to 9 years of experience to work in our Chennai office. As a Data Modelling Consultant, your role will involve providing end-to-end modeling support for OLTP and OLAP systems hosted on Google Cloud. Your responsibilities will include designing and validating conceptual, logical, and physical models for cloud databases, translating requirements into efficient schema designs, and supporting data model reviews, tuning, and implementation. You will also guide teams on best practices for schema evolution, indexing, and governance to enable usage of models in real-time applications and analytics platforms.

To succeed in this role, you must have strong experience in modeling across OLTP and OLAP systems, hands-on experience with GCP tools like BigQuery, CloudSQL, and AlloyDB, and the ability to understand business rules and translate them into scalable structures. Additionally, familiarity with partitioning, sharding, materialized views, and query optimization is essential.

Preferred skills for this role include experience with BFSI or financial domain data schemas and familiarity with modeling methodologies and standards such as 3NF and star schema. Soft skills like excellent stakeholder communication, collaboration, strategic thinking, and attention to scalability are also important.

Joining this role will allow you to deliver advisory value across critical data initiatives, influence the modeling direction for a data-driven organization, and be at the forefront of GCP-based enterprise data transformation.
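Among the techniques this posting lists, materialized views are a common way to serve near real-time analytics without re-aggregating on every query. Here is a minimal sketch using the google-cloud-bigquery client; the project, dataset, tables, and columns are hypothetical, and the exact aggregate and GROUP BY restrictions for materialized views depend on BigQuery's current limitations.

```python
from google.cloud import bigquery

client = bigquery.Client()

# A materialized view that pre-aggregates daily flows per scheme;
# BigQuery keeps it incrementally refreshed as the base table changes.
ddl = """
CREATE MATERIALIZED VIEW `my-project.analytics.daily_flows_mv` AS
SELECT
  scheme_id,
  DATE(txn_ts) AS txn_date,
  SUM(amount)  AS total_amount,
  COUNT(*)     AS txn_count
FROM `my-project.analytics.transactions`
GROUP BY scheme_id, txn_date
"""

client.query(ddl).result()  # waits for the DDL job to finish
```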

Posted 1 week ago

Apply

7.0 years

0 Lacs

Greater Kolkata Area

Remote

About Workato
Workato transforms technology complexity into business opportunity. As the leader in enterprise orchestration, Workato helps businesses globally streamline operations by connecting data, processes, applications, and experiences. Its AI-powered platform enables teams to navigate complex workflows in real time, driving efficiency and agility. Trusted by a community of 400,000 global customers, Workato empowers organizations of every size to unlock new value and lead in today's fast-changing world. Learn how Workato helps businesses of all sizes achieve more at workato.com.

Why join us?
Ultimately, Workato believes in fostering a flexible, trust-oriented culture that empowers everyone to take full ownership of their roles. We are driven by innovation and looking for team players who want to actively build our company. But we also believe in balancing productivity with self-care. That's why we offer all of our employees a vibrant and dynamic work environment along with a multitude of benefits they can enjoy inside and outside of their work lives. If this sounds right up your alley, please submit an application. We look forward to getting to know you!

Also, feel free to check out why:
Business Insider named us an "enterprise startup to bet your career on"
Forbes' Cloud 100 recognized us as one of the top 100 private cloud companies in the world
Deloitte Tech Fast 500 ranked us as the 17th fastest growing tech company in the Bay Area, and 96th in North America
Quartz ranked us the #1 best company for remote workers

Responsibilities
We are looking for an exceptional Senior Infrastructure Engineer with experience in building high-performing, scalable, enterprise-grade applications to join our growing team. In this role, you will be responsible for building a high-performance queuing/storage engine. You will work in a polyglot environment where you can learn new languages and technologies whilst working with an enthusiastic team.

You will also be responsible for:

Software Engineering
Design and develop high-volume, low-latency applications for mission-critical systems and deliver high availability and performance
Contribute to all phases of the development life cycle
Write well-designed, testable, efficient code
Evaluate and propose improvements to existing systems
Support continuous improvement by investigating alternatives and technologies and presenting these for architectural review

Infrastructure Engineering
Maintain and evolve application cloud infrastructure (AWS)
Maintain and evolve Kubernetes clusters
Harden infrastructure according to compliance and security requirements
Maintain and develop monitoring, logging, tracing, and alerting solutions

OpenSearch Expertise
Experience scaling OpenSearch clusters to handle heavy query and indexing workloads, including optimizing bulk indexing operations and query throughput
Proficiency in implementing and managing effective sharding strategies to balance performance, storage, and recovery needs
Advanced knowledge of OpenSearch performance tuning, including JVM settings, field mappings, and cache optimization
Expertise in designing robust disaster recovery solutions with cross-cluster replication, snapshots, and restoration procedures
Experience implementing and optimizing vector search capabilities for ML applications, including k-NN algorithms and approximate nearest neighbor (ANN) search
Knowledge of custom OpenSearch plugin development for specialized indexing or query requirements
Hands-on experience deploying and managing self-hosted OpenSearch clusters in Kubernetes environments
Familiarity with monitoring OpenSearch performance metrics and implementing automated scaling solutions

Requirements
Qualifications / Experience / Technical Skills
BS/MS degree in Computer Science, Engineering, or a related subject
7+ years of industry experience
Experience working with public cloud infrastructure providers (AWS/Azure/Google Cloud)
Experience with Terraform and Docker
A hands-on approach to implementing solutions
Good understanding of Linux networking and security
Exceptional understanding of Kubernetes concepts
Experience with Golang/Python/Java/Ruby (any) and databases such as PostgreSQL
Contributions to open source projects are a plus

Soft Skills / Personal Characteristics
Communicate in English with colleagues and customers
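The OpenSearch expertise this role calls for is largely decided when an index is declared: shard and replica counts, mappings, and (for vector search) k-NN fields. Here is a minimal sketch with the opensearch-py client; the endpoint, credentials, index name, shard counts, and vector dimension are illustrative assumptions rather than Workato's actual configuration.

```python
from opensearchpy import OpenSearch

# Placeholder endpoint and credentials for a self-hosted cluster.
client = OpenSearch(
    hosts=[{"host": "opensearch.internal.example", "port": 9200}],
    http_auth=("admin", "admin"),
    use_ssl=True,
    verify_certs=False,
)

index_body = {
    "settings": {
        "index": {
            "number_of_shards": 6,      # sized for expected data volume and node count
            "number_of_replicas": 1,    # one replica for availability
            "refresh_interval": "30s",  # relaxed refresh helps heavy bulk indexing
            "knn": True,                # enable the k-NN plugin for vector fields
        }
    },
    "mappings": {
        "properties": {
            "doc_id": {"type": "keyword"},
            "body": {"type": "text"},
            # Approximate nearest-neighbour vector field (dimension is an assumption).
            "embedding": {"type": "knn_vector", "dimension": 384},
        }
    },
}

client.indices.create(index="documents-v1", body=index_body)
```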

Posted 1 week ago

Apply

4.0 years

1 - 5 Lacs

Noida

On-site

Position: Database Administrator

About Wildnet Technologies:
Wildnet Technologies is an award-winning White Label Digital Marketing and IT Services company with a track record of helping businesses and Google Partner Agencies achieve their goals. We offer a comprehensive range of high-quality Digital Marketing Services and On-Demand Technology Resources. With over 12,000 successful projects delivered to date, our team of 300+ professionals is headquartered in India and serves clients in the United States, Canada, Australia, and the United Kingdom. Our expertise includes SEO, Paid Search, Paid Social Services, programmatic advertising, and more.

Job Responsibilities:
Deploy, monitor, and manage databases across both production and pre-production environments.
Automate infrastructure provisioning and configuration using Terraform and Ansible.
Manage infrastructure on Linux-based systems such as RHEL 9.x.
Monitor system health, establish comprehensive alerting, and respond to incidents proactively to minimize downtime.
Collaborate with DevOps and Data Engineering teams to seamlessly align infrastructure with MLOps workflows.
Implement robust security controls, including data encryption, access management, and comprehensive auditing to protect sensitive information.
Troubleshoot and resolve performance issues within our database systems, ensuring optimal operation.

Required Skills:
PostgreSQL: In-depth knowledge of administration, performance tuning, replication, backup, and recovery.
MariaDB/MySQL: Proficiency in managing these relational databases, including high availability solutions, schema design, query optimization, and user management.
MongoDB: Experience with NoSQL database administration, including sharding, replica sets, indexing, and performance monitoring.
MS SQL Server: Familiarity with managing SQL Server environments, including maintenance plans, security, and troubleshooting.
AWS RDS/Aurora: Strong practical experience with Amazon Relational Database Service (RDS) and Aurora, encompassing instance provisioning, scaling, monitoring, and backup strategies.
4+ years of experience as a Database Administrator or DevOps Engineer with a focus on Linux OS.
Extensive experience with Infrastructure as Code (IaC) tools, specifically Terraform and Ansible.
Comprehensive knowledge of networking, security, and performance tuning within distributed environments.
Proven experience with monitoring tools like DataDog, Splunk, SignalFx, and PagerDuty.
Deep knowledge and practical experience with the AWS cloud platform.
Familiarity with other cloud platforms (e.g., GCP, Azure, or IBM Cloud) is a plus.
Good understanding of Docker and container technologies.

Good to Have:
Certifications in Kubernetes (CKA/CKAD), Terraform (HashiCorp Certified), or Linux (RHCE/LPIC).
Exposure to CI/CD pipelines, GitOps workflows, and tools like ArgoCD or Flux.

Why Join Wildnet:
Established Industry Leader: 15+ years of expertise in digital marketing and IT services; among the pioneers in India's digital space.
Great Place to Work® Certified: Recognized for fostering a flexible, positive, and people-first work culture.
Learning & Growth: Fast-paced environment with ongoing training, career advancement, and leadership development opportunities.
Health & Wellness Benefits: Comprehensive insurance and wellness support for employees and their families.
Work-Life Balance: Flexible working hours, a 5-day work week, and a generous leave policy to support personal well-being.
Exposure to Top Clients: Work on diverse projects with leading global brands across industries.
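Much of the RDS/Aurora backup work this role describes is scriptable with boto3. Below is a minimal sketch that takes a manual snapshot of an instance and waits for it to become available; the region and instance identifier are placeholders.

```python
import datetime

import boto3

rds = boto3.client("rds", region_name="ap-south-1")  # region is an assumption

instance_id = "prod-postgres-01"  # hypothetical RDS instance identifier
snapshot_id = "{}-manual-{}".format(
    instance_id, datetime.datetime.utcnow().strftime("%Y%m%d%H%M")
)

# Manual snapshots persist until deleted, unlike automated backups that expire
# with the retention window, so they are useful before risky changes.
rds.create_db_snapshot(
    DBSnapshotIdentifier=snapshot_id,
    DBInstanceIdentifier=instance_id,
)

# Block until the snapshot is usable (raises if it fails).
waiter = rds.get_waiter("db_snapshot_available")
waiter.wait(DBSnapshotIdentifier=snapshot_id)

print("snapshot ready:", snapshot_id)
```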

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a Data Modeller specializing in GCP and Cloud Databases, you will play a crucial role in designing and optimizing data models for both OLTP and OLAP systems. Your expertise in cloud-based databases, data architecture, and modeling will be essential in collaborating with engineering and analytics teams to ensure efficient operational systems and real-time reporting pipelines.

You will be responsible for designing conceptual, logical, and physical data models tailored for OLTP and OLAP systems. Your focus will be on developing and refining models that support performance-optimized cloud data pipelines, implementing models in BigQuery, CloudSQL, and AlloyDB, as well as designing schemas with indexing, partitioning, and data sharding strategies. Translating business requirements into scalable data architecture and schemas will be a key aspect of your role, along with optimizing for near real-time ingestion, transformation, and query performance. You will utilize tools like DBSchema for collaborative modeling and documentation while creating and maintaining metadata and documentation around models.

In terms of required skills, hands-on experience with GCP databases (BigQuery, CloudSQL, AlloyDB), a strong understanding of OLTP and OLAP systems, and proficiency in database performance tuning are essential. Additionally, familiarity with modeling tools such as DBSchema or ERWin, as well as proficiency in SQL, schema definition, and normalization/denormalization techniques, will be beneficial.

Preferred skills include functional knowledge of the Mutual Fund or BFSI domain, experience integrating with cloud-native ETL and data orchestration pipelines, and familiarity with schema version control and CI/CD in a data context.

In addition to technical skills, soft skills such as strong analytical and communication abilities, attention to detail, and a collaborative approach across engineering, product, and analytics teams are highly valued. Joining this role will provide you with the opportunity to work on enterprise-scale cloud data architectures, drive performance-oriented data modeling for advanced analytics, and collaborate with high-performing cloud-native data teams.
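The indexing and partitioning strategies this posting mentions map directly onto DDL choices in CloudSQL/AlloyDB PostgreSQL. Here is a minimal declarative range-partitioning sketch executed through psycopg2; the connection details and schema are hypothetical.

```python
import psycopg2

# Placeholder connection details for a CloudSQL/AlloyDB PostgreSQL instance.
conn = psycopg2.connect(host="10.0.0.7", dbname="funds", user="modeller", password="secret")

ddl = """
-- Parent table is partitioned by transaction timestamp.
CREATE TABLE IF NOT EXISTS txn (
    txn_id      bigint      NOT NULL,
    scheme_id   text        NOT NULL,
    txn_ts      timestamptz NOT NULL,
    amount      numeric(18, 2),
    PRIMARY KEY (txn_id, txn_ts)          -- the partition key must be part of the PK
) PARTITION BY RANGE (txn_ts);

-- Monthly partitions keep indexes small and make old data cheap to detach.
CREATE TABLE IF NOT EXISTS txn_2025_01 PARTITION OF txn
    FOR VALUES FROM ('2025-01-01') TO ('2025-02-01');
CREATE TABLE IF NOT EXISTS txn_2025_02 PARTITION OF txn
    FOR VALUES FROM ('2025-02-01') TO ('2025-03-01');

-- An index on the parent is cascaded to each partition.
CREATE INDEX IF NOT EXISTS idx_txn_scheme_ts ON txn (scheme_id, txn_ts DESC);
"""

with conn:                      # commits the DDL on success
    with conn.cursor() as cur:
        cur.execute(ddl)
conn.close()
```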

Posted 1 week ago

Apply

6.0 - 9.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Title: Data Modeller – GCP & Cloud Databases
Location: Chennai (Work From Office)
Experience Required: 6 to 9 Years

Role Overview
We are looking for a hands-on Data Modeller with strong expertise in cloud-based databases, data architecture, and modeling for OLTP and OLAP systems. You will work closely with engineering and analytics teams to design and optimize conceptual, logical, and physical data models, supporting both operational systems and near real-time reporting pipelines.

Key Responsibilities
Design conceptual, logical, and physical data models for OLTP and OLAP systems
Develop and refine models that support performance-optimized cloud data pipelines
Collaborate with data engineers to implement models in BigQuery, CloudSQL, and AlloyDB
Design schemas and apply indexing, partitioning, and data sharding strategies
Translate business requirements into scalable data architecture and schemas
Optimize for near real-time ingestion, transformation, and query performance
Use tools such as DBSchema or similar for collaborative modeling and documentation
Create and maintain metadata and documentation around models

Must-Have Skills
Hands-on experience with GCP databases: BigQuery, CloudSQL, AlloyDB
Strong understanding of OLTP vs OLAP systems and respective design principles
Experience in database performance tuning: indexing, sharding, and partitioning
Skilled in modeling tools such as DBSchema, ERWin, or similar
Understanding of variables that impact performance in real-time/near real-time systems
Proficient in SQL, schema definition, and normalization/denormalization techniques

Preferred Skills
Functional knowledge of the Mutual Fund or BFSI domain
Experience integrating with cloud-native ETL and data orchestration pipelines
Familiarity with schema version control and CI/CD in a data context

Soft Skills
Strong analytical and communication skills
Detail-oriented and documentation-focused
Ability to collaborate across engineering, product, and analytics teams

Why Join
Work on enterprise-scale cloud data architectures
Drive performance-first data modeling for advanced analytics
Collaborate with high-performing cloud-native data teams

Skills: OLAP, normalization, indexing, GCP databases, sharding, OLAP systems, modeling, schema definition, SQL, data, OLTP systems, AlloyDB, ERWin, modeling tools, BigQuery, database performance tuning, databases, partitioning, denormalization, DBSchema, CloudSQL

Posted 1 week ago

Apply

0.0 - 10.0 years

0 Lacs

Pune, Maharashtra

On-site

Location: Pune, Maharashtra
Experience: 5–10 years
Qualification: Bachelor's or Master's degree in Computer Science, IT, or related field

Requirements:
Design and develop logical and physical database models aligned with business needs.
Implement and configure databases, tables, views, and stored procedures.
Monitor and optimize database performance, tuning queries and resolving bottlenecks.
Implement database security, including access controls, encryption, and compliance measures.
Develop and maintain backup and disaster recovery strategies, ensuring data continuity.
Design and implement data integration mechanisms across systems, ensuring consistency.
Plan and execute strategies for scalability and high availability (sharding, replication, failover).
Lead data migration projects, validating data integrity post-migration.
Create and maintain detailed documentation for schemas, data flows, and specifications.
Collaborate with developers, system admins, and stakeholders to align database architecture.
Troubleshoot and resolve database issues related to errors, performance, and data inconsistencies.
Strong knowledge of RDBMS (MySQL, PostgreSQL, SQL Server) and NoSQL (MongoDB, Cassandra).
Proficient in SQL, database query optimization, indexing, and ETL processes.
Experience with cloud database platforms like AWS RDS, Azure SQL, or Google Cloud SQL.
Excellent problem-solving and communication skills.
Relevant database certifications are a plus.

If you have a deep understanding of database technologies, a passion for data integrity and performance, and a knack for designing efficient data solutions, we encourage you to apply. Join our team and play a vital role in managing our data infrastructure to support the organization's data-driven initiatives. Please share an updated copy of your CV at hrdept@cstech.ai.
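Where this role's scalability strategy relies on sharding and replication, MongoDB makes the shard key an explicit design decision. Below is a minimal sketch run against the mongos router of a sharded cluster with pymongo; the connection string, database, collection, and key choice are illustrative, not part of the posting.

```python
from pymongo import MongoClient

# Connect to the mongos query router (placeholder URI).
client = MongoClient("mongodb://mongos.internal.example:27017")

# Sharding commands are issued against the admin database.
admin = client.admin

# Allow the database to hold sharded collections.
admin.command("enableSharding", "appdb")

# A hashed shard key on customer_id spreads writes evenly across shards,
# at the cost of scatter-gather queries for ranges over that key.
admin.command(
    "shardCollection",
    "appdb.orders",
    key={"customer_id": "hashed"},
)
```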

Posted 1 week ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About the Company
We are seeking a skilled and experienced MongoDB Developer with 5–8 years of expertise in designing, developing, and maintaining scalable NoSQL database solutions. The ideal candidate will have deep knowledge of MongoDB internals, performance tuning, data modeling, and integration with backend applications. You will play a key role in the development and optimization of data-driven applications across various domains.

About the Role
The MongoDB Developer will be responsible for designing and implementing MongoDB data models based on application requirements and performance considerations.

Responsibilities
Design and implement MongoDB data models based on application requirements and performance considerations.
Develop efficient queries and aggregation pipelines for complex datasets.
Integrate MongoDB with backend technologies (Node.js, Java, Python, etc.).
Monitor, troubleshoot, and optimize MongoDB performance, including indexing and replication strategies.
Collaborate with software engineers, data architects, and DevOps teams to deliver robust and scalable solutions.
Maintain and enhance existing database schemas and documents.
Implement backup and recovery strategies, and participate in disaster recovery planning.
Ensure data security and compliance with organizational policies.
Write clean, maintainable code and maintain detailed documentation.
Contribute to continuous improvement by identifying and promoting best practices in NoSQL development.

Qualifications
5 to 8 years of professional experience with MongoDB development.
Strong knowledge of MongoDB architecture, aggregation framework, and performance tuning.
Experience with schema design, data modeling (both embedded and normalized), and indexing strategies.
Hands-on experience integrating MongoDB with one or more backend programming languages (e.g., Node.js, Java, Python, Go).
Familiarity with MongoDB Atlas and cloud deployment strategies.
Proficient in writing JavaScript or shell scripts for MongoDB automation.
Experience with version control systems like Git and CI/CD pipelines.
Understanding of data replication, sharding, and high availability in MongoDB.
Good problem-solving skills and the ability to work in an Agile environment.

Preferred Skills
Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
Experience with additional NoSQL technologies (e.g., Redis, Cassandra, Elasticsearch) is a plus.
MongoDB certification is an added advantage.
Knowledge of containerization (Docker, Kubernetes) and DevOps practices.

Pay range and compensation package
Salary and compensation details will be discussed during the interview process.

Equal Opportunity Statement
We are committed to diversity and inclusivity in our hiring practices.
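The aggregation pipelines called out in this role are MongoDB's main tool for server-side analytics over operational data. Here is a minimal pymongo sketch that groups paid orders per customer over a date window; the database, collection, and field names are hypothetical.

```python
from datetime import datetime

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
orders = client["shop"]["orders"]                  # hypothetical database/collection

pipeline = [
    # Stage 1: restrict to a status and date window; an index on
    # {status: 1, created_at: 1} lets this $match avoid a collection scan.
    {"$match": {
        "status": "PAID",
        "created_at": {"$gte": datetime(2025, 1, 1), "$lt": datetime(2025, 2, 1)},
    }},
    # Stage 2: aggregate per customer.
    {"$group": {
        "_id": "$customer_id",
        "order_count": {"$sum": 1},
        "total_spend": {"$sum": "$amount"},
    }},
    # Stage 3: top spenders first, capped for the calling application.
    {"$sort": {"total_spend": -1}},
    {"$limit": 20},
]

for doc in orders.aggregate(pipeline):
    print(doc["_id"], doc["order_count"], doc["total_spend"])
```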

Posted 1 week ago

Apply

13.0 years

0 Lacs

Gurugram, Haryana, India

Remote

About The Role
Grade Level (for internal use): 13
Job Title: Engineering Lead – Document Solution (Director Level)

The Team: We are building an exciting Document Solutions offering which leverages Document Digitization and Agreement Intelligence to dramatically reduce the time required to manage these documents effectively while unlocking the vital data you need to generate deeper insights and enhance decision quality. Our solution includes industry-leading modules and tools widely adopted by financial institutions. This strategic initiative uses sophisticated AI models trained to extract data from organization, formation, AML, regulatory, and legal documents. Document Digitization unlocks critical information for reuse across operations, significantly reduces the need for manual review, and enables organizations to adopt scalable processes. This solution is going to be integrated across the regulatory & compliance suite of products such as Counterparty Manager, ISDA Amend, Outreach360, Request for Amendment, KYC, and Tax Utility. We leverage a mature Java/Spring Boot-based tech stack, supported by AWS infrastructure, along with the latest advancements in the industry to deliver this solution over a multi-year span.

What's In It For You
Build a next-generation product that customers can rely on for informed business decisions, enhanced customer experiences, and scalability.
Develop your skills by working on an enterprise-level product focused on client lifecycle management and associated new technologies.
Gain experience with modern, cutting-edge cloud, AI, and platform engineering technologies.
Collaborate directly with clients, commercial teams, product managers, and tech leadership toward the common goal of achieving business success.
Build a rewarding career with a global company.

Duties & Accountabilities
Lead a global engineering team across backend, front-end, data, and AI functions, with a focus on modern architectures, AI-driven automation, and cross-jurisdictional data compliance.
Design and architect solutions for complex business challenges in the document solution space, utilizing your extensive experience with the Java/Spring Boot/Angular/PostgreSQL tech stack and AWS infrastructure.
Implement agentic AI and LLM-based services to streamline onboarding, document processing, and exception handling.
Provide guidance and technical leadership to development teams on best practices, coding standards, and software design principles, ensuring high-quality outcomes.
Demonstrate a deep understanding of existing system architecture (spanning multiple systems) and creatively envision optimal implementations to meet diverse client requirements.
Drive participation in all scrum ceremonies, ensuring Agile best practices are effectively followed.
Play a key role in the development team to create high-quality, high-performance, and scalable code.
Evaluate and recommend new technologies, assisting in their adoption by development teams to enhance productivity and scalability.
Collaborate effectively with remote teams in a geographically distributed development model.
Communicate clearly and effectively with business stakeholders, building consensus and resolving queries regarding architecture and design.
Troubleshoot and resolve complex software issues and defects within the Java/Angular/PostgreSQL tech stack and AWS-based infrastructure.
Foster a professional culture within the team, emphasizing ownership, excellence, quality, and value for customers and the business.
Ensure compliance with data privacy, data sovereignty, and regulatory architecture patterns (e.g., regional sharding, zero-data copy patterns).

Customer Focus
Build positive and productive relationships with customers by delivering high-quality solutions that enable business growth.
Serve as the primary contact for customer inquiries and concerns.
Analyze customer requests, set delivery priorities, and adjust schedules to meet timely delivery goals.

Education And Experience
Bachelor's degree in computer science or a related field.
Proven experience working with document management and/or workflow solutions, demonstrating a strong grasp of the subject matter.
Experience with the latest AI tools to enhance developer productivity and creatively approach customer challenges.
Extensive experience in a team environment following Agile software development principles.
Strong interpersonal and written communication skills.
Demonstrated ability to successfully manage multiple tasks simultaneously.
High energy and a self-starter mentality, with a passion for creative problem-solving.

Technical Skills
13+ years of relevant experience is preferred.
Strong Core Java 8+/Java EE design skills, including design patterns.
Significant experience in designing and executing microservices using Spring Boot and other Spring components (JDBC, Batch, Security, Spring Data, etc.).
Proficient in messaging tools such as Active MQ, SQS, SNS, and Distributed Messaging Systems.
Expertise in optimizing SQL queries on PostgreSQL databases.
Strong experience with multithreading, data structures, and concurrency scenarios.
Proficient in using REST APIs, XML, JAXB, and JSON in creating layered systems.
Experience with AWS services (AWS Lambda, AWS CloudWatch, API Gateway, ECS, ECR, SQS, SNS).
Familiarity with OpenAI APIs and agentic AI frameworks (Crew / LangChain / RAG / AutoGen), plus NLP, Java, Python, REST, telemetry, security, and auditability.
Knowledge of data partitioning, GDPR, and the latest UI trends, such as Micro Frontend Architecture, is desirable.

Add-ons
Experience working directly with business and fund formation documents, including organization, formation, AML, regulatory, and legal documents.
Experience working directly with digitizing legal and trading contracts in the Capital Markets space.
Experience working at a Capital Markets or Private Markets institution.

About S&P Global Market Intelligence
At S&P Global Market Intelligence, a division of S&P Global, we understand the importance of accurate, deep, and insightful information. Our team of experts delivers unrivaled insights and leading data and technology solutions, partnering with customers to expand their perspective, operate with confidence, and make decisions with conviction. For more information, visit www.spglobal.com/marketintelligence.

What's In It For You?
Our Purpose
Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People
We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective.
Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all, from finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values
Integrity, Discovery, Partnership
At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits
We take care of you, so you can take care of business. We care about our people. That's why we provide everything you and your career need to thrive at S&P Global.

Our Benefits Include
Health & Wellness: Health care coverage designed for the mind and body.
Flexible Downtime: Generous time off helps keep you energized for your time on.
Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.
For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries

Global Hiring And Opportunity At S&P Global
At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Recruitment Fraud Alert
If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, "pre-employment training" or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here.

Equal Opportunity Employer
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law.
Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf 10 - Officials or Managers (EEO-2 Job Categories-United States of America), IFTECH103.2 - Middle Management Tier II (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning) Job ID: 315950 Posted On: 2025-06-27 Location: Gurgaon, Haryana, India
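The regulatory architecture patterns this role mentions (regional sharding, zero-data copy) usually reduce to routing each tenant's traffic to a datastore pinned to its home jurisdiction rather than replicating data across regions. The posting's stack is Java/Spring Boot; the following is only a language-agnostic sketch of the routing idea in Python, with all tenant names, regions, and DSNs invented for illustration.

```python
# Map each tenant to the region where its data must legally reside (illustrative).
TENANT_HOME_REGION = {
    "acme-eu": "eu-west-1",
    "acme-us": "us-east-1",
    "acme-in": "ap-south-1",
}

# One database endpoint per regional shard (placeholder DSNs).
REGION_DSN = {
    "eu-west-1": "postgresql://docs-eu.internal.example/docsolutions",
    "us-east-1": "postgresql://docs-us.internal.example/docsolutions",
    "ap-south-1": "postgresql://docs-in.internal.example/docsolutions",
}


def dsn_for_tenant(tenant_id: str) -> str:
    """Resolve the regional shard for a tenant; never fall back to another
    region, so data is read and written only in its home jurisdiction."""
    try:
        region = TENANT_HOME_REGION[tenant_id]
    except KeyError:
        raise LookupError(f"unknown tenant {tenant_id!r}; refusing to guess a region")
    return REGION_DSN[region]


if __name__ == "__main__":
    print(dsn_for_tenant("acme-eu"))  # resolves to the EU shard's DSN
```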

Posted 1 week ago

Apply