0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Dive in and do the best work of your career at DigitalOcean. Journey alongside a strong community of top talent who are relentless in their drive to build the simplest scalable cloud. If you have a growth mindset, naturally like to think big and bold, and are energized by the fast-paced environment of a true industry disruptor, you’ll find your place here. We value winning together — while learning, having fun, and making a profound difference for the dreamers and builders in the world. We are looking for an experienced database engineer with an operations background in building sustainable solutions for data storage and streaming platforms. Our team’s mission statement is to “provide tools and expertise to solve common operational problems, accelerating and simplifying product development.” As part of the team, you’ll be working with a variety of data-related technologies, including MySQL, ClickHouse, Kafka, and Redis, in an effort to transform these technologies into platform services. NOTE: this is not a ‘data scientist’ role; rather, this role is to help design and build datastore-related platforms for internal stakeholder groups within DigitalOcean. See more below for role expectations. This is an opportunity to build the services and systems that will accelerate the development of DigitalOcean’s cloud features. Services will provide highly available, operationally elegant solutions that serve as a foundation for a growing product base and a global audience. This is a high-impact role and you’ll be working with a large variety of product engineering teams across the company. What You’ll Be Doing Administration, operations, and performance tuning of Vitess-managed MySQL datastores, with a focus on large-scale, sharded environments Architecting new Vitess-based MySQL database infrastructure on bare metal Delivering managed data platform solutions as a service that facilitate adoption and offer operational elegance Working closely with product engineering and infrastructure teams to drive adoption of services throughout the company Instrumenting and monitoring services to ensure operational performance Creating tooling and automation to reduce operational burdens Establishing best practices for development, deployment, and operations Driving adoption of services throughout the company Interacting with developers and teams to resolve site and database issues What You'll Add To DigitalOcean Experience supporting MySQL (ideally with Vitess or other sharding solutions) in a production environment, with in-depth knowledge of backups, high availability, sharding, and performance tuning Distinguished track record developing and automating platform solutions that serve the needs of other engineering teams Experience with other data technologies such as Kafka and Redis Fluency in SQL, Python, Bash, or other scripting languages Experience with Linux performance troubleshooting Experience with configuration management tooling such as Chef & Ansible What We’d Love You To Have An understanding of using ProxySQL and Kubernetes Familiarity with continuous integration tools such as Concourse and GitHub Actions Some familiarity with Go readability Passion for production engineering done in a resilient fashion You have a passion for not repeating yourself (DRY) by way of automation What Will Not Be Expected From You Demonstrated expertise in being a ‘data scientist’ - this role has a much stronger production engineering focus Crunching mundane support tickets day over day - be the Automator Following a 
lengthy and strict product roadmap - engineers wear product hats as needed and help define what platform gets built Why You’ll Like Working for DigitalOcean We innovate with purpose. You’ll be a part of a cutting-edge technology company with an upward trajectory that is proud to simplify cloud and AI so builders can spend more time creating software that changes the world. As a member of the team, you will be a Shark who thinks big, bold, and scrappy, like an owner with a bias for action and a powerful sense of responsibility for customers, products, employees, and decisions. We prioritize career development. At DO, you’ll do the best work of your career. You will work with some of the smartest and most interesting people in the industry. We are a high-performance organization that will always challenge you to think big. Our organizational development team will provide you with resources to ensure you keep growing. We provide employees with reimbursement for relevant conferences, training, and education. All employees have access to LinkedIn Learning's 10,000+ courses to support their continued growth and development. We care about your well-being. Regardless of your location, we will provide you with a competitive array of benefits to support you, from our Employee Assistance Program to Local Employee Meetups to our flexible time off policy, to name a few. While the philosophy around our benefits is the same worldwide, specific benefits may vary based on local regulations and preferences. We reward our employees. The salary range for this position is based on market data, relevant years of experience, and skills. You may qualify for a bonus in addition to base salary; bonus amounts are determined based on company and individual performance. We also provide equity compensation to eligible employees, including equity grants upon hire and the option to participate in our Employee Stock Purchase Program. We value diversity and inclusion. We are an equal-opportunity employer and recognize that diversity of thought and background builds stronger teams and products to serve our customers. We approach diversity and inclusion seriously and thoughtfully. We do not discriminate on the basis of race, religion, color, ancestry, national origin, caste, sex, sexual orientation, gender, gender identity or expression, age, disability, medical condition, pregnancy, genetic makeup, marital status, or military service. This role is located in Hyderabad, India.
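For a flavor of the operational tooling this posting describes, here is a minimal Python sketch of a MySQL replica health probe; it assumes the pymysql package, and the host, user, and password are placeholders rather than DigitalOcean infrastructure.

```python
# Illustrative only: a tiny replica-health probe of the kind this role might automate.
# Assumes the pymysql package; host/user/password are placeholders, not real endpoints.
import pymysql


def replica_lag_seconds(host: str, user: str, password: str):
    """Return replication lag in seconds, or None if the node is not a replica."""
    conn = pymysql.connect(host=host, user=user, password=password,
                           cursorclass=pymysql.cursors.DictCursor)
    try:
        with conn.cursor() as cur:
            # MySQL 8.0.22+ uses SHOW REPLICA STATUS; older servers need SHOW SLAVE STATUS.
            cur.execute("SHOW REPLICA STATUS")
            row = cur.fetchone()
            if not row:
                return None
            # The lag column was also renamed between versions, so check both spellings.
            return row.get("Seconds_Behind_Source", row.get("Seconds_Behind_Master"))
    finally:
        conn.close()


if __name__ == "__main__":
    print(replica_lag_seconds("replica.example.internal", "monitor", "s3cret"))
```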
Posted 2 weeks ago
6.0 - 10.0 years
0 Lacs
chennai, tamil nadu
On-site
You are a Database Performance & Data Modeling Specialist with a primary focus on optimizing schema structures, tuning SQL queries, and ensuring that data models are well-prepared for high-volume, real-time systems. Your responsibilities include designing data models that balance performance, flexibility, and scalability, conducting performance benchmarking to identify bottlenecks and propose improvements, analyzing slow queries to recommend indexing, denormalization, or schema revisions, monitoring query plans, memory usage, and caching strategies for cloud databases, and collaborating with developers and analysts to optimize application-to-database workflows. You must possess strong experience in database performance tuning, especially in GCP platforms like BigQuery, CloudSQL, and AlloyDB. Proficiency in schema refactoring, partitioning, clustering, and sharding techniques is essential. Familiarity with profiling tools, slow query logs, and GCP monitoring solutions is required, along with SQL optimization skills including query rewriting and execution plan analysis. Preferred skills include a background in mutual fund or high-frequency financial data modeling, hands-on experience with relational databases like PostgreSQL, MySQL, distributed caching, materialized views, and hybrid model structures. Soft skills that are crucial for this role include being precision-driven with an analytical mindset, a clear communicator with attention to detail, and possessing strong problem-solving and troubleshooting abilities. By joining this role, you will have the opportunity to shape high-performance data systems from the ground up, play a critical role in system scalability and responsiveness, and work with high-volume data in a cloud-native enterprise setting.
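As a rough illustration of the partitioning and clustering techniques mentioned above, the following Python sketch creates a date-partitioned, clustered BigQuery table with the google-cloud-bigquery client; the project, dataset, and column names are hypothetical.

```python
# Illustrative sketch: create a date-partitioned, clustered BigQuery table with the
# google-cloud-bigquery client. Project, dataset, and column names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

schema = [
    bigquery.SchemaField("fund_id", "STRING"),
    bigquery.SchemaField("nav", "NUMERIC"),
    bigquery.SchemaField("trade_date", "DATE"),
]

table = bigquery.Table("my-project.analytics.fund_nav", schema=schema)
# Partition by trade date so queries that filter on a date range prune whole partitions.
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY, field="trade_date"
)
# Cluster by fund_id so lookups for a single fund scan fewer blocks.
table.clustering_fields = ["fund_id"]

client.create_table(table, exists_ok=True)
```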
Posted 2 weeks ago
6.0 - 10.0 years
0 Lacs
chennai, tamil nadu
On-site
As a Data Architect specializing in OLTP & OLAP Systems, you will play a crucial role in designing, optimizing, and governing data models for both OLTP and OLAP environments. Your responsibilities will include architecting end-to-end data models across different layers, defining conceptual, logical, and physical data models, and collaborating closely with stakeholders to capture functional and performance requirements. You will need to optimize database structures for real-time and analytical workloads, enforce data governance, security, and compliance best practices, and enable schema versioning, lineage tracking, and change control. Additionally, you will review query plans and indexing strategies to enhance performance. To excel in this role, you must possess a deep understanding of OLTP and OLAP systems architecture, along with proven experience in GCP databases such as BigQuery, CloudSQL, and AlloyDB. Your expertise in database tuning, indexing, sharding, and normalization/denormalization will be critical, as well as proficiency in data modeling tools like DBSchema, ERWin, or equivalent. Familiarity with schema evolution, partitioning, and metadata management is also required. Experience in the BFSI or mutual fund domain, knowledge of near real-time reporting and streaming analytics architectures, and familiarity with CI/CD for database model deployments are preferred skills that will set you apart. Strong communication, stakeholder management, strategic thinking, and the ability to mentor data modelers and engineers are essential soft skills for success in this position. By joining our team, you will have the opportunity to own the core data architecture for a cloud-first enterprise, bridge business goals with robust data design, and work with modern data platforms and tools. If you are looking to make a significant impact in the field of data architecture, this role is perfect for you.
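To make the query-plan review mentioned above concrete, here is a small, hedged Python sketch that pulls an execution plan from a PostgreSQL-compatible database (for example, CloudSQL for PostgreSQL) using psycopg2; the connection details and query are placeholders.

```python
# Illustrative sketch: pull an execution plan from a PostgreSQL-compatible database
# (e.g., CloudSQL for PostgreSQL) with psycopg2. Connection details and query are placeholders.
import psycopg2

conn = psycopg2.connect(host="cloudsql-proxy.local", dbname="appdb",
                        user="analyst", password="s3cret")
query = ("SELECT account_id, SUM(amount) FROM transactions "
         "WHERE txn_date >= %s GROUP BY account_id")

with conn.cursor() as cur:
    # EXPLAIN ANALYZE actually runs the query and reports real timings and buffer usage.
    cur.execute("EXPLAIN (ANALYZE, BUFFERS) " + query, ("2024-01-01",))
    for (line,) in cur.fetchall():
        print(line)

conn.close()
```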
Posted 2 weeks ago
6.0 - 10.0 years
0 Lacs
chennai, tamil nadu
On-site
We are looking for a Data Modelling Consultant with 6 to 9 years of experience to work in our Chennai office. As a Data Modelling Consultant, your role will involve providing end-to-end modeling support for OLTP and OLAP systems hosted on Google Cloud. Your responsibilities will include designing and validating conceptual, logical, and physical models for cloud databases, translating requirements into efficient schema designs, and supporting data model reviews, tuning, and implementation. You will also guide teams on best practices for schema evolution, indexing, and governance to enable usage of models in real-time applications and analytics platforms. To succeed in this role, you must have strong experience in modeling across OLTP and OLAP systems, hands-on experience with GCP tools like BigQuery, CloudSQL, and AlloyDB, and the ability to understand business rules and translate them into scalable structures. Additionally, familiarity with partitioning, sharding, materialized views, and query optimization is essential. Preferred skills for this role include experience with BFSI or financial domain data schemas and familiarity with modeling methodologies and standards such as 3NF and star schema. Soft skills like excellent stakeholder communication, collaboration, strategic thinking, and attention to scalability are also important. Joining this role will allow you to deliver advisory value across critical data initiatives, influence the modeling direction for a data-driven organization, and be at the forefront of GCP-based enterprise data transformation.
Posted 2 weeks ago
7.0 years
0 Lacs
Greater Kolkata Area
Remote
About Workato Workato transforms technology complexity into business opportunity. As the leader in enterprise orchestration, Workato helps businesses globally streamline operations by connecting data, processes, applications, and experiences. Its AI-powered platform enables teams to navigate complex workflows in real-time, driving efficiency and agility. Trusted by a community of 400,000 global customers, Workato empowers organizations of every size to unlock new value and lead in today’s fast-changing world. Learn how Workato helps businesses of all sizes achieve more at workato.com. Why join us? Ultimately, Workato believes in fostering a flexible, trust-oriented culture that empowers everyone to take full ownership of their roles. We are driven by innovation and looking for team players who want to actively build our company. But we also believe in balancing productivity with self-care. That’s why we offer all of our employees a vibrant and dynamic work environment along with a multitude of benefits they can enjoy inside and outside of their work lives. If this sounds right up your alley, please submit an application. We look forward to getting to know you! Also, Feel Free To Check Out Why Business Insider named us an “enterprise startup to bet your career on” Forbes’ Cloud 100 recognized us as one of the top 100 private cloud companies in the world Deloitte Tech Fast 500 ranked us as the 17th fastest growing tech company in the Bay Area, and 96th in North America Quartz ranked us the #1 best company for remote workers Responsibilities We are looking for an exceptional Senior Infrastructure Engineer with experience in building high-performing, scalable, enterprise-grade applications to join our growing team. In this role, you will be responsible for building a high-performance queuing/storage engine. You will work in a polyglot environment where you can learn new languages and technologies whilst working with an enthusiastic team. 
You will also be responsible for: Software Engineering Design and develop high-volume, low-latency applications for mission-critical systems and deliver high availability and performance Contribute to all phases of the development life cycle Write well-designed, testable, efficient code Evaluate and propose improvements to existing systems Support continuous improvement by investigating alternatives and technologies and presenting these for architectural review Infrastructure Engineering Maintain and evolve application cloud infrastructure (AWS) Maintain and evolve Kubernetes clusters Infrastructure hardening according to compliance and security requirements Maintenance and development of monitoring, logging, tracing, and alerting solutions OpenSearch Expertise Experience scaling OpenSearch clusters to handle heavy query and indexing workloads, including optimizing bulk indexing operations and query throughput Proficiency in implementing and managing effective sharding strategies to balance performance, storage, and recovery needs Advanced knowledge of OpenSearch performance tuning, including JVM settings, field mappings, and cache optimization Expertise in designing robust disaster recovery solutions with cross-cluster replication, snapshots, and restoration procedures Experience implementing and optimizing vector search capabilities for ML applications, including k-NN algorithms and approximate nearest neighbor (ANN) search Knowledge of custom OpenSearch plugin development for specialized indexing or query requirements Hands-on experience deploying and managing self-hosted OpenSearch clusters in Kubernetes environments Familiarity with monitoring OpenSearch performance metrics and implementing automated scaling solutions Requirements Qualifications / Experience / Technical Skills BS/MS degree in Computer Science, Engineering or a related subject 7+ years of industry experience Experience working with public cloud infrastructure providers (AWS/Azure/Google Cloud) Experience with Terraform, Docker A hands-on approach to implementing solutions Good understanding of Linux networking and security Exceptional understanding of Kubernetes concepts Experience with Golang/Python/Java/Ruby (any) and databases such as PostgreSQL Contributions to open source projects are a plus Soft Skills / Personal Characteristics Communicate in English with colleagues and customers
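As one hedged example of the sharding and vector-search topics listed above, this Python sketch uses the opensearch-py client to create an index with explicit shard settings and a k-NN vector field; the hosts, credentials, index name, and embedding dimension are assumptions.

```python
# Illustrative sketch: create an OpenSearch index with explicit shard settings and a
# k-NN vector field using opensearch-py. Hosts, credentials, names, and dimensions are assumptions.
from opensearchpy import OpenSearch

client = OpenSearch(
    hosts=[{"host": "opensearch.internal", "port": 9200}],
    http_auth=("admin", "admin"),
    use_ssl=True,
    verify_certs=False,  # acceptable for a local sandbox; use proper certs in production
)

index_body = {
    "settings": {
        "index": {
            "number_of_shards": 6,    # spread indexing/query load across nodes
            "number_of_replicas": 1,  # one replica per shard for resilience
            "knn": True,              # enable the k-NN plugin for this index
        }
    },
    "mappings": {
        "properties": {
            "title": {"type": "text"},
            "embedding": {
                "type": "knn_vector",
                "dimension": 768,
                "method": {"name": "hnsw", "space_type": "cosinesimil", "engine": "nmslib"},
            },
        }
    },
}

client.indices.create(index="docs-v1", body=index_body)
```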
Posted 2 weeks ago
4.0 years
1 - 5 Lacs
Noida
On-site
Position: Database Administrator About Wildnet Technologies: Wildnet Technologies is an award-winning White Label Digital Marketing and IT Services company with a track record of helping businesses and Google Partner Agencies achieve their goals. We offer a comprehensive range of high-quality Digital Marketing Services and On-Demand Technology Resources. With over 12,000 successful projects delivered to date, our team of 300+ professionals is headquartered in India and serves clients in the United States, Canada, Australia, and the United Kingdom. Our expertise includes SEO, Paid Search, Paid Social Services, programmatic advertising, and more. Job Responsibilities: Deploy, monitor, and manage databases across both production and pre-production environments. Automate infrastructure provisioning and configuration utilizing Terraform and Ansible. Manage infrastructure on Linux-based systems such as RHEL 9.x. Monitor system health, establish comprehensive alerting, and respond to incidents proactively to minimize downtime. Collaborate with DevOps and Data Engineering teams to seamlessly align infrastructure with MLOps workflows. Implement robust security controls, including data encryption, access management, and comprehensive auditing to protect sensitive information. Troubleshoot and resolve performance issues within our database systems, ensuring optimal operation. Required Skills: PostgreSQL: In-depth knowledge of administration, performance tuning, replication, backup, and recovery. MariaDB/MySQL: Proficiency in managing these relational databases, including high availability solutions, schema design, query optimization, and user management. MongoDB: Experience with NoSQL database administration, including sharding, replica sets, indexing, and performance monitoring. MS SQL Server: Familiarity with managing SQL Server environments, including maintenance plans, security, and troubleshooting. AWS RDS/Aurora: Strong practical experience with Amazon Relational Database Service (RDS) and Aurora, encompassing instance provisioning, scaling, monitoring, and backup strategies. 4+ years of experience as a Database Administrator or DevOps Engineer with a focus on Linux OS. Extensive experience with Infrastructure as Code (IaC) tools, specifically Terraform and Ansible. Comprehensive knowledge of networking, security, and performance tuning within distributed environments. Proven experience with monitoring tools like DataDog, Splunk, SignalFx, and PagerDuty. Deep knowledge and practical experience with the AWS cloud platform. Familiarity with other cloud platforms (e.g., GCP, Azure, or IBM Cloud) is a plus. Good understanding of Docker and container technologies. Good to Have: Certifications in Kubernetes (CKA/CKAD), Terraform (HashiCorp Certified), or Linux (RHCE/LPIC). Exposure to CI/CD pipelines, GitOps workflows, and tools like ArgoCD or Flux. Why Join Wildnet - Established Industry Leader: 15+ years of expertise in digital marketing and IT services; among the pioneers in India's digital space. Great Place to Work® Certified: Recognized for fostering a flexible, positive, and people-first work culture. Learning & Growth: Fast-paced environment with ongoing training, career advancement, and leadership development opportunities. Health & Wellness Benefits: Comprehensive insurance and wellness support for employees and their families. Work-Life Balance: Flexible Working Hours, a 5-day work week, and a generous leave policy to support personal well-being. 
Exposure to Top Clients: Work on diverse projects with leading global brands across industries
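For a flavor of the AWS RDS work described in this posting, here is a short, hedged boto3 sketch that lists RDS instances and takes a manual snapshot before a risky change; the region and instance identifiers are placeholders.

```python
# Illustrative sketch: list RDS instances and take a manual snapshot before a risky change,
# using boto3. Region and identifiers are placeholders.
import boto3

rds = boto3.client("rds", region_name="ap-south-1")

# Inventory: engine and status for every instance in the account/region.
for db in rds.describe_db_instances()["DBInstances"]:
    print(db["DBInstanceIdentifier"], db["Engine"], db["DBInstanceStatus"])

# Safety net: snapshot a specific instance before applying schema or parameter changes.
rds.create_db_snapshot(
    DBInstanceIdentifier="prod-postgres",
    DBSnapshotIdentifier="prod-postgres-pre-change-20250101",
)
```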
Posted 2 weeks ago
6.0 - 10.0 years
0 Lacs
chennai, tamil nadu
On-site
As a Data Modeller specializing in GCP and Cloud Databases, you will play a crucial role in designing and optimizing data models for both OLTP and OLAP systems. Your expertise in cloud-based databases, data architecture, and modeling will be essential in collaborating with engineering and analytics teams to ensure efficient operational systems and real-time reporting pipelines. You will be responsible for designing conceptual, logical, and physical data models tailored for OLTP and OLAP systems. Your focus will be on developing and refining models that support performance-optimized cloud data pipelines, implementing models in BigQuery, CloudSQL, and AlloyDB, as well as designing schemas with indexing, partitioning, and data sharding strategies. Translating business requirements into scalable data architecture and schemas will be a key aspect of your role, along with optimizing for near real-time ingestion, transformation, and query performance. You will utilize tools like DBSchema for collaborative modeling and documentation while creating and maintaining metadata and documentation around models. In terms of required skills, hands-on experience with GCP databases (BigQuery, CloudSQL, AlloyDB), a strong understanding of OLTP and OLAP systems, and proficiency in database performance tuning are essential. Additionally, familiarity with modeling tools such as DBSchema or ERWin, as well as proficiency in SQL, schema definition, and normalization/denormalization techniques, will be beneficial. Preferred skills include functional knowledge of the Mutual Fund or BFSI domain, experience integrating with cloud-native ETL and data orchestration pipelines, and familiarity with schema version control and CI/CD in a data context. In addition to technical skills, soft skills such as strong analytical and communication abilities, attention to detail, and a collaborative approach across engineering, product, and analytics teams are highly valued. Joining this role will provide you with the opportunity to work on enterprise-scale cloud data architectures, drive performance-oriented data modeling for advanced analytics, and collaborate with high-performing cloud-native data teams.
Posted 2 weeks ago
6.0 - 9.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Data Modeller – GCP & Cloud Databases Location: Chennai (Work From Office) Experience Required: 6 to 9 Years Role Overview We are looking for a hands-on Data Modeller with strong expertise in cloud-based databases, data architecture, and modeling for OLTP and OLAP systems. You will work closely with engineering and analytics teams to design and optimize conceptual, logical, and physical data models, supporting both operational systems and near real-time reporting pipelines. Key Responsibilities Design conceptual, logical, and physical data models for OLTP and OLAP systems Develop and refine models that support performance-optimized cloud data pipelines Collaborate with data engineers to implement models in BigQuery, CloudSQL, and AlloyDB Design schemas and apply indexing, partitioning, and data sharding strategies Translate business requirements into scalable data architecture and schemas Optimize for near real-time ingestion, transformation, and query performance Use tools such as DBSchema or similar for collaborative modeling and documentation Create and maintain metadata and documentation around models Must-Have Skills Hands-on experience with GCP databases: BigQuery, CloudSQL, AlloyDB Strong understanding of OLTP vs OLAP systems and respective design principles Experience in database performance tuning: indexing, sharding, and partitioning Skilled in modeling tools such as DBSchema, ERWin, or similar Understanding of variables that impact performance in real-time/near real-time systems Proficient in SQL, schema definition, and normalization/denormalization techniques Preferred Skills Functional knowledge of the Mutual Fund or BFSI domain Experience integrating with cloud-native ETL and data orchestration pipelines Familiarity with schema version control and CI/CD in a data context Soft Skills Strong analytical and communication skills Detail-oriented and documentation-focused Ability to collaborate across engineering, product, and analytics teams Why Join Work on enterprise-scale cloud data architectures Drive performance-first data modeling for advanced analytics Collaborate with high-performing cloud-native data teams Skills: olap,normalization,indexing,gcp databases,sharding,olap systems,modeling,schema definition,sql,data,oltp systems,alloydb,erwin,modeling tools,bigquery,database performance tuning,databases,partitioning,denormalization,dbschema,cloudsql
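As a small, hedged illustration of performance-aware modeling on BigQuery, the sketch below uses a dry-run query to estimate scanned bytes before a partitioning or clustering change is rolled out; the dataset and column names are hypothetical.

```python
# Illustrative sketch: use a BigQuery dry run to estimate bytes scanned before and after a
# partitioning/clustering change. Dataset and column names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()
job_config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)

sql = """
    SELECT fund_id, SUM(nav) AS total_nav
    FROM analytics.fund_nav
    WHERE trade_date = '2024-01-31'
    GROUP BY fund_id
"""

job = client.query(sql, job_config=job_config)  # dry run: nothing is executed or billed
print(f"Query would scan {job.total_bytes_processed / 1e9:.2f} GB")
```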
Posted 2 weeks ago
0.0 - 10.0 years
0 Lacs
Pune, Maharashtra
On-site
Location: Pune, Maharashtra Experience: 5–10 years Qualification: Bachelor’s or Master’s degree in Computer Science, IT, or related field Requirements: Design and develop logical and physical database models aligned with business needs. Implement and configure databases, tables, views, and stored procedures. Monitor and optimize database performance, tuning queries and resolving bottlenecks. Implement database security, including access controls, encryption, and compliance measures. Develop and maintain backup and disaster recovery strategies, ensuring data continuity. Design and implement data integration mechanisms across systems, ensuring consistency. Plan and execute strategies for scalability and high availability (sharding, replication, failover). Lead data migration projects, validating data integrity post-migration. Create and maintain detailed documentation for schemas, data flows, and specifications. Collaborate with developers, system admins, and stakeholders to align database architecture. Troubleshoot and resolve database issues related to errors, performance, and data inconsistencies. Strong knowledge of RDBMS (MySQL, PostgreSQL, SQL Server) and NoSQL (MongoDB, Cassandra). Proficient in SQL, database query optimization, indexing, and ETL processes. Experience with cloud database platforms like AWS RDS, Azure SQL, or Google Cloud SQL. Excellent problem-solving and communication skills. Relevant database certifications are a plus. If you have a deep understanding of database technologies, a passion for data integrity and performance, and a knack for designing efficient data solutions, we encourage you to apply. Join our team and play a vital role in managing our data infrastructure to support the organization's data-driven initiatives. Please send an updated copy of your CV to hrdept@cstech.ai.
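To illustrate the replication and high-availability monitoring mentioned above, here is a minimal psycopg2 sketch that reports replay lag on a PostgreSQL standby; the host and credentials are placeholders.

```python
# Illustrative sketch: report replication replay lag on a PostgreSQL standby with psycopg2.
# Host and credentials are placeholders.
import psycopg2

conn = psycopg2.connect(host="standby.db.internal", dbname="appdb",
                        user="monitor", password="s3cret")
with conn.cursor() as cur:
    # On a standby, pg_last_xact_replay_timestamp() marks the last transaction replayed.
    cur.execute("SELECT EXTRACT(EPOCH FROM (now() - pg_last_xact_replay_timestamp()))")
    (lag_seconds,) = cur.fetchone()
    print(f"replica lag: {lag_seconds:.1f}s" if lag_seconds is not None else "not a standby")
conn.close()
```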
Posted 2 weeks ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About the Company We are seeking a skilled and experienced MongoDB Developer with 5–8 years of expertise in designing, developing, and maintaining scalable NoSQL database solutions. The ideal candidate will have deep knowledge of MongoDB internals, performance tuning, data modeling, and integration with backend applications. You will play a key role in the development and optimization of data-driven applications across various domains. About the Role The MongoDB Developer will be responsible for designing and implementing MongoDB data models based on application requirements and performance considerations. Responsibilities Design and implement MongoDB data models based on application requirements and performance considerations. Develop efficient queries and aggregation pipelines for complex datasets. Integrate MongoDB with backend technologies (Node.js, Java, Python, etc.). Monitor, troubleshoot, and optimize MongoDB performance, including indexing and replication strategies. Collaborate with software engineers, data architects, and DevOps teams to deliver robust and scalable solutions. Maintain and enhance existing database schemas and documents. Implement backup and recovery strategies, and participate in disaster recovery planning. Ensure data security and compliance with organizational policies. Write clean, maintainable code and maintain detailed documentation. Contribute to continuous improvement by identifying and promoting best practices in NoSQL development. Qualifications 5 to 8 years of professional experience with MongoDB development. Strong knowledge of MongoDB architecture, aggregation framework, and performance tuning. Experience with schema design, data modeling (both embedded and normalized), and indexing strategies. Hands-on experience integrating MongoDB with one or more backend programming languages (e.g., Node.js, Java, Python, Go). Familiarity with MongoDB Atlas and cloud deployment strategies. Proficient in writing JavaScript or shell scripts for MongoDB automation. Experience with version control systems like Git and CI/CD pipelines. Understanding of data replication, sharding, and high availability in MongoDB. Good problem-solving skills and the ability to work in an Agile environment. Preferred Skills Bachelor’s or Master’s degree in Computer Science, Information Systems, or a related field. Experience with additional NoSQL technologies (e.g., Redis, Cassandra, Elasticsearch) is a plus. MongoDB certification is an added advantage. Knowledge of containerization (Docker, Kubernetes) and DevOps practices. Pay range and compensation package Salary and compensation details will be discussed during the interview process. Equal Opportunity Statement We are committed to diversity and inclusivity in our hiring practices.
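For illustration of the aggregation-pipeline and indexing work this posting describes, a short pymongo sketch follows; the connection string, database, collection, and field names are hypothetical.

```python
# Illustrative sketch: a compound index plus an aggregation pipeline with pymongo.
# Connection string, database, collection, and field names are hypothetical.
from pymongo import MongoClient, ASCENDING, DESCENDING

client = MongoClient("mongodb://localhost:27017")
orders = client["shop"]["orders"]

# Compound index supporting queries that filter by customer and sort by time.
orders.create_index([("customer_id", ASCENDING), ("created_at", DESCENDING)])

# Top 10 paying customers: match -> group -> sort -> limit.
pipeline = [
    {"$match": {"status": "PAID"}},
    {"$group": {"_id": "$customer_id", "total": {"$sum": "$amount"}}},
    {"$sort": {"total": -1}},
    {"$limit": 10},
]
for doc in orders.aggregate(pipeline):
    print(doc)
```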
Posted 2 weeks ago
13.0 years
0 Lacs
Gurugram, Haryana, India
Remote
About The Role Grade Level (for internal use): 13 Job Title: Engineering Lead – Document Solution (Director Level) The Team: We are building a very exciting Document Solutions offering that leverages Document Digitization and Agreement Intelligence to dramatically reduce the time required to manage these documents effectively while unlocking the vital data you need to generate deeper insights and enhance decision quality. Our solution includes industry-leading modules and tools widely adopted by financial institutions. This strategic initiative uses sophisticated AI models trained to extract data from organization, formation, AML, regulatory, and legal documents. Document Digitization unlocks critical information for reuse across operations, significantly reduces the need for manual review, and enables organizations to adopt scalable processes. This solution is going to be integrated across the regulatory & compliance suite of products such as Counterparty Manager, ISDA Amend, Outreach360, Request for Amendment, KYC, and Tax Utility. We leverage a mature Java/Spring Boot-based tech stack, supported by AWS infrastructure, along with the latest advancements in the industry to deliver this solution over a multi-year span. What’s In It For You Build a next-generation product that customers can rely on for informed business decisions, enhanced customer experiences, and scalability. Develop your skills by working on an enterprise-level product focused on client lifecycle management and associated new technologies. Gain experience with modern, cutting-edge cloud, AI, and platform engineering technologies. Collaborate directly with clients, commercial teams, product managers, and tech leadership toward the common goal of achieving business success. Build a rewarding career with a global company. Duties & Accountabilities Lead a global engineering team across backend, front-end, data, and AI functions, with a focus on modern architectures, AI-driven automation, and cross-jurisdictional data compliance. Design and architect solutions for complex business challenges in the document solutions space, utilizing your extensive experience with the Java/Spring Boot/Angular/PostgreSQL tech stack and AWS infrastructure. Implement agentic AI and LLM-based services to streamline onboarding, document processing, and exception handling. Provide guidance and technical leadership to development teams on best practices, coding standards, and software design principles, ensuring high-quality outcomes. Demonstrate a deep understanding of existing system architecture (spanning multiple systems) and creatively envision optimal implementations to meet diverse client requirements. Drive participation in all scrum ceremonies, ensuring Agile best practices are effectively followed. Play a key role in the development team to create high-quality, high-performance, and scalable code. Evaluate and recommend new technologies, assisting in their adoption by development teams to enhance productivity and scalability. Collaborate effectively with remote teams in a geographically distributed development model. Communicate clearly and effectively with business stakeholders, building consensus and resolving queries regarding architecture and design. Troubleshoot and resolve complex software issues and defects within the Java/Angular/PostgreSQL tech stack and AWS-based infrastructure. Foster a professional culture within the team, emphasizing ownership, excellence, quality, and value for customers and the business. 
Ensure compliance with data privacy, data sovereignty, and regulatory architecture patterns (e.g., regional sharding, zero-data copy patterns). Customer Focus Build positive and productive relationships with customers by delivering high-quality solutions that enable business growth. Serve as the primary contact for customer inquiries and concerns. Analyze customer requests, set delivery priorities, and adjust schedules to meet timely delivery goals. Education And Experience Bachelor’s degree in computer science or a related field. Proven experience working with document management and/or workflow solutions, demonstrating a strong grasp of the subject matter. Experience with the latest AI tools to enhance developer productivity and creatively approach customer challenges. Extensive experience in a team environment following Agile software development principles. Strong interpersonal and written communication skills. Demonstrated ability to successfully manage multiple tasks simultaneously. High energy and a self-starter mentality, with a passion for creative problem-solving. Technical Skills 13+ years of relevant experience is preferred Strong Core Java 8+/Java EE design skills, including design patterns. Significant experience in designing and executing microservices using Spring Boot and other Spring components (JDBC, Batch, Security, Spring Data, etc.). Proficient in messaging tools such as Active MQ, SQS, SNS, and Distributed Messaging Systems. Expertise in optimizing SQL queries on PostgreSQL databases. Strong experience with multithreading, data structures, and concurrency scenarios. Proficient in using REST APIs, XML, JAXB, and JSON in creating layered systems. Experience with AWS Services (AWS Lambda, AWS CloudWatch, API Gateway, ECS, ECR, SQS, SNS). Familiarity with Open AI APIs, Agentic AI – Crew / LangChain / RAG / AutoGen / NLP / Java / Python / REST / Telemetry / Security / Auditability. Knowledge of data partitioning, GDPR, and the latest UI trends, such as Micro Frontend Architecture, is desirable. Add-ons Experience working directly with Business and Fund Formation documents including: organization, formation, AML, regulatory and legal documents Experience working directly with digitizing Legal and Trading Contracts in the Capital Markets space Experience working at Capital Markets or Private Markets institution. About S&P Global Market Intelligence At S&P Global Market Intelligence, a division of S&P Global we understand the importance of accurate, deep and insightful information. Our team of experts delivers unrivaled insights and leading data and technology solutions, partnering with customers to expand their perspective, operate with confidence, and make decisions with conviction. For more information, visit www.spglobal.com/marketintelligence. What’s In It For You? Our Purpose Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. 
Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our Benefits Include Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring And Opportunity At S&P Global At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Recruitment Fraud Alert If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here. Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. 
Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf 10 - Officials or Managers (EEO-2 Job Categories-United States of America), IFTECH103.2 - Middle Management Tier II (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning) Job ID: 315950 Posted On: 2025-06-27 Location: Gurgaon, Haryana, India
Posted 2 weeks ago
4.0 years
0 Lacs
India
On-site
Required Skills & Qualifications: 4+ years of backend development experience with Node.js. Strong expertise in MongoDB (aggregation pipelines, indexing, sharding, etc.). Good understanding of relational databases (SQL) and schema optimization. Experience with Angular/React.js is an added advantage. Ensure code quality: ensure that the code is of high quality, meets coding standards, and is well documented. Resolve technical issues: expertise in code review; identify and resolve technical issues that arise during development and guide the team to create effective and reusable code. Sound knowledge of creating a semi-automated approach for code review, functional checks, file handling operations, database queries, global variables, etc. Must have strong debugging and troubleshooting skills, with a solid understanding of logging and exception handling. Knowledge of AI/ML/computer vision-based applications. Strong communication skills Job Types: Full-time, Permanent Location Type: In-person Schedule: Day shift Ability to commute/relocate: Palayam, Thiruvananthapuram, Kerala: Reliably commute or planning to relocate before starting work (Required) Application Question(s): Current Monthly Salary? Least Expected Monthly Salary? How early can you join? Experience: Node.js: 4 years (Required) MongoDB: 4 years (Required) MySQL: 4 years (Required) Microservices: 4 years (Preferred) React.js: 1 year (Preferred) Work Location: In person Speak with the employer +91 9072049595
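As a hedged sketch of the sharding expertise listed above, the following pymongo snippet enables sharding on a database and shards a collection on a hashed key; it assumes a connection to a mongos router, and the names are placeholders.

```python
# Illustrative sketch: enable sharding on a database and shard a collection on a hashed key.
# Assumes a connection to a mongos router; names are placeholders.
from pymongo import MongoClient

client = MongoClient("mongodb://mongos.internal:27017")

client.admin.command("enableSharding", "shop")
# A hashed shard key spreads writes evenly across shards (at the cost of ranged queries).
client.admin.command("shardCollection", "shop.orders", key={"customer_id": "hashed"})
```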
Posted 2 weeks ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions. We are currently seeking an experienced professional to join our team in the role of Senior Database Administrator. In this role, you will: Provide 24x7 production support for MongoDB and Db2 databases, which is the primary responsibility of this role. The ideal candidate should have 5+ years of overall experience and expert-level knowledge of MongoDB technology (optionally IBM Db2 LUW), and should be capable of handling complex issues independently, providing resolutions within agreed SLAs, ensuring customer delight, and always adhering to HSBC policies and procedures. Candidates with creative thinking, a willingness to bring innovation, and a passion for working on challenging initiatives will be given preference. Requirements To be successful in this role, you should meet the following requirements: Provide 24x7 Production Support for MongoDB databases. Perform Root Cause Analysis, troubleshoot the majority of production issues, and provide solutions. Maintain High Availability & Replication setups. Handle production issues independently and provide resolutions within agreed SLAs. Participate in Crisis Calls, MIR, DQR Experience of working in Sharding Environments Knowledge of Replication Experience with various monitoring tools such as the profiler, mongotop, and mongostat Experience in MongoDB backup and recovery Experience with Ops Manager. Experience with the MongoDB logging mechanism and journaling Experience of handling Crisis Calls. Troubleshoot various types of critical issues in production environments. Candidate should be ready to work in 24x7 shifts on a rotational basis. Additional Skill: Experience in IBM Db2 RDBMS technology. Soft skills / Behavioral Qualities: Efficiently manage all tasks. Take ownership and accountability of tasks. Proactive approach to taking on tasks. Active listening skills Clear, precise, and timely communication with Customer/Team Members/Management Ready to work in a 365-day / 24x7 environment as per Business/Project requirements. www.hsbc.com/careers HSBC is committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website. Issued by – HSBC Software Development India
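To make the replication-monitoring duties above concrete, here is a minimal pymongo sketch (not HSBC tooling) that inspects replica set member state; the connection string and replica set name are placeholders.

```python
# Illustrative sketch (not HSBC tooling): inspect replica set member health with pymongo.
# The connection string and replica set name are placeholders.
from pymongo import MongoClient

client = MongoClient("mongodb://db1.internal:27017,db2.internal:27017/?replicaSet=rs0")

status = client.admin.command("replSetGetStatus")
for member in status["members"]:
    # stateStr is e.g. PRIMARY, SECONDARY, or RECOVERING; optimeDate shows replication progress.
    print(member["name"], member["stateStr"], member.get("optimeDate"))
```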
Posted 2 weeks ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Our Purpose Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential. Title And Summary Software Engineer II Who is Mastercard? We work to connect and power an inclusive, digital economy that benefits everyone, everywhere, by making transactions safe, simple, smart, and accessible. Using secure data and networks, partnerships, and passion, our innovations and solutions help individuals, financial institutions, governments, and businesses realize their greatest potential. Our decency quotient, or DQ, drives our culture and everything we do inside and outside of our company. We cultivate a culture of inclusion for all employees that respects their individual strengths, views, and experiences. We believe that our differences enable us to be a better team – one that makes better decisions, drives innovation, and delivers better business results. Technology at Mastercard What we create today will define tomorrow. Revolutionary technologies that reshape the digital economy to be more connected and inclusive than ever before. Safer, faster, more sustainable. And we need the best people to do it. Technologists who are energized by the challenges of a truly global network. With the talent and vision to create the critical systems and products that power global commerce and connect people everywhere to the vital goods and services they need every day. Working at Mastercard means being part of a unique culture. Inclusive and diverse, a rich collaboration of ideas and perspectives. A place that celebrates your strengths, values your experiences, and offers you the flexibility to shape a career across disciplines and continents. And the opportunity to work alongside experts and leaders at every level of the business, improving what exists, and inventing what’s next. About The Role Software Engineers at Mastercard design and code a variety of applications covering transaction processing, analytics, user interfaces, and APIs using a blend of cutting-edge technologies across big data, distributed systems, machine learning, and more. As a Software Engineer, you will deliver these products and solutions with speed and agility as part of a small team. This will involve developing high-performing, highly scalable software solutions and products for some of the world’s top brands. Specific tasks vary depending on the project and the business unit that you join in. All Staff At Mastercard Are Expected To Demonstrate 'Mastercard Way' Cultural Values Every Day - Own It, Simplify It, Sense Of Urgency, Thoughtful Risk-taking, Unlock Potential, And Be Inclusive – With a Relentless Focus On Our Customers. As a Software Engineer II At Mastercard, You Are Expected To Perform The Following General Responsibilities: Own tasks (dev, test, deployment) at an application/software component level Able to troubleshoot and refactor existing code Adopt new languages and architecture patterns needed for the work Influence the decisions made by the team Assist peers and less experienced members All About You It is expected a candidate should have experience across multiple areas listed below. 
Understands and has experience with full-stack application development including backend services, persistence tiers (e.g., databases), and interfaces (e.g., GUI, APIs, etc.). Understands and implements standard data access patterns (e.g., data transfer objects, object-relational mappings, data sharding, geo-partitioning). Uses defensive coding, logging, and alerts to help prevent issues. Understands the full feature development lifecycle including creating technical design documentation, comparing & contrasting approaches, tasking out stories, and designing test plans. Approaches feature rollout defensively with rollback contingencies. Conducts product demos. Good oral and written communicator. Communicates clearly and concisely. Structures communication to facilitate asynchronous discussion and build consensus. Creates technical documentation to describe their feature’s architecture, functionality, purpose, and troubleshooting workflow. Treats testing as a first-class requirement for any feature, ensuring testability, and coverage from the start. Understands how and when to employ various types of tests such as unit, regression, performance, and integration. Able to isolate technical problems by forming a hypothesis and eliminating prospective causes. Shows creativity and persistence on intermittent or hard to reproduce issues. Works closely with junior engineers and interns. Consistently checks in to provide guidance and ensure progress. Coaches other team members to be productive and grow their skills. Experience using enterprise tooling and processes to support full SDLC including CI/CD, tools for version control, code review, code coverage, trouble-ticketing, feature/defects planning, tracking and management. Exhibits flexibility and professionalism. Approaches discussions with empathy with a goal of building consensus. Frequently pitches in and lends support where needed. Effectively partners with team members in all roles and geographies. Ability to use knowledge of industry and software development trends to propose and apply them to design/architectural challenges for product features. Equal opportunities considerations In the US, Mastercard is an inclusive Equal Employment Opportunity employer that considers applicants without regard to gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law. If you require accommodation or assistance to complete the online application process, please contact reasonable_accommodation@mastercard.com and identify the type of accommodation or assistance you are requesting. Do not include any medical or health information in this email. The Reasonable Accommodations team will respond to your email promptly. Corporate Security Responsibility All Activities Involving Access To Mastercard Assets, Information, And Networks Comes With An Inherent Risk To The Organization And, Therefore, It Is Expected That Every Person Working For, Or On Behalf Of, Mastercard Is Responsible For Information Security And Must: Abide by Mastercard’s security policies and practices; Ensure the confidentiality and integrity of the information being accessed; Report any suspected information security violation or breach, and Complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines. R-253006
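As one illustrative (not Mastercard-specific) example of the data sharding access pattern mentioned above, here is a tiny Python helper that routes a key to a shard with a stable hash; the shard names are hypothetical.

```python
# Illustrative sketch of the "data sharding" access pattern: route a key to one of N shards
# with a stable hash. Shard names are hypothetical; this is not consistent hashing, so
# adding a shard would remap most keys.
import hashlib

SHARDS = ["users-db-0", "users-db-1", "users-db-2", "users-db-3"]


def shard_for(user_id: str) -> str:
    digest = hashlib.sha256(user_id.encode("utf-8")).digest()
    index = int.from_bytes(digest[:4], "big") % len(SHARDS)
    return SHARDS[index]


print(shard_for("customer-42"))  # always maps to the same shard for the same key
```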
Posted 2 weeks ago
7.0 years
0 Lacs
India
On-site
Position: MongoDB Developer Experience: 7-8 years Key Responsibilities: Design and implement MongoDB database solutions for performance and scalability. Create, optimize, and maintain MongoDB collections, indexes, and schemas. Develop efficient queries for CRUD operations and aggregations. Integrate MongoDB with backend APIs and services. Monitor, troubleshoot, and improve database performance. Ensure data security and integrity across all systems. Collaborate with front-end, back-end, and DevOps teams to ensure seamless data flow. Create and maintain documentation related to database structure and code. Required Skills & Qualifications: Strong experience with MongoDB and NoSQL database design. Proficient in MongoDB query language, aggregation framework, and indexing. Experience with Node.js, Express, or other backend technologies. Familiarity with data modeling, sharding, and replication. Knowledge of MongoDB tools like Mongo Compass, Mongoose, or Robo 3T. Understanding of REST APIs and backend integration. Ability to write clean, maintainable, and efficient code. Good understanding of version control tools like Git. Strong analytical and problem-solving skills. Preferred Qualifications: MongoDB certification or related training. Experience with cloud-hosted databases (MongoDB Atlas, AWS DocumentDB). Familiarity with performance tuning and monitoring tools. Prior experience working in Agile/Scrum environments. Job Type: Full-time Schedule: Day shift Work Location: In person Speak with the employer +91 7877727352
Posted 2 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Web Developer Intern – Full Stack (Ci4 Digital Platform) Location: Bangalore Duration: 3–6 Months (with potential conversion to a full-time role) Stipend: As per industry standards About the Role We're looking for a proactive and technically capable Web Developer Intern to assist in building a secure, scalable, and modular web application platform. You’ll work closely with our core tech and product team to translate functional requirements into reliable, maintainable digital modules such as work order management, finance, feedback, and time tracking. Key Responsibilities: Develop and maintain web modules using modern front-end and back-end frameworks Implement secure user authentication and authorization flows Design scalable database schemas and optimize queries with basic data separation and access control Work on website performance, including caching strategies, asset optimization, and secure cookie/session handling Ensure responsive, intuitive UI/UX using contemporary design standards Collaborate with product owners and backend engineers to integrate APIs Follow Git workflows and Agile sprint cycles Tech Stack Requirements You don’t need to know everything, but a strong grasp of, or willingness to quickly learn, these technologies is essential. Frontend: React.js, Vue.js, or similar JS framework HTML5, CSS3, TailwindCSS / Bootstrap Webpack / Vite, browser storage, and component libraries Backend: Node.js (Express), Django, or equivalent RESTful APIs & basic GraphQL exposure preferred JWT or OAuth2 based authentication Role-based access control Database: PostgreSQL / MySQL with normalized schemas MongoDB (plus if familiar with document-store structuring) Basic understanding of database sharding/splitting and indexing DevOps / Tools: Git, GitHub/GitLab Docker basics Redis (caching), NGINX, or similar web server Basic CI/CD understanding Other Skills: Familiarity with secure coding practices (input validation, CSRF/XSS protection, HTTPS) Understanding of session/cookie management and client-server architecture API documentation and debugging tools (Postman, Swagger, etc.) Eligibility Final-year student or recent graduate in Computer Science/IT or a related field Prior internship or project experience in full-stack development is a strong plus Eagerness to learn, adapt, and own technical deliverables
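For a concrete, hedged example of the JWT-based authentication and role-based access control listed in the stack above, the following Python sketch uses PyJWT; the secret, role names, and claim layout are assumptions, not the platform's actual design.

```python
# Illustrative sketch: JWT issuance and a role check with PyJWT. The secret, claim layout,
# and role names are assumptions, not the platform's actual design.
import jwt  # PyJWT

SECRET = "change-me"  # placeholder; load from a secrets manager in a real deployment


def issue_token(user_id: str, roles: list) -> str:
    return jwt.encode({"sub": user_id, "roles": roles}, SECRET, algorithm="HS256")


def require_role(token: str, role: str) -> dict:
    claims = jwt.decode(token, SECRET, algorithms=["HS256"])  # raises on a bad signature
    if role not in claims.get("roles", []):
        raise PermissionError(f"missing role: {role}")
    return claims


token = issue_token("u123", ["editor"])
print(require_role(token, "editor")["sub"])  # prints "u123"
```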
Posted 2 weeks ago
0.0 - 13.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Job Description:
Job Title: Staff Engineer - MCPD

Department Overview:
Marketing Cloud Platform Department (MCPD)'s mission is to lead Rakuten's marketing-related product strategy and execute product development and implementation. We empower Rakuten internal marketing teams by building engaging, respectful and cost-efficient marketing platforms that put our customer at the center. Our main advantage comes from our ability to leverage the Rakuten Ecosystem. We provide marketing solutions such as marketing campaign management, multichannel communication and personalization. With 200+ experts across Japan, India and Singapore, we are proud to be a technology organization and we share our knowledge across the Rakuten Tech community.

Position Overview:
We are seeking a highly skilled and experienced Staff Engineer to join our engineering team. The ideal candidate will possess deep expertise in Java, advanced Java, and microservices architecture, with strong skills in system design, low-level design (LLD), and database scaling. You will play a pivotal role in shaping our technology stack, ensuring high availability and scalability, and mentoring junior engineers. A key focus will be on leveraging Kubernetes for container orchestration and driving technical excellence across the organization. As a Staff Engineer, you will be responsible for hands-on work and for overseeing the development and delivery of our MCPD products, ensuring the highest standards of quality, performance, and reliability.

Key Responsibilities:
Architect and Design Systems: Lead the design and development of highly scalable and resilient microservices-based systems, providing both high-level architecture (HLD) and detailed low-level designs (LLD).
Code Reviews and Best Practices: Drive engineering best practices through code reviews, design discussions, and collaboration with cross-functional teams.
Database Scaling: Design and optimize databases to support high transaction volumes, ensuring efficient scaling and performance tuning for production environments.
Microservices Development: Build, deploy, and manage microservices using modern technologies and frameworks, ensuring smooth operation in distributed environments.
System Ownership: Take full ownership of features or systems from conception to production, ensuring they meet reliability, performance, and security standards.
Kubernetes & Cloud-Native Architecture: Leverage Kubernetes for deploying, scaling, and managing containerized applications, ensuring seamless orchestration in cloud environments.
Mentorship and Leadership: Mentor junior engineers and contribute to their professional growth through knowledge sharing, coaching, and promoting a culture of continuous improvement.
Collaboration: Work closely with product managers, architects, and other engineering teams to translate business requirements into technical solutions.
Innovation and R&D: Stay updated with the latest technology trends and evaluate new tools, frameworks, and methodologies to ensure the team is leveraging cutting-edge solutions.

Qualifications:
Bachelor's or master's degree in computer science, engineering, or a related field.
10+ years of experience in software development, with expertise in Java and advanced Java concepts.
Proven experience with microservices architecture, including hands-on experience building and maintaining distributed systems.
Strong understanding of system design and ability to create high-availability, high-performance applications.
Experience with low-level design (LLD), providing detailed design solutions for complex problems.
Deep knowledge of database scaling techniques, such as partitioning, sharding, and replication.
Proficient in Kubernetes for container orchestration, including production experience with large-scale deployments.
Knowledge of messaging systems (Kafka, RabbitMQ).
Familiarity with cloud platforms (AWS, GCP, Azure) and modern DevOps practices.

Preferred Skills:
Experience with other languages such as Python, Go, or Node.js.
Familiarity with AIOps, observability, and monitoring tools.

Mandatory Skills: Java, Spring Boot, Go, Python, LLD, HLD
Location: Bangalore, Karnataka, India
Years of Experience: 10 to 13 years

Why you should choose us:
Are you interested in working for a global leader in e-commerce? Are you excited about working on highly scalable platforms and applications that are accessed by millions of users every day? If so, read on to find out more about the opportunity. Rakuten is the largest e-commerce company in Japan and one of the largest e-commerce and internet services companies in the world. Rakuten is ranked among the top 20 most innovative companies in the world by Forbes. Rakuten India Development Centre is the second-largest technology hub outside of Japan, building platforms for global e-commerce, payments, digital, AI, and data science services across the globe. The India arm serves as a research and development center with an employee strength of around 450+ (and growing).
Rakuten is committed to cultivating and preserving a culture of inclusion and connectedness. We are able to grow and learn better together with a diverse team and inclusive workforce. The collective sum of the individual differences, life experiences, knowledge, innovation, self-expression, and talent that our employees invest in their work represents not only part of our culture, but our reputation and Rakuten's achievement as well. In recruiting for our team, we welcome the unique contributions that you can bring in terms of your education, opinions, culture, ethnicity, race, sex, gender identity and expression, nation of origin, age, languages spoken, veteran status, color, religion, disability, sexual orientation and beliefs.
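To make the database-scaling points above a little more concrete, here is a toy consistent-hashing shard router in Python, one common way to spread data across shards by key; the shard names and key format are assumptions for illustration, not part of the role description.

```python
import bisect
import hashlib


class ShardRouter:
    """Route keys to shards via a consistent-hash ring so that adding or
    removing a shard only remaps a small fraction of keys."""

    def __init__(self, shards, vnodes=100):
        self._ring = []  # sorted list of (hash, shard) pairs
        for shard in shards:
            for i in range(vnodes):
                self._ring.append((self._hash(f"{shard}#{i}"), shard))
        self._ring.sort()
        self._keys = [h for h, _ in self._ring]

    @staticmethod
    def _hash(value: str) -> int:
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def shard_for(self, key: str) -> str:
        # Walk clockwise on the ring to the first virtual node at or after the key's hash.
        idx = bisect.bisect(self._keys, self._hash(key)) % len(self._ring)
        return self._ring[idx][1]


router = ShardRouter(["db-shard-1", "db-shard-2", "db-shard-3"])
print(router.shard_for("customer:12345"))
```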
Posted 2 weeks ago
5.0 years
6 - 20 Lacs
India
On-site
Job Description: Senior Database Developer (MySQL & AWS Expert)
Location: Hyderabad, India
Experience: 5+ Years (Preferably 7+ Years)
Employment Type: Full-time

Role Overview:
We are looking for an exceptionally strong Database Developer with 5+ years of hands-on experience specializing in MySQL database development on the Amazon AWS cloud. The ideal candidate should have deep expertise in high-performance query tuning, handling massive datasets, designing complex summary tables, and implementing scalable database architectures. This role demands a highly analytical and problem-solving mindset, capable of delivering optimized, mission-critical database solutions.

Key Responsibilities:
• Design, develop, and optimize highly scalable MySQL databases on AWS cloud infrastructure.
• Expert-level performance tuning of queries, indexes, and stored procedures for mission-critical applications.
• Handle large-scale datasets, ensuring efficient query execution and minimal latency.
• Architect and implement summary tables for optimized reporting and analytical performance.
• Work closely with software engineers to design efficient data models, indexing strategies, and partitioning techniques.
• Ensure high availability, disaster recovery, and fault tolerance of database systems.
• Perform root-cause analysis of database bottlenecks and implement robust solutions.
• Implement advanced replication strategies, read/write separation, and data sharding for optimal performance.
• Work with DevOps teams to automate database monitoring, backups, and performance metrics using AWS tools.
• Optimize stored procedures, triggers, and complex database functions to enhance system efficiency.
• Ensure best-in-class data security, encryption, and access control policies.

Must-Have Skills (an illustrative tuning sketch follows the posting details below):
• Proven expertise in MySQL query optimization, indexing, and execution plan analysis.
• Strong knowledge of AWS RDS, Aurora, and cloud-native database services.
• Hands-on experience in tuning high-performance, high-volume transactional databases.
• Deep understanding of database partitioning, sharding, caching, and replication strategies.
• Experience working with large-scale datasets (millions to billions of records) and ensuring low-latency queries.
• Advanced experience in database schema design, normalization, and optimization for high availability.
• Proficiency in query profiling, memory management, and database load balancing.
• Strong understanding of data warehousing, ETL processes, and analytics-driven data models.
• Expertise in troubleshooting slow queries and deadlocks in a production environment.
• Proficiency in scripting languages such as Python, Shell, or SQL for automation.

Preferred Skills:
• Experience with big data technologies like Redshift, Snowflake, Hadoop, or Spark.
• Exposure to NoSQL databases (MongoDB, Redis) for hybrid data architectures.
• Hands-on experience with CI/CD pipelines and DevOps database management.
• Experience in predictive analytics and AI-driven data optimizations.

Educational Qualification:
• Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.

Salary & Benefits:
• Top-tier compensation package for highly skilled candidates.
• Fast-track career growth with opportunities for leadership roles.
• Comprehensive health benefits and performance-based bonuses.
• Exposure to cutting-edge technologies and large-scale data challenges.

If you are a world-class MySQL expert with a passion for solving complex database challenges and optimizing large-scale systems, apply now!

Job Types: Full-time, Permanent
Pay: ₹634,321.11 - ₹2,091,956.36 per year
Benefits: Health insurance, Paid sick time, Paid time off, Provident Fund
Schedule: Day shift, Monday to Friday
Language: English (Required)
Work Location: In person
Expected Start Date: 21/07/2025
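Purely as an illustration of the query-tuning and summary-table work described in the posting above, here is a minimal sketch using MySQL Connector/Python; the endpoint, credentials, table, and column names (orders, daily_order_totals, order_date, customer_id, amount) are hypothetical.

```python
import mysql.connector  # MySQL Connector/Python

# Connection parameters are placeholders; in practice these would point at an
# AWS RDS or Aurora endpoint.
conn = mysql.connector.connect(
    host="example-rds-endpoint", user="app", password="secret", database="sales"
)
cur = conn.cursor()

# 1. Inspect the execution plan of a slow report query.
cur.execute(
    "EXPLAIN SELECT customer_id, SUM(amount) FROM orders "
    "WHERE order_date >= '2025-01-01' GROUP BY customer_id"
)
for row in cur.fetchall():
    print(row)

# 2. Add a composite index so the date filter and grouping can use it.
cur.execute(
    "CREATE INDEX idx_orders_date_customer ON orders (order_date, customer_id)"
)

# 3. Maintain a daily summary table for reporting instead of scanning raw rows.
cur.execute(
    "INSERT INTO daily_order_totals (order_date, customer_id, total_amount) "
    "SELECT order_date, customer_id, SUM(amount) FROM orders "
    "WHERE order_date = CURRENT_DATE GROUP BY order_date, customer_id "
    "ON DUPLICATE KEY UPDATE total_amount = VALUES(total_amount)"
)
conn.commit()
cur.close()
conn.close()
```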
Posted 3 weeks ago
4.0 years
0 Lacs
India
On-site
An innovative MySQL Database Administrator to assist in developing and creating cutting-edge solutions for the healthcare industry. This is an opportunity to develop and create cutting-edge solutions for our products and services while being part of an extraordinary team. This role will ensure that the business is well-coordinated and productive by managing its processes and coaching its people. The area of responsibility for this role is very wide and thus requires thorough knowledge of various company processes.

Who we are…
It's simple. We do business differently, by empowering our team to create fresh ideas which impact lives everywhere. We don't just dream it; we do it every day. At Axxess, we bring life-changing technology to healthcare, impacting the way people work, learn and grow their business. Our edge does not come from our technology; it comes from our people. We work as one team with a common goal to create shared success benefiting everyone.

What you will do...
Participate in daily operations to support the business.
Optimize databases (SQL calls) for maximum performance.
Collaborate with management in the development of performance goals and long-term operational plans.
Collaborate on operational processes and performance, and recommend solutions for improvement.
Monitor day-to-day operational systems and processes that provide visibility into goals, progress, and obstacles for our key initiatives.
Maintain relationships with all technology team members, external partners, and vendors to provide input regarding operational activity and strategic goals.
Monitor and analyze key metrics for the day-to-day performance of the operations to ensure efficient and timely completion of tasks.
Perform technical troubleshooting to diagnose, isolate and correct data and database issues.

Qualifications
What you bring…
Required – 4 years of MySQL management experience.
Required – Experience configuring MySQL HA and clustering solutions.
Required – Large-scale MySQL deployments.
Required – MySQL sharding architecture and techniques.
Experience with SQL and stored routines.
Experience working with HAProxy, ProxySQL or MaxScale is a plus.
Working knowledge of AWS database technologies such as Aurora and RDS.
Ability to work with Linux/Unix-based OS environments.
MS SQL experience is a plus.
Software-as-a-service industry experience is a plus.
Cloud experience is a plus.
Strong communication, interpersonal and professional skills.
Proven ability to plan and manage the operational process for maximum efficiency and productivity.
Demonstrated ability to solve problems and anticipate customer needs.
Demonstrated ability to organize, set, and implement priorities and manage multiple tasks.
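As a small illustration of the day-to-day MySQL health monitoring this role describes, here is a sketch that checks replication status with MySQL Connector/Python; the host, credentials, and the MySQL 8.0.22+ column names are assumptions.

```python
import mysql.connector

# Placeholder connection details for a replica host.
conn = mysql.connector.connect(host="replica-1", user="monitor", password="secret")
cur = conn.cursor(dictionary=True)

# SHOW REPLICA STATUS reports replication health (MySQL 8.0.22+;
# older releases use SHOW SLAVE STATUS and the older column names).
cur.execute("SHOW REPLICA STATUS")
status = cur.fetchone()
if status is None:
    print("This server is not configured as a replica.")
else:
    print("IO thread running:", status["Replica_IO_Running"])
    print("SQL thread running:", status["Replica_SQL_Running"])
    print("Seconds behind source:", status["Seconds_Behind_Source"])

cur.close()
conn.close()
```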
Posted 3 weeks ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Company Description
About Sutherland: Artificial intelligence. Automation. Cloud engineering. Advanced analytics. For business leaders, these are key factors of success. For us, they're our core expertise. We work with iconic brands worldwide. We bring them a unique value proposition through market-leading technology and business process excellence. We've created over 200 unique inventions under several patents across AI and other critical technologies. Leveraging our advanced products and platforms, we drive digital transformation, optimize critical business operations, reinvent experiences, and pioneer new solutions, all provided through a seamless "as a service" model. For each company, we provide new keys for their businesses, the people they work with, and the customers they serve. We tailor proven and rapid formulas to fit their unique DNA. We bring together human expertise and artificial intelligence to develop digital chemistry. This unlocks new possibilities, transformative outcomes and enduring relationships. Sutherland: Unlocking digital performance. Delivering measurable results.

Job Description
Sutherland Global Services is seeking a MERN Stack Developer with experience in developing and testing applications.

Qualifications
To succeed in this position, you must have:
Bachelor's degree in Computer Science, Engineering or a related subject
Experience with ASP.NET, MVC 4.0/3.5/2.0 and Entity Framework (Mandatory)
Hands-on experience with .NET Core (Good to have)
Development experience with SQL stored procedures (Mandatory)
Implementation experience with LINQ queries (Mandatory)
Knowledge of Azure cloud services (Good to have)
Experience developing WCF and Web API services (SOAP / REST) (Mandatory)
Minimum expertise with AngularJS and jQuery (Mandatory)
Knowledge of DevOps, TFS branching and merging (Mandatory)

Technology Split:
.NET Frameworks: Proficient in developing applications using .NET Core and .NET Framework. Experienced with ASP.NET MVC to create robust and scalable web applications.
Programming Languages: Strong programming skills in C# for backend development. Fluent in JavaScript for dynamic front-end interactions.
Database Management: Experience in designing and managing SQL queries. Experience working with SQL partitioning and sharding methodologies. Capable of debugging and developing complex queries, optimizing performance, and ensuring data security.
Frontend Development: Proficient in HTML5 and CSS3 for structuring and designing web pages. Experience with responsive design and cross-browser compatibility.
Development Tools and Environments: Familiar with Visual Studio and VS Code as integrated development environments. Experience with version control using Git.
Web Technologies: Skilled in client-side technologies including JavaScript, jQuery, and AJAX. Experience in developing and debugging web services such as REST APIs and SOAP. Experience with GraphQL API technologies is an advantage.
Testing and Debugging: Capable of performing unit testing and integration testing to ensure application reliability and functionality. Proficient in debugging tools and techniques for both frontend and backend.
Soft Skills: Strong problem-solving abilities and analytical skills.

Additional Information
This is a full-time position based out of Chennai.
Posted 3 weeks ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Requirements
Description and Requirements
The MongoDB Administrator is responsible for managing and maintaining MongoDB database systems to ensure optimal performance, availability, and security. This role involves database design, implementation, monitoring, backup, and recovery tasks, as well as collaborating with development and operations teams to support application requirements.

Key Responsibilities:
Install, configure, and maintain MongoDB databases across development, testing, and production environments.
Monitor database performance, implement changes, and apply new patches and versions when required.
Ensure high availability and disaster recovery through replication, sharding, and backup strategies.
Create and maintain users, roles, and permissions in accordance with security policies.
Design and implement indexing, data modeling, and query optimization for performance tuning.
Set up and manage replica sets and sharded clusters.
Implement and maintain monitoring systems such as MongoDB Ops Manager, Prometheus, or similar tools.
Develop and execute scripts for automation and database maintenance tasks.
Collaborate with application developers to optimize data operations and troubleshoot issues.
Maintain documentation related to database configuration, standard procedures, and processes.
Conduct regular security audits and ensure compliance with organizational and regulatory standards.

Additional Job Description
Must have 5+ years of experience.
Proven experience as a MongoDB Database Administrator.
Strong understanding of MongoDB architecture, data modeling, and performance tuning.
Experience with MongoDB Atlas and on-premises MongoDB installations.
Familiarity with Linux/Unix system administration.
Proficiency in scripting languages like Bash, Python, or PowerShell.
Knowledge of DevOps tools and CI/CD pipelines is a plus.
Experience with cloud platforms (AWS, Azure, GCP) is desirable.
Solid understanding of backup and recovery practices.
Strong problem-solving and analytical skills.

EEO Statement
At TELUS Digital, we enable customer experience innovation through spirited teamwork, agile thinking, and a caring culture that puts customers first. TELUS Digital is the global arm of TELUS Corporation, one of the largest telecommunications service providers in Canada. We deliver contact center and business process outsourcing (BPO) solutions to some of the world's largest corporations in the consumer electronics, finance, telecommunications and utilities sectors. With global call center delivery capabilities, our multi-shore, multi-language programs offer safe, secure infrastructure, value-based pricing, skills-based resources and exceptional customer service - all backed by TELUS, our multi-billion dollar telecommunications parent.

Equal Opportunity Employer
At TELUS Digital, we are proud to be an equal opportunity employer and are committed to creating a diverse and inclusive workplace. All aspects of employment, including the decision to hire and promote, are based on applicants' qualifications, merits, competence and performance without regard to any characteristic related to diversity.
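To illustrate the sharding and user-management responsibilities described above, here is a minimal pymongo sketch assuming a sharded cluster that is already running and reachable through a mongos router; the URI, database, collection, key, and user names are placeholders.

```python
from pymongo import MongoClient

# Assumes a sharded cluster is already running and we connect through mongos;
# the URI, database, and collection names below are placeholders.
client = MongoClient("mongodb://mongos.example.internal:27017")

# Enable sharding for a database and shard a collection on a hashed key
# so writes spread evenly across shards.
client.admin.command("enableSharding", "appdb")
client.admin.command("shardCollection", "appdb.events", key={"tenant_id": "hashed"})

# Create a least-privilege user in line with role-based security policies.
client["appdb"].command(
    "createUser",
    "reporting_user",
    pwd="change-me",
    roles=[{"role": "read", "db": "appdb"}],
)

# Quick connectivity/health check.
print(client.admin.command("ping"))
```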
Posted 3 weeks ago
6.0 years
0 Lacs
India
Remote
Experience: 6+ Years
______________
Position Overview
We are looking for a Senior MongoDB Database Developer or Architect with a minimum of 6 years of hands-on experience to lead and manage our MongoDB database infrastructure. This is a strategic role, ideal for someone who thrives in dynamic environments and has a strong background in MongoDB database administration, performance tuning, and architecture. The role will involve working across development, staging, and production systems to ensure scalability, reliability, and high performance of database operations. In addition to deep MongoDB expertise, familiarity with Fivetran, Snowflake, ETL processes, Node.js, JavaScript, and application design patterns for MongoDB will be highly valued.
______________
Key Responsibilities
Monitor, maintain, and optimize MongoDB environments for performance and uptime.
Configure and fine-tune database parameters for efficiency.
Implement and manage robust backup and recovery strategies.
Ensure data integrity, security, and compliance across all environments.
Migrate databases from on-premises to cloud platforms (e.g., MongoDB Community to Atlas).
Upgrade MongoDB versions and supporting agents/tools as needed.
Application & Production Support
Troubleshoot and resolve database and application-related issues.
Provide expert-level support to engineering teams and end-users.
Perform routine maintenance and system upgrades.
Maintain comprehensive documentation for procedures, configurations, and best practices.
Database Development
Collaborate with development teams to implement MongoDB best practices and application design patterns.
Create and maintain scripts for data extraction, transformation, and reporting.
Enhance and maintain ETL pipelines, including integration with Fivetran and Snowflake.
Capacity Planning
Analyze usage trends and plan for future data growth and scaling.
Recommend and implement data growth strategies and resource right-sizing.
Monitor performance metrics and address bottlenecks proactively.
Manage data lifecycle, including archival and retention strategies.
Performance Optimization
Optimize query execution plans, indexing strategies, and replication performance.
Monitor and tune disk I/O, oplog usage, and overall system throughput.
Implement server scaling and sharding strategies as necessary.
Disaster Recovery & High Availability
Design and test comprehensive disaster recovery and high-availability solutions.
Ensure consistent backup jobs and regularly validate recovery processes.
Work with cross-functional teams to maintain DR readiness.
______________
Preferred Skills & Technologies
Strong expertise in MongoDB (including sharding, replication, performance tuning)
Experience with MongoDB Atlas and Ops Manager
Proficiency in ETL pipelines and tools (Fivetran, Snowflake)
Hands-on scripting and development experience (JavaScript, Node.js)
Experience with cloud migration and database modernization
Familiarity with DevOps practices and CI/CD pipelines for database changes
______________
This is a remote opportunity open to both onshore (USA) and offshore (India) candidates.
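As an illustration of the indexing, execution-plan, and data-lifecycle work listed above, here is a minimal pymongo sketch; the connection string, database, collection, field names, and the 90-day retention window are assumptions, not requirements from the posting.

```python
from pymongo import MongoClient, DESCENDING

# Placeholder connection and names; illustrates the kind of index and
# lifecycle work described above, not a specific production setup.
client = MongoClient("mongodb://localhost:27017")
events = client["analytics"]["events"]

# TTL index: documents expire 90 days after their created_at timestamp,
# one simple way to implement an archival/retention policy.
events.create_index("created_at", expireAfterSeconds=90 * 24 * 3600)

# Check whether a query can use an index by inspecting its execution plan.
plan = events.find({"user_id": "u-123"}).sort("created_at", DESCENDING).explain()
print(plan["queryPlanner"]["winningPlan"])
```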
Posted 3 weeks ago
5.0 years
0 Lacs
India
On-site
About ArthaChain
ArthaChain is an ultra-scalable, AI-powered Layer 1 blockchain developed by Diigoo Tech Pvt. Ltd. Designed with SVCP (Social Verified Consensus Protocol), AI sharding, and an advanced modular architecture, ArthaChain delivers unmatched performance: over 22 million TPS with sub-2-second finality, making it a future-proof infrastructure for global Web3 adoption. We are now entering our pre-seed investment round ($1.45M at a $10M valuation) and are looking for a battle-tested Funding Advisor to help us close this round and secure institutional, VC, and grant-based funding.

Who We Want
We are not looking for generalists or entry-level advisors. This role is strictly for experienced professionals with a proven history of raising capital for blockchain/Web3 startups. You are someone who:
Has closed multiple 6–7 figure rounds (seed to Series A) in the crypto/Web3 space.
Has deep connections with VCs, crypto funds, DAOs, or family offices that actively invest in blockchain.
Understands the nuances of tokenomics, regulatory dynamics, and crypto-native fundraising models (SAFTs, SAFEs, equity-token hybrids, etc.).
Can personally make introductions to the right investors, not just advise from the sidelines.

Key Responsibilities
Strategically guide our pre-seed and seed round fundraising, including pitch optimization and valuation strategy.
Leverage your existing VC/fund/angel network to secure warm intros and build momentum.
Advise on funding instruments, equity/token allocations, and long-term capital structuring.
Support deal negotiation, due diligence, and investor relations.
Assist with applying to Web3 accelerators and grant programs.

Minimum Requirements
5+ years of experience in Web3 fundraising, VC, crypto banking, or strategic finance.
Demonstrated success in funding blockchain or crypto-native companies.
Deep knowledge of Web3 capital markets, tokenomics, and regulatory considerations.
Ability to commit 10–15 hours/month and deliver measurable traction.
Strong ethical grounding; we are a legitimate, compliance-first project.

Compensation
Retainer + Success Fee
Posted 3 weeks ago
5.0 - 8.0 years
1 - 5 Lacs
Chennai
On-site
Wipro Limited (NYSE: WIT, BSE: 507685, NSE: WIPRO) is a leading technology services and consulting company focused on building innovative solutions that address clients' most complex digital transformation needs. Leveraging our holistic portfolio of capabilities in consulting, design, engineering, and operations, we help clients realize their boldest ambitions and build future-ready, sustainable businesses. With over 230,000 employees and business partners across 65 countries, we deliver on the promise of helping our customers, colleagues, and communities thrive in an ever-changing world. For additional information, visit us at www.wipro.com.

Hands-on experience in data modelling for both OLTP and OLAP systems.
In-depth knowledge of conceptual, logical, and physical data modelling.
Strong understanding of indexing, partitioning, and data sharding, with practical experience.
Experience in identifying and addressing factors affecting database performance for near-real-time reporting and application interaction.
Proficiency with at least one data modelling tool (preferably DB Schema).
Functional knowledge of the mutual fund industry is a plus.
Familiarity with GCP databases such as AlloyDB, Cloud SQL, and BigQuery.
Willingness to work from the Chennai customer site (office presence is mandatory), with five days of on-site work each week.

Mandatory Skills: Cloud-PaaS-GCP-Google Cloud Platform.
Experience: 5-8 Years.

Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention: of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.
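To ground the partitioning and data-modelling concerns above in a GCP setting, here is a minimal sketch that creates a date-partitioned, clustered BigQuery table with the google-cloud-bigquery client; the project, dataset, table, and column names are hypothetical and the mutual-fund theme is only an example.

```python
from google.cloud import bigquery

# Assumes application-default credentials; project and dataset are placeholders.
client = bigquery.Client(project="example-project")

ddl = """
CREATE TABLE IF NOT EXISTS analytics.fund_transactions (
  txn_id  STRING,
  fund_id STRING,
  txn_ts  TIMESTAMP,
  amount  NUMERIC
)
PARTITION BY DATE(txn_ts)  -- prune scans to the relevant days
CLUSTER BY fund_id         -- co-locate rows for per-fund queries
"""
client.query(ddl).result()

# A reporting query that benefits from both the partition filter and clustering.
sql = """
SELECT fund_id, SUM(amount) AS total
FROM analytics.fund_transactions
WHERE DATE(txn_ts) BETWEEN DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY) AND CURRENT_DATE()
GROUP BY fund_id
"""
for row in client.query(sql).result():
    print(row.fund_id, row.total)
```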
Posted 3 weeks ago