
152 Sharding Jobs - Page 2

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the employer's job portal.

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.

Title and Summary
Software Engineer II

Who is Mastercard?
We work to connect and power an inclusive, digital economy that benefits everyone, everywhere, by making transactions safe, simple, smart, and accessible. Using secure data and networks, partnerships, and passion, our innovations and solutions help individuals, financial institutions, governments, and businesses realize their greatest potential. Our decency quotient, or DQ, drives our culture and everything we do inside and outside of our company. We cultivate a culture of inclusion for all employees that respects their individual strengths, views, and experiences. We believe that our differences enable us to be a better team, one that makes better decisions, drives innovation, and delivers better business results.

Technology at Mastercard
What we create today will define tomorrow. Revolutionary technologies that reshape the digital economy to be more connected and inclusive than ever before. Safer, faster, more sustainable. And we need the best people to do it. Technologists who are energized by the challenges of a truly global network, with the talent and vision to create the critical systems and products that power global commerce and connect people everywhere to the vital goods and services they need every day. Working at Mastercard means being part of a unique culture. Inclusive and diverse, a rich collaboration of ideas and perspectives. A place that celebrates your strengths, values your experiences, and offers you the flexibility to shape a career across disciplines and continents, and the opportunity to work alongside experts and leaders at every level of the business, improving what exists and inventing what’s next.

About the Role
Software Engineers at Mastercard design and code a variety of applications covering transaction processing, analytics, user interfaces, and APIs using a blend of cutting-edge technologies across big data, distributed systems, machine learning, and more. As a Software Engineer, you will deliver these products and solutions with speed and agility as part of a small team. This will involve developing high-performing, highly scalable software solutions and products for some of the world’s top brands. Specific tasks vary depending on the project and the business unit that you join.

All staff at Mastercard are expected to demonstrate the 'Mastercard Way' cultural values every day: Own It, Simplify It, Sense of Urgency, Thoughtful Risk-taking, Unlock Potential, and Be Inclusive, with a relentless focus on our customers.

As a Software Engineer II at Mastercard, you are expected to perform the following general responsibilities:
- Own tasks (dev, test, deployment) at an application/software component level
- Troubleshoot and refactor existing code
- Adopt new languages and architecture patterns needed for the work
- Influence the decisions made by the team
- Assist peers and less experienced members

All About You
A candidate is expected to have experience across multiple of the areas listed below.
- Understands and has experience with full-stack application development, including backend services, persistence tiers (e.g., databases), and interfaces (e.g., GUI, APIs).
- Understands and implements standard data access patterns (e.g., data transfer objects, object-relational mappings, data sharding, geo-partitioning).
- Uses defensive coding, logging, and alerts to help prevent issues.
- Understands the full feature development lifecycle, including creating technical design documentation, comparing and contrasting approaches, tasking out stories, and designing test plans. Approaches feature rollout defensively with rollback contingencies. Conducts product demos.
- Good oral and written communicator. Communicates clearly and concisely. Structures communication to facilitate asynchronous discussion and build consensus. Creates technical documentation to describe a feature’s architecture, functionality, purpose, and troubleshooting workflow.
- Treats testing as a first-class requirement for any feature, ensuring testability and coverage from the start. Understands how and when to employ various types of tests such as unit, regression, performance, and integration.
- Able to isolate technical problems by forming a hypothesis and eliminating prospective causes. Shows creativity and persistence on intermittent or hard-to-reproduce issues.
- Works closely with junior engineers and interns. Consistently checks in to provide guidance and ensure progress. Coaches other team members to be productive and grow their skills.
- Experience using enterprise tooling and processes to support the full SDLC, including CI/CD and tools for version control, code review, code coverage, trouble-ticketing, and feature/defect planning, tracking, and management.
- Exhibits flexibility and professionalism. Approaches discussions with empathy, with a goal of building consensus. Frequently pitches in and lends support where needed. Effectively partners with team members in all roles and geographies.
- Uses knowledge of industry and software development trends to propose solutions and apply them to design/architectural challenges for product features.

Equal Opportunities Considerations
In the US, Mastercard is an inclusive Equal Employment Opportunity employer that considers applicants without regard to gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law. If you require accommodation or assistance to complete the online application process, please contact reasonable_accommodation@mastercard.com and identify the type of accommodation or assistance you are requesting. Do not include any medical or health information in this email. The Reasonable Accommodations team will respond to your email promptly.

Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization. Therefore, every person working for, or on behalf of, Mastercard is responsible for information security and must:
- Abide by Mastercard’s security policies and practices;
- Ensure the confidentiality and integrity of the information being accessed;
- Report any suspected information security violation or breach; and
- Complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines.

R-253006
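The "data transfer objects" entry in the All About You list above refers to small, serializable carriers that move data between tiers without exposing the persistence model. A minimal sketch of the pattern (the field names are illustrative, not Mastercard's):

```python
from dataclasses import dataclass, asdict

# Minimal data transfer object (DTO): an immutable record that crosses the
# service -> API boundary so the ORM/persistence model stays private.
@dataclass(frozen=True)
class TransactionDTO:
    transaction_id: str
    amount_minor_units: int  # money as integer minor units, never float
    currency: str

def to_response(dto: TransactionDTO) -> dict:
    """Serialize the DTO for an API response."""
    return asdict(dto)
```

A service layer would build the DTO from a persistence entity and hand `to_response()` to the API tier, letting the two schemas evolve independently.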

Posted 1 week ago

Apply

7.0 years

0 Lacs

India

On-site

Position: MongoDB Developer
Experience: 7-8 years

Key Responsibilities:
- Design and implement MongoDB database solutions for performance and scalability.
- Create, optimize, and maintain MongoDB collections, indexes, and schemas.
- Develop efficient queries for CRUD operations and aggregations.
- Integrate MongoDB with backend APIs and services.
- Monitor, troubleshoot, and improve database performance.
- Ensure data security and integrity across all systems.
- Collaborate with front-end, back-end, and DevOps teams to ensure seamless data flow.
- Create and maintain documentation related to database structure and code.

Required Skills & Qualifications:
- Strong experience with MongoDB and NoSQL database design.
- Proficiency in the MongoDB query language, aggregation framework, and indexing.
- Experience with Node.js, Express, or other backend technologies.
- Familiarity with data modeling, sharding, and replication.
- Knowledge of MongoDB tools like MongoDB Compass, Mongoose, or Robo 3T.
- Understanding of REST APIs and backend integration.
- Ability to write clean, maintainable, and efficient code.
- Good understanding of version control tools like Git.
- Strong analytical and problem-solving skills.

Preferred Qualifications:
- MongoDB certification or related training.
- Experience with cloud-hosted databases (MongoDB Atlas, AWS DocumentDB).
- Familiarity with performance tuning and monitoring tools.
- Prior experience working in Agile/Scrum environments.

Job Type: Full-time
Schedule: Day shift
Work Location: In person
Speak with the employer: +91 7877727352
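The "CRUD operations and aggregations" responsibility above centers on MongoDB's aggregation framework, where a query is a list of pipeline stages. A minimal sketch (the `orders` collection and its fields are hypothetical; with PyMongo the list would be passed to `db.orders.aggregate(pipeline)`):

```python
# Top-ten customers by shipped-order revenue, as an aggregation pipeline.
# Stage order matters: $match first, so later stages (and any supporting
# index on "status") see fewer documents.
pipeline = [
    {"$match": {"status": "shipped"}},
    {"$group": {"_id": "$customer_id", "total": {"$sum": "$amount"}}},
    {"$sort": {"total": -1}},   # largest spenders first
    {"$limit": 10},
]
```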

Posted 1 week ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Web Developer Intern – Full Stack (Ci4 Digital Platform)
Location: Bangalore
Duration: 3-6 months (with potential conversion to a full-time role)
Stipend: As per industry standards

About the Role
We're looking for a proactive and technically capable Web Developer Intern to assist in building a secure, scalable, and modular web application platform. You’ll work closely with our core tech and product team to translate functional requirements into reliable, maintainable digital modules such as work order management, finance, feedback, and time tracking.

Key Responsibilities:
- Develop and maintain web modules using modern front-end and back-end frameworks
- Implement secure user authentication and authorization flows
- Design scalable database schemas and optimize queries with basic data separation and access control
- Work on website performance, including caching strategies, asset optimization, and secure cookie/session handling
- Ensure responsive, intuitive UI/UX using contemporary design standards
- Collaborate with product owners and backend engineers to integrate APIs
- Follow Git workflows and Agile sprint cycles

Tech Stack Requirements
You don’t need to know everything, but a strong grasp of, or willingness to quickly learn, the following is essential.

Frontend:
- React.js, Vue.js, or a similar JS framework
- HTML5, CSS3, TailwindCSS / Bootstrap
- Webpack / Vite, browser storage, and component libraries

Backend:
- Node.js (Express), Django, or equivalent
- RESTful APIs; basic GraphQL exposure preferred
- JWT or OAuth2 based authentication
- Role-based access control

Database:
- PostgreSQL / MySQL with normalized schemas
- MongoDB (a plus if familiar with document-store structuring)
- Basic understanding of database sharding/splitting and indexing

DevOps / Tools:
- Git, GitHub/GitLab
- Docker basics
- Redis (caching), NGINX, or a similar web server
- Basic CI/CD understanding

Other Skills:
- Familiarity with secure coding practices (input validation, CSRF/XSS protection, HTTPS)
- Understanding of session/cookie management and client-server architecture
- API documentation and debugging tools (Postman, Swagger, etc.)

Eligibility
- Final-year student or recent graduate in Computer Science/IT or a related field
- Prior internship or project experience in full-stack development is a strong plus
- Eagerness to learn, adapt, and own technical deliverables
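The "database sharding/splitting" item above boils down to deterministically routing a key to one of N shards. A minimal sketch, assuming hash-based sharding (the key format and shard count are made up for illustration):

```python
import hashlib

def shard_for(key: str, num_shards: int = 4) -> int:
    """Route a record key to a shard via a stable hash.

    A fixed digest keeps the mapping identical across processes and
    restarts; Python's built-in hash() would not work here because it
    is salted per process.
    """
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_shards
```

MD5 is used only for its stable, well-spread digest, not for security. The same key always lands on the same shard, which is what lets every service instance agree on where a row lives.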

Posted 1 week ago

Apply

0.0 - 13.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Job Description
Job Title: Staff Engineer - MCPD

Department Overview:
The Marketing Cloud Platform Department (MCPD)’s mission is to lead Rakuten’s marketing-related product strategy and execute product development and implementation. We empower Rakuten’s internal marketing teams by building engaging, respectful and cost-efficient marketing platforms that put our customer at the center. Our main advantage comes from our ability to leverage the Rakuten Ecosystem. We provide marketing solutions such as marketing campaign management, multichannel communication and personalization. With 200+ experts across Japan, India and Singapore, we are proud to be a technology organization and we share our knowledge across the Rakuten Tech community.

Position Overview:
We are seeking a highly skilled and experienced Staff Engineer to join our engineering team. The ideal candidate will possess deep expertise in Java, advanced Java, and microservices architecture, with strong skills in system design, low-level design (LLD), and database scaling. You will play a pivotal role in shaping our technology stack, ensuring high availability and scalability, and mentoring junior engineers. A key focus will be on leveraging Kubernetes for container orchestration and driving technical excellence across the organization. As a Staff Engineer, you will be responsible for hands-on development and for overseeing the delivery of our MCPD products, ensuring the highest standards of quality, performance, and reliability.

Key Responsibilities:
- Architect and design systems: Lead the design and development of highly scalable and resilient microservices-based systems, providing both high-level architecture (HLD) and detailed low-level designs (LLD).
- Code reviews and best practices: Drive engineering best practices through code reviews, design discussions, and collaboration with cross-functional teams.
- Database scaling: Design and optimize databases to support high transaction volumes, ensuring efficient scaling and performance tuning for production environments.
- Microservices development: Build, deploy, and manage microservices using modern technologies and frameworks, ensuring smooth operation in distributed environments.
- System ownership: Take full ownership of features or systems from conception to production, ensuring they meet reliability, performance, and security standards.
- Kubernetes & cloud-native architecture: Leverage Kubernetes for deploying, scaling, and managing containerized applications, ensuring seamless orchestration in cloud environments.
- Mentorship and leadership: Mentor junior engineers and contribute to their professional growth through knowledge sharing, coaching, and promoting a culture of continuous improvement.
- Collaboration: Work closely with product managers, architects, and other engineering teams to translate business requirements into technical solutions.
- Innovation and R&D: Stay updated with the latest technology trends and evaluate new tools, frameworks, and methodologies to ensure the team is leveraging cutting-edge solutions.

Qualifications:
- Bachelor’s or master’s degree in computer science, engineering, or a related field.
- 10+ years of experience in software development, with expertise in Java and advanced Java concepts.
- Proven experience with microservices architecture, including hands-on experience building and maintaining distributed systems.
- Strong understanding of system design and the ability to create high-availability, high-performance applications.
- Experience with low-level design (LLD), providing detailed design solutions for complex problems.
- Deep knowledge of database scaling techniques such as partitioning, sharding, and replication.
- Proficiency in Kubernetes for container orchestration, including production experience with large-scale deployments.
- Knowledge of messaging systems (Kafka, RabbitMQ).
- Familiarity with cloud platforms (AWS, GCP, Azure) and modern DevOps practices.

Preferred Skills:
- Experience with other languages such as Python, Go, or Node.js.
- Familiarity with AIOps, observability, and monitoring tools.

Mandatory Skills: Java, Spring Boot, Go, Python, LLD, HLD
Location: Bangalore, Karnataka, India
Years of Experience: 10 to 13 years

Why should you choose us?
Are you interested in working for a global leader in e-commerce? Are you excited about working on highly scalable platforms and applications that are accessed by millions of users every day? If so, read on to find out more about the opportunity. Rakuten is the largest e-commerce company in Japan and one of the largest e-commerce and internet services companies in the world. Rakuten is ranked among the top 20 most innovative companies in the world by Forbes. The Rakuten India Development Centre is the second-largest technology hub outside of Japan and builds platforms for global e-commerce, payments, digital, AI, and data science services across the globe. The India arm serves as a research and development center with an employee strength of around 450+ (and growing). Rakuten is committed to cultivating and preserving a culture of inclusion and connectedness. We are able to grow and learn better together with a diverse team and inclusive workforce. The collective sum of the individual differences, life experiences, knowledge, innovation, self-expression, and talent that our employees invest in their work represents not only part of our culture, but our reputation and Rakuten’s achievement as well. In recruiting for our team, we welcome the unique contributions that you can bring in terms of education, opinions, culture, ethnicity, race, sex, gender identity and expression, nation of origin, age, languages spoken, veteran’s status, color, religion, disability, sexual orientation and beliefs.
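Among the "partitioning, sharding, and replication" techniques this role's qualifications list, range partitioning is the easiest to sketch: ordered upper bounds plus a binary search pick the owning partition. A minimal illustration (the boundaries and the user-id key are hypothetical):

```python
import bisect

# Ascending upper bounds: partition i owns keys below BOUNDARIES[i];
# the last partition owns everything at or above the final bound.
BOUNDARIES = [1_000_000, 2_000_000, 3_000_000]

def partition_for(user_id: int) -> int:
    """Return the index of the partition that owns this user_id."""
    return bisect.bisect_right(BOUNDARIES, user_id)
```

With `bisect_right`, a key exactly equal to a boundary lands in the next partition, so partition 0 owns ids below 1,000,000.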

Posted 1 week ago

Apply

5.0 years

6 - 20 Lacs

India

On-site

Job Description: Senior Database Developer (MySQL & AWS Expert)
Location: Hyderabad, India
Experience: 5+ years (preferably 7+ years)
Employment Type: Full-time

Role Overview:
We are looking for an exceptionally strong Database Developer with 5+ years of hands-on experience specializing in MySQL database development on the Amazon AWS cloud. The ideal candidate should have deep expertise in high-performance query tuning, handling massive datasets, designing complex summary tables, and implementing scalable database architectures. This role demands a highly analytical and problem-solving mindset, capable of delivering optimized, mission-critical database solutions.

Key Responsibilities:
• Design, develop, and optimize highly scalable MySQL databases on AWS cloud infrastructure.
• Perform expert-level tuning of queries, indexes, and stored procedures for mission-critical applications.
• Handle large-scale datasets, ensuring efficient query execution and minimal latency.
• Architect and implement summary tables for optimized reporting and analytical performance.
• Work closely with software engineers to design efficient data models, indexing strategies, and partitioning techniques.
• Ensure high availability, disaster recovery, and fault tolerance of database systems.
• Perform root-cause analysis of database bottlenecks and implement robust solutions.
• Implement advanced replication strategies, read/write separation, and data sharding for optimal performance.
• Work with DevOps teams to automate database monitoring, backups, and performance metrics using AWS tools.
• Optimize stored procedures, triggers, and complex database functions to enhance system efficiency.
• Ensure best-in-class data security, encryption, and access control policies.

Must-Have Skills:
• Proven expertise in MySQL query optimization, indexing, and execution plan analysis.
• Strong knowledge of AWS RDS, Aurora, and cloud-native database services.
• Hands-on experience tuning high-performance, high-volume transactional databases.
• Deep understanding of database partitioning, sharding, caching, and replication strategies.
• Experience working with large-scale datasets (millions to billions of records) and ensuring low-latency queries.
• Advanced experience in database schema design, normalization, and optimization for high availability.
• Proficiency in query profiling, memory management, and database load balancing.
• Strong understanding of data warehousing, ETL processes, and analytics-driven data models.
• Expertise in troubleshooting slow queries and deadlocks in a production environment.
• Proficiency in scripting for automation with Python, Shell, or SQL.

Preferred Skills:
• Experience with big data technologies like Redshift, Snowflake, Hadoop, or Spark.
• Exposure to NoSQL databases (MongoDB, Redis) for hybrid data architectures.
• Hands-on experience with CI/CD pipelines and DevOps database management.
• Experience in predictive analytics and AI-driven data optimizations.

Educational Qualification:
• Bachelor's or Master’s degree in Computer Science, Information Technology, or a related field.

Salary & Benefits:
• Top-tier compensation package for highly skilled candidates.
• Fast-track career growth with opportunities for leadership roles.
• Comprehensive health benefits and performance-based bonuses.
• Exposure to cutting-edge technologies and large-scale data challenges.

If you are a world-class MySQL expert with a passion for solving complex database challenges and optimizing large-scale systems, apply now!

Job Types: Full-time, Permanent
Pay: ₹634,321.11 - ₹2,091,956.36 per year
Benefits: Health insurance, paid sick time, paid time off, Provident Fund
Schedule: Day shift, Monday to Friday
Language: English (Required)
Work Location: In person
Expected Start Date: 21/07/2025
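The "read/write separation" responsibility above is often a thin routing layer in front of the connection pools: writes go to the primary, reads are spread over replicas. A minimal sketch (the DSN strings are placeholders, and a real router must also pin reads that immediately follow a write back to the primary to avoid replica-lag anomalies):

```python
import random

# Placeholder DSNs; a real router would hold connection pools built with a
# MySQL client library rather than plain strings.
PRIMARY = "mysql://primary:3306/app"
REPLICAS = ["mysql://replica-1:3306/app", "mysql://replica-2:3306/app"]

def route(sql: str) -> str:
    """Pick a DSN for a statement based on whether it mutates data."""
    first_word = sql.lstrip().split(None, 1)[0].upper()
    if first_word in ("SELECT", "SHOW", "EXPLAIN"):
        return random.choice(REPLICAS)
    return PRIMARY  # INSERT/UPDATE/DELETE/DDL always hit the primary
```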

Posted 1 week ago

Apply

4.0 years

0 Lacs

India

On-site

We are looking for an innovative MySQL Database Administrator to assist in developing and creating cutting-edge solutions for the healthcare industry. This is an opportunity to develop and create cutting-edge solutions for our products and services while being part of an extraordinary team. This role will ensure that the business is well-coordinated and productive by managing its processes and coaching its people. The area of responsibility for this role is very wide and thus requires thorough knowledge of various company processes.

Who we are…
It’s simple. We do business differently, by empowering our team to create fresh ideas which impact lives everywhere. We don’t just dream it; we do it every day. At Axxess, we bring life-changing technology to healthcare, impacting the way people work, learn and grow their business. Our edge does not come from our technology; it comes from our people. We work as one team with a common goal to create shared success benefiting everyone.

What you will do...
- Participate in daily operations to support the business.
- Optimize databases (SQL calls) for maximum performance.
- Collaborate with management in the development of performance goals and long-term operational plans.
- Collaborate on operational processes and performance, and recommend solutions for improvement.
- Apply experience with SQL and stored routines.
- Experience working with HAProxy, ProxySQL or MaxScale is a plus.
- Apply working knowledge of AWS database technologies like Aurora and RDS.
- Monitor day-to-day operational systems and processes that provide visibility into goals, progress, and obstacles for our key initiatives.
- Maintain relationships with all technology team members, external partners, and vendors to provide input regarding operational activity and strategic goals.
- Monitor and analyze key metrics for the day-to-day performance of operations to ensure efficient and timely completion of tasks.
- Perform technical troubleshooting to diagnose, isolate and correct data and database issues.

Qualifications
What you bring…
- Required: 4 years of MySQL management experience.
- Required: Experience configuring MySQL HA and clustering solutions.
- Required: Large-scale MySQL deployments.
- Required: MySQL sharding architecture and techniques.
- Ability to work in a Linux/Unix-based OS environment.
- MS SQL experience is a plus.
- Software-as-a-service industry experience is a plus.
- Cloud experience is a plus.
- Strong communication, interpersonal and professional skills.
- Proven ability to plan and manage operational processes for maximum efficiency and productivity.
- Demonstrated ability to solve problems and anticipate customer needs.
- Demonstrated ability to organize, set, and implement priorities and manage multiple tasks.
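A recurring step in the SQL-call optimization this role involves is checking whether a composite index can serve a query at all: MySQL uses only a leftmost prefix of a multi-column index for equality lookups. A simplified sketch of that rule (equality predicates only; range predicates and index skips are ignored, and the column names are illustrative):

```python
def usable_prefix(index_cols: list[str], equality_cols: set[str]) -> list[str]:
    """Return the leftmost run of index columns a query's equality
    predicates can use, a simplified form of MySQL's leftmost-prefix rule."""
    prefix = []
    for col in index_cols:
        if col not in equality_cols:
            break  # a gap in the prefix stops index usability here
        prefix.append(col)
    return prefix
```

For example, `usable_prefix(["tenant_id", "created_at"], {"created_at"})` returns `[]`, signalling that this filter needs a different index or column order.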

Posted 1 week ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Company Description
About Sutherland
Artificial intelligence. Automation. Cloud engineering. Advanced analytics. For business leaders, these are key factors of success. For us, they’re our core expertise. We work with iconic brands worldwide. We bring them a unique value proposition through market-leading technology and business process excellence. We’ve created over 200 unique inventions under several patents across AI and other critical technologies. Leveraging our advanced products and platforms, we drive digital transformation, optimize critical business operations, reinvent experiences, and pioneer new solutions, all provided through a seamless “as a service” model. For each company, we provide new keys for their businesses, the people they work with, and the customers they serve. We tailor proven and rapid formulas to fit their unique DNA. We bring together human expertise and artificial intelligence to develop digital chemistry. This unlocks new possibilities, transformative outcomes and enduring relationships.

Sutherland: Unlocking digital performance. Delivering measurable results.

Job Description
Sutherland Global Services is seeking a MERN Stack Developer with experience in developing and testing applications.

Qualifications
To succeed in this position, you must have:
- A Bachelor's degree in Computer Science, Engineering or a related subject
- Experience with ASP.NET, MVC 4.0/3.5/2.0 and Entity Framework (Mandatory)
- Hands-on experience with .NET Core (Good to have)
- Development experience with SQL stored procedures (Mandatory)
- Implementation experience with LINQ queries (Mandatory)
- Knowledge of Azure cloud services (Good to have)
- Experience developing WCF and Web API services (SOAP / REST) (Mandatory)
- Minimum expertise with AngularJS and jQuery (Mandatory)
- Knowledge of DevOps and TFS branching and merging (Mandatory)

Technology Split:
- .NET Frameworks: Proficient in developing applications using .NET Core and .NET Framework. Experienced with ASP.NET MVC to create robust and scalable web applications.
- Programming Languages: Strong programming skills in C# for backend development. Fluent in JavaScript for dynamic front-end interactions.
- Database Management: Experience in designing and managing SQL queries. Experience working with SQL partitioning and sharding methodologies. Capable of debugging and developing complex queries, optimizing performance, and ensuring data security.
- Frontend Development: Proficient in HTML5 and CSS3 for structuring and designing web pages. Experience with responsive design and cross-browser compatibility.
- Development Tools and Environments: Familiar with Visual Studio and VS Code. Experience with version control using Git.
- Web Technologies: Skilled in client-side technologies including JavaScript, jQuery, and AJAX. Experience in developing and debugging web services such as REST APIs and SOAP. Experience with GraphQL API technologies is an advantage.
- Testing and Debugging: Capable of performing unit testing and integration testing to ensure application reliability and functionality. Proficient in debugging tools and techniques for both frontend and backend.
- Soft Skills: Strong problem-solving abilities and analytical skills.

Additional Information
This is a full-time position based out of Chennai.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Requirements
Description and Requirements
The MongoDB Administrator is responsible for managing and maintaining MongoDB database systems to ensure optimal performance, availability, and security. This role involves database design, implementation, monitoring, backup, and recovery tasks, as well as collaborating with development and operations teams to support application requirements.

Key Responsibilities:
- Install, configure, and maintain MongoDB databases across development, testing, and production environments.
- Monitor database performance, implement changes, and apply new patches and versions when required.
- Ensure high availability and disaster recovery through replication, sharding, and backup strategies.
- Create and maintain users, roles, and permissions in accordance with security policies.
- Design and implement indexing, data modeling, and query optimization for performance tuning.
- Set up and manage replica sets and sharded clusters.
- Implement and maintain monitoring systems such as MongoDB Ops Manager, Prometheus, or similar tools.
- Develop and execute scripts for automation and database maintenance tasks.
- Collaborate with application developers to optimize data operations and troubleshoot issues.
- Maintain documentation related to database configuration, standard procedures, and processes.
- Conduct regular security audits and ensure compliance with organizational and regulatory standards.

Additional Job Description
- Must have 5+ years of experience.
- Proven experience as a MongoDB Database Administrator.
- Strong understanding of MongoDB architecture, data modeling, and performance tuning.
- Experience with MongoDB Atlas and on-premises MongoDB installations.
- Familiarity with Linux/Unix system administration.
- Proficiency in scripting languages like Bash, Python, or PowerShell.
- Knowledge of DevOps tools and CI/CD pipelines is a plus.
- Experience with cloud platforms (AWS, Azure, GCP) is desirable.
- Solid understanding of backup and recovery practices.
- Strong problem-solving and analytical skills.

EEO Statement
At TELUS Digital, we enable customer experience innovation through spirited teamwork, agile thinking, and a caring culture that puts customers first. TELUS Digital is the global arm of TELUS Corporation, one of the largest telecommunications service providers in Canada. We deliver contact center and business process outsourcing (BPO) solutions to some of the world's largest corporations in the consumer electronics, finance, telecommunications and utilities sectors. With global call center delivery capabilities, our multi-shore, multi-language programs offer safe, secure infrastructure, value-based pricing, skills-based resources and exceptional customer service, all backed by TELUS, our multi-billion dollar telecommunications parent.

Equal Opportunity Employer
At TELUS Digital, we are proud to be an equal opportunity employer and are committed to creating a diverse and inclusive workplace. All aspects of employment, including the decision to hire and promote, are based on applicants’ qualifications, merits, competence and performance without regard to any characteristic related to diversity.
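Sizing replica sets for the high-availability duties this role describes reduces to strict-majority arithmetic: electing a primary and acknowledging a majority write both need more than half of the voting members. A quick sketch:

```python
def majority(voting_members: int) -> int:
    """Votes needed to elect a primary or satisfy a majority write concern."""
    return voting_members // 2 + 1

def failures_tolerated(voting_members: int) -> int:
    """Members that can be lost while the set can still elect a primary."""
    return voting_members - majority(voting_members)
```

This is why 3- and 5-member sets are the usual choices: going from 3 to 4 voting members raises the required majority from 2 to 3 without tolerating any additional failure.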

Posted 1 week ago

Apply

6.0 years

0 Lacs

India

Remote

Experience: 6+ Years ______________ Position Overview We are looking for a Senior MongoDB Database Developer or Architect with a minimum of 6 years of hands-on experience to lead and manage our MongoDB database infrastructure. This is a strategic role, ideal for someone who thrives in dynamic environments and has a strong background in MongoDB database administration, performance tuning, and architecture. The role will involve working across development, staging, and production systems to ensure scalability, reliability, and high performance of database operations. In addition to deep MongoDB expertise, familiarity with FiveTran, Snowflake, ETL processes, Node.js, JavaScript, and application design patterns for MongoDB will be highly valued. ______________ Key Responsibilities Monitor, maintain, and optimize MongoDB environments for performance and uptime. Configure and fine-tune database parameters for efficiency. Implement and manage robust backup and recovery strategies. Ensure data integrity, security, and compliance across all environments. Migrate databases from on-premises to cloud platforms (e.g., MongoDB Community to Atlas). Upgrade MongoDB versions and supporting agents/tools as needed. Application & Production Support Troubleshoot and resolve database and application-related issues. Provide expert-level support to engineering teams and end-users. Perform routine maintenance and system upgrades. Maintain comprehensive documentation for procedures, configurations, and best practices. Database Development Collaborate with development teams to implement MongoDB best practices and application design patterns. Create and maintain scripts for data extraction, transformation, and reporting. Enhance and maintain ETL pipelines, including integration with FiveTran and Snowflake. Capacity Planning Analyze usage trends and plan for future data growth and scaling. Recommend and implement data growth strategies and resource right-sizing. 
Monitor performance metrics and address bottlenecks proactively. Manage data lifecycle, including archival and retention strategies. Performance Optimization Optimize query execution plans, indexing strategies, and replication performance. Monitor and tune disk I/O, oplog usage, and overall system throughput. Implement server scaling and sharding strategies as necessary. Disaster Recovery & High Availability Design and test comprehensive disaster recovery and high-availability solutions. Ensure consistent backup jobs and regularly validate recovery processes. Work with cross-functional teams to maintain DR readiness. ______________ Preferred Skills & Technologies Strong expertise in MongoDB (including sharding, replication, performance tuning) Experience with MongoDB Atlas and Ops Manager Proficiency in ETL pipelines and tools (FiveTran, Snowflake) Hands-on scripting and development experience (JavaScript, Node.js) Experience with cloud migration and database modernization Familiarity with DevOps practices and CI/CD pipelines for database changes ______________ This is a remote opportunity open for both onshore (USA) and offshore (India) candidates.
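The capacity-planning and oplog-monitoring duties above often come down to one back-of-envelope calculation: how long can a secondary stay offline before it falls off the end of the oplog and needs a full resync? A hedged sketch, with purely hypothetical sizes and write rates:

```python
def oplog_window_hours(oplog_size_gb: float, write_rate_mb_per_hour: float) -> float:
    """Rough oplog retention window in hours.

    The oplog is a circular buffer: once writes have produced more
    data than it holds, the oldest entries are overwritten. A lagging
    secondary must catch up within this window or be resynced.
    """
    return (oplog_size_gb * 1024) / write_rate_mb_per_hour

# e.g. a 50 GB oplog absorbing 1 GB (1024 MB) of writes per hour
# gives roughly a 50-hour maintenance window.
print(oplog_window_hours(50, 1024))
```

Real write rates are bursty, so capacity planning should use the peak sustained rate, not the average, when sizing the oplog.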

Posted 1 week ago

Apply

5.0 years

0 Lacs

India

On-site

About ArthaChain ArthaChain is an ultra-scalable, AI-powered Layer 1 blockchain developed by Diigoo Tech Pvt. Ltd. Designed with SVCP (Social Verified Consensus Protocol), AI sharding, and advanced modular architecture, ArthaChain delivers unmatched performance — over 22 million TPS with sub-2-second finality — making it a future-proof infrastructure for global Web3 adoption. We are now entering our pre-seed investment round ($1.45M at $10M valuation) and are looking for a battle-tested Funding Advisor to help us close this round and secure institutional, VC, and grant-based funding. Who We Want We are not looking for generalists or entry-level advisors. This role is strictly for experienced professionals with a proven history of raising capital for blockchain/Web3 startups. You are someone who: Has closed multiple 6–7 figure rounds (seed to Series A) in the crypto/Web3 space. Has deep connections with VCs, crypto funds, DAOs, or family offices that actively invest in blockchain. Understands the nuances of tokenomics, regulatory dynamics, and crypto-native fundraising models (SAFT, SAFEs, equity-token hybrids, etc.). Can personally make introductions to the right investors, and not just advise from the sidelines. Key Responsibilities Strategically guide our pre-seed and seed round fundraising, including pitch optimization and valuation strategy. Leverage your existing VC/fund/angel network to secure warm intros and build momentum. Advise on funding instruments, equity/token allocations, and long-term capital structuring. Support deal negotiation, due diligence, and investor relations. Assist with applying to Web3 accelerators and grant programs. Minimum Requirements 5+ years' experience in Web3 fundraising, VC, crypto banking, or strategic finance. Demonstrated success in funding blockchain or crypto-native companies. Deep knowledge of Web3 capital markets, tokenomics, and regulatory considerations. Ability to commit 10–15 hours/month and deliver measurable traction. 
Strong ethical grounding; we are a legitimate, compliance-first project. Compensation: Retainer + success fee

Posted 1 week ago

Apply

5.0 - 8.0 years

1 - 5 Lacs

Chennai

On-site

Wipro Limited (NYSE: WIT, BSE: 507685, NSE: WIPRO) is a leading technology services and consulting company focused on building innovative solutions that address clients’ most complex digital transformation needs. Leveraging our holistic portfolio of capabilities in consulting, design, engineering, and operations, we help clients realize their boldest ambitions and build future-ready, sustainable businesses. With over 230,000 employees and business partners across 65 countries, we deliver on the promise of helping our customers, colleagues, and communities thrive in an ever-changing world. For additional information, visit us at www.wipro.com. Hands-on experience in data modelling for both OLTP and OLAP systems. In-depth knowledge of Conceptual, Logical, and Physical data modelling. Strong understanding of indexing, partitioning, and data sharding with practical experience. Experience in identifying and addressing factors affecting database performance for near-real-time reporting and application interaction. Proficiency with at least one data modelling tool (preferably DBSchema). Functional knowledge of the mutual fund industry is a plus. Familiarity with GCP databases like AlloyDB, Cloud SQL, and BigQuery. Willingness to work from the Chennai customer site (office presence is mandatory), requiring five days of on-site work each week. Mandatory Skills: Cloud-PaaS-GCP-Google Cloud Platform. Experience: 5-8 Years. Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention. Of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. 
Applications from people with disabilities are explicitly welcome.

Posted 2 weeks ago

Apply

0 years

4 - 10 Lacs

Pune

On-site

Our Purpose Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential. Title and Summary Manager, Software Engineering Who is Mastercard? Mastercard is a global technology company in the payments industry. Our mission is to connect and power an inclusive, digital economy that benefits everyone, everywhere by making transactions safe, simple, smart, and accessible. Using secure data and networks, partnerships and passion, our innovations and solutions help individuals, financial institutions, governments, and businesses realize their greatest potential. Our decency quotient, or DQ, drives our culture and everything we do inside and outside of our company. With connections across more than 210 countries and territories, we are building a sustainable world that unlocks priceless possibilities for all. Overview The Franchise and Legal Solutions team is looking for a strong, people-focused Manager to lead development teams. We are looking for Managers who can bring unique perspectives, have strong people management and mentorship skills, and have experience leading teams through the ever-changing technology landscape. In this role, you will lead multiple teams through various projects which are critical to Mastercard, leveraging technologies such as Pivotal Cloud Foundry and microservice architecture while using Agile/SAFe methodologies. 
An ideal candidate is someone who has excellent verbal and written communication skills, who has a passion for inspiring others and helping them grow their careers, who enjoys solving problems in a challenging environment, and who has the desire to take their career to the next level. If any of these opportunities excite you, we would love to talk. Role Responsible for leading a team of IT professionals in the development and delivery of quality software solutions in a timely and cost-effective manner. Hands-on technical leader who brings considerable experience doing application development for customer-facing applications using a broad range of technologies. Interacts with technology staff including Software and Test Engineers, Product Management, and Program Management to identify the scope of projects and deliver services. Successfully lead the definition, development and delivery of major cross-department initiatives with broad scope and long-term business implications. Partner with the business to identify innovative solutions and drive strategic decisions. Develop business cases in alignment with Mastercard objectives. Actively look for opportunities for innovation and creative problem solving. Make presentations to senior management, business owners and other stakeholders and support their decision-making process. Responsible for hiring and developing engineering talent. Owner of one or more major functional areas of our code. All About You You are a hands-on, technical and people leader. You regularly dive deep into everything your team does. Your leadership style is based on building trust with your team, your peers, and your management. You are high-energy, detail-oriented, proactive and adaptive and can work in a dynamic environment. You possess strong communication skills - both verbal and written - and are able to get your point across efficiently to a wide range of audiences. 
You are obsessed with results, and effectively communicate objectives and how success will be measured to your team and other stakeholders. You have strong decision-making skills, and you have a bias for action rather than drawn-out analysis because you understand that most decisions are not irreversible. You regularly lead retrospectives, and continually improve as a result. You have strong collaborative and organizational skills and you value transparency over avoiding scrutiny. You understand how to guide an engineer’s career, including performance evaluation, coaching, and motivation. You are experienced in distributed system design and implementation, and understand how to build high-performance, high-reliability operation of large-scale distributed systems. You have mastered consensus mechanisms, partitioning and sharding, and reliability engineering. You have expert-level knowledge of software development processes, and you practice Agile at an advanced level. You have experience and knowledge of Test Driven and Domain Driven Development (TDD / DDD). You are an experienced user of one or more Application Lifecycle Management tools (e.g., Rally, Jira). As a leader, you own your code repositories. You are a Git expert, and have experience with one or more SCM hosting tools such as GitHub or BitBucket. Your repositories look like trees, not bushes. You are experienced with CI/CD. Corporate Security Responsibility All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization and, therefore, it is expected that every person working for, or on behalf of, Mastercard is responsible for information security and must: Abide by Mastercard’s security policies and practices; Ensure the confidentiality and integrity of the information being accessed; Report any suspected information security violation or breach; and Complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines.

Posted 2 weeks ago

Apply

4.0 - 6.0 years

0 Lacs

New Delhi, Delhi, India

On-site

Job Description: Key Responsibilities: Install, configure, and maintain MongoDB and other databases. Ensure database performance, security, and scalability. Implement replication, sharding, and backup strategies. Optimize queries, indexing, and storage for efficiency. Monitor database health using tools like Ops Manager, Prometheus, or Grafana. Troubleshoot database issues and ensure high availability. Automate database management tasks using Shell/Python scripting. Collaborate with development teams to optimize schema design and queries. Requirements: 4-6 years of experience in database administration. Strong expertise in MongoDB (preferred), MySQL, or PostgreSQL. Hands-on experience with replication, sharding, and high availability. Knowledge of backup, restore, and disaster recovery strategies. Experience in Linux environments and scripting (Shell, Python). Familiarity with MongoDB Atlas, AWS RDS, or cloud-based databases. Preferred Qualifications: MongoDB Certification is a plus. Experience with DevOps tools like Docker, Kubernetes, or Ansible. Exposure to both SQL and NoSQL databases. Application Link: https://jobs.klimb.io/livekeeping/684baf41fb58884e2963ec8c?source=social_other&id=67adae179c45f1d2d0abf829
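Backup strategy, one of the responsibilities above, usually means a retention policy as much as a dump schedule. A minimal grandfather-father-son style sketch in plain Python (the 7-day/4-week windows and Sunday anchor are hypothetical policy choices, not a standard):

```python
from datetime import date

def keep_backup(backup_day: date, today: date, daily: int = 7, weekly: int = 4) -> bool:
    """Decide whether a backup taken on `backup_day` should be retained.

    Keep every backup from the last `daily` days, plus the Sunday
    backup from each of the last `weekly` weeks; expire the rest.
    """
    age = (today - backup_day).days
    if age < daily:
        return True  # recent daily backup
    if backup_day.weekday() == 6 and age < weekly * 7:
        return True  # weekly (Sunday) backup still inside the window
    return False

# A pruning job would iterate over existing dump directories and
# delete those for which keep_backup(...) returns False.
```

The same decision function works whether the backups come from mongodump, filesystem snapshots, or a cloud provider's snapshot API; only the deletion step differs.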

Posted 2 weeks ago

Apply

10.0 years

4 - 8 Lacs

Bengaluru

On-site

Redefine the future of customer experiences. One conversation at a time. We're changing the game with a first-of-its-kind, conversation-centric platform that unifies team collaboration and customer experience in one place. Powered by AI, built by amazing humans. Our culture is forward-thinking, customer-obsessed and built on an unwavering belief that connection fuels business and life; connections to our customers with our signature Amazing Service®, our products and services, and most importantly, each other. Since 2008, 100,000+ companies and 1M+ users rely on Nextiva for customer and team communication. If you're ready to collaborate and create with amazing people, let your personality shine and be on the frontlines of helping businesses deliver amazing experiences, you're in the right place. Build Amazing - Deliver Amazing - Live Amazing - Be Amazing We are seeking a passionate and experienced Staff Engineer to spearhead our growing development team. In this leadership role, you'll play a pivotal role in driving innovation and excellence across our software engineering efforts. Who You Are: You are a seasoned software engineer with a proven track record of success in building and leading high-performing teams. You possess a blend of technical expertise, strong leadership skills, and a passion for building elegant and efficient software solutions. What You'll Do: Technical Expertise: Bring in the best practices for writing high quality (bug free and acceptable performance), reliable, maintainable software. Architect and Design: Lead the design and architecture of complex software systems/problem, ensuring scalability, maintainability, and security. High impact work and excellence delivered : Champion a culture of continuous improvement, driving efficient development processes and high-quality code delivery. Make a positive impact in team's output by mentoring and coaching younger members, helping out and unblock people to achieve the objectives. 
Hands-on Problem Solving: Tackle intricate technical challenges and provide effective solutions. Communication & Collaboration: Lead by example with clear and concise communication. Collaborate effectively with stakeholders across various teams. Technical Skills: Proven experience as a Software Engineer with a minimum of 10 years of experience. Strong understanding of system design principles (CAP theorem, PACELC, USL, consistency, hashing, sharding, partitioning, etc.), especially how to tackle functional and non-functional requirements like scaling, security, and reliability. In-depth knowledge of modern software development methodologies (Agile, DevOps). Delivering high-quality software with best practices like SOLID, the BASE paradigm, design patterns, and different architectural styles. Expertise in building RESTful web applications using Java 11+ and the Spring Framework. Advanced understanding of tools like Maven, Gradle, Git, Docker, Kubernetes, and cloud platforms (GCP) is highly desired. Deep experience in at least one of MySQL/Postgres/MongoDB and caching solutions (Redis) is desired. Nextiva DNA (Core Competencies) Nextiva's most successful team members share common traits and behaviors: Drives Results: Action-oriented with a passion for solving problems. They bring clarity and simplicity to ambiguous situations, challenge the status quo, and ask what can be done differently. They lead and drive change, celebrating success to build more success. Critical Thinker: Understands the "why" and identifies key drivers, learning from the past. They are fact-based and data-driven, forward-thinking, and see problems a few steps ahead. They provide options, recommendations, and actions, understanding risks and dependencies. Right Attitude: They are team-oriented, collaborative, competitive, and hate losing. They are resilient, able to bounce back from setbacks, zoom in and out, and get in the trenches to help solve important problems. 
They cultivate a culture of service, learning, support, and respect, caring for customers and teams. Total Rewards Our Total Rewards offerings are designed to allow our employees to take care of themselves and their families so they can be their best, in and out of the office. Our compensation packages are tailored to each role and candidate's qualifications. We consider a wide range of factors, including skills, experience, training, and certifications, when determining compensation. We aim to offer competitive salaries or wages that reflect the value you bring to our team. Depending on the position, compensation may include base salary and/or hourly wages, incentives, or bonuses. Medical - Medical insurance coverage is available for employees, their spouse, and up to two dependent children with a limit of 500,000 INR, as well as their parents or in-laws for up to 300,000 INR. This comprehensive coverage ensures that essential healthcare needs are met for the entire family unit, providing peace of mind and security in times of medical necessity. Group Term & Group Personal Accident Insurance - Provides insurance coverage against the risk of death/injury during the policy period sustained due to an accident caused by violent, visible & external means. Coverage Type - Employee Only. Sum Insured - 3 times annual CTC with a minimum cap of INR 10,00,000. Free Cover Limit - 1.5 Crore. Work-Life Balance - 15 days of Privilege leave per calendar year, 6 days of Paid Sick leave per calendar year, 6 days of Casual leave per calendar year. Paid 26 weeks of Maternity leave, 1 week of Paternity leave, a day off on your Birthday, and paid holidays. Financial Security - Provident Fund & Gratuity. Wellness - Employee Assistance Program and comprehensive wellness initiatives. Growth - Access to ongoing learning and development opportunities and career advancement. At Nextiva, we're committed to supporting our employees' health, well-being, and professional growth. 
Join us and build a rewarding career! Established in 2008 and headquartered in Scottsdale, Arizona, Nextiva secured $200M from Goldman Sachs in late 2021, valuing the company at $2.7B. To see what's going on at Nextiva, check us out on Instagram, Instagram (MX), YouTube, LinkedIn, and the Nextiva blog. #LI-AK1 #LI-Hybrid
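Among the system design principles this role names, consistent hashing is the one most directly expressible in a few lines: it keeps key-to-node assignments stable when nodes join or leave. A hedged pure-Python sketch (node names, the vnode count, and the md5 choice are illustrative assumptions):

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Map keys to nodes on a hash ring with virtual nodes.

    Each physical node owns many points ("vnodes") on the ring; a key
    belongs to the first vnode clockwise from its own hash. Removing a
    node only remaps the keys that vnode owned, not the whole keyspace.
    """

    def __init__(self, nodes, vnodes: int = 100):
        self._ring = []
        for node in nodes:
            for i in range(vnodes):
                self._ring.append((self._hash(f"{node}#{i}"), node))
        self._ring.sort()

    @staticmethod
    def _hash(key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def node_for(self, key: str) -> str:
        h = self._hash(key)
        # First vnode at or after h, wrapping around the ring.
        idx = bisect.bisect(self._ring, (h,)) % len(self._ring)
        return self._ring[idx][1]
```

This is the property that naive `hash(key) % n` lacks: with modulo placement, changing `n` reshuffles almost every key, while the ring moves only the departed node's share.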

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Thiruvananthapuram

On-site

Key Responsibilities: Develop RESTful and GraphQL APIs using Node.js and Express/Nest.js. Design and manage database schemas in MongoDB and SQL (PostgreSQL/MySQL). Write efficient, reusable, and scalable backend logic and database queries. Deploy, manage, and monitor services in Kubernetes environments (preferably using Helm). Collaborate with DevOps to implement CI/CD pipelines and optimize container workflows. Optimize application performance and troubleshoot issues in production. Work closely with frontend developers, product owners, and QA engineers in Agile sprints. Maintain security and data protection best practices in all backend services. Required Skills & Qualifications: 4+ years of backend development experience with Node.js. Strong expertise in MongoDB (aggregation pipelines, indexing, sharding, etc.). Good understanding of relational databases (SQL) and schema optimization. Hands-on experience with Kubernetes for deploying and managing containerized applications. Proficient with Docker, Helm, and Kubernetes YAML configurations. Strong knowledge of API development, JWT/OAuth2 authentication, and web security. Experience with version control systems like Git and workflows like GitFlow. Familiarity with unit testing and integration testing frameworks (Jest, Mocha, etc.). Experience working in Agile/Scrum teams. Job Types: Full-time, Permanent Location Type: In-person Schedule: Day shift Ability to commute/relocate: Thiruvananthapuram, Kerala: Reliably commute or planning to relocate before starting work (Required) Application Question(s): Current monthly salary? Least expected monthly salary? How early can you join? Experience: Node.js: 4 years (Required) Microservices: 4 years (Required) MongoDB: 4 years (Preferred) MySQL: 3 years (Preferred) Redis: 3 years (Preferred) GraphQL: 4 years (Preferred) React.js: 1 year (Preferred) Total: 5 years (Preferred) Work Location: In person
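The MongoDB aggregation pipelines this role calls for are essentially server-side filter/group/sum chains. As a hedged illustration of what a `$match` + `$group` pipeline computes, here is the same logic in plain Python; the `customer`/`amount` field names and the pipeline shape in the docstring are hypothetical examples, not an actual API call:

```python
from collections import defaultdict

def aggregate(orders, min_total=0):
    """Plain-Python equivalent of a pipeline roughly like:

    [{"$match": {"amount": {"$gte": min_total}}},
     {"$group": {"_id": "$customer", "total": {"$sum": "$amount"}}}]

    i.e. filter documents, then sum amounts per customer.
    """
    totals = defaultdict(float)
    for order in orders:
        if order["amount"] >= min_total:          # $match stage
            totals[order["customer"]] += order["amount"]  # $group / $sum
    return dict(totals)
```

The point of pushing this into the database rather than application code is that the server can use indexes for the `$match` stage and never ship the filtered-out documents over the wire.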

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Engineering Manager - Platforms Our Story Zepto is a fast-growing startup that delivers products in 10 minutes through an optimized network of dark stores that we're building across the country! Within a year after launching, we've raised $1.26B. Our investors include Y Combinator, Nexus Venture Partners, and many more! We’ve built out one of the best startup teams in India, with Senior Executives from Uber, Flipkart, Dream11, PhonePe, Goldman Sachs, Pepperfry, and institutions like Stanford, INSEAD, IITs, and IIMs. The magic of Zepto is our ability to consistently deliver 15,000+ products in 10 minutes flat. It's the core of what we do, and the reason why we’ve been able to grow so fast with phenomenal customer love. Our mission is simple: to make 10-minute delivery the new normal. Engineering @ Zepto Why this role matters At Zepto, we deliver the 10-minute miracle to millions of customers every day — and at the heart of this promise lies our Core Platform team. This team powers the critical infrastructure that enables speed, scale, and intelligence across the company. We own the Customer Data Platform ecosystem, including the User Profile Store, real-time Event Pipelines, and Segmentation & Audience Management — serving 10M+ requests per minute. Our Journey Builder and Communication Platform enables Product and Marketing teams to launch personalized user experiences and campaigns in real time. We also manage Access Control systems for both customers and enterprise users, providing secure and scalable authentication and authorization. In addition, we offer a self-serve Machine Learning Platform that lets Data Scientists build, deploy, and serve models with millisecond latency. What you’ll do Own and drive the roadmap for core platform systems, balancing long-term architectural vision with short-term execution and sprint planning. 
Design and scale high-performance, low-latency backend services capable of handling tens of millions of requests per minute while meeting strict SLAs and reliability goals. Build self-serve infrastructure capabilities that enable product, growth, and data science teams to move fast and independently. Lead the end-to-end lifecycle of critical infrastructure components, including real-time data pipelines, service orchestration, observability, and operational excellence. Develop and maintain secure, multi-tenant systems that handle authentication, authorization, and access control across internal and external surfaces. Mentor and grow a high-performing team of engineers through technical guidance, career development, and a strong culture of ownership and excellence. Collaborate cross-functionally with Product, Growth, Data, and Operations teams to align platform capabilities with business goals and user needs. What makes you a great fit 8+ years of backend/platform experience, including 2+ years leading engineering teams. Proven ability to build and scale high-throughput, low-latency systems (10M+ RPM, <100ms p95). Experience delivering self-serve platform tools for internal teams like Product, Growth, or Data Science. Strength in one or more areas: streaming systems, customer data infra, auth platforms, or ML serving infra. Solid understanding of distributed systems — caching, sharding, queuing, and system observability. Skilled at balancing architecture vision with hands-on execution. Passion for mentorship, engineering excellence, and team development. Strong cross-functional collaboration and communication in fast-paced environments.
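Of the distributed-systems fundamentals this role lists, caching is the one with the most compact canonical form: an LRU cache that evicts the least-recently-used entry when full. A minimal sketch using the standard library (capacity and key types are arbitrary illustrative choices):

```python
from collections import OrderedDict

class LRUCache:
    """Bounded cache with least-recently-used eviction.

    OrderedDict remembers insertion order; moving a key to the end on
    every access keeps the least-recently-used entry at the front,
    which is the one popped when capacity is exceeded.
    """

    def __init__(self, capacity: int):
        self.capacity = capacity
        self._data = OrderedDict()

    def get(self, key):
        if key not in self._data:
            return None  # cache miss
        self._data.move_to_end(key)  # mark as most recently used
        return self._data[key]

    def put(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)  # evict the LRU entry
```

Production services would layer TTLs, metrics, and thread safety on top (or use Redis, as the posting's stack suggests), but the eviction policy itself is exactly this.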

Posted 2 weeks ago

Apply

9.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Location: Chennai (Work From Office) Experience: 6–9 years Type: Contract About The Role We’re looking for a highly skilled Senior Data Modeller to lead the design and optimization of data models supporting enterprise applications and analytics on cloud infrastructure. You’ll work closely with engineering, product, and business stakeholders to deliver scalable, performant, and clean data structures that power mission-critical platforms. Key Responsibilities Lead the development of conceptual, logical, and physical data models for OLTP and OLAP systems. Collaborate with engineering teams to optimize schema design, indexing, and partitioning for performance and scalability. Define data modeling standards, governance rules, and best practices for schema evolution. Build models using tools like DBSchema, and integrate across GCP data services (BigQuery, CloudSQL, AlloyDB). Work closely with cross-functional teams to translate business requirements into efficient data models. Enable data quality, consistency, and reuse across multiple systems and business domains. Participate in code reviews, design validations, and performance tuning sessions. Must-Have Skills 6–9 years of experience in data modeling for enterprise platforms. Deep understanding of OLTP and OLAP modeling patterns and trade-offs. Strong expertise in indexing, partitioning, sharding, and performance tuning. Experience with data modeling tools like DBSchema, ER/Studio, or ERwin. Solid hands-on experience with GCP databases: BigQuery, CloudSQL, and AlloyDB. Strong SQL skills and ability to design efficient database queries. Ability to work independently and drive decisions around data architecture and governance. Nice-to-Haves Functional knowledge of mutual fund or BFSI domains. Experience with ElasticSearch for analytical or logging pipelines. Exposure to cloud-native architecture patterns and modern ETL workflows. Familiarity with data versioning, lineage, or metadata management. 
Qualifications Bachelor's or Master’s degree in Computer Science, Information Systems, or a related field. Relevant certifications (e.g., Google Cloud – Professional Data Engineer, CDMP) are a plus. Why Join Us Work on complex, high-scale cloud platforms with a strong engineering team. Be part of a fast-growing global consulting network delivering cutting-edge data solutions. Enjoy a collaborative work culture, career development support, and competitive pay. Opportunity to influence data architecture decisions across cloud transformation projects. Skills: DBSchema, CloudSQL, SQL, performance tuning, cloud, BigQuery, ERwin, sharding, ER/Studio, GCP, data modeling, AlloyDB, partitioning, architecture, OLTP, data architecture, indexing, OLAP
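The partitioning expertise this role asks for boils down to a routing rule: given a row's partition-key value and a set of boundaries, which partition holds it? A minimal range-partitioning sketch in plain Python (the boundary values are hypothetical; engines like BigQuery or PostgreSQL apply the same rule internally):

```python
import bisect

def range_partition(value, boundaries):
    """Return the index of the range partition that holds `value`.

    `boundaries` are sorted upper bounds (exclusive is modeled here as
    values equal to a bound going to the next partition); anything at
    or beyond the last bound lands in the final overflow partition.
    """
    return bisect.bisect_right(boundaries, value)

# With boundaries [10, 20, 30]: values < 10 -> partition 0,
# 10..19 -> 1, 20..29 -> 2, and >= 30 -> overflow partition 3.
```

The payoff is partition pruning: a query filtered on the partition key only has to scan the partitions the planner computes with exactly this arithmetic.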

Posted 2 weeks ago

Apply

9.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Title : Senior Data Modeller Location: Chennai (Work From Office) Experience: 6–9 years Type: Contract About The Role We’re looking for a highly skilled Senior Data Modeller to lead the design and optimization of data models supporting enterprise applications and analytics on cloud infrastructure. You’ll work closely with engineering, product, and business stakeholders to deliver scalable, performant, and clean data structures that power mission-critical platforms. Key Responsibilities Lead the development of conceptual, logical, and physical data models for OLTP and OLAP systems. Collaborate with engineering teams to optimize schema design, indexing, and partitioning for performance and scalability. Define data modeling standards, governance rules, and best practices for schema evolution. Build models using tools like DBSchema, and integrate across GCP data services (BigQuery, CloudSQL, AlloyDB). Work closely with cross-functional teams to translate business requirements into efficient data models. Enable data quality, consistency, and reuse across multiple systems and business domains. Participate in code reviews, design validations, and performance tuning sessions. Must-Have Skills 6–9 years of experience in data modeling for enterprise platforms. Deep understanding of OLTP and OLAP modeling patterns and trade-offs. Strong expertise in indexing, partitioning, sharding, and performance tuning. Experience with data modeling tools like DBSchema, ER/Studio, or ERwin. Solid hands-on experience with GCP databases: BigQuery, CloudSQL, and AlloyDB. Strong SQL skills and ability to design efficient database queries. Ability to work independently and drive decisions around data architecture and governance. Nice-to-Haves Functional knowledge of mutual fund or BFSI domains. Experience with ElasticSearch for analytical or logging pipelines. Exposure to cloud-native architecture patterns and modern ETL workflows. 
Familiarity with data versioning, lineage, or metadata management. Qualifications Bachelor's or Master’s degree in Computer Science, Information Systems, or a related field. Relevant certifications (e.g., Google Cloud – Professional Data Engineer, CDMP) are a plus. Why Join Us Work on complex, high-scale cloud platforms with a strong engineering team. Be part of a fast-growing global consulting network delivering cutting-edge data solutions. Enjoy a collaborative work culture, career development support, and competitive pay. Opportunity to influence data architecture decisions across cloud transformation projects. Skills: indexing, AlloyDB, DBSchema, cloud, performance tuning, data modeling, OLTP, SQL, GCP databases, data architecture, partitioning, ERwin, OLAP, ER/Studio, sharding, BigQuery, CloudSQL

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Title: Data Modeller Experience: 5–8 Years Location: Chennai Notice Period: Immediate Joiners Preferred Employment Type: Contract About The Role We are looking for an experienced Data Modeller with a strong background in designing and optimizing data structures for both OLTP and OLAP systems on Google Cloud Platform (GCP). The ideal candidate will have hands-on expertise in conceptual, logical, and physical data modelling, indexing strategies, and performance tuning. Experience with CloudSQL, BigQuery, and data modelling tools like DBSchema is essential. Key Responsibilities Design and develop conceptual, logical, and physical data models for structured and semi-structured data. Implement data models optimized for both transactional (OLTP) and analytical (OLAP) use cases. Define and apply indexing, partitioning, and data sharding strategies for performance and scalability. Collaborate with data engineers and developers to ensure models support reporting, analytics, and application requirements. Work with GCP data services like AlloyDB, CloudSQL, and BigQuery to design efficient cloud-native data solutions. Perform data profiling, quality checks, and support schema evolution and impact analysis. Use data modelling tools such as DBSchema (preferred) or other industry-standard tools. Tune database designs for near real-time reporting and performance optimization. Document metadata, lineage, and model specifications for team consumption and governance. Required Skills 5–8 years of experience in data modelling for enterprise systems. Expertise in OLTP/OLAP modelling, indexing, and database performance tuning. Hands-on experience with GCP data platforms – CloudSQL, AlloyDB, BigQuery. Proficiency in using data modelling tools (preferably DBSchema). Strong knowledge of partitioning, sharding, and database schema optimization. Good understanding of data governance, lineage, and best practices. 
Nice To Have Functional knowledge or domain exposure to the mutual fund or financial services industry. Experience with real-time analytics and streaming data environments. Familiarity with other cloud platforms (AWS, Azure) and their data services. What We Offer Opportunity to work on enterprise-level, cloud-first data architecture. Exposure to modern data platforms and modeling strategies on GCP. A collaborative, growth-focused work culture. Competitive salary and learning opportunities, including cloud certifications. Work with a team solving complex data problems in real-world financial domains. Skills: indexing,alloydb,database,analytics,dbschema,data modelling,google cloud platform,oltp,data quality checks,database performance tuning,gcp,data profiling,schema evolution,data governance,partitioning,olap,sharding,bigquery,modelling tools,cloudsql

Posted 2 weeks ago

Apply

0.0 - 5.0 years

0 Lacs

Thiruvananthapuram, Kerala

On-site

Key Responsibilities: Develop RESTful and GraphQL APIs using Node.js and Express/Nest.js. Design and manage database schemas in MongoDB and SQL (PostgreSQL/MySQL). Write efficient, reusable, and scalable backend logic and database queries. Deploy, manage, and monitor services in Kubernetes environments (preferably using Helm). Collaborate with DevOps to implement CI/CD pipelines and optimize container workflows. Optimize application performance and troubleshoot issues in production. Work closely with frontend developers, product owners, and QA engineers in Agile sprints. Maintain security and data protection best practices in all backend services. Required Skills & Qualifications: 4+ years of backend development experience with Node.js. Strong expertise in MongoDB (aggregation pipelines, indexing, sharding, etc.). Good understanding of relational databases (SQL) and schema optimization. Hands-on experience with Kubernetes for deploying and managing containerized applications. Proficient with Docker, Helm, and Kubernetes YAML configurations. Strong knowledge of API development, JWT/OAuth2 authentication, and web security. Experience with version control systems like Git and workflows like GitFlow. Familiarity with unit testing and integration testing frameworks (Jest, Mocha, etc.). Experience working in Agile/Scrum teams. Job Types: Full-time, Permanent Location Type: In-person Schedule: Day shift Ability to commute/relocate: Thiruvananthapuram, Kerala: Reliably commute or planning to relocate before starting work (Required) Application Question(s): Current monthly salary? Least expected monthly salary? How early can you join? Experience: Node.js: 4 years (Required) Microservices: 4 years (Required) MongoDB: 4 years (Preferred) MySQL: 3 years (Preferred) Redis: 3 years (Preferred) GraphQL: 4 years (Preferred) react.js: 1 year (Preferred) total: 5 years (Preferred) Work Location: In person
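The MongoDB aggregation pipelines this posting emphasizes can be understood as a chain of stages over documents. The sketch below mimics a two-stage $match/$group pipeline in plain Python so the semantics are visible without a running database; the collection and field names are invented for illustration.

```python
from collections import defaultdict

# Hypothetical order documents, as they might sit in a MongoDB collection.
orders = [
    {"status": "paid", "region": "south", "amount": 120},
    {"status": "paid", "region": "north", "amount": 80},
    {"status": "cancelled", "region": "south", "amount": 50},
    {"status": "paid", "region": "south", "amount": 30},
]

# Plain-Python equivalent of the pipeline:
#   [{"$match": {"status": "paid"}},
#    {"$group": {"_id": "$region", "total": {"$sum": "$amount"}}}]
matched = [o for o in orders if o["status"] == "paid"]   # $match stage
totals = defaultdict(int)
for o in matched:                                        # $group + $sum
    totals[o["region"]] += o["amount"]

assert totals == {"south": 150, "north": 80}
```

In production the same pipeline would be passed to `collection.aggregate(...)` so the filtering and grouping run server-side, where indexes on the matched fields keep the $match stage cheap.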

Posted 2 weeks ago

Apply

10.0 years

0 Lacs

Itanagar, Arunachal Pradesh, India

Remote

Job Title: Senior DevOps Engineer (Offshore) Location: Remote (Offshore) Duration: 6 Months (Potential for Extension) Interview Process: Skype Role Overview We are seeking a highly experienced and skilled Senior DevOps Engineer to join our team on a remote, offshore basis for an initial 6-month engagement. In this critical role, you will be responsible for ensuring the reliability, scalability, and efficiency of our infrastructure and deployment processes. You will leverage your deep expertise in Azure Cloud, Azure DevOps, MongoDB, Kafka, and automation to build and maintain a robust and high-performing environment. This role requires a seasoned professional with a proven track record of managing complex systems and driving automation initiatives. Responsibilities Design, implement, and manage our Azure Cloud infrastructure, ensuring security, scalability, and cost-effectiveness. Administer and optimize Azure DevOps for continuous integration and continuous delivery (CI/CD) pipelines. Take ownership of MongoDB administration, including performance tuning, scaling, backups, and ensuring high availability. Manage and maintain our Kafka clusters for data streaming and integration, ensuring stability and performance. Develop and implement automation strategies and scripts for infrastructure provisioning, configuration management, and application deployments. Troubleshoot and resolve infrastructure and application deployment issues in a timely and efficient manner. Collaborate with development teams to integrate and deploy applications seamlessly. Implement robust monitoring and alerting systems to proactively identify and address potential issues. Ensure adherence to security best practices across the infrastructure and deployment processes. Document infrastructure configurations, deployment procedures, and operational best practices. 
Participate in on-call rotations as needed to ensure system availability. Continuously evaluate and implement new tools and technologies to improve efficiency and reliability. Required Skills And Experience Minimum of 10 years of hands-on experience in DevOps or a related role. Extensive experience with Azure Cloud services (e.g., Virtual Machines, Azure Kubernetes Service (AKS), Azure Functions, Azure App Service, Azure Networking, Azure Storage). Deep expertise in Azure DevOps, including Pipelines, Repos, Boards, and Artifacts. Proven experience in MongoDB administration, including installation, configuration, performance tuning, scaling (sharding, replication), backup and recovery strategies, and security. Strong understanding and practical experience with Kafka for data streaming and integration, including cluster management, topic configuration, and monitoring. Significant experience in automation using various tools and scripting languages (e.g., Terraform, ARM templates, PowerShell, Bash, Python). Solid understanding of infrastructure-as-code (IaC) principles and practices. Experience with monitoring and logging tools (e.g., Azure Monitor, Prometheus, Grafana, ELK stack). Strong understanding of networking concepts and security principles. Excellent troubleshooting and problem-solving skills. Strong communication and collaboration skills. Ability to work independently and effectively in a remote environment. Bonus Skills Experience with other cloud platforms (e.g., AWS, GCP). Knowledge of containerization technologies (Docker, Kubernetes). Experience with configuration management tools (e.g., Ansible, Chef, Puppet). Familiarity with security scanning tools and practices. (ref:hirist.tech)
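The Kafka cluster-management experience this posting asks for includes understanding how topic partitions are divided among consumers in a group. The sketch below shows range-style assignment (contiguous blocks per consumer, in the spirit of Kafka's default range assignor); it is a simplified illustration under stated assumptions, not the broker's actual algorithm, and the consumer names are invented.

```python
def range_assign(partitions, consumers):
    """Range-style assignment sketch: each consumer gets a contiguous
    block of partitions, with the first consumers (in sorted order)
    absorbing any remainder when the split is uneven."""
    consumers = sorted(consumers)
    per = len(partitions) // len(consumers)
    extra = len(partitions) % len(consumers)
    assignment, start = {}, 0
    for i, c in enumerate(consumers):
        count = per + (1 if i < extra else 0)
        assignment[c] = partitions[start:start + count]
        start += count
    return assignment

# 6 partitions split evenly across 3 consumers.
a = range_assign(list(range(6)), ["c1", "c2", "c3"])
assert a == {"c1": [0, 1], "c2": [2, 3], "c3": [4, 5]}
```

The uneven case explains a common operational gotcha: with 7 partitions and 3 consumers, one consumer carries an extra partition, so per-consumer lag is not uniform even when throughput is.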

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About Company: The healthcare industry is the next great frontier of opportunity for software development, and Health Catalyst is one of the most dynamic and influential companies in this space. We are working on solving national-level healthcare problems, and this is your chance to improve the lives of millions of people, including your family and friends. Health Catalyst is a fast-growing company that values smart, hardworking, and humble individuals. Each product team is a small, mission-critical team focused on developing innovative tools to support Catalyst’s mission to improve healthcare performance, cost, and quality. POSITION OVERVIEW: We are looking for a highly skilled Senior Database Engineer & Storage Expert with 5+ years of hands-on experience in managing and optimizing large-scale, high-throughput database systems. The ideal candidate will possess deep expertise in handling complex ingestion pipelines across multiple data stores and a strong understanding of distributed database architecture. The candidate will play a critical technical leadership role in ensuring our data systems are robust, performant, and scalable to support massive datasets ingested from various sources without bottlenecks. You will work closely with data engineers, platform engineers, and infrastructure teams to continuously improve database performance and reliability. KEY RESPONSIBILITIES: • Query Optimization: Design, write, debug and optimize complex queries for RDS (MySQL/PostgreSQL), MongoDB, Elasticsearch, and Cassandra. • Large-Scale Ingestion: Configure databases to handle high-throughput data ingestion efficiently. • Database Tuning: Optimize database configurations (e.g., memory allocation, connection pooling, indexing) to support large-scale operations. • Schema and Index Design: Develop schemas and indexes to ensure efficient storage and retrieval of large datasets. 
• Monitoring and Troubleshooting: Analyze and resolve issues such as slow ingestion rates, replication delays, and performance bottlenecks. • Performance Debugging: Analyze and troubleshoot database slowdowns by investigating query execution plans, logs, and metrics. • Log Analysis: Use database logs to diagnose and resolve issues related to query performance, replication, and ingestion bottlenecks • Data Partitioning and Sharding: Implement partitioning, sharding, and other distributed database techniques to improve scalability. • Batch and Real-Time Processing: Optimize ingestion pipelines for both batch and real-time workloads. • Collaboration: Partner with data engineers and Kafka experts to design and maintain robust ingestion pipelines. • Stay Updated: Stay up to date with the latest advancements in database technologies and recommend improvements. REQUIRED SKILLS AND QUALIFICATIONS: • Database Expertise: Proven experience with MySQL/PostgreSQL (RDS), MongoDB, Elasticsearch, and Cassandra. • High-Volume Operations: Proven experience in configuring and managing databases for large-scale data ingestions. • Performance Tuning: Hands-on experience with query optimization, indexing strategies, and execution plan analysis for large datasets. • Database Internals: Strong understanding of replication, partitioning, sharding, and caching mechanisms. • Data Modeling: Ability to design schemas and data models tailored for high throughput use cases. • Programming Skills: Proficiency in at least one programming language (e.g., Python, Java, Go) for building data pipelines. • Debugging Proficiency: Strong ability to debug slowdowns by analyzing database logs, query execution plans, and system metrics. • Log Analysis Tools: Familiarity with database log formats and tools for parsing and analyzing logs. • Monitoring Tools: Experience with monitoring tools such as AWS CloudWatch, Prometheus, and Grafana to track ingestion performance. 
• Problem-Solving: Analytical skills to diagnose and resolve ingestion-related issues effectively. PREFERRED QUALIFICATIONS: • Certification in any of the mentioned database technologies. • Hands-on experience with cloud platforms such as AWS (preferred), Azure, or GCP. • Knowledge of distributed systems and large-scale data processing. • Familiarity with cloud-based database solutions and infrastructure. • Familiarity with large scale data ingestion tools like Kafka, Spark or Flink. EDUCATIONAL REQUIREMENTS: • Bachelor’s degree in computer science, Information Technology, or a related field. Equivalent work experience will also be considered
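The log-analysis responsibility above (diagnosing slowdowns from database logs) often reduces to parsing slow-log lines and aggregating time per statement. The sketch below shows the idea with an invented log format; real slow-query logs differ by engine and version, so the regex here is purely illustrative.

```python
import re
from collections import defaultdict

# Hypothetical slow-log lines; real formats vary by database engine.
log_lines = [
    "2024-05-01T10:00:01 query_time=2.4 sql=SELECT * FROM orders",
    "2024-05-01T10:00:05 query_time=0.3 sql=SELECT id FROM users",
    "2024-05-01T10:00:09 query_time=3.1 sql=SELECT * FROM orders",
]

pattern = re.compile(r"query_time=(?P<secs>[\d.]+) sql=(?P<sql>.+)")
totals = defaultdict(float)
for line in log_lines:
    m = pattern.search(line)
    if m:  # aggregate total time spent per statement text
        totals[m.group("sql")] += float(m.group("secs"))

# The statement consuming the most cumulative time is the tuning target.
worst = max(totals, key=totals.get)
assert worst == "SELECT * FROM orders"
```

In practice you would normalize literals out of the SQL first (so `WHERE id = 1` and `WHERE id = 2` aggregate together), which is what tools like pt-query-digest do for MySQL.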

Posted 2 weeks ago

Apply

9.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Title: Data Modeller Experience: 6–9 Years Location: Chennai Notice Period: Immediate Joiners Preferred About The Role We are looking for an experienced Data Modeller with a strong background in designing and optimizing data structures for both OLTP and OLAP systems on Google Cloud Platform (GCP). The ideal candidate will have hands-on expertise in conceptual, logical, and physical data modelling, indexing strategies, and performance tuning. Experience with CloudSQL, BigQuery, and data modelling tools like DBSchema is essential. Key Responsibilities Design and develop conceptual, logical, and physical data models for structured and semi-structured data. Implement data models optimized for both transactional (OLTP) and analytical (OLAP) use cases. Define and apply indexing, partitioning, and data sharding strategies for performance and scalability. Collaborate with data engineers and developers to ensure models support reporting, analytics, and application requirements. Work with GCP data services like AlloyDB, CloudSQL, and BigQuery to design efficient cloud-native data solutions. Perform data profiling, quality checks, and support schema evolution and impact analysis. Use data modelling tools such as DBSchema (preferred) or other industry-standard tools. Tune database designs for near real-time reporting and performance optimization. Document metadata, lineage, and model specifications for team consumption and governance. Required Skills 6–9 years of experience in data modelling for enterprise systems. Expertise in OLTP/OLAP modelling, indexing, and database performance tuning. Hands-on experience with GCP data platforms – CloudSQL, AlloyDB, BigQuery. Proficiency in using data modelling tools (preferably DBSchema). Strong knowledge of partitioning, sharding, and database schema optimization. Good understanding of data governance, lineage, and best practices. 
Nice To Have Functional knowledge or domain exposure to the mutual fund or financial services industry. Experience with real-time analytics and streaming data environments. Familiarity with other cloud platforms (AWS, Azure) and their data services. What We Offer Opportunity to work on enterprise-level, cloud-first data architecture. Exposure to modern data platforms and modeling strategies on GCP. A collaborative, growth-focused work culture. Competitive salary and learning opportunities, including cloud certifications. Work with a team solving complex data problems in real-world financial domains. Skills: oltp,cloud,gcp,cloudsql,bigquery,performance tuning,dbschema,data governance,olap,data modelling,indexing,partitioning,alloydb,platforms,data sharding

Posted 2 weeks ago

Apply

6.5 years

8 - 14 Lacs

Chennai

On-site

Job Title: Data Modeller – GCP Location: Chennai (On-site – 5 Days Working from Customer Location) Experience Required: Minimum 6.5 Years Budget: ₹18 to ₹20 LPA Work Hours: 11 AM to 8 PM IST Job Description: We are looking for an experienced Data Modeller with strong expertise in designing and implementing data models for OLTP and OLAP systems, with a focus on performance and scalability. Key Responsibilities: Hands-on Data Modelling for OLTP and OLAP systems In-depth knowledge of Conceptual, Logical, and Physical Data Modelling Strong experience in Indexing, Partitioning, and Data Sharding for optimising performance Understanding of factors impacting database performance for near-real time reporting and application interaction Proficiency with at least one Data Modelling Tool, preferably DBSchema or Erwin Good knowledge of GCP Databases such as AlloyDB, CloudSQL, and BigQuery Functional knowledge of the Mutual Fund Industry will be an added advantage Additional Details: This is a work-from-customer location role based in Chennai; office presence is mandatory Candidates not stationed in Chennai but willing to travel 5 days a week can apply — travel expenses will be covered by the project team Employment Type: Full-Time Industry: IT/Software, Financial Services Keywords: Data Modelling, OLTP, OLAP, GCP, AlloyDB, CloudSQL, BigQuery, DBSchema, Erwin, Data Sharding, Indexing, Partitioning, SQL, Mutual Fund, Data Engineer, Chennai Jobs, Tech Hiring. Job Types: Full-time, Permanent Pay: ₹814,912.86 - ₹1,497,845.15 per year Schedule: Monday to Friday Work Location: In person

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Title: Senior Software Engineer - Backend About the firm: Sustainability lies at the core of Stantech AI. Our vision is to empower organizations to derive actionable insights—effectuating a smarter way of working. We operate on the premise that each client is unique and as such requires their own idiosyncratic solutions. Putting this principle into practice, we deliver tailor-made solutions to digitalize, optimize, and strategize fundamental processes underpinning client organizations. For more information, please refer to our website: www.stantech.ai Job Description: As a Senior Software Engineer at Stantech AI, you will play a pivotal role in designing, developing, and maintaining enterprise-grade backend services and APIs that cater to the unique needs of our clients. You will be a key member of our engineering team and will contribute to the success of projects by leveraging your expertise in Python, SQL, and modern DevOps practices. Key Responsibilities: Design, develop, and maintain high-performance backend applications and RESTful APIs using Python FastAPI framework. Optimize and maintain relational databases with SQL (data modeling, query optimization, and sharding) to ensure data integrity and scalability. Create, configure, and manage CI/CD pipelines using GitLab CI for automated build, test, and deployment workflows. Collaborate with cross-functional teams (data scientists, frontend engineers, DevOps) to gather requirements and deliver robust, scalable, and user-friendly solutions. Participate in architectural and technical decisions to drive innovation, ensure reliability, and improve system performance. Conduct code reviews, enforce best practices, and mentor junior engineers. Troubleshoot, diagnose, and resolve production issues in a timely manner. Stay up-to-date with industry trends, emerging technologies, and best practices. 
Bonus: Hands-on experience with server-level configuration and infrastructure—setting up load balancers, API gateways, and reverse proxies. Qualifications: Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field. Minimum 3 years of professional experience in backend development, with strong expertise in Python and SQL. Proven track record building and maintaining CI/CD pipelines using GitLab CI. Familiarity with containerization and orchestration technologies: Docker, Kubernetes. Solid understanding of software development lifecycle (SDLC) best practices, design patterns, and version control (Git). Excellent problem-solving, debugging, and communication skills. Ability to work independently and collaboratively in a fast-paced environment. Plus: Experience with front-end technologies (HTML, CSS, JavaScript) and cloud platforms (AWS, GCP, Azure). Financial Package: Competitive salary in line with experience: ₹10–20 Lakhs per annum, contingent on qualifications and experience.
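The query-optimization and partitioning skills this posting pairs with SQL come down to partition pruning: the planner scans only partitions whose bounds overlap the query's filter range. The sketch below illustrates the pruning decision for hypothetical monthly partitions; the table and partition names are invented, and real databases apply this logic inside the planner rather than in application code.

```python
from datetime import date

# Hypothetical monthly partitions of an orders table: name -> [lower, upper).
partitions = {
    "orders_2024_01": (date(2024, 1, 1), date(2024, 2, 1)),
    "orders_2024_02": (date(2024, 2, 1), date(2024, 3, 1)),
    "orders_2024_03": (date(2024, 3, 1), date(2024, 4, 1)),
}

def prune(partitions, start, end):
    """Keep only partitions whose [lower, upper) range overlaps [start, end)."""
    return [name for name, (lo, hi) in partitions.items()
            if lo < end and hi > start]

# A query over Feb 10 – Mar 5 touches only the Feb and Mar partitions;
# January is skipped entirely, which is the whole point of partitioning.
assert prune(partitions, date(2024, 2, 10), date(2024, 3, 5)) == [
    "orders_2024_02", "orders_2024_03"]
```

This is why partition keys should match the dominant filter column (usually a date): a filter on any other column forces a scan of every partition and the pruning benefit disappears.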

Posted 2 weeks ago

Apply
cta

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies