8.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Senior PHP Developer – Backend Engineering for AI SaaS Platform Location : Sector 63, Gurgaon – 100% In-Office Working Days : Monday to Friday, with 2nd and 4th Saturdays off Working Hours : 10:30 AM to 8:00 PM Experience : 4–8 years of backend development experience (including PHP, MySQL, REST APIs) Apply at : careers@darwix.ai Subject Line : Application – Senior PHP Developer – [Your Name] 🧠 About Darwix AI Darwix AI is India’s leading GenAI-powered platform for enterprise sales and customer engagement. Our flagship products like Transform+ , Sherpa.ai , and Store Intel power conversational intelligence, multilingual voice AI, real-time nudges, and store analytics for revenue and sales teams across India, MENA, and Southeast Asia. We work with large enterprises such as IndiaMart , Wakefit , GIVA , Sobha , and Bank Dofar , transforming how they engage with customers across BFSI, real estate, retail, and healthcare. Our backend system processes multilingual voice data, conversation analytics, CRM triggers, leaderboard nudges, and complex integrations with telephony, WhatsApp, CRMs, and LOS systems. This is your opportunity to build scalable backend systems with massive enterprise impact. 🎯 Role Overview We’re looking for a Senior PHP Developer to lead and strengthen the backend services that power our AI SaaS product suite. This role demands hands-on experience in building secure, scalable, and modular PHP-based backends—while integrating with various APIs, databases, and third-party platforms. You’ll work with a fast-paced product and engineering team consisting of AI/ML engineers, frontend developers (Angular/Ionic), DevOps, and product managers. You will be expected to own backend features end-to-end —from architecture and development to security and performance. 
🔧 Key Responsibilities

Backend System Development: Write clean, efficient, and secure PHP code using Laravel, CodeIgniter, or Core PHP. Own development and maintenance of RESTful APIs used across mobile, web, and AI modules. Build microservices and reusable components for speech transcription, agent scoring, CRM integrations, and nudge automation. Manage role-based access systems, auth tokens, and secure data workflows.

Database Architecture & Optimization: Design and optimize complex relational schemas in MySQL. Write efficient stored procedures, triggers, and data migration scripts. Handle data integrity across modules such as call analysis, scoring, user logs, and nudges. Support indexing, replication, and performance tuning as needed for scale.

Integration & System Interfacing: Integrate with third-party APIs (CRM, calling tools, WhatsApp APIs, payment gateways). Set up webhook listeners for real-time triggers and background jobs. Work with frontend (Angular/Ionic) teams to ensure clean API specs and testing.

DevOps Collaboration & Release Ownership: Collaborate with DevOps engineers to deploy and monitor backend services. Handle environment configs, version control (Git), and CI/CD coordination. Participate in production hotfixes, release validations, and system rollbacks if needed.

Documentation & Code Reviews: Maintain detailed backend documentation: API contracts, data flows, and logic mapping. Conduct peer reviews and enforce backend development standards. Assist in defining architecture decisions and system roadmap for scale.

✅ Required Skills & Experience: 4–8 years of hands-on backend development experience with PHP. Strong experience with MySQL – joins, indexes, subqueries, performance optimization. Solid grasp of REST APIs, secure token-based auth, and JSON/XML data handling. Understanding of backend security (SQL injection, XSS, CSRF, rate limiting, encryption). Experience with Git, API testing tools (Postman), and backend debugging. Exposure to high-performance systems, background jobs (CRON), and message queues.

⚙️ Bonus Skills (Good to Have): Experience with Moodle backend or learning management systems. Familiarity with Flutter and mobile app integration with PHP backends. Worked on server-side push notifications, logs, and tracking frameworks. Previous experience in SaaS product environments or multi-tenant platforms. Experience building dashboards or integrations with AI engines / analytics APIs.

💡 You'll Thrive in This Role If You: Love building enterprise-grade systems that run in production at scale. Are performance-obsessed and write code with a long-term architecture mindset. Are comfortable owning modules from API definition to deployment. Are process-oriented and thrive in a team that builds fast and iterates faster. Want to contribute to a fast-growing AI company impacting large sales organizations.

📬 How to Apply: Send your resume to careers@darwix.ai. Subject Line: Application – Senior PHP Developer – [Your Name]. (Optional): Share links to your GitHub, open-source contributions, or portfolio of previous backend systems you've built. If you're looking to build products that power real-time multilingual AI, live dashboards, and mission-critical sales workflows, and you have the experience to lead and scale backend systems, Darwix AI is where your next big challenge awaits.
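For illustration only (the role itself is PHP-based): a minimal Python sketch of the MySQL index and performance-tuning work the posting describes, checking a query's access path with EXPLAIN and then adding a covering index. Host, credentials, table, and column names are all hypothetical.

```python
# Illustrative sketch only; not part of the posting. Table/column names are hypothetical.
import mysql.connector  # pip install mysql-connector-python

conn = mysql.connector.connect(
    host="localhost", user="app", password="secret", database="sales_ai"
)
cur = conn.cursor()

# EXPLAIN shows the chosen access path; type=ALL means a full table scan.
cur.execute(
    "EXPLAIN SELECT agent_id, score FROM call_scores WHERE call_date >= %s",
    ("2025-01-01",),
)
for row in cur.fetchall():
    print(row)

# A composite index covering the filter and selected columns typically
# turns the full scan into a cheap range read.
cur.execute(
    "CREATE INDEX idx_call_scores_date_agent ON call_scores (call_date, agent_id, score)"
)
conn.commit()
cur.close()
conn.close()
```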
Posted 2 weeks ago
4.0 - 6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Summary

Position Summary: Senior Analyst, Data-Marts & Reporting - Reporting and Analytics – Digital Data Analytics Innovation - Deloitte Support Services India Private Limited

Are you a quick learner with a willingness to work with new technologies? The Data-Marts and Reporting team offers you a particular opportunity to be an integral part of the Datamarts & Reporting – CoRe Digital | Data | Analytics | Innovation Group. The principal focus of this group is the research, development, maintenance, and documentation of customized solutions that e-enable the delivery of cutting-edge technology to the firm's business centers.

Work you will do: As a Senior Analyst, you will research and develop solutions built on varied technologies like Microsoft SQL Server, MSBI Suite, MS Azure SQL, Tableau, and .Net. You will support a team which provides high-quality solutions to the customers by following a streamlined system development methodology. In the process of acquainting yourself with various development tools, testing tools, methodologies and processes, you will be aligned to the following role:

Role: Datamart Solution Senior Analyst. As a Datamart Solution Analyst, you will be responsible for delivering technical solutions, building high-performing datamarts and reporting tools using tools/technologies like Microsoft SQL Server, MSBI Suite, MS Azure SQL, Tableau, and .Net.

Your key responsibilities include: Interact with end users to gather, document, and interpret requirements. Leverage requirements to design the technical solution. Develop SQL objects and scripts based on design. Analyze, debug, and optimize existing stored procedures and views. Leverage indexes, performance tuning techniques, and error handling to improve performance of SQL scripts. Create and modify SSIS packages and ADF pipelines for transferring data between various systems across cloud and on-premise environments. Should be able to seamlessly work with different Azure services. Improve performance and find opportunities to improve processes to bring in efficiency in SQL, SSIS and ADF. Create, schedule and monitor SQL jobs. Build interactive visualizations in Tableau for leadership reporting. Proactively prioritize activities, handle tasks and deliver quality solutions on time. Communicate clearly and regularly with team leadership and project teams. Manage ongoing deliverable timelines and own relationships with end clients to understand whether deliverables continue to meet the client's needs. Work collaboratively with other team members and end clients throughout the development life cycle. Research, learn, implement, and share skills on new technologies. Understand the customer requirement well and provide status updates to the project lead (US/USI) on calls and emails efficiently. Guide junior members of the team to get them up to speed on the domain, tools and technologies we work on. Continuously improve skills in this space by completing certifications and recommended training. Obtain and maintain a thorough understanding of the MDM data model, Global Systems & Client attributes. Good understanding of MVC .Net and SharePoint front-end solutions. Good to have knowledge of full-stack development.
The team: CoRe - Digital Data Analytics Innovation (DDAI) sits inside Deloitte's global shared services organization.

Qualifications and experience

Required: Educational Qualification: B.E/B.Tech or M.Tech (60% or 6.5 GPA and above). Should be proficient in one or more of the following technologies: knowledge of DBMS concepts and exposure to querying on any relational database, preferably MS SQL Server, MS Azure SQL, SSIS, Tableau. Knowledge of any coding language like C#.NET or VB.NET would be an added advantage. Understands development methodology and lifecycle. Excellent analytical skills and communication skills (written, verbal, and presentation). Ability to chart one's own career and build networks within the organization. Ability to work both independently and as part of a team with professionals at all levels. Ability to prioritize tasks, work on multiple assignments, and raise concerns/questions where appropriate. Seek information/ideas and establish relationships with the customer to assess any future opportunities.

Total Experience: 4-6 years of overall experience, with at least 3 years of experience in database development, ETL and reporting.

Skill set – Required: SQL Server, MS Azure SQL, Azure Data Factory, SSIS, Azure Synapse, data warehousing & BI. Preferred: Tableau, .Net. Good to have: MVC .Net, SharePoint front-end solutions.

Location: Hyderabad. Work hours: 2 p.m. – 11 p.m.

How you will grow: At Deloitte, we have invested a great deal to create a rich environment in which our professionals can grow. We want all our people to develop in their own way, playing to their own strengths as they hone their leadership skills. And, as a part of our efforts, we provide our professionals with a variety of learning and networking opportunities—including exposure to leaders, sponsors, coaches, and challenging assignments—to help accelerate their careers along the way. No two people learn in exactly the same way. So, we provide a range of resources, including live classrooms, team-based learning, and eLearning. Deloitte University (DU): The Leadership Center in India, our state-of-the-art, world-class learning center in the Hyderabad office, is an extension of the DU in Westlake, Texas, and represents a tangible symbol of our commitment to our people's growth and development. Explore DU: The Leadership Center in India.

Benefits: At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.

Deloitte's culture: Our positive and supportive culture encourages our people to do their best work every day. We celebrate individuals by recognizing their uniqueness and offering them the flexibility to make daily choices that can help them to be healthy, centered, confident, and aware. We offer well-being programs and are continuously looking for new ways to maintain a culture that is inclusive, invites authenticity, leverages our diversity, and where our people excel and lead healthy, happy lives. Learn more about Life at Deloitte.

Corporate citizenship: Deloitte is led by a purpose: to make an impact that matters. This purpose defines who we are and extends to relationships with our clients, our people, and our communities. We believe that business has the power to inspire and transform. We focus on education, giving, skill-based volunteerism, and leadership to help drive positive social impact in our communities. Learn more about Deloitte's impact on the world.
Disclaimer: Please note that this description is subject to change based on business/project requirements and at the discretion of the management. #EAG-Core

Our purpose: Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Our people and culture: Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Professional development: At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India.

Benefits To Help You Thrive: At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially—and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you.

Recruiting tips: From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Requisition code: 300780
Posted 2 weeks ago
0 years
0 Lacs
Assam, India
On-site
The Database Administrator is responsible for the design, implementation, maintenance, and performance tuning of critical database systems to ensure high availability, security, and optimal performance. The Database Administrator will work closely with application teams, system administrators, and project stakeholders to ensure that database systems are robust, scalable, and aligned with organizational goals, while also managing data integrity, access controls, and compliance with relevant policies.

Skilled in working with relational databases such as PostgreSQL and MariaDB and non-relational databases like MongoDB. Expertise in database design, normalization, and optimization. Knowledge of SQL and query optimization. Familiarity with backup and recovery procedures. Understanding of high availability and disaster recovery solutions. Experience with database security and access control. Proven track record of managing and maintaining large-scale databases. Experience with both on-premises and cloud-based database environments. Strong analytical and problem-solving skills related to database performance and scalability.

Installed, configured, and maintained database systems based on organizational needs. Implemented and optimized database parameters to ensure optimal performance. Conducted performance tuning and optimization of queries and database structures. Monitored and analyzed system performance, making recommendations for improvements. Designed and implemented backup and recovery strategies to ensure data integrity and availability. Conducted regular testing of backup and recovery procedures. Provided timely and effective support for database-related issues. Conducted root cause analysis for incidents and implemented preventive measures. Maintained comprehensive documentation of database configurations, procedures, and best practices.

Responsibilities

Database Strategy & Architecture: Contribute to the design and implementation of scalable and secure database solutions that align with organizational needs. Work collaboratively with IT and development teams to support the development of reliable and efficient database architectures. Apply database design best practices and assist in enforcing standards across development and production environments. Support the evaluation and adoption of new database tools, technologies, and frameworks under the guidance of technical leads.

Database Administration & Maintenance: Manage and maintain the operational health of production and non-production databases, ensuring optimal uptime and performance. Perform routine database maintenance tasks such as backups, indexing, archiving, and patching. Implement and regularly test disaster recovery plans, ensuring data availability and integrity. Monitor system logs, resolve issues related to slow queries, deadlocks, or storage bottlenecks, and escalate where needed.

Security & Compliance: Ensure database security through role-based access control, encryption, and secure configurations. Monitor for unauthorized access or unusual activity, working with the security team to respond to threats. Support compliance initiatives by ensuring databases adhere to relevant regulatory standards (e.g., GDPR, HIPAA, or local data laws). Maintain and implement database security policies and assist in audits and reviews as required.

Performance Tuning & Optimization: Analyze database workloads to identify and address performance bottlenecks. Optimize SQL queries, indexes, and execution plans for better efficiency.
Participate in capacity planning and help forecast database scaling needs. Collaborate with developers to review and optimize database schemas and application queries.

Database Deployment & Integration: Coordinate the deployment of database updates, patches, and schema changes with minimal operational impact. Support database migration and integration efforts across systems and applications. Assist with cloud platform integrations and ensure database components interact smoothly with analytics tools and data pipelines.

Database Monitoring & Reporting: Implement and manage monitoring tools to track database performance, uptime, and resource utilization. Generate routine health check reports and highlight areas for improvement. Provide input into database performance dashboards and reporting tools used by the IT or DevOps teams.

Documentation & Best Practices: Maintain accurate documentation for database configurations, maintenance procedures, and incident resolutions. Follow and contribute to database management policies and operational standards. Keep troubleshooting guides and knowledge base entries up to date for use by the IT support team.

Collaboration With Business Teams: Work closely with business and application teams to understand data requirements and support solution development. Ensure databases are structured to support reporting, analytics, and business intelligence tools. Assist in designing and maintaining data models that reflect evolving business processes.

Qualification: B.E./B.Tech in any specialization or MCA. DBA certification or related certifications are preferable. Overall experience in design, implementation and management of database systems. 7 or more years of experience in large and complex IT systems development and implementation projects. Experienced in database management. Fluency in English and Hindi (speaking, reading & writing). Fluency in Assamese preferable. (ref:hirist.tech)
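A hedged sketch of the slow-query triage this posting describes, shown here against PostgreSQL with psycopg2; the DSN, table, and index names are hypothetical.

```python
# Illustration only: run EXPLAIN (ANALYZE, BUFFERS) on a suspect query, then add
# an index if the plan shows a sequential scan. Connection details are placeholders.
import psycopg2  # pip install psycopg2-binary

conn = psycopg2.connect("dbname=appdb user=dba password=secret host=localhost")
conn.autocommit = True
cur = conn.cursor()

# EXPLAIN (ANALYZE, BUFFERS) executes the query and reports the actual plan,
# timings, and buffer usage: the usual starting point for tuning.
cur.execute("""
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT customer_id, count(*)
    FROM orders
    WHERE created_at >= now() - interval '7 days'
    GROUP BY customer_id
""")
for (line,) in cur.fetchall():
    print(line)

# If the plan shows a sequential scan driven by the date filter, an index usually helps.
cur.execute("CREATE INDEX IF NOT EXISTS idx_orders_created_at ON orders (created_at)")
cur.close()
conn.close()
```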
Posted 2 weeks ago
14.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Backdrop: AVIZVA is a Healthcare Technology Organization that harnesses technology to simplify, accelerate, & optimize the way healthcare enterprises deliver care. Established in 2011, we have served as strategic enablers for healthcare enterprises, helping them enhance their overall care delivery. With over 14 years of expertise, we have engineered more than 150 tailored products for leading Medical Health Plans, Dental and Vision Plan Providers, PBMs, Medicare Plan Providers, TPAs, and more.

Overview Of The Role: As a System Analyst within a product development team in AVIZVA, you will be one of the front-liners of the team, spearheading your product's solution design activities alongside the product owners, system architect, and lead developers while collaborating with all business & technology stakeholders.

Job Responsibilities: Gather & analyze business, functional, and data requirements with the PO & relevant stakeholders and derive system requirements from the same. Work with the system architect to develop an understanding of the product's architecture, components, interactions, and flows, and build clarity around the technological nuances & constructs involved. Develop an understanding of the various datasets relevant to the industry, their business significance, and their logical structuring from a data modeling perspective. Conduct in-depth industry research around datasets pertinent to the underlying problem statements. Identify, (data) model & document the various entities, relationships & attributes along with appropriate cardinality and normalization. Apply ETL principles to formulate & document data dictionaries, business rules, and transformation & enrichment logic for the various datasets in question, pertaining to the various source & target systems in context. Define data flows, validations & business rules driving the interchange of data between components of a system or multiple systems. Define requirements around system integrations and exchange of data, such as systems involved, services (APIs) involved, nature of integration, and handshake details (data involved, authentication, etc.). Identify use-cases for exposure of data within an entity/dataset via APIs, define detailed API signatures, and create API documentation. Provide clarifications to the development team around requirements, system design, integrations, data flows, and scenarios. Support other product teams dependent on the APIs and integrations defined by your product team in understanding the endpoints, logic, business, entity structure, etc. Provide backlog grooming support to the Product Owner through activities such as functional analysis and data analysis.

Skills & Qualifications: Bachelor's or Master's degree in Computer Science or any other analytically inclined field of study. At least 4 years of relevant experience in roles such as Business Analyst, Systems Analyst or Business System Analyst. Experience in analysing & defining systems involving varying levels of complexity in terms of underlying components, data, integrations, flows, etc. Experience working with data (structured, semi-structured), data modeling, writing database queries with hands-on SQL, and working knowledge of Elasticsearch indexes. Experience with unstructured data will be a huge plus. Experience in identifying & defining entities & APIs, writing API specifications, & API consumer specifications. Ability to map data from various sources to various consumer endpoints such as a system, a service, UI, process, sub-process, workflow, etc.
Experience with data management products based on ETL principles, involving multitudes of datasets, disparate data sources and target systems. A strong analytical mindset with a proven ability to understand a variety of business problems through stakeholder interactions and other methods to ideate the most aligned and appropriate technology solutions. Exposure to diagrammatic analysis & elicitation of business processes, data & system flows using BPMN & UML diagrams, such as activity flow, use-cases, sequence diagrams, DFDs, etc. Exposure to writing requirements documentation such as BRD, FRD, SRS, Use-Cases, User-Stories etc. An appreciation for the systems’ technological and architectural concepts with an ability to speak about the components of an IT system, inter-component interactions, database, external and internal data sources, data flows & system flows. Experience (at least familiarity) of working with the Atlassian suite (Jira, & Confluence). Experience in product implementations & customisations through system configurations will be an added plus. Experience of driving UX design activities in collaboration with graphic & UI design teams, by means of enabler tools such as Wireframes, sketches, flow diagrams, information architecture etc. will be an added plus. Exposure to UX designing & collaboration tools such as Figma, Zeplin, etc. will be an added plus. Awareness or prior exposure to Healthcare & Insurance business & data will be a huge advantage.
Posted 2 weeks ago
11.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description:

About the Company: At AT&T, we're connecting the world through the latest tech, top-of-the-line communications and the best in connectivity. Our groundbreaking digital solutions provide intuitive and integrated experiences for millions of customers across online, retail and care channels. Join our mission to deliver compelling communication and entertainment experiences to customers around the world as we continue to evolve as a technology-powered, human-centered organization. As part of our team, you'll transform the way we deliver a seamless customer experience with digital at the center of all you do. In our world, digital is much larger than just an eCommerce channel; we are transforming all channels to digitally perform as one team to create a better customer experience. As we move into 2024, the digital transformation will revolutionize the digital space and you can build a career that will propel your future.

About the Job: This is a Lead Cyber Security position, responsible for the design, implementation, and operation/administration of the Elastic Stack within the Dynamic Defense product development portfolio in the Chief Security Office at AT&T.

Experience Level: 11+ years. Location: Hyderabad or Bengaluru.

Roles and Responsibilities: Design, implementation and operation/administration of the Elastic Stack. Designing and implementing Elastic Stack scalability and availability/redundancy. Design, implement and troubleshoot Logstash, Metricbeat and Filebeat. Implement and troubleshoot log forwarding and ingestion into the Elastic Stack with performance, scalability and availability as requirements. Create and update scripts to enhance automation, operations and management of the system with Python, shell scripts or PowerShell. Manage and troubleshoot Elastic Cloud on Kubernetes instances. Provide thought leadership and direction on program improvements & optimizations. Collaborate with team members to determine best practices and client requirements for needed software products. Ability to adapt to an evolving process and application. Willingness to experiment and try new approaches to solve old and new problems. Will work with onshore leaders to discuss staffing and resource issues and strategies. Supports innovation, strategic planning, technical proofs of concept, testing, lab work, and various other technical program management.

Primary / Mandatory skills: Overall, 12+ years of IT experience, including 8+ years of proven experience working across the Elastic Stack. Has cloud experience with Elastic, primarily Elastic Cloud on Kubernetes (ECK). Understands how to deploy nodes in Azure. Able to manage/support pipelines in Azure. Create indexes / data streams. Define ILM policies. Able to parse data from different raw sources. Able to enrich data. Ability to troubleshoot Elastic indexes, shards, and errors. Able to work with the free version of Elastic / build tools to assist in its operation. Understands how Logstash, Metricbeat, and Filebeat work and how to integrate them as forwarders to Elastic and Kafka. Able to manage/support multiple Elastic clusters. Able to architect ILM policies with node resources in mind. Has experience with Elastic Agents / Fleet. Experience with design, implementation and support of Azure components, including databases and networking.

Additional information (if any): Flexible to provide coverage in US morning hours upon need.
Certification : CISSP or equivalent #Cybersecurity Weekly Hours: 40 Time Type: Regular Location: Hyderabad, Andhra Pradesh, India It is the policy of AT&T to provide equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state or local law. In addition, AT&T will provide reasonable accommodations for qualified individuals with disabilities. AT&T is a fair chance employer and does not initiate a background check until an offer is made.
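A minimal sketch of the "create indexes / data streams, define ILM policies" work listed in this posting, expressed as direct calls to Elasticsearch's REST API via the requests library. The cluster URL, credentials, policy name, index pattern, and rollover thresholds are hypothetical and purely illustrative.

```python
# Illustration only: define an ILM policy, then bind it to a data-stream template.
import requests

ES = "https://elastic.example.internal:9200"   # hypothetical cluster URL
AUTH = ("elastic", "changeme")                  # placeholder credentials

# ILM policy: roll over hot indices at 50 GB or 7 days; delete after 30 days.
policy = {
    "policy": {
        "phases": {
            "hot": {"actions": {"rollover": {"max_primary_shard_size": "50gb", "max_age": "7d"}}},
            "delete": {"min_age": "30d", "actions": {"delete": {}}},
        }
    }
}
requests.put(f"{ES}/_ilm/policy/logs-30d", json=policy, auth=AUTH).raise_for_status()

# Index template binding the policy to a data-stream pattern.
template = {
    "index_patterns": ["logs-app-*"],
    "data_stream": {},
    "template": {"settings": {"index.lifecycle.name": "logs-30d", "number_of_shards": 1}},
}
requests.put(f"{ES}/_index_template/logs-app", json=template, auth=AUTH).raise_for_status()
```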
Posted 2 weeks ago
14.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
Backdrop: AVIZVA is a Healthcare Technology Organization that harnesses technology to simplify, accelerate, & optimize the way healthcare enterprises deliver care. Established in 2011, we have served as strategic enablers for healthcare enterprises, helping them enhance their overall care delivery. With over 14 years of expertise, we have engineered more than 150 tailored products for leading Medical Health Plans, Dental and Vision Plan Providers, PBMs, Medicare Plan Providers, TPAs, and more.

Overview Of The Role: As a System Analyst within a product development team in AVIZVA, you will be one of the front-liners of the team, spearheading your product's solution design activities alongside the product owners, system architect, and lead developers while collaborating with all business & technology stakeholders.

Job Responsibilities: Gather & analyze business, functional, and data requirements with the PO & relevant stakeholders and derive system requirements from the same. Work with the system architect to develop an understanding of the product's architecture, components, interactions, and flows, and build clarity around the technological nuances & constructs involved. Develop an understanding of the various datasets relevant to the industry, their business significance, and their logical structuring from a data modeling perspective. Conduct in-depth industry research around datasets pertinent to the underlying problem statements. Identify, (data) model & document the various entities, relationships & attributes along with appropriate cardinality and normalization. Apply ETL principles to formulate & document data dictionaries, business rules, and transformation & enrichment logic for the various datasets in question, pertaining to the various source & target systems in context. Define data flows, validations & business rules driving the interchange of data between components of a system or multiple systems. Define requirements around system integrations and exchange of data, such as systems involved, services (APIs) involved, nature of integration, and handshake details (data involved, authentication, etc.). Identify use-cases for exposure of data within an entity/dataset via APIs, define detailed API signatures, and create API documentation. Provide clarifications to the development team around requirements, system design, integrations, data flows, and scenarios. Support other product teams dependent on the APIs and integrations defined by your product team in understanding the endpoints, logic, business, entity structure, etc. Provide backlog grooming support to the Product Owner through activities such as functional analysis and data analysis.

Skills & Qualifications: Bachelor's or Master's degree in Computer Science or any other analytically inclined field of study. At least 4 years of relevant experience in roles such as Business Analyst, Systems Analyst or Business System Analyst. Experience in analysing & defining systems involving varying levels of complexity in terms of underlying components, data, integrations, flows, etc. Experience working with data (structured, semi-structured), data modeling, writing database queries with hands-on SQL, and working knowledge of Elasticsearch indexes. Experience with unstructured data will be a huge plus. Experience in identifying & defining entities & APIs, writing API specifications, & API consumer specifications. Ability to map data from various sources to various consumer endpoints such as a system, a service, UI, process, sub-process, workflow, etc.
Experience with data management products based on ETL principles, involving multitudes of datasets, disparate data sources and target systems. A strong analytical mindset with a proven ability to understand a variety of business problems through stakeholder interactions and other methods to ideate the most aligned and appropriate technology solutions. Exposure to diagrammatic analysis & elicitation of business processes, data & system flows using BPMN & UML diagrams, such as activity flow, use-cases, sequence diagrams, DFDs, etc. Exposure to writing requirements documentation such as BRD, FRD, SRS, Use-Cases, User-Stories etc. An appreciation for the systems’ technological and architectural concepts with an ability to speak about the components of an IT system, inter-component interactions, database, external and internal data sources, data flows & system flows. Experience (at least familiarity) of working with the Atlassian suite (Jira, & Confluence). Experience in product implementations & customisations through system configurations will be an added plus. Experience of driving UX design activities in collaboration with graphic & UI design teams, by means of enabler tools such as Wireframes, sketches, flow diagrams, information architecture etc. will be an added plus. Exposure to UX designing & collaboration tools such as Figma, Zeplin, etc. will be an added plus. Awareness or prior exposure to Healthcare & Insurance business & data will be a huge advantage.
Posted 2 weeks ago
4.0 - 7.0 years
10 - 18 Lacs
Kochi, Chennai, Coimbatore
Hybrid
Hiring | PL/SQL Technical Consultant | 4–7 Years Exp | Pan India (Hybrid) | Big 4 Consulting Firm

We are looking for PL/SQL Technical Consultants with 4 to 7 years of experience for a permanent opportunity with one of the Big 4 consulting firms. Immediate joiners preferred.

Key Skills: PL/SQL, Joins, Surrogate Keys, Constraints, Datatypes, Indexes, Packages, Procedures, Functions, Triggers, Views, Collections, Exception Handling, Data Migration, SQL Tuning, Performance Optimization
Location: Pan India (Hybrid)
Experience: 4–7 Years
Joiners: Immediate joiners preferred
Client: One of the Big 4 Consulting Firms
Send your CV to s.vijetha@randstad.in
Posted 2 weeks ago
4.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
Required Skills & Qualifications Programming Languages & Frameworks: 4-5+ years of professional experience in Python development. 2+ years of experience with Django (Django REST Framework a strong plus). Data & Databases: Strong expertise in PostgreSQL: schema design, writing optimized SQL (indexes, partitions), migrations (Django migrations). Comfortable designing star-schema/dimension-fact table models. Experience with CSV/JSON parsing libraries (e.g., pandas, csv, dictreader) and writing resilient ETL scripts. Web Scraping & Automation: Hands-on experience with headless-browser automation tools such as Selenium or Playwright (Python bindings). Familiarity with handling OTP/2FA flows programmatically (e.g., integrating with Twilio, Vault, or custom prompt workflows). API Development & Security: Proficient building, testing, and documenting RESTful APIs (Django REST Framework, DRF serializers, viewsets). Strong understanding of JWT or token-based authentication, secure session management, and role-based ACL. Scheduling & Background Jobs: Experience setting up CRON, APScheduler, Celery (with Redis/RabbitMQ), or equivalent for periodic job orchestration. Knowledge of implementing retry logic, backoff strategies, and idempotency for long-running tasks. DevOps & Deployment: Familiar with Docker and containerization best practices for Python applications. Experience writing CI/CD pipelines (GitHub Actions, GitLab CI, Jenkins). Exposure to cloud platforms (AWS, GCP, or Azure), specifically RDS or managed PostgreSQL, EC2/ECS, and secrets management (AWS Secrets Manager, Parameter Store). Logging & Monitoring: Skilled in integrating structured logging (with logging, StructLog, or log aggregation services like ELK/Elastic Stack, Splunk). Familiarity with error-tracking tools (e.g., Sentry) and writing health-check endpoints. Other Technical Skills: Proficient in Git version control, code reviews (GitHub/GitLab). Ability to write unit tests (pytest, Django TestCase) and integration tests. Strong understanding of REST API performance optimization and caching strategies (Redis/memcached). Soft Skills: Excellent problem-solving skills and attention to detail. Strong communication skills to collaborate with product owners, data analysts, and frontend developers. Self-motivated, able to prioritize tasks, and deliver on aggressive timelines. Familiarity with Agile/Scrum methodologies; comfortable working in sprints, attending stand-ups, and refining user stories. NP : Immediate to 30 Days preferred.
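As a hedged illustration of the "retry logic, backoff strategies, and idempotency for long-running tasks" item in this posting, here is a small Celery task sketch; the broker URL, partner endpoint, and file layout are hypothetical.

```python
# Illustration only: a Celery task with exponential backoff on transient errors
# and an idempotent write (re-runs overwrite the same key rather than duplicate).
import os
import requests
from celery import Celery

app = Celery("etl", broker="redis://localhost:6379/0")  # hypothetical broker

@app.task(
    bind=True,
    autoretry_for=(requests.RequestException,),  # retry only transient HTTP failures
    retry_backoff=True,                          # exponential backoff between attempts
    retry_backoff_max=600,                       # cap the delay at 10 minutes
    retry_jitter=True,
    max_retries=5,
    acks_late=True,                              # re-deliver if the worker dies mid-task
)
def fetch_statement(self, account_id: str, run_date: str) -> None:
    # Deterministic key -> idempotent output path.
    key = f"{account_id}_{run_date}"
    resp = requests.get(f"https://partner.example.com/statements/{key}.csv", timeout=30)
    resp.raise_for_status()
    os.makedirs("/tmp/statements", exist_ok=True)
    with open(f"/tmp/statements/{key}.csv", "wb") as fh:
        fh.write(resp.content)
```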
Posted 3 weeks ago
7.0 years
4 - 5 Lacs
Noida
On-site
Country: India. Working Schedule: Full-Time. Work Arrangement: Virtual. Commutable Distance Required: No. Relocation Assistance Available: No. Posted Date: 10-Jul-2025. Job ID: 10079.

Description and Requirements

Position Summary: We are seeking a forward-thinking and enthusiastic Engineering and Operations Specialist to manage and optimize our MongoDB and Splunk platforms. The ideal candidate will have in-depth experience in at least one of these technologies, with a preference for experience in both.

Job Responsibilities: Work on engineering and operational tasks for the MongoDB and Splunk platforms, ensuring high availability and stability. Continuously improve the stability of the environments, leveraging automation, self-healing mechanisms, and AIOps. Develop and implement automation using technologies such as Ansible, Python, and Shell. Manage CI/CD deployments and maintain code repositories. Utilize Infrastructure/Configuration as Code practices to streamline processes. Work closely with development teams to integrate database and observability/logging tools effectively. Manage design, distribution, performance, replication, security, availability, and access requirements for large and complex MongoDB databases (versions 6.0, 7.0, 8.0 and above) on Linux OS (on-premises and cloud-based). Design and develop physical layers of databases to support various application needs; implement back-up, recovery, archiving, conversion strategies, and performance tuning; manage job scheduling, application releases, and database changes, and implement database and infrastructure security best practices to meet compliance requirements. Monitor and tune MongoDB and Splunk clusters for optimal performance, identifying bottlenecks and troubleshooting issues. Analyze database queries, indexing, and storage to ensure minimal latency and maximum throughput. The Senior Splunk System Administrator will build, maintain, and standardize the Splunk platform, including forwarder deployment, configuration, dashboards, and maintenance across Linux OS. Able to debug production issues by analyzing the logs directly and using tools like Splunk. Work in an Agile model with an understanding of Agile concepts and Azure DevOps. Learn new technologies based on demand and help team members by coaching and assisting.

Education, Technical Skills & Other Critical Requirements

Education: Bachelor's degree in Computer Science, Information Systems, or another related field with 7+ years of IT and infrastructure engineering work experience. MongoDB Certified DBA or Splunk Certified Administrator is a plus. Experience with cloud platforms like AWS, Azure, or Google Cloud.

Experience (In Years): 7+ years of total IT experience & 4+ years of relevant experience in MongoDB, plus working experience as a Splunk Administrator.

Technical Skills: In-depth experience with either MongoDB or Splunk, with a preference for exposure to both. Strong enthusiasm for learning and adopting new technologies. Experience with automation tools like Ansible, Python and Shell. Proficiency in CI/CD deployments, DevOps practices, and managing code repositories. Knowledge of Infrastructure/Configuration as Code principles. Developer experience is highly desired. Data engineering skills are a plus. Experience with other DB technologies and observability tools is a plus.

Extensive work experience: Managed and optimized MongoDB databases, designed robust schemas, and implemented security best practices, ensuring high availability, data integrity, and performance for mission-critical applications.
Working experience in database performance tuning with MongoDB tools and techniques. Management of database elements, including creation, alteration, deletion and copying of schemas, databases, tables, views, indexes, stored procedures, triggers, and declarative integrity constraints. Extensive experience in database backup and recovery strategy through design, configuration and implementation using backup tools (mongodump, mongorestore) and Rubrik. Extensive experience in configuring and enforcing SSL/TLS encryption for secure communication between MongoDB nodes. Working experience configuring and maintaining Splunk environments, developing dashboards, and implementing log management solutions to enhance system monitoring and security across Linux OS. Experience with Splunk migration and upgrades on standalone Linux OS and cloud platforms is a plus. Perform application administration for a single security information management system using Splunk. Working knowledge of Splunk Search Processing Language (SPL), architecture and various components (indexer, forwarder, search head, deployment server). Extensive experience in both MongoDB and Splunk replication between primary and secondary servers to ensure high availability and fault tolerance. Managed infrastructure security policy as per industry best standards by designing, configuring and implementing privileges and policies on the database using RBAC, as well as Splunk. Scripting skills and automation experience using DevOps, repos and Infrastructure as Code. Working experience with containers (AKS and OpenShift) is a plus. Working experience with cloud platforms (Azure, Cosmos DB) is a plus. Strong knowledge of ITSM processes and tools (ServiceNow). Ability to work 24x7 rotational shifts to support the database and Splunk platforms.

Other Critical Requirements: Strong problem-solving abilities and a proactive approach to identifying and resolving issues. Excellent communication and collaboration skills. Ability to work in a fast-paced environment and manage multiple priorities effectively.

About MetLife: Recognized on Fortune magazine's list of the 2025 "World's Most Admired Companies" and Fortune World’s 25 Best Workplaces™ for 2024, MetLife, through its subsidiaries and affiliates, is one of the world's leading financial services companies, providing insurance, annuities, employee benefits and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East. Our purpose is simple - to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by empathy, we're inspired to transform the next century in financial services. At MetLife, it's #AllTogetherPossible. Join us!
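A brief, hedged pymongo sketch of the query and index analysis this posting describes ("analyze database queries, indexing, and storage"); the connection string, collection, and field names are hypothetical.

```python
# Illustration only: inspect a query's winning plan, then add a matching compound index.
from pymongo import MongoClient, ASCENDING

client = MongoClient("mongodb://localhost:27017")   # hypothetical deployment
coll = client["claims"]["policy_events"]

# explain() reports the winning plan; a COLLSCAN stage means no usable index.
plan = coll.find({"policy_id": "P-1001", "event_date": {"$gte": "2025-01-01"}}).explain()
print(plan["queryPlanner"]["winningPlan"])

# A compound index matching the equality + range predicates typically
# replaces the collection scan with an index scan.
coll.create_index(
    [("policy_id", ASCENDING), ("event_date", ASCENDING)],
    name="idx_policy_event_date",
)
```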
Posted 3 weeks ago
7.0 years
0 Lacs
India
On-site
Position: MongoDB Developer Experience: 7-8 years Key Responsibilities: Design and implement MongoDB database solutions for performance and scalability. Create, optimize, and maintain MongoDB collections, indexes, and schemas. Develop efficient queries for CRUD operations and aggregations. Integrate MongoDB with backend APIs and services. Monitor, troubleshoot, and improve database performance. Ensure data security and integrity across all systems. Collaborate with front-end, back-end, and DevOps teams to ensure seamless data flow. Create and maintain documentation related to database structure and code. Required Skills & Qualifications: Strong experience with MongoDB and NoSQL database design . Proficient in MongoDB query language , aggregation framework, and indexing. Experience with Node.js , Express , or other backend technologies. Familiarity with data modeling, sharding, and replication. Knowledge of MongoDB tools like Mongo Compass , Mongoose , or Robo 3T . Understanding of REST APIs and backend integration. Ability to write clean, maintainable, and efficient code. Good understanding of version control tools like Git . Strong analytical and problem-solving skills. Preferred Qualifications: MongoDB certification or related training. Experience with cloud-hosted databases (MongoDB Atlas, AWS DocumentDB). Familiarity with performance tuning and monitoring tools. Prior experience working in Agile/Scrum environments. Job Type: Full-time Schedule: Day shift Work Location: In person Speak with the employer +91 7877727352
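A hedged pymongo sketch of the aggregation-framework work this posting lists; the database, collection, and field names are hypothetical.

```python
# Illustration only: monthly revenue per customer via the aggregation pipeline.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # hypothetical deployment
orders = client["shop"]["orders"]

# $match narrows the scan (and can use an index), $group aggregates,
# $sort/$limit return the biggest spenders first.
pipeline = [
    {"$match": {"status": "PAID", "created_at": {"$gte": "2025-01-01"}}},
    {"$group": {"_id": "$customer_id", "revenue": {"$sum": "$amount"}, "orders": {"$sum": 1}}},
    {"$sort": {"revenue": -1}},
    {"$limit": 10},
]
for doc in orders.aggregate(pipeline):
    print(doc["_id"], doc["revenue"], doc["orders"])
```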
Posted 3 weeks ago
7.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Requirements

Description and Requirements

Position Summary: We are seeking a forward-thinking and enthusiastic Engineering and Operations Specialist to manage and optimize our MongoDB and Splunk platforms. The ideal candidate will have in-depth experience in at least one of these technologies, with a preference for experience in both.

Job Responsibilities: Work on engineering and operational tasks for the MongoDB and Splunk platforms, ensuring high availability and stability. Continuously improve the stability of the environments, leveraging automation, self-healing mechanisms, and AIOps. Develop and implement automation using technologies such as Ansible, Python, and Shell. Manage CI/CD deployments and maintain code repositories. Utilize Infrastructure/Configuration as Code practices to streamline processes. Work closely with development teams to integrate database and observability/logging tools effectively. Manage design, distribution, performance, replication, security, availability, and access requirements for large and complex MongoDB databases (versions 6.0, 7.0, 8.0 and above) on Linux OS (on-premises and cloud-based). Design and develop physical layers of databases to support various application needs; implement back-up, recovery, archiving, conversion strategies, and performance tuning; manage job scheduling, application releases, and database changes, and implement database and infrastructure security best practices to meet compliance requirements. Monitor and tune MongoDB and Splunk clusters for optimal performance, identifying bottlenecks and troubleshooting issues. Analyze database queries, indexing, and storage to ensure minimal latency and maximum throughput. The Senior Splunk System Administrator will build, maintain, and standardize the Splunk platform, including forwarder deployment, configuration, dashboards, and maintenance across Linux OS. Able to debug production issues by analyzing the logs directly and using tools like Splunk. Work in an Agile model with an understanding of Agile concepts and Azure DevOps. Learn new technologies based on demand and help team members by coaching and assisting.

Education, Technical Skills & Other Critical Requirements

Education: Bachelor's degree in Computer Science, Information Systems, or another related field with 7+ years of IT and infrastructure engineering work experience. MongoDB Certified DBA or Splunk Certified Administrator is a plus. Experience with cloud platforms like AWS, Azure, or Google Cloud.

Experience (In Years): 7+ years of total IT experience & 4+ years of relevant experience in MongoDB, plus working experience as a Splunk Administrator.

Technical Skills: In-depth experience with either MongoDB or Splunk, with a preference for exposure to both. Strong enthusiasm for learning and adopting new technologies. Experience with automation tools like Ansible, Python and Shell. Proficiency in CI/CD deployments, DevOps practices, and managing code repositories. Knowledge of Infrastructure/Configuration as Code principles. Developer experience is highly desired. Data engineering skills are a plus. Experience with other DB technologies and observability tools is a plus.

Extensive work experience: Managed and optimized MongoDB databases, designed robust schemas, and implemented security best practices, ensuring high availability, data integrity, and performance for mission-critical applications. Working experience in database performance tuning with MongoDB tools and techniques.
Management of database elements, including creation, alteration, deletion and copying of schemas, databases, tables, views, indexes, stored procedures, triggers, and declarative integrity constraints. Extensive experience in database backup and recovery strategy through design, configuration and implementation using backup tools (mongodump, mongorestore) and Rubrik. Extensive experience in configuring and enforcing SSL/TLS encryption for secure communication between MongoDB nodes. Working experience configuring and maintaining Splunk environments, developing dashboards, and implementing log management solutions to enhance system monitoring and security across Linux OS. Experience with Splunk migration and upgrades on standalone Linux OS and cloud platforms is a plus. Perform application administration for a single security information management system using Splunk. Working knowledge of Splunk Search Processing Language (SPL), architecture and various components (indexer, forwarder, search head, deployment server). Extensive experience in both MongoDB and Splunk replication between primary and secondary servers to ensure high availability and fault tolerance. Managed infrastructure security policy as per industry best standards by designing, configuring and implementing privileges and policies on the database using RBAC, as well as Splunk. Scripting skills and automation experience using DevOps, repos and Infrastructure as Code. Working experience with containers (AKS and OpenShift) is a plus. Working experience with cloud platforms (Azure, Cosmos DB) is a plus. Strong knowledge of ITSM processes and tools (ServiceNow). Ability to work 24x7 rotational shifts to support the database and Splunk platforms.

Other Critical Requirements: Strong problem-solving abilities and a proactive approach to identifying and resolving issues. Excellent communication and collaboration skills. Ability to work in a fast-paced environment and manage multiple priorities effectively.

About MetLife: Recognized on Fortune magazine's list of the 2025 "World's Most Admired Companies" and Fortune World’s 25 Best Workplaces™ for 2024, MetLife, through its subsidiaries and affiliates, is one of the world's leading financial services companies, providing insurance, annuities, employee benefits and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East. Our purpose is simple - to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by empathy, we're inspired to transform the next century in financial services. At MetLife, it's #AllTogetherPossible. Join us!
Posted 3 weeks ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
REQUIRED SKILLS: Design and develop SQL Server stored procedures, functions, views, and triggers to be used during the ETL process. Design and develop SSIS / SQL ETL solutions to acquire and prepare data from numerous upstream systems for processing by QRM. Build data transformations with SSIS, including importing data from files and moving data from one database platform to another. Debug and tune SSIS or other ETL processes to ensure accurate and efficient movement of data. Design, implement and maintain database objects (tables, views, indexes, etc.) and database security. Experience working with SSIS packages. Proficient with ETL tools such as Microsoft SSIS. Analyze and develop strategies and approaches to import and transfer data between source, staging, and ODS/Data Warehouse destinations. Test and prepare ETL processes for deployment to production and non-production environments. Support system and acceptance testing, including the development or refinement of test plans. Interested candidates can share your CV at abhishek.tiwari@CuretechServices.com. Notice: Immediate to 15 days.
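Not the SSIS package itself, but a hedged Python/pyodbc sketch of the same staging-to-ODS pattern described in this posting: load a source file into a staging table, then hand off to a stored procedure for the merge. The connection string, table, and procedure names are hypothetical.

```python
# Illustration only: file -> staging table -> merge procedure, via pyodbc.
import csv
import pyodbc  # pip install pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=sqlhost;DATABASE=ODS;"
    "UID=etl_user;PWD=secret;TrustServerCertificate=yes"   # placeholder connection
)
cur = conn.cursor()
cur.fast_executemany = True

# 1) Load the source file into a staging table.
cur.execute("TRUNCATE TABLE stg.customer_feed")
with open("customer_feed.csv", newline="") as fh:
    rows = [(r["id"], r["name"], r["balance"]) for r in csv.DictReader(fh)]
cur.executemany(
    "INSERT INTO stg.customer_feed (id, name, balance) VALUES (?, ?, ?)", rows
)

# 2) Let a stored procedure validate and merge staging into the warehouse.
cur.execute("{CALL dbo.usp_merge_customer_feed}")
conn.commit()
cur.close()
conn.close()
```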
Posted 3 weeks ago
4.0 - 8.0 years
0 Lacs
andhra pradesh
On-site
As a Senior PL/SQL Developer, you will draw on your 4+ years of hands-on experience in SQL and the SACM process. Your expertise in SQL scripting will be crucial as you write packages and functions and ensure efficient query performance tuning. Your end-to-end understanding of the SACM process will enable you to monitor and review its execution at a high level, maintaining consistency with the organization's current culture and IT Service Management strategy. You will also coordinate with all other IT processes to meet process objectives and metrics for success. Your role will involve being part of the SACM configuration management plan, covering identification, configuration control, reporting, and auditing. Additionally, you will analyze, design, develop, troubleshoot, and debug software programs for commercial or end-user applications. Collaborating as a member of the software engineering division, you will integrate external customer specifications and implement changes to existing software architecture while building new products and development tools. In this non-routine and complex work environment, your advanced technical and business skills will be essential. Your responsibilities will include leading and mentoring team members, providing direction, and contributing significantly to the success of the projects. Your qualifications should include a BS or MS degree or equivalent experience relevant to the functional area, along with knowledge of Oracle 11g and above. Your experience in designing/architecting databases with large volumes and complexity will be valuable, along with your excellent development skills and extensive knowledge of PL/SQL programming. You will be involved in writing PL/SQL stored procedures, functions, cursors, triggers, and packages, as well as troubleshooting and debugging system and data errors. Your proficiency in performance tuning, ability to understand business requirements and data models, and experience in automating SQL*Loader for loading data from flat files will be key to your success. Your expertise in Oracle advanced SQL programming, including analytical functions, subqueries, indexes, and set operators, will be crucial. Moreover, your proficiency in creating, modifying, and effectively using database objects like tables, constraints, sequences, indexes, group functions, and views will be essential for meeting project requirements effectively.
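As a hedged illustration of working with the kind of packaged PL/SQL this posting describes, here is a short python-oracledb sketch that calls a packaged function and procedure and surfaces Oracle errors; the package, subprogram names, and bind values are hypothetical.

```python
# Illustration only: invoking hypothetical packaged PL/SQL from Python.
import oracledb  # pip install oracledb

conn = oracledb.connect(user="app", password="secret", dsn="dbhost/ORCLPDB1")
cur = conn.cursor()
try:
    # A packaged function that scores a configuration item and returns a number.
    score = cur.callfunc("sacm_pkg.score_config_item", oracledb.NUMBER, ["CI-10042"])
    print("score:", score)

    # A packaged procedure with an OUT bind for a status message.
    status = cur.var(oracledb.STRING)
    cur.callproc("sacm_pkg.refresh_ci_snapshot", ["CI-10042", status])
    print("status:", status.getvalue())
except oracledb.DatabaseError as exc:
    # Unhandled PL/SQL exceptions surface here as ORA- errors.
    (error,) = exc.args
    print("Oracle error:", error.message)
finally:
    cur.close()
    conn.close()
```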
Posted 3 weeks ago
5.0 - 8.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Calling all innovators – find your future at Fiserv. We’re Fiserv, a global leader in Fintech and payments, and we move money and information in a way that moves the world. We connect financial institutions, corporations, merchants, and consumers to one another millions of times a day – quickly, reliably, and securely. Any time you swipe your credit card, pay through a mobile app, or withdraw money from the bank, we’re involved. If you want to make an impact on a global scale, come make a difference at Fiserv. Job Title Specialist, Technical Professional Services Experience 5 to 8 years of experience in Implementation Data Programming Work timings 2 PM to 11 PM IST from office. weekend support Excellent Programming skills on Oracle PL/SQL, and SQR (optional) Must have good command over SQL & PL/SQL concepts, PL/SQL debugging, Writing Packages, Stored procedures and functions, triggers, views, Exception handling and their types, Constraints, Indexes and Partitions, Cursors and Oracle collections SQL Queries, Stored Procedures and PL\SQL Ability to collaborate with various stakeholders (SME, Architects, Core Technology experts, BAs, etc) Willing to work on new and existing products. Must have proven experience working on large product. Exposure of banking domain Ability to provide estimation and scheduling for assignments Strong analytical, troubleshooting and issue resolution skills Excellent verbal and written communication and interpersonal skills Ability to work within a team environment Thank You For Considering Employment With Fiserv. Please Apply using your legal name Complete the step-by-step profile and attach your resume (either is acceptable, both are preferable). Our Commitment To Diversity And Inclusion Fiserv is proud to be an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, gender, gender identity, sexual orientation, age, disability, protected veteran status, or any other category protected by law. Note To Agencies Fiserv does not accept resume submissions from agencies outside of existing agreements. Please do not send resumes to Fiserv associates. Fiserv is not responsible for any fees associated with unsolicited resume submissions. Warning About Fake Job Posts Please be aware of fraudulent job postings that are not affiliated with Fiserv. Fraudulent job postings may be used by cyber criminals to target your personally identifiable information and/or to steal money or financial information. Any communications from a Fiserv representative will come from a legitimate Fiserv email address.
Posted 3 weeks ago
15.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
About HCLTech HCL Technologies is a next-generation global technology company that helps enterprises reimagine their businesses for the digital age. Our technology products and services are built on four decades of innovation, with a world-renowned management philosophy, a strong culture of invention and risk-taking, and a relentless focus on customer relationships. HCL also takes pride in its many diversity, social responsibility, sustainability, and education initiatives. Through its worldwide network of R&D facilities and co-innovation labs, global delivery capabilities, across countries, HCL delivers holistic services across industry verticals to leading enterprises, including 250 of the Fortune 500 and 650 of the Global 2000. Why Us We are one of the fastest-growing large tech companies in the world, with offices in 60+ countries across the globe and 222,000 employees. Our company is extremely diverse with 165 nationalities represented. We offer the opportunity to work with colleagues across the globe. We offer a virtual-first work environment, promoting a good work-life integration and real flexibility. We are invested in your growth, offering learning and career development opportunities at every level to help you find your own unique spark. We offer comprehensive benefits for all employees. We are a certified great place to work and a top employer in 17 countries, offering a positive work environment that values employee recognition and respect. The driving force behind that work, our people, are diverse, creative, and passionate, raising the bar for excellence on a regular basis. We, in turn, work hard to bring out the best in them as we strive to help them find their spark and become the best version of themselves that they can be. "Come join us in reshaping the future”. We are actively seeking experienced professionals for key roles anywhere from India: KEY FEATURES OF THE POSITION Functional / Technical Development and delivery experience with T24 and TAFJ R23 or higher in a multi-company multi-application server set-up Passionate about technologies, building robust and scalable interfaces and local developments Stake holder management – working closely with Finance, Ops, business change engineers, and project managers to drive and manage IT delivery Ensure awareness, involvement and support from the key stakeholders and participants by building strong project teams and maintaining robust communication on the project status throughout its life cycle Hands on experience in analysis, design, coding, and implementation of complex and custom-built solution Work collaboratively with team to achieve goals. 
Experience working in a Safe Agile environment Hands on experience in Design Studio, Gitlab, JBOSS, SQL developer, Temenos Unit Test framework, JIRA, Job schedulers (AWA) Experience in usage of logging and monitoring tools like Tivoli, Dynatrace and Splunk Deep understanding of T24 COB and Services framework to build robust solutions Hands on experience with Integration & Interaction frameworks and Streaming solutions Investigate and resolve production issues (RTB) to help maintain a stable production environment; remain cool and effective in crisis Have functional understanding of Private Banking and Finance modules in T24 Participate in reviews meetings and provide updates on project progress Effectively manage the development resources and deliveries from the squad Fair understanding of the deployment architecture and infrastructure Fair understanding of the Oracle DB concepts like indexes, SQL query design and Linux OS scripts Client / Stakeholder Management (internal & external) Stake holder management – working closely with Finance, Ops, business change engineers, and project managers to drive and manage IT delivery Ensure awareness, involvement and support from the key stakeholders and participants by building strong project teams and maintaining robust communication on the project status throughout its life cycle SKILLS REQUIREMENTS OF THE POSITION Professional and Technical Professional Minimum 15 years of development experience in T24 platforms Deep understanding of the Private banking modules Implementation experience of the SWIFT messaging, interfacing patterns, STP processes Excellent personal organisation and ability to prioritise and carry out multiple tasks. Able to influence and drive projects to meet key milestones and overcome challenges Able to translate functional requirements to efficient and fit-for-purpose technical solutions Technical Must Have: T24 R23 and higher TAFJ R23 and higher Multi-Company set-up and multi-app server set-up Hands on with Design Studio, Source code repository, Job schedulers, SQL developer Experience with T24 development on Cloud environments Experience in Temenos Unit Test framework Integration framework, Outbox Event Streaming and Interaction framework (IRIS), MQ, JBOSS T24 COB and Services framework Desirable: Exposure to T24 Upgrade and migration processes T24 performance optimisation and T24 data Archiving Experience with Tivoli, Dynatrace, Splunk or similar tools for logging and monitoring Oracle DB concepts relevant to T24, Linux scripts Understanding of deployment architecture and infrastructure Experience in T24 Upgrades and data migration best practices If you or someone you know fits these roles and are eager to join our dynamic team, please share reference profiles to manjunatha.hs@hcltech.com/krithiga.k@hcltech.com Early joiners are preferred.
Posted 3 weeks ago
14.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Backdrop AVIZVA is a Healthcare Technology Organization that harnesses technology to simplify, accelerate, & optimize the way healthcare enterprises deliver care. Established in 2011, we have served as strategic enablers for healthcare enterprises, helping them enhance their overall care delivery. With over 14 years of expertise, we have engineered more than 150 tailored products for leading Medical Health Plans, Dental and Vision Plan Providers, PBMs, Medicare Plan Providers, TPAs, and more. Overview of The Role As a Senior Product Owner, you’ll be a go-to person for your product, leading the product’s cross-functional team and taking end-to-end ownership from defining features all the way to making them release-ready. As the chief point of contact for the clients/stakeholders & SMEs, you’ll brainstorm product ideas, vision, & strategy to create a healthy product backlog and product roadmap aligned with all the stakeholders. As the key internal liaison for the design and development teams, you will support their progress by resolving queries, removing blockers, and fostering collaboration through cross-functional brainstorming and continuous support. Job Responsibilities Continuously stay updated with market trends, customer needs, and industry standards while gaining a deep understanding of the product's domain knowledge and its ecosystem. Proactively seek knowledge from relevant stakeholders to enhance understanding of the product landscape. Drive collaboration with the Scrum Master to identify and address any impediments or challenges that may arise during the sprint cycle, proactively seeking solutions to keep the team on track. Gain insights into quality assurance practices, including development, testing, and release processes, to ensure that the team maintains high standards of quality throughout the product development lifecycle. Lead the product vision, scope, and go-to-market strategy from inception to delivery. Motivate and guide the team to achieve sprint goals and deliver high-quality work increments, while taking ownership of creating and maintaining a clear, prioritised product backlog. Act as the primary stakeholder for the product, ensuring alignment with stakeholder needs while driving the product forward. Own the articulation of the vision solution for technical leads and teams, providing them with a clear understanding of the desired technical direction. Take ownership of ensuring that Scrum practices are effectively implemented within the product development process, working in tandem with the Scrum Master to maintain alignment with Agile principles. Manage the product backlog, ensuring it is effectively prioritized with well-defined and actionable user stories along with sprint reviews and UAT handoff calls to ensure stakeholder expectations are met, while also managing product reporting to stakeholders and executives. Take ownership of ensuring that the work delivered meets the agreed-upon timelines and milestones, collaborating with the Scrum Master to monitor progress and address any delays or impediments. Own the definition of quality standards for the product, ensuring that all work meets these standards and that any deviations are addressed promptly. Champion the product by articulating its value proposition and ensuring alignment with business objectives, while driving the UX design process through facilitating sessions and collaborating with UI/UX design teams. 
Drive discussions with technical leads and teams based on a comprehensive understanding of the technology stack and its implications on the product. Motivate and guide the team to prioritise quality in all aspects of their work, from development to testing to release, driving continuous improvement and optimisation of processes. Coordinate with cross-functional teams to orchestrate the development process and ensure alignment with product vision. Orchestrate sprint planning sessions and backlog grooming sessions in collaboration with the Scrum Master, ensuring that they are conducted effectively and that the team has a clear understanding of the work ahead. Collaborate with the Scrum Master and other stakeholders to identify and address any bottlenecks or challenges that may impact the delivery timeline or quality of work. Actively participate in sprint planning, review meetings, and retrospectives to ensure smooth execution of development activities. Engage with stakeholders and cross-functional teams to drive continuous improvement and foster a culture of collaboration. Contribute actively to discussions and decisions around delivery timelines and quality standards, offering insights and guidance from a product-focused perspective. Skills & Qualifications Bachelor’s or Master’s degree in any related field or equivalent qualification. 5-8 years of relevant (product owner, business analyst) experience with excellent communication skills , great analytical, problem-solving and critical thinking skills. A strong analytical mindset with a proven ability to understand a variety of business problems through stakeholder interactions and other methods. The prowess to ideate innovative IT solutions by means of established products as well as custom IT solutions. Ability to interpret nuanced technological facets from business inputs gathered through stakeholder interactions to facilitate the requisite bridging between them and the members of the product development team. Solid knowledge and appreciation of Agile fundamentals with product development experience following Scrum, & Kanban. Extensive experience in transforming stakeholder vision into a detailed and well-structured product backlog while crafting a clear and actionable product roadmap. Expertise around various kinds of requirement documentation formats such as BRD, FRD, SRS, Use-Cases, User-Stories, and creating other documents such as Data Flow Diagrams (DFDs), System Flows, Context diagrams, etcs Hands-on experience on diagrammatic analysis & representation of business processes, data & system flows using BPMN & UML diagrams such as activity flow, sequence diagrams, DFDs, etc using tools such as MS Visio, draw.io and other industry-popular tools. Must have experience driving UI/UX design activities in collaboration with graphic and UI design teams using enabler tools like wireframes, sketches, flow diagrams, and information architecture, along with hands-on expertise in Atlassian tools such as JIRA and Confluence; familiarity with Bitbucket is a plus. Hands-on experience in SQL, including writing simple to moderately complex queries, along with knowledge of Logical Data Modeling (ER Modeling), System Integrations and APIs. Should be familiar with wrapper APIs, Elastic-search indexes, and AWS S3 will be an advantage. Experience of working on Healthcare Insurance domain-focused IT products and /or Industry knowledge would be a huge plus.
Posted 3 weeks ago
5.0 years
6 - 20 Lacs
India
On-site
Job Description: Senior Database Developer (MySQL & AWS Expert) Location: Hyderabad, India Experience: 5+ Years (Preferably 7+ Years) Employment Type: Full-time Role Overview: We are looking for an exceptionally strong Database Developer with 5+ years of hands-on experience specializing in MySQL database development on Amazon AWS Cloud. The ideal candidate should have deep expertise in high-performance query tuning, handling massive datasets, designing complex summary tables, and implementing scalable database architectures. This role demands a highly analytical and problem-solving mindset, capable of delivering optimized and mission-critical database solutions. Key Responsibilities: • Design, develop, and optimize highly scalable MySQL databases on AWS cloud infrastructure. • Expert-level performance tuning of queries, indexes, and stored procedures for mission-critical applications. • Handle large-scale datasets, ensuring efficient query execution and minimal latency. • Architect and implement summary tables for optimized reporting and analytical performance. • Work closely with software engineers to design efficient data models, indexing strategies, and partitioning techniques. • Ensure high availability, disaster recovery, and fault tolerance of database systems. • Perform root-cause analysis of database bottlenecks and implement robust solutions. • Implement advanced replication strategies, read/write separation, and data sharding for optimal performance. • Work with DevOps teams to automate database monitoring, backups, and performance metrics using AWS tools. • Optimize stored procedures, triggers, and complex database functions to enhance system efficiency. • Ensure best-in-class data security, encryption, and access control policies. Must-Have Skills: • Proven expertise in MySQL query optimization, indexing, and execution plan analysis. • Strong knowledge of AWS RDS, Aurora, and cloud-native database services. • Hands-on experience in tuning high-performance, high-volume transactional databases. • Deep understanding of database partitioning, sharding, caching, and replication strategies. • Experience working with large-scale datasets (millions to billions of records) and ensuring low-latency queries. • Advanced experience in database schema design, normalization, and optimization for high availability. • Proficiency in query profiling, memory management, and database load balancing. • Strong understanding of data warehousing, ETL processes, and analytics-driven data models. • Expertise in troubleshooting slow queries and deadlocks in a production environment. • Proficiency in scripting languages like Python, Shell, or SQL scripting for automation. Preferred Skills: • Experience with big data technologies like Redshift, Snowflake, Hadoop, or Spark. • Exposure to NoSQL databases (MongoDB, Redis) for hybrid data architectures. • Hands-on experience with CI/CD pipelines and DevOps database management. • Experience in predictive analytics and AI-driven data optimizations. Educational Qualification: • Bachelor's or Master’s degree in Computer Science, Information Technology, or a related field. Salary & Benefits: • Top-tier compensation package for highly skilled candidates. • Fast-track career growth with opportunities for leadership roles. • Comprehensive health benefits and performance-based bonuses. • Exposure to cutting-edge technologies and large-scale data challenges. 
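As a purely illustrative sketch of the query-tuning workflow this posting describes — inspect the optimizer's plan, then add an index that matches the query shape — the snippet below uses mysql-connector-python; the host, credentials, and orders table are placeholders, not details from the role.

```python
# Illustrative only: check a slow query's plan, then create a composite index
# matching its WHERE + GROUP BY shape. Host, credentials, and schema are placeholders.
import mysql.connector  # pip install mysql-connector-python

QUERY = """
    SELECT customer_id, SUM(total_amount) AS lifetime_value
      FROM orders
     WHERE order_date >= %s
  GROUP BY customer_id
"""

def explain_and_index() -> None:
    cnx = mysql.connector.connect(
        host="my-aurora-cluster.example.ap-south-1.rds.amazonaws.com",
        user="app", password="secret", database="sales",
    )
    cur = cnx.cursor()
    # 1. Inspect the optimizer's plan before touching the schema.
    cur.execute("EXPLAIN " + QUERY, ("2025-01-01",))
    for row in cur.fetchall():
        print(row)
    # 2. A composite index that lets the query be served from the index.
    cur.execute(
        "CREATE INDEX idx_orders_date_customer ON orders (order_date, customer_id, total_amount)"
    )
    cur.close()
    cnx.close()

if __name__ == "__main__":
    explain_and_index()
```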
If you are a world-class MySQL expert with a passion for solving complex database challenges and optimizing large-scale systems, apply now! Job Types: Full-time, Permanent Pay: ₹634,321.11 - ₹2,091,956.36 per year Benefits: Health insurance Paid sick time Paid time off Provident Fund Schedule: Day shift Monday to Friday Language: English (Required) Work Location: In person Expected Start Date: 21/07/2025
Posted 3 weeks ago
7.0 years
0 Lacs
Noida
On-site
City/Cities Noida Country India Working Schedule Full-Time Work Arrangement Hybrid Relocation Assistance Available No Posted Date 09-Jul-2025 Job ID 10384 Description and Requirements Position Summary We are seeking a forward-thinking and enthusiastic Engineering and Operations Specialist to manage and optimize our MongoDB and Splunk platforms. The ideal candidate will have in-depth experience in at least one of these technologies, with a preference for experience in both. Job Responsibilities Work on engineering and operational tasks for the MongoDB and Splunk platforms, ensuring high availability and stability. Continuously improve the stability of the environments, leveraging automation, self-healing mechanisms, and AIOps. Develop and implement automation using technologies such as Ansible, Python, and Shell. Manage CI/CD deployments and maintain code repositories. Utilize Infrastructure/Configuration as Code practices to streamline processes. Work closely with development teams to integrate database and observability/logging tools effectively. Manage design, distribution, performance, replication, security, availability, and access requirements for large and complex MongoDB databases (versions 6.0, 7.0, 8.0 and above) on Linux OS, on-premises or cloud-based. Design and develop physical layers of databases to support various application needs; implement backup, recovery, archiving, conversion strategies, and performance tuning; manage job scheduling, application releases, and database changes, and implement database and infrastructure security best practices to meet compliance requirements. Monitor and tune MongoDB and Splunk clusters for optimal performance, identifying bottlenecks and troubleshooting issues. Analyze database queries, indexing, and storage to ensure minimal latency and maximum throughput. The Senior Splunk System Administrator will build, maintain, and standardize the Splunk platform, including forwarder deployment, configuration, dashboards, and maintenance across Linux OS. Be able to debug production issues by analyzing logs directly and using tools like Splunk. Work in an Agile model with an understanding of Agile concepts and Azure DevOps. Learn new technologies based on demand and help team members by coaching and assisting. Education, Technical Skills & Other Critical Requirements Education Bachelor’s degree in Computer Science, Information Systems, or another related field, with 7+ years of IT and infrastructure engineering work experience. MongoDB Certified DBA or Splunk Certified Administrator is a plus. Experience with cloud platforms like AWS, Azure, or Google Cloud. Experience (In Years) 7+ years total IT experience and 4+ years relevant experience in MongoDB, plus working experience as a Splunk Administrator. Technical Skills In-depth experience with either MongoDB or Splunk, with a preference for exposure to both. Strong enthusiasm for learning and adopting new technologies. Experience with automation tools like Ansible, Python, and Shell. Proficiency in CI/CD deployments, DevOps practices, and managing code repositories. Knowledge of Infrastructure/Configuration as Code principles. Developer experience is highly desired. Data engineering skills are a plus. Experience with other DB technologies and observability tools is a plus. Extensive work experience: managed and optimized MongoDB databases, designed robust schemas, and implemented security best practices, ensuring high availability, data integrity, and performance for mission-critical applications.
Working experience in database performance tuning with MongoDB tools and techniques. Management of database elements, including creation, alteration, deletion, and copying of schemas, databases, tables, views, indexes, stored procedures, triggers, and declarative integrity constraints. Extensive experience in database backup and recovery strategy through design, configuration, and implementation using backup tools (mongodump, mongorestore) and Rubrik. Extensive experience configuring and enforcing SSL/TLS encryption for secure communication between MongoDB nodes. Working experience configuring and maintaining Splunk environments, developing dashboards, and implementing log management solutions to enhance system monitoring and security across Linux OS. Experience with Splunk migration and upgrades on standalone Linux OS and cloud platforms is a plus. Perform application administration for a single security information management system using Splunk. Working knowledge of Splunk Search Processing Language (SPL), architecture, and the various components (indexer, forwarder, search head, deployment server). Extensive experience with both MongoDB and Splunk replication between primary and secondary servers to ensure high availability and fault tolerance. Managed infrastructure security policy per industry best practices by designing, configuring, and implementing privileges and policies on the database using RBAC, as well as on Splunk. Scripting skills and automation experience using DevOps tooling, repos, and Infrastructure as Code. Working experience with containers (AKS and OpenShift) is a plus. Working experience with cloud platforms (Azure, Cosmos DB) is a plus. Strong knowledge of ITSM processes and tools (ServiceNow). Ability to work 24x7 rotational shifts to support the database and Splunk platforms. Other Critical Requirements Strong problem-solving abilities and a proactive approach to identifying and resolving issues. Excellent communication and collaboration skills. Ability to work in a fast-paced environment and manage multiple priorities effectively. About MetLife Recognized on Fortune magazine's list of the 2025 "World's Most Admired Companies" and Fortune World’s 25 Best Workplaces™ for 2024, MetLife, through its subsidiaries and affiliates, is one of the world’s leading financial services companies, providing insurance, annuities, employee benefits, and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East. Our purpose is simple - to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by empathy, we’re inspired to transform the next century in financial services. At MetLife, it’s #AllTogetherPossible. Join us!
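For illustration only: a minimal pymongo sketch of the indexing and query-analysis work the posting mentions. The connection URI, database, and events collection are assumptions made for the example.

```python
# Illustrative sketch: create a compound index and check that a query can use it.
# URI, database name, and the "events" collection are placeholders.
from pymongo import MongoClient, ASCENDING, DESCENDING

client = MongoClient("mongodb://localhost:27017")
db = client["ops"]

# Compound index supporting "latest events per host" style queries.
db.events.create_index([("host", ASCENDING), ("ts", DESCENDING)], name="host_ts")

pipeline = [
    {"$match": {"host": "app-01"}},
    {"$sort": {"ts": -1}},
    {"$limit": 100},
    {"$project": {"_id": 0, "ts": 1, "level": 1, "message": 1}},
]
latest = list(db.events.aggregate(pipeline))

# explain() on the equivalent find() shows whether the index is chosen.
plan = db.events.find({"host": "app-01"}).sort("ts", -1).explain()
print(plan["queryPlanner"]["winningPlan"])
```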
Posted 3 weeks ago
1.5 - 2.0 years
1 - 2 Lacs
India
On-site
We are looking for a skilled RDLC & SQL Developer to join our team at eDominer. In this role, you will be responsible for designing and developing reports using RDLC and HTML, ensuring they meet business requirements and functional designs. Your work will directly support our EXPAND smERP on the Cloud platform, enhancing the reporting capabilities for our clients. Job Duties and Responsibilities: Design and develop dynamic and visually appealing reports using RDLC and HTML. Understand business and functional requirements, translate them into technical specifications, and develop reports accordingly. Perform thorough testing and QA of reports to ensure accuracy and functionality. Create and optimize SQL queries, views, indexes, functions, and stored procedures to support report development. Design and develop sub-reports and charts in Crystal Reports or RDLC for enhanced data visualization. Utilize ASP.Net and VB coding with the Dataset object to manage data before integrating it into RDLC or Crystal reports. Collaborate with the support team to resolve tickets related to reporting issues, problems, and queries. Optimize database objects and report templates to ensure high performance and efficient report generation. Stay updated and willing to learn additional reporting tools to enhance the capabilities of EXPAND smERP on the Cloud. Requirements: 1.5 to 2 years of experience in report development using RDLC and Crystal Reports. Must have a BE, B Tech, B.Sc. IT, MSc IT, or MCA background. Strong knowledge of T-SQL query writing and optimization. Proficient in creating SQL queries, views, indexes, functions, and stored procedures; experienced in ASP.Net, and VB coding using Dataset object. Experience in designing and developing sub-reports and charts in Crystal Reports or RDLC. Excellent communication and troubleshooting abilities. Willingness to learn additional reporting tools and adapt to new technologies. Ability to work under tight deadlines and deliver high-quality reports on time. Job Location: Kolkata Perks and Benefits: Competitive salary structure with performance-based incentives. Opportunities for professional growth and career advancement. A collaborative and innovative work environment. Contact Us to Apply: If you are passionate about report development and eager to contribute to a dynamic team, we invite you to apply for this position. Please send your updated CV to hr@edominer.com for further processing About eDominer: eDominer has been a pioneer in business software development since 1995, focusing on business automation. Our flagship product, EXPAND smERP, is a cost-effective, reliable, and user-friendly ERP solution catering to various industries, including manufacturing and export businesses. Explore our business units: Parent Company: Our Product: EXPAND smERP: Job Types: Full-time, Permanent, Fresher Pay: ₹10,000.00 - ₹20,000.00 per month Benefits: Leave encashment Paid sick time Paid time off Provident Fund Shift: Fixed shift Work Location: In person
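As a hedged illustration of the report-backing database objects this role creates — shown here via Python/pyodbc purely for brevity, since the role itself works in ASP.Net/VB — the sketch below defines a parameterized stored procedure of the kind that typically feeds an RDLC dataset. The connection string, table, and procedure names are invented for the example.

```python
# Illustrative only: create and call a parameterized T-SQL stored procedure
# that a report dataset could bind to. All object names are placeholders.
import pyodbc  # pip install pyodbc

DDL = """
CREATE OR ALTER PROCEDURE dbo.usp_SalesByMonth
    @Year INT
AS
BEGIN
    SET NOCOUNT ON;
    SELECT MONTH(OrderDate) AS OrderMonth,
           SUM(TotalAmount) AS Revenue
      FROM dbo.Orders
     WHERE YEAR(OrderDate) = @Year
  GROUP BY MONTH(OrderDate)
  ORDER BY OrderMonth;
END
"""

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;DATABASE=ReportsDb;Trusted_Connection=yes;"
)
cur = conn.cursor()
cur.execute(DDL)
conn.commit()

# The report layer (RDLC/Crystal) would bind this result set to its dataset.
for month, revenue in cur.execute("EXEC dbo.usp_SalesByMonth @Year = ?", 2024):
    print(month, revenue)
```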
Posted 3 weeks ago
10.0 - 15.0 years
0 Lacs
Tada, Andhra Pradesh, India
On-site
We are currently seeking new talented individuals to join our Tendering & Costing Team. Job Title: TENDERING & COSTING Location: Chennai / Tada, Andhra Pradesh Responsibilities: Able to analyse the adequacy and completeness of the commercial offers received for evaluation Is able to propose technical alternatives (within the scope of the commercial offers received) in order to optimize the total cost. Is able to estimate indexes to recalculate costs according to the varying conditions of job execution. Has knowledge of the metallurgical processes (casting, forging, heat treatments, etc.) and can determine their impact on the purchasing costs of semi-finished products. Can recognize the material components of the machinery shown on the drawing. Has knowledge of the taxation applied by the different countries (for import and export) on the materials and manufactured machines and knows how to use it for comparing costs Is able to quickly process the data contained in the database in order to evaluate the expected best cost for the various types of E cost (EI from subsidiaries and EE from third parties) Is able to propose solutions aimed at optimizing the machines mix, production allocations and make-buy, to the commercial functions Is able to extrapolate the purchasing costs for new machines Technical Requirements: BE/B TECH in Mechanical Engineering. Exposure to Steel Industry at least for Ten Years, in Project Management, Purchase, QA&QC, Tendering & Estimation, Production, Equipment Manufacturing. 10-15 Years of Experience. Soft Skills: Collaborates effectively with people with different points of view. Is recognized within the team. Is able to anticipate situations of potential conflict, finding effective solutions. Autonomously manages situations where commitment and dedication required Knows the customers in detail and responds effectively to their requests. Works in partnership with customers, anticipating potential problems. Pushes the team to put the customer first Drives implementation of improvement projects. Consider the improvements and share them with his/her team and colleagues. Fosters team and colleagues to work on continuous improvement Takes care of his/her own growth plan. Always sets new goals expanding its knowledge. Brings enthusiasm to the team. Very well Conversant with MS Office Danieli Group started as a family-owned business in 1914 and has grown into one of the leading manufacturers of plants and machines for producing metals, meeting the needs and ambitions of customers worldwide. With more than twenty-five divisions comprising valuable, international and unique talents, the Group is rooted in the same values that have accompanied it since its founding: innovation, team spirit, excellence and reliability.
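As a minimal, purely illustrative sketch of the cost-index recalculation mentioned above (estimating indexes to adjust costs for varying conditions of job execution): all component weights and index values below are invented for the example.

```python
# Purely illustrative: escalating a base quotation with cost indexes
# (e.g. steel, labour, energy). All figures are invented for the example.
def escalate(base_cost: float, weights: dict, base_idx: dict, current_idx: dict) -> float:
    """Recalculate a cost by weighting each component's index ratio."""
    factor = sum(w * (current_idx[k] / base_idx[k]) for k, w in weights.items())
    return base_cost * factor

quote = escalate(
    base_cost=1_000_000.0,
    weights={"steel": 0.55, "labour": 0.30, "energy": 0.15},   # must sum to 1
    base_idx={"steel": 100.0, "labour": 100.0, "energy": 100.0},
    current_idx={"steel": 112.0, "labour": 104.0, "energy": 120.0},
)
print(f"Escalated cost: {quote:,.0f}")
```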
Posted 3 weeks ago
10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Description and Requirements Position Summary The SQL Database Administrator is responsible for the design, implementation, and support of database systems for applications across the MS SQL database platform (SQL Server 2019 and 2022). The Administrator is part of the database end-to-end delivery team, working and collaborating with Application Development, Infrastructure Engineering, and Operations Support teams to deliver and support secure, high-performing, and optimized database solutions. The Database Administrator specializes in the SQL database platform. Job Responsibilities Manages design, distribution, performance, replication, security, availability, and access requirements for large and complex SQL & Sybase databases. Designs and develops physical layers of databases to support various application needs; implements backup, recovery, archiving, conversion strategies, and performance tuning; manages job scheduling, application release, database change, and compliance. Identifies and resolves problems utilizing structured tools and techniques. Provides technical assistance and mentoring to staff in all aspects of database management; consults and advises application development teams on database security, query optimization, and performance. Writes scripts for automating routine DBA tasks and documents database maintenance processing flows per standards. Implements industry best practices while performing database administration tasks. Works in an Agile model with an understanding of Agile concepts. Collaborates with development teams to provide and implement new features. Able to debug production issues by analyzing logs directly and using tools like Splunk. Begins tackling organizational impediments. Learns new technologies based on demand and helps team members by coaching and assisting. Education, Technical Skills & Other Critical Requirements Education Bachelor’s degree in Computer Science, Information Systems, or another related field, with 10+ years of IT and infrastructure engineering work experience. Experience (In Years) 10+ years total IT experience and 7+ years relevant experience in SQL Server and Sybase databases. Technical Skills Database Management: expert in managing and administering SQL Server, Azure SQL Server, and Sybase databases, ensuring high availability and optimal performance. Data Infrastructure & Security: expertise in designing and implementing robust data infrastructure solutions, with a strong focus on data security and compliance. Backup & Recovery: skilled in developing and executing comprehensive backup and recovery strategies to safeguard critical data and ensure business continuity. Performance Tuning & Optimization: adept at performance tuning and optimization of databases, leveraging advanced techniques to enhance system efficiency and reduce latency. Cloud Computing & Scripting: experienced in cloud computing environments and proficient in operating-system scripting, enabling seamless integration and automation of database operations. Management of database elements, including creation, alteration, deletion, and copying of schemas, databases, tables, views, indexes, stored procedures, triggers, and declarative integrity constraints. Strong database analytical skills to improve application performance. Should have strong working knowledge of database performance tuning, backup & recovery, Infrastructure as Code, and observability tools (Elastic). Must have experience with automation tools and programming such as Ansible and Python.
Strong knowledge of ITSM processes and tools (ServiceNow). Ability to work 24x7 rotational shifts to support the database and Splunk platforms. Other Critical Requirements Excellent analytical and problem-solving skills. Experience managing geographically distributed and culturally diverse workgroups, with strong team management, leadership, and coaching skills. Excellent written and oral communication skills, including the ability to clearly communicate and articulate technical and functional issues, with conclusions and recommendations, to stakeholders. Prior experience in handling stateside and offshore stakeholders. Experience in creating and delivering business presentations. Demonstrated ability to work independently and in a team environment. About MetLife Recognized on Fortune magazine's list of the 2025 "World's Most Admired Companies" and Fortune World’s 25 Best Workplaces™ for 2024, MetLife, through its subsidiaries and affiliates, is one of the world’s leading financial services companies, providing insurance, annuities, employee benefits, and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East. Our purpose is simple - to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by empathy, we’re inspired to transform the next century in financial services. At MetLife, it’s #AllTogetherPossible. Join us!
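For illustration only: one way a routine DBA task (nightly full backups) might be scripted from Python, in line with the posting's automation requirement. The server name, database list, backup path, and driver string are placeholders.

```python
# Illustrative sketch: automate full backups for a list of SQL Server databases.
# Server, database names, and the backup directory are placeholders.
import datetime
import pyodbc  # pip install pyodbc

DATABASES = ["SalesDb", "HRDb"]
BACKUP_DIR = r"E:\Backups"

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=db01;Trusted_Connection=yes;",
    autocommit=True,  # BACKUP DATABASE cannot run inside a user transaction
)
cur = conn.cursor()
stamp = datetime.datetime.now().strftime("%Y%m%d_%H%M")
for db in DATABASES:
    path = BACKUP_DIR + "\\" + f"{db}_{stamp}.bak"
    cur.execute(f"BACKUP DATABASE [{db}] TO DISK = N'{path}' WITH CHECKSUM")
    while cur.nextset():  # drain the backup's progress messages so it completes
        pass
    print(f"Backed up {db} to {path}")
```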
Posted 3 weeks ago
4.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
About us Visit Health is changing the face of employee health and wellness in India. 1 mn+ users across 200+ clients, from large Indian conglomerates to new-age start-ups, trust Visit as their Health and Wellness Partner, catering to the missing and unexplored Wellness and Primary Healthcare needs. Traditionally, Employee Health Benefits were synonymous with insurance/hospitalization benefits only, i.e., Secondary Care. Today, 90% of individual healthcare and wellness-related expenses in India are out of pocket on Primary Care; be it Mental Wellness, Fitness, Nutrition, Diagnostics, Medicines, and most of all, regular doctor appointments. The Covid-19 Pandemic has not only caused such expenses to increase multi-fold but also created a need for accessible systems. In a professional setting, such offerings are either lacking or provided in an ad hoc, broken manner through various individual platforms. As a result, the experience for an employee is disconnected, with limited utilization and engagement. This is where Visit Health comes in - a one-stop solution for all employee health benefits needs. We help Companies build a Customized Wellness solution focused on Primary Care aspects such as Fitness, Mental Health, Doctor Teleconsultation, OPD programs, etc. for employees and their families, thereby reducing out-of-pocket expenses and creating healthier workforces. We have stitched up the broken pieces of employee health benefits in India to make one streamlined platform while increasing employee engagement through gamification. Visit Health has raised a total of $250 million and is backed by renowned investors such as PolicyBazaar, Twitter co-founder Biz Stone, and Kunal Bahl of Snapdeal. Don't just take our word. Check us out @ https://vsyt.me/o/app We are looking for a Data Engineer to join our high-energy team. You will have a direct impact on tooling, development cycles, and testing frameworks. You'll work as part of a high-energy team that's scaling across all engineering functions. Pushing high-quality code on NodeJS and ensuring data integrity across multiple data stores (DB/Kafka/S3, etc.), while balancing the pros and cons of speed versus quality, will be critical. As part of your day-to-day work, you will Process data and information according to guidelines. Help develop reports and analyses. Support the data warehouse in identifying and revising reporting requirements. Evaluate changes and updates to source production systems. Identify and develop data pipelines to support data standardization. Handle multiple sources of data, structure large data sets to find usable information, and come up with standardized schemas. Use graphs, infographics, data dumps, cohorts, heat maps, clusters, and other methods to visualize data. Must-haves 4 to 6 years of relevant experience. Various skills and qualifications that help you solve problems using large amounts of data. Good knowledge of databases such as MySQL (scripts, procedures, functions, indexes, relationships, schema management) and NoSQL. Good coding skills in languages such as Python. Proficiency in Microsoft Excel. Analytical and problem-solving skills. Knowledge of data gathering, cleaning, and transforming techniques. Reporting and data visualization skills using software like Tableau, PowerBI, or any other tool. Understanding of data warehousing and ETL techniques. Strong written/verbal communication skills. Ability to analyze existing tools and databases and provide software solution recommendations.
Understanding of addressing and metadata standards.
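As an illustrative sketch of the standardization-and-load step described above (read raw data, normalize the schema, load it into MySQL): the file name, column names, and connection string are assumptions made for the example.

```python
# Illustrative sketch of a small standardization step in a data pipeline:
# read a raw CSV, normalize the schema, and load it into MySQL.
# File name, column names, and the connection string are placeholders.
import pandas as pd
from sqlalchemy import create_engine  # pip install sqlalchemy pymysql

raw = pd.read_csv("raw_appointments.csv")

clean = (
    raw.rename(columns=lambda c: c.strip().lower().replace(" ", "_"))
       .drop_duplicates(subset=["appointment_id"])
       .assign(booked_at=lambda df: pd.to_datetime(df["booked_at"], errors="coerce"))
       .dropna(subset=["booked_at", "user_id"])
)

engine = create_engine("mysql+pymysql://app:secret@localhost:3306/warehouse")
clean.to_sql("appointments_std", engine, if_exists="append", index=False)
print(f"Loaded {len(clean)} standardized rows")
```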
Posted 3 weeks ago
6.0 - 7.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description Skills required: strong SQL (minimum 6-7 years of experience), data warehouse, ETL. The Data and Client Platform Tech project provides all data-related services to internal and external clients of the SST business. The Ingestion team is responsible for getting and ingesting data into the data lake. This is a global team with development teams in Shanghai, Pune, Dublin, and Tampa. The Ingestion team uses Big Data technologies like Impala, Hive, Spark, and HDFS, as well as cloud technologies such as Snowflake for cloud data storage. Responsibilities You will gain an understanding of the complex domain model and define the logical and physical data model for the Securities Services business. You will also constantly improve the ingestion, storage, and performance processes by analyzing them and automating them wherever possible. You will be responsible for defining standards and best practices for the team in the areas of code standards, unit testing, continuous integration, and release management. You will be responsible for improving the performance of queries over lake tables and views. You will work with a wide variety of stakeholders (source systems, business sponsors, product owners, scrum masters, enterprise architects) and must possess excellent communication skills to articulate challenging technical details to various classes of people. You will work in Agile Scrum and complete all assigned tasks/JIRAs per sprint timelines and standards. Qualifications 5-8 years of relevant experience in data development, ETL, data ingestion, and performance optimization. Strong SQL skills are essential; experience writing complex queries spanning multiple tables is required. Knowledge of Big Data technologies (Impala, Hive, Spark) is nice to have. Working knowledge of performance tuning of database queries: understanding the inner workings of the query optimizer, query plans, indexes, partitions, etc. Experience in systems analysis and programming of software applications in SQL and other Big Data query languages. Working knowledge of data modelling and dimensional modelling tools and techniques. Knowledge of working with high-volume data ingestion and high-volume historic data processing is required. Exposure to a scripting language such as shell scripting or Python is required. Working knowledge of consulting/project management techniques and methods. Knowledge of working in Agile Scrum teams and processes. Experience in data quality, data governance, DataOps, and the latest data management techniques is a plus. Education Bachelor's degree/University degree or equivalent experience
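For illustration only: a minimal PySpark sketch of the ingest-then-query pattern the posting describes, writing a partitioned lake table and querying it with Spark SQL. Paths, column names, and table names are placeholders.

```python
# Illustrative sketch: ingest a day's files into a partitioned lake table and
# query it with Spark SQL. Paths, schema, and table names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_ingest").getOrCreate()

raw = spark.read.option("header", True).csv("s3a://landing/trades/2025-07-01/")

curated = (
    raw.withColumn("trade_date", F.to_date("trade_ts"))
       .dropDuplicates(["trade_id"])
)

# Partitioning by trade_date keeps downstream queries pruned and fast.
curated.write.mode("append").partitionBy("trade_date").parquet("s3a://lake/trades/")

spark.read.parquet("s3a://lake/trades/").createOrReplaceTempView("trades")
daily = spark.sql("""
    SELECT trade_date, COUNT(*) AS n_trades
      FROM trades
     WHERE trade_date >= '2025-07-01'
  GROUP BY trade_date
""")
daily.show()
```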
Posted 3 weeks ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Position Purpose Prime is a bundle of services that investment banks and other major financial institutions offer to hedge funds and similar clients. The clients need such services when borrowing securities or cash for the purpose of netting, to allow a specific asset to achieve a higher return. It includes a suite of applications catering to different streams such as Cash, Synthetics, Trade Processing, Margin, and Financing. PB Portal acts as a content integration framework allowing other CIB web applications to present data to users with a similar look and feel and common controls within the web browser. Cash PB caters to equities; Synthetics PB to swaps; Trade Processing to processing execution orders and allocating them to client accounts; Financing to leveraging capabilities of the client/hedge fund to do business on margin; and Margin systems deal with the actual daily margin calculations. We are looking for a Senior Developer with core technical capabilities relevant to the application, able to propose sound technical solutions and to gain functional knowledge and convey it well to junior members of the team. You should be able to bridge the gap between junior members' knowledge and the application's needs, keep technical debt under control, and maintain the best possible DevOps setup for the team. Responsibilities Direct Responsibilities Application development / support / enhancements / bug-fixing / mentoring. Demonstrate a good understanding of the functional aspects of the application. Report progress to the Team Lead. Escalate problems to local management and suggest solutions. Ensure that project and organization standards are followed during the various phases of the software development life cycle and day-to-day development work. Maintain administration tasks, i.e., Jira to record progress against tasks and the Wiki for documentation. Meet deadlines and deliverables. Contributing Responsibilities Develop solutions with respect to the specifications and fit to the architectural and infrastructure constraints of the organizational platform. Propose solutions and approaches; supply impact analysis. Work with quality in mind (scope, defects, performance, testing). Estimate work and report on progress. Liaise with the production release and support team in the context of the application in charge. Technical & Behavioral Competencies Visual Basic.NET, object-oriented programming, C#, WinForms, WPF, RESTful Web API. Sybase/SQL Server, T-SQL, stored procedures, views, performance tuning concepts, indexes, etc. DevOps - Jenkins, Sonar, Bitbucket, Rundeck, etc. Nice to have: Shell/Perl scripting. Skills Referential Specific Qualifications (if required) Behavioural Skills: Ability to collaborate / Teamwork; Attention to detail / rigor; Communication skills - oral & written; Ability to deliver / Results driven. Transversal Skills: Analytical ability; Ability to understand, explain and support change; Ability to develop and adapt a process; Ability to develop others & improve their skills. Education Level: Bachelor Degree or equivalent Experience Level
Posted 3 weeks ago
4.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
About Us Alyke is a fast-growing, product-first startup redefining how people make real friendships. We're building the next-generation social experience rooted in authenticity, fun, and meaningful connection. Our app is live and scaling across markets with real user traction. Role Overview We’re looking for a Senior Backend Developer who thrives in a fast-paced startup environment, writes high-quality production-ready code, and can own the maintenance, optimization, and evolution of our backend systems . You’ll work closely with our Product, Frontend, QA, and DevOps teams to ensure our systems are scalable, secure, and high-performing at all times. Key Responsibilities Take end-to-end ownership of critical backend services in production Design, build, and maintain robust APIs and microservices using Node.js and TypeScript Optimize and maintain MongoDB queries, schemas, and indexes for performance and scalability Architect and maintain integrations with ElasticSearch for high-performance search and analytics Design and implement event-based cron jobs and task pipelines to drive platform automation Integrate and manage background queues and workers using Amazon SQS Monitor, debug, and continuously improve backend performance and reliability using observability tools (e.g., Datadog, Sentry) Collaborate with Product, Frontend, and QA teams to deliver scalable and bug-free features Conduct code reviews and mentor junior developers Requirements 4+ years of backend development experience in Node.js with TypeScript Advanced skills in MongoDB : Aggregation pipelines, indexing strategies, and query optimization Hands-on experience with ElasticSearch for text search, ranking, and filtering Proficiency with Amazon SQS and queue-based task processing Solid understanding of asynchronous programming , event-driven architecture , and cron-based workflows Experience in maintaining and debugging production systems at scale Familiarity with Docker and CI/CD pipelines Strong debugging, code quality, and system design skills Git experience with collaborative code practices Nice to Have Experience with Redis , Kafka , or similar Exposure to cloud infrastructure (AWS, GCP, etc.) Understanding of background job runners , task queues, and distributed systems
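As a hedged illustration of the queue-consumer pattern this posting describes (long-poll SQS, process, delete on success) — shown in Python/boto3 purely for brevity, although the role itself uses Node.js/TypeScript. The queue URL and event shape are invented for the sketch.

```python
# Illustrative sketch of an SQS worker loop: long-poll, process, then delete.
# Queue URL, region, and message contents are placeholders.
import json
import boto3  # pip install boto3

sqs = boto3.client("sqs", region_name="ap-south-1")
QUEUE_URL = "https://sqs.ap-south-1.amazonaws.com/123456789012/match-events"

def process(event: dict) -> None:
    # Placeholder for real work, e.g. updating MongoDB or reindexing in Elasticsearch.
    print("processing", event.get("type"))

while True:
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=20,   # long polling avoids busy-waiting
    )
    for msg in resp.get("Messages", []):
        process(json.loads(msg["Body"]))
        # Delete only after successful processing so failures are retried.
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```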
Posted 3 weeks ago