2.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Responsibilities
The Developer is responsible for developing, testing, and maintaining the application(s) following established processes:
- Develop and maintain technical designs based on requirements
- Develop application code while following coding standards
- Develop and execute unit tests
- Complete analysis and documentation as required by the project
- Support application testing and resolve test defects
- Report status updates as required by the project
- Follow established project execution processes
- Get actively involved in training, self-development, and knowledge sharing

Qualifications
- At least 2 years of experience building solutions on complex systems using AWS services such as Step Functions, Lambda, Glue, DynamoDB, and RDS
- Proficient in Python programming
- Experience with React JS
- Asset Management/Pension domain knowledge is preferable

Other Additional Information
Excellent problem-solving and analytical skills, good documentation skills, strong communication and interpersonal skills, and good time management. Good aptitude, positive attitude, and a strong team player. Quick learner with a stretch mindset; expertise across multiple applications/functionalities and domains, with an inclination to learn quickly. Familiarity with MS Office, JIRA, and SharePoint.
Posted 18 hours ago
5.0 years
20 - 30 Lacs
Pune, Maharashtra, India
On-site
This role is for one of Weekday's clients
Salary range: Rs 2000000 - Rs 3000000 (i.e., INR 20-30 LPA)
Min Experience: 5 years
Location: Kochi, Pune, Chennai
JobType: full-time

Requirements
We are seeking an experienced and motivated Cloud DevOps Engineer to join our high-performing technology team. In this role, you will lead the design, development, and deployment of scalable, secure, and reliable cloud-native solutions on AWS. You will work closely with development, QA, and operations teams to manage containerized environments, implement CI/CD automation, and ensure high availability of cloud-based applications and infrastructure. This role also involves mentoring junior team members and fostering a DevOps culture across projects.

Key Responsibilities:
Cloud Architecture & Development: Design and develop robust AWS-based serverless and containerized applications using services like Lambda, ECS, and EKS. Develop and manage infrastructure as code (IaC) using Terraform or AWS CDK. Create secure and cost-optimized solutions that adhere to cloud best practices.
CI/CD Automation & Deployment: Set up, maintain, and enhance CI/CD pipelines using GitLab CI, GitHub Actions, Jenkins, or ArgoCD. Automate release workflows and infrastructure provisioning to accelerate development cycles.
Monitoring & Observability: Implement observability tools such as Datadog, New Relic, or Dynatrace to monitor application and infrastructure health. Analyze system performance metrics and troubleshoot issues proactively.
Collaboration & Mentorship: Collaborate with cross-functional teams in an Agile environment to deliver production-grade solutions. Participate in code reviews and architecture reviews, and mentor junior engineers.

Required Skills & Qualifications:
Technical Expertise: Strong experience with modern programming languages such as TypeScript, Python, or Go. Deep knowledge of AWS serverless technologies and managed database services like Amazon RDS and DynamoDB. Solid hands-on experience with Kubernetes, AWS EKS/ECS, or OpenShift.
Tool Proficiency: Expertise in developer tools such as Git, Jira, and Confluence. Proficient in Terraform or AWS CDK for IaC. Familiarity with secrets management tools like HashiCorp Vault.
Soft Skills: Strong problem-solving, analytical thinking, and communication skills. Ability to take ownership, drive initiatives, and mentor team members.

Preferred Qualifications:
AWS certifications (e.g., Solutions Architect, DevOps Engineer) or equivalent. Experience with multi-cloud environments (Azure, GCP). Exposure to AI/ML services such as Amazon SageMaker or Amazon Bedrock. Understanding of SSO protocols like OAuth2, OpenID Connect, or SAML. Familiarity with Kafka, Amazon MSK, and contact center solutions like Amazon Connect. Knowledge of FinOps practices for cloud cost optimization.
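For context on the serverless-plus-IaC pattern this role centres on, here is a minimal, illustrative sketch using AWS CDK v2 for Python (one of the IaC options the posting names). The stack name, function name, and the "lambda" asset path are hypothetical, not part of any actual codebase.

```python
# Minimal AWS CDK v2 (Python) sketch: one serverless function defined as code.
# Assumes aws-cdk-lib is installed and a local ./lambda directory containing
# handler.py with a main(event, context) function (both hypothetical).
from aws_cdk import App, Stack, Duration
from aws_cdk import aws_lambda as _lambda
from constructs import Construct


class OrderServiceStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Package the local asset directory as a Lambda function.
        _lambda.Function(
            self, "OrderHandler",
            runtime=_lambda.Runtime.PYTHON_3_11,
            handler="handler.main",
            code=_lambda.Code.from_asset("lambda"),
            timeout=Duration.seconds(30),
            memory_size=256,
        )


app = App()
OrderServiceStack(app, "OrderServiceStack")
app.synth()
```

Synthesizing (`cdk synth`) and deploying (`cdk deploy`) such a stack is the kind of automated provisioning step a CI/CD pipeline (GitLab CI, GitHub Actions, Jenkins, or ArgoCD) would typically run.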
Posted 18 hours ago
5.0 years
0 Lacs
Kochi, Kerala, India
On-site
This role is for one of Weekday's clients
Location: Chennai, Pune, Kochi
JobType: full-time

Requirements
We are looking for a skilled and versatile JAVA FSD AWS Developer to join our client's Agile/SAFe development teams. In this role, you will participate in the design, development, integration, and deployment of enterprise-grade applications built on both modern cloud-native architectures (AWS) and legacy platforms. You will ensure high-quality, testable, secure, and compliant code while collaborating in a fast-paced Agile setup.

Key Responsibilities:
Agile Participation & Code Quality: Active involvement in Scrum and SAFe team events, including planning, daily stand-ups, reviews, and retrospectives. Create and validate testable features, ensuring coverage of both functional and non-functional requirements. Deliver high-quality code through practices like pair programming and test-driven development (TDD). Maintain operability, deployability, and integration readiness of application increments. Ensure full compliance with internal frameworks such as PITT and established security protocols (SAST, DAST).
Development & Integration: Develop software solutions using a diverse tech stack: TypeScript, Java, SQL, Python, COBOL, and shell scripting; Spring Boot, Angular, Node.js, and Hibernate. Work across multiple environments and technologies including Linux, Apache, Tomcat, Elasticsearch, and IBM DB2. Build and maintain web applications, backend services, and APIs using modern and legacy technologies.
AWS & Cloud Infrastructure: Hands-on development and deployment with AWS services: EKS, ECR, IAM, SQS, SES, S3, and CloudWatch. Develop Infrastructure as Code using Terraform. Ensure system reliability, monitoring, and traceability using tools like Splunk, UXMon, and AWS CloudWatch.
Systems & Batch Integration: Work with Kafka, particularly Streamzilla Kafka from PAG, for high-throughput messaging. Design and consume both REST and SOAP APIs for integration with third-party and internal systems. Manage and automate batch job scheduling via IBM Tivoli Workload Scheduler (TWS/OPC) and HostJobs.

Required Skills & Experience:
5+ years of experience in full stack development, DevOps, and mainframe integration. Strong programming experience in: Languages: TypeScript, Java, Python, COBOL, shell scripting; Frameworks & Tools: Angular, Spring Boot, Hibernate, Node.js; Databases: SQL, IBM DB2, Elasticsearch. Proficient in AWS cloud services including container orchestration, IAM, S3, CloudWatch, SES, SQS, and Terraform. Strong understanding of API development and integration (REST & SOAP). Experience in secure software development using SAST/DAST, TDD, and compliance frameworks (e.g., PITT). Familiarity with Kafka messaging systems, particularly Streamzilla Kafka. Monitoring and observability experience using tools like Splunk, UXMon, or equivalents.

Preferred Qualifications:
Experience with PCSS Toolbox or similar enterprise tooling. Prior exposure to highly regulated industries (e.g., automotive, banking, insurance). Bachelor's or Master's degree in Computer Science, Information Technology, or related fields. Certifications in AWS or DevOps tools are a plus.
Posted 18 hours ago
7.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Requirements
Description and Requirements

Position Summary
We are seeking a forward-thinking and enthusiastic Engineering and Operations Specialist to manage and optimize our MongoDB and Splunk platforms. The ideal candidate will have in-depth experience in at least one of these technologies, with a preference for experience in both.

Job Responsibilities
- Perform engineering and operational tasks for the MongoDB and Splunk platforms, ensuring high availability and stability.
- Continuously improve the stability of the environments, leveraging automation, self-healing mechanisms, and AIOps.
- Develop and implement automation using technologies such as Ansible, Python, and shell scripting.
- Manage CI/CD deployments and maintain code repositories.
- Utilize Infrastructure/Configuration as Code practices to streamline processes.
- Work closely with development teams to integrate database and observability/logging tools effectively.
- Manage design, distribution, performance, replication, security, availability, and access requirements for large and complex MongoDB databases (versions 6.0, 7.0, 8.0 and above) on Linux OS, both on-premises and cloud-based.
- Design and develop physical layers of databases to support various application needs; implement backup, recovery, archiving, and conversion strategies and performance tuning; manage job scheduling, application releases, and database changes; and implement database and infrastructure security best practices to meet compliance requirements.
- Monitor and tune MongoDB and Splunk clusters for optimal performance, identifying bottlenecks and troubleshooting issues.
- Analyze database queries, indexing, and storage to ensure minimal latency and maximum throughput.
- As the senior Splunk system administrator, build, maintain, and standardize the Splunk platform, including forwarder deployment, configuration, dashboards, and maintenance across Linux OS.
- Debug production issues by analyzing logs directly and using tools like Splunk.
- Work in an Agile model with an understanding of Agile concepts and Azure DevOps.
- Learn new technologies based on demand and help team members by coaching and assisting.

Education, Technical Skills & Other Critical Requirements

Education
Bachelor's degree in Computer Science, Information Systems, or another related field with 7+ years of IT and infrastructure engineering work experience. MongoDB Certified DBA or Splunk Certified Administrator is a plus. Experience with cloud platforms like AWS, Azure, or Google Cloud.

Experience (In Years)
7+ years total IT experience and 4+ years relevant experience in MongoDB, plus working experience as a Splunk administrator.

Technical Skills
- In-depth experience with either MongoDB or Splunk, with a preference for exposure to both. Strong enthusiasm for learning and adopting new technologies.
- Experience with automation tools like Ansible, Python, and shell scripting. Proficiency in CI/CD deployments, DevOps practices, and managing code repositories. Knowledge of Infrastructure/Configuration as Code principles. Developer experience is highly desired. Data engineering skills are a plus. Experience with other database technologies and observability tools is a plus.
- Extensive experience managing and optimizing MongoDB databases, designing robust schemas, and implementing security best practices, ensuring high availability, data integrity, and performance for mission-critical applications.
- Working experience in database performance tuning with MongoDB tools and techniques.
- Management of database elements, including creation, alteration, deletion, and copying of schemas, databases, tables, views, indexes, stored procedures, triggers, and declarative integrity constraints.
- Extensive experience in database backup and recovery strategy design, configuration, and implementation using backup tools (mongodump, mongorestore) and Rubrik.
- Extensive experience configuring and enforcing SSL/TLS encryption for secure communication between MongoDB nodes.
- Working experience configuring and maintaining Splunk environments, developing dashboards, and implementing log management solutions to enhance system monitoring and security across Linux OS.
- Experience with Splunk migration and upgrades on standalone Linux OS and cloud platforms is a plus.
- Application administration for a single security information management system using Splunk.
- Working knowledge of Splunk Search Processing Language (SPL), Splunk architecture, and its components (indexer, forwarder, search head, deployment server).
- Extensive experience with both MongoDB and Splunk replication between primary and secondary servers to ensure high availability and fault tolerance.
- Managed infrastructure security policy per industry best practice by designing, configuring, and implementing privileges and policies on the database using RBAC, as well as in Splunk.
- Scripting skills and automation experience using DevOps, repos, and Infrastructure as Code.
- Working experience with containers (AKS and OpenShift) is a plus.
- Working experience with cloud platforms (Azure, Cosmos DB) is a plus.
- Strong knowledge of ITSM processes and tools (ServiceNow).
- Ability to work 24x7 rotational shifts to support the database and Splunk platforms.

Other Critical Requirements
Strong problem-solving abilities and a proactive approach to identifying and resolving issues. Excellent communication and collaboration skills. Ability to work in a fast-paced environment and manage multiple priorities effectively.

About MetLife
Recognized on Fortune magazine's list of the 2025 "World's Most Admired Companies" and Fortune World's 25 Best Workplaces™ for 2024, MetLife, through its subsidiaries and affiliates, is one of the world's leading financial services companies, providing insurance, annuities, employee benefits, and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East. Our purpose is simple: to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by empathy, we're inspired to transform the next century in financial services. At MetLife, it's #AllTogetherPossible. Join us!
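As a small, hedged illustration of the query and indexing analysis this role calls for, the following PyMongo sketch creates a compound index and inspects a query plan. The connection string, database, collection, and field names are placeholders, not details from the actual environment.

```python
# Illustrative only: connection details, database, and collection are placeholders.
from pymongo import MongoClient, ASCENDING

# In production this would be a secured URI with TLS and authentication.
client = MongoClient("mongodb://localhost:27017")
claims = client["insurance"]["claims"]

# Ensure a supporting index for a frequent lookup pattern.
claims.create_index(
    [("policy_id", ASCENDING), ("claim_date", ASCENDING)],
    name="policy_claim_date_idx",
)

# Inspect the winning plan to confirm the index is used (IXSCAN vs COLLSCAN).
plan = claims.find({"policy_id": "P-1001"}).explain()
print(plan["queryPlanner"]["winningPlan"])

# Basic replica-set health check (only works on a replica-set deployment).
print(client.admin.command("replSetGetStatus")["myState"])

client.close()
```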
Posted 18 hours ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Summary
Independently lead Data or Data Science and AI global initiatives as part of the overall Enterprise and Divisional Data or Data Science Strategy to contribute to solving unmet medical needs, in collaboration with various stakeholders.
- Role model a culture of analytical and data-driven decision making and data/data sciences across Novartis leadership in research, development, commercial, manufacturing, and support functions.
- Deep technical expert with key accountability for a significant project or business activity.
- Shape the future direction for your own part of the organization based on the overall organization strategy provided by business leadership. If managing a team: empower the team and provide guidance and coaching.

About The Role
Major accountabilities:
- Develops and executes a roadmap for the team to innovate in multiple business verticals by transforming the way problems are solved using effective data management, data science, and artificial intelligence.
- Strategically coordinates, prioritizes, and efficiently allocates team resources to critical initiatives: plans resources proactively, anticipates and actively manages change, sets stakeholder expectations as required, identifies operational risks and enables the team to drive issues to resolution, balances multiple priorities, and minimizes surprise escalations.
- Designs, develops, and delivers data science based insights, outcomes, and innovation (using mathematics, computer science, statistics, engineering, management science, technology, economics, etc.) and creates proofs of concept and blueprints to drive faster, timely, highly precise, workable, and proactive decision making based on data-driven insights and science, shaping the strategic glidepath of the company.
- Designs, develops, and delivers master data, data governance, and data quality initiatives to drive business priorities and scale foundational capabilities to support data science initiatives as well as data-driven decision making.
- Selection of innovative methods including machine learning, deep learning, other cognitive computing methods, and artificial intelligence.
- Exploratory analytics to identify trends, patterns, or process drivers.
- Support of user experience design and development of advanced analytics products.
- Collaborates with globally dispersed internal stakeholders, external partners and institutions, and cross-functional teams to solve critical business problems and drive operational efficiencies and innovative approaches.
- Provides expert input into strategy for their business partners.
- Proactively evaluates the need for technology and novel scientific software, visualization tools, and new approaches to computation to increase the efficiency and quality of Novartis data sciences practices.
- Provides agile consulting, guidance, and non-standard exploratory analysis for unplanned urgent problems.
- Acts as a catalyst for innovation in data management, data science, and AI.
- May lead a team/function or provide in-depth technical expertise in a scientific/technical field depending upon the career path (manager/individual contributor).
- Coaches and empowers junior data professionals and data scientists across Novartis. Contributes to the development of Novartis data management and data science capabilities.
- Distribution of marketing samples (where applicable).

Essential Requirements
10+ years of relevant experience in Data Science.

Desirable Skills
Applied Mathematics. Artificial Intelligence (AI). AWS (Amazon Web Services). Big Data. Building Construction. Cloud (Computing). Computer Science. Data Governance. Data Literacy. Data Management. Data Quality. Data Science. Data Strategy. Electrical Transformers. Machine Learning (ML). Master Data Management. Professional Services. Python (Programming Language). R (Programming Language). Random Forest Algorithm. Statistical Analysis. Time Series Analysis.

You'll receive:
You can find everything you need to know about our benefits and rewards in the Novartis Life Handbook. https://www.novartis.com/careers/benefits-rewards

Commitment To Diversity And Inclusion
Novartis is committed to building an outstanding, inclusive work environment and diverse teams representative of the patients and communities we serve.

Accessibility and accommodation
Novartis is committed to working with and providing reasonable accommodation to individuals with disabilities. If, because of a medical condition or disability, you need a reasonable accommodation for any part of the recruitment process, or in order to perform the essential functions of a position, please send an e-mail to diversityandincl.india@novartis.com and let us know the nature of your request and your contact information. Please include the job requisition number in your message.

Join our Novartis Network:
If this role is not suitable to your experience or career goals but you wish to stay connected to hear more about Novartis and our career opportunities, join the Novartis Network here: https://talentnetwork.novartis.com/network

Why Novartis:
Helping people with disease and their families takes more than innovative science. It takes a community of smart, passionate people like you. Collaborating, supporting and inspiring each other. Combining to achieve breakthroughs that change patients' lives. Ready to create a brighter future together? https://www.novartis.com/about/strategy/people-and-culture
Posted 18 hours ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Position Overview: We are seeking a talented Data Engineer with expertise in Apache Spark, Python/Java, and distributed systems. The ideal candidate will be skilled in creating and managing data pipelines using AWS.

Key Responsibilities: Design, develop, and implement data pipelines for ingesting, transforming, and loading data at scale. Utilise Apache Spark for data processing and analysis. Utilise AWS services (S3, Redshift, EMR, Glue) to build and manage efficient data pipelines. Optimise data pipelines for performance and scalability, considering factors like partitioning, bucketing, and caching. Write efficient and maintainable Python code. Implement and manage distributed systems for data processing. Collaborate with cross-functional teams to understand data requirements and deliver optimal solutions. Ensure data quality and integrity throughout the data lifecycle.

Qualifications: Proven experience with Apache Spark and Python/Java. Strong knowledge of distributed systems. Proficiency in creating data pipelines with AWS. Excellent problem-solving and analytical skills. Ability to work independently and as part of a team. Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent experience). Proven experience in designing and developing data pipelines using Apache Spark and Python. Experience with distributed systems concepts (Hadoop, YARN) is a plus. In-depth knowledge of AWS cloud services for data engineering (S3, Redshift, EMR, Glue). Familiarity with data warehousing concepts (data modeling, ETL) is preferred. Strong programming skills in Python (Pandas, NumPy, Scikit-learn are a plus). Experience with data pipeline orchestration tools (Airflow, Luigi) is a plus. Excellent problem-solving and analytical skills. Strong communication and collaboration skills.

Preferred Qualifications: Experience with additional AWS services (e.g., AWS Glue, AWS Lambda, Amazon Redshift). Familiarity with data warehousing and ETL processes. Knowledge of data governance and best practices. A good understanding of OOP concepts. Hands-on experience with SQL database design. Experience with Python, SQL, and data visualization/exploration tools.
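As a rough sketch of the kind of Spark-on-AWS pipeline described above, a minimal PySpark job might look like the following. The bucket names and columns are made up for illustration, and this is not a production recipe.

```python
# Illustrative PySpark job: read raw CSV from S3, clean it, and write
# partitioned Parquet back to S3. Paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

raw = (
    spark.read
    .option("header", "true")
    .csv("s3://example-raw-bucket/orders/")
)

cleaned = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("amount").cast("double") > 0)
)

(
    cleaned.write
    .mode("overwrite")
    .partitionBy("order_date")   # partitioning as a basic scalability measure
    .parquet("s3://example-curated-bucket/orders/")
)

spark.stop()
```

On EMR or Glue the same shape of job would typically be scheduled by an orchestrator such as Airflow, which the posting mentions as a plus.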
Posted 18 hours ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Description & Requirements Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world. Here, everyone is part of the story. Part of a community that connects across the globe. A place where creativity thrives, new perspectives are invited, and ideas matter. A team where everyone makes play happen. The Server Engineer will report to the Technical Director. Responsibilities Design, develop, and run a fast, scalable, highly available game service all the way from conception to delivery to live service operations Work with designers, client engineering, and production teams to achieve gameplay goals Implement security best practices and original techniques to keep user data secure and prevent cheating Create and run automated testing, readiness testing, and deployment plans Monitor the performance and costs of the server infrastructure to improve our game Design and implement data transformation layers using Java/Spring/AWS/Protobuf Collaborate with game server and web frontend teams to define API contracts Manage Release Ops / Live Ops of web services Qualifications We encourage you to apply if you can meet most of the requirements and are comfortable opening a dialogue to be considered. 4+ years development of scalable back-end services BS degree in Computer Science or equivalent work experience Proficiency in PHP, Java Experience with Cloud services like Amazon Web Services or Google Cloud Platform Experience with Redis Experience with Database Design and usage of large datasets in both relational (MySQL, Postgres) and NoSQL (Couchbase, DynamoDB) environments Experience defining API contracts and collaborating with cross-functional teams Bonus 3+ years of experience developing games using cloud services like AWS, Azure, Google Cloud Platform, or similar Proficient in technical planning, solution research, proposal, and implementation Background using metrics and analytics to determine the quality or priority Comfortable working across client and server codebases Familiar with profiling, optimising, and debugging scalable data systems Passion for making and playing games About Electronic Arts We’re proud to have an extensive portfolio of games and experiences, locations around the world, and opportunities across EA. We value adaptability, resilience, creativity, and curiosity. From leadership that brings out your potential, to creating space for learning and experimenting, we empower you to do great work and pursue opportunities for growth. We adopt a holistic approach to our benefits programs, emphasizing physical, emotional, financial, career, and community wellness to support a balanced life. Our packages are tailored to meet local needs and may include healthcare coverage, mental well-being support, retirement savings, paid time off, family leaves, complimentary games, and more. We nurture environments where our teams can always bring their best to what they do. Electronic Arts is an equal opportunity employer. All employment decisions are made without regard to race, color, national origin, ancestry, sex, gender, gender identity or expression, sexual orientation, age, genetic information, religion, disability, medical condition, pregnancy, marital status, family status, veteran status, or any other characteristic protected by law. We will also consider employment qualified applicants with criminal records in accordance with applicable law. 
EA also makes workplace accommodations for qualified individuals with disabilities as required by applicable law.
Posted 18 hours ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: Project Manager Experience: 8+ Years Location: Hyderabad Work Mode: Onsite/Hybrid (as per project needs) Availability: Immediate Joiners or Serving Notice Period Only Job Summary: We are seeking a seasoned IT Project Manager with strong technical knowledge and leadership capabilities to oversee software development projects across various domains and technologies. The ideal candidate will have hands-on experience in managing cross-functional teams, using modern project management methodologies, and delivering scalable IT solutions on time and within budget. Key Responsibilities: Manage full lifecycle IT projects – from initiation, planning, execution to closure. Lead cross-functional development teams including backend, frontend, DevOps, QA, and UI/UX. Define project goals, success criteria, scope, and deliverables that support business objectives. Allocate resources and track project performance using Agile/Scrum or hybrid methodologies. Communicate clearly with internal teams, clients, and stakeholders on project progress and risks. Ensure timely delivery by closely monitoring sprints, timelines, and deliverables. Create and maintain comprehensive project documentation and risk registers. Use project tracking tools like Jira, Trello, Confluence, MS Project, or Asana . Collaborate with architects and tech leads on system design, scalability, and security. Manage project budgets, vendor coordination, and procurement activities if required. Technical & IT Skills (Must-Have Exposure): Programming Languages: Java, JavaScript, Python, SQL, Shell Scripting (understanding level) Backend Technologies: Java (Spring Boot), Node.js, .NET Core Frontend Technologies: Angular, React, HTML5, CSS3, JavaScript, TypeScript Cloud Platforms: AWS, Azure, GCP – with exposure to services like EC2, S3, Lambda, RDS, etc. Databases: MySQL, PostgreSQL, Oracle, MongoDB DevOps Tools: Git, Jenkins, Docker, Kubernetes, Maven, Ansible Project & Collaboration Tools: Jira, Confluence, Slack, MS Teams, Trello, Asana Version Control: Git, GitHub, GitLab, Bitbucket CI/CD Concepts: Build pipelines, deployment automation API Technologies: RESTful APIs, Postman, Swagger Core Skills Required: Excellent project planning, estimation, and budgeting skills Strong people management and team leadership abilities Proficient in risk management and stakeholder communication Deep understanding of SDLC , Agile/Scrum , and hybrid models Strong problem-solving and analytical thinking Excellent verbal and written communication skills Qualifications: Bachelor’s or Master’s degree in Computer Science, Information Technology, or related field 8+ years of overall IT experience with 4+ years as a Project Manager PMP, PRINCE2, or Certified Scrum Master (CSM) certification is highly desirable Nice to Have: Domain expertise in Banking/Finance, Healthcare, E-commerce, or Enterprise SaaS Familiarity with Agile scaling frameworks like SAFe Experience managing both in-house and vendor-driven development Work Mode: Onsite/Hybrid – Hyderabad (Candidates should be open to working from the office as per project requirements) Availability: Immediate joiners or serving notice period will be given priority.
Posted 18 hours ago
8.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Job Title: Senior Java Developer Location: Pune, Maharashtra, India Employment Type: Full-Time Experience Required: 8+ Years Position Overview We are seeking an experienced and highly skilled Senior Java Developer to join our technology team. The ideal candidate will have a strong background in Java development, microservices architecture, and cloud technologies. This role involves leading complex software development initiatives, mentoring junior developers, and collaborating across teams to deliver high-quality, scalable solutions. Key Responsibilities Application Development: Design, develop, and maintain robust, high-performance applications using Java 8 and related frameworks (Spring Boot, Hibernate, etc.). Architecture & Design: Lead the design and implementation of scalable software solutions that align with architectural standards and business requirements. API Development: Create and manage RESTful APIs and integrate with third-party systems as needed. Cloud Integration: Utilize AWS services (EC2, S3, Lambda, RDS, etc.) for application deployment and cloud-native development. DevOps & CI/CD: Implement and maintain Continuous Integration and Continuous Deployment (CI/CD) pipelines using Jenkins, GitLab CI, or equivalent tools. Code Quality & Reviews: Conduct code reviews, ensure adherence to coding standards, and continuously improve development practices. Mentorship: Provide technical leadership and mentorship to junior team members, fostering knowledge sharing and skills development. Cross-functional Collaboration: Work closely with product managers, QA, DevOps, and other stakeholders in an Agile environment to ensure timely and successful project delivery. Hybrid Work Leadership: Effectively manage and contribute in both remote and in-office settings, ensuring smooth communication and project continuity. Required Qualifications Minimum 8 years of hands-on experience in Java application development. Strong proficiency in Java 8+ , Spring Boot, Hibernate, Maven, and other modern Java technologies. Demonstrated experience in designing and consuming RESTful APIs and working within a microservices architecture . Practical experience with AWS services and cloud-based application deployment. Solid understanding of CI/CD pipelines , version control (Git), and DevOps practices. Excellent analytical and problem-solving abilities. Strong communication skills with the ability to articulate technical concepts to both technical and non-technical stakeholders. Bachelor’s degree in Computer Science, Engineering, Information Technology, or a related discipline. Preferred Qualifications Familiarity with Docker , Kubernetes, or other containerization and orchestration tools. Experience with monitoring and logging tools such as ELK Stack, CloudWatch, or Prometheus. Knowledge of NoSQL databases or distributed systems.
Posted 18 hours ago
5.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Mandatory: 5+ years of experience on the Mendix platform.
* Minimum 5 years of hands-on experience building applications with the Mendix platform (version 8+ strongly preferred).
* Mendix Advanced Developer Certification is highly desirable.
* Expert knowledge of Mendix core components including domain modeling, microflow development, and page construction.
* Demonstrated ability to integrate Mendix applications with REST/SOAP web services.
* Experience with source control systems (Git), continuous integration/deployment workflows, and automated deployment processes.
* Working knowledge of Java/JavaScript for custom Mendix extensions when required.
* Strong foundation in user experience design principles and responsive web development.
* Outstanding analytical thinking and interpersonal communication abilities.

Preferred Additional Experience
* Background in deploying Mendix applications across cloud platforms (AWS/Azure) or on-premises infrastructure.
* Understanding of business process management, workflow automation, or enterprise integration frameworks.
* Experience with DevOps toolchain and Agile project management platforms (such as JIRA, Azure DevOps).
Posted 18 hours ago
2.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job responsibilities:
- Review product requirements and specifications and provide feedback.
- Select or identify test cases for automation from existing test case documentation.
- Apply the design and test automation strategy document; create, enhance, debug, and run test cases.
- Configure a Rest Assured automation framework using Java.
- Configure a WebdriverIO (WDIO) project using JavaScript.
- Work in an agile environment, follow process guidelines, and deliver tasks.
- Participate in software architecture and design discussions and code reviews.
- Be proactive, take ownership, and be accountable.
- Stay up to date with new testing tools and test strategies.
- Hands-on experience in Java, Rest Assured, WebdriverIO, JMeter, etc.

Required Skills:
- Familiar with Java, JavaScript, and/or other programming languages.
- Familiar with OOP concepts and SDLC and STLC models.
- Detail oriented.
- Ability to empathize with customers.

Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 2+ years' experience in SDET.

About RTDS:
Founded in 2010, Real Time Data Services (RTDS) is a group of companies excelling in global information technology, specializing in Cloud Computing and Cloud Telephony. We empower businesses worldwide with technologically advanced solutions that streamline operations and enhance efficiency. As a market leader, we've catered to 10,000+ clients across the globe, consistently guaranteeing exceptional services.

Our Mission: To be at the forefront of global tech leaders in Cloud Computing by striving towards excellence in our robust products and services, providing a competitive edge to our customers.

Our Vision: To achieve excellence through continuous improvement, innovation, and integrity, driven by a results-oriented and collaborative approach.

Our Brands:
AceCloud: AceCloud is a leading provider of high-performance, affordable cloud solutions for SMBs and enterprises. Its comprehensive suite of services includes Public Cloud, Private Cloud, Cloud GPUs, Kubernetes, Infrastructure as a Service (IaaS), and AWS Services. AceCloud works closely with AWS for the SMB and startup verticals PAN India, specializing in cloud assessment, AWS migration, application and database modernization, as well as data analytics, machine learning, and AI. With a strong emphasis on innovation and customer satisfaction, AceCloud offers single-click deployment and 24/7 human support to ensure seamless operations for its clients. Learn more: https://acecloud.ai/
Ace Cloud Hosting: Headquartered in Florida, USA, Ace Cloud Hosting is a leader in managed hosting with over 15 years of expertise in cloud-based technologies. Its services include Accounting/Tax Application Hosting, Managed Security Services, Managed IT Services, and Hosted Virtual Desktop Solutions. Learn more: https://www.acecloudhosting.com/

Key Highlights:
Industry Experience: 15+ years in the industry, serving over 8,000 clients globally with a team of 600+ employees.
Data Center Partners: 10+ data center partners located across the USA, UK, and India.
Strategic Partnerships: Microsoft Direct Partner under the CSP Program; Intuit Authorized Commercial Hosting Provider; AWS Advanced Consulting Partner with Storage & SMB Competencies; VMware Enterprise Partner for Infrastructure & Desktop Virtualization solutions.
Accreditations and Memberships: ISO/IEC 27001:2022 Certified; registered with NASSCOM; member of the Internet Telephony Services Providers' Association in the UK.
Awards and Recognitions: Customer Service Department of the Year Stevie Award (2024) CPA Practice Advisor Readers' Choice Awards (2023) VMware Accelerating Cloud Provider Partner Award (2020) K2 Quality Award for Customer Satisfaction (2019) Great User Experience Award by FinancesOnline (2018) User Favourite Award by Accountex USA (2016) Contact Information Website: https://www.myrealdata.in
Posted 18 hours ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Hi, Hope you are doing well. This is Marudhu from Wall Street. our client is looking for a Systems Administrator in Madhapur, HY (Onsite). Your experience and skills match the client's needs; please share your updated resume if you are interested. If not, kindly ignore. Job Title: Systems Administrator Location: Hyderabad, IN (Onsite) Project: Long Term Contract Interview: In-person Must have: Single Sign-On (SSO), Google Workspace, AWS, Endpoint Detection and Response (EDR), MS Office, Okta, and Bitbucket. Job Description: Minimum Requirements: Strong understanding of Single Sign-On (SSO) solutions, particularly Okta, and experience with Google Workspace. Experience with Amazon Web Services (AWS) or other cloud platforms. Proficiency with collaboration tools such as Slack and Microsoft Office Suite. Knowledge of Endpoint Detection and Response (EDR) solutions and Mobile Device Management (MDM) systems. Familiarity with Microsoft Entra, including Azure Active Directory, Conditional Access, and its various components. Excellent troubleshooting skills with the ability to work in fast-paced environments. Key Responsibilities: Implement and maintain SSO solutions (Okta, Google Workspace, etc.) for seamless access across multiple platforms. Monitor and optimize Microsoft Office 365 and other SaaS applications for performance and security. Configure and manage EDR and MDM systems to secure endpoints and mobile devices. Manage and optimize Microsoft Entra configurations to ensure secure access controls. Provide technical support, respond to user requests, troubleshoot issues, and escalate when necessary. Stay informed on the latest IT trends and implement best practices. Okta SSO and SCIM Configuration: Configure and maintain Okta SSO integration with applications like Office 365, AWS, and Google Workspace. Implement SCIM (System for Cross-domain Identity Management) for efficient user provisioning and de-provisioning. Ensure a smooth login experience for employees and partners. RBAC Rule Maintenance: Define and maintain Role-Based Access Control (RBAC) policies in Okta. Update RBAC rules based on new hires or role changes. Monitor RBAC logs for potential security issues and adjust policies as needed. Endpoint Protection: Configure and manage Mobile Device Management (MDM) solutions (Entra, Mosyle, etc.) to enforce security compliance. Set up and manage EDR tools (e.g., Sentinel One) for detecting and responding to endpoint security threats. Analyse and monitor EDR logs to identify and mitigate security risks. Endpoint Troubleshooting: Investigate system and event logs to diagnose and troubleshoot endpoint issues (Mac and PC). Work with users to identify problems and provide resolutions or escalate to higher-level support when necessary. Application Provisioning and De-provisioning: Manage application provisioning, ensuring correct access controls and authentication mechanisms. Handle de-provisioning for departing employees or role transitions, ensuring timely revocation of access. Other Tasks: Track IT service desk metrics like ticket resolution rates and response times. Collaborate with other IT teams, including Security Operations, for integrated systems management. Stay updated with emerging technologies and industry best practices to enhance IT services. Thanks & Regards, Marudhu Pandian Sr. Technical Recruiter Email : mpandian@wallstreetcs.com Wall Street Consulting Services, LLC | 100 Overlook Center,2nd Floor, Princeton, NJ 08540 | www.wallstreetcs.com
Posted 18 hours ago
8.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Role Overview: As a Senior AI/ML Engineer, you will lead the design, development, and deployment of advanced machine learning models and AI solutions. Your expertise will drive innovation across various applications, from natural language processing (NLP) to deep learning and recommendation systems. You will collaborate with cross-functional teams to integrate AI capabilities into products and services, ensuring scalability, efficiency, and ethical standards. Key Responsibilities: Model Development & Deployment: Lead the end-to-end lifecycle of machine learning models, including data collection, preprocessing, model selection, training, evaluation, and deployment. Implement deep learning architectures (e.g., CNNs, RNNs, Transformers) and NLP models to solve complex problems. Optimize models for performance, scalability, and real-time inference. Collaboration & Leadership: Mentor and guide junior engineers and data scientists, fostering a culture of continuous learning and innovation. Collaborate with product managers, software engineers, and data engineers to align AI solutions with business objectives. Communicate complex technical concepts to both technical and non-technical stakeholders. Research & Innovation: Stay abreast of the latest advancements in AI/ML research and industry trends. Evaluate and integrate emerging technologies and methodologies to enhance model performance and capabilities. Contribute to the development of AI strategies and roadmaps. Ethics & Compliance: Ensure AI solutions adhere to ethical guidelines, data privacy regulations, and organizational standards. Implement fairness, accountability, and transparency in AI models to mitigate bias and ensure equitable outcomes. Technical Skills & Qualifications: Programming Languages: Proficiency in Python, with experience in libraries such as NumPy, Pandas, and Scikit-learn. Familiarity with other languages like Java, Scala, or C++ is a plus. Machine Learning Frameworks: Expertise in frameworks like TensorFlow, PyTorch, Keras, and Scikit-learn. Experience with MLOps tools and practices for model versioning, deployment, and monitoring. Data Engineering: Strong understanding of data pipelines, ETL processes, and working with large-scale datasets. Experience with SQL and NoSQL databases, as well as cloud platforms like AWS, GCP, or Azure. Soft Skills: Excellent problem-solving abilities and analytical thinking. Strong communication and interpersonal skills. Ability to work effectively in a collaborative, fast-paced environment. Educational Background: Bachelor's or master's degree in computer science, data science, machine learning, or a related field. Ph.D. is advantageous but not required. Preferred Experience: 8+ years in AI/ML engineering or related roles, with a proven track record of deploying production-grade models. Experience in specific domains such as healthcare, finance, or e-commerce is a plus. Why Join Us: Opportunity to work on cutting-edge AI projects with a talented team. Collaborative and inclusive work culture. Competitive compensation and benefits package. Continuous learning and professional development opportunities.
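To make the model lifecycle above concrete, here is a deliberately minimal scikit-learn sketch of the train/evaluate step for a toy NLP-style task. The data is synthetic and the pipeline is illustrative only, not a statement of how this team actually builds models.

```python
# Minimal, illustrative text-classification pipeline (synthetic toy data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

texts = ["great product", "terrible support", "loved it",
         "awful experience", "works as expected", "would not recommend"]
labels = [1, 0, 1, 0, 1, 0]  # 1 = positive, 0 = negative

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.33, random_state=0, stratify=labels)

clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),   # preprocessing / features
    ("model", LogisticRegression(max_iter=1000)),      # model selection / training
])
clf.fit(X_train, y_train)

# Evaluation step of the lifecycle; deployment and monitoring would follow.
print(classification_report(y_test, clf.predict(X_test)))
```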
Posted 18 hours ago
2.0 - 3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Technical Program Manager I/II
Program Management Team/Bioinformatics BU (Full-Time Employment)

Excelra is looking for highly skilled and motivated members for the Program Management Team with technical experience across multiple domains (AI/ML, NGS/human genetics/statistics, software development, project management, agile methodology) in bioinformatics to join the Bioinformatics team. The successful candidate will be expected to demonstrate the following competencies:
1. Project management as well as program management across different projects in key client accounts.
2. Act as the liaison between the client and Excelra.
3. Contribute to efforts to increase operational efficiency among projects.
4. Collaborate with delivery teams for the successful delivery of projects.
5. Interact with the technical stakeholders of clients to identify new business opportunities.

Qualifications:
• PhD with 2-3 years of industrial experience in Bioinformatics, Computer Science, Bioengineering, Computational Biology, or a related field.
• Strong track record of scRNA-Seq and other omics (NGS) data analysis and downstream systemic interpretation is a must.
• Good understanding of current best practices in computational biology data management (NGS/Transcriptomics/Microarray/Proteomics/Clinical trials/Text mining, etc.).
• Expertise in one or more programming/scripting languages such as R, Python, or shell script for complex data analysis.
• Experience with Docker and Nextflow.
• Familiarity with machine learning is an added advantage.
• Experience with Linux environments or cloud computing (AWS/GCP) is a plus.
• Experience with relational databases such as Postgres, MySQL, or Oracle is a plus.
• Experience with version control systems such as GitHub is a plus.
• Ability to generate in-silico experimental workflows and in-depth knowledge of proprietary and public biological databases, methods, and tools.
• Ability to generate scientific hypotheses, along with good scientific/technical communication skills, would be preferred.

Roles & Responsibilities:
The selected candidate will be part of Excelra's Bioinformatics group. You will be responsible for working with a team on research data analysis in both a prospective and retrospective manner to generate novel hypotheses.
• Work with diverse customers/partners in drug discovery and development
• Work with a multi-disciplinary team of PhD-level scientists
• Responsible for day-to-day operational activities of omics (NGS) related projects
• Keep up to date with relevant scientific and technical developments
• Present results and scientific hypotheses to internal and external stakeholders
• Good communication and analytical skills combined with the ability to recognize and implement project plans
• Writing research papers, reports, reviews, and su
Posted 18 hours ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Data Science Specialist
Experience Level: 6+ Years (Overall), 3+ Years (Relevant)

Job Summary:
We are seeking a highly skilled and experienced Data Science Specialist to join our team. The ideal candidate will have a strong foundation in data science, with a proven ability to design, develop, and deploy intelligent systems. This role requires expertise in advanced analytics, statistical modeling, and machine learning, with a focus on delivering scalable solutions. Exposure to Microsoft AI tools and agentic AI paradigms will be a strong advantage.

Key Responsibilities:
Design and implement advanced data science models to solve business problems. Work with structured and unstructured data to derive insights using statistical and machine learning techniques. Collaborate with stakeholders to identify use cases and develop data-driven strategies. Develop and deploy AI solutions on cloud platforms, preferably Microsoft Azure. Support and implement MLOps practices to ensure seamless deployment and monitoring of models. Explore and experiment with agentic AI paradigms including autonomous agents, multi-agent systems, and orchestration frameworks. Document models, processes, and workflows and ensure compliance with best practices.

Mandatory Skills:
Strong experience in data science, including ML algorithms, statistical modeling, and data wrangling. Hands-on experience with Python/R and associated data science libraries (e.g., pandas, scikit-learn, TensorFlow, PyTorch). Solid understanding of data processing and visualization tools.

Good-to-Have Skills:
Experience with Microsoft AI tools and services (e.g., Azure Machine Learning, Cognitive Services, Azure OpenAI). Familiarity with agentic AI paradigms, including autonomous agents, multi-agent systems, and orchestration frameworks. Strong exposure to cloud platforms, with a preference for Microsoft Azure. Experience with MLOps tools and practices, such as MLflow, Kubeflow, or Azure ML pipelines.

Qualifications:
Bachelor's or Master's degree in Computer Science, Data Science, AI, or a related field. 6+ years of overall industry experience with at least 3 years in core data science roles. Excellent problem-solving skills and ability to work in a collaborative, team-oriented environment.

About SigniTeq:
SigniTeq is a global technology services company focused on building new-age technology solutions and augmenting technology teams for many of the world's leading brands and large global enterprises. We bring together some of the brightest minds in open-source and emerging technologies, operating from our offices in India, UAE, USA, Mexico, and Australia, with a strong presence of technology resources across 100+ countries.

Our credentials include:
Wonderful Workplaces To Shape Your Career - 2021 by The CEOStory
Top 20 Most Promising Blockchain Companies - 2020 by CIOReview
Winner of the Innovative Startup Solution Challenge for Combating COVID-19, Govt of India
Winner of the WhatsApp Challenge by Facebook Corporation
ISO 9001:2015 Quality Certified Company
ISO 27001:2013 ISMS Certified Company
An AWS Partner

We Offer:
5 days working, medical coverage, world-class client brands, prime office location, great employee engagement, emerging technology practices, and learning experience from leaders.

For more information please visit: www.SigniTeq.com
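Since the role lists MLflow among acceptable MLOps tools, a minimal, hedged example of the experiment-tracking piece might look like the following. The dataset, run name, and hyperparameters are purely illustrative and not drawn from the posting.

```python
# Illustrative MLflow tracking sketch: log params, a metric, and the model.
# Runs locally against the default ./mlruns store; all values are made up.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run(run_name="rf-baseline"):
    params = {"n_estimators": 200, "max_depth": 6}
    mlflow.log_params(params)

    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))

    mlflow.log_metric("accuracy", acc)
    mlflow.sklearn.log_model(model, artifact_path="model")  # store the artifact with the run
```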
Posted 18 hours ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Company Description
Vichara is a financial-services-focused products and services firm headquartered in NY, building systems for some of the largest i-banks and hedge funds in the world.

Job Description
Design, build, and manage scalable ELT/ETL pipelines using Snowflake, AWS Glue, S3, and other AWS services. Write and optimize complex SQL queries, stored procedures, and data transformation logic. Support and improve existing data processes, and participate in continuous performance tuning. Implement data quality checks and monitoring to ensure data accuracy, consistency, and reliability.

Qualifications
At least 3 years' experience as a Data Engineer, including development and maintenance support. Hands-on development experience in Python and SQL. SQL tuning: proficient in SQL and able to use query metrics to evaluate performance and tune SQL to improve query run time. Experience with AWS services (S3, Glue, EMR, Lambda, CloudWatch, etc.) or Azure (ADF, etc.). Strong expertise in Snowflake, including query optimization, Snowpipe, and data modeling. Experience with SSIS and Fabric will be a plus.

Additional Information
Compensation: 35-50 lakhs per annum.
Benefits: extended health care, dental care, life insurance.
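For illustration of the ELT style this role describes, here is a small sketch using the snowflake-connector-python package to run an incremental MERGE from a staging layer into a core table. The account settings, schema, and table names are placeholders, not the client's actual objects.

```python
# Hedged ELT sketch: push a set-based MERGE down to Snowflake.
# Credentials come from the environment; database objects are hypothetical.
import os
import snowflake.connector

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
    schema="STAGING",
)

merge_sql = """
    MERGE INTO analytics.core.trades AS tgt
    USING analytics.staging.trades_raw AS src
      ON tgt.trade_id = src.trade_id
    WHEN MATCHED THEN UPDATE SET
      tgt.price = src.price,
      tgt.updated_at = src.updated_at
    WHEN NOT MATCHED THEN INSERT (trade_id, price, updated_at)
      VALUES (src.trade_id, src.price, src.updated_at)
"""

try:
    cur = conn.cursor()
    cur.execute(merge_sql)                 # incremental upsert from staging
    print(f"Rows affected: {cur.rowcount}")
finally:
    conn.close()
```

Keeping the transformation as a single set-based statement inside Snowflake, rather than row-by-row in Python, is the usual lever for the query run-time tuning the posting emphasizes.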
Posted 18 hours ago
3.0 - 5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Title: Software Development Engineer II (SDE-II) - Backend As a Software Development Engineer II (SDE-II) - Backend, you will play a critical role in designing, developing, and maintaining scalable, efficient, and reliable server-side applications. You will contribute and as well as mentor a team of developers, collaborating with cross-functional teams, including front-end developers, designers, and project managers, to deliver high-quality solutions that meet our clients' requirements. This position requires 3-5 years of experience in Node.js development, with a deep understanding of backend technologies and strong expertise in Object-Oriented Programming (OOP) concepts. Responsibilities: Design, develop, and maintain complex server-side applications using Node.js, applying OOP principles and best practices. Collaborate closely with front-end developers to integrate user-facing elements with server-side logic, ensuring seamless functionality and a great user experience. Architect and implement efficient data storage and retrieval mechanisms, leveraging databases and APIs effectively. Write clean, reusable, and testable code, following industry standards and best practices Conduct thorough code reviews, providing constructive feedback to ensure code quality, maintainability, and adherence to coding standards. Troubleshoot and debug applications, identifying and resolving performance and functionality issues in a timely manner. Mentor and guide junior developers, fostering a culture of continuous learning and growth within the team. Stay up-to-date with emerging technologies and trends in backend development, particularly in the Node.js ecosystem, and evaluate their applicability to our projects. Collaborate with project managers and stakeholders to define project requirements, estimate effort, and contribute to project planning and execution. Drive the adoption of best practices, tools, and frameworks to improve development efficiency and code quality. Participate in Agile development processes, including sprint planning, daily stand-ups, and retrospectives, ensuring timely delivery of high-quality software. Requirements: Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience). Strong understanding of backend development principles, best practices, and architectural patterns. Expertise in Node.js and JavaScript, with proven experience in developing scalable and robust server-side applications. Solid understanding and practical application of Object-Oriented Programming (OOP) concepts, such as encapsulation, inheritance, and polymorphism Experience working with databases, both SQL and NoSQL, and designing efficient data models. Proficiency in designing and implementing RESTful APIs and web services. Familiarity with frontend technologies such as HTML, CSS, and JavaScript frameworks/libraries (e.g., React, Angular). Strong knowledge of system design principles and ability to architect scalable and resilient backend solutions. Experience with performance optimization, debugging, and profiling tools Excellent problem-solving and analytical skills, with the ability to propose innovative solutions to complex technical challenges. Strong communication and collaboration abilities, with the capability to effectively communicate technical concepts to both technical and non-technical stakeholders. 
Demonstrated leadership skills and experience in mentoring and guiding junior developers. Proactive mindset, self-motivated, and driven to continuously improve skills and stay up-to-date with industry trends.

Preferred Skills: Experience with Express.js or similar Node.js frameworks. Knowledge of cloud platforms, such as AWS or Azure, and experience in designing and deploying applications on cloud infrastructure. Familiarity with containerization technologies like Docker. Understanding of testing frameworks (e.g., Mocha, Chai) and test-driven development (TDD).

Note: The years of experience mentioned in the job description are only indicative and can be flexible based on the candidate's skills and potential.

About Us: At Swivl, we are on a mission to transform the Field Service Management (FSM) industry for small and midsize businesses (SMBs). Our enterprise-level FSM software is designed to revolutionize how industries such as plumbing, electrical, landscaping, roofing, and handyman services operate. With nearly a decade of real-world testing and refinement, our FSM platform has already delivered substantial growth and profitability for field service businesses. With recent funding, we are now positioned to scale our technology, optimize our UI/UX, and launch innovative features that will further disrupt the FSM landscape.
Posted 19 hours ago
3.0 - 6.0 years
10 - 15 Lacs
Bengaluru
Work from Office
Position Overview: As a Principal Software Engineer, you will play a pivotal role in shaping the technical future of our SaaS platform. This is more than a hands-on coding role (though you’ll do plenty of that); it’s an opportunity to lead from the front: setting engineering direction, influencing architecture at scale, and mentoring the next generation of technical leaders. You’ll architect secure, scalable, serverless solutions, primarily in TypeScript and AWS, and help drive the modernization of our core systems. You’ll also be instrumental in embedding AI as a core capability across our products. From steering design decisions to mentoring teams on responsible AI use, you'll help us unlock new possibilities while upholding trust, transparency, and ethical standards. If you thrive at the intersection of technical depth, strategic influence, and emerging technology, this is your chance to make a company-wide impact. Key Responsibilities Shape engineering strategy with broad organizational impact, influencing long-term architectural direction across multiple teams and products. Drive platform evolution, identifying common pain points and leading scalable, reusable solutions across systems. Engage in deep architectural discussions, evaluating trade-offs and optimizing for security, performance, and maintainability. Shape and influence AI-related design decisions, ensuring alignment with product goals and ethical, effective use. Architect end-to-end AI-enabled systems with embedded governance, monitoring, and adaptability to regulatory change. Educate teams on foundational AI concepts, model capabilities, prompt engineering, and responsible AI practices. Act as a thought leader in the AI space, helping the organization mature its understanding and adoption of AI-driven technologies. Communicate complex technical topics clearly across technical and non-technical audiences. Motivate, mentor, and inspire engineers across teams, fostering a culture of high performance, technical curiosity, and continuous improvement. Regularly demo work, share knowledge across the department, and drive a culture of collaboration. Required Experience/Skills 15+ years of professional experience delivering secure applications in an agile environment. Demonstrated understanding of AI concepts, including model behavior, prompt optimization, and AI governance. Experience designing or integrating AI-powered systems with transparency, monitoring, and regulatory readiness in mind. Comfort mentoring others in AI-related best practices and helping shape organizational knowledge in this space. Ideal candidates will be able to demonstrate exceptionally strong technical, commercial, communication, and leadership skills, and be driven, resourceful, and not intimidated by the significant challenges around integration of diverse products on disparate technology stacks. Strong ability to architect with AWS using Infrastructure-as-Code tools such as Terraform, CDK, or CloudFormation. Strong understanding of distributed data stores (e.g. Aurora, DynamoDB, S3) and how to build a scalable platform using them. Strong understanding of Event Driven Architecture and its applications. Passion for optimizing software delivery, automating routine tasks, and building secure and resilient platforms. Proficiency in developing RESTful APIs using NodeJS and/or TypeScript with OpenAPI specifications, and visually stunning user interfaces using Angular/React. Experience working with Docker in development. Experience with SQL databases and query optimization. Bachelor's degree in Computer Science, Engineering, Math, or related field.
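The posting above asks for serverless AWS architecture driven by Infrastructure-as-Code (Terraform, CDK, or CloudFormation) together with event-driven and DynamoDB-backed designs. As a minimal, hedged sketch only: the role is TypeScript-first, but the same pattern in the AWS CDK's Python binding might look like the stack below, which wires a Lambda function to a DynamoDB table. The construct names and the local lambda/ asset folder are illustrative assumptions, not details from the posting.

```python
# Minimal AWS CDK (v2, Python binding) sketch: one DynamoDB table plus a Lambda
# function with read/write access, all declared as Infrastructure-as-Code.
from aws_cdk import App, RemovalPolicy, Stack
from aws_cdk import aws_dynamodb as dynamodb
from aws_cdk import aws_lambda as _lambda
from constructs import Construct


class OrdersStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Serverless data store for the service (names are illustrative).
        table = dynamodb.Table(
            self, "OrdersTable",
            partition_key=dynamodb.Attribute(
                name="orderId", type=dynamodb.AttributeType.STRING),
            billing_mode=dynamodb.BillingMode.PAY_PER_REQUEST,
            removal_policy=RemovalPolicy.DESTROY,
        )

        # Lambda handler packaged from a hypothetical local ./lambda directory.
        handler = _lambda.Function(
            self, "OrdersHandler",
            runtime=_lambda.Runtime.PYTHON_3_11,
            handler="app.handler",
            code=_lambda.Code.from_asset("lambda"),
            environment={"TABLE_NAME": table.table_name},
        )

        # Least-privilege IAM grant generated by the CDK.
        table.grant_read_write_data(handler)


app = App()
OrdersStack(app, "OrdersStack")
app.synth()
```

Deploying such a stack with cdk deploy provisions the table, the function, and the IAM grant in one repeatable step, which is the main benefit the IaC requirement is pointing at.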
Posted 19 hours ago
0 years
0 Lacs
Kochi, Kerala, India
Remote
At EY, we’re all in to shape your future with confidence. We’ll help you succeed in a globally connected powerhouse of diverse teams and take your career wherever you want it to go. Join EY and help to build a better working world. Senior Security Consultant EY Technology: Technology has always been at the heart of what we do and deliver at EY. We need technology to keep an organization the size of ours working efficiently. We have 250,000 people in more than 140 countries, all of whom rely on secure technology to be able to do their job every single day. Everything from the laptops we use, to the ability to work remotely on our mobile devices and connecting our people and our clients, to enabling hundreds of internal tools and external solutions delivered to our clients. Technology solutions are integrated in the client services we deliver and are key to us being more innovative as an organization. EY Technology supports our technology needs through three business units: Client Technology (CT) - focuses on developing new technology services for our clients. It enables EY to identify new technology-based opportunities faster and pursue those opportunities more rapidly. Enterprise Workplace Technology (EWT) – EWT supports our Core Business Services functions and will deliver fit-for-purpose technology infrastructure at the lowest possible cost for quality services. EWT will also support our internal technology needs by focusing on a better user experience. Information Security (Info Sec) - Info Sec prevents, detects, responds, and mitigates cyber-risk, protecting EY and client data, and our information management systems. The opportunity As a Security Consultant within EY’s internal Global Information Security team, the individual will be a trusted security advisor to the Client Technology Platforms Delivery organization within IT Services. The Client Technology Platforms delivery organization is responsible for end-to-end delivery of technology programs and projects supporting EY’s Client Technology service lines, including delivery of a global managed services platform, big data and analytics solutions, as well as individual line of business solutions and services. This role will directly engage in supporting a team of architects, engineers, and product managers for delivery on programs and projects, defining security risks and controls, providing security guidance, identifying and prioritizing security-related requirements, promoting secure-by-default designs, and facilitating delivery of information security services throughout the system development life cycle (SDLC). The role will also develop and directly communicate appropriate risk treatment and mitigation options to address security vulnerabilities, translating those vulnerabilities into business risk terminology for communication to business stakeholders.
Your Key Responsibilities Support a technical team with a focus on the following responsibilities: Define security architectures and provide pragmatic security guidance that balances business benefit and risk Engage IT project teams throughout the SDLC to identify and prioritize applicable security controls and provide guidance on how to implement these controls Perform threat modeling and risk assessments of information systems, applications and infrastructure Maintain and enhance the Information Security risk assessment and certification methodologies Define security configuration standards for shared and multi-tenant platforms and technologies Develop appropriate risk treatment and mitigation options to address security risks identified during security review or audit Translate technical vulnerabilities into business risk terminology for business units and recommend corrective actions to customers and project stakeholders Provide knowledge sharing and technical assistance to other team members Act as Subject Matter Expert (SME) in responsible technologies and have deep technical understanding of responsible services and technology portfolios Skills And Attributes For Success Significant working security experience and knowledge in the design, implementation, and operation of security controls in the following areas: Identity and Access Management – Experience with Azure Active Directory (AAD) based Identity and Access Management and Authorization design and integration with API, IDaaS, and Federation technologies. Cloud Security – Technical understanding of virtualization, cloud infrastructure, and public cloud offerings, and experience designing security configuration and controls within cloud-based solutions, e.g., Microsoft Azure and Azure PaaS services or another cloud platform (GCP, AWS, IBM, AliCloud, etc.) Infrastructure Security – Experience with the integration of cloud-native infrastructure security technologies and solutions into business solution architectures, including the integration of identity & access management, Web Application Firewalls (WAFs), Application and API Gateways, intrusion detection and prevention, security monitoring, and data encryption solutions. Application Security - Experience with the design and testing of security controls for multi-tier business solutions, including the design of application-level access and entitlement management, data tenancy and isolation, encryption, and logging. Working familiarity with REST API and microservices architecture. Strong leadership and organizational skills Ability to appropriately balance firm security needs with business impact & benefit Ability to facilitate compromise to incrementally advance security strategy and objectives Ability to team well with others to facilitate and enhance the understanding of & compliance with security policies Although not required, it is preferred that candidates possess additional working security experience and knowledge in one or more of the following areas: Operational Security – Experience with defining operational security models and procedures for business solutions, including the operation and maintenance of infrastructure and application security controls. Information Security Standards – Knowledge of common information security standards such as: ISO 27001/27002, CSA and CIS Controls, NIST CSF, PCI DSS, FedRAMP.
Product Management – Working with broader business and technology teams on aspects of security that affect all phases of PI Planning, from concept to design to implementation and then operational support. Agile & DevSecOps Methodologies – Experience promoting automated security features in pipelines and security testing as a central feature in Agile workflows, as a contributing member within an Agile development or DevOps environment. To qualify for the role, you must have: Advanced degree in Computer Science or a related discipline, or equivalent work experience. Candidates are preferred to hold or be actively pursuing related professional certifications within the GIAC family of certifications, or CISSP, CISM, CISA, or similar cloud-security oriented certifications. Five or more years of experience in the management of a significant Information Security risk management function Experience in managing the communication of security findings and recommendations to IT project teams, business leadership and technology management executives Ideally, you’ll also have Exceptional judgment, tact, and decision-making ability Flexibility to adjust to multiple demands, shifting priorities, ambiguity, and rapid change Excellent communication, organizational, and decision-making skills Strong English language skills are required What Working At EY Offers We offer a competitive remuneration package where you’ll be rewarded for your individual and team performance. Our comprehensive Total Rewards package includes support for flexible working and career development, and with FlexEY you can select benefits that suit your needs, covering holidays, health and well-being, insurance, savings, and a wide range of discounts, offers and promotions. Plus, we offer: Support, coaching and feedback from some of the most engaging colleagues around Opportunities to develop new skills and progress your career The freedom and flexibility to handle your role in a way that’s right for you EY | Building a better working world EY is building a better working world by creating new value for clients, people, society and the planet, while building trust in capital markets. Enabled by data, AI and advanced technology, EY teams help clients shape the future with confidence and develop answers for the most pressing issues of today and tomorrow. EY teams work across a full spectrum of services in assurance, consulting, tax, strategy and transactions. Fueled by sector insights, a globally connected, multi-disciplinary network and diverse ecosystem partners, EY teams can provide services in more than 150 countries and territories.
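The identity and access management work described in this posting centres on Azure AD (AAD) integration for APIs, IDaaS, and federation. Purely as a hedged illustration — the tenant, client IDs, API scope, and downstream URL below are hypothetical placeholders, not anything from the role — acquiring an app-only token via the OAuth 2.0 client-credentials flow with Microsoft's MSAL library typically looks like this:

```python
# Hedged sketch: acquire an app-only Azure AD token (client-credentials flow)
# and call a downstream API with it. All identifiers are placeholders.
import msal
import requests

TENANT_ID = "<tenant-id>"
CLIENT_ID = "<app-registration-client-id>"
CLIENT_SECRET = "<client-secret-from-a-vault>"
API_SCOPE = "api://<downstream-api-app-id>/.default"

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)

# MSAL caches tokens in-process; repeated calls reuse a still-valid token.
result = app.acquire_token_for_client(scopes=[API_SCOPE])
if "access_token" not in result:
    raise RuntimeError(result.get("error_description", "token acquisition failed"))

resp = requests.get(
    "https://internal.example.com/api/resource",
    headers={"Authorization": f"Bearer {result['access_token']}"},
    timeout=10,
)
resp.raise_for_status()
print(resp.status_code)
```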
Posted 19 hours ago
0 years
0 Lacs
Kochi, Kerala, India
On-site
Company Description At Design Pods, we empower individuals with essential skills through comprehensive training and certification programs tailored for today's digital landscape. Our diverse offerings include AI development, web development, digital marketing, and mobile application development. We also provide industry-recognized certifications in AWS, Azure, Microsoft, TOGAF, and GCC. Additionally, our internship opportunities offer practical experience in fields such as PHP and Laravel, MERN Stack, digital marketing, and UI/UX design. Role Description This is a full-time, on-site role for a Data Scientist Trainer located in Kochi. The Data Scientist Trainer will be responsible for developing and delivering training programs in data science, preparing instructional materials, and providing hands-on exercises. Day-to-day tasks include conducting training sessions, assessing student progress, offering feedback, and staying up-to-date with the latest developments in data science. The trainer will also collaborate with other trainers and industry professionals to ensure the course content remains current and relevant. Qualifications Proficiency in Data Science, Data Analytics, and Data Analysis Strong knowledge of Statistics and Data Visualization techniques Excellent communication and presentation skills Experience in developing training materials and conducting training sessions Ability to assess student progress and provide constructive feedback Good organizational skills and attention to detail Prior experience in the education or training industry is a plus Bachelor's or Master's degree in Data Science, Statistics, Computer Science, or related field
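Since the trainer role above centres on preparing hands-on exercises in statistics and visualization, here is a minimal sketch of the kind of exercise a session might walk through; the dataset is synthetic and every name is illustrative, not something taken from the posting.

```python
# Hedged sketch of a hands-on exercise: descriptive statistics, correlation,
# and a scatter plot on a small synthetic dataset.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
df = pd.DataFrame({"hours_studied": rng.uniform(0, 10, 200)})
df["exam_score"] = 35 + 5.5 * df["hours_studied"] + rng.normal(0, 8, 200)

print(df.describe())                  # central tendency and spread
print(df.corr(numeric_only=True))     # strength of the linear relationship

df.plot.scatter(x="hours_studied", y="exam_score", alpha=0.6)
plt.title("Study time vs. exam score (synthetic data)")
plt.tight_layout()
plt.savefig("study_vs_score.png")
```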
Posted 19 hours ago
1.0 - 3.0 years
6 - 12 Lacs
Pune, Gurugram
Work from Office
Role Overview: We are seeking a motivated and talented Junior-Level Python Fullstack Developer. You will play a crucial role in building and enhancing our enterprise solutions and be part of a dynamic startup environment. Responsibilities: Collaborate with cross-functional teams to design, develop, and deploy AI solutions using Python and related technologies. Take ownership of end-to-end software development, from requirement analysis to deployment and maintenance. Fine-tune, deploy, and integrate LLMs. Write clean, efficient, and maintainable code while adhering to coding standards and best practices. Participate in code reviews, provide constructive feedback, and mentor junior developers when needed. Collaborate on user experience. Stay up-to-date with emerging trends and advancements in Generative AI. Qualifications: Mandatory: Associate-level cloud certification required. 1 to 3 years of experience in Python Fullstack development (Django/Flask/FastAPI, SQL, ReactJS). A degree in computer science or in a relevant field is a plus. Backend development experience for a cloud-native application. Demonstrated interest in Generative AI with hands-on experience in implementing AI models is highly preferred. Strong problem-solving skills and the ability to troubleshoot complex issues independently. Excellent communication skills and a collaborative mindset to work effectively within a team. Self-driven, motivated, and proactive attitude with a willingness to learn and adapt in a fast-paced startup environment. Benefits: Competitive compensation package (potentially with performance-based bonuses or startup equity). Opportunity to work on cutting-edge technology in the emerging field of Generative AI. A collaborative and inclusive company culture that encourages innovation and growth. Professional development opportunities and mentorship from experienced industry professionals. If you are an enthusiastic Python Developer with a passion for web application development and Generative AI, and you're excited about joining an early-stage startup that values creativity and innovation, we encourage you to apply and be part of our journey to transform knowledge management and insights discovery.
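The posting combines Python web frameworks (Django/Flask/FastAPI) with deploying and integrating LLMs. As a hedged, minimal sketch only: a FastAPI service exposing a generation endpoint might be shaped like the code below, where run_model is a stub standing in for whatever fine-tuned model or inference client a real team would use — the route, model names, and field names are assumptions.

```python
# Minimal FastAPI sketch for a text-generation endpoint.
# run_model is a stub; a real service would call an LLM inference client here.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="prompt-service")


class PromptRequest(BaseModel):
    prompt: str
    max_tokens: int = 256


class PromptResponse(BaseModel):
    completion: str


def run_model(prompt: str, max_tokens: int) -> str:
    # Placeholder for a fine-tuned LLM call (local model or hosted API).
    return f"echo[{max_tokens}]: {prompt}"


@app.post("/generate", response_model=PromptResponse)
def generate(req: PromptRequest) -> PromptResponse:
    return PromptResponse(completion=run_model(req.prompt, req.max_tokens))
```

Run locally with uvicorn main:app --reload (assuming the file is named main.py); the Pydantic request/response models also give an OpenAPI schema for free.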
Posted 19 hours ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
This role is for one of Weekday's clients Location: Chennai, Pune, Kochi JobType: full-time Requirements We are looking for a skilled and versatile Java FSD AWS Developer to join our client's Agile/SAFe development teams. In this role, you will participate in the design, development, integration, and deployment of enterprise-grade applications built on both modern cloud-native (AWS) and legacy mainframe architectures. You will ensure high-quality, testable, secure, and compliant code while collaborating in a fast-paced Agile setup. Key Responsibilities: Agile Participation & Code Quality Active involvement in Scrum and SAFe team events, including planning, daily stand-ups, reviews, and retrospectives Create and validate testable features, ensuring coverage of both functional and non-functional requirements Deliver high-quality code through practices like Pair Programming and Test-Driven Development (TDD) Maintain operability, deployability, and integration readiness of application increments Ensure full compliance with internal frameworks such as PITT and established security protocols (SAST, DAST). Development & Integration Develop software solutions using a diverse tech stack: TypeScript, Java, SQL, Python, COBOL, Shell scripting; Spring Boot, Angular, Node.js, Hibernate Work across multiple environments and technologies including Linux, Apache, Tomcat, Elasticsearch, IBM DB2 Build and maintain web applications, backend services, and APIs using modern and legacy technologies. AWS & Cloud Infrastructure Hands-on development and deployment with AWS services: EKS, ECR, IAM, SQS, SES, S3, CloudWatch Develop Infrastructure as Code using Terraform Ensure system reliability, monitoring, and traceability using tools like Splunk, UXMon, and AWS CloudWatch. Systems & Batch Integration Work with Kafka, particularly Streamzilla Kafka from PAG, for high-throughput messaging Design and consume both REST and SOAP APIs for integration with third-party and internal systems Manage and automate batch job scheduling via IBM Tivoli Workload Scheduler (TWS/OPC) and HostJobs Required Skills & Experience: 5+ years of experience in full stack development, DevOps, and mainframe integration Strong programming experience in: Languages: TypeScript, Java, Python, COBOL, Shell scripting Frameworks & Tools: Angular, Spring Boot, Hibernate, Node.js Databases: SQL, IBM DB2, Elasticsearch Proficient in AWS Cloud Services including container orchestration, IAM, S3, CloudWatch, SES, SQS, and Terraform Strong understanding of API development and integration (REST & SOAP) Experience in secure software development using SAST/DAST, TDD, and compliance frameworks (e.g., PITT) Familiarity with Kafka messaging systems, particularly Streamzilla Kafka Monitoring and observability experience using tools like Splunk, UXMon, or equivalents Preferred Qualifications: Experience with PCSS Toolbox or similar enterprise tooling Prior exposure to highly regulated industries (e.g., automotive, banking, insurance) Bachelor's or Master's degree in Computer Science, Information Technology, or related fields Certifications in AWS or DevOps tools are a plus
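Among the AWS services listed above, SQS is the main asynchronous integration point. Purely as a hedged illustration — the queue URL, region, and message payload are invented placeholders — producing and consuming a message with boto3 usually follows the send / long-poll receive / delete pattern below.

```python
# Hedged sketch of the SQS produce/consume pattern with boto3.
# Queue URL, region, and payload are placeholders, not values from the posting.
import boto3

sqs = boto3.client("sqs", region_name="eu-central-1")
QUEUE_URL = "https://sqs.eu-central-1.amazonaws.com/123456789012/orders-queue"

# Producer: enqueue a message for asynchronous processing.
sqs.send_message(QueueUrl=QUEUE_URL, MessageBody='{"orderId": "42", "status": "NEW"}')

# Consumer: long-poll, process, then delete only after successful handling.
resp = sqs.receive_message(
    QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20)
for msg in resp.get("Messages", []):
    print("processing", msg["Body"])
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```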
Posted 19 hours ago
0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Role Description Role Proficiency: Design and implement Infrastructure/Cloud Architecture for small/mid-size projects
Outcomes Design and implement the architecture for the projects Guide and review technical delivery by project teams Provide technical expertise to other projects
Measures Of Outcomes # of reusable components / processes developed # of times components / processes reused Contribution to technology capability development (e.g. Training, Webinars, Blogs) Customer feedback on overall technical quality (zero technology-related escalations) Relevant Technology certifications Business Development (# of proposals contributed to, # won) # of white papers / document assets published / working prototypes
Outputs Expected Solution Definition and Design: Define the architecture for small/mid-sized projects Design the technical framework and implement the same Present the detailed design documents to relevant stakeholders and seek feedback Undertake project-specific Proof of Concept activities to validate technical feasibility with guidance from the senior architect Implement the best optimized solution and resolve performance issues
Requirement Gathering And Analysis Understand the functional and non-functional requirements Collect non-functional requirements (such as response time, throughput numbers, user load, etc.) through discussions with SMEs and business users Identify technical aspects as part of story definition, especially at an architecture / component level
Project Management Support Share technical inputs with Project Managers / Scrum Masters Help Scrum Masters / project managers understand the technical risks and come up with mitigation strategies Help Engineers and Analysts overcome technical challenges
Technology Consulting Analysis of technology landscape, process, and tools based on project objectives
Business And Technical Research Understand Infrastructure architecture and its criticality to: analyze and assess tools (internal/external) on specific parameters Understand Infrastructure architecture and its criticality to: support the Architect/Sr. Architect in drafting recommendations based on findings of a Proof of Concept Understand Infrastructure architecture and its criticality to: analyze and identify new developments in existing technologies (e.g. methodologies, frameworks, accelerators, etc.)
Project Estimation Provide support for project estimations for business proposals and support sprint-level / component-level estimates Articulate the estimation methodology and module-level estimations for more standard projects with focus on effort estimation alone
Proposal Development Contribute to proposal development of small to medium size projects from a technology/architecture perspective
Knowledge Management & Capability Development: Conduct technical trainings / webinars to impart knowledge to CIS / project teams Create collaterals (e.g. case studies, business value documents, summaries, etc.) Gain industry-standard certifications on technology and architecture consulting Contribute to the knowledge repository and tools, creating reference architecture models and reusable components from the project
Process Improvements / Delivery Excellence Identify avenues to improve project delivery parameters (e.g. productivity, efficiency, process, security, etc.) by leveraging tools, automation, etc. Understand various technical tools used in the project (third party as well as home-grown) to improve efficiency and productivity
Skill Examples Use Domain/Industry Knowledge to understand business requirements and create POCs to meet business requirements under guidance Use Technology Knowledge to analyse technology based on the client's specific requirements, analyse and understand existing implementations, work on simple technology implementations (POC) under guidance, and guide the developers and enable them in the implementation of the same Use knowledge of Architecture Concepts and Principles to provide inputs to the senior architects towards building component solutions and deploy the solution as per the architecture under guidance Use Tools and Principles to create low-level designs under guidance from the senior Architect for the given business requirements Use the Project Governance Framework to facilitate communication with the right stakeholders, and Project Metrics to help them understand their relevance in the project and to share input on project metrics with the relevant stakeholders for own area of work Use Estimation and Resource Planning knowledge to help estimate and plan resources for specific modules / small projects with detailed requirements in place Use Knowledge Management Tools and Techniques to participate in the knowledge management process (such as project-specific KT) and consume/contribute to the knowledge management repository Use knowledge of Technical Standards, Documentation and Templates to understand and interpret the documents provided Use Solution Structuring knowledge to understand the proposed solution and provide inputs to create draft proposals / RFP responses (including effort estimation, scheduling, resource loading, etc.)
Knowledge Examples Domain/Industry Knowledge: Has basic knowledge of standard business processes within the relevant industry vertical and customer business domain Technology Knowledge: Has deep working knowledge of one technology tower and is gaining more knowledge in Cloud and Security Estimation and Resource Planning: Has working knowledge of estimation and resource planning techniques Has basic knowledge of industry knowledge management tools (such as portals, wikis) and of UST and customer knowledge management tools and techniques (such as workshops, classroom training, self-study, application walkthrough and reverse KT) Technical Standards, Documentation and Templates: Has basic knowledge of various document templates and standards (such as business blueprints, design documents, etc.) Requirement Gathering and Analysis: Demonstrates working knowledge of requirements gathering for functional and non-functional requirements, analysis tools (such as functional flow diagrams, activity diagrams, blueprints, storyboards) and requirements management tools (e.g. MS Excel)
Additional Comments JD Role Overview We’re seeking an AWS Certified Solutions Architect with strong Python skills and familiarity with .NET ecosystems to lead an application modernization effort. You will partner with cross-functional development teams to transform on-premises, monolithic .NET applications into a cloud-native, microservices-based architecture on AWS.
Key Responsibilities
Architect & Design: Define the target state: microservices design, domain-driven boundaries, API contracts. Choose AWS services (EKS/ECS, Lambda, State Machines/Step Functions, API Gateway, EventBridge, RDS/DynamoDB, S3, etc.) to meet scalability, availability, and security requirements.
Modernization Roadmap: Assess existing .NET applications and data stores; identify refactoring vs. re-platform opportunities. Develop a phased migration strategy.
Infrastructure as Code: Author and review CloudFormation. Establish CI/CD pipelines (CodePipeline, CodeBuild, GitHub Actions, Jenkins) for automated build, test, and deployment.
Development Collaboration: Mentor and guide .NET and Python developers on containerization (Docker), orchestration (Kubernetes/EKS), and serverless patterns. Review code and design patterns to ensure best practices in resilience, observability, and security.
Security & Compliance: Ensure alignment with IAM roles/policies, VPC networking, security groups, and KMS encryption strategies. Conduct threat modelling and partner with security teams to implement controls (WAF, GuardDuty, Shield).
Performance & Cost Optimization: Implement autoscaling, right-sizing, and reserved instance strategies. Use CloudWatch, X-Ray, Elastic Stack, and third-party tools to monitor performance and troubleshoot.
Documentation & Knowledge Transfer: Produce high-level and detailed architecture diagrams, runbooks, and operational playbooks. Lead workshops and brown-bags to upskill teams on AWS services and cloud-native design. Drive day-to-day work with the 24x7 IOC team.
Must-Have Skills & Experience
AWS Expertise: AWS Certified Solutions Architect – Associate or Professional. Deep hands-on experience with EC2, ECS/EKS, Lambda, API Gateway, RDS/Aurora, DynamoDB, S3, VPC, IAM.
Programming: Proficient in Python for automation, Lambdas, and microservices. Working knowledge of C#/.NET Core for understanding legacy applications and guiding refactoring.
Microservices & Containers: Design patterns (circuit breaker, saga, sidecar). Containerization (Docker), orchestration on Kubernetes (EKS) or Fargate.
Infrastructure as Code & CI/CD: CloudFormation, AWS CDK, or Terraform. Build/test/deploy pipelines (CodePipeline, CodeBuild, Jenkins, GitHub Actions).
Networking & Security: VPC design, subnets, NAT, Transit Gateway. IAM best practices, KMS, WAF, Security Hub, GuardDuty.
Soft Skills: Excellent verbal and written communication. Ability to translate complex technical concepts to business stakeholders. Proven leadership in agile, cross-functional teams.
Preferred / Nice-to-Have Experience with service mesh (AWS App Mesh, Istio). Experience with non-relational DBs (Neptune, etc.). Familiarity with event-driven architectures using EventBridge or SNS/SQS. Exposure to observability tools: CloudWatch Metrics/Logs, X-Ray, Prometheus/Grafana. Background in migrating SQL Server, Oracle, or other on-prem databases to AWS (DMS, SCT). Knowledge of serverless frameworks (Serverless Framework, SAM). Additional certifications: AWS Certified DevOps Engineer, Security Specialty.
Skills Python, AWS Cloud, AWS Administration
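The modernization target described above is a set of microservices fronted by API Gateway and backed by Lambda and DynamoDB. As a hedged sketch only — the table name, key schema, and the /customers/{id} route are illustrative assumptions, not part of the posting — a single carved-out service handler could look like this:

```python
# Hedged sketch of a Lambda handler behind API Gateway (proxy integration)
# serving GET /customers/{id} from DynamoDB. All names are illustrative.
import json
import os

import boto3

TABLE_NAME = os.environ.get("TABLE_NAME", "customers")  # injected by IaC in practice
table = boto3.resource("dynamodb").Table(TABLE_NAME)


def handler(event, context):
    customer_id = (event.get("pathParameters") or {}).get("id")
    if not customer_id:
        return {"statusCode": 400, "body": json.dumps({"error": "missing id"})}

    item = table.get_item(Key={"customerId": customer_id}).get("Item")
    if item is None:
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}

    # default=str handles DynamoDB Decimal values during serialization.
    return {"statusCode": 200, "body": json.dumps(item, default=str)}
```

This is the typical shape of a strangler-pattern slice: one route is carved out of the monolith and served by a small, independently deployable function while the rest of the system stays where it is.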
Posted 19 hours ago
8.0 - 10.0 years
15 - 30 Lacs
Bengaluru
Work from Office
Role: AWS DevOps Engineer Experience: 8+ Years Location: Gurgaon (5 days Work from Office) Joining: Immediate joiners preferred Job Summary: We are seeking a highly skilled and experienced AWS DevOps Engineer to join our dynamic and growing team. The ideal candidate will have a strong background in AWS cloud infrastructure, automation, CI/CD, and infrastructure-as-code (IaC) to support the development and deployment of scalable, secure, and highly available systems. Key Responsibilities: Design, implement, and manage scalable, secure, and highly available cloud infrastructure on AWS. Develop and maintain CI/CD pipelines using tools such as Jenkins, GitLab CI, AWS CodePipeline, or similar. Automate infrastructure provisioning using IaC tools such as Terraform, CloudFormation, or AWS CDK. Monitor system performance, availability, and reliability using tools like CloudWatch, Prometheus, Grafana, etc. Collaborate with development, QA, and security teams to streamline build, test, and deployment processes. Implement security best practices across all stages of development and deployment. Troubleshoot production issues and perform root cause analysis. Optimize costs, improve performance, and ensure compliance in the cloud environment. Participate in on-call rotations and incident response. Required Skills & Qualifications: Bachelor's degree in Computer Science, Engineering, or related field. 3+ years of hands-on experience in DevOps, with at least 2 years focused on AWS. Expertise in AWS services such as EC2, S3, RDS, VPC, Lambda, ECS/EKS, IAM, CloudFormation, CloudWatch, and others. Strong scripting skills (e.g., Python, Bash, Shell). Experience with containerization and orchestration (Docker, Kubernetes, ECS, or EKS). Proficient with version control systems like Git. Deep understanding of networking, security, and system administration in cloud environments. Familiarity with agile development practices and tools (e.g., Jira, Confluence). Preferred Qualifications: AWS Certified DevOps Engineer or other relevant AWS certifications. Experience with observability tools such as ELK Stack, Datadog, or Splunk. Knowledge of serverless architecture and event-driven programming. Experience working in regulated environments (e.g., HIPAA, SOC2, ISO27001).
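Monitoring with CloudWatch is one of the core responsibilities above. As a hedged illustration only — the region, Auto Scaling group name, and SNS topic ARN are invented placeholders — a CPU alarm can be provisioned from a Python script with boto3 as below; in practice the same resource would more often be declared in Terraform or CloudFormation.

```python
# Hedged sketch: create a CloudWatch alarm on average EC2 CPU for an ASG.
# Region, ASG name, and SNS topic ARN are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-web-fleet",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-asg"}],
    Statistic="Average",
    Period=300,                 # 5-minute datapoints
    EvaluationPeriods=3,        # breach for 15 minutes before alarming
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:ops-alerts"],
)
```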
Posted 19 hours ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title - Enterprise Technology Engineer Location - Pune Must Have Skills - C#, Cloud Services and Architecture, Logging and Monitoring, Infrastructure You will work with As a member of a high-energy, top-performing team of engineers, collaborating with product managers and platform teams to ensure the availability, performance, and reliability of core internal tooling systems that empower enterprise engineering workflows. Let me tell you about the role As a Core Tooling Products Support Engineer, you will be responsible for the daily operations, incident response, and performance monitoring of digital tooling services. You will work closely with engineering teams to address issues, recommend improvements, and contribute to the operational maturity of our tooling platforms. This is a hands-on role where your problem-solving skills will directly impact developer productivity and system reliability! What you will deliver Monitor and maintain the health and performance of internal tooling platforms, ensuring smooth service. Resolve operational issues, conduct root cause analysis, and apply fixes or workarounds. Collaborate with software engineers and platform teams to address support escalations and dependencies. Contribute to tooling enhancements, documentation, and automation of support tasks. Track service-level indicators and assist in ensuring uptime, performance, and compliance standards. Participate in incident reviews and help strengthen post-incident learning and documentation. What you will need to be successful (experience and qualifications) Technical skills we need from you Bachelor’s degree in Computer Science, Engineering, or a related field—or equivalent practical experience. 3+ years of experience in software support, internal tooling, or platform operations. Proficient in Python. Familiarity with TypeScript and C#; must be comfortable troubleshooting across platforms. Understanding of cloud platforms such as AWS or Azure, and the services supporting engineering tooling. Experience with relational and NoSQL databases, especially for querying and operational tuning. Essential skills Ability to troubleshoot tooling issues across layers—code, configuration, integration, or environment. Basic experience with monitoring tools, logging systems, and incident response practices. Willingness to document recurring problems, solutions, and service behaviors. Strong communication skills and ability to collaborate with engineers and technical users. Skills that set you apart Exposure to CI/CD platforms, developer tooling, or engineering enablement products. Experience working in support roles within compliance-focused or regulated environments. Curiosity to learn and improve tools, processes, and systems that impact engineering productivity. About bp Our purpose is to deliver energy to the world, today and tomorrow. For over 100 years, bp has focused on discovering, developing, and producing oil and gas in the nations where we operate. We are one of the few companies globally that can provide governments and customers with an integrated energy offering. Delivering our strategy sustainably is fundamental to achieving our ambition to be a net zero company by 2050 or sooner! We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform crucial job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status. Even though the job is advertised as full time, please contact the hiring manager or the recruiter as flexible working arrangements may be considered.
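The support responsibilities in this posting lean heavily on log analysis during incident triage. As a minimal, hedged sketch — the "ERROR <component>" log format and the file name are assumptions, not from the posting — a small Python script that counts which components emit the most error lines is often the first step of root cause analysis.

```python
# Hedged sketch: summarise ERROR frequency per component from a plain-text log.
# The "ERROR <component>" layout is an assumption about the log format.
import collections
import re
import sys

ERROR_RE = re.compile(r"ERROR\s+(?P<component>[\w.-]+)")


def summarize(path: str) -> None:
    counts = collections.Counter()
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            match = ERROR_RE.search(line)
            if match:
                counts[match.group("component")] += 1
    # Print the ten noisiest components, most frequent first.
    for component, n in counts.most_common(10):
        print(f"{n:6d}  {component}")


if __name__ == "__main__":
    summarize(sys.argv[1] if len(sys.argv) > 1 else "app.log")
```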
Posted 19 hours ago