12.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About the role: The Senior Staff Software Development Engineer in Test position is a hands-on lead role that will oversee the work of a small group of SDETs and test engineers and will be responsible for all aspects of test automation including web, mobile, API, microservice, and white box testing efforts. The role contributes to the company's success by delivering the tools and automation-driven testing landscape for our evolving platform and applications, as it relates to the delivery of our omni-channel applications.
Role and Responsibilities: Guide the work of a group of SDETs in one or more functional areas with responsibility for all aspects of test automation, including framework enhancements, and be an evangelist of quality. Review source code for potential problems, help debug and triage issues, and isolate fixes. Guide tool analysis, create proof-of-concept models, and make recommendations to support the tool selection process. Analyze, recommend, and implement industry best practices for coding guidelines, peer reviews, Git workflow, process workflow, quality gates, CI/CD process, etc. Actively participate in reviews (walkthroughs) of technical specifications and program code with architects and developers, communicating design, requirements, feature set, functionality, and limitations of systems/applications to the team. Promote quality engineering processes, practices, and standards both within and across teams. Enable CI/CD integration for various automated test suites. Ensure proper test reporting, alerting, and quality gates are defined and enforced. Continuously evaluate opportunities for improvement. Invest in building robust data seeding and test execution techniques to get reliable test results. Collaborate with other teams including Release Management, SRE, Performance Engineering, Project Management, and Application Support for the successful delivery of new system features. Proactively engage with product owners from feature inception through functional validation to launch, while always looking for potential quality issues. Design and document comprehensive test strategies, testing guidelines, standard operating procedures, utilities, and tools to improve the efficiency of test automation. Triage customer issues, analyze production metrics, provide root cause analysis to engineers, and recommend system hardening measures. Maintain awareness of which customers are receiving new code/updates according to the release schedule. Provide guidance to associate and senior SDETs; serve as an SME for multiple areas of the application and monitor the success of mentees. Provide oversight over a small group of SDETs, ensuring they follow SOPs and adhere to design and coding guidelines. Proactively provide updates/reports to senior leadership. Lead the maintenance of the test environments and test data creation, ensuring they are consistent with staging/production configurations. Work closely with SRE and Technical Implementations on tooling to improve environment stability, accuracy, and maintenance.
Job Requirement: Typically requires a minimum of 12 years of related experience; or 8 years and an advanced degree.
Bachelor's degree in Computer Science, Software Engineering, or a related field. Proficient in coding with breadth of knowledge up and down the technology stack and extensive implementation of object-oriented programming, data structures, design patterns, system architecture, etc., in one or more programming languages such as C#, Java, Python, JS, or similar. Familiar with Terraform, shell scripting, and PowerShell scripting. Expert in various full-stack open-source or COTS testing tools such as Selenium/Cypress/Playwright for web testing, Appium/Espresso for mobile testing, and Rest Assured/HttpClient for API testing. Proven track record in evaluating various tools and frameworks, setting best practices, coding guidelines, and review processes. Well versed in building, maintaining, and enhancing test automation frameworks using industry best practices such as the page-object model, data-driven frameworks, behavior-driven development, etc., with testing frameworks such as JUnit, NUnit, TestNG, Pytest, or Cucumber. Experience in setting up Git workflows (e.g., Bitbucket, GitLab), build automation tools (Gradle, Maven, NuGet, etc.), and artifact management using tools like JFrog, ProGet, etc. Experience in establishing various automated quality gates and enabling automated test execution on cloud devices (Sauce Labs/BrowserStack) using various CI/CD tools such as GitLab, GitHub, Jenkins, Bamboo, TeamCity, CircleCI, etc. Expert in building test strategies, test plans, and automation strategies with a variety of test types such as smoke, functional, and regression testing, using various test case optimization techniques. Working knowledge of Agile/DevOps development methodologies such as Scrum and Kanban. Working knowledge of relational databases (e.g., SQL Server, Postgres) and non-relational databases (e.g., MongoDB, DynamoDB). Ability to write complex queries including joins, aggregate functions, etc., and to point out underlying challenges in data architecture and stored procedures using database monitors/profilers. Exposure to white-box testing techniques (unit and integration tests), including the use of tools like SonarQube, JaCoCo, etc., and reviewing automated checks on code quality and coverage. Exposure to performance testing practices using tools like JMeter, Gatling, etc. Knowledge of different API architectures such as REST, GraphQL, Webhooks, WCF, and gRPC protocols, and UI architectures/concepts such as micro frontends and single-page applications. Well versed in testing and reviewing different layers in microservice architecture, event-driven/messaging architecture (Kafka, SQS), the Kubernetes platform, and service virtualization to improve testability. Experience in designing and improving test workflows and processes. Experience in creating quality metrics and analyzing metrics to understand trends and risks. A seasoned collaborator, extremely effective at cross-functional collaboration, i.e. bringing Product, Design, and Engineering together, and apt in decision making and project management. Ability to pull logs from different environments (production, staging, integrated environments) and resolve difficult-to-reproduce scenarios.
Must be comfortable with PagerDuty for production errors, with heavy involvement with SRE, and able to troubleshoot owned areas as well as assist other SDETs. Strong written and verbal communication skills. Preferred: An accomplished technologist passionate about system design and architecture, with breadth of knowledge across the technology landscape and depth in automation and test strategies. Extensive experience coaching a team, and well versed in evaluating SDET candidates and building a high-performing team.
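As a rough illustration of the page-object model and pytest-style automation this posting calls for, here is a minimal sketch in Python with Selenium; the URL, locators, and page class are hypothetical placeholders rather than anything specified by the employer.

```python
# Minimal page-object-model sketch (hypothetical page and locators).
from selenium import webdriver
from selenium.webdriver.common.by import By


class LoginPage:
    """Encapsulates locators and actions for a hypothetical login page."""

    URL = "https://example.com/login"  # placeholder URL

    def __init__(self, driver):
        self.driver = driver

    def open(self):
        self.driver.get(self.URL)
        return self

    def login(self, username, password):
        self.driver.find_element(By.ID, "username").send_keys(username)
        self.driver.find_element(By.ID, "password").send_keys(password)
        self.driver.find_element(By.ID, "submit").click()

    def banner_text(self):
        return self.driver.find_element(By.CSS_SELECTOR, ".banner").text


def test_login_shows_welcome_banner():
    # pytest discovers this test; the assertion acts as the quality gate.
    driver = webdriver.Chrome()
    try:
        page = LoginPage(driver).open()
        page.login("demo-user", "demo-pass")
        assert "Welcome" in page.banner_text()
    finally:
        driver.quit()
```

The point of the pattern is that locators live in one place (the page class), so UI changes require edits in one file rather than across every test.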
Posted 1 day ago
5.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
Job Title: Full Stack Developer (API, DBMS, App & Portal Development, UAT Testing) Company: Zest Rover Holidays (Parent Company: CYC Group) Location: Kolkata (On-site) Position Type: Full-Time Company Overview Zest Rover Holidays, a brand under CYC Group, is a market leader in domestic and international travel, MICE (Meetings, Incentives, Conferences, and Exhibitions), and customized travel solutions. Backed by CYC Group's diverse operations across Travel & Tourism, Hospitality, Events, and Digital Products, we serve clients pan-India and globally. Our commitment to innovation, excellence, and customer satisfaction drives every project we undertake. Role Overview We are looking for a talented Full Stack Developer with proven expertise in complex API integrations, DBMS, application and portal development, and UAT testing. You will be responsible for building scalable, high-performance systems while ensuring seamless integration between front-end and back-end solutions. Key Responsibilities Design, develop, and maintain web applications, portals, and mobile apps. Create, integrate, and manage complex APIs with third-party systems. Design, optimize, and manage databases for scalability and performance. Build and maintain microservices architecture and migrate legacy components. Deploy, monitor, and manage servers on AWS/cloud platforms. Conduct UAT (User Acceptance Testing) to ensure product quality and functionality. Implement unit tests and automation for consistent performance. Collaborate with cross-functional teams for UI/UX, product development, and QA. Troubleshoot and optimize for high traffic and concurrent usage. Required Skills & Experience Bachelor's/Master's degree in Computer Science, Software Engineering, or a related field. 3–5 years of experience in Full Stack Development. Strong knowledge of JavaScript frameworks (Angular/React) and backend technologies (Java, Node.js, etc.). Expertise in API integration and DBMS (MySQL, PostgreSQL; exposure to MongoDB, Redis, CouchDB, DynamoDB is a plus). Experience with Kafka, ElasticSearch, and asynchronous programming. Proficiency in UAT testing and agile methodologies. Knowledge of application servers (Tomcat, Jetty, Undertow). Cloud management experience (AWS, Google Cloud, Azure). Strong problem-solving, debugging, and optimization skills. Vacancies: 2 Industry: Travel & Tourism / Hospitality / Events / Digital Solutions Employment Type: Full-Time (On-site) Location: Kolkata 📩 How to Apply: Interested candidates can send their resumes to hr@cycgroup.in with the subject "Application for Full Stack Developer." Website: www.cycgroup.in Contact Us: 9147054648
Posted 2 days ago
30.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
AWS Developer (Python/PySpark) Job Overview We are seeking an experienced AWS Developer proficient in Python and PySpark to design, develop, and maintain scalable, serverless data processing and workflow automation solutions on AWS. The ideal candidate will build Lambda functions, Step Functions, Glue ETL jobs, and integrate various AWS services to support complex data pipelines and backend logic. Key Responsibilities Develop, deploy, and maintain AWS Lambda functions using Python to implement backend logic, file transformations, API integrations, and event-driven triggers from S3, EventBridge, DynamoDB, and SNS. Implement robust error handling, retry mechanisms, timeout management, and custom logging within Lambda functions to ensure resilience and observability. Design and manage AWS Step Functions to orchestrate complex workflows that involve multiple Lambda functions, AWS Glue jobs, and external API calls. Configure parallel executions, error states, and conditional branches within Step Functions to optimize workflow reliability and efficiency. Integrate Step Functions with external systems and AWS services such as SNS and EventBridge using native service integrations. Develop and maintain AWS Glue jobs and PySpark scripts for batch ETL processes, data transformations, and format conversions (e.g., JSON to CSV). Utilize Glue Crawlers to automate metadata cataloging and manage datasets within the AWS Glue Data Catalog. Perform data processing workflows moving data from raw to enriched states using AWS Glue, S3, RDS, DynamoDB, Athena, and other services as required. Collaborate with cross-functional teams to implement scalable data solutions and troubleshoot production issues. AWS Services Experience AWS Glue (jobs, crawlers, Data Catalog) AWS Lambda AWS Step Functions Amazon S3 Amazon EventBridge Amazon SNS & SQS Amazon DynamoDB Amazon Athena Amazon RDS (SQL Server) Amazon EC2, EKS, Network Load Balancer (NLB) Required Skills & Qualifications Strong proficiency in Python and PySpark programming for data processing and ETL development. Solid understanding of core programming concepts, including error handling, retry logic, and asynchronous workflows. Hands-on experience developing serverless applications using AWS Lambda and related services. Experience building and orchestrating complex workflows using AWS Step Functions. Expertise in creating and managing AWS Glue ETL pipelines, including writing Glue scripts in PySpark and Python shell jobs. Familiarity with AWS data services like S3, DynamoDB, Athena, and RDS for storage and querying. Ability to design scalable, fault-tolerant, and maintainable cloud-native data solutions. Experience working in Agile environments and collaborating with technical and business stakeholders. Interested candidates are invited to submit their resumes and a cover letter detailing their relevant experience and why they are a great fit for this role. Matrix Global Services is a leading multinational corporation that provides innovative and comprehensive solutions in technology, consulting, and outsourcing. With a history of over 30 years, Matrix has established itself as a trusted partner for businesses across various industries, consistently delivering exceptional results. We're a network of firms in 10+ countries with over 13,000 people.
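To make the Lambda-based transformation work described above concrete, here is a minimal, hedged sketch of an S3-triggered JSON-to-CSV conversion with basic error handling and logging; the bucket layout, key names, and destination prefix are illustrative assumptions, not details from the posting.

```python
# Hypothetical S3-triggered Lambda: convert a JSON object to CSV (names are placeholders).
import csv
import io
import json
import logging

import boto3

logger = logging.getLogger()
logger.setLevel(logging.INFO)
s3 = boto3.client("s3")


def handler(event, context):
    """Triggered by an S3 ObjectCreated event; writes a CSV copy under enriched/."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        try:
            body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
            rows = json.loads(body)  # expects a list of flat JSON objects
            out = io.StringIO()
            writer = csv.DictWriter(out, fieldnames=sorted(rows[0].keys()))
            writer.writeheader()
            writer.writerows(rows)
            dest_key = "enriched/" + key.rsplit("/", 1)[-1].replace(".json", ".csv")
            s3.put_object(Bucket=bucket, Key=dest_key, Body=out.getvalue().encode("utf-8"))
            logger.info("converted %s/%s -> %s", bucket, key, dest_key)
        except Exception:
            # Re-raise so Lambda's built-in retry / DLQ behaviour can take over.
            logger.exception("failed to convert %s/%s", bucket, key)
            raise
```

Re-raising after logging is one simple way to get both observability and the retry/dead-letter behaviour the posting mentions.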
At Matrix, we pride ourselves on our commitment to excellence and our ability to adapt to the ever-changing needs of our clients. Our team of highly skilled professionals is adept at understanding complex business challenges and tailoring solutions that drive sustainable growth and profitability. Our wide range of services includes cutting-edge technology solution services, strategic consulting, digital transformation, cloud computing, cybersecurity, and managed services. Whether it's developing customized software applications, streamlining business processes, implementing robust IT infrastructure, or managing complex projects, our expertise and industry knowledge enable us to deliver value-added solutions that meet each client's unique requirements. Come and join a winning team! You'll be challenged, have fun, and be a part of a highly respected organization! Matrix offers a competitive base salary and a full benefit package. Benefits include medical, dental, 401K, STD, HSA, PTO, and more. EQUAL OPPORTUNITY EMPLOYER: Matrix is an Equal Opportunity Employer and Prohibits Discrimination and Harassment of Any Kind. Matrix is committed to the principle of equal employment opportunity for all employees and to providing employees with a work environment free of discrimination and harassment. All employment decisions at Matrix are based on business needs, job requirements, and individual qualifications, without regard to race, color, religion or belief, family or parental status, or any other status protected by the laws or regulations in our locations. Matrix will not tolerate discrimination or harassment based on any of these characteristics. Matrix encourages applicants of all ages.
Posted 2 days ago
19.0 years
0 Lacs
Gurugram, Haryana, India
On-site
We are seeking a skilled Systems Architect to lead the design and implementation of advanced contact center solutions leveraging Amazon Connect and its integrated services. This role demands technical expertise in designing CCaaS architectures and ensuring seamless integration with CRM, WFM, and other unified communication platforms. Responsibilities Define technical design and make critical decisions for integrations architecture Produce the target state for implementations based on AWS Connect (Bring-Your-Own-Telephony) Support project estimations and contribute to accurate scoping Design end-to-end CCaaS architecture using Amazon Connect, Salesforce SCV, Lambda, and related AWS services Align call routing and handling logic with Salesforce SCV routing requirements Implement security best practices within Amazon Connect to ensure compliance and safeguard customer data Define and execute backup and disaster recovery strategies for Amazon Connect flows, call recordings, DynamoDB configurations, and other integrated services Collaborate with DevOps teams to automate deployment of Amazon Connect resources Monitor and enhance the performance of Amazon Connect to optimize customer service operations Offer recommendations for feature upgrades and process improvements based on Amazon Connect's evolving capabilities Requirements Extensive hands-on experience with Amazon Connect, Amazon Lex, AWS CLI, Kinesis, S3, and RDS, with 13–19 years in the IT field Strong expertise in Salesforce SCV integration, S3, CloudWatch, and Amazon Connect APIs Background in dynamic IVR flow design utilizing Amazon Lex, Polly, Lambda, and DynamoDB Proficiency in speech analytics, transcription, and sentiment analysis using Contact Lens Familiarity with monitoring tools such as CloudWatch, CloudTrail, and Datadog Competency in creating integrations across WFM, CRM, Unified Communications, and contact center solutions is an added advantage Demonstrated capability to implement and maintain security measures within Amazon Connect for compliance and data protection
Posted 2 days ago
19.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We are seeking a skilled Systems Architect to lead the design and implementation of advanced contact center solutions leveraging Amazon Connect and its integrated services. This role demands technical expertise in designing CCaaS architectures and ensuring seamless integration with CRM, WFM, and other unified communication platforms. Responsibilities Define technical design and make critical decisions for integrations architecture Produce the target state for implementations based on AWS Connect (Bring-Your-Own-Telephony) Support project estimations and contribute to accurate scoping Design end-to-end CCaaS architecture using Amazon Connect, Salesforce SCV, Lambda, and related AWS services Align call routing and handling logic with Salesforce SCV routing requirements Implement security best practices within Amazon Connect to ensure compliance and safeguard customer data Define and execute backup and disaster recovery strategies for Amazon Connect flows, call recordings, DynamoDB configurations, and other integrated services Collaborate with DevOps teams to automate deployment of Amazon Connect resources Monitor and enhance the performance of Amazon Connect to optimize customer service operations Offer recommendations for feature upgrades and process improvements based on Amazon Connect's evolving capabilities Requirements Extensive hands-on experience with Amazon Connect, Amazon Lex, AWS CLI, Kinesis, S3, and RDS, with 13–19 years in the IT field Strong expertise in Salesforce SCV integration, S3, CloudWatch, and Amazon Connect APIs Background in dynamic IVR flow design utilizing Amazon Lex, Polly, Lambda, and DynamoDB Proficiency in speech analytics, transcription, and sentiment analysis using Contact Lens Familiarity with monitoring tools such as CloudWatch, CloudTrail, and Datadog Competency in creating integrations across WFM, CRM, Unified Communications, and contact center solutions is an added advantage Demonstrated capability to implement and maintain security measures within Amazon Connect for compliance and data protection
Posted 2 days ago
5.0 years
0 Lacs
India
Remote
About Zeller At Zeller, we're champions for businesses of all sizes, and proud to be a fast-growing Australian scale-up reimagining business banking and payments. We believe in a level playing field, where all businesses benefit from access to smarter payments and financial services solutions that accelerate their cash flow, help them get paid faster, and give them a better understanding of their finances. So we're hard at work building the tools to make it happen. Zeller is growing fast, backed by leading VCs, and brings together a global team of passionate payment and tech industry professionals. With an exciting roadmap of innovative new products under development, we are building a supportive and high-performing team to inspire change in outdated banking solutions. If you are passionate about innovation, thrive in dynamic environments, embrace new possibilities, hate bureaucracy, and can't think of anything more exciting than evolving the status quo, then read on to learn more. Job Description As a Senior Software Engineer, you will be responsible for architecting and developing cloud native, highly available, robust, and secure applications in the AWS environment. You will have experience in web, mobile, backend, API, and database development, and should have experience in leading a team of software engineers. With automation and maintenance being at the heart of our engineering principles, this position will have the enviable opportunity to adopt and promote best practices, bleeding-edge technologies, and trends. Not limited to a single product area or type, this role will work in a cross-functional team with skill sets in full stack software engineering, DevOps, infrastructure, and quality assurance to architect solutions. You will collaborate with a cross-disciplinary team to own product software development, contribute to and promote standards and engineering best practices, and support operational activities such as process automation, compliance activities, and SLA upkeep requirements. You'll be tasked with translating business or product requirements into technical designs and hands-on implementation of those designs, seeing them through testing and deployment into various environments such as development, stress testing, integration testing, staging, and production. You will enjoy the fun of development from scratch in some application components while adhering to the company engineering standards, frameworks, and best practices. You will also be a collaborative engineer capable of observing and contributing to existing work by other team members. Automation and maintenance are key; you will be excited to see your contributions through to production and maintain their longevity in the mission-critical environment. Role Responsibilities And Experiences Analytical and able to work with fuzzy requirements. Methodically translate discussions with stakeholders, documents, and your own research findings into technical designs and implementation steps. Prior experience in handling a team of software engineers. A build-to-last, go-to-production mindset versus building proof-of-concepts. Strong background in software engineering and design patterns. Experience in microservices and serverless architecture. Knowledge of architecture patterns such as CQRS and event sourcing. Design, develop, and deploy microservices and serverless applications using Node.js, TypeScript, and AWS. Unit tests using Jest, along with Supertest and Postman as supporting tools.
Experience with NestJS. Good knowledge of multi-threaded and socket programming. Instinctive desire to maintain code quality, tidiness, and zero technical debt. Strong understanding of testing practices (TDD/BDD), with tools like Jest, Supertest, and Postman. Good with APIs and their design/protocols, e.g. RESTful, WebSocket, SOAP. Good understanding of request/response vs. async protocols. Familiarity with production-grade monitoring, logging, and alerting. Can work with various databases to match query and storage requirements, e.g. DynamoDB, SQL, DocumentDB. Build and maintain scalable REST APIs integrated with DynamoDB, S3, SNS/SQS, Step Functions, and Lambda. Experience in cloud native architecture. Understanding of data lakes and data warehousing. Knowledge of secure coding, e.g. OWASP, XSS, CORS. Experience in authentication standards and platforms, e.g. JWT, OAuth, identity federation. Experience in the AWS cloud environment: AWS serverless architecture, microservices, blue-green deployments. Own CI/CD processes using CodePipeline, CodeBuild, and CodeDeploy. Infrastructure as Code (IaC): Terraform, CloudFormation. AWS DevOps: SNS, SQS, EventBridge, Step Functions, ElastiCache, Load Balancing, Route 53, CloudFront, ECS, ECR, Auto Scaling, S3, RDS, DynamoDB, DocumentDB, CodePipeline, CodeBuild, CodeDeploy. Improve observability using CloudWatch, X-Ray, and other monitoring tools. Proven track record in developing and maintaining mission-critical, high-load production systems with a 99.999% SLA. Proven track record in supporting rapid and agile product deployments to different environments - dev, test, stress-testing, staging/production. Contribute to and evolve our technical architecture and engineering processes. Participate in system design and architecture reviews. Attributes Loves challenging the status quo. Ability to work autonomously yet collaboratively. Prepared to be bold yet consistent with your engineering principles. Logical, ethical, mature, and responsible. Fast learner, humble, and loves to share knowledge. Remains calm and composed in exceptional circumstances such as production issues and tight timeline requirements. Qualifications And Experience Minimum of a Bachelor's degree in software engineering (or related). 5+ years of working experience in a technical, hands-on software engineering role. Demonstrable experience in developing mission-critical systems. Bonus Points Experience in fintech. AWS Certified Solutions Architect (Associate or Professional). Experience working within a high-growth environment. Experience in other programming languages. Experience in payments. Exposure to Domain-Driven Design (DDD). Experience with PCI-compliant environments (PCI-DSS, etc.). Like the rest of our team, you will benefit from: Competitive remuneration; A balanced, progressive, and supportive work environment; Excellent parental leave and other leave entitlements; Fully remote role; Annual get-together with the team; Endless learning and development opportunities; Plenty of remote-friendly fun and social opportunities - we love to come together as a team; An ability to influence and shape the future of Zeller as our company scales both domestically and globally; Being part of one of Australia's most exciting scale-ups.
Posted 2 days ago
19.0 years
0 Lacs
Pune, Maharashtra, India
On-site
We are seeking a skilled Systems Architect to lead the design and implementation of advanced contact center solutions leveraging Amazon Connect and its integrated services. This role demands technical expertise in designing CCaaS architectures and ensuring seamless integration with CRM, WFM, and other unified communication platforms. Responsibilities Define technical design and make critical decisions for integrations architecture Produce the target state for implementations based on AWS Connect (Bring-Your-Own-Telephony) Support project estimations and contribute to accurate scoping Design end-to-end CCaaS architecture using Amazon Connect, Salesforce SCV, Lambda, and related AWS services Align call routing and handling logic with Salesforce SCV routing requirements Implement security best practices within Amazon Connect to ensure compliance and safeguard customer data Define and execute backup and disaster recovery strategies for Amazon Connect flows, call recordings, DynamoDB configurations, and other integrated services Collaborate with DevOps teams to automate deployment of Amazon Connect resources Monitor and enhance the performance of Amazon Connect to optimize customer service operations Offer recommendations for feature upgrades and process improvements based on Amazon Connect's evolving capabilities Requirements Extensive hands-on experience with Amazon Connect, Amazon Lex, AWS CLI, Kinesis, S3, and RDS, with 13–19 years in the IT field Strong expertise in Salesforce SCV integration, S3, CloudWatch, and Amazon Connect APIs Background in dynamic IVR flow design utilizing Amazon Lex, Polly, Lambda, and DynamoDB Proficiency in speech analytics, transcription, and sentiment analysis using Contact Lens Familiarity with monitoring tools such as CloudWatch, CloudTrail, and Datadog Competency in creating integrations across WFM, CRM, Unified Communications, and contact center solutions is an added advantage Demonstrated capability to implement and maintain security measures within Amazon Connect for compliance and data protection
Posted 2 days ago
6.0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
Req ID: 336371 NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Java 8+, Spring Boot, RESTful API, AWS - Developer to join our team in Chennai, Tamil Nadu (IN-TN), India (IN). Java 8+, Spring Boot, RESTful API, AWS - Developer (FMRJP00035645), Full Stack Engineer 3 (6-9 years); 6 to 7 years preferred. Candidates with the below skills: Strong skills in Java 8+, web application frameworks such as Spring Boot, and RESTful API development. Familiarity with AWS toolsets, including but not limited to SQS, Lambda, DynamoDB, RDS, S3, Kinesis, CloudFormation. Demonstrated experience in designing, building, and documenting customer-facing RESTful APIs. Demonstrable ability to read high-level business requirements and drive clarifying questions. Demonstrable ability to engage in self-paced continuous learning to upskill, with the collaboration of engineering leaders. Demonstrable ability to manage your own time and prioritize how you spend your time most effectively. Strong skills with the full lifecycle of development, from analysis to installation into production. Minimum experience on key skills: 6-7 years. General expectations: Must have good communication. Must be ready to work a 10:30 AM to 8:30 PM shift. Flexible to work at the client location, DLF Downtown, Tharamani, Chennai. Must be ready to work from the office in a hybrid work environment; full remote work is not an option. Expect a full return to office in 2026. Pre-requisites before submitting profiles: Must have a genuine and digitally signed Form 16 for ALL employments. All employment history/details must be present in UAN/PPF statements. Candidates must be screened via video to ensure they are genuine and have a proper work setup. Candidates must have real work experience in the mandatory skills mentioned in the JD. Profiles must list the companies with which candidates are on payroll, not the client names, as their employers. As these are competitive positions and the client will not wait for 60 days and carry the risk of drop-out, candidates must have a notice period of 0 to 3 weeks. Candidates must be screened for any gaps after education and during employment, and for the genuineness of the reasons. About NTT DATA NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com Whenever possible, we hire locally to NTT DATA offices or client sites. This ensures we can provide timely and effective support tailored to each client's needs. While many positions offer remote or hybrid work options, these arrangements are subject to change based on client requirements.
For employees near an NTT DATA office or client site, in-office attendance may be required for meetings or events, depending on business needs. At NTT DATA, we are committed to staying flexible and meeting the evolving needs of both our clients and employees. NTT DATA recruiters will never ask for payment or banking information and will only use @nttdata.com and @talent.nttdataservices.com email addresses. If you are requested to provide payment or disclose banking information, please submit a contact us form, https://us.nttdata.com/en/contact-us . NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us . This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here . If you'd like more information on your EEO rights under the law, please click here . For Pay Transparency information, please click here .
Posted 2 days ago
7.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
We’re seeking a highly skilled Senior Fullstack Engineer with deep expertise in TypeScript, a strong preference for React on the frontend and NestJS on the backend, and rock-solid software-engineering fundamentals. You’ll balance frontend and backend work, contribute ideas when new technical challenges arise, and help deliver features quickly and reliably. Key Responsibilities Design, develop, and maintain scalable fullstack applications using TypeScript (React + NestJS preferred). Integrate AI/ML capabilities into web applications using APIs or custom-trained models to enhance user experiences and automation. Comfortably tackle complex sprint tickets and help teammates unblock issues, delivering high-quality solutions efficiently. Propose and discuss technical approaches with the team when new problems surface. Collaborate closely with designers, product managers, data scientists, and engineers to ship intelligent, high-quality features. Write clean, testable, maintainable code and participate in code reviews. Deploy and troubleshoot applications in AWS-based environments. Qualifications 7+ years of professional experience across frontend and backend development. A background in AI/ML integration—particularly deploying AI-powered features within fullstack applications Advanced proficiency in TypeScript with significant React and NestJS experience. Hands-on experience integrating AI/ML APIs or services (e.g., OpenAI, AWS SageMaker, TensorFlow Serving, or similar). Strong foundations in design patterns, automated testing, clean architecture, and SOLID principles. Experience with relational databases (e.g., PostgreSQL) and ORMs such as Prisma or TypeORM. Practiced in writing and maintaining automated tests (e.g., Jest, Playwright, Cypress). Fluent English—clear, efficient verbal and written communication. Experience deploying applications to AWS (Lambda, S3, DynamoDB, API Gateway, IAM). Comfortable working in Agile environments, with a strong sense of ownership and accountability for quality and performance. Preferred Qualifications Familiarity with event-driven architectures, including tools and patterns such as Kafka / Amazon MSK, SNS + SQS fan-out, and Amazon EventBridge. Experience building microservices or modular monoliths and understanding their trade-offs. Familiarity with CI/CD pipelines (including GitHub Actions) and infrastructure-as-code tooling. Awareness of application-security best practices and performance tuning techniques. Experience with GraphQL, WebSockets, or other real-time communication patterns. Exposure to ML pipelines or MLOps workflows, even at a basic level, is a strong plus. Demonstrated eagerness to learn new technologies, especially in the evolving AI/ML space. About Us TechAhead is a global digital transformation company with a strong presence in the USA and India. We specialize in AI-first product design thinking and bespoke development solutions . With over 15 years of proven expertise, we have partnered with Fortune 500 companies and leading global brands to drive digital innovation and deliver excellence. At TechAhead, we are committed to continuous learning, growth and crafting tailored solutions that meet the unique needs of our clients. Join us to shape the future of digital innovation worldwide and drive impactful results with cutting-edge AI tools and strategies!
Posted 2 days ago
3.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
We're looking for an entrepreneurial, passionate, and driven Data Engineer to join Startup Gala Intelligence, backed by Navneet Tech Venture. As we're building our technology platform from scratch, you'll have the unique opportunity to shape our technology vision, architecture, and engineering culture right from the ground up. You'll directly contribute to foundational development and establish best practices, while eventually building and contributing to our engineering team. This role is ideal for someone eager to own the entire tech stack, who thrives on early-stage challenges, and loves building innovative, scalable solutions from day zero. What you'll do: Data Pipeline Development - Build scalable, resilient pipelines for collecting, transforming, and loading large datasets from web, APIs, and internal systems. Multi-Source Data Integration - Ingest data from diverse structured, semi-structured, and unstructured sources (web scraping, open datasets, APIs, third-party feeds). Data Storage & Modeling - Design and maintain efficient schemas across NoSQL (e.g., MongoDB, DynamoDB) and graph databases (e.g., Neo4j) to handle identity relationships at scale. Data Quality & Trust Scoring - Develop processes to deduplicate, validate, and assign confidence scores to identity data. Data Enrichment - Build enrichment workflows to merge identity records from multiple sources into a unified view. Orchestration & Automation - Implement workflow orchestration using tools like Airflow, Dagster, or Prefect for scheduling, monitoring, and scaling data pipelines (a minimal orchestration sketch follows this posting). Performance Optimization - Optimize ETL/ELT processes for speed, scalability, and cost efficiency. Collaboration - Work closely with backend engineers, AI/ML teams, and product managers to deliver data solutions aligned with product goals. Who you are: Technical Requirements: 3+ years in data engineering or backend engineering with a strong focus on data pipelines. Hands-on experience with ETL/ELT frameworks and orchestration tools (Airflow, Prefect, Dagster). Proficiency in Python or another data-friendly language. Experience with NoSQL databases (MongoDB, DynamoDB) and graph databases (Neo4j). Strong understanding of web scraping frameworks and data ingestion APIs. Familiarity with cloud data infrastructure (AWS, GCP, or Azure). Experience in handling large datasets, data normalization, and data quality processes. Bonus Points: Problem Solver - You thrive in ambiguous situations and can design solutions from scratch. Attention to Detail - Data integrity is non-negotiable for you. Collaborative - You work well with cross-functional teams and communicate clearly. Startup Mindset - You're comfortable moving fast, iterating often, and wearing multiple hats. Who We Are & Our Culture: Gala Intelligence, backed by Navneet Tech Ventures, is a tech-driven startup dedicated to solving one of the most pressing business challenges - fraud detection and prevention. We're building cutting-edge, real-time products designed to empower consumers and businesses to stay ahead of fraudsters, leveraging innovative technology and deep domain expertise. Our culture and values: We're united by a single, critical mission - stopping fraud before it impacts businesses. Curiosity, innovation, and proactive action define our approach. We value transparency, collaboration, and individual ownership, creating an environment where talented people can do their best work.
Problem-Driven Innovation : We're deeply committed to solving real challenges that genuinely matter for our customers. Rapid Action & Ownership : We encourage autonomy and accountability—own your projects, move quickly, and shape the future of Gala Intelligence. Collaborative Excellence : Cross-team collaboration ensures alignment, sparks innovation, and drives us forward together. Continuous Learning : Fraud evolves rapidly, and so do we. Continuous improvement, experimentation, and learning are core to our success. If you're excited by the opportunity to leverage technology in the fight against fraud, and you're ready to build something impactful from day one, we want to hear from you!
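As a rough, hedged illustration of the Airflow-style orchestration mentioned in this posting, here is a minimal DAG sketch (assuming a recent Airflow 2.x); the DAG name, schedule, and task functions are hypothetical placeholders, not anything specified by Gala Intelligence.

```python
# Minimal Airflow DAG sketch (hypothetical tasks and schedule).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest_sources(**_):
    # Placeholder: pull raw identity records from APIs / scraped feeds.
    print("ingesting raw records")


def validate_and_score(**_):
    # Placeholder: deduplicate records and assign confidence scores.
    print("validating and scoring records")


def load_to_store(**_):
    # Placeholder: write the enriched, unified view to the target store.
    print("loading enriched records")


with DAG(
    dag_id="identity_enrichment_pipeline",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="ingest_sources", python_callable=ingest_sources)
    score = PythonOperator(task_id="validate_and_score", python_callable=validate_and_score)
    load = PythonOperator(task_id="load_to_store", python_callable=load_to_store)

    ingest >> score >> load  # linear dependency chain: ingest, then score, then load
```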
Posted 2 days ago
8.0 years
0 Lacs
Mohali district, India
On-site
Job Title: Senior DevOps Engineer – Travel Domain Location: Mohali Employment Type: Full-time Experience: 8+ years (minimum 2 years in travel domain preferred) Role Overview The Senior DevOps Engineer will be responsible for designing, deploying, and maintaining secure, scalable AWS infrastructure. The role involves leading cloud architecture initiatives, implementing robust security frameworks, optimizing deployment pipelines, and ensuring seamless integrations with Online Booking Tools (OBT), Global Distribution Systems (GDS), and transportation services. Key Responsibilities Architect, deploy, and manage AWS infrastructure (EC2, ECS/EKS, Lambda, S3, RDS, DynamoDB, VPC, CloudFront, CloudFormation/Terraform, etc.). Manage Active Directory integration with AWS Directory Service for centralized authentication and role-based access. Design and implement secure network topologies and firewall architectures (AWS Network Firewall, WAF, security groups, NACLs). Build and manage centralized logging, monitoring, and alerting solutions (ELK Stack, OpenSearch, CloudWatch, etc.). Implement CI/CD pipelines using Jenkins, GitLab CI, AWS CodePipeline. Conduct security assessments and apply PCI-DSS, CIS, and AWS best practices. Manage IAM policies, MFA, KMS, secrets management, and encryption. Automate infrastructure operations with Python, Bash, Go, Ansible, Chef, or Puppet. Develop disaster recovery plans, backups, and failover strategies. Monitor cost and optimize AWS resource utilization. Document architectures, processes, and security policies. Required Expertise AWS & Cloud Architecture Deep knowledge of AWS core services, networking (VPC, subnets, VPN, Transit Gateway, Direct Connect), and cloud security. Identity & Access Management Strong hands-on Active Directory management and cloud integration experience. Security & Network Firewall and ACL configuration in cloud and hybrid environments. Familiarity with on-prem firewalls such as Sophos. Monitoring & Analytics Proficiency with ELK Stack, Amazon OpenSearch, CloudWatch, Grafana, or Prometheus. DevOps & Automation Strong scripting skills in Python, Bash, or Go. CI/CD pipeline development and container orchestration (Docker, Kubernetes/EKS). Compliance Understanding of CIS benchmarks, GDPR, ISO 27001, SOC 2. Preferred Qualifications AWS Certifications (DevOps Engineer, Solutions Architect, Security Specialty). PCI-DSS audit or implementation experience. Zero-trust architecture and microsegmentation knowledge. Travel domain experience (OBT, GDS, transportation APIs).
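As one hedged example of the Python-based security automation this role describes, the sketch below uses boto3 to flag security groups that leave SSH open to the world; the region default and the specific rule check are illustrative assumptions only, not part of the job description.

```python
# Hypothetical boto3 audit: flag security groups that allow SSH (port 22) from 0.0.0.0/0.
import boto3


def find_open_ssh_groups(region_name="ap-south-1"):  # region is a placeholder
    ec2 = boto3.client("ec2", region_name=region_name)
    offenders = []
    paginator = ec2.get_paginator("describe_security_groups")
    for page in paginator.paginate():
        for group in page["SecurityGroups"]:
            for rule in group.get("IpPermissions", []):
                if rule.get("FromPort") == 22 and any(
                    ip_range.get("CidrIp") == "0.0.0.0/0"
                    for ip_range in rule.get("IpRanges", [])
                ):
                    offenders.append(group["GroupId"])
    return offenders


if __name__ == "__main__":
    for group_id in find_open_ssh_groups():
        print(f"Security group {group_id} allows SSH from anywhere")
```

A script like this would typically run on a schedule and feed alerts into the centralized logging and alerting stack described above.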
Posted 2 days ago
12.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
Skill: AWS Architect, Application Modernization Experience: 12-19 years Joining Time: Need 30-45 days joiners Work Location: Kolkata Job Description: • 15+ years of hands-on IT experience in the design and development of complex systems • Minimum of 5+ years in a solution or technical architect role using service and hosting solutions such as private/public cloud IaaS, PaaS and SaaS platforms • At least 4+ years of hands-on experience in cloud native architecture design and implementation of distributed, fault-tolerant enterprise applications for the cloud • Experience in application migration to the AWS cloud using refactoring, rearchitecting and re-platforming approaches • 3+ years of proven experience using AWS services in architecting PaaS solutions • AWS Certified Architect Technical Skills • Deep understanding of Cloud Native and Microservices fundamentals • Deep understanding of Gen AI usage and LLM models; hands-on experience creating agentic flows using AWS Bedrock; hands-on experience using Amazon Q for Dev/Transform • Deep knowledge and understanding of AWS PaaS and IaaS features • Hands-on experience with AWS services, e.g. EC2, ECS, S3, Aurora DB, DynamoDB, Lambda, SQS, SNS, RDS, API Gateway, VPC, Route 53, Kinesis, CloudFront, CloudWatch, AWS SDK/CLI, etc. • Strong experience in designing and implementing core services like VPC, S3, EC2, RDS, IAM, Route 53, Auto Scaling, CloudWatch, AWS Config, CloudTrail, ELB, AWS Migration services, VPN/Direct Connect • Hands-on experience in enabling cloud PaaS app and data services like Lambda, RDS, SQS, MQ, Step Functions, AppFlow, SNS, EMR, Kinesis, Redshift, Elasticsearch and others • Experience in automation and provisioning of cloud environments using APIs, CLI and scripts • Experience in deploying, managing and scaling applications using CloudFormation/AWS CLI • Good understanding of AWS security best practices and the Well-Architected Framework • Good knowledge of migrating on-premise applications to AWS IaaS • Good knowledge of AWS IaaS (AMI, pricing model, VPC, subnets, etc.) • Good to have experience in cloud data processing and migration and advanced analytics: AWS Redshift, Glue, AWS EMR, AWS Kinesis, Step Functions • Creating, deploying, configuring and scaling applications on AWS PaaS • Experience in Java and frameworks such as Spring, Spring Boot, Spring MVC and Spring Security, and in multi-threaded programming • Experience in working with Hibernate or other ORM technologies along with JPA • Experience in working with modern web technologies such as Angular, Bootstrap, HTML5, CSS3, React • Experience in modernization of legacy applications to modern Java applications • Experience with DevOps tools: Jenkins/Bamboo, Git, Maven/Gradle, Jira, SonarQube, JUnit, Selenium, automated deployments and containerization • Knowledge of relational databases and NoSQL databases, e.g. MongoDB, Cassandra, etc. • Hands-on experience with the Linux operating system • Experience in full life-cycle agile software development • Strong analytical & troubleshooting skills • Experienced in Python, Node and Express JS (optional) Main Duties: • The AWS architect takes the company's business strategy and outlines the technology systems architecture that will be needed to support that strategy. • Responsible for analysis, evaluation and development of enterprise long-term cloud strategic and operating plans to ensure that the EA objectives are consistent with the enterprise's long-term business objectives.
• Responsible for the development of architecture blueprints for related systems • Responsible for recommendations on cloud architecture strategies, processes and methodologies • Involved in the design and implementation of the best-fit solution with respect to the Azure and multi-cloud ecosystem • Recommends and participates in activities related to the design, development and maintenance of the Enterprise Architecture (EA) • Conducts and/or actively participates in meetings related to the designated project/s • Participates in client pursuits and is responsible for the technical solution • Shares best practices and lessons learned, and constantly updates the technical system architecture requirements based on changing technologies and knowledge related to recent, current and upcoming vendor products and solutions • Collaborates with all relevant parties in order to review the objectives and constraints of each solution and determine conformance with the EA; recommends the most suitable technical architecture and defines the solution at a high level.
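To give a flavour of the API-driven provisioning and CloudFormation deployment experience listed above, here is a small, hedged boto3 sketch; the stack name, template file, and parameters are hypothetical and not taken from the job description.

```python
# Hypothetical boto3 deployment: create a CloudFormation stack and wait for completion.
import boto3


def deploy_stack(stack_name="demo-app-stack", template_path="template.yaml"):
    cfn = boto3.client("cloudformation")
    with open(template_path) as f:
        template_body = f.read()

    cfn.create_stack(
        StackName=stack_name,
        TemplateBody=template_body,
        Capabilities=["CAPABILITY_NAMED_IAM"],  # required if the template creates IAM resources
        Parameters=[{"ParameterKey": "Environment", "ParameterValue": "dev"}],
    )
    # Block until the stack reaches CREATE_COMPLETE (the waiter raises on failure/rollback).
    waiter = cfn.get_waiter("stack_create_complete")
    waiter.wait(StackName=stack_name)
    print(f"Stack {stack_name} created")


if __name__ == "__main__":
    deploy_stack()
```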
Posted 2 days ago
3.0 years
0 Lacs
India
On-site
Job Title: Data Engineer About VXI VXI Global Solutions is a BPO leader in customer service, customer experience, and digital solutions. Founded in 1998, the company has 40,000 employees in more than 40 locations in North America, Asia, Europe, and the Caribbean. We deliver omnichannel and multilingual support, software development, quality assurance, CX advisory, and automation & process excellence to the world's most respected brands. VXI is one of the fastest growing, privately held business services organizations in the United States and the Philippines, and one of the few US-based customer care organizations in China. VXI is also backed by private equity investor Bain Capital. Our initial partnership ran from 2012 to 2016 and was the beginning of prosperous times for the company. During this period, not only did VXI expand our footprint in the US and Philippines, but we also gained ground in the Chinese and Central American markets. We also acquired Symbio, expanding our global technology services offering and enhancing our competitive position. In 2022, Bain Capital re-invested in the organization after completing a buy-out from Carlyle. This is a rare occurrence in the private equity space and shows the level of performance VXI delivers for our clients, employees, and shareholders. With this recent investment, VXI has started on a transformation to radically improve the CX experience through an industry-leading generative AI product portfolio that spans hiring, training, customer contact, and feedback. Job Description: We are seeking talented and motivated Data Engineers to join our dynamic team and contribute to our mission of harnessing the power of data to drive growth and success. As a Data Engineer at VXI Global Solutions, you will play a critical role in designing, implementing, and maintaining our data infrastructure to support our customer experience and management initiatives. You will collaborate with cross-functional teams to understand business requirements, architect scalable data solutions, and ensure data quality and integrity. This is an exciting opportunity to work with cutting-edge technologies and shape the future of data-driven decision-making at VXI Global Solutions. Responsibilities: Design, develop, and maintain scalable data pipelines and ETL processes to ingest, transform, and store data from various sources. Collaborate with business stakeholders to understand data requirements and translate them into technical solutions. Implement data models and schemas to support analytics, reporting, and machine learning initiatives. Optimize data processing and storage solutions for performance, scalability, and cost-effectiveness. Ensure data quality and integrity by implementing data validation, monitoring, and error handling mechanisms. Collaborate with data analysts and data scientists to provide them with clean, reliable, and accessible data for analysis and modeling. Stay current with emerging technologies and best practices in data engineering and recommend innovative solutions to enhance our data capabilities. Requirements: Bachelor's degree in Computer Science, Engineering, or a related field. Proven 3+ years' experience as a data engineer or in a similar role. Proficiency in SQL, Python, and/or other programming languages for data processing and manipulation.
Experience with relational and NoSQL databases (e.g., SQL Server, MySQL, Postgres, Cassandra, DynamoDB, MongoDB, Oracle), data warehousing (e.g., Vertica, Teradata, Oracle Exadata, SAP Hana), and data modeling concepts. Strong understanding of distributed computing frameworks (e.g., Apache Spark, Apache Flink, Apache Storm) and cloud-based data platforms (e.g., AWS Redshift, Azure, Google BigQuery, Snowflake) Familiarity with data visualization tools (e.g., Tableau, Power BI, Looker, Apache Superset) and data pipeline tools (e.g. Airflow, Kafka, Data Flow, Cloud Data Fusion, Airbyte, Informatica, Talend) is a plus. Understanding of data and query optimization, query profiling, and query performance monitoring tools and techniques. Solid understanding of ETL/ELT processes, data validation, and data security best practices Experience in version control systems (Git) and CI/CD pipelines. Excellent problem-solving skills and attention to detail. Strong communication and collaboration skills to work effectively with cross-functional teams. Join VXI Global Solutions and be part of a dynamic team dedicated to driving innovation and delivering exceptional customer experiences. Apply now to embark on a rewarding career in data engineering with us!
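As a hedged, minimal illustration of the pipeline and data-validation work described in this posting, the sketch below shows a tiny extract-transform-load step in Python with a basic quality check; the source file, schema, and target table are hypothetical placeholders rather than anything VXI specifies.

```python
# Hypothetical mini ETL: read a CSV, validate rows, and load them into SQLite.
import csv
import sqlite3

REQUIRED_FIELDS = ("customer_id", "interaction_date", "channel")  # assumed schema


def extract(path="interactions.csv"):
    with open(path, newline="") as f:
        return list(csv.DictReader(f))


def validate(rows):
    """Drop rows with missing required fields; count rejects for monitoring."""
    clean, rejected = [], 0
    for row in rows:
        if all(row.get(field) for field in REQUIRED_FIELDS):
            clean.append(row)
        else:
            rejected += 1
    print(f"validation: kept {len(clean)}, rejected {rejected}")
    return clean


def load(rows, db_path="warehouse.db"):
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS interactions "
        "(customer_id TEXT, interaction_date TEXT, channel TEXT)"
    )
    conn.executemany(
        "INSERT INTO interactions VALUES (?, ?, ?)",
        [(r["customer_id"], r["interaction_date"], r["channel"]) for r in rows],
    )
    conn.commit()
    conn.close()


if __name__ == "__main__":
    load(validate(extract()))
```

In a production pipeline the same extract/validate/load shape would typically be wrapped in an orchestrator (e.g. Airflow) and pointed at a warehouse rather than SQLite.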
Posted 2 days ago
5.0 years
0 Lacs
India
On-site
Sparsa AI is a Singapore and Germany based Industrial-AI Startup, building the next generation of agentic AI platform to transform how physical industries—such as manufacturing and logistics—make decisions and optimize their operations. Our AI agents orchestrate complex workflows across business functions and enterprise applications including ERP, MES, CRM and supply chain environments to resolve real-world constraints and unlock productivity. We’re looking for a Backend Engineer to help us build foundational services that power agent observability, feedback systems, and data capture for LLM fine-tuning. This role sits at the intersection of infrastructure and AI, and is core to enabling intelligent enterprise agents. What you'll do: Design and maintain backend services for agent orchestration, session tracking, and feedback ingestion Build APIs and microservices that support real-time and batch workflows Integrate observability tooling to enable fine-grained monitoring of agent behavior Work closely with the AI team to structure data pipelines and interaction logs Contribute to internal SDKs and service interfaces used across deployments What we’re looking for: 2–5 years of backend engineering experience (Node.js, Python, or similar) Experience with API development, WebSockets, and microservices patterns Familiarity with DynamoDB, Redis, and message buses like Kafka/EventBridge Exposure to observability stacks (OpenTelemetry, tracing, metrics) Bonus: experience working on LLM-based or event-driven systems Benefits A key engineering role at a pioneering AI company with operations in Asia and Europe. High ownership of backend systems that directly shape intelligent enterprise agent performance. Close collaboration with the AI and infrastructure teams to design services powering agent orchestration, observability, and fine-tuning data pipelines. The chance to build foundational systems that will be deployed across multiple industries and high-impact enterprise environments. Join Us at Sparsa AI If you are passionate about building transformative products at the intersection of AI and industrial operations, we invite you to shape the future with us. This is your opportunity to learn and execute in a fast-growing company that is redefining how the real economy works. At Sparsa AI, you'll work alongside an exceptional team, solve real-world problems, and leave a lasting impact on global industries. Let’s build the future of Industrial AI-Agents—together. If you have the chops, let’s connect!
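As a rough sketch of the kind of feedback-ingestion API this role involves (assuming the Python option mentioned in the posting and FastAPI as one plausible framework), here is a minimal service outline; the route, payload fields, and in-memory store are hypothetical illustrations only.

```python
# Hypothetical FastAPI service: ingest agent feedback events for later fine-tuning datasets.
from datetime import datetime, timezone
from typing import Optional

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
FEEDBACK_LOG = []  # stand-in for a durable store (e.g. DynamoDB or a Kafka topic)


class Feedback(BaseModel):
    session_id: str               # agent session being rated
    rating: int                   # e.g. a 1-5 usefulness score
    comment: Optional[str] = None


@app.post("/feedback")
def ingest_feedback(feedback: Feedback):
    record = {
        "session_id": feedback.session_id,
        "rating": feedback.rating,
        "comment": feedback.comment,
        "received_at": datetime.now(timezone.utc).isoformat(),
    }
    FEEDBACK_LOG.append(record)  # replace with a real write in production
    return {"status": "accepted", "count": len(FEEDBACK_LOG)}
```

Run with `uvicorn app:app` and POST JSON to /feedback; the real service would swap the in-memory list for the message bus or database the team actually uses.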
Posted 2 days ago
5.0 - 7.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Amazon Connect Contact Center Associate/Engineer Company: IOWeb3 Technologies Location: Remote/Hybrid/Onsite (your preference) About IOWeb3 Technologies: IOWeb3 Technologies is a leading software development company dedicated to crafting exceptional customer experiences and delivering cutting-edge tech solutions. Specializing in product engineering, digital solutions, UI-UX design, and innovative software products, we are committed to integrity, collaboration, and excellence in all we do. We partner with reputable clients globally, providing high-quality, reliable products and services. About the Role: We are actively seeking talented Amazon Connect Contact Center Associates/Engineers at various experience levels for one of our esteemed multinational clients. If you have a passion for leveraging cloud-based contact center solutions to enhance customer interactions, we encourage you to apply! Amazon Connect Tech Lead with 5-7 years' experience. Experience with Amazon Connect (contact flows, queues, routing profiles). Integration experience with Salesforce (Service Cloud Voice). Hands-on with AWS services: AWS Lambda (for backend logic), Amazon Lex (voice/chat bots), Amazon Kinesis (for call analytics and streaming), Amazon S3 (storage), Amazon DynamoDB, CloudWatch, IAM. Programming language: Java or Python. Proficiency in programming/scripting: JavaScript, Node.js. Familiarity with CI/CD pipelines, DevOps, and infrastructure as code (Terraform, CloudFormation). Knowledge of contact center KPIs and analytics dashboards. Solid experience in REST APIs and integration frameworks. Experience working in Agile/Scrum environments. Experience in automating deployments of contact center setups using CI/CD pipelines or Terraform. Lead the end-to-end implementation and configuration of Amazon Connect. Understanding of telephony concepts: SIP, DID, ACD, IVR, CTI. AWS Certified Solutions Architect or Amazon Connect certification. Why Join IOWeb3 Technologies? * Work with a team that values integrity, collaboration, and excellence. * Opportunity to contribute to impactful projects for a multinational client. * Flexible work arrangements (Remote, Hybrid, or Onsite options). * Be part of a company that is at the forefront of digital solutions and customer experience. Apply Now: If you're an Amazon Connect specialist looking for your next challenge, apply today! Please indicate your relevant experience level in your application.
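For flavour, here is a minimal, hedged Python Lambda of the kind an Amazon Connect contact flow can invoke for backend logic, looking up the caller in DynamoDB and returning attributes to the flow; the table name and attribute keys are hypothetical, and the event shape assumed is the standard Connect contact-flow invocation format.

```python
# Hypothetical Lambda invoked from an Amazon Connect contact flow.
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("customer-profiles")  # placeholder table name


def handler(event, context):
    # Amazon Connect passes contact data under event["Details"]["ContactData"].
    contact = event["Details"]["ContactData"]
    caller_number = contact["CustomerEndpoint"]["Address"]

    item = table.get_item(Key={"phone_number": caller_number}).get("Item")

    # Connect expects a flat map of key/value pairs back from Lambda,
    # which the contact flow can then read as external attributes.
    if item:
        return {"customerFound": "true", "customerName": str(item.get("name", ""))}
    return {"customerFound": "false"}
```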
Posted 2 days ago
8.0 years
0 Lacs
India
Remote
Location: Remote
We are building a scalable AWS-based backend application that integrates with CrowdStrike and Palo Alto APIs to process and enforce host/IP block requests. The application will log and audit each action, store metadata in DynamoDB, and run as a containerized service on ECS Fargate. Built using Python (Flask/FastAPI), the system is designed to scale across 44+ enforcement endpoints.
Responsibilities
Design and develop RESTful APIs using Python (Flask or FastAPI).
Implement integrations with CrowdStrike, Palo Alto, and other security platforms.
Build scalable serverless and containerized solutions using AWS services (ECS Fargate, Lambda, API Gateway, DynamoDB, CloudWatch).
Develop data logging, auditing, and monitoring pipelines.
Optimize application performance and scalability for future growth.
Ensure secure development practices, including IAM policies and secrets management.
Collaborate with frontend, DevOps, and QA teams to deliver features end-to-end.
Required Skills & Experience
3–8 years of backend development experience with Python.
Strong experience with Flask or FastAPI frameworks.
Hands-on experience with AWS services (ECS Fargate, API Gateway, Lambda, DynamoDB, CloudWatch).
Experience with REST API design and third-party API integrations.
Familiarity with Docker and container orchestration.
Understanding of logging, auditing, and monitoring tools in distributed systems.
Knowledge of security best practices (authentication, authorization, secrets management).
Good to Have
Experience with cybersecurity tools (CrowdStrike, Palo Alto, etc.).
Knowledge of CI/CD pipelines (CodePipeline, GitHub Actions, or similar).
Background in building scalable, high-availability applications.
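As a minimal sketch of the system described above (a FastAPI service that records a host/IP block request and writes an audit record to DynamoDB before enforcement), the example below stubs out the vendor calls. The route, table name, and payload fields are assumptions for illustration; the real CrowdStrike/Palo Alto integration is not shown.

```python
# Minimal sketch of a block-request API with DynamoDB auditing.
# Table name, route, and payload fields are illustrative assumptions;
# the real CrowdStrike / Palo Alto enforcement calls are represented by a stub.
import time
import uuid

import boto3
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
audit_table = boto3.resource("dynamodb").Table("BlockRequestAudit")  # hypothetical table

class BlockRequest(BaseModel):
    target: str          # hostname or IP to block
    platform: str        # e.g. "crowdstrike" or "paloalto"
    requested_by: str

def enforce_block(req: BlockRequest) -> str:
    # Stub: in the real service this would call the vendor API and return its status.
    return "submitted"

@app.post("/block-requests")
def create_block_request(req: BlockRequest):
    status = enforce_block(req)
    record = {
        "request_id": str(uuid.uuid4()),
        "target": req.target,
        "platform": req.platform,
        "requested_by": req.requested_by,
        "status": status,
        "created_at": int(time.time()),
    }
    audit_table.put_item(Item=record)  # every enforcement action is logged for audit
    return record
```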
Posted 2 days ago
8.0 years
0 Lacs
Chandigarh, India
Remote
Job Specification: Senior Software Engineer
Job Type: 12-month contract
Working Model: Permanent WFH (Work from Home)
Job Overview: We are seeking a highly skilled Senior Software Engineer to design, develop, and maintain scalable software solutions that align with our business objectives. This role involves a thorough review of our current code and architecture to identify areas for improvement, followed by the development and implementation of agreed changes.
Key Responsibilities:
Design, develop, and maintain scalable software solutions that are robust and reliable.
Review existing code and architecture to identify improvement areas.
Develop and implement improvements to enhance performance and maintainability.
Collaborate with cross-functional teams to ensure seamless integration of new features.
Utilise AWS services such as Lambda, API Gateway, DynamoDB, S3, SQS, and SNS.
Implement infrastructure as code using Terraform.
Integrate portal interfaces with Salesforce, Zuora, or Commerce Tools.
Design and implement workflows using AWS Step Functions.
Develop serverless architectures and microservices-based applications.
Required Qualifications:
8+ years of experience in software development.
Proficiency in React and Node.js.
Extensive experience with AWS services, including Lambda, API Gateway, DynamoDB, S3, SQS, and SNS.
Strong understanding of cloud infrastructure principles.
Experience with Terraform for infrastructure as code.
Experience with portal interface integration with Salesforce, Zuora, or Commerce Tools.
Hands-on experience with AWS Step Functions.
Experience with serverless architectures and microservices-based applications.
Preferred Attributes:
Strong analytical and problem-solving skills.
Excellent communication and teamwork abilities.
Ability to work independently in a remote work environment.
Posted 2 days ago
7.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Role: Full Stack Engineer
Years of Experience: 7+ years (relevant)
Work Location: Remote
Work Shift: 2–11 PM IST
Senior-level experience developing and deploying full-stack applications, especially on AWS.
Design, develop, and deploy full-stack solutions on AWS, focusing on cloud architecture and end-to-end application delivery.
Top 3 skills:
- AWS tools
- Python: enterprise-level project experience in Python, including the libraries used.
- Competence in Generative AI: REST APIs, vector databases, prompt engineering, prototyping.
Develop serverless applications using AWS Lambda, API Gateway, and other relevant AWS services.
Strong expertise in Python, with a solid understanding of object-oriented programming and best practices.
In-depth experience with AWS services such as API Gateway, Lambda functions, DynamoDB, and cloud-native development practices.
Proven experience in building applications end-to-end on AWS, from architecture to deployment.
Implement, manage, and optimize databases using DynamoDB and other AWS storage solutions.
Strong understanding of cloud security best practices and AWS compliance frameworks.
Familiarity with web frameworks and technologies (e.g., Django, Flask, React, Node.js) is a plus.
Ability to work in a collaborative, fast-paced, and agile development environment.
Strong problem-solving skills and attention to detail.
Excellent communication skills, able to explain complex technical concepts to both technical and non-technical stakeholders.
Preferred Qualifications:
Familiarity with additional AWS services such as EC2, S3, and CloudFormation.
Experience with modern DevOps practices, CI/CD pipelines, and cloud monitoring.
Previous experience in a Full Stack Engineer or similar role, delivering production-level applications on AWS.
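Since the role centers on serverless Python delivery behind API Gateway, here is a minimal sketch of a Lambda handler wired to an API Gateway proxy integration; the payload fields and response shape are illustrative assumptions rather than project specifics.

```python
# Minimal Lambda handler for an API Gateway (proxy integration) endpoint.
# The request/response shape follows the proxy-integration contract; the
# payload fields themselves are illustrative assumptions.
import json

def lambda_handler(event, context):
    # API Gateway proxy integrations deliver the raw request body as a string.
    payload = json.loads(event.get("body") or "{}")
    name = payload.get("name", "world")

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```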
Posted 2 days ago
6.0 years
0 Lacs
Greater Kolkata Area
Remote
Join the Tilt team
At Tilt (formerly Empower), we see a side of people that traditional lenders miss. Our mobile-first products and machine learning-powered credit models look beyond outdated credit scores, using over 250 real-time financial signals to recognize real potential. Named among the next billion-dollar startups, we're not just changing how people access financial products — we're creating a new credit system that backs the working, whatever they're working toward.
The Opportunity: Lead Engineer - India
Tilt is looking for a hands-on technical expert to spearhead the system redesign of NIRA, our newly acquired credit business in India. This is a senior IC role, not a people management one.
NIRA is building the leading financial brand for Middle India—a market of 200 million people underserved by traditional banks. With customers in 5,000+ cities and its growth engine on, NIRA processes 15,000 new loan applications daily, growing 12–15% MoM. NIRA is solving a massive problem with technology-first lending, offering instant credit via its mobile app.
As Lead Engineer at NIRA, you will design, build, and scale production systems that power real-time credit decision engines, loan origination, and loan management throughout the loan lifecycle, from repayments to the resizing of credit lines. You'll be instrumental in building the tech stack for credit and other relevant products for NIRA's customers, built on the India Stack, the innovative public financial rails.
This is an in-office role reporting to NIRA's founders in India, while partnering with global engineering teams at Tilt across the US, LatAm, and SE Asia.
The ideal candidate has scaled a product from a few million to tens of millions of users in an individual contributor role. Your work will have enabled decision-making and deployment of product iterations based on purpose-fit A/B testing or other decision systems. Your work will also have led to rapid prototypes and integrations on both the demand and supply sides of the business.
How You'll Make an Impact
Architect & Scale: Own the end-to-end design, development, and deployment of our highest-impact systems across the loan lifecycle, including credit decisioning, loan origination, and loan management.
Calculated risk-taking: Within a small group, prototype, integrate, and deploy new technologies, solutions, and rewrites that allow us to run the ongoing business while also pushing the envelope on redesigning for future scale.
Collaborate with the Credit & Partner Lender team: Build system capability to rapidly iterate in two dimensions: a) addition of new data sources, segments, and credit decision components, and b) addition of new lending partners.
Build internal platform tools: To support and catalyse recurring engineering workloads in the team.
Mentor & Lead: Primarily by example, but also undertake team coaching initiatives aimed at striving for the engineering high road. Partner with other engineering teams operating internationally and in the US to identify transferable learnings and common artefacts that can be utilized across all businesses.
Why You're a Great Fit
Bachelor's degree in Engineering, Computer Science, Mathematics, or a similar quantitative field
6+ years of industry experience in backend development on large-scale, high-traffic production systems
Strong hands-on expertise in databases (open source & managed), APIs, servers, and backend architecture
Communicate effectively about details but also zoom out and convey impact to an executive audience
Don't meet every qualification? If you're excited about this challenge and driven to innovate, we still want to hear from you!
Our Tech Stack
Backend: Modular microservices (serverless deployment)
Languages: Java, Python, NodeJS
Databases: DynamoDB (primary), AWS Glue + Athena (secondary)
Data Science & Decisioning: Python-based models deployed on serverless architecture
Internal Tools: NodeJS for operations workflow management
Note that this is a full-time, in-office role based in Bangalore, India (a few days a week can be WFH)
Our Interview Process
Initial Recruiter Call: A conversation to learn about your experience and what you're looking for in your next role.
Hiring Manager Interview: A deeper discussion about your background and approach to solving challenges.
Skills Panel: Meet with Tilt engineers to discuss your expertise and problem-solving approach through real-world scenarios.
Leadership Conversation: A final conversation with the Co-Founder of NIRA to discuss our mission and how you could contribute to it (and how we can help you achieve your career goals along the way).
Don't meet every qualification? We care about potential over your past. If you're bringing ambition and drive to what we're building, we want to hear from you.
What You'll Get At Tilt
Virtual-first teamwork: The Tilt team is collaborating across 14 countries, 12 time zones, and counting. You'll get started with a WFH office reimbursement.
Competitive pay: We're big on potential, and it's reflected in our competitive compensation packages and generous equity.
Complete support: Find flexible health plans at every premium level, and substantial subsidies that stand up to global standards.
Visibility is yours: You can count on direct exposure to our leadership team — we're a team where good ideas travel quickly.
Paid global onsites: Magic happens IRL: we gather twice yearly to reconnect over shared meals or kayaking adventures. (We've visited Vail, San Diego, and Mexico City, to name a few.)
Impact is recognized: Growth opportunities follow your contributions, not rigid promotion timelines.
The Tilt Way
We're looking for people who chase excellence and impact. Those who stand behind their work, celebrating the wins and learning from the missteps equally. We foster an environment where every voice is valued and mutual respect is non-negotiable — brilliant jerks need not apply. We're in this together, working to expand access to fair credit and prove that people are incredible. When you join us, it's not just another day at the [virtual] office: you're helping millions of hardworking people reach better financial futures. You're pushing ahead in your career? We can get behind that. Join us in building the credit system that people deserve.
Posted 2 days ago
3.0 years
10 - 18 Lacs
Bengaluru, Karnataka, India
On-site
Are you passionate about building scalable backend systems, working with cutting-edge technologies, and solving real-world challenges through clean and efficient code? We are looking for a seasoned Software Development Engineer (SDE 1) with 3+ years of experience to join our dynamic team and drive impactful backend development projects.
Key Responsibilities
Lead backend and full-stack development initiatives using Java (mandatory) and optionally Python
Architect and implement microservices, apply SOLID and OOP principles, and design for performance and scalability
Build and maintain robust APIs (REST, GraphQL) using tools like Swagger and Postman
Work with Spring Boot and Hibernate, and ensure clean, modular code
Manage both SQL (MySQL) and NoSQL (MongoDB) databases with strong ACID compliance
Enforce security standards (SSL, TLS, cookies, headers)
Develop and deploy on AWS (EC2, Lambda, ECS Fargate, S3, DynamoDB, SQS/SNS)
Ensure DevOps best practices: Git, GitHub, CI/CD pipelines, code reviews, JUnit/Mockito testing
Optimize performance with caching (L1/L2), throttling, and load balancing
Must-Have Expertise In
Strong grasp of Data Structures & Algorithms (DSA)
Solid foundation in Object-Oriented Programming (OOP)
Ideal Candidate
3+ years in software development with a strong backend focus
Proactive problem solver and excellent communicator
Experience mentoring junior developers or leading tech discussions
Location: Bengaluru
Skills: REST APIs, Postman, Python, Hibernate, data structures, MySQL, Java, Data Structures & Algorithms (DSA), OOP principles, microservices, SOLID principles, Core Java, MongoDB, security standards (SSL, TLS), NoSQL, load balancing, Spring Boot, AWS (EC2, Lambda, ECS Fargate, S3, DynamoDB, SQS/SNS), throttling, DevOps (Git, GitHub, CI/CD), Mockito, GraphQL, JUnit, caching, SQL, Swagger, AWS
Posted 2 days ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
Join New Era Technology, where People First is at the heart of everything we do. With a global team of over 4,500 professionals, we're committed to creating a workplace where everyone feels valued, empowered, and inspired to grow. Our mission is to securely connect people, places, and information with end-to-end technology solutions at scale. At New Era, you'll join a team-oriented culture that prioritizes your personal and professional development. Work alongside industry-certified experts, access continuous training, and enjoy competitive benefits. Driven by values like Community, Integrity, Agility, and Commitment, we nurture our people to deliver exceptional customer service. If you want to make an impact in a supportive, growth-oriented environment, New Era is the place for you. Apply today and help us shape the future of work—together.
Job Title: Senior Software Developer – Contract (Remote)
Company: New Era Technology
Location: Remote (India-based, working for a client in Malaysia)
Employment Type: 12-Month Contract (Extendable)
Contract Employer: New Era Technology, India
About Us
New Era Technology is a global IT solutions provider with a strong presence across multiple regions, delivering cutting-edge technology services to our clients. We are looking for an experienced Senior Software Developer to work on a prestigious project for our client in Malaysia. This is a remote role for India-based candidates, on a contractual basis with the possibility of extension.
Position Overview
We are seeking a Senior Software Developer with 8–10 years of experience in enterprise-grade software development. The ideal candidate will have deep expertise in Java (8, 17, 21), cloud technologies, microservices architecture, and modern DevOps practices.
Key Responsibilities
Design, develop, and maintain enterprise-grade applications using Java and related technologies.
Implement and manage CI/CD pipelines using AWS CodePipeline, CodeBuild, CodeDeploy, ECR, and CodeArtifact.
Work with GitHub and GitOps for version control and deployment automation.
Develop and deploy microservices using Kubernetes and ArgoCD.
Integrate and optimize services with AWS S3, EC2, Lambda, CloudWatch, and Secrets Manager.
Work with Kafka for event streaming and JWT authentication for secure API access.
Create, manage, and query databases using DynamoDB and Oracle.
Generate reports using Jasper Reports.
Troubleshoot and monitor application performance with Splunk.
Collaborate with cross-functional teams in an agile environment.
Required Skills & Experience
8–10 years of software development experience.
Proficiency in Java 8, 17, and 21.
Strong experience with GitHub, GitOps, and CI/CD tools.
Hands-on expertise in AWS cloud services (CodePipeline, CodeBuild, CodeDeploy, ECR, CodeArtifact, Secrets Manager, S3, EC2, Lambda, CloudWatch).
Strong knowledge of microservices architecture and Kubernetes.
Experience with JWT authentication and ArgoCD.
Good understanding of Kafka, DynamoDB, and Oracle databases.
Familiarity with Linux environments.
Experience with Jasper Reports for reporting solutions.
Strong problem-solving skills and ability to work independently in a remote setup.
Contract Details
Duration: 12 months (extendable)
Employment: On contract rolls of New Era Technology, India
Work Mode: Remote (collaboration with Malaysia-based client)
How to Apply
If you meet the above requirements and are excited about this opportunity, please send your updated CV to:
📧 sravani.karri@neweratech.com
New Era Technology, Inc., and its subsidiaries ("New Era", "we", "us", or "our") in its operating regions worldwide are committed to respecting your privacy and recognize the need for appropriate protection and management of any Personal Data that you may provide us. In this, we are also committed to providing you with a positive experience on our websites and while using our products, services, and solutions ("Solutions"). View our Privacy Policy here: https://www.neweratech.com/us/privacy-policy/
Posted 2 days ago
6.0 - 8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Experience: 6 to 8 years
Job Location: Hyderabad, Indore, Ahmedabad (work from office)
Max Joining Time: 2 months
Must have 2 years of lead experience.
Bachelor's or master's degree in Computer Science, Engineering, Data Science, Mathematics, Statistics, or related fields.
At least 5 years of professional experience in AI/machine learning engineering.
Strong programming skills in Python and Java.
Demonstrated hands-on experience building Retrieval-Augmented Generation (RAG)-based chatbots or similar generative AI applications.
Proficiency in cloud platforms, particularly AWS, including experience with EC2, Lambda, SageMaker, DynamoDB, CloudWatch, and API Gateway.
Solid understanding of AI methodologies, including natural language processing (NLP), vector databases, embedding models, and large language model integrations.
Experience with leading projects or teams, managing technical deliverables, and ensuring high-quality outcomes.
Preferred Qualifications:
AWS certifications (e.g., AWS Solutions Architect, AWS Ma
Posted 2 days ago
9.0 - 12.0 years
0 Lacs
India
On-site
Job Summary
We are looking for a highly skilled Technical Architect with expertise in AWS, Generative AI, AI/ML, and scalable production-level architectures. The ideal candidate should have experience handling multiple clients, leading technical teams, and designing end-to-end cloud-based AI solutions, with overall experience of 9–12 years. This role involves architecting AI/ML/GenAI-driven applications and ensuring best practices in cloud deployment, security, and scalability while collaborating with cross-functional teams.
Key Responsibilities
Technical Leadership & Architecture
Design and implement scalable, secure, and high-performance architectures on AWS for AI/ML applications.
Architect multi-tenant, enterprise-grade AI/ML solutions using AWS services like SageMaker, Bedrock, Lambda, API Gateway, DynamoDB, ECS, S3, OpenSearch, and Step Functions.
Lead full lifecycle development of AI/ML/GenAI solutions—from PoC to production—ensuring reliability and performance.
Define and implement best practices for MLOps, DataOps, and DevOps on AWS.
AI/ML & Generative AI Expertise
Design Conversational AI, RAG (Retrieval-Augmented Generation), and Generative AI architectures using models like Claude (Anthropic), Mistral, Llama, and Titan.
Optimize LLM inference pipelines, embeddings, vector search, and hybrid retrieval strategies for AI-based applications.
Drive ML model training, deployment, and monitoring using AWS SageMaker and AI/ML pipelines.
Cloud & Infrastructure Management
Architect event-driven, serverless, and microservices architectures for AI/ML applications.
Ensure high availability, disaster recovery, and cost optimization in cloud deployments.
Implement IAM, VPC, security best practices, and compliance.
Team & Client Engagement
Lead and mentor a team of ML engineers, Python developers, and cloud engineers.
Collaborate with business stakeholders, product teams, and multiple clients to define requirements and deliver AI/ML/GenAI-driven solutions.
Conduct technical workshops, training sessions, and knowledge-sharing initiatives.
Multi-Client & Business Strategy
Manage multiple client engagements, delivering AI/ML/GenAI solutions tailored to their business needs.
Define AI/ML/GenAI roadmaps, proof-of-concept strategies, and go-to-market AI solutions.
Stay updated on cutting-edge AI advancements and drive innovation in AI/ML offerings.
Key Skills & Technologies
Cloud & DevOps
AWS Services: Bedrock, SageMaker, Lambda, API Gateway, DynamoDB, S3, ECS, Fargate, OpenSearch, RDS
MLOps: SageMaker Pipelines, CI/CD (CodePipeline, GitHub Actions, Terraform, CDK)
Security: IAM, VPC, CloudTrail, GuardDuty, KMS, Cognito
AI/ML & GenAI
LLMs & Generative AI: Bedrock (Claude, Mistral, Titan), OpenAI, Llama
ML Frameworks: TensorFlow, PyTorch, LangChain, Hugging Face
Vector DBs: OpenSearch, Pinecone, FAISS
RAG Pipelines, Prompt Engineering, Fine-tuning
Software Architecture & Scalability
Serverless & Microservices Architecture
API Design & GraphQL
Event-Driven Systems (SNS, SQS, EventBridge, Step Functions)
Performance Optimization & Auto Scaling
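To make the RAG pattern named in this posting concrete, here is a minimal Python sketch: documents are embedded locally, the closest match for a question is retrieved, and a Claude model on Amazon Bedrock answers using that context via the Converse API. The model ID, embedding model, and toy document set are assumptions for illustration, not details from the role.

```python
# Minimal RAG sketch: embed documents, retrieve the best match for a question,
# and ask a Claude model on Amazon Bedrock to answer using that context.
# Model IDs, the embedding model, and the documents are illustrative assumptions.
import boto3
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "Invoices above 10,000 USD require manager approval.",
    "Refunds are processed within 5 business days.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")       # small local embedding model
doc_vecs = embedder.encode(docs, normalize_embeddings=True)

bedrock = boto3.client("bedrock-runtime")                # needs a recent boto3 and AWS credentials

def answer(question: str) -> str:
    # Retrieve: cosine similarity against the normalized document embeddings.
    q_vec = embedder.encode([question], normalize_embeddings=True)[0]
    context = docs[int(np.argmax(doc_vecs @ q_vec))]

    # Generate: pass the retrieved context plus the question to the LLM.
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    resp = bedrock.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",   # hypothetical model choice
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    return resp["output"]["message"]["content"][0]["text"]

print(answer("How long do refunds take?"))
```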
Posted 2 days ago
4.0 - 6.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Experience: 4–6 years
Location: Mumbai (Thane)
Only immediate joiners
Key Responsibilities
Database Engineering & Operations
Own and manage critical components of the database infrastructure across production and non-production environments.
Ensure performance, availability, scalability, and reliability of databases including PostgreSQL, MySQL, and MongoDB.
Drive implementation of best practices in schema design, indexing, query optimization, and database tuning.
Take initiative in root cause analysis and resolution of complex performance and availability issues.
Implement and maintain backup, recovery, and disaster recovery procedures; contribute to testing and continuous improvement of these systems.
Ensure system health through robust monitoring, alerting, and observability using tools such as Prometheus, Grafana, and CloudWatch.
Implement and improve automation for provisioning, scaling, maintenance, and monitoring tasks using scripting (e.g., Python, Bash).
Database Security & Compliance
Enforce database security best practices, including encryption at rest and in transit, IAM/RBAC, and audit logging.
Support data governance and compliance efforts related to SOC 2, ISO 27001, or other regulatory standards.
Collaborate with the security team on regular vulnerability assessments and hardening initiatives.
DevOps & Collaboration
Partner with DevOps and Engineering teams to integrate database operations into CI/CD pipelines using tools like Liquibase, Flyway, or custom scripting.
Participate in infrastructure-as-code workflows (e.g., Terraform) for consistent and scalable DB provisioning and configuration.
Proactively contribute to cross-functional planning, deployments, and system design sessions with engineering and product teams.
Required Skills & Experience
4–6 years of production experience managing relational and NoSQL databases in cloud-native environments (AWS, GCP, or Azure).
Proficiency in:
Relational databases: PostgreSQL and/or MySQL
NoSQL databases: MongoDB (exposure to Cassandra or DynamoDB is a plus)
Deep hands-on experience in performance tuning, query optimization, and troubleshooting live systems.
Strong scripting ability (e.g., Python, Bash) for automation of operational tasks.
Experience in implementing monitoring and alerting for distributed systems using Grafana, Prometheus, or equivalent cloud-native tools.
Understanding of security and compliance principles and how they apply to data systems.
Ability to operate with autonomy while collaborating in fast-paced, cross-functional teams.
Strong analytical, problem-solving, and communication skills.
Nice to Have (Bonus)
Experience with infrastructure-as-code tools (Terraform, Pulumi, etc.) for managing database infrastructure.
Familiarity with Kafka, Airflow, or other data pipeline tools.
Experience working in multi-region or multi-cloud environments with high-availability requirements.
Exposure to analytics databases (e.g., Druid, ClickHouse, BigQuery, Vertica) or search platforms like Elasticsearch.
Participation in on-call rotations and contribution to incident response processes.
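As one small example of the Python-based monitoring automation this role calls for, the sketch below flags long-running PostgreSQL queries by reading pg_stat_activity; the connection settings and the 60-second threshold are illustrative assumptions.

```python
# Sketch of a long-running-query check against PostgreSQL's pg_stat_activity.
# Connection parameters and the alert threshold are illustrative assumptions.
import psycopg2

THRESHOLD_SECONDS = 60  # flag queries running longer than this

conn = psycopg2.connect(host="localhost", dbname="appdb", user="monitor", password="secret")

with conn, conn.cursor() as cur:
    cur.execute(
        """
        SELECT pid, now() - query_start AS runtime, state, left(query, 80)
        FROM pg_stat_activity
        WHERE state <> 'idle'
          AND query_start IS NOT NULL
          AND now() - query_start > make_interval(secs => %s)
        ORDER BY runtime DESC
        """,
        (THRESHOLD_SECONDS,),
    )
    for pid, runtime, state, query in cur.fetchall():
        # In a real setup this would feed Prometheus/Grafana or page on-call;
        # here we simply print the offending queries.
        print(f"pid={pid} runtime={runtime} state={state} query={query!r}")

conn.close()
```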
Posted 2 days ago
9.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Zenwork stands at the forefront of cloud/API-based tax automation and Governance, Risk & Compliance (GRC) technology, pioneering the future of tax tech and GRC automation. Our comprehensive suite of top-tier AI-SaaS solutions serves a vast clientele of over 500,000, providing effortless tax automation through our APIs for major enterprises. In terms of numbers, during the tax year 2022 we reported over $413 billion to the Internal Revenue Service, spanning over 30 million transactions for some of the globe's leading and most forward-thinking firms.
As a rapidly expanding digital compliance AI-SaaS product company, Zenwork boasts a customer base that spans all sizes, partnering with industry giants like Intuit, Bill.com, Xero, and Sage Intacct. Recognized as one of the fastest-growing companies in the U.S. by Inc. magazine and a consecutive Accountex award recipient, Zenwork has garnered significant acclaim. Backed by Spectrum Equity Partners, Zenwork has successfully raised over $163M in funding, maintaining profitability as a late-stage entity with operations in both the U.S. and India.
Location: Zenwork, Financial District, Manikonda, Hyderabad
Experience: 9+ years
Job Type: Full-time
Employment Type: Full-time | Work From Office
About the Role
We are seeking a highly skilled Software Architect to lead the design and development of scalable, high-performance applications for our product-based software company. The ideal candidate should have deep expertise in .NET, .NET Core, SQL, Redis, queuing systems, and AWS, with a strong foundation in modern software design principles, cloud-native solutions, and distributed architectures.
Key Responsibilities
Architect & Design: Develop scalable, high-performance software architectures for enterprise applications.
Technology Leadership: Guide development teams in best practices for .NET, .NET Core, microservices, and cloud-based architectures.
Cloud & Infrastructure: Design cloud-native solutions using AWS (EC2, S3, Lambda, RDS, DynamoDB, etc.).
Database Management: Optimize performance and scalability of SQL Server and Redis.
Performance Optimization: Implement caching (Redis), queuing (Kafka/RabbitMQ/Azure Service Bus), and event-driven architectures.
Security & Compliance: Ensure best practices for security, data protection, and compliance.
Mentorship: Lead engineering teams, conduct code reviews, and enforce architectural standards.
Innovation & Research: Stay updated with emerging technologies and integrate them into system design.
Required Skills & Experience
10+ years of software development experience, with at least 3+ years as a Software Architect.
Strong expertise in .NET, .NET Core, C#, and microservices architecture.
Proficiency in SQL Server, Redis, and NoSQL databases.
Hands-on experience with AWS cloud services.
Expertise in event-driven architectures and queuing systems (Kafka, RabbitMQ, Azure Service Bus, SQS, etc.).
Understanding of DevOps, CI/CD, and containerization (Docker, Kubernetes) is a plus.
Excellent problem-solving and decision-making skills.
Strong leadership and communication skills to drive collaboration across teams.
Why Join Us?
Work in an innovative product-based company solving real-world challenges.
Collaborate with top engineering talent and drive technology decisions.
Competitive compensation, career growth opportunities, and work-life balance.
Posted 2 days ago