14.0 - 20.0 years
30 - 45 Lacs
Noida, Gurugram, Greater Noida
Hybrid
Job Description: Java / Microservices / Mainframe

Skills:
Backend (server-side): Java 17, Spring Boot, Spring MVC; microservice architecture; REST API design
Unit testing: JUnit, Mockito
Performance tuning & profiling: VisualVM, JMeter
Logging & troubleshooting: New Relic, Dynatrace, Kibana, CloudWatch
Relational databases: Oracle, MSSQL, MySQL, Postgres, DB2
NoSQL databases: MongoDB, Redis, DynamoDB
Cloud platforms: AWS (ECS, EC2, CloudFront, CloudWatch, S3, IAM, Route 53, ALB), Redshift, RDS
DevOps: Docker containers, SonarQube, Jenkins, Git, GitHub, GitHub Actions, CI/CD pipelines, Terraform
Mainframe: COBOL, JCL, DB2 (advanced SQL skills), Alight COOL
Mainframe tools: Xpediter, File-AID (Smart-File), DumpMaster (Abend-AID), Omegamon, iStrobe/Strobe

Roles & Responsibilities: Ensure the architectural needs of the value stream are realized, rather than those of just a channel or a team. Apply strong knowledge of design principles to build robust, scalable software solutions across the full stack that require little or no adoption effort from clients. Work with Product Management, Product Owners, and other value-stream stakeholders to keep strategy and execution aligned. Help manage risks and dependencies; troubleshoot and proactively solve problems. Collaborate on and review designs with other architects, participating in the Architecture CoP and ARB. Own performance and non-functional requirements end to end. Acquire broader insight into the product and enterprise architecture. Share knowledge with other engineers in the value stream. Demonstrate proven experience in the stated engineering skills. Aggregate program PI objectives into value-stream PI objectives and publish them for visibility and transparency. Assess the agility level of the program/value stream and help improve it.
Posted 4 days ago
5.0 - 10.0 years
12 - 18 Lacs
Noida
Work from Office
AWS Certified Solutions Architect (with DevOps Expertise) - Immediate Start

Role Summary: iFactFind is seeking a highly skilled, AWS-certified Solutions Architect with strong DevOps expertise to modernise and scale our cloud infrastructure. You will play a pivotal role in architecting a secure, scalable, multi-region SaaS platform while actively supporting deployment, automation, and compliance initiatives. This is a senior-level role, ideal for someone who thrives in a high-impact, collaborative, fast-paced environment.

Key Responsibilities:
Architecture & Infrastructure Design: Audit and optimise our existing AWS infrastructure (EC2, RDS, S3, CloudFront, IAM, etc.). Architect and implement a secure, scalable, multi-region SaaS environment. Maintain clear separation between Dev, Staging, UAT, and Production environments. Design for high availability, disaster recovery, and fault tolerance.
Security & Compliance: Align infrastructure design with SOC 2, GDPR, and ISO 27001 standards. Implement best practices for identity, access, encryption, and monitoring. Act as Information Security Officer supporting compliance initiatives.
DevOps & Automation: Build and optimise CI/CD pipelines using AWS CodePipeline, CodeBuild, and Git. Implement Infrastructure as Code (Terraform or CloudFormation). Set up proactive monitoring and alerts using AWS CloudWatch, GuardDuty, and Cost Explorer. Automate deployments, rollback mechanisms, and disaster recovery.
Documentation & Collaboration: Maintain clear architectural diagrams and onboarding playbooks. Collaborate with development, QA, and product teams. Participate in sprint planning and Agile ceremonies.

Required Skills & Experience: AWS Certified Solutions Architect (Professional preferred). 5+ years of hands-on AWS experience in SaaS/cloud-native environments. Deep knowledge of VPC design, IAM, EC2, RDS/Aurora, S3, and CloudFront. Experience with ECS/EKS, Lambda, and API Gateway. Proficiency with CI/CD and Infrastructure as Code (Terraform, CloudFormation). Understanding of SOC 2, GDPR, and related compliance frameworks. Strong troubleshooting, optimisation, and documentation skills.

Nice to Have: Experience with AWS Config, Security Hub, and Inspector. Background in startup or high-growth SaaS environments. Exposure to cost-optimisation tools and FinOps.

Why Join iFactFind? Shape the infrastructure of a mission-driven, fast-growing SaaS platform. Work with a high-trust, collaborative leadership team. Enjoy flexibility, ownership, and long-term engagement opportunities. Drive meaningful architectural decisions with real-world impact.

Important: Only apply if you are AWS certified and available to start immediately or within the next two weeks.
Posted 4 days ago
1.0 - 3.0 years
8 - 13 Lacs
Pune
Work from Office
Overview We are seeking a DevOps Engineer to join the Critical Start Technologies Private Ltd. team, operating under the Critical Start umbrella, for our India operations. The ideal candidate brings 1–3 years of experience, a strong background in AWS and Terraform, and a passion for infrastructure as code. Candidates should be skilled at writing well-structured Terraform modules, proficient in AWS service provisioning, and familiar with best practices for managing IaaS and PaaS environments. Additional experience with Linux administration, GitHub Actions, container orchestration, and monitoring solutions such as CloudWatch or Prometheus is a plus. Your experience includes writing production code and proficiency in understanding and structuring large projects using Terraform modules. You possess a deep understanding of provisioners and are well-versed in remote state management. We value individuals who are proactive, detail-oriented, and passionate about infrastructure as code. Critical Start is committed to building an inclusive, equitable, and respectful workplace, and we welcome candidates from all backgrounds to apply. Responsibilities As a DevOps Engineer, you will play a key role in maintaining, evolving, and enhancing our existing Terraform-based infrastructure. You'll work across a diverse infrastructure stack to support the delivery of new projects and services to our customers. A core part of your responsibilities will be using Terraform to build modular, maintainable, and scalable infrastructure solutions. You will also take initiative in identifying opportunities to improve performance—focusing on responsiveness, availability, and scalability. Establishing effective monitoring and alerting systems will be essential, as will troubleshooting issues within distributed systems, including throughput, resource utilization, and configuration. 
Our infrastructure stack includes the following components:
Terraform: used for comprehensive infrastructure management.
AWS Fargate: primary platform for hosting most of our applications and services, along with select EC2 instances for specific use cases.
Monitoring and alerts: AWS CloudWatch, SNS, New Relic, and Sentry.io support effective monitoring and timely alerting.
Storage and databases: S3, Postgres (RDS), Memcached, RabbitMQ, and AWS Elasticsearch Service handle our storage and data-processing needs.
Networking and security: VPC, Route 53, IAM, ALB/NLB, Security Groups, and Secrets Manager support a secure and resilient networking environment.
CI/CD pipeline: built using EC2 Image Builder, CodeBuild, and GitHub to streamline software delivery and deployment.

Qualifications
Required Qualifications: 1-3 years of professional experience in a DevOps, Site Reliability Engineering, or Systems Engineering role. Ability to work through ambiguity and uncertainty. A solid understanding of CI/CD pipelines, including their purpose and implementation, and hands-on experience setting them up in real-world environments. Experience working with Terraform for provisioning using modular approaches. A strong troubleshooting, problem-solving, and collaborative mindset. A Bachelor's degree from a recognized institution, or equivalent practical experience that demonstrates your technical capabilities.
Preferred Qualifications: Shell-scripting experience is a strong plus. Strong knowledge of Linux/Unix systems. Familiarity with source-control tools such as Git. Experience with observability tools such as CloudWatch, New Relic, or Sentry.io. Proficiency with Docker and practical experience running containers in AWS environments such as EC2 and Fargate.
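As a hedged illustration of the monitoring-and-alerting work this stack implies, the sketch below assembles a CloudWatch-style alarm definition as a plain Python dict. The service name, threshold, and SNS topic ARN are invented for the example; the dict's keys mirror the parameters that boto3's `put_metric_alarm` call would accept.

```python
def build_cpu_alarm(service_name, topic_arn, threshold=80.0, periods=3):
    """Return a CloudWatch-style alarm definition for high CPU on an
    ECS service. Names and the ARN are illustrative; the keys mirror
    the parameters of boto3's cloudwatch.put_metric_alarm(...)."""
    return {
        "AlarmName": f"{service_name}-cpu-high",
        "Namespace": "AWS/ECS",
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "ServiceName", "Value": service_name}],
        "Statistic": "Average",
        "Period": 60,                  # one-minute datapoints
        "EvaluationPeriods": periods,  # must breach N consecutive periods
        "Threshold": threshold,
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [topic_arn],   # notify an SNS topic on ALARM
    }

alarm = build_cpu_alarm("payments-api", "arn:aws:sns:us-east-1:123456789012:ops-alerts")
print(alarm["AlarmName"])  # payments-api-cpu-high
```

In practice a deployment script would pass this dict (via `**alarm`) to the CloudWatch client, or the same values would live in a Terraform `aws_cloudwatch_metric_alarm` resource.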
Posted 4 days ago
3.0 - 5.0 years
13 - 21 Lacs
Pune
Work from Office
Overview We are seeking a DevOps Engineer II to join the Critical Start Technologies Private Ltd. team, operating under the Critical Start umbrella, for our India operations. The ideal candidate brings 3-5 years of hands-on experience in cloud-native infrastructure, CI/CD automation, and Infrastructure as Code. You bring advanced skills in AWS and Terraform, a strong understanding of scalable systems, and a mindset geared toward security, resilience, and automation-first practices. The ideal candidate has worked in complex environments with microservices, container orchestration, and multi-account AWS structures. You take pride in building robust DevOps pipelines and actively contribute to architectural and operational decisions. Experience leading small initiatives or mentoring junior engineers is a plus. Responsibilities As a DevOps Engineer II, you will be a technical contributor and enabler for scalable infrastructure delivery and automation practices. Your role involves: Owning and improving the infrastructure codebase : maintaining reusable and modular Terraform configurations, setting standards for code structure, and contributing to design documentation. Building and evolving CI/CD pipelines : designing resilient and secure build/deploy pipelines using GitHub Actions, AWS CodePipeline, or equivalent. Monitoring and Observability : developing dashboards and proactive alerting with CloudWatch, Prometheus, or New Relic to ensure high availability and quick recovery. Infrastructure Security and Compliance : implementing IAM best practices, Secrets Manager, least privilege policies, and conducting periodic audits. Optimizing cloud spend and performance through rightsizing, auto-scaling, and cost monitoring strategies. Collaborating closely with development, QA, and security teams to support full software delivery lifecycle from development through production. Participating in incident response and postmortem analysis. 
Qualifications
Required Qualifications: 3-5 years of professional experience in DevOps, SRE, or Cloud Engineering roles. Advanced Terraform experience, including custom module design, remote state management, and backend locking. Deep knowledge of AWS services (VPC, IAM, ECS/Fargate, EC2, RDS, ALB/NLB, S3, CloudWatch, Secrets Manager, etc.). Strong background in Linux systems administration, including networking and performance tuning. Proven expertise in Docker, ECS/EKS, and the secure image lifecycle. Strong scripting and automation skills using Bash, Python, or Go. Experience with GitOps, infrastructure promotion strategies, and artifact management. Familiarity with log aggregation and tracing (e.g., Fluentd, OpenTelemetry, Sentry). Exposure to infrastructure testing frameworks (e.g., Terratest, InSpec). Excellent communication and cross-functional collaboration skills. Bachelor's or Master's degree in Computer Science or a related field.
Preferred Qualifications: Additional scripting experience is a strong plus. Knowledge of security and compliance frameworks such as SOC 2, CIS Benchmarks, or ISO 27001 is a plus. Experience working in regulated environments or with customer-facing infrastructure. Contributions to open-source infrastructure tools or Terraform modules. Exposure to Kubernetes or hybrid cloud platforms. Experience with IaC scanning tools such as Checkov, tfsec, or Bridgecrew.
Posted 4 days ago
5.0 - 10.0 years
6 - 10 Lacs
Bengaluru
Work from Office
Key Responsibilities

WorkSpaces Design & Implementation: Design and deploy scalable Amazon WorkSpaces environments across multiple regions (US East, EMEA, SEA). Architect Personal (persistent) and Pooled (non-persistent) WorkSpaces, supporting up to 5,000 users in production environments and 500 users in non-production environments. Serve as a disaster recovery (DR) layer for existing Azure Virtual Desktop solutions. Manage user profiles, including standard, power, and GPU-based WorkSpaces, with FSLogix profile management for O365 and non-persistent workspaces.

Automation & Image Management: Implement a Bring Your Own License (BYOL) strategy and deploy custom images using Image Builder and Merck eCore base images. Design and enforce naming conventions, tagging strategies, and IAM role assignments for workspace deployments. Automate workspace provisioning processes using the AWS CLI, CloudFormation, and related tooling.

Compliance & Governance: Configure and manage IAM permissions, domain policies, secret management, and eDiscovery settings. Implement Active Directory integration using AWS Managed Microsoft AD or AD Connector. Ensure security compliance through encryption, access-control policies, and audit logging via CloudTrail.

Networking & Connectivity: Configure network settings, including bandwidth, latency, VPCs, subnets, endpoints, and security groups. Set up multi-AZ deployments for high availability and integrate with AWS Direct Connect for secure hybrid connectivity.

Monitoring, Logging & Recovery: Design observability frameworks using CloudWatch, CloudTrail, and internal databases. Define backup, recovery, and rollback procedures for workspace environments. Provide operational support, image optimization, and iterative improvements based on testing feedback.

Required Skills & Experience: 8+ years of experience as an AWS Cloud Engineer, DevOps Engineer, or EUC Specialist. Proven expertise in deploying and managing Amazon WorkSpaces at scale. Hands-on experience with Python, PowerShell, and scripting for infrastructure automation. Proficiency with Infrastructure as Code (IaC) using Terraform, AWS CloudFormation, and the AWS CLI. Strong understanding of networking, security, Active Directory integration, and observability in cloud environments. In-depth knowledge of VDI design principles, including BYOL and hybrid DR configurations.

Preferred Qualifications: AWS certifications (e.g., Solutions Architect, SysOps Administrator, or DevOps Engineer). Experience with Azure Virtual Desktop (AVD) and hybrid VDI solutions. Familiarity with eDiscovery, FSLogix, and enterprise IAM frameworks. Exposure to large-scale EUC and cloud-migration projects.
Posted 4 days ago
4.0 - 6.0 years
4 - 8 Lacs
Kolkata, Hyderabad, Pune
Work from Office
Title: AWS Engineer. Location: Offshore (Remote).

Job Overview: We are seeking a skilled AWS Engineer to support the buildout and scaling of CodePulse, an internal engineering metrics platform. This is a contract position for 36 months, focused on implementing AWS infrastructure for high-throughput metric ingestion and analytics. The ideal candidate has a strong track record of building scalable, event-driven systems using core AWS services such as ECS Fargate, SQS, API Gateway, RDS, S3, and ElastiCache. Experience working on similar data-processing or pipeline-based platforms is highly desirable.

Key Responsibilities: Design, implement, and maintain secure, auto-scaling AWS infrastructure for a container-based microservice application. Deploy ECS (Fargate) workloads that process messages from SQS queues and write results to RDS and S3. Set up and tune CloudWatch alarms, logs, and metrics for system observability and alerting. Configure and optimize infrastructure components: SQS, SNS, API Gateway, RDS (PostgreSQL), ElastiCache (Redis), and S3. Support integration with GitHub and Jira by securely handling API credentials, tokens, and webhook flows. Write and manage infrastructure as code using Terraform or AWS CDK, with support for versioning and team hand-off. Work with internal engineers to troubleshoot issues, optimize performance, and manage deployment workflows.

Qualifications: 4-6 years of hands-on experience as an AWS DevOps or Cloud Engineer. Proven experience deploying and scaling services using ECS Fargate, SQS, API Gateway, RDS (Postgres), and S3. Experience with Redis caching using ElastiCache and familiarity with tuning cache strategies. Strong experience with CloudWatch, including logs, alarms, and dashboard setup. Proficiency with Terraform or AWS CDK for infrastructure automation. Strong understanding of VPCs, IAM roles and policies, TLS, and secure communication patterns. Demonstrated experience building or supporting event-driven microservices in a production setting. Ability to work independently in a remote, distributed team and communicate clearly.

Preferred Qualifications: Experience building internal tools or platforms with metric processing, workflow orchestration, or CI/CD integration. Familiarity with GitHub Actions, Docker, and container image deployment via ECR. Experience optimizing AWS infrastructure for cost efficiency and auto-scaling under burst loads. Prior experience integrating with third-party APIs such as GitHub, Jira, or ServiceNow (optional but a plus).

Location: Pune, Hyderabad, Kolkata, Jaipur, Chandigarh
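The ECS-Fargate-reads-from-SQS pattern described above can be sketched in miniature. This is a hypothetical illustration, not CodePulse code: the message shapes mimic what boto3's `sqs.receive_message` returns, and separating parsed from failed messages supports partial-batch handling (only successfully processed messages should be deleted from the queue; the rest reappear after the visibility timeout).

```python
import json

def process_batch(messages):
    """Split an SQS-style message batch into parsed events and failures.

    `messages` mimics boto3's sqs.receive_message(...)["Messages"].
    Returns (parsed, failed): parsed pairs each receipt handle with its
    decoded JSON body (for later delete_message calls); failed collects
    the IDs of messages whose bodies could not be decoded.
    """
    parsed, failed = [], []
    for msg in messages:
        try:
            body = json.loads(msg["Body"])
            parsed.append((msg["ReceiptHandle"], body))
        except (KeyError, json.JSONDecodeError):
            failed.append(msg.get("MessageId", "unknown"))
    return parsed, failed

# Hypothetical batch: one valid metric event, one malformed body.
batch = [
    {"MessageId": "1", "ReceiptHandle": "rh-1",
     "Body": '{"repo": "codepulse", "commits": 3}'},
    {"MessageId": "2", "ReceiptHandle": "rh-2", "Body": "not-json"},
]
parsed, failed = process_batch(batch)
print(len(parsed), failed)  # 1 ['2']
```

In a real Fargate worker, the loop around this function would poll with long polling (`WaitTimeSeconds`), write parsed results to RDS/S3, and route persistent failures to a dead-letter queue.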
Posted 4 days ago
10.0 - 12.0 years
11 - 16 Lacs
Pune
Work from Office
Project Manager - Oracle Fusion Applications

Job Summary: We are seeking an experienced Project Manager to lead Oracle Fusion Applications (Finance, SCM, HCM), Fusion Tech, and Oracle EBS IT services projects. The ideal candidate will have a strong background in successfully executing end-to-end Oracle Fusion implementation projects and managing cross-functional teams to deliver projects on time, within budget, and with high quality.

Key Responsibilities: Lead the planning, execution, and delivery of Oracle Fusion Applications (Finance, SCM, HCM) and EBS, technology, and support projects. Manage the end-to-end project lifecycle from initiation to go-live and post-production support. Coordinate with stakeholders, business users, and functional and technical teams to ensure project goals are achieved. Develop and maintain project plans, resource-allocation plans, risk-management logs, and issue logs. Ensure adherence to project-management best practices, governance standards, and customer-satisfaction targets. Provide regular project status updates to senior management and clients. Identify and mitigate project risks proactively. Manage change requests and ensure scope, timeline, and costs are controlled. Guide teams in adopting Oracle Cloud implementation methodologies. Maintain high levels of team motivation and performance.

Required Qualifications: 10-12 years of overall experience in IT project management. A minimum of 2 end-to-end Oracle Fusion Application (Finance, SCM, and/or HCM) implementation projects successfully delivered. Strong knowledge of Oracle Fusion Cloud modules and Oracle EBS modules. Project-management certification (PMP, PRINCE2, or equivalent) is mandatory. Strong leadership, communication, and stakeholder-management skills. Ability to work with cross-functional global teams in a dynamic environment. Hands-on experience with project-management tools (e.g., MS Project, Jira, Smartsheet).

Preferred Skills: Experience managing multi-pillar Oracle Cloud projects (FIN + SCM + HCM). Familiarity with Agile and hybrid project methodologies. A previous consulting background is a plus.

Education: Bachelor's degree in Information Technology, Business Administration, Engineering, or a related field. An MBA is a plus (not mandatory).
Posted 4 days ago
5.0 - 10.0 years
9 - 13 Lacs
Hyderabad, Pune
Work from Office
Senior Consultant
Salesforce Developer with a minimum of 5 years of experience.
Candidate must have Salesforce Admin and Developer certifications.
Candidate must have hands-on Apex coding experience.
Candidate should have good analytical skills and debugging/bug-fix experience.
Candidate must have hands-on development experience on Lightning.
Candidate must have Sales and Service Cloud module experience.
Candidate must have code-deployment experience through VSS and Jenkins.
Candidate must have good communication skills to interact with the customer.
Candidate should have Agile development experience.
Posted 4 days ago
5.0 - 10.0 years
10 - 12 Lacs
Pune, Chennai, Jaipur
Work from Office
We are hiring an experienced Python Developer for a contractual role with a leading global digital services client via Awign Expert. The role requires hands-on development experience with AWS, PySpark, Lambda, CloudWatch, SNS, SQS, and CloudFormation. The developer will work on real-time data integrations using various data formats, manage streaming data via Kinesis, and implement autoscaling strategies. The ideal candidate is a strong individual contributor, a collaborative team player, and possesses excellent problem-solving and communication skills. This is a high-impact opportunity for someone passionate about cloud-native Python applications and scalable architectures.

Location: Jaipur, Chennai, Pune, Bengaluru, Hyderabad, Chandigarh, Indore, Nagpur, Gurugram, Mangalore
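One concrete detail behind "managing streaming data via Kinesis" is that the PutRecords API accepts at most 500 records per request, so producers batch their records before sending. Below is a minimal, dependency-free sketch of that batching step; in practice each batch would be handed to boto3's `kinesis.put_records`, and the per-record (1 MiB) and per-request (5 MiB) size limits would also need checking.

```python
def chunk_for_put_records(records, max_records=500):
    """Yield record batches sized for Kinesis PutRecords, which accepts
    at most 500 records per request. Byte-size limits are ignored here
    to keep the sketch short."""
    for i in range(0, len(records), max_records):
        yield records[i:i + max_records]

# 1,200 illustrative records split into request-sized batches.
batches = list(chunk_for_put_records(list(range(1200))))
print([len(b) for b in batches])  # [500, 500, 200]
```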
Posted 5 days ago
7.0 - 12.0 years
10 - 14 Lacs
Navi Mumbai
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: AWS Architecture
Good-to-have skills: AWS Elastic Beanstalk, AWS Lambda Administration, Infrastructure as Code (IaC)
Minimum experience required: 7.5 years
Educational Qualification: 15 years of full-time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. You will oversee the development process and ensure successful project delivery.

Roles & Responsibilities: Expected to be an SME. Collaborate with and manage the team to perform. Responsible for team decisions. Engage with multiple teams and contribute to key decisions. Provide solutions to problems for the immediate team and across multiple teams. Lead the application design and development process. Coordinate with stakeholders to gather requirements. Ensure timely project delivery.

Professional & Technical Skills: Must-have skills: proficiency in AWS Architecture, AWS Elastic Beanstalk, AWS Lambda Administration, and Infrastructure as Code (IaC). Strong understanding of cloud-computing principles. Experience in designing scalable and secure AWS solutions. Knowledge of AWS services and best practices. Hands-on experience in implementing AWS architectures.

Additional Information: The candidate should have a minimum of 7.5 years of experience in AWS Architecture. This position is based at our Mumbai office. 15 years of full-time education is required.
Posted 5 days ago
5.0 - 8.0 years
5 - 9 Lacs
Kolkata
Work from Office
Seeking a results-driven Python Developer with expertise in API development, AWS services, SQL, and raw queries. Must have a basic grasp of backend architecture and system-design principles.

A Python Developer Lead plays a crucial role in the software development lifecycle, combining deep technical expertise in Python with strong leadership and project-management skills. They are responsible for guiding a team of Python developers, ensuring the delivery of high-quality, scalable, and efficient software solutions.

Job Summary: The Python Developer Lead will be responsible for overseeing the design, development, and deployment of robust, scalable, and performant Python applications. This role requires a blend of hands-on coding, architectural design, team leadership, and cross-functional collaboration. The Lead will mentor junior developers, establish best practices, ensure code quality, and contribute significantly to the overall technical strategy and success of our projects.

Key Responsibilities:
Technical Leadership & Architecture: Lead the design and development of complex Python-based systems, ensuring scalability, reliability, and maintainability. Define and enforce coding standards, design patterns, and architectural principles across the team. Conduct code reviews, provide constructive feedback, and ensure adherence to best practices. Stay abreast of emerging technologies, tools, and trends in the Python ecosystem and integrate relevant advancements.
Team Management & Mentorship: Manage and mentor a team of Python developers, fostering their technical growth and professional development. Assign tasks, monitor progress, and provide guidance to ensure efficient project execution. Facilitate knowledge sharing and encourage a collaborative team environment. Participate in the hiring process for new team members.
Software Development & Delivery: Develop, test, and deploy high-quality, efficient, and well-documented Python code for various applications and services. Work with cross-functional teams (Product, UI/UX, QA, DevOps) to translate business requirements into technical specifications and deliver effective solutions. Design and implement RESTful APIs, integrate with third-party services, and manage data pipelines. Troubleshoot and debug complex issues, ensuring low-latency and high-availability applications. Oversee the entire software development lifecycle, from conception to deployment and maintenance.
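When integrating with third-party services, as this role describes, a common reliability technique is retrying failed calls with jittered exponential backoff. The helper below is a hypothetical sketch of the "full jitter" variant (each delay is drawn uniformly from zero up to an exponentially growing, capped ceiling), not code from any particular project:

```python
import random

def backoff_delays(attempts, base=0.5, cap=30.0, seed=None):
    """Compute jittered exponential-backoff delays (in seconds) for
    retrying a flaky third-party API call. "Full jitter": each delay
    is uniform in [0, min(cap, base * 2**attempt)]."""
    rng = random.Random(seed)  # seedable for reproducible tests
    return [rng.uniform(0, min(cap, base * 2 ** n)) for n in range(attempts)]

# Five retry delays; ceilings grow 0.5, 1, 2, 4, 8 seconds.
delays = backoff_delays(5, seed=42)
print(all(0 <= d <= 8 for d in delays))  # True
```

A retry loop would sleep for each delay in turn between attempts, giving the upstream service time to recover while avoiding the synchronized retry storms that fixed delays cause.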
Posted 5 days ago
5.0 - 10.0 years
12 - 15 Lacs
Bengaluru
Hybrid
Job Description
We are seeking a skilled and proactive AWS DevOps Engineer to join our growing team. You will be responsible for managing scalable infrastructure, automating deployments, monitoring environments, and ensuring optimal performance and security across cloud-based systems. If you're passionate about automation, cloud technologies, and system reliability, we'd love to hear from you!

Key Responsibilities
Design, manage, and optimize AWS infrastructure components (EC2, S3, RDS, IAM, VPC, Lambda, etc.). Develop and maintain automation scripts using Bash, Python, or PowerShell for operations, deployments, and monitoring. Implement monitoring and alerting systems using CloudWatch, Datadog, Prometheus, or similar tools. Automate infrastructure provisioning through Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or AWS CDK. Enforce security best practices (IAM policies, encryption, logging, patch management). Manage incident response, conduct root-cause analysis, and resolve production issues efficiently. Support and enhance CI/CD pipelines using tools such as Jenkins, AWS CodePipeline, and GitHub Actions. Monitor and optimize cost, performance, and resource utilization across environments. Ensure robust backup and disaster-recovery strategies for cloud workloads. Participate in on-call rotations and respond to high-priority alerts when necessary.

Nice to Have
AWS certifications: AWS Certified SysOps Administrator or Solutions Architect. Experience with Kubernetes, ECS, or EKS. Familiarity with Ansible, Chef, or other configuration-management tools. Exposure to multi-cloud or hybrid-cloud environments. Experience working in regulated environments (e.g., healthcare, finance, government).

Why Join Us?
The opportunity to work with a high-performing, collaborative DevOps team. Exposure to cutting-edge cloud technologies. A dynamic work culture with a strong emphasis on innovation and continuous learning.
Interested candidates can apply here or send your resume to srinivas.appana@relevancelab.com
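As a small, hypothetical example of the cost-and-utilization scripting this kind of role involves, the function below flags instances whose average CPU falls below a threshold as rightsizing candidates. The instance IDs and metric values are invented; in practice the averages would come from CloudWatch's GetMetricData over a lookback window.

```python
def flag_underutilized(instances, cpu_threshold=10.0):
    """Return instance IDs whose average CPU percent is below the
    threshold -- candidates for rightsizing or termination review.
    `instances` maps instance-id -> average CPU percent (illustrative
    shape; real data would come from CloudWatch)."""
    return sorted(iid for iid, cpu in instances.items() if cpu < cpu_threshold)

# Invented 14-day CPU averages for three instances.
metrics = {"i-0abc": 3.2, "i-0def": 45.0, "i-0123": 8.9}
print(flag_underutilized(metrics))  # ['i-0123', 'i-0abc']
```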
Posted 6 days ago
6.0 - 10.0 years
15 - 20 Lacs
Mumbai
Work from Office
Deploy, configure, and maintain AWS-based infrastructure and services. Monitor and optimize AWS resources to ensure optimal performance and cost-effectiveness. Collaborate with development teams to automate deployment and management processes using AWS containerization or other relevant tools. Implement and enforce security measures to protect AWS resources and data. Troubleshoot and resolve operational issues related to AWS services and infrastructure. Stay updated with the latest AWS services and features, and proactively propose improvements to enhance cloud infrastructure.

Cloud Services: In-depth knowledge of and hands-on experience with various AWS services such as RDS, IAM, and VPC.
Infrastructure as Code (IaC): Proficient in tools such as AWS CloudFormation or Terraform to design, provision, and manage infrastructure resources in a repeatable and automated manner.
Monitoring and Troubleshooting: Skilled in monitoring AWS resources, configuring alerts, and troubleshooting to ensure high availability and performance of applications and services.
Security and Compliance: Understanding of AWS security best practices, including access controls, encryption, and compliance standards such as HIPAA or GDPR.
Networking: Proficient in configuring and managing VPCs, subnets, security groups, routing tables, and VPN connections.
Posted 1 week ago
1.0 - 3.0 years
2 - 4 Lacs
Kolkata
Hybrid
Required Skills
Strong proficiency in Python (3.x) and Django (2.x/3.x/4.x). Hands-on experience with Django REST Framework (DRF). Expertise in relational databases such as PostgreSQL or MySQL. Proficiency with Git and Bitbucket. A solid understanding of RESTful API design and integration. Experience in domain pointing and hosting setup on AWS or GCP. Deployment knowledge on EC2, GCP Compute Engine, etc. SSL certificate installation and configuration. Familiarity with CI/CD pipelines (GitHub Actions, Bitbucket Pipelines, GitLab CI). Basic usage of Docker for development and containerization. Ability to independently troubleshoot server/deployment issues. Experience managing cloud resources such as S3, load balancers, and IAM roles.

Preferred Skills
Experience with Celery and Redis/RabbitMQ for asynchronous task handling. Familiarity with front-end frameworks such as React or Vue.js. Exposure to Cloudflare or similar CDN/DNS tools. Experience with monitoring tools: Prometheus, Grafana, Sentry, or CloudWatch.

Why Join Us?
Work on impactful and modern web solutions. Growth opportunities across technologies and cloud platforms. A collaborative, inclusive, and innovation-friendly work environment. Exposure to challenging and rewarding projects.
Posted 1 week ago
8.0 - 12.0 years
16 - 27 Lacs
Chennai, Bengaluru
Work from Office
Role & Responsibilities
Design, develop, and optimize scalable ETL pipelines using PySpark and AWS data services. Work with structured and semi-structured data from various sources and formats (CSV, JSON, Parquet). Build reusable data transformations using Spark DataFrames, RDDs, and Spark SQL. Implement data validation and quality checks, and ensure schema evolution across data sources. Manage deployment and monitoring of Spark jobs using AWS EMR, Glue, Lambda, and CloudWatch. Collaborate with product owners, architects, and data scientists to deliver robust data workflows. Tune job performance, manage partitioning strategies, and reduce job latency/cost. Contribute to version control, CI/CD processes, and production support.

Preferred Candidate Profile
Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. 5+ years of experience in PySpark, Spark SQL, RDDs, UDFs, and Spark optimization. Strong experience building ETL workflows for large-scale data processing. A solid understanding of the AWS cloud ecosystem, especially S3, EMR, Glue, Lambda, and Athena. Proficiency in Python, SQL, and shell scripting. Experience with data lakes, partitioning strategies, and file formats (e.g., Parquet, ORC). Familiarity with Git, Jenkins, and automated testing frameworks (e.g., PyTest). Experience with Redshift, Snowflake, or other data-warehouse platforms. Exposure to data governance, cataloging, or data-quality frameworks. Terraform or infrastructure-as-code experience. An understanding of Spark internals, DAGs, and caching strategies.
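The data-validation step mentioned above can be illustrated with a tiny quality check. This sketch is plain Python rather than PySpark (in a real pipeline the equivalent logic would run as DataFrame operations such as null-count aggregations), and the field names and records are invented:

```python
def quality_report(rows, required):
    """Minimal data-quality check of the kind an ETL job runs before
    loading: count, per required field, how many records are missing
    that field or carry a null value."""
    violations = {field: 0 for field in required}
    for row in rows:
        for field in required:
            if row.get(field) is None:  # missing key or explicit None
                violations[field] += 1
    return violations

# Invented records: one clean, one with a null amount, one missing id.
rows = [
    {"id": 1, "amount": 10.0},
    {"id": 2, "amount": None},
    {"amount": 5.0},
]
print(quality_report(rows, ["id", "amount"]))  # {'id': 1, 'amount': 1}
```

A pipeline would typically fail fast or quarantine records when such counts exceed an agreed threshold, rather than silently loading bad data downstream.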
Posted 1 week ago
8.0 - 13.0 years
25 - 30 Lacs
Bengaluru
Work from Office
Job Description
Remitly's vision is to transform lives with trusted financial services that transcend borders. Since 2011, we have been tirelessly delivering on our promises to people who send money around the world. Today, we are reimagining global financial services and building products that extend beyond traditional barriers to give customers access to more of the services they need, no matter where they call home. Join over 2,700 employees worldwide who are growing their careers with purpose and connection with our customers while having a positive impact on millions of people around the globe.

About The Role
The Global Regional Expansion team manages the core of Remitly's product: a platform for transferring funds quickly and reliably, ensuring our customers always have a great experience, from send to receive. We ensure the movement of over 12 billion dollars a year through disbursement methods such as bank deposit, cash pickup, mobile wallets, and home delivery.

As a Software Development Engineer III (SDE III) on the Global Regional Expansion team at Remitly, you will play a key role in building and scaling our platform to support remittance expansion into new regions. You will develop scalable, secure, and regionally tailored solutions that address the needs of customers and partners in different markets.

As an experienced engineer, you will design and implement complex systems, collaborate cross-functionally, and lead key technical initiatives to enable seamless expansion. You will also help improve our existing infrastructure, drive engineering excellence, and ensure our systems are scalable and compliant with regional financial regulations.

You Will
Solve technical challenges with a customer-centric mindset.
Design and implement solutions for ambiguous and difficult problems involving multiple technologies (TypeScript, React Native, Java/Kotlin, Go) and systems.
Guide the team's technical strategy while leading, shipping, and operating full-stack features and systems.
Apply solid experience with APIs, microservices, and event-driven architectures.
Drive engineering excellence by delivering well-tested, observable, documented, efficient, scalable, and resilient solutions, improving team processes, and contributing to cross-team technical designs.
Enhance team talent through mentoring, code review, interviewing, and ramping new hires.
Collaborate with other engineers, product managers, designers, and analysts to achieve outstanding customer experiences.

You Have
Strong computer science, web, and native app fundamentals, and knowledge of design patterns.
Interest in regional financial regulations, security, and compliance requirements in fintech or global payments.
5+ years of software development experience.
Experience with large-scale database systems like MySQL.
Experience with Docker, Kubernetes, or other orchestration tools.
Experience with monitoring systems such as Prometheus, VictoriaMetrics, CloudWatch, or New Relic.
Expertise in one of the following programming languages: TypeScript, JavaScript, Kotlin, Java, Go.
Excellent collaboration skills.

Our Benefits
Medical, Dental & Vision Insurance
Life Insurance
Mental Health & Family Forming benefits
Learning & development benefits

We are committed to nondiscrimination across our global organization and in all of our business operations.
Employment is determined based upon personal capabilities and qualifications without discrimination on the basis of race, creed, color, religion, sex, gender identification and expression, marital status, military status or status as an honorably discharged veteran, pregnancy (including a woman's potential to get pregnant, pregnancy-related conditions, and childbearing), sexual orientation, age (40 and over), national origin, ancestry, citizenship or immigration status, physical, mental, or sensory disability (including the use of a trained dog guide or service animal), HIV/AIDS or hepatitis C status, genetic information, status as an actual or perceived victim of domestic violence, sexual assault, or stalking, or any other protected class as established by law. Remitly is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
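The event-driven architecture experience this role calls for usually hinges on idempotent consumers: message brokers can redeliver events, so each event carries an id and a handler applies it at most once. A minimal sketch with invented event shapes and ids:

```python
# Hedged sketch of idempotent event handling, a staple of event-driven
# systems: processed event ids are recorded so redelivered events are
# skipped. Event shape, ids, and amounts are invented for illustration.
processed_ids = set()
balance = 0

def handle_transfer_event(event: dict) -> bool:
    """Apply the event once; return False if it was a duplicate."""
    global balance
    if event["id"] in processed_ids:
        return False                 # duplicate delivery -- ignore
    processed_ids.add(event["id"])
    balance += event["amount"]
    return True

events = [
    {"id": "evt-1", "amount": 100},
    {"id": "evt-2", "amount": 50},
    {"id": "evt-1", "amount": 100},  # redelivered by the broker
]
for e in events:
    handle_transfer_event(e)
print(balance)  # → 150, not 250: the redelivery was ignored
```

In production the processed-id set would live in durable storage (a database or cache) rather than process memory.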
Posted 1 week ago
2.0 - 5.0 years
5 - 8 Lacs
Bengaluru
Work from Office
C# .NET Development
AWS Cloud Services (EC2, RDS, CloudWatch)
DevOps & CI/CD: Jenkins, Git, GitHub
API Integration and RESTful Services
Posted 1 week ago
5.0 - 10.0 years
11 - 12 Lacs
Hyderabad
Work from Office
We are seeking a highly skilled DevOps Engineer to join our dynamic development team. In this role, you will be responsible for designing, developing, and maintaining both frontend and backend components of our applications using DevOps practices and associated technologies. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performing software solutions that meet our business needs. The ideal candidate will have a strong background in DevOps, experience with modern frontend frameworks, and a passion for full-stack development.

Requirements:
Bachelor's degree in Computer Science Engineering or a related field.
5 to 10+ years of experience in full-stack development, with a strong focus on DevOps.

DevOps with AWS Data Engineer - Roles & Responsibilities:
Use AWS services such as EC2, VPC, S3, IAM, RDS, and Route 53.
Automate infrastructure using Infrastructure as Code (IaC) tools like Terraform or AWS CloudFormation.
Build and maintain CI/CD pipelines using tools such as AWS CodePipeline, Jenkins, or GitLab CI/CD.
Automate build, test, and deployment processes for Java applications.
Use Ansible, Chef, or AWS Systems Manager for managing configurations across environments.
Containerize Java apps using Docker.
Deploy and manage containers using Amazon ECS, EKS (Kubernetes), or Fargate.
Set up monitoring and logging using Amazon CloudWatch, Prometheus + Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), and AWS X-Ray for distributed tracing.
Manage access with IAM roles and policies.
Use AWS Secrets Manager / Parameter Store for managing credentials.
Enforce security best practices, encryption, and audits.
Automate backups for databases and services using AWS Backup, RDS Snapshots, and S3 lifecycle rules.
Implement Disaster Recovery (DR) strategies.
Work closely with development teams to integrate DevOps practices.
Document pipelines, architecture, and troubleshooting runbooks.
Monitor and optimize AWS resource usage.
Use AWS Cost Explorer, Budgets, and Savings Plans.

Must-Have Skills:
Experience working on Linux-based infrastructure.
Excellent understanding of Ruby, Python, Perl, and Java.
Configuring and managing databases such as MySQL and MongoDB.
Excellent troubleshooting skills.
Selecting and deploying appropriate CI/CD tools.
Working knowledge of various tools, open-source technologies, and cloud services.
Awareness of critical concepts in DevOps and Agile principles.
Managing stakeholders and external interfaces.
Setting up tools and required infrastructure.
Defining and setting development, testing, release, update, and support processes for DevOps operation.
Technical skills to review, verify, and validate the software code developed in the project.

Interview Mode: F2F for candidates residing in Hyderabad; Zoom for other states
Location: 43/A, MLA Colony, Road No. 12, Banjara Hills, 500034
Time: 2 - 4 pm
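The S3 lifecycle rules mentioned for backup automation take a well-defined dict shape when applied through boto3. A sketch of one such rule; the bucket name, prefix, and retention periods are illustrative, not taken from the posting:

```python
# Sketch of an S3 lifecycle rule in the dict shape that boto3's
# put_bucket_lifecycle_configuration accepts; the prefix and day
# counts are illustrative choices.
lifecycle_config = {
    "Rules": [
        {
            "ID": "archive-then-expire-backups",
            "Filter": {"Prefix": "backups/"},
            "Status": "Enabled",
            # Move objects to Glacier after 30 days...
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
            # ...and delete them after a year.
            "Expiration": {"Days": 365},
        }
    ]
}

# With boto3 this would be applied roughly as:
#   s3 = boto3.client("s3")
#   s3.put_bucket_lifecycle_configuration(
#       Bucket="my-backup-bucket", LifecycleConfiguration=lifecycle_config)
print(lifecycle_config["Rules"][0]["ID"])
```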
Posted 1 week ago
10.0 - 15.0 years
15 - 24 Lacs
Visakhapatnam
Work from Office
AWS Load Tester Architect (AWS Certified) to lead performance testing on EKS/serverless apps using JMeter and Gatling. Analyze metrics via Grafana/CloudWatch, run chaos tests, script in Python/NodeJS, and ensure scalable, resilient AWS systems.
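Load-test results from tools like JMeter or Gatling are usually summarized as percentile latencies. A stdlib sketch using the simple nearest-rank percentile definition; real tools interpolate slightly differently, and the sample values below are made up:

```python
# Illustrative summary of load-test latencies into p50/p95/p99 figures.
# Uses nearest-rank percentiles; the samples are fabricated.

def percentile(values: list, pct: int):
    """Nearest-rank percentile: the value at rank ceil(pct/100 * n)."""
    ranked = sorted(values)
    rank = -(-pct * len(ranked) // 100)  # ceiling division, no float error
    return ranked[max(rank - 1, 0)]

samples_ms = [12, 15, 14, 13, 200, 16, 15, 14, 13, 15,
              14, 16, 13, 15, 180, 14, 15, 13, 16, 14]

for pct in (50, 95, 99):
    print(f"p{pct} = {percentile(samples_ms, pct)} ms")
```

The two slow outliers (180 ms, 200 ms) barely move p50 but dominate p95/p99, which is why tail percentiles, not averages, drive SLO analysis.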
Posted 1 week ago
5.0 - 8.0 years
66 - 108 Lacs
Kolkata
Work from Office
Seeking a results-driven Python Developer with expertise in API development, AWS services, SQL, and raw queries. Must have a basic grasp of backend architecture and system design principles.
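The "raw queries" skill above comes down to executing parameterized SQL safely. A minimal sketch using stdlib sqlite3 as a stand-in for the production database; the schema and data are illustrative:

```python
# Hedged sketch of executing a raw, parameterized SQL query. sqlite3
# stands in for the production engine; the users table is invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, active INTEGER)")
conn.executemany("INSERT INTO users (name, active) VALUES (?, ?)",
                 [("asha", 1), ("ravi", 0), ("meera", 1)])

# Placeholders (?) keep user input out of the SQL text -- the key
# defence against SQL injection when writing raw queries.
rows = conn.execute(
    "SELECT name FROM users WHERE active = ? ORDER BY name", (1,)
).fetchall()
print(rows)  # → [('asha',), ('meera',)]
```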
Posted 1 week ago
4.0 - 9.0 years
5 - 9 Lacs
Bengaluru
Work from Office
We're building the technological foundation for our company's Semantic Layer: a common data language powered by Anzo / Altair Graph Studio. As a Senior Software Engineer, you'll play a critical role in setting up and managing this platform on AWS EKS, enabling scalable, secure, and governed access to knowledge graphs, parallel processing engines, and ontologies across multiple domains, including highly sensitive ones like clinical trials. You'll help design and implement a multi-tenant, cost-aware, access-controlled infrastructure that supports internal data product teams in securely building and using connected knowledge graphs.

Key Responsibilities
Implement a Semantic Layer on Anzo / Altair Graph Studio and Anzo Graph Lakehouse in a Kubernetes or ECS environment (EKS / ECS)
Develop and manage Infrastructure as Code using Terraform and configuration management via Ansible
Integrate platform authentication and authorization with Microsoft Entra ID (Azure AD)
Design and implement multi-tenant infrastructure patterns that ensure domain-level isolation and secure data access
Build mechanisms for cost attribution and usage visibility per domain and use-case team
Implement fine-grained access control, data governance, and monitoring for domains with varying sensitivity (e.g., clinical trials)
Automate deployment pipelines and environment provisioning for dev, test, and production environments
Collaborate with platform architects, domain engineers, and data governance teams to curate and standardize ontologies

Minimum Requirements
4 - 9 years of experience in Software / Platform Engineering, DevOps, or Cloud Infrastructure roles
Proficiency in Python for automation, tooling, or API integration
Hands-on experience with AWS EKS / ECS and associated services (IAM, S3, CloudWatch, etc.)
Strong skills in Terraform / Ansible / IaC for infrastructure provisioning and configuration
Familiarity with RBAC, OIDC, and Microsoft Entra ID integration for enterprise IAM
Understanding of Kubernetes multi-tenancy and security best practices
Experience building secure and scalable platforms supporting multiple teams or domains

Preferred Qualifications
Experience deploying or managing Anzo, Altair Graph Studio, or other knowledge graph / semantic layer tools
Familiarity with RDF, SPARQL, or ontologies in an enterprise context
Knowledge of data governance, metadata management, or compliance frameworks
Exposure to cost management tools like AWS Cost Explorer / Kubecost or custom chargeback systems

Why Join Us
Be part of a cutting-edge initiative shaping enterprise-wide data access and semantics
Work in a cross-functional, highly collaborative team focused on responsible innovation
Influence the architecture and strategy of a foundational platform from the ground up
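The per-domain cost attribution this role builds is, at its core, grouping tagged usage records by tenant. A sketch of that aggregation as a chargeback report over Cost Explorer-style data might do it; the record shape, domains, and amounts are invented:

```python
# Illustrative per-domain cost attribution: group usage records by a
# tenant tag and total the cost. Data is fabricated for the example.
from collections import defaultdict

usage_records = [
    {"domain": "clinical-trials", "service": "eks", "cost_usd": 410.0},
    {"domain": "clinical-trials", "service": "s3", "cost_usd": 35.5},
    {"domain": "discovery", "service": "eks", "cost_usd": 120.0},
]

costs_by_domain = defaultdict(float)
for record in usage_records:
    costs_by_domain[record["domain"]] += record["cost_usd"]

for domain, cost in sorted(costs_by_domain.items()):
    print(f"{domain}: ${cost:,.2f}")
```

In practice the tenant tag would come from resource tags or Kubernetes namespace labels rather than an inline list.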
Posted 1 week ago
5.0 - 10.0 years
14 - 18 Lacs
Bengaluru
Work from Office
As a Senior IT Systems & DevOps Engineer, you will be responsible for IT systems, incident, change, and release management. You will ensure the seamless deployment of new releases, maintaining system stability, security, and compliance in a regulated environment. You will collaborate with cross-functional teams, manage stakeholder communications, and drive automation and optimization across cloud-based platforms supporting drug discovery and development.

Key Responsibilities:
Oversee incident, change, and release management for IT systems, ensuring compliance with regulatory standards.
Manage Azure AD (Entra ID) for identity and access management, including authentication flows for internal and third-party services.
Implement and advocate DevOps best practices, including CI/CD, automation, and observability across platforms.
Collaborate with cross-functional teams to influence architectural decisions and align infrastructure with business goals.
Ensure compliance and security within a regulated (GxP) environment, implementing RBAC, secrets management, and monitoring frameworks.
Design, develop, test, and document business requirements related to IT systems and infrastructure.
Coordinate and perform system management tasks, ensuring alignment with quality and compliance standards.
Autonomously review and clarify business requirements, create technical designs, and align stakeholders.
Manage and optimize cloud infrastructure (AWS & Azure), including cost management and performance tuning.
Deploy and manage containerized applications using Docker, Kubernetes, Helm, and ArgoCD (GitOps).
Implement Infrastructure as Code (IaC) using Terraform and AWS CloudFormation.
Automate workflows and integrations using Python and configuration management tools like Ansible.
Ensure observability, including logging, monitoring, and tracing with tools like Prometheus, Grafana, the ELK Stack, and AWS-native solutions.
Participate in compliance activities, including audits, patch management, and cybersecurity initiatives.
Provide technical guidance and support for IT systems, assisting users and resolving incidents efficiently.

Your Skills & Experience:
Must-Have:
5+ years of experience in IT systems management, DevOps, cloud infrastructure, and automation.
Strong expertise in Change, Release, Incident, and Problem Management.
Hands-on experience with Azure DevOps (project configurations, repositories, pipelines, environments).
Strong knowledge of AWS services (VPC, IAM, STS, EKS, RDS, EC2, ECS, Route53, CloudWatch, CloudTrail, Secrets Manager, S3, API Gateway, Lambda, MWAA).
Experience with Linux, Windows servers, and Oracle PL/SQL.
Strong understanding of IT landscapes and a willingness to work on diverse IT systems.
Hands-on experience with containerization and Kubernetes (Docker, Helm, ArgoCD).
Proficiency in Python scripting for automation and workflow integration.
Experience with Infrastructure as Code (IaC) using Terraform and AWS CloudFormation.
Strong experience in observability, security, and compliance frameworks, including RBAC, secrets management, and monitoring tools.
Global stakeholder management experience with excellent English communication skills (written & oral).

Good to Have:
Experience in a regulated industry (GxP, Pharma, Life Sciences).
Familiarity with Agile ways of working.
Knowledge of Next.js, Storybook, Tailwind, and TypeScript for front-end development.
Experience with PostgreSQL and OpenSearch/Elasticsearch for data management.
Familiarity with R and SAS for data analysis and statistical modeling.
Understanding of AWS billing practices and cost optimization tools.

Why Join Us?
Work in a high-impact role contributing to cutting-edge R&D in drug discovery and development.
Be part of a multicultural, agile team with high autonomy in decision-making.
Exposure to a diverse tech stack combining Azure, AWS, Kubernetes, Python, and CI/CD tools.
Opportunities for career growth and skill development in cloud computing, security, and automation.
Work in a collaborative and innovative environment with global teams in the US, Europe, and India.
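The RBAC enforcement this role implements boils down to mapping roles to permission sets and checking membership at access time. A minimal sketch in plain Python; the role and permission names are invented:

```python
# Illustrative RBAC check: roles carry permission sets, and access is
# granted only if some role of the user carries the permission.
ROLE_PERMISSIONS = {
    "viewer": {"read"},
    "operator": {"read", "deploy"},
    "admin": {"read", "deploy", "manage-secrets"},
}

def is_allowed(user_roles: list, permission: str) -> bool:
    """True if any of the user's roles grants the permission."""
    return any(permission in ROLE_PERMISSIONS.get(r, set())
               for r in user_roles)

print(is_allowed(["operator"], "deploy"))        # → True
print(is_allowed(["viewer"], "manage-secrets"))  # → False
```

Real systems externalize this mapping (e.g., to Entra ID groups or Kubernetes Role/RoleBinding objects), but the evaluation logic is the same.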
Posted 1 week ago
4.0 - 7.0 years
12 - 16 Lacs
Bengaluru
Work from Office
As a Senior Cloud Platform Back-End Engineer with a strong background in AWS tools and services, you will join the Data & AI Solutions - Engineering team in our Healthcare R&D business. Your expertise will enhance the development and continuous improvement of a critical AWS-Cloud-based analytics platform, supporting our R&D efforts in drug discovery. This role involves implementing the technical roadmap and maintaining existing functionalities. You will adapt to evolving technologies, manage infrastructure and security, design and implement new features, and oversee seamless deployment of updates. Additionally, you will implement strategies for data archival and optimize data lifecycle processes for efficient storage management in compliance with regulations. Join a multicultural team working in agile methodologies with high autonomy. The role requires office presence at our Bangalore location.

Who You Are:
University degree in Computer Science, Engineering, or a related field
Proficiency in Python, especially with the boto3 library for interacting with AWS services programmatically, and for infrastructure as code with AWS CDK and AWS Lambda
Experience with API development and management: designing, developing, and managing APIs using AWS API Gateway and other relevant API frameworks
Strong understanding of AWS security best practices, IAM policies, encryption, auditing, and regulatory compliance (e.g., GDPR)
Experience with application performance monitoring and tracing solutions like AWS CloudWatch, X-Ray, and OpenTelemetry
Proficiency in navigating and utilizing various AWS tools and services
System design skills in a cloud environment
Experience with SQL and data integration into Snowflake
Familiarity with Microsoft Entra ID for identity and access management
Willingness to work in a multinational environment and cross-functional teams distributed between the US, Europe (mostly Germany), and India
Sense of accountability and ownership; fast learner
Fluency in English and excellent communication skills
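Programmatic AWS callers routinely wrap API calls in retry-with-exponential-backoff to survive throttling (boto3 also ships built-in retry modes). A sketch of the pattern with a stand-in function instead of a real AWS call:

```python
# Hedged sketch of retry with exponential backoff, the pattern applied
# around throttled AWS API calls. RuntimeError stands in for a
# throttling exception; delays are shortened for demonstration.
import time

def call_with_backoff(fn, max_attempts=5, base_delay=0.01):
    """Retry fn with doubling delays; re-raise after max_attempts."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except RuntimeError:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)  # 10 ms, 20 ms, 40 ms...

attempts = {"n": 0}

def flaky():
    """Stand-in API call that fails twice, then succeeds."""
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("Throttled")
    return "ok"

result = call_with_backoff(flaky)
print(result)  # → ok, on the third attempt
```

Production code would catch the SDK's specific throttling error and add jitter to the delay.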
Posted 1 week ago
8.0 - 13.0 years
15 - 30 Lacs
Pune
Work from Office
Job Description
SecurityHQ is a global cybersecurity company. Our specialist teams design, engineer, and manage systems that promote clarity and an inclusive culture of trust, build momentum around improving security posture, and increase the value of cybersecurity investment. Around the clock, 365 days per year, our customers are never alone. We're SecurityHQ. We're focused on engineering cybersecurity, by design.

Responsibilities
Lead response to complex, high-impact security incidents in AWS, including unauthorized access, data breaches, malware infections, DDoS attacks, phishing, APTs, zero-day exploits, and cloud misconfigurations.
Perform in-depth analysis of security incidents, including advanced log analysis, digital forensic investigation, and root cause analysis.
Develop and implement containment, eradication, and recovery plans for complex security incidents, minimizing disruption and improving security posture.
Coordinate with internal and external stakeholders during incident response activities.
Document incident details, analysis findings, and remediation actions, including detailed forensic reports and security posture assessments.
Identify and recommend security improvements to prevent future incidents and enhance cloud security posture, including:
AWS security best practices
Security tool implementation and configuration (with a focus on CSPM tools)
Vulnerability management
Security awareness training
Threat hunting strategies
Security architecture enhancements
CSPM implementation and optimization
Develop and maintain AWS-specific incident response plans, playbooks, and procedures, emphasizing automation, orchestration, and continuous security posture improvement.
Stay current on cloud security, digital forensics, and cloud security posture management.
Mentor junior security analysts in incident response and security posture management.
Participate in the on-call rotation, providing expert-level support and guidance on security posture.
Develop and deliver training on incident response, forensic best practices, and cloud security posture management.
Conduct proactive threat hunting and security posture assessments.
Contribute to the development of security tools and automation to improve incident response efficiency, effectiveness, and security posture.

Essential Skills
Expert-level understanding of AWS services, including:
EC2, S3, RDS, VPC, Lambda
CloudTrail, CloudWatch, Config, Security Hub, GuardDuty
IAM, KMS
AWS Organizations, AWS Control Tower
Extensive experience with SIEM systems (e.g., Datadog, QRadar, Azure Sentinel) in a cloud environment, with a focus on security posture monitoring.
Mastery of log analysis, network analysis, and digital forensic investigation techniques, including experience with specialized forensic tools (e.g., EnCase, FTK, Autopsy, Velociraptor) and CSPM tools.
Strong experience with scripting (e.g., Python, PowerShell) for automation, analysis, tool development, and security posture management.
Deep familiarity with security tools and technologies, including:
IDS/IPS
EDR
Vulnerability scanners
Firewalls
Network forensics tools
CSPM tools
Excellent communication and interpersonal skills, with the ability to convey highly technical information to technical and non-technical audiences, including executive leadership and legal counsel, regarding incident response and security posture.
Exceptional problem-solving and analytical skills; ability to remain calm, focused, and decisive under high-pressure situations, including those involving significant security posture deficiencies.
Ability to work independently, lead a team, and collaborate effectively to improve the organization's security posture.
Education Requirements & Experience
Master's degree in Computer Science, Cybersecurity, or a related field.
AWS Security certifications (e.g., AWS Certified Security - Specialty).
Relevant security certifications (e.g., CISSP, GCIH, GCIA, GREM, GNFA, OSCP).
Experience leading incident response teams and security posture improvement initiatives.
Experience with cloud automation and orchestration (e.g., AWS Systems Manager, Lambda) for incident response and security posture management.
Knowledge of DevSecOps principles and practices, including security integration into CI/CD pipelines and infrastructure as code (IaC) security.
Experience with container security (e.g., Docker, Kubernetes) in AWS, including forensic analysis and security posture assessment.
Experience with reverse engineering and malware analysis, focused on identifying threats that impact cloud security posture.
Strong understanding of legal and regulatory issues related to digital forensics, incident response, and cloud security posture (e.g., data privacy, chain of custody, compliance requirements).
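A common first pass in the log analysis this role performs is triaging CloudTrail-style events for repeated authorization failures. A stdlib sketch over fabricated events; CloudTrail records carry fields like `eventName`, `errorCode`, and `userIdentity.arn`, but the values below are invented:

```python
# Illustrative triage over CloudTrail-style events: surface principals
# with repeated AccessDenied errors, a typical first step when hunting
# unauthorized-access activity. All events below are fabricated.
import json
from collections import Counter

raw_events = json.dumps([
    {"eventName": "GetObject", "errorCode": "AccessDenied",
     "userIdentity": {"arn": "arn:aws:iam::111122223333:user/intern"}},
    {"eventName": "PutObject",
     "userIdentity": {"arn": "arn:aws:iam::111122223333:user/app"}},
    {"eventName": "AssumeRole", "errorCode": "AccessDenied",
     "userIdentity": {"arn": "arn:aws:iam::111122223333:user/intern"}},
])

# Successful calls carry no errorCode, so .get() filters them out.
denied = Counter(
    e["userIdentity"]["arn"]
    for e in json.loads(raw_events)
    if e.get("errorCode") == "AccessDenied"
)
for arn, count in denied.most_common():
    print(f"{count:3d}  {arn}")
```

At scale this query would run in Athena or a SIEM rather than in-process, but the grouping logic is the same.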
Posted 1 week ago
5.0 - 6.0 years
11 - 12 Lacs
Hyderabad
Work from Office
We are seeking a highly skilled DevOps Engineer to join our dynamic development team. In this role, you will be responsible for designing, developing, and maintaining both frontend and backend components of our applications using DevOps practices and associated technologies. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performing software solutions that meet our business needs. The ideal candidate will have a strong background in DevOps, experience with modern frontend frameworks, and a passion for full-stack development.

Requirements:
Bachelor's degree in Computer Science Engineering or a related field.
5 to 6+ years of experience in full-stack development, with a strong focus on DevOps.

DevOps with AWS Data Engineer - Roles & Responsibilities:
Use AWS services such as EC2, VPC, S3, IAM, RDS, and Route 53.
Automate infrastructure using Infrastructure as Code (IaC) tools like Terraform or AWS CloudFormation.
Build and maintain CI/CD pipelines using tools such as AWS CodePipeline, Jenkins, or GitLab CI/CD.
Automate build, test, and deployment processes for Java applications.
Use Ansible, Chef, or AWS Systems Manager for managing configurations across environments.
Containerize Java apps using Docker.
Deploy and manage containers using Amazon ECS, EKS (Kubernetes), or Fargate.
Set up monitoring and logging using Amazon CloudWatch, Prometheus + Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), and AWS X-Ray for distributed tracing.
Manage access with IAM roles and policies.
Use AWS Secrets Manager / Parameter Store for managing credentials.
Enforce security best practices, encryption, and audits.
Automate backups for databases and services using AWS Backup, RDS Snapshots, and S3 lifecycle rules.
Implement Disaster Recovery (DR) strategies.
Work closely with development teams to integrate DevOps practices.
Document pipelines, architecture, and troubleshooting runbooks.
Monitor and optimize AWS resource usage.
Use AWS Cost Explorer, Budgets, and Savings Plans.

Must-Have Skills:
Experience working on Linux-based infrastructure.
Excellent understanding of Ruby, Python, Perl, and Java.
Configuring and managing databases such as MySQL and MongoDB.
Excellent troubleshooting skills.
Selecting and deploying appropriate CI/CD tools.
Working knowledge of various tools, open-source technologies, and cloud services.
Awareness of critical concepts in DevOps and Agile principles.
Managing stakeholders and external interfaces.
Setting up tools and required infrastructure.
Defining and setting development, testing, release, update, and support processes for DevOps operation.
Technical skills to review, verify, and validate the software code developed in the project.

Interview Mode: F2F for candidates residing in Hyderabad; Zoom for other states
Location: 43/A, MLA Colony, Road No. 12, Banjara Hills, 500034
Time: 2 - 4 pm
Posted 1 week ago