0.0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
Req ID: 337682 NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Cloud Engineer (AWS) to join our team in Bangalore, Karnataka (IN-KA), India (IN).
Job Duties:
* Design and implement AWS infrastructure to support ETL and database migration workloads.
* Set up and optimize AWS Glue, Amazon RDS (PostgreSQL), and related services.
* Ensure secure, scalable, and cost-effective architecture for data migration and processing.
* Collaborate with Data Engineers to ensure smooth integration between ETL tools and AWS infrastructure.
* Automate deployments using AWS CloudFormation/Terraform.
* Integrate AWS workloads with existing Unix systems.
* Implement monitoring, logging, and alerting solutions (CloudWatch, AWS Config).
Minimum Skills Required:
* Proven experience with AWS (Glue, RDS PostgreSQL, S3, IAM, CloudFormation/Terraform).
* Strong Unix/Linux administration skills.
* Experience with networking, IAM policies, and security best practices in AWS.
* Familiarity with ETL pipelines and data migration processes.
* Experience with CI/CD pipelines (CodePipeline, Jenkins, or similar).
About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world.
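As an illustration of the CloudFormation-based automation these duties describe, here is a minimal sketch in Python that renders a template provisioning an S3 bucket and a Glue ETL job. The bucket, job, and exported role names are hypothetical, and a real stack would add IAM, networking, and RDS resources alongside these.

```python
import json

def glue_stack_template(bucket_name: str, job_name: str, script_path: str) -> str:
    """Render a minimal CloudFormation template (as JSON) declaring an S3
    bucket and an AWS Glue job whose ETL script lives in that bucket."""
    template = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "DataBucket": {
                "Type": "AWS::S3::Bucket",
                "Properties": {"BucketName": bucket_name},
            },
            "EtlJob": {
                "Type": "AWS::Glue::Job",
                "Properties": {
                    "Name": job_name,
                    # Assumes another stack exports the Glue service role ARN
                    "Role": {"Fn::ImportValue": "GlueServiceRoleArn"},
                    "Command": {
                        "Name": "glueetl",
                        "ScriptLocation": f"s3://{bucket_name}/{script_path}",
                    },
                },
            },
        },
    }
    return json.dumps(template, indent=2)
```

Generating templates programmatically like this keeps environment-specific names out of hand-edited files and makes the output easy to validate before deployment.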
NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com
Whenever possible, we hire locally to NTT DATA offices or client sites. This ensures we can provide timely and effective support tailored to each client's needs. While many positions offer remote or hybrid work options, these arrangements are subject to change based on client requirements. For employees near an NTT DATA office or client site, in-office attendance may be required for meetings or events, depending on business needs. At NTT DATA, we are committed to staying flexible and meeting the evolving needs of both our clients and employees. NTT DATA recruiters will never ask for payment or banking information and will only use @nttdata.com and @talent.nttdataservices.com email addresses. If you are requested to provide payment or disclose banking information, please submit a contact us form: https://us.nttdata.com/en/contact-us . NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us . This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here. If you'd like more information on your EEO rights under the law, please click here. For Pay Transparency information, please click here.
Posted 2 days ago
6.0 - 10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
* Years of Experience: 6 - 10 years
* Location: Chennai/Coimbatore/Bangalore/Hyderabad
Requirement:
1. Cloud (Mandatory): Proven technical experience with AWS, scripting, and automation
* Hands-on knowledge of services and implementation such as Landing Zone, Control Tower, Transit Gateway, CloudFront, IAM, VPC, EC2, S3, Lambda, Load Balancers, Auto Scaling, etc.
* Experience in scripting languages such as Python, Bash, Ruby, Groovy, Java, JavaScript
2. Automation (Mandatory): Hands-on experience with Infrastructure as Code (IaC) automation and Configuration Management tools such as:
* Terraform, CloudFormation, Azure ARM, Bicep, Ansible, Chef, or Puppet
3. CI/CD (Mandatory): Hands-on experience in setting up or developing CI/CD pipelines using any of the tools such as (not limited to):
* Jenkins, AWS CodeCommit, CodeBuild, CodePipeline, CodeDeploy, GitLab CI, Azure DevOps
4. Containers & Orchestration (Good to have): Hands-on experience in provisioning and managing containers and orchestration solutions such as:
* Docker & Docker Swarm
* Kubernetes (Private/Public Cloud platforms)
* OpenShift
* Helm Charts
Certification Expectations
1. Cloud: Certification (Mandatory, any of):
* AWS Certified SysOps Administrator - Associate
* AWS Certified Solutions Architect - Associate
* AWS Certified Developer - Associate
* Any AWS Professional/Specialty certification(s)
2. Automation (Optional, any of):
* RedHat Certified Specialist in Ansible Automation
* HashiCorp Terraform Certified Associate
3. CI/CD (Optional):
* Certified Jenkins Engineer
4. Containers & Orchestration (Optional, any of):
* CKA (Certified Kubernetes Administrator)
* RedHat Certified Specialist in OpenShift Administration
Responsibilities:
* Lead architecture and design discussions with architects and clients.
* Understanding of technology best practices and AWS frameworks such as the Well-Architected Framework.
* Implementing solutions with an emphasis on Cloud Security, Cost Optimization, and automation.
* Independently handle customer engagements and new deals.
* Ability to manage teams and drive results.
* Ability to initiate proactive meetings with Leads and extended teams to highlight any gaps/delays or other challenges.
* Subject Matter Expert in technology.
* Ability to train/mentor the team in functional and technical skills.
* Ability to decide and provide adequate help on the career progression of people.
* Handle assets development.
* Support to the application team: work with application development teams to design, implement and, where necessary, automate infrastructure on cloud platforms.
* Continuous improvement: certain engagements will require you to support and maintain existing cloud environments with an emphasis on continuously innovating through automation and enhancing stability/availability through monitoring and improving the security posture.
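The cost-optimization and governance responsibilities above often start with something as simple as enforcing cost-allocation tags. A minimal sketch follows; the required tag set is a hypothetical policy, and real resource metadata would come from the cloud provider's inventory APIs rather than a plain dictionary.

```python
REQUIRED_TAGS = {"Owner", "CostCenter", "Environment"}  # hypothetical tagging policy

def untagged_resources(resources):
    """Return the IDs of resources missing at least one required tag.

    `resources` maps a resource ID to its tag dictionary, e.g. as produced
    by a cloud inventory export.
    """
    return sorted(
        rid for rid, tags in resources.items()
        if not REQUIRED_TAGS.issubset(tags)
    )
```

A report like this is typically run on a schedule so untagged (and therefore unattributable) spend is caught early.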
Posted 3 days ago
3.0 - 7.0 years
0 Lacs
Punjab
On-site
As a Software Engineer specializing in ML applications, you will be responsible for leveraging your expertise in Ansible, Python, and Unix to develop and maintain efficient and scalable solutions. Your experience in roles such as Software Engineer, DevOps, SRE, or System Engineer in a Cloud environment will be crucial in ensuring the smooth operation of ML applications. Your practical working experience in AWS using native tools will be essential for the successful implementation of ML projects. Proficiency in container technology, including Docker and Kubernetes/EKS, will enable you to design robust and resilient ML solutions. Additionally, your programming skills in Go, Python, TypeScript, and Bash will be valuable assets in crafting effective applications. Your familiarity with Infrastructure as Code tools such as CloudFormation, CDK, and Terraform will be vital in automating and managing the infrastructure required for ML applications. Experience with CI/CD tools like Jenkins, BitBucket, Bamboo, CodeBuild, CodePipeline, Docker Registry, and ArgoCD will be necessary for implementing efficient deployment pipelines. An understanding of DevOps and SRE best practices, including Logging, Monitoring, and Security, will ensure that the ML applications you develop are reliable, secure, and performant. Strong communication skills will be essential for effectively conveying your ideas and collaborating with team members to achieve project goals.
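SRE practice of the kind described here leans heavily on defensive automation. Below is a small, generic retry-with-exponential-backoff decorator, a common pattern rather than code from any specific team, that could wrap flaky calls to cloud APIs.

```python
import time
from functools import wraps

def retry(attempts: int = 3, base_delay: float = 0.1):
    """Retry a transiently failing call with exponential backoff,
    re-raising the exception if the final attempt also fails."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            for i in range(attempts):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if i == attempts - 1:
                        raise  # out of attempts: surface the error
                    time.sleep(base_delay * (2 ** i))  # 0.1s, 0.2s, 0.4s, ...
        return wrapper
    return decorator
```

In production, one would usually narrow the caught exception types and add jitter to the delay to avoid synchronized retries.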
Posted 4 days ago
8.0 - 10.0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
Overview
Working at Atlassian
Atlassians can choose where they work, whether in an office, from home, or a combination of the two. That way, Atlassians have more control over supporting their family, personal goals, and other priorities. We can hire people in any country where we have a legal entity. Interviews and onboarding are conducted virtually, a part of being a distributed-first company. Our office is in Bengaluru, but we offer flexibility for eligible candidates to work remotely across India. Whatever your preference, working from home, an office, or in between, you can choose the place that's best for your work and your lifestyle.
Responsibilities
As a Principal Software Engineer, you will be a technical leader and hands-on contributor, designing and optimizing high-scale, distributed storage systems. You will play a pivotal role in shaping the architecture, performance, and reliability of backend storage solutions that power critical applications at scale. Your primary responsibilities will include designing, implementing, and optimizing backend storage services that support high throughput, low latency, and fault tolerance. You will work closely with senior engineers, architects, and cross-functional teams to drive scalability, availability, and efficiency improvements in large-scale storage solutions. You will also lead technical deep dives, architecture reviews, and root cause analyses to resolve complex production issues related to storage performance, consistency, and durability. As a thought leader, you will drive best practices in distributed system design, security, and cloud cost optimization. You will also mentor senior engineers, contribute to technical roadmaps, and help shape the long-term storage strategy. Your expertise in storage consistency models, data partitioning, indexing, and caching strategies will be instrumental in improving system performance and reliability.
Additionally, you will collaborate with Site Reliability Engineers (SREs) to implement management interfaces, observability, and monitoring, ensuring high availability and compliance with industry standards. You will advocate for automation, Infrastructure-as-Code (IaC), DevOps best practices, Kubernetes (EKS), and CI/CD pipelines to enable scalable deployments and operational excellence.
Qualifications
Basic Requirements
* Bachelor's or Master's degree in Computer Science, Software Engineering, or a related technical field.
* 8+ years of experience in backend software development, focusing on distributed systems and storage solutions.
* 5+ years of experience working with AWS relational database services (RDS and Aurora) or equivalent in GCP.
* Strong expertise in system design, architecture, and scalability for large-scale storage solutions.
* Proficiency in at least one major backend programming language (Kotlin, Java, Go, Rust, or Python).
* Experience designing and implementing highly available, fault-tolerant, and cost-efficient storage architectures.
* Deep understanding of distributed systems, replication strategies, backup, restore, sharding, and caching.
* Knowledge of data security, encryption best practices, and compliance requirements (SOC2, GDPR, HIPAA).
* Experience leading engineering teams, mentoring senior engineers, and driving technical roadmaps.
* Proficiency with observability tools, performance monitoring, and troubleshooting at scale.
Core Requirements
Expertise in Large-Scale Storage Systems
* Deep knowledge of AWS relational database services (RDS and Aurora) or equivalent in GCP and their performance characteristics.
* Strong understanding of storage durability, consistency models, replication, and fault tolerance.
* Experience implementing cost-optimized data retention strategies.
Distributed Systems & Scalability
* Deep understanding of distributed storage architectures, CAP theorem, and consistency models.
* Expertise in partitioning, sharding, and replication strategies for low-latency, high-throughput storage.
* Experience designing and implementing highly available, fault-tolerant distributed systems using consensus algorithms (Raft/Paxos).
* Hands-on experience with Postgres.
High-Performance Backend Engineering
* Strong programming skills in Kotlin, Java, Go, Rust, or Python for backend storage development.
* Experience building event-driven, microservices-based architectures using gRPC, REST, or WebSockets.
* Expertise in data serialization formats (Parquet, Avro, ORC) for optimized storage access.
* Experience implementing data compression, deduplication, and indexing strategies to improve storage efficiency.
Cloud-Native & Infrastructure Automation
* Strong hands-on experience with cloud storage best practices.
* Proficiency in Infrastructure as Code (IaC) using Terraform, AWS CDK, or CloudFormation.
* Experience with Kubernetes (EKS), serverless architectures (Lambda, Fargate), and containerized storage workloads.
* Expertise in CI/CD automation for storage services, leveraging GitHub Actions, CodePipeline, Jenkins, or ArgoCD.
Performance Optimization & Observability
* Experience with benchmarking, profiling, and optimizing storage workloads.
* Proficiency in performance monitoring tools (CloudWatch, Prometheus, OpenTelemetry, Grafana) for storage systems.
* Strong debugging and troubleshooting skills for latency bottlenecks, memory leaks, and concurrency issues.
* Experience designing observability strategies (tracing, metrics, structured logging) for large-scale storage systems.
Security, Compliance, and Data Protection
* Deep knowledge of data security, encryption at rest/in transit, and IAM policies in AWS or equivalent in GCP.
* Experience implementing fine-grained access controls (IAM, KMS, STS, VPC Security Groups) for multi-tenant storage solutions.
* Familiarity with compliance frameworks (SOC2, GDPR, HIPAA, FedRAMP) and best practices for secure data storage.
* Expertise in disaster recovery, backup strategies, and multi-region failover solutions.
Leadership & Architectural Strategy
* Proven ability to design, document, and drive large-scale storage architectures from concept to production.
* Experience leading technical design reviews, architecture discussions, and engineering best practices.
* Strong ability to mentor senior and mid-level engineers, fostering growth in distributed storage expertise.
* Ability to influence technical roadmaps, long-term vision, and cost optimization strategies for backend storage.
Our Perks & Benefits
Atlassian offers a variety of perks and benefits to support you, your family and to help you engage with your local community. Our offerings include health coverage, paid volunteer days, wellness resources, and so much more. Visit go.atlassian.com/perksandbenefits to learn more.
About Atlassian
At Atlassian, we're motivated by a common goal: to unleash the potential of every team. Our software products help teams all over the planet and our solutions are designed for all types of work. Team collaboration through our tools makes what may be impossible alone, possible together. We believe that the unique contributions of all Atlassians create our success. To ensure that our products and culture continue to incorporate everyone's perspectives and experience, we never discriminate based on race, religion, national origin, gender identity or expression, sexual orientation, age, or marital, veteran, or disability status. All your information will be kept confidential according to EEO guidelines. To provide you the best experience, we can support with accommodations or adjustments at any stage of the recruitment process. Simply inform our Recruitment team during your conversation with them. To learn more about our culture and hiring process, visit go.atlassian.com/crh .
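The partitioning and sharding expertise this role calls for is often grounded in consistent hashing, which keeps key movement minimal when shards are added or removed. A compact sketch follows; the shard names are placeholders, and a production ring would add replication and weighted virtual nodes.

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Map keys to shards so that adding or removing one shard
    relocates only roughly 1/N of all keys."""

    def __init__(self, shards, vnodes=64):
        # Place `vnodes` virtual points per shard around the hash ring
        self._ring = sorted(
            (self._hash(f"{shard}:{v}"), shard)
            for shard in shards
            for v in range(vnodes)
        )
        self._keys = [h for h, _ in self._ring]

    @staticmethod
    def _hash(value: str) -> int:
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def shard_for(self, key: str) -> str:
        """Return the shard owning `key`: the first ring point at or
        after the key's hash, wrapping around at the end."""
        idx = bisect.bisect(self._keys, self._hash(key)) % len(self._ring)
        return self._ring[idx][1]
```

The same idea underlies partitioned caches and distributed databases; MD5 is used here only as a stable, non-cryptographic placement hash.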
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
As a Solution Architect in the Pre-Sales department, with 4-6 years of experience in cloud infrastructure deployment, migration, and managed services, your primary responsibility will be to design AWS Cloud Professional Services and AWS Cloud Managed Services solutions tailored to meet customer needs and requirements. You will engage with customers to analyze their requirements, ensuring cost-effective and technically sound solutions are provided. Your role will also involve developing technical and commercial proposals in response to various client inquiries such as Requests for Information (RFI), Requests for Quotation (RFQ), and Requests for Proposal (RFP). Additionally, you will prepare and deliver technical presentations to clients, highlighting the value and capabilities of AWS solutions. Collaborating closely with the sales team, you will work towards supporting their objectives and closing deals that align with business needs. Your ability to apply creative and analytical problem-solving skills to address complex challenges using AWS technology will be crucial. Furthermore, you should possess hands-on experience in planning, designing, and implementing AWS IaaS, PaaS, and SaaS services. Experience in executing end-to-end cloud migrations to AWS, including discovery, assessment, and implementation, is required. You must also be proficient in designing and deploying well-architected landing zones and disaster recovery environments on AWS. Excellent communication skills, both written and verbal, are essential for effectively articulating solutions to technical and non-technical stakeholders. Your organizational, time management, problem-solving, and analytical skills will play a vital role in driving consistent business performance and exceeding targets. 
Desirable skills include intermediate-level experience with AWS services like AppStream, Elastic Beanstalk, ECS, ElastiCache, and more, as well as IT orchestration and automation tools such as Ansible, Puppet, and Chef. Knowledge of Terraform, Azure DevOps, and AWS development services will be advantageous. In this role based in Noida, Uttar Pradesh, India, you will have the opportunity to collaborate with technical and non-technical teams across the organization, ensuring scalable, efficient, and secure solutions are delivered on the AWS platform.
Posted 3 weeks ago
0.0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
At Roche you can show up as yourself, embraced for the unique qualities you bring. Our culture encourages personal expression, open dialogue, and genuine connections, where you are valued, accepted and respected for who you are, allowing you to thrive both personally and professionally. This is how we aim to prevent, stop and cure diseases and ensure everyone has access to healthcare today and for generations to come. Join Roche, where every voice matters.
The Position
Title: Senior Frontend Developer (ReactJS & AWS)
The Opportunity
We seek a skilled front-end developer with expertise in ReactJS, AWS, and CI/CD best practices to design, develop, and maintain high-performance, scalable, and reliable web applications using modern cloud-native technologies.
Role Description: You will build and optimize high-performance frontend applications using ReactJS, integrated with AWS services to support the application's requirements. Additionally, the role involves setting up and managing CI/CD pipelines on GitLab, collaborating with our cross-functional, internal teams, and contributing to the overall user experience.
* Design, develop, and maintain high-performance, scalable, and reliable web applications using ReactJS and Next.js
* Collaborate seamlessly with cross-functional teams (product management, design, engineering, data ops) to deliver exceptional user experiences
* Work closely with UX/UI designers to translate design systems, mockups or wireframes into functional, clean, and responsive code
* Implement robust and secure cloud-native architectures, leveraging AWS as our primary platform
* Optimize web applications for peak performance, scalability, and cost-efficiency
* Stay abreast of the latest trends and best practices in cloud-native development and web technologies
* Contribute to the development of reusable components and libraries to streamline development processes
Who You Are
* Programming & Web Development: Proficiency in JavaScript, TypeScript, Python, React, Next.js, JavaScript (ES6+), HTML5, and CSS3 for modern web development
* Cloud & Serverless Expertise: In-depth knowledge of cloud-native technologies, especially AWS, serverless frameworks (AWS Lambda or Google Cloud Functions), and AWS services like CodePipeline, CodeBuild, and CodeDeploy
* Containerization & Microservices: Familiarity with Docker, Kubernetes, serverless computing, and microservices architectures for scalable applications
* API & Database Proficiency: Experience with RESTful APIs, GraphQL, SQL, API design principles, and managing cloud security, including IAM and data encryption
* DevOps Practices: Expertise in CI/CD pipelines, infrastructure as code, and DevOps tools for automation and efficient development workflows
* Problem-Solving & Collaboration: Strong troubleshooting and debugging capabilities combined with excellent interpersonal and communication skills
* Industry Knowledge & Certifications: Experience in regulated industries (e.g., pharmaceuticals, finance) with a focus on data compliance, security, and AWS certifications
In exchange we provide you with
Development opportunities: Roche is rich in
learning resources. We provide constant development opportunities, free language courses & training, the possibility of international assignments, internal position changes and the chance to shape your own career.
Excellent benefits & flexibility: competitive salary and cafeteria package, annual bonus, Private Medical Services, Employee Assistance Program, All You Can Move Sportpass, coaching/mentoring opportunity, buddy program, team buildings, holiday party. We also ensure flexibility, to help you find your balance: home office is a common practice (2 office days/week on average, and we provide fully remote working conditions within Hungary). We create the opportunity for freedom in working, where your corporate and private life coexist in harmony.
A global inclusive community, where we learn from each other. At Roche, we cooperate, debate, make decisions, celebrate successes and have fun as a team. That's what makes us Roche.
Please read the Data Privacy Notice for further information about how we handle your personal data related to the recruitment process: https://go.roche.com/dpn4candidates
Who we are
A healthier future drives us to innovate. Together, more than 100,000 employees across the globe are dedicated to advance science, ensuring everyone has access to healthcare today and for generations to come. Our efforts result in more than 26 million people treated with our medicines and over 30 billion tests conducted using our Diagnostics products. We empower each other to explore new possibilities, foster creativity, and keep our ambitions high, so we can deliver life-changing healthcare solutions that make a global impact. Let's build a healthier future, together. Roche is an Equal Opportunity Employer.
Posted 3 weeks ago
9.0 - 13.0 years
0 Lacs
Hyderabad, Telangana
On-site
You will be leading data engineering activities on moderate to complex data and analytics-centric problems that have a broad impact and require in-depth analysis to achieve desired results. Your responsibilities will include assembling, enhancing, maintaining, and optimizing current data, enabling cost savings, and meeting project or enterprise maturity objectives. Your role will require an advanced working knowledge of SQL, Python, and PySpark. You should also have experience using tools like Git/Bitbucket, Jenkins/CodeBuild, and CodePipeline, as well as familiarity with platform monitoring and alerting tools. Collaboration with Subject Matter Experts (SMEs) is essential for designing and developing Foundry front-end applications with the ontology (data model) and data pipelines supporting these applications. You will be responsible for implementing data transformations to derive new datasets or create Foundry Ontology Objects necessary for business applications. Additionally, you will implement operational applications using Foundry tools such as Workshop, Map, and/or Slate. Active participation in agile/scrum ceremonies (stand-ups, planning, retrospectives, etc.) is expected from you. Documentation plays a crucial role, and you will create and maintain documentation describing the data catalog and data objects. As applications grow in usage and requirements change, you will be responsible for maintaining these applications. A continuous improvement mindset is encouraged, and you will be expected to engage in after-action reviews and share learnings. Strong communication skills, especially in explaining technical concepts to non-technical business leaders, will be essential for success in this role.
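As a toy stand-in for the kind of transformation described above (deriving a new dataset from raw rows), here is a plain-Python sketch; a real pipeline would express the same aggregation in PySpark or Foundry transforms, and the column names are illustrative.

```python
def derive_daily_totals(rows):
    """Derive a per-day totals dataset from raw transaction rows.

    Each input row is a dict with "date" and "amount"; the output is a
    new, date-sorted dataset, analogous to a derived dataset in a pipeline.
    """
    totals = {}
    for row in rows:
        totals[row["date"]] = totals.get(row["date"], 0.0) + row["amount"]
    return [{"date": d, "total": t} for d, t in sorted(totals.items())]
```

The same shape (group, aggregate, materialize) is what a Foundry transform or a `GROUP BY` in SQL/PySpark would produce at scale.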
Posted 1 month ago
2.0 - 6.0 years
0 Lacs
Punjab
On-site
You are a highly skilled front-end React.js developer with a minimum of 2 years of experience, proficient in using Redux, Hooks, Webpack, etc. Additionally, you are highly proficient in Node.js/Express and REST API development with a minimum of 2 years of experience. You have experience working with containers and container management, specifically Docker. In addition, you have RDBMS experience and the ability to write efficient SQL queries, having worked with databases such as Oracle SQL, PostgreSQL, MySQL, or SQL Server. You also have experience with cloud-native design and microservices-based architecture patterns, as well as familiarity with NoSQL databases like MongoDB and DocumentDB. You are familiar with AWS services such as ECS, EKS, ECR, EC2, and S3, and can build custom Dockerfiles. You can implement CI/CD pipelines using tools like Bamboo, Git (BitBucket), and CodePipeline. You are proficient in working in Linux environments and competent at scripting. Furthermore, you have experience with other programming languages like Python and Java. Communication is one of your strengths, with good written and verbal skills.
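The "efficient SQL queries" requirement above can be illustrated with a parameterized aggregate query. This sketch uses Python's built-in sqlite3 module with a hypothetical `orders` table purely for demonstration; the same query shape applies to PostgreSQL or MySQL.

```python
import sqlite3

def top_customers(conn, limit=3):
    """Total order value per customer, highest first, using a bound
    parameter for LIMIT rather than string interpolation."""
    cur = conn.execute(
        """SELECT customer, SUM(amount) AS total
           FROM orders
           GROUP BY customer
           ORDER BY total DESC
           LIMIT ?""",
        (limit,),
    )
    return cur.fetchall()

# Demo data in an in-memory database (schema is illustrative only)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("acme", 120.0), ("acme", 30.0), ("globex", 200.0)],
)
```

Pushing the aggregation into the database, instead of fetching all rows and summing in application code, is usually what "efficient" means in practice.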
Posted 1 month ago
2.0 - 5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Inviting applications for the role of Principal Consultant - MLOps Engineer! In this role, you will lead the automation and orchestration of our machine learning infrastructure and CI/CD pipelines on public cloud (preferably AWS). This role is essential for enabling scalable, secure, and reproducible deployments of both classical AI/ML models and Generative AI solutions in production environments.
Responsibilities
* Develop and maintain CI/CD pipelines for AI/GenAI models on AWS using GitHub Actions and CodePipeline (not limited to).
* Automate infrastructure provisioning using IaC (Terraform, Bicep, etc.).
* Any cloud platform: Azure or AWS.
* Package and deploy AI/GenAI models on AWS (SageMaker, Lambda, API Gateway).
* Write Python scripts for automation, deployment, and monitoring.
* Engage in the design, development and maintenance of data pipelines for various AI use cases.
* Actively contribute to key deliverables as part of an agile development team.
* Set up model monitoring, logging, and alerting (e.g., drift, latency, failures).
* Ensure model governance, versioning, and traceability across environments.
* Collaborate with others to source, analyse, test and deploy data processes.
* Experience in GenAI projects.
Qualifications we seek in you!
Minimum Qualifications
* Experience with MLOps practices.
* Degree/qualification in Computer Science or a related field, or equivalent work experience.
* Experience developing, testing, and deploying data pipelines.
* Strong Python programming skills.
* Hands-on experience in deploying 2 - 3 AI/GenAI models in AWS.
* Familiarity with LLM APIs (e.g., OpenAI, Bedrock) and vector databases.
* Clear and effective communication skills to interact with team members, stakeholders and end users.
Preferred Qualifications/Skills
* Experience with Docker-based deployments.
* Exposure to model monitoring tools (Evidently, CloudWatch).
* Familiarity with RAG stacks or fine-tuning LLMs.
* Understanding of GitOps practices.
* Knowledge of governance and compliance policies, standards, and procedures.
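The model-monitoring duties above (e.g. drift alerts) can be sketched with a simple statistical check. This is a toy heuristic, not a substitute for tools like Evidently, and the three-standard-error threshold is an arbitrary assumption.

```python
from statistics import mean, stdev

def drift_alert(baseline, live, threshold=3.0):
    """Flag feature drift when the live window's mean deviates from the
    baseline mean by more than `threshold` standard errors (z-test style)."""
    se = stdev(baseline) / (len(baseline) ** 0.5)  # standard error of the mean
    z = abs(mean(live) - mean(baseline)) / se
    return z > threshold
```

A production setup would run such checks per feature on a schedule and route alerts through the monitoring stack (e.g. CloudWatch alarms) rather than returning a bare boolean.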
Posted 1 month ago
3.0 - 7.0 years
8 - 12 Lacs
Noida
Work from Office
Job Summary: We are looking for a skilled DevOps Engineer with expertise in AWS. The role involves automating, managing, and maintaining scalable, highly available, and secure AWS infrastructure to support our applications and business goals. You will work closely with developers, QA, and operations to streamline CI/CD processes and ensure robust cloud operations.
Key Responsibilities:
- Design, implement, and manage scalable, secure, and high-availability AWS environments (EC2, ECS/EKS, S3, RDS, Lambda, CloudFront, etc.)
- Develop Infrastructure as Code (IaC) using Terraform / CloudFormation.
- Build and manage CI/CD pipelines (Jenkins, GitLab CI/CD, CodePipeline, etc.)
- Implement automated monitoring and alerting solutions (CloudWatch, Prometheus, Grafana, ELK, etc.)
- Manage containerization & orchestration (Docker, Kubernetes/EKS)
- Ensure system security through IAM policies, security groups, VPC configurations, encryption, etc.
- Handle system upgrades, patching, and performance tuning.
- Troubleshoot production issues across services and levels of the stack.
- Collaborate with development teams to optimize build and release processes.
- Ensure compliance with best practices around availability, security, and disaster recovery.
Required Skills & Experience:
- Strong hands-on experience with AWS core services (EC2, VPC, ELB, S3, RDS, IAM, Lambda, etc.)
- Proficiency in Terraform or AWS CloudFormation for Infrastructure as Code.
- Experience with CI/CD tools like Jenkins, GitLab, CodePipeline, or similar.
- Good understanding of containerization and orchestration (Docker, Kubernetes/EKS).
- Expertise in Linux system administration & shell scripting.
- Familiarity with monitoring, logging & alerting solutions (CloudWatch, ELK, Prometheus, etc.)
- Knowledge of networking concepts: DNS, VPN, Load Balancing, Security Groups.
- Experience with Git version control.
- Strong problem-solving and troubleshooting skills.
Must Have:
- AWS certifications (Solutions Architect, DevOps Engineer, SysOps Administrator)
- Exposure to serverless architectures (AWS Lambda, API Gateway)
- Scripting with Python/Go for automation
- Experience in blue-green / canary deployments
- Knowledge of cost optimization strategies on AWS
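A blue-green/canary rollout like the one listed above ultimately reduces to a promotion decision against the stable environment. A minimal sketch follows; the error-rate tolerance is an assumed policy, and a real pipeline would read these numbers from the monitoring stack rather than take them as arguments.

```python
def promote_canary(canary_errors, canary_requests,
                   baseline_error_rate, tolerance=1.5):
    """Promote the canary only if its observed error rate stays within
    `tolerance` times the stable (blue) environment's error rate."""
    if canary_requests == 0:
        return False  # no traffic observed yet; keep waiting
    return canary_errors / canary_requests <= baseline_error_rate * tolerance
```

Gating traffic shifts on a comparison like this, rather than on a fixed error count, keeps the decision meaningful regardless of how much traffic the canary actually receives.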
Posted 1 month ago
0.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Ready to shape the future of work? At Genpact, we don't just adapt to change, we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's industry-first accelerator is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. Our breakthrough solutions, from large-scale models onward, tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment.
Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today.
Inviting applications for the role of Principal Consultant - MLOps Engineer! In this role, you will lead the automation and orchestration of our machine learning infrastructure and CI/CD pipelines on public cloud (preferably AWS). This role is essential for enabling scalable, secure, and reproducible deployments of both classical AI/ML models and Generative AI solutions in production environments.
Responsibilities
* Develop and maintain CI/CD pipelines for AI/GenAI models on AWS using GitHub Actions and CodePipeline (not limited to).
* Automate infrastructure provisioning using IaC (Terraform, Bicep, etc.).
* Any cloud platform: Azure or AWS.
* Package and deploy AI/GenAI models on AWS (SageMaker, Lambda, API Gateway).
* Write Python scripts for automation, deployment, and monitoring.
Engaging in the design, development and maintenance of data pipelines for various AI use cases Active contribution to key deliverables as part of an agile development team Set up model monitoring, logging, and alerting (e.g., drift, latency, failures). Ensure model governance, versioning, and traceability across environments. Collaborating with others to source, analyse , test and deploy data processes Experience in GenAI project Qualifications we seek in you! Minimum Qualifications experience with MLOps practices. Degree/qualification in Computer Science or a related field, or equivalent work experience Experience developing, testing, and deploying data pipelines Strong Python programming skills. Hands-on experience in deploying 2 - 3 AI/ GenAI models in AWS. Familiarity with LLM APIs (e.g., OpenAI, Bedrock) and vector databases. Clear and effective communication skills to interact with team members, stakeholders and end users Preferred Qualifications/ Skills Experience with Docker-based deployments. Exposure to model monitoring tools (Evidently, CloudWatch). Familiarity with RAG stacks or fine-tuning LLMs. Understanding of GitOps practices. Knowledge of governance and compliance policies, standards, and procedures Why join Genpact Be a transformation leader - Work at the cutting edge of AI, automation, and digital innovation Make an impact - Drive change for global enterprises and solve business challenges that matter Accelerate your career - Get hands-on experience, mentorship, and continuous learning opportunities Work with the best - Join 140,000+ bold thinkers and problem-solvers who push boundaries every day Thrive in a values-driven culture - Our courage, curiosity, and incisiveness - built on a foundation of integrity and inclusion - allow your ideas to fuel progress Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let&rsquos build tomorrow together. 
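The model-monitoring responsibility above usually starts with a drift metric. One common choice is the Population Stability Index (PSI) between the training-time feature distribution and the live one; a minimal self-contained sketch (bin proportions assumed precomputed; the conventional rule of thumb reads PSI below 0.1 as stable and above 0.2 as significant drift):

```python
import math

def psi(expected: list[float], actual: list[float], eps: float = 1e-6) -> float:
    """Population Stability Index between two binned distributions.
    Each list holds per-bin proportions summing to ~1; eps guards empty bins."""
    score = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, eps), max(a, eps)
        score += (a - e) * math.log(a / e)
    return score

baseline = [0.25, 0.25, 0.25, 0.25]
print(psi(baseline, baseline))                     # 0.0 (no drift)
print(psi(baseline, [0.10, 0.20, 0.30, 0.40]))     # > 0.2, would trigger an alert
```

In production the score would be published as a custom CloudWatch metric (e.g., via boto3's `put_metric_data`) with an alarm on the 0.2 threshold.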
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
Posted 1 month ago
15.0 - 17.0 years
30 - 37 Lacs
Hyderabad
Work from Office
Role: Palantir Tech Lead
Location: Hyderabad (5 days work from office)
Budget: 32-36 LPA
Notice period: Immediate to 30 days
Mandatory Skills: Python, PySpark, and Palantir
Looking for 15 years of overall experience, with a minimum of 1.5 to 2 years of experience in Palantir. We need a strong, hands-on lead engineer, onsite in Hyderabad.
Tasks and Responsibilities:
- Lead data engineering activities on moderate to complex data and analytics-centric problems that have broad impact and require in-depth analysis to obtain desired results; assemble, enhance, maintain, and optimize current solutions to enable cost savings and meet individual project or enterprise maturity objectives.
- Apply advanced working knowledge of SQL, Python, and PySpark.
- Use tools such as Git/Bitbucket, Jenkins/CodeBuild, and CodePipeline.
- Use platform monitoring and alerting tools.
- Work closely with Subject Matter Experts (SMEs) to design and develop Foundry front-end applications with the ontology (data model) and the data pipelines supporting those applications.
- Implement data transformations to derive new datasets or create Foundry Ontology Objects necessary for business applications.
- Implement operational applications using Foundry tools (Workshop, Map, and/or Slate).
- Actively participate in agile/scrum ceremonies (stand-ups, planning, retrospectives, etc.).
- Create and maintain documentation describing the data catalog and data objects.
- Maintain applications as usage grows and requirements change.
- Promote a continuous-improvement mindset by engaging in after-action reviews and sharing learnings.
- Use communication skills, especially for explaining technical concepts to nontechnical business leaders.
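The "derive new datasets" responsibility is, at its core, a grouped aggregation. Real Foundry transforms are written as PySpark `@transform_df` functions; the equivalent logic in plain Python, with hypothetical column names, looks like this:

```python
from collections import defaultdict

def derive_totals(rows: list[dict]) -> list[dict]:
    """Aggregate raw transaction rows into a per-customer derived dataset,
    roughly what a Foundry/PySpark groupBy().agg(sum) transform would produce."""
    totals: dict[str, float] = defaultdict(float)
    for row in rows:
        totals[row["customer_id"]] += row["amount"]
    return [{"customer_id": k, "total_amount": v} for k, v in sorted(totals.items())]

raw = [
    {"customer_id": "c1", "amount": 40.0},
    {"customer_id": "c2", "amount": 10.0},
    {"customer_id": "c1", "amount": 60.0},
]
print(derive_totals(raw))
# [{'customer_id': 'c1', 'total_amount': 100.0}, {'customer_id': 'c2', 'total_amount': 10.0}]
```

In Foundry the output list would instead be a new dataset backing an Ontology Object type.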
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
hyderabad, telangana
On-site
Tezo is a new-generation Digital & AI solutions provider with a history of creating remarkable outcomes for our customers. We bring exceptional experiences using cutting-edge analytics, data proficiency, technology, and digital excellence.
Job Overview
The AWS Architect with Data Engineering skills will be responsible for designing, implementing, and managing scalable, robust, and secure cloud infrastructure and data solutions on AWS. This role requires a deep understanding of AWS services and data engineering best practices, and the ability to translate business requirements into effective technical solutions.
Key Responsibilities
Architecture Design:
- Design and architect scalable, reliable, and secure AWS cloud infrastructure.
- Develop and maintain architecture diagrams, documentation, and standards.
Data Engineering:
- Design and implement ETL pipelines using AWS services such as Glue, Lambda, and Step Functions.
- Build and manage data lakes and data warehouses using AWS services such as S3, Redshift, and Athena.
- Ensure data quality, data governance, and data security across all data platforms.
AWS Services Management:
- Utilize a wide range of AWS services (EC2, S3, RDS, Lambda, DynamoDB, etc.) to support various workloads and applications.
- Implement and manage CI/CD pipelines using AWS CodePipeline, CodeBuild, and CodeDeploy.
- Monitor and optimize the performance, cost, and security of AWS resources.
Collaboration and Communication:
- Work closely with cross-functional teams including software developers, data scientists, and business stakeholders.
- Provide technical guidance and mentorship to team members on best practices in AWS and data engineering.
Security and Compliance:
- Ensure that all cloud solutions follow security best practices and comply with industry standards and regulations.
- Implement and manage IAM policies, roles, and access controls.
Innovation and Improvement:
- Stay up to date with the latest AWS services, features, and best practices.
- Continuously evaluate and improve existing systems, processes, and architectures.
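The data-quality responsibility in an ETL pipeline typically includes a validation gate before loading. A minimal stand-in for that step (field names hypothetical; in a Glue job the same check would run over a DynamicFrame, with rejects routed to a quarantine location):

```python
def validate_records(records: list[dict], required=("id", "timestamp", "value")):
    """Split records into (valid, rejected) based on required non-null fields,
    a minimal stand-in for the data-quality gate in an ETL job."""
    valid, rejected = [], []
    for rec in records:
        if all(rec.get(field) is not None for field in required):
            valid.append(rec)
        else:
            rejected.append(rec)
    return valid, rejected

batch = [
    {"id": 1, "timestamp": "2024-01-01T00:00:00Z", "value": 3.2},
    {"id": 2, "timestamp": None, "value": 1.1},
]
good, bad = validate_records(batch)
print(len(good), len(bad))  # 1 1
```

Tracking the reject count per run gives an early signal of upstream schema or quality regressions.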
Posted 1 month ago
0.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Inviting applications for the role of Senior Principal Consultant - AWS AI/ML Engineer.
Responsibilities:
- Design, develop, and deploy scalable AI/ML solutions using AWS services such as Amazon Bedrock, SageMaker, Amazon Q, Amazon Lex, Amazon Connect, and Lambda.
- Implement and optimize large language model (LLM) applications using Amazon Bedrock, including prompt engineering, fine-tuning, and orchestration for specific business use cases.
- Build and maintain end-to-end machine learning pipelines using SageMaker for model training, tuning, deployment, and monitoring.
- Integrate conversational AI and virtual assistants using Amazon Lex and Amazon Connect, with seamless user experiences and real-time inference.
- Leverage AWS Lambda for event-driven execution of model inference, data preprocessing, and microservices.
- Design and maintain scalable and secure data pipelines and AI workflows, ensuring efficient data flow to and from Redshift and other AWS data stores.
- Implement data ingestion, transformation, and model inference for structured and unstructured data using Python and AWS SDKs.
- Collaborate with data engineers and scientists to support development and deployment of ML models on AWS.
- Monitor AI/ML applications in production, ensuring optimal performance, low latency, and cost efficiency across all AI/ML services.
- Ensure implementation of AWS security best practices, including IAM policies, data encryption, and compliance with industry standards.
- Drive the integration of Amazon Q for enterprise AI-based assistance and automation across internal processes and systems.
- Participate in architecture reviews and recommend best-fit AWS AI/ML services for evolving business needs.
- Stay up to date with the latest advancements in AWS AI services, LLMs, and industry trends to inform technology strategy and innovation.
- Prepare documentation for ML pipelines, model performance reports, and system architecture.
Qualifications we seek in you:
Minimum Qualifications:
- Proven hands-on experience with Amazon Bedrock, SageMaker, Lex, Connect, Lambda, and Redshift.
- Strong knowledge and application experience with Large Language Models (LLMs) and prompt engineering techniques.
- Experience building production-grade AI applications using AWS AI or other generative AI services.
- Solid programming experience in Python for ML development, data processing, and automation.
- Proficiency in designing and deploying conversational AI/chatbot solutions using Lex and Connect.
- Experience with Redshift for data warehousing and analytics integration with ML solutions.
- Good understanding of AWS architecture, scalability, availability, and security best practices.
- Familiarity with AWS development, deployment, and monitoring tools (CloudWatch, CodePipeline, etc.).
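Invoking an LLM through Amazon Bedrock comes down to serializing a model-specific JSON body. A sketch for an Anthropic model using the Bedrock messages schema (the `anthropic_version` value and field names reflect the documented format at the time of writing; verify against current Bedrock docs, and treat the model ID as illustrative):

```python
import json

def build_claude_body(prompt: str, max_tokens: int = 256) -> str:
    """Serialize a request body for an Anthropic model on Amazon Bedrock."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })

body = build_claude_body("Summarize last week's incident tickets.")
print(json.loads(body)["messages"][0]["role"])  # user

# With AWS credentials configured, the actual call would look like:
# import boto3
# client = boto3.client("bedrock-runtime")
# resp = client.invoke_model(modelId="anthropic.claude-3-haiku-20240307-v1:0", body=body)
```

Keeping body construction in a pure function like this makes prompt changes unit-testable without touching AWS.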
- Strong understanding of MLOps practices including model versioning, CI/CD pipelines, and model monitoring.
- Strong communication and interpersonal skills to collaborate with cross-functional teams and stakeholders.
- Ability to troubleshoot performance bottlenecks and optimize cloud resources for cost-effectiveness.
Preferred Qualifications:
- AWS Certification in Machine Learning, Solutions Architect, or AI Services.
- Experience with other AI tools (e.g., Anthropic Claude, OpenAI APIs, or Hugging Face).
- Knowledge of streaming architectures and services like Kafka or Kinesis.
- Familiarity with Databricks and its integration with AWS services.
Posted 1 month ago
0.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Inviting applications for the role of Lead Consultant - AWS AI/ML Engineer.
Responsibilities:
- Design, develop, and deploy scalable AI/ML solutions using AWS services such as Amazon Bedrock, SageMaker, Amazon Q, Amazon Lex, Amazon Connect, and Lambda.
- Implement and optimize large language model (LLM) applications using Amazon Bedrock, including prompt engineering, fine-tuning, and orchestration for specific business use cases.
- Build and maintain end-to-end machine learning pipelines using SageMaker for model training, tuning, deployment, and monitoring.
- Integrate conversational AI and virtual assistants using Amazon Lex and Amazon Connect, with seamless user experiences and real-time inference.
- Leverage AWS Lambda for event-driven execution of model inference, data preprocessing, and microservices.
- Design and maintain scalable and secure data pipelines and AI workflows, ensuring efficient data flow to and from Redshift and other AWS data stores.
- Implement data ingestion, transformation, and model inference for structured and unstructured data using Python and AWS SDKs.
- Collaborate with data engineers and scientists to support development and deployment of ML models on AWS.
- Monitor AI/ML applications in production, ensuring optimal performance, low latency, and cost efficiency across all AI/ML services.
- Ensure implementation of AWS security best practices, including IAM policies, data encryption, and compliance with industry standards.
- Drive the integration of Amazon Q for enterprise AI-based assistance and automation across internal processes and systems.
- Participate in architecture reviews and recommend best-fit AWS AI/ML services for evolving business needs.
- Stay up to date with the latest advancements in AWS AI services, LLMs, and industry trends to inform technology strategy and innovation.
- Prepare documentation for ML pipelines, model performance reports, and system architecture.
Qualifications we seek in you:
Minimum Qualifications:
- Proven hands-on experience with Amazon Bedrock, SageMaker, Lex, Connect, Lambda, and Redshift.
- Strong knowledge and application experience with Large Language Models (LLMs) and prompt engineering techniques.
- Experience building production-grade AI applications using AWS AI or other generative AI services.
- Solid programming experience in Python for ML development, data processing, and automation.
- Proficiency in designing and deploying conversational AI/chatbot solutions using Lex and Connect.
- Experience with Redshift for data warehousing and analytics integration with ML solutions.
- Good understanding of AWS architecture, scalability, availability, and security best practices.
- Familiarity with AWS development, deployment, and monitoring tools (CloudWatch, CodePipeline, etc.).
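The event-driven inference responsibility maps to a small Lambda handler behind API Gateway. A minimal sketch (the `predict` stub is hypothetical; a real handler would load the model artifact once at module scope so warm invocations reuse it):

```python
import json

def predict(features: list) -> dict:
    """Hypothetical model stub standing in for a loaded artifact."""
    return {"score": sum(features) / max(len(features), 1)}

def handler(event, context):
    """AWS Lambda entry point for lightweight model inference behind API Gateway."""
    try:
        payload = json.loads(event.get("body") or "{}")
        result = predict(payload.get("features", []))
        return {"statusCode": 200, "body": json.dumps(result)}
    except (ValueError, TypeError):
        return {"statusCode": 400, "body": json.dumps({"error": "bad request"})}

resp = handler({"body": json.dumps({"features": [1.0, 2.0, 3.0]})}, None)
print(resp["statusCode"])  # 200
```

Returning a well-formed 400 on malformed input keeps client errors out of the function's error metrics, which matters once CloudWatch alarms are attached.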
- Strong understanding of MLOps practices including model versioning, CI/CD pipelines, and model monitoring.
- Strong communication and interpersonal skills to collaborate with cross-functional teams and stakeholders.
- Ability to troubleshoot performance bottlenecks and optimize cloud resources for cost-effectiveness.
Preferred Qualifications:
- AWS Certification in Machine Learning, Solutions Architect, or AI Services.
- Experience with other AI tools (e.g., Anthropic Claude, OpenAI APIs, or Hugging Face).
- Knowledge of streaming architectures and services like Kafka or Kinesis.
- Familiarity with Databricks and its integration with AWS services.
Posted 1 month ago
2.0 - 5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Inviting applications for the role of Senior Principal Consultant - ML Engineers! In this role, you will lead the automation and orchestration of our machine learning infrastructure and CI/CD pipelines on public cloud (preferably AWS). This role is essential for enabling scalable, secure, and reproducible deployments of both classical AI/ML models and Generative AI solutions in production environments.
Responsibilities (not limited to):
- Develop and maintain CI/CD pipelines for AI/GenAI models on AWS using GitHub Actions and CodePipeline.
- Automate infrastructure provisioning using IaC (Terraform, Bicep, etc.) on any cloud platform - Azure or AWS.
- Package and deploy AI/GenAI models on SageMaker, Lambda, and API Gateway.
- Write Python scripts for automation, deployment, and monitoring.
- Engage in the design, development, and maintenance of data pipelines for various AI use cases.
- Contribute actively to key deliverables as part of an agile development team.
- Set up model monitoring, logging, and alerting (e.g., drift, latency, failures).
- Ensure model governance, versioning, and traceability across environments.
- Collaborate with others to source, analyse, test, and deploy data processes.
Qualifications we seek in you!
Minimum Qualifications:
- Experience with MLOps practices.
- Degree/qualification in Computer Science or a related field, or equivalent work experience.
- Experience developing, testing, and deploying data pipelines.
- Experience on a GenAI project.
- Strong Python programming skills.
- Hands-on experience deploying 2-3 AI/GenAI models in AWS.
- Familiarity with LLM APIs (e.g., OpenAI, Bedrock) and vector databases.
- Clear and effective communication skills to interact with team members, stakeholders, and end users.
Preferred Qualifications/Skills:
- Experience with Docker-based deployments.
- Exposure to model monitoring tools (Evidently, CloudWatch).
- Familiarity with RAG stacks or fine-tuning LLMs.
- Understanding of GitOps practices.
- Knowledge of governance and compliance policies, standards, and procedures.
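The governance, versioning, and traceability responsibility is usually served by a model registry. A deliberately minimal in-memory sketch of what such a registry records per version (all names hypothetical; production setups would use SageMaker Model Registry or MLflow instead):

```python
import hashlib
from datetime import datetime, timezone

class ModelRegistry:
    """Minimal in-memory registry illustrating model versioning and traceability."""

    def __init__(self):
        self._versions = []

    def register(self, name: str, artifact: bytes, training_data_ref: str) -> dict:
        entry = {
            "name": name,
            "version": len(self._versions) + 1,          # monotonically increasing
            "artifact_sha256": hashlib.sha256(artifact).hexdigest(),  # tamper evidence
            "training_data_ref": training_data_ref,      # lineage back to training data
            "registered_at": datetime.now(timezone.utc).isoformat(),
        }
        self._versions.append(entry)
        return entry

registry = ModelRegistry()
v1 = registry.register("churn-model", b"model-bytes", "s3://example-bucket/train.parquet")
print(v1["version"])  # 1
```

The artifact hash plus the training-data reference is what makes an audit question like "exactly which model served this prediction, trained on what?" answerable.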
Posted 1 month ago
0.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
Inviting applications for the role of Principal Consultant - AWS AI/ML Engineer.
Responsibilities:
- Design, develop, and deploy scalable AI/ML solutions using AWS services such as Amazon Bedrock, SageMaker, Amazon Q, Amazon Lex, Amazon Connect, and Lambda.
- Implement and optimize large language model (LLM) applications using Amazon Bedrock, including prompt engineering, fine-tuning, and orchestration for specific business use cases.
- Build and maintain end-to-end machine learning pipelines using SageMaker for model training, tuning, deployment, and monitoring.
- Integrate conversational AI and virtual assistants using Amazon Lex and Amazon Connect, with seamless user experiences and real-time inference.
- Leverage AWS Lambda for event-driven execution of model inference, data preprocessing, and microservices.
- Design and maintain scalable and secure data pipelines and AI workflows, ensuring efficient data flow to and from Redshift and other AWS data stores.
- Implement data ingestion, transformation, and model inference for structured and unstructured data using Python and AWS SDKs.
- Collaborate with data engineers and scientists to support development and deployment of ML models on AWS.
- Monitor AI/ML applications in production, ensuring optimal performance, low latency, and cost efficiency across all AI/ML services.
- Ensure implementation of AWS security best practices, including IAM policies, data encryption, and compliance with industry standards.
- Drive the integration of Amazon Q for enterprise AI-based assistance and automation across internal processes and systems.
- Participate in architecture reviews and recommend best-fit AWS AI/ML services for evolving business needs.
- Stay up to date with the latest advancements in AWS AI services, LLMs, and industry trends to inform technology strategy and innovation.
- Prepare documentation for ML pipelines, model performance reports, and system architecture.
Qualifications we seek in you:
Minimum Qualifications:
- Proven hands-on experience with Amazon Bedrock, SageMaker, Lex, Connect, Lambda, and Redshift.
- Strong knowledge and application experience with Large Language Models (LLMs) and prompt engineering techniques.
- Experience building production-grade AI applications using AWS AI or other generative AI services.
- Solid programming experience in Python for ML development, data processing, and automation.
- Proficiency in designing and deploying conversational AI/chatbot solutions using Lex and Connect.
- Experience with Redshift for data warehousing and analytics integration with ML solutions.
- Good understanding of AWS architecture, scalability, availability, and security best practices.
- Familiarity with AWS development, deployment, and monitoring tools (CloudWatch, CodePipeline, etc.).
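Prompt engineering in practice means versioned, parameterized templates rather than ad-hoc strings. A minimal sketch using a hypothetical support-summary use case (template text and parameter names are illustrative):

```python
from string import Template

# Hypothetical prompt template; in practice templates like this are
# versioned alongside the model configuration they were tuned against.
SUMMARY_PROMPT = Template(
    "You are a support analyst. Summarize the ticket below in $max_sentences "
    "sentences, preserving any error codes verbatim.\n\nTicket:\n$ticket"
)

def render_prompt(ticket: str, max_sentences: int = 2) -> str:
    """Render the summary prompt for a single ticket."""
    return SUMMARY_PROMPT.substitute(ticket=ticket.strip(), max_sentences=max_sentences)

prompt = render_prompt("App crashes with E1042 on login.")
print("E1042" in prompt)  # True
```

Keeping rendering pure makes it easy to snapshot-test prompts before they are sent to Bedrock or another LLM endpoint.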
- Strong understanding of MLOps practices including model versioning, CI/CD pipelines, and model monitoring.
- Strong communication and interpersonal skills to collaborate with cross-functional teams and stakeholders.
- Ability to troubleshoot performance bottlenecks and optimize cloud resources for cost-effectiveness.
Preferred Qualifications:
- AWS Certification in Machine Learning, Solutions Architect, or AI Services.
- Experience with other AI tools (e.g., Anthropic Claude, OpenAI APIs, or Hugging Face).
- Knowledge of streaming architectures and services like Kafka or Kinesis.
- Familiarity with Databricks and its integration with AWS services.
Posted 1 month ago
0.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Inviting applications for the role of Senior Principal Consultant - AWS AI/ML Engineer.
Responsibilities:
- Design, develop, and deploy scalable AI/ML solutions using AWS services such as Amazon Bedrock, SageMaker, Amazon Q, Amazon Lex, Amazon Connect, and Lambda.
- Implement and optimize large language model (LLM) applications using Amazon Bedrock, including prompt engineering, fine-tuning, and orchestration for specific business use cases.
- Build and maintain end-to-end machine learning pipelines using SageMaker for model training, tuning, deployment, and monitoring.
- Integrate conversational AI and virtual assistants using Amazon Lex and Amazon Connect, with seamless user experiences and real-time inference.
- Leverage AWS Lambda for event-driven execution of model inference, data preprocessing, and microservices.
- Design and maintain scalable and secure data pipelines and AI workflows, ensuring efficient data flow to and from Redshift and other AWS data stores.
- Implement data ingestion, transformation, and model inference for structured and unstructured data using Python and AWS SDKs.
- Collaborate with data engineers and scientists to support development and deployment of ML models on AWS.
- Monitor AI/ML applications in production, ensuring optimal performance, low latency, and cost efficiency across all AI/ML services.
- Ensure implementation of AWS security best practices, including IAM policies, data encryption, and compliance with industry standards.
- Drive the integration of Amazon Q for enterprise AI-based assistance and automation across internal processes and systems.
- Participate in architecture reviews and recommend best-fit AWS AI/ML services for evolving business needs.
- Stay up to date with the latest advancements in AWS AI services, LLMs, and industry trends to inform technology strategy and innovation.
- Prepare documentation for ML pipelines, model performance reports, and system architecture.
Qualifications we seek in you:
Minimum Qualifications:
- Proven hands-on experience with Amazon Bedrock, SageMaker, Lex, Connect, Lambda, and Redshift.
- Strong knowledge and application experience with Large Language Models (LLMs) and prompt engineering techniques.
- Experience building production-grade AI applications using AWS AI or other generative AI services.
- Solid programming experience in Python for ML development, data processing, and automation.
- Proficiency in designing and deploying conversational AI/chatbot solutions using Lex and Connect.
- Experience with Redshift for data warehousing and analytics integration with ML solutions.
- Good understanding of AWS architecture, scalability, availability, and security best practices.
- Familiarity with AWS development, deployment, and monitoring tools (CloudWatch, CodePipeline, etc.).
- Strong understanding of MLOps practices including model versioning, CI/CD pipelines, and model monitoring.
- Strong communication and interpersonal skills to collaborate with cross-functional teams and stakeholders.
- Ability to troubleshoot performance bottlenecks and optimize cloud resources for cost-effectiveness.
Preferred Qualifications:
- AWS Certification in Machine Learning, Solutions Architect, or AI Services.
- Experience with other AI tools (e.g., Anthropic Claude, OpenAI APIs, or Hugging Face).
- Knowledge of streaming architectures and services like Kafka or Kinesis.
- Familiarity with Databricks and its integration with AWS services.
Please also note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
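The Bedrock and LLM work this role describes usually starts with the `invoke_model` call in boto3. A minimal Python sketch follows; the model ID is a hypothetical example, and the Anthropic messages payload shape should be verified against the current AWS Bedrock documentation.

```python
import json

# Hypothetical model ID -- substitute one enabled in your account.
MODEL_ID = "anthropic.claude-3-sonnet-20240229-v1:0"

def build_claude_payload(prompt: str, max_tokens: int = 512) -> str:
    """Build an invoke_model request body in the Anthropic-on-Bedrock
    messages format (verify the schema against current AWS docs)."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })

def invoke(prompt: str) -> str:
    """Send the prompt to Bedrock (requires AWS credentials and model access)."""
    import boto3  # third-party; only needed for the live call
    client = boto3.client("bedrock-runtime")
    response = client.invoke_model(modelId=MODEL_ID,
                                   body=build_claude_payload(prompt))
    body = json.loads(response["body"].read())
    return body["content"][0]["text"]
```

The payload builder is kept separate from the network call so it can be unit-tested without AWS credentials.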
Posted 1 month ago
4.0 - 9.0 years
6 - 16 Lacs
Bengaluru
Work from Office
Job Description: Backend Developer Location: South Bengaluru, Karnataka, India Experience: 4 to 9 years Salary Range: 6 to 16 LPA Working Days: Monday to Friday (5-day work week) Work Mode: Work from Office only (No hybrid or remote option) About the Opportunity Join a product-driven tech team building a scalable, intelligent platform that simplifies high-value ownership journeys. This role is ideal for developers who want to work on cloud-native architectures, solve meaningful backend challenges, and drive real-world user impact. You'll be developing and optimizing backend systems using modern stacks and AWS services within a fast-paced, agile environment. Key Responsibilities - Design, develop, and deploy cloud-native applications on AWS using Lambda, AppSync, EventBridge, DynamoDB, and OpenSearch. - Build robust RESTful APIs and ensure smooth integration with internal and external systems. - Migrate and modernize existing full-stack applications to a cloud-native setup. - Develop full-stack solutions using TypeScript, JavaScript, Node.js, and Python. - Monitor and optimize database health using MongoDB and PostgreSQL. - Maintain CI/CD pipelines using AWS CodePipeline and related tools. - Ensure scalability, security, and performance of backend systems in a dynamic, fast-moving environment. Technical Skills Required - Strong experience with Python, FastAPI, Node.js, and TypeScript. - Proficiency with MongoDB, PostgreSQL, and familiarity with DynamoDB. - Solid knowledge of AWS services including Lambda and serverless architecture. - Good grasp of RESTful API design and cloud-native principles. - Exposure to front-end tech like Angular or JavaScript is a plus. - Experience working with CI/CD tools and Agile methodologies (Scrum). What You Bring - 4-9 years of experience in backend development, preferably in agile teams. - A strong problem-solving mindset and ability to take ownership. - Experience in modern, cloud-native tech stacks.
- Collaborative attitude and clear communication skills. Why Join Us This is your chance to work on innovative, tech-first solutions, not a conventional sector job. You'll contribute to building intelligent, scalable platforms with a direct user impact. - Work on deep-tech problems in asset discovery and digitization. - Build backend engines using the latest cloud technologies. - Be part of a transparent, ethical, innovation-focused product company. Parent Ecosystem & Vision You'll be contributing to a fast-growing technology initiative backed by a mission-driven ecosystem focused on sustainability, rural development, and innovation. The broader group emphasizes responsible growth, digital transformation, and creating real-world impact through technology.
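The Lambda-plus-API pattern in the responsibilities above can be sketched as a minimal Python handler. The `/health` route and response payload are illustrative assumptions; the event/response shape is the standard API Gateway proxy contract.

```python
import json

def lambda_handler(event, context):
    """Handle an API Gateway proxy request (hypothetical /health route)."""
    path = event.get("path", "/")
    if path == "/health":
        status, body = 200, {"status": "ok"}
    else:
        status, body = 404, {"error": f"no route for {path}"}
    # API Gateway proxy integrations expect exactly this response shape:
    # statusCode, headers, and a JSON-string body.
    return {
        "statusCode": status,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(body),
    }
```

Because the handler is a plain function of the event dict, it can be exercised locally with a fake event before any deployment.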
Posted 1 month ago
2.0 - 7.0 years
5 - 12 Lacs
Bengaluru
Work from Office
Job Description: Backend Developer Location: South Bengaluru, Karnataka, India Experience: 2 to 6 years Salary Range: 5-12 LPA Working Days: Monday to Friday (5-day work week) Work Mode: Work from Office only (No hybrid or remote option) About the Opportunity Join a product-driven tech team building a scalable, intelligent platform that simplifies high-value ownership journeys. This role is ideal for developers who want to work on cloud-native architectures, solve meaningful backend challenges, and drive real-world user impact. You'll be developing and optimizing backend systems using modern stacks and AWS services within a fast-paced, agile environment. Key Responsibilities - Design, develop, and deploy cloud-native applications on AWS using Lambda, AppSync, EventBridge, DynamoDB, and OpenSearch. - Build robust RESTful APIs and ensure smooth integration with internal and external systems. - Migrate and modernize existing full-stack applications to a cloud-native setup. - Develop full-stack solutions using TypeScript, JavaScript, Node.js, and Python. - Monitor and optimize database health using MongoDB and PostgreSQL. - Maintain CI/CD pipelines using AWS CodePipeline and related tools. - Ensure scalability, security, and performance of backend systems in a dynamic, fast-moving environment. Technical Skills Required - Strong experience with Python, FastAPI, Node.js, and TypeScript. - Proficiency with MongoDB, PostgreSQL, and familiarity with DynamoDB. - Solid knowledge of AWS services including Lambda and serverless architecture. - Good grasp of RESTful API design and cloud-native principles. - Exposure to front-end tech like Angular or JavaScript is a plus. - Experience working with CI/CD tools and Agile methodologies (Scrum). What You Bring - 2-6 years of experience in backend development, preferably in agile teams. - A strong problem-solving mindset and ability to take ownership. - Experience in modern, cloud-native tech stacks.
- Collaborative attitude and clear communication skills. Why Join Us This is your chance to work on innovative, tech-first solutions, not a conventional sector job. You'll contribute to building intelligent, scalable platforms with a direct user impact. - Work on deep-tech problems in asset discovery and digitization. - Build backend engines using the latest cloud technologies. - Be part of a transparent, ethical, innovation-focused product company. Parent Ecosystem & Vision You'll be contributing to a fast-growing technology initiative backed by a mission-driven ecosystem focused on sustainability, rural development, and innovation. The broader group emphasizes responsible growth, digital transformation, and creating real-world impact through technology.
Posted 2 months ago
2.0 - 6.0 years
1 - 4 Lacs
Hyderabad, Bengaluru, Delhi / NCR
Work from Office
Work Timings: 2 PM to 11 PM Job Description AWS Services: Lambda, API Gateway, S3, DynamoDB, Step Functions, SQS, AppSync, CloudWatch Logs, X-Ray, EventBridge, Amazon Pinpoint, Cognito, KMS Infrastructure as Code (IaC): AWS CDK, CodePipeline (planned) Serverless Architecture & Event-Driven Design Cloud Monitoring & Observability: CloudWatch Logs, X-Ray, Custom Metrics Security & Compliance: IAM roles and permissions boundaries, PHI/PII tagging, Cognito, KMS, HIPAA standards, Isolation Pattern, Access Control Cost Optimization: S3 lifecycle policies, serverless tiers, service selection (e.g., Pinpoint vs SES) Scalability & Resilience: Auto-scaling, DLQs, retry/backoff, circuit breakers CI/CD Pipeline Concepts Documentation & Workflow Design Cross-Functional Collaboration and AWS Best Practices Skills: X-Ray, cloud monitoring, AWS, CI/CD pipeline, documentation, security & compliance, cost optimization, AWS services, infrastructure, infrastructure as code (IaC), cross-functional collaboration, observability, serverless architecture, API, event-driven design, scalability, workflow design, AWS best practices, resilience Location: Remote - Bengaluru, Hyderabad, Delhi / NCR, Chennai, Pune, Kolkata, Ahmedabad, Mumbai
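The retry/backoff item under Scalability & Resilience can be illustrated with a short Python sketch of capped exponential backoff with full jitter; the base and cap values are arbitrary assumptions, not service defaults.

```python
import random

def backoff_delays(attempts: int, base: float = 0.5,
                   cap: float = 30.0, jitter: bool = True):
    """Yield one delay (in seconds) per retry attempt.

    Each delay is min(cap, base * 2**attempt); with jitter enabled,
    a uniform value in [0, delay] is used instead ('full jitter'),
    which spreads out retry storms from many concurrent clients.
    """
    for attempt in range(attempts):
        delay = min(cap, base * (2 ** attempt))
        yield random.uniform(0, delay) if jitter else delay

# Typical usage: sleep for each yielded delay between retries of a
# flaky call, then route the message to a DLQ if all attempts fail.
```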
Posted 2 months ago
4.0 - 6.0 years
6 - 8 Lacs
Bengaluru
Work from Office
Design and implement cloud-native data architectures on AWS, including data lakes, data warehouses, and streaming pipelines using services like S3, Glue, Redshift, Athena, EMR, Lake Formation, and Kinesis. Develop and orchestrate ETL/ELT pipelines. Required Candidate Profile: Participate in pre-sales and consulting activities such as engaging with clients to gather requirements and propose AWS-based data engineering solutions, and supporting RFPs/RFIs and technical proposals.
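The data-lake design work above typically relies on Hive-style partitioned S3 layouts, which Glue crawlers and Athena use for partition pruning. A minimal sketch, with hypothetical bucket and prefix names:

```python
from datetime import date

def partition_key(bucket: str, prefix: str, event_date: date) -> str:
    """Build a Hive-style partitioned S3 key prefix (year=/month=/day=),
    the layout Athena and Glue use to prune partitions at query time.
    Bucket and prefix names here are illustrative assumptions."""
    return (f"s3://{bucket}/{prefix}/"
            f"year={event_date.year}/"
            f"month={event_date.month:02d}/"
            f"day={event_date.day:02d}/")
```

Writing files under such prefixes lets a query like `WHERE year = '2024' AND month = '03'` scan only the matching objects.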
Posted 2 months ago
0.0 - 3.0 years
3 - 6 Lacs
Hyderabad
Work from Office
The ideal candidate will have a deep understanding of automation, configuration management, and infrastructure-as-code principles, with a strong focus on Ansible. You will work closely with developers, system administrators, and other collaborators to automate infrastructure-related processes, improve deployment pipelines, and ensure consistent configurations across multiple environments. The Infrastructure Automation Engineer will be responsible for developing innovative self-service solutions for our global workforce and further enhancing our self-service automation built using Ansible. As part of a scaled Agile product delivery team, the Developer works closely with product feature owners, project collaborators, operational support teams, peer developers and testers to develop solutions to enhance self-service capabilities and solve business problems by identifying requirements, conducting feasibility analysis, proof of concepts and design sessions. The Developer serves as a subject matter expert on the design, integration and operability of solutions to support innovation initiatives with business partners and shared services technology teams. Please note, this is an onsite role based in Hyderabad. Key Responsibilities: Automating repetitive IT tasks - Collaborate with multi-functional teams to gather requirements and build automation solutions for infrastructure provisioning, configuration management, and software deployment. Configuration Management - Design, implement, and maintain code including Ansible playbooks, roles, and inventories for automating system configurations and deployments, and ensuring consistency. Ensure the scalability, reliability, and security of automated solutions. Troubleshoot and resolve issues related to automation scripts, infrastructure, and deployments. Perform infrastructure automation assessments and implementations, providing solutions to increase efficiency, repeatability, and consistency.
DevOps - Facilitate continuous integration and deployment (CI/CD). Orchestration - Coordinate multiple automated tasks across systems. Develop and maintain clear, reusable, and version-controlled playbooks and scripts. Manage and optimize cloud infrastructure using Ansible and Terraform automation (AWS, Azure, GCP, etc.). Continuously improve automation workflows and practices to enhance speed, quality, and reliability. Ensure that infrastructure automation adheres to best practices, security standards, and regulatory requirements. Document and maintain processes, configurations, and changes in the automation infrastructure. Participate in design reviews, client requirements sessions and development teams to deliver features and capabilities supporting automation initiatives. Collaborate with product owners, collaborators, testers and other developers to understand, estimate, prioritize and implement solutions. Design, code, debug, document, deploy and maintain solutions in a highly efficient and effective manner. Participate in problem analysis, code review, and system design. Remain current on new technology and apply innovation to improve functionality. Collaborate closely with collaborators and team members to configure, improve and maintain current applications. Work directly with users to resolve support issues within product team responsibilities. Monitor health, performance and usage of developed solutions. What we expect of you We are all different, yet we all use our unique contributions to serve patients.
Basic Qualifications: Bachelor's degree and 0 to 3 years of computer science, IT, or related field experience OR Diploma and 4 to 7 years of computer science, IT, or related field experience. Deep hands-on experience with Ansible including playbooks, roles, and modules. Proven experience as an Ansible Engineer or in a similar automation role. Scripting skills in Python, Bash, or other programming languages. Proficiency in Terraform & CloudFormation for AWS infrastructure automation. Experience with other configuration management tools (e.g., Puppet, Chef). Experience with Linux administration, scripting (Python, Bash), and CI/CD tools (GitHub Actions, CodePipeline, etc.). Familiarity with monitoring tools (e.g., Dynatrace, Prometheus, Nagios). Working in an Agile (SAFe, Scrum, and Kanban) environment. Preferred Qualifications: Red Hat Certified Specialist in Developing with Ansible Automation Platform. Red Hat Certified Specialist in Managing Automation with Ansible Automation Platform. Red Hat Certified System Administrator. AWS Certified Solutions Architect Associate or Professional. AWS Certified DevOps Engineer Professional. Terraform Associate Certification. Good-to-Have Skills: Experience with Kubernetes (EKS) and service mesh architectures. Knowledge of AWS Lambda and event-driven architectures. Familiarity with AWS CDK, Ansible, or Packer for cloud automation. Exposure to multi-cloud environments (Azure, GCP). Experience operating within a validated systems environment (FDA, European Agency for the Evaluation of Medicinal Products, Ministry of Health, etc.). Soft Skills: Strong analytical and problem-solving skills. Effective communication and collaboration with multi-functional teams. Ability to work in a fast-paced, cloud-first environment. Shift Information: This position is an onsite role and may require working during later hours to align with business hours.
Candidates must be willing and able to work outside of standard hours as required to meet business needs.
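A minimal example of the kind of Ansible playbook this role centres on; a sketch only, with the `webservers` inventory group, the package list, and the service name all as illustrative assumptions rather than anything from the posting.

```yaml
# install_baseline.yml -- hypothetical playbook; host group, packages,
# and service name below are illustrative assumptions.
- name: Apply baseline configuration
  hosts: webservers
  become: true
  tasks:
    - name: Ensure baseline packages are present
      ansible.builtin.package:
        name: [curl, git]
        state: present

    - name: Ensure the app service is enabled and running
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

Run with `ansible-playbook -i inventory install_baseline.yml`; because both modules are idempotent, repeated runs converge to the same state, which is the property the role's "consistent configurations across environments" goal depends on.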
Posted 2 months ago
10.0 - 12.0 years
32 Lacs
Hyderabad / Secunderabad, Telangana, Telangana, India
On-site
Job Description Lead the development team to deliver, on budget, high-value complex projects. Guide the technical direction of a team, project or product area. Take technical responsibility for all stages or iterations in a software development project, providing method-specific technical advice to project stakeholders. Specify and ensure the design of technology solutions fulfills all our requirements, achieves desired goals and fulfills return-on-investment goals. Lead the development team to ensure disciplines are followed, project schedules and issues are managed, and project stakeholders receive regular communications. Establish a successful team culture, helping team members grow their skillsets and careers. You will be reporting to a Director. You will work from office 2 days a week (hybrid mode), with Hyderabad as the workplace. Qualifications 10+ years of working experience in a software development environment, with the last 5 years in a team leader position. Experience with cloud development on the Amazon Web Services (AWS) platform with services including Lambda, EC2, S3, Glue, Kubernetes, Fargate, AWS Batch and Aurora DB. Ability to comprehend and implement detailed project specifications, adapt to multiple technologies, and simultaneously work on multiple projects. Proficiency in Java full-stack development, including the Spring Boot framework and Kafka. Experience with Continuous Integration/Continuous Delivery (CI/CD) practices (CodeCommit, CodeDeploy, CodePipeline/Harness/Jenkins/GitHub Actions, CLI, BitBucket/Git, etc.). Ability to mentor and motivate team members. Additional Information Our uniqueness is that we truly celebrate yours. Experian's culture and people are important differentiators. We take our people agenda very seriously and focus on what truly matters: DEI, work/life balance, development, authenticity, engagement, collaboration, wellness, reward & recognition, volunteering... the list goes on.
Experian's strong people-first approach is award-winning: Great Place To Work in 24 countries, FORTUNE Best Companies to Work For, and Glassdoor Best Places to Work (globally 4.4 stars), to name a few. Check out Experian Life on social or our Careers Site to understand why. Experian is proud to be an Equal Opportunity and Affirmative Action employer. Innovation is a critical part of Experian's DNA and practices, and our diverse workforce drives our success. Everyone can succeed at Experian and bring their whole self to work, irrespective of their gender, ethnicity, religion, color, sexuality, physical ability or age. If you have a disability or special need that requires accommodation, please let us know at the earliest opportunity. Experian Careers - Creating a better tomorrow together Benefits Experian cares for employees' work-life balance, health, safety and wellbeing. In support of this endeavor, we offer best-in-class family well-being benefits, enhanced medical benefits and paid time off.
Posted 3 months ago
10 - 15 years
17 - 22 Lacs
Mumbai, Hyderabad, Bengaluru
Work from Office
Job roles and responsibilities: The AWS DevOps Engineer is responsible for automating, optimizing, and managing CI/CD pipelines, cloud infrastructure, and deployment processes on AWS. This role ensures smooth software delivery while maintaining high availability, security, and scalability. Design and implement scalable and secure cloud infrastructure on AWS, utilizing services such as EC2, EKS, ECS, S3, RDS, and VPC. Automate the provisioning and management of AWS resources using Infrastructure as Code tools (Terraform / CloudFormation / Ansible) and YAML. Implement and maintain continuous integration and continuous deployment (CI/CD) pipelines using tools like Jenkins, GitLab, or AWS CodePipeline. Advocate for a No-Ops model, striving for console-less experiences and self-healing systems. Experience with containerization technologies: Docker and Kubernetes. Mandatory Skills: Overall experience of 5-8 years in AWS DevOps specialization (AWS CodePipeline, AWS CodeBuild, AWS CodeDeploy, AWS CodeCommit). Work experience with AWS DevOps and IAM. Expertise with coding tools: Terraform, Ansible, or CloudFormation, plus YAML. Strong deployment experience: CI/CD pipelining. Manage containerized workloads using Docker, Kubernetes (EKS), or AWS ECS, with Helm charts. Experience with database migration. Proficiency in scripting languages (Python AND (Bash OR PowerShell)). Develop and maintain CI/CD pipelines using (AWS CodePipeline OR Jenkins OR GitHub Actions OR GitLab CI/CD). Experience with monitoring and logging tools (CloudWatch OR ELK Stack OR Prometheus OR Grafana). Career Level - IC4
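The monitoring side of the CI/CD work above can be sketched in Python. The helper below filters execution summaries of the shape returned by boto3's `codepipeline.list_pipeline_executions` (a `pipelineExecutionSummaries` list whose entries carry `status` and `pipelineExecutionId`); the live call is shown commented out, with a hypothetical pipeline name.

```python
def failed_executions(summaries):
    """Return the execution IDs of failed runs from a list of
    CodePipeline execution summaries (dicts with 'status' and
    'pipelineExecutionId', as returned by boto3)."""
    return [s["pipelineExecutionId"]
            for s in summaries
            if s.get("status") == "Failed"]

# Live usage (requires AWS credentials; "app-pipeline" is a
# hypothetical pipeline name):
# import boto3
# resp = boto3.client("codepipeline").list_pipeline_executions(
#     pipelineName="app-pipeline")
# print(failed_executions(resp["pipelineExecutionSummaries"]))
```

Keeping the filter pure makes it easy to test without AWS access and to reuse in a CloudWatch-triggered alerting Lambda.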
Posted 3 months ago