
901 Lambda Expressions Jobs - Page 31

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 - 5.0 years

7 - 11 Lacs

mumbai, bengaluru

Work from Office

Job Title: Snowflake Developer with Oracle GoldenGate / Data Engineer

About Oracle FSGIU - Finergy: The Finergy division within Oracle FSGIU is dedicated to the Banking, Financial Services, and Insurance (BFSI) sector. We offer deep industry knowledge and expertise to address the complex financial needs of our clients. With proven methodologies that accelerate deployment and personalization tools that create loyal customers, Finergy has established itself as a leading provider of end-to-end banking solutions. Our single platform for a wide range of banking services enhances operational efficiency, and our expert consulting services ensure technology aligns with our clients' business goals.

Responsibilities:
- Snowflake Data Modeling & Architecture: Design and implement scalable Snowflake data models using best practices such as the Snowflake Data Vault methodology.
- Real-Time Data Replication & Ingestion: Use Oracle GoldenGate for Big Data to manage real-time data streaming, and optimize Snowpipe for automated data ingestion.
- Cloud Integration & Management: Work with AWS services (S3, EC2, Lambda) to integrate and manage Snowflake-based solutions.
- Data Sharing & Security: Implement Snowflake data sharing (SnowShare) and enforce security measures such as role-based access control (RBAC), data masking, and encryption.
- CI/CD Implementation: Develop and manage CI/CD pipelines for Snowflake deployment and data transformation processes.
- Collaboration & Troubleshooting: Partner with cross-functional teams to address data-related challenges and optimize performance.
- Documentation & Best Practices: Maintain detailed documentation for data architecture, ETL processes, and Snowflake configurations.
- Performance Optimization: Continuously monitor and improve the efficiency of Snowflake queries and data pipelines.

Mandatory Skills:
- At least 4 years of experience as a Data Engineer.
- Strong expertise in Snowflake architecture, data modeling, and query optimization.
- Proficiency in SQL for writing and optimizing complex queries.
- Hands-on experience with Oracle GoldenGate for Big Data for real-time data replication.
- Knowledge of Snowpipe for automated data ingestion (see the sketch below).
- Familiarity with AWS cloud services (S3, EC2, Lambda, IAM) and their integration with Snowflake.
- Experience with CI/CD tools (e.g., Jenkins, GitLab) for automating workflows.
- Working knowledge of the Snowflake Data Vault methodology.

Good-to-Have Skills:
- Exposure to Databricks for data processing and analytics.
- Knowledge of Python or Scala for data engineering tasks.
- Familiarity with Terraform or CloudFormation for infrastructure as code (IaC).
- Experience with data governance and compliance best practices.
- Understanding of ML and AI integration with data pipelines.

Self-Test Questions:
- Do I have hands-on experience in designing and optimizing Snowflake data models?
- Can I confidently set up and manage real-time data replication using Oracle GoldenGate?
- Have I worked with Snowpipe to automate data ingestion processes?
- Am I proficient in SQL and capable of writing optimized queries in Snowflake?
- Do I have experience integrating Snowflake with AWS cloud services?
- Have I implemented CI/CD pipelines for Snowflake development?
- Can I troubleshoot performance issues in Snowflake and optimize queries effectively?
- Have I documented data engineering processes and best practices for team collaboration?
Qualifications: Career Level - IC2
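
As an illustration of the Snowpipe work this role describes, here is a minimal sketch using the Snowflake Python connector; the account, stage, storage integration, and target table names are all hypothetical.

```python
import os
import snowflake.connector  # pip install snowflake-connector-python

conn = snowflake.connector.connect(
    account="my_account",                      # hypothetical account
    user="etl_user",
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="LOAD_WH", database="RAW", schema="SALES",
)
cur = conn.cursor()

# External stage over an S3 landing bucket (assumes a storage
# integration named s3_int was already created by an admin).
cur.execute("""
    CREATE STAGE IF NOT EXISTS sales_stage
    URL = 's3://my-landing-bucket/sales/'
    STORAGE_INTEGRATION = s3_int
""")

# AUTO_INGEST pipe: Snowflake loads new files automatically as
# S3 event notifications arrive, so no scheduled COPY is needed.
cur.execute("""
    CREATE PIPE IF NOT EXISTS sales_pipe AUTO_INGEST = TRUE AS
    COPY INTO raw_sales FROM @sales_stage
    FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)
""")
```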

Posted Date not available

Apply

3.0 - 8.0 years

5 - 9 Lacs

bengaluru

Work from Office

Educational Requirements: Bachelor of Engineering
Service Line: Data & Analytics Unit

Responsibilities:
- Manage large machine learning applications; design and implement new frameworks to build scalable, efficient data processing workflows and machine learning pipelines.
- Build the tightly integrated pipeline that optimizes and compiles models, then orchestrates their execution.
- Collaborate with CPU, GPU, and Neural Engine hardware backends to push inference performance and efficiency.
- Work closely with feature teams to facilitate and debug the integration of increasingly sophisticated models, including large language models.
- Automate data processing and extraction.
- Engage with the sales team to find opportunities, understand requirements, and translate those requirements into technical solutions.
- Develop reusable ML models and assets into production.

Technical and Professional Requirements:
- Excellent Python programming and debugging skills (refer to the Python JD below).
- Proficiency with SQL, relational databases, and non-relational databases.
- Passion for API design and software architecture.
- Strong communication skills and the ability to explain difficult technical topics to everyone from data scientists to engineers to business partners.
- Experience with modern neural-network architectures and deep learning libraries (Keras, TensorFlow, PyTorch).
- Experience with unsupervised ML algorithms.
- Experience with time-series models and anomaly detection problems (see the sketch below).
- Experience with modern large language models (ChatGPT/BERT) and their applications.
- Expertise in performance optimization.
- Experience or knowledge of public cloud AWS services: S3, Lambda.
- Familiarity with distributed databases such as Snowflake and Oracle.
- Experience with containerization and orchestration technologies such as Docker and Kubernetes.

Preferred Skills:
- Technology -> Big Data - Data Processing -> Spark
- Technology -> Machine Learning -> R
- Technology -> Machine Learning -> Python
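
Since the role calls out unsupervised anomaly detection, here is a minimal, self-contained sketch using scikit-learn's IsolationForest on synthetic data; the data and contamination rate are purely illustrative.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
train = rng.normal(0, 1, size=(1000, 4))            # "normal" operating data
test = np.vstack([rng.normal(0, 1, size=(5, 4)),    # inliers
                  rng.normal(8, 1, size=(5, 4))])   # obvious outliers

# Fit on normal data; contamination sets the expected anomaly share.
model = IsolationForest(contamination=0.01, random_state=0).fit(train)
print(model.predict(test))  # 1 = inlier, -1 = anomaly
```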

Posted Date not available

Apply

4.0 - 9.0 years

22 - 25 Lacs

hyderabad

Work from Office

- 4-5 years of experience with AWS, with the Cloud Practitioner certification.
- Experience working with CloudFormation to create AWS components (IAM roles, Lambdas, EventBridge, etc.).
- Experience working with Terraform to create cloud components (S3 setup, permissions, AWS Batch configuration, etc.).
- Working experience creating Lambdas using Java and Python; this is application development on Lambdas rather than core infrastructure-level tasks (see the sketch below).
- Working experience with AWS Batch using Java and Python.

Good to have:
- Experience with AppFlow and EventBridge (writing event rules).
- Experience integrating with external applications like Salesforce.
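
For readers unfamiliar with "app development using Lambdas", a minimal Python handler looks like the sketch below; the event shape assumes an S3/SQS-style trigger carrying a Records list, and all names are illustrative.

```python
import json

def lambda_handler(event, context):
    """Entry point AWS Lambda invokes. `event` carries the trigger payload,
    e.g. an EventBridge rule detail or an S3/SQS Records batch."""
    records = event.get("Records", [])
    # ... per-record business logic would go here ...
    return {"statusCode": 200, "body": json.dumps({"processed": len(records)})}
```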

Posted Date not available

Apply

5.0 - 10.0 years

14 - 17 Lacs

mumbai

Work from Office

As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating source-to-target pipelines/workflows and implementing solutions that address clients' needs.

Your primary responsibilities include:
- Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements.
- Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization.
- Coordinate data access and security so that data scientists and analysts can easily access data whenever they need to.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- 5+ years of experience in Big Data: Hadoop, Spark, Scala, Python, HBase, Hive.
- Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git.
- Experience developing Python and PySpark programs for data analysis (see the sketch below).
- Good working experience using Python to develop custom frameworks for generating rules (like a rules engine).
- Experience developing Python code to gather data from HBase and designing solutions implemented with PySpark.
- Experience using Apache Spark DataFrames/RDDs to apply business transformations, and Hive context objects to perform read/write operations.

Preferred technical and professional experience:
- Understanding of DevOps.
- Experience building scalable end-to-end data ingestion and processing solutions.
- Experience with object-oriented and/or functional programming languages such as Python, Java, and Scala.
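
As a flavor of the PySpark work described, here is a minimal sketch of a DataFrame transformation job; the S3 paths and columns are hypothetical, and a Parquet source stands in for an HBase connector.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("rules-engine-sketch").getOrCreate()

df = spark.read.parquet("s3://example-bucket/events/")  # hypothetical source

out = (df.filter(F.col("amount") > 0)                     # simple business rule
         .withColumn("is_large", F.col("amount") > 10_000)
         .groupBy("account_id", "is_large")
         .agg(F.count("*").alias("n"), F.sum("amount").alias("total")))

out.write.mode("overwrite").parquet("s3://example-bucket/curated/totals/")
```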

Posted Date not available

Apply

4.0 - 9.0 years

5 - 9 Lacs

hyderabad

Work from Office

Job Purpose:
As a Senior Java/AWS Developer, you will be part of a team responsible for contributing to the design, development, maintenance, and support of ICE Digital Trade, a suite of highly configurable enterprise applications. The ideal candidate is results-oriented, self-motivated, and able to thrive in a fast-paced environment. This role requires frequent interaction with project and product managers, developers, quality assurance, and other stakeholders to ensure delivery of a world-class application to our users.

Responsibilities:
- Review application requirements and interface designs.
- Contribute to the design and development of enterprise Java applications.
- Develop and implement highly responsive user interface components using React concepts.
- Write application interface code in JavaScript following React.js workflows.
- Troubleshoot interface software and debug application code.
- Develop and implement front-end architecture to support user interface concepts.
- Monitor and improve front-end performance.
- Document application changes and develop updates.
- Collaborate with the QA team to ensure quality production code.
- Support and enhance multiple mission-critical enterprise applications.
- Write unit and integration tests for new and legacy code.
- Take initiative and work independently on some projects while contributing to a large team on others.
- Provide second-tier production support for 24/7 applications.
- Follow team guidelines for quality and consistency within the design and development phases of the application.
- Identify opportunities to improve and optimize the application.

Knowledge and Experience:
- Bachelor's degree in computer science or information technology.
- 4+ years of full-stack development experience.
- In-depth knowledge of Java, JavaScript, CSS, HTML, and front-end languages.
- Knowledge of performance testing frameworks; proven success with test-driven development.
- Experience with browser-based debugging and performance testing software.
- Excellent troubleshooting skills.
- Good object-oriented concepts and knowledge of core Java and Java EE.
- First-hand experience with enterprise messaging (IBM WebSphere MQ or equivalent).
- Practical knowledge of Java application servers (JBoss, Tomcat) preferred.
- Working knowledge of the Spring Framework.
- Experience with core AWS services and with serverless approaches using AWS resources.
- Experience developing infrastructure as code using the CDK with efficient use of AWS services.
- Experience with AWS services such as API Gateway, Lambda, DynamoDB, S3, Cognito, and the AWS CLI.
- Experience using the AWS SDK.
- Understanding of distributed transactions.
- Track record of completing assignments on time with a high degree of quality.
- Experience and/or knowledge of all aspects of the SDLC methodology and related concepts and practices.
- Experience with Agile development methodologies preferred.
- Knowledge of Gradle/Maven preferred.
- Experience working with commodity markets or financial trading environments preferred.
- Open to learning and willing to participate in development using new frameworks and programming languages.

Good to Have:
- Knowledge of React tools including React.js, TypeScript, JavaScript ES6, Webpack, Enzyme, Redux, and Flux.
- Experience with user interface design.
- Experience with AWS Amplify, RDS, EventBridge, SNS, SQS, and SES.

Posted Date not available

Apply

5.0 - 10.0 years

6 - 10 Lacs

hyderabad

Work from Office

Job Purpose:
Intercontinental Exchange, Inc. (ICE) presents a unique opportunity to work with cutting-edge technology and business challenges in the financial services sector. ICE team members work across departments and traditional boundaries to innovate and respond to industry demand. A successful candidate will be able to multitask in a dynamic team-based environment, demonstrating strong problem-solving and decision-making abilities and the highest degree of professionalism.

We are seeking an experienced AWS solution design engineer/architect to join our infrastructure cloud team. The infrastructure cloud team is responsible for internal services that provide developer collaboration tools, the build and release pipeline, and the shared AWS cloud services platform. The team enables engineers to build product features and deploy them into production efficiently and confidently.

Responsibilities:
- Develop utilities, or further existing application and system management tools and processes, that reduce manual effort and increase overall efficiency.
- Build and maintain Terraform/CloudFormation templates and scripts to automate and deploy AWS resources and configuration changes (see the sketch below).
- Review and refine design and architecture documents presented by teams for operational readiness, fault tolerance, and scalability.
- Monitor and research cloud technologies and stay current with trends in the industry.
- Participate in an on-call rotation and identify opportunities for reducing toil and avoiding technical debt to reduce support and operations load.

Knowledge and Experience:
Essential:
- 1.5+ years of experience in a DevOps (preferably DevSecOps) or SRE role in an AWS cloud environment.
- 1.5+ years of strong experience configuring, managing, solutioning, and architecting with AWS (Lambda, EC2, ECS, ELB, EventBridge, Kinesis, Route 53, SNS, SQS, CloudTrail, API Gateway, CloudFront, VPC, Transit Gateway, IAM, Security Hub, Service Mesh).
- Python or Golang proficiency.
- Proven background implementing continuous integration and delivery for projects.
- A track record of introducing automation to solve administrative and other business-as-usual tasks.

Beneficial:
- Proficiency in Terraform, CloudFormation, or Ansible.
- A history of delivering services developed with an API-first approach.
- A system administration, network, or security background.
- Prior experience working with environments of significant scale (thousands of servers).
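
To illustrate the deployment automation this role describes, here is a minimal boto3 sketch that deploys a CloudFormation template and waits for completion; the stack name and template path are hypothetical.

```python
import boto3

cfn = boto3.client("cloudformation")

with open("stack.yaml") as f:          # hypothetical template file
    template_body = f.read()

cfn.create_stack(
    StackName="infra-cloud-demo",
    TemplateBody=template_body,
    # Required acknowledgment if the template creates named IAM resources.
    Capabilities=["CAPABILITY_NAMED_IAM"],
)
# Block until the stack finishes creating (raises on failure/rollback).
cfn.get_waiter("stack_create_complete").wait(StackName="infra-cloud-demo")
```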

Posted Date not available

Apply

7.0 - 12.0 years

30 - 35 Lacs

bengaluru

Work from Office

Solution Architect - Data Platforms & Solution Delivery
Req number: R5967
Employment type: Full time
Worksite flexibility: Remote

Who we are:
CAI is a global technology services firm with over 8,500 associates worldwide and a yearly revenue of $1 billion+. We have over 40 years of excellence in uniting talent and technology to power the possible for our clients, colleagues, and communities. As a privately held company, we have the freedom and focus to do what is right, whatever it takes. Our tailor-made solutions create lasting results across the public and commercial sectors, and we are trailblazers in bringing neurodiversity to the enterprise.

Job Summary:
We are seeking an experienced Solution Architect to drive the design and delivery of enterprise data and analytics solutions. You will work closely with business and technical teams to understand project requirements, develop conceptual and logical architectures, and oversee full-lifecycle implementations, ensuring that solutions are scalable, secure, and aligned to business objectives. This role requires strong expertise in modern data platforms, cloud architectures, and end-to-end solution delivery in complex environments. This is a full-time, remote position.

What You'll Do:
- Work with stakeholders to gather and analyze business, data, and technical requirements for new and evolving data solutions.
- Develop conceptual, logical, and physical solution architectures for data platforms, data products, analytics, and AI/ML projects.
- Guide projects through the entire solution lifecycle, from requirements and design to build, testing, and deployment.
- Integrate data from multiple sources (SAP and non-SAP), ensuring interoperability and seamless data flow.
- Define solution-level security, access controls, and compliance requirements in partnership with data governance and security teams.
- Lead efforts in performance optimization, cost management, and solution scalability.
- Produce clear technical documentation and solution architecture diagrams to support implementation and knowledge transfer.
- Stay up to date with advances in cloud, data engineering, and analytics technologies and recommend improvements.
- Collaborate across teams, including data engineers, data scientists, DevOps, and business stakeholders, to ensure effective delivery and adoption of solutions.

What You'll Need:
- Solution architecture and data platforms: designing data platforms and architecture (conceptual, logical, physical); cloud-native and hybrid data solutions (Databricks Lakehouse, AWS); integration of SAP and non-SAP data sources.
- Databricks Lakehouse Platform: Medallion Architecture (see the sketch below), Delta Lake and DLT pipelines, PySpark notebooks, Spark SQL and SQL Warehouse, Unity Catalog (data governance, lineage), Genie (query performance, indexing), security and role-based access control.
- Programming: Python, SQL, PySpark, Spark, Scala.
- AWS cloud services: IAM, S3, Lambda, EMR, Redshift, Bedrock.
- Solution delivery: familiarity with DevOps and CI/CD processes; solution documentation and architecture diagramming; performance tuning and cost optimization; experience with data modeling (ER, dimensional); knowledge of data security and compliance frameworks.
- Strong communication, presentation, and stakeholder management skills.
- Bachelor's or Master's degree in Computer Science, Information Systems, Engineering, or a related field.

Preferred:
- Certifications in Databricks, AWS, or solution architecture.
- Experience with SAP ERP, S/4HANA, DataSphere, ABAP, and CDS views.
- Exposure to data mesh or data product management concepts.
- Background in manufacturing or enterprise analytics environments.

Physical Demands:
This role involves mostly sedentary work, with occasional movement around the office to attend meetings, etc. Ability to perform repetitive tasks on a computer, using a mouse, keyboard, and monitor.

Reasonable accommodation statement:
If you require a reasonable accommodation in completing this application, interviewing, completing any pre-employment testing, or otherwise participating in the employment selection process, please direct your inquiries to application.accommodations@cai.io or (888) 824-8111.
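
As an illustration of the Medallion Architecture named above, here is a minimal PySpark sketch of bronze/silver/gold layers, assuming a Databricks runtime where Delta Lake is available; the paths and columns are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: raw ingested files, kept close to source form.
bronze = spark.read.json("s3://lake/bronze/orders/")

# Silver: deduplicated and cleaned records.
silver = (bronze.dropDuplicates(["order_id"])
                .filter(F.col("order_ts").isNotNull()))
silver.write.format("delta").mode("overwrite").save("s3://lake/silver/orders/")

# Gold: business-level aggregate ready for BI consumption.
gold = silver.groupBy("customer_id").agg(F.sum("amount").alias("lifetime_value"))
gold.write.format("delta").mode("overwrite").save("s3://lake/gold/customer_ltv/")
```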

Posted Date not available

Apply

1.0 - 6.0 years

3 - 6 Lacs

kolhapur

Work from Office

We are looking for a skilled AWS Developer with 1-6 years of experience to join our team at Ecobillz Private Limited. The ideal candidate will have a strong background in software product development and proficiency in AWS technologies.

Roles and Responsibility:
- Design, develop, and deploy scalable and efficient software products using AWS services.
- Collaborate with cross-functional teams to identify and prioritize project requirements.
- Develop and maintain high-quality code that meets industry standards and best practices.
- Troubleshoot and resolve technical issues efficiently.
- Participate in code reviews and contribute to improving overall code quality.
- Stay updated with the latest trends and technologies in AWS and software product development.

Job Requirements:
- Strong understanding of software product development principles and methodologies.
- Proficiency in AWS services such as EC2, S3, Lambda, and DynamoDB.
- Experience with agile development methodologies and version control systems like Git.
- Excellent problem-solving skills and attention to detail.
- Strong communication and teamwork skills.
- Ability to work in a fast-paced environment and adapt to changing priorities.

Posted Date not available

Apply

15.0 - 25.0 years

10 - 14 Lacs

gurugram

Work from Office

Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: AWS Core Infrastructure
Good-to-have skills: NA
Minimum 15 year(s) of experience is required.
Educational Qualification: 15 years full-time education

Summary:
As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. You will be responsible for overseeing the application development process and ensuring successful project delivery.

Roles & Responsibilities:
- Expected to be an SME with deep knowledge and experience.
- Should have influencing and advisory skills.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Expected to provide solutions to problems that apply across multiple teams.
- Lead the application development team in designing and implementing software solutions.
- Collaborate with stakeholders to gather requirements and define project scope.
- Provide technical guidance and mentorship to team members.

Professional & Technical Skills:
- Must-have skills: proficiency in AWS Core Infrastructure.
- Strong understanding of cloud computing principles and best practices.
- Experience designing and implementing scalable and secure AWS solutions.
- Hands-on experience with AWS services such as EC2, S3, RDS, and Lambda.
- Knowledge of infrastructure-as-code tools like Terraform or CloudFormation.

Additional Information:
- The candidate should have a minimum of 15 years of experience in AWS Core Infrastructure.
- This position is based at our Gurugram office.
- 15 years of full-time education is required.

Posted Date not available

Apply

7.0 - 12.0 years

10 - 14 Lacs

gurugram

Work from Office

Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: Amazon Web Services (AWS)
Good-to-have skills: NA
Minimum 7.5 year(s) of experience is required.
Educational Qualification: 15 years full-time education

Summary:
As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your day will involve overseeing the application development process, collaborating with team members, and ensuring project success.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Lead the application development process effectively.
- Ensure timely delivery of projects.
- Mentor and guide team members for skill enhancement.

Professional & Technical Skills:
- Must-have skills: proficiency in Amazon Web Services (AWS).
- Strong understanding of cloud computing concepts.
- Experience designing scalable and reliable applications on AWS.
- Knowledge of infrastructure as code and automation tools.
- Hands-on experience with AWS services like EC2, S3, Lambda, and RDS.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Amazon Web Services (AWS).
- This position is based at our Gurugram office.
- 15 years of full-time education is required.

Posted Date not available

Apply

3.0 - 8.0 years

5 - 9 Lacs

bengaluru

Work from Office

Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: AWS Lambda Administration
Good-to-have skills: AWS S3 (Simple Storage Service), Python (Programming Language), AWS Application Integration, EventBridge, API Gateway
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years full-time education

Summary:
As an Application Developer, you will engage in the design, construction, and configuration of applications tailored to fulfill specific business processes and application requirements. Your typical day will involve collaborating with team members to understand project needs, developing innovative solutions, and ensuring that applications are optimized for performance and usability. You will also participate in testing and debugging processes to ensure the applications function as intended, while continuously seeking ways to enhance application efficiency and user experience.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute to providing solutions to work-related problems.
- Collaborate with cross-functional teams to gather requirements and translate them into technical specifications.
- Conduct thorough testing and debugging of applications to ensure high-quality deliverables.

Professional & Technical Skills:
- Must-have skills: proficiency in AWS Lambda administration (see the sketch below).
- Good-to-have skills: experience with AWS S3 (Simple Storage Service), Python (Programming Language), AWS Application Integration.
- Strong understanding of serverless architecture and its implementation.
- Experience developing and deploying applications using AWS services.
- Familiarity with application monitoring and performance tuning.

Additional Information:
- The candidate should have a minimum of 3 years of experience in AWS Lambda Administration.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.
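
As a concrete example of the EventBridge plus Lambda administration this role involves, here is a boto3 sketch that schedules a function nightly; the function name and ARN are hypothetical.

```python
import boto3

events = boto3.client("events")
lam = boto3.client("lambda")

FN_ARN = "arn:aws:lambda:ap-south-1:123456789012:function:report"  # hypothetical

# Rule fires at 02:00 UTC every day.
rule = events.put_rule(Name="nightly-report",
                       ScheduleExpression="cron(0 2 * * ? *)")
events.put_targets(Rule="nightly-report",
                   Targets=[{"Id": "report-fn", "Arn": FN_ARN}])

# Grant EventBridge permission to invoke the function.
lam.add_permission(
    FunctionName="report",
    StatementId="allow-eventbridge-nightly",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule["RuleArn"],
)
```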

Posted Date not available

Apply

5.0 - 9.0 years

9 - 13 Lacs

gurugram, bengaluru

Work from Office

About the Role:
Grade Level (for internal use): 11

The Role: Lead Cloud Engineer

The Team: We are looking for a dynamic AWS Cloud Support Engineer to join our team, working across multiple AWS accounts to ensure seamless cloud operations. This is a varied role that requires deep technical expertise, strategic planning, and strong stakeholder communication. Collaboration is at the core of our team, so if you thrive in a fast-paced, problem-solving environment, we'd love to hear from you.

The Impact: Contribute significantly to the growth of the firm by developing innovative functionality in existing and new products, and by supporting and maintaining high-revenue products.

What's in it for you:
- A collaborative team culture that values innovation and problem-solving.
- Opportunity to work on diverse projects spanning multiple AWS accounts.
- A chance to shape cloud strategy and architecture in a growing organizational division.
- Active support in taking learning opportunities.
- Exciting open-door collaboration within the EDO Agentic AI experience.

Key Responsibilities:
- Architecture Planning: Design and refine AWS architectures to meet business needs, ensuring security, scalability, and cost-effectiveness.
- Cost Management: Keep an eye on infrastructure costs and recommendations; propose changes to stakeholders to reduce cloud spend and waste (see the sketch below).
- Multi-Account Management: Oversee cloud environments across numerous AWS accounts, maintaining best practices for governance and security.
- Troubleshooting & Incident Response: Diagnose and resolve complex technical issues related to AWS services, infrastructure, and networking.
- Stakeholder Collaboration: Communicate effectively with teams across the organization, providing insights, technical recommendations, and status updates.
- Automation & Optimization: Develop scripts and tools to automate deployments, monitoring, and management processes.
- Security & Compliance: Ensure adherence to security policies and regulatory requirements within AWS environments.
- Continuous Improvement: Stay updated with AWS advancements and recommend improvements for existing cloud strategies.

Requirements:
- Proven experience in AWS cloud infrastructure and services.
- Strong understanding of networking, security, and cloud architecture best practices.
- Proficiency in Terraform, CloudFormation, or other infrastructure-as-code tools is a plus.
- Hands-on experience with EC2, S3, RDS, Lambda, VPC, Bedrock, and other AWS services preferred.
- Ability to troubleshoot complex system and network issues across cloud environments.
- Excellent communication skills and the ability to work collaboratively in a team-oriented environment.
- AWS certifications (Solutions Architect, SysOps, or Developer) are preferred but not mandatory.

Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People, Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you and your career need to thrive at S&P Global.
- Health & Wellness: Health care coverage designed for the mind and body.
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.
For more information on benefits by country visit https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domain, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, pre-employment training, or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here.

Equal Opportunity Employer: S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

IFTECH202.2 - Middle Professional Tier II (EEO Job Group)
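
For the cost-management responsibility above, here is a minimal boto3 Cost Explorer sketch that breaks a month's spend down by service; the dates and reporting threshold are illustrative.

```python
import boto3

ce = boto3.client("ce")
resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

# Print any service whose monthly spend crosses an arbitrary threshold.
for group in resp["ResultsByTime"][0]["Groups"]:
    amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
    if amount > 100:
        print(group["Keys"][0], round(amount, 2))
```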

Posted Date not available

Apply

7.0 - 12.0 years

2 - 5 Lacs

mumbai

Work from Office

Inviting applications for the role of Principal Consultant - AWS Developer.

We are seeking an experienced developer with expertise in AWS-based big data solutions, particularly leveraging Apache Spark on AWS EMR, along with strong backend development skills in Java and Spring. The ideal candidate will also possess a solid background in data warehousing, ETL pipelines, and large-scale data processing systems.

Responsibilities:
- Design and implement scalable data processing solutions using Apache Spark on AWS EMR (see the sketch below).
- Develop microservices and backend components using Java and the Spring framework.
- Build, optimize, and maintain ETL pipelines for structured and unstructured data.
- Integrate data pipelines with AWS services such as S3, Lambda, Glue, Redshift, and Athena.
- Collaborate with data architects, analysts, and DevOps teams to support data warehousing initiatives.
- Write efficient, reusable, and reliable code following best practices.
- Ensure data quality, governance, and lineage across the architecture.
- Troubleshoot and optimize Spark jobs and cloud-based processing workflows.
- Participate in code reviews, testing, and deployments in Agile environments.

Qualifications we seek in you!
Minimum Qualifications:
- Bachelor's degree.

Preferred Qualifications/Skills:
- Strong experience with Apache Spark and AWS EMR in production environments.
- Solid understanding of the AWS ecosystem, including services like S3, Lambda, Glue, Redshift, and CloudWatch.
- Proven experience designing and managing large-scale data warehousing systems.
- Expertise in building and maintaining ETL pipelines and data transformation workflows.
- Strong SQL skills and familiarity with performance tuning for analytical queries.
- Experience working in Agile development environments using tools such as Git, JIRA, and CI/CD pipelines.
- Familiarity with data modeling concepts and tools (e.g., star schema, snowflake schema).
- Knowledge of data governance tools and metadata management.
- Experience with containerization (Docker, Kubernetes) and serverless architectures.
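
To make the "Spark on AWS EMR" workflow concrete, here is a boto3 sketch that submits a PySpark job as a step on a running EMR cluster; the cluster id and script path are hypothetical.

```python
import boto3

emr = boto3.client("emr")
emr.add_job_flow_steps(
    JobFlowId="j-XXXXXXXXXXXXX",   # hypothetical running cluster
    Steps=[{
        "Name": "daily-etl",
        "ActionOnFailure": "CONTINUE",
        "HadoopJarStep": {
            # command-runner.jar lets a step run spark-submit on the cluster.
            "Jar": "command-runner.jar",
            "Args": ["spark-submit", "--deploy-mode", "cluster",
                     "s3://example-bucket/jobs/daily_etl.py"],
        },
    }],
)
```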

Posted Date not available

Apply

6.0 - 11.0 years

13 - 18 Lacs

pune

Hybrid

Who are we looking for?
We are looking for a Databricks engineer (developer) with strong software development experience of 6 to 10 years on Apache Spark and Scala.

Technical Skills:
- Strong knowledge of, and hands-on experience in, Apache Spark and Scala.
- Experience with AWS S3, Redshift, EC2, and Lambda services.
- Extensive experience developing and deploying big data pipelines.
- Experience with Azure Data Lake.
- Strong hands-on experience in SQL development / Azure SQL and an in-depth understanding of optimization and tuning techniques in SQL with Redshift.
- Development in notebooks (such as Jupyter, Databricks, Zeppelin).
- Development experience in Spark.
- Experience in scripting languages like Python and any other programming language.

Roles and Responsibilities:
- Candidate must have hands-on experience in AWS Databricks.
- Good development experience using Python/Scala, Spark SQL, and DataFrames.
- Hands-on experience with Databricks and Data Lake; SQL knowledge is a must.
- Performance tuning, troubleshooting, and debugging Spark.

Process Skills: Agile Scrum
Qualification: Bachelor of Engineering (computer background preferred)

Posted Date not available

Apply

5.0 - 10.0 years

20 - 25 Lacs

noida

Work from Office

Description:
Years of experience: 5 to 8
Location: Pune, Noida, Bangalore

Requirements:
- Strong experience with MS SQL Server and Snowflake.
- Solid knowledge of AWS cloud services (e.g., S3, EC2, Glue, Lambda).
- Proficient in Python and C# for scripting, API integration, and data processing.
- Experience working with Apache NiFi for data ingestion and orchestration.
- Hands-on with Apache Airflow for scheduling and managing workflows.
- Proficient in Apache Spark for big data processing and transformation.
- Experience working with Qubole or similar data lake platforms.
- Strong problem-solving skills and attention to detail.
- Excellent communication and collaboration abilities.

Job Responsibilities:
- Design and develop scalable ETL/ELT pipelines using Apache NiFi, Airflow, and Spark (see the Airflow sketch below).
- Implement and manage data workflows across AWS cloud services, including data ingestion, transformation, and storage.
- Develop and optimize queries in MS SQL and Snowflake for efficient data retrieval and reporting.
- Collaborate with data scientists, analysts, and application developers to ensure reliable and timely data delivery.
- Build and maintain data lake solutions on Qubole, ensuring data quality, governance, and security.
- Write clean, efficient code in Python and C# for automation, integration, and backend data processing tasks.
- Ensure pipeline and job reliability through monitoring, alerting, and error handling.
- Participate in performance tuning and optimization of data jobs and warehouse structures.
- Drive best practices for data modeling, metadata management, and documentation.

What We Offer:
- Exciting Projects: We focus on industries like high-tech, communication, media, healthcare, retail, and telecom. Our customer list is full of fantastic global brands and leaders who love what we build for them.
- Collaborative Environment: You can expand your skills by collaborating with a diverse team of highly talented people in an open, laid-back environment, or even abroad in one of our global centers or client facilities!
- Work-Life Balance: GlobalLogic prioritizes work-life balance, which is why we offer flexible work schedules, opportunities to work from home, and paid time off and holidays.
- Professional Development: Our dedicated Learning & Development team regularly organizes communication skills training (GL Vantage, Toastmasters), stress management programs, professional certifications, and technical and soft skill trainings.
- Excellent Benefits: We provide our employees with competitive salaries, family medical insurance, Group Term Life Insurance, Group Personal Accident Insurance, NPS (National Pension Scheme), periodic health awareness programs, extended maternity leave, annual performance bonuses, and referral bonuses.
- Fun Perks: We want you to love where you work, which is why we host sports events and cultural activities, offer food at subsidized rates, and throw corporate parties. Our vibrant offices also include dedicated GL Zones, rooftop decks, and the GL Club, where you can have coffee or tea with your colleagues over a game of table tennis, and we offer discounts for popular stores and restaurants!
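
Here is a minimal Airflow 2.x DAG of the kind the responsibilities describe; the task bodies are stubs and all names are illustrative.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    ...  # e.g. pick up files landed by NiFi

def transform():
    ...  # e.g. trigger a Spark job

with DAG(dag_id="nightly_pipeline",
         start_date=datetime(2024, 1, 1),
         schedule="@daily",   # Airflow 2.4+; older versions use schedule_interval
         catchup=False) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_extract >> t_transform  # transform runs only after extract succeeds
```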

Posted Date not available

Apply

3.0 - 8.0 years

5 - 9 Lacs

bengaluru

Work from Office

About The Role - Grade Specific:
We are seeking a talented Golang & Cloud Infrastructure Engineer with 3+ years of experience in building scalable backend systems and managing cloud infrastructure. The ideal candidate will have strong expertise in writing production-ready Go code, deploying containerized applications, and managing AWS resources using Terraform. This role requires a deep understanding of cloud architecture, CI/CD automation, and performance optimization.

Responsibilities:
- Develop and maintain backend services using Go (Golang) for high-performance applications.
- Design and manage AWS infrastructure using ECS, Fargate, S3, CloudFront, and Terraform.
- Build and deploy containerized applications using Docker.
- Implement and maintain CI/CD pipelines for automated testing and deployment.
- Optimize system performance and troubleshoot production issues.
- Collaborate with cross-functional teams to deliver reliable and scalable solutions.

Primary Skills:
- 3+ years of experience in Go (Golang) development.
- Strong hands-on experience with AWS services: ECS, Fargate, S3, CloudFront.
- Proficiency in Terraform for infrastructure as code.
- Solid understanding of Docker and container orchestration.
- Experience with CI/CD tools and deployment automation.
- Strong debugging and performance tuning capabilities.

Secondary Skills:
- Experience with monitoring and observability tools (e.g., CloudWatch, Datadog).
- Familiarity with serverless AWS services (e.g., Lambda, API Gateway).
- Knowledge of cloud cost optimization strategies.
- Excellent written and verbal communication skills.

Educational Qualification: Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.

Posted Date not available

Apply

15.0 - 20.0 years

15 - 20 Lacs

mumbai

Work from Office

Project Role: Solution Architect
Project Role Description: Translate client requirements into differentiated, deliverable solutions using in-depth knowledge of a technology, function, or platform. Collaborate with the Sales Pursuit and Delivery Teams to develop a winnable and deliverable solution that underpins the client value proposition and business case.
Must-have skills: Python (Programming Language)
Good-to-have skills: AWS Architecture
Minimum 12 year(s) of experience is required.
Educational Qualification: 15 years full-time education

Key Responsibilities:
- Define the architectural vision and ensure scalability, maintainability, and performance.
- Design and implement event-driven architectures and microservices using AWS.
- Lead adoption of serverless frameworks and containerized solutions (Lambda, EKS, Docker).
- Establish best practices for data storage and processing with AWS S3 and DynamoDB.
- Architect and automate workflows using AWS EventBridge (see the sketch below).
- Guide teams on designing and optimizing secure, high-performance RESTful APIs.
- Recommend database solutions and ensure efficient querying with SQL.
- Align architecture with business goals while ensuring compliance and security.

Required Skills:
- Expertise in Python and large-scale system architecture.
- Deep knowledge of AWS services like EKS, Lambda, S3, DynamoDB, and EventBridge.
- Strong background in event-driven architectures and serverless computing.
- Experience with Docker, Kubernetes, and containerization.
- Advanced REST API design skills with a focus on performance and security.
- Proficiency in SQL and exposure to NoSQL databases.
- Strong leadership and communication skills.
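
For the event-driven responsibilities above, here is a minimal boto3 sketch that publishes a custom event which downstream EventBridge rules can route to Lambda or other targets; the source and detail type are hypothetical.

```python
import json
import boto3

events = boto3.client("events")
events.put_events(Entries=[{
    "Source": "orders.service",          # hypothetical producer name
    "DetailType": "OrderPlaced",
    "Detail": json.dumps({"order_id": "o-123", "amount": 250}),
    "EventBusName": "default",
}])
# Consumers subscribe by creating rules that pattern-match on
# Source/DetailType and target Lambda, SQS, Step Functions, etc.
```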

Posted Date not available

Apply

5.0 - 10.0 years

11 - 15 Lacs

hyderabad

Work from Office

Stellantis is seeking a passionate, innovative, results-oriented Information Communication Technology (ICT) Manufacturing AWS Cloud Architect to join the team. As a cloud architect, the selected candidate will leverage business analysis, data management, and data engineering skills to develop sustainable data tools supporting Stellantis's Manufacturing Portfolio Planning. This role will collaborate closely with data analysts and business intelligence developers within the Product Development IT Data Insights team.

Job responsibilities include but are not limited to:
- Deep expertise in the design, creation, management, and business use of large datasets across a variety of data platforms.
- Assembling large, complex sets of data that meet non-functional and functional business requirements.
- Identifying, designing, and implementing internal process improvements, including re-designing infrastructure for greater scalability, optimizing data delivery, and automating manual processes.
- Building the infrastructure required for optimal extraction, transformation, and loading of data from various data sources using AWS, cloud, and other SQL technologies.
- Working with stakeholders to support their data infrastructure needs while assisting with data-related technical issues.
- Maintaining high-quality ontology and metadata for data systems.
- Establishing a strong relationship with the central BI/data engineering COE to ensure alignment in leveraging corporate standard technologies, processes, and reusable data models.
- Ensuring data security and developing traceable procedures for user access to data systems.

Qualifications, Experience and Competency:

Education: Bachelor's or Master's degree in Computer Science, or a related IT-focused degree.

Experience (Essential):
- Overall 10-15 years of IT experience.
- Develop, automate, and maintain the build of AWS components and operating systems.
- Work with application and architecture teams to conduct proofs of concept (POC) and implement the design in a production environment in AWS.
- Migrate and transform existing workloads from on premise to AWS.
- Minimum 5 years of experience in data engineering or data architecture: concepts, approach, data lakes, data extraction, data transformation.
- Proficient in ETL optimization, designing, coding, and tuning big data processes using Apache Spark or similar technologies.
- Experience operating very large data warehouses or data lakes.
- Investigate and develop new microservices and features using the latest technology stacks from AWS.
- Self-starter with the desire and ability to quickly learn new technologies.
- Strong interpersonal skills with the ability to communicate and build relationships at all levels.
- Hands-on experience with AWS cloud technologies like S3, AWS Glue, Glue Catalog, Athena (see the sketch below), AWS Lambda, AWS DMS, PySpark, and Snowflake.
- Experience building data pipelines and applications to stream and process large datasets at low latencies.

Experience (Desirable):
- Familiarity with data analytics, engineering processes, and technologies.
- Ability to work successfully within a global and cross-functional team.
- A passion for technology. We are looking for someone who is keen to leverage their existing skills while trying new approaches, and to share that knowledge with others to help grow the data and analytics teams at Stellantis to their full potential!

Specific Skill Requirement: AWS services (Glue, DMS, EC2, RDS, S3, VPCs and all core services, Lambda, API Gateway, CloudFormation, CloudWatch, Route 53, Athena, IAM), SQL, Qlik Sense, Python/Spark, and ETL optimization.

If you are interested, please share the below details along with your updated resume:
- First Name, Last Name
- Date of Birth
- Passport No. and Expiry Date
- Alternate Contact Number
- Total Experience, Relevant Experience
- Current CTC, Expected CTC
- Current Location, Preferred Location
- Current Organization, Payroll Company
- Notice Period
- Holding any offer?
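
As one concrete slice of the Glue/Athena stack listed above, here is a boto3 sketch that runs an Athena query and polls for completion; the database, table, and result bucket are hypothetical.

```python
import time
import boto3

athena = boto3.client("athena")

qid = athena.start_query_execution(
    QueryString="SELECT plant, count(*) AS n FROM telemetry GROUP BY plant",
    QueryExecutionContext={"Database": "manufacturing"},         # hypothetical
    ResultConfiguration={"OutputLocation": "s3://example-results-bucket/"},
)["QueryExecutionId"]

# Poll until the query reaches a terminal state.
while True:
    state = athena.get_query_execution(QueryExecutionId=qid) \
                  ["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
```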

Posted Date not available

Apply

12.0 - 16.0 years

14 - 20 Lacs

pune

Work from Office

AI/ML/GenAI AWS SME - Job Description

Role Overview:
An AWS SME with a data science background is responsible for leveraging Amazon Web Services (AWS) to design, implement, and manage data-driven solutions. This role involves a combination of cloud computing expertise and data science skills to optimize and innovate business processes.

Key Responsibilities:
- Data Analysis and Modelling: Analyze large datasets to derive actionable insights and build predictive models using AWS services like SageMaker, Bedrock, Textract, etc.
- Cloud Infrastructure Management: Design, deploy, and maintain scalable cloud infrastructure on AWS to support data science workflows.
- Machine Learning Implementation: Develop and deploy machine learning models using AWS ML services (see the sketch below).
- Security and Compliance: Ensure data security and compliance with industry standards and best practices.
- Collaboration: Work closely with cross-functional teams, including data engineers, analysts, DevOps, and business stakeholders, to deliver data-driven solutions.
- Performance Optimization: Monitor and optimize the performance of data science applications and cloud infrastructure.
- Documentation and Reporting: Document processes, models, and results, and present findings to stakeholders.

Skills & Qualifications:
- Proficiency in AWS services (e.g., EC2, S3, RDS, Lambda, SageMaker).
- Strong programming skills in Python.
- Experience with AI/ML project lifecycle steps.
- Knowledge of machine learning algorithms and frameworks (e.g., TensorFlow, scikit-learn).
- Familiarity with data pipeline tools (e.g., AWS Glue, Apache Airflow).
- Excellent communication and collaboration abilities.
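
As a small example of the SageMaker work mentioned above, here is a boto3 sketch calling an already-deployed real-time inference endpoint; the endpoint name and payload schema are hypothetical.

```python
import json
import boto3

runtime = boto3.client("sagemaker-runtime")

resp = runtime.invoke_endpoint(
    EndpointName="churn-model-prod",          # hypothetical deployed endpoint
    ContentType="application/json",
    Body=json.dumps({"features": [3.2, 0.4, 12.0]}),
)
print(json.loads(resp["Body"].read()))        # model prediction
```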

Posted Date not available

Apply

5.0 - 9.0 years

5 - 9 Lacs

bengaluru

Hybrid

PF detection is mandatory.

Responsibilities:
- Managing data storage solutions on AWS, such as Amazon S3, Amazon Redshift, and Amazon DynamoDB.
- Implementing and optimizing data processing workflows using AWS services like AWS Glue, Amazon EMR, and AWS Lambda.
- Working with Spotfire engineers and business analysts to ensure data is accessible and usable for analysis and visualization.
- Collaborating with other engineers and business stakeholders to understand requirements and deliver solutions.
- Writing code in languages like SQL, Python, or Scala to build and maintain data pipelines and applications.
- Using Infrastructure as Code (IaC) tools to automate the deployment and management of data infrastructure.
- A strong understanding of core AWS services, cloud concepts, and the AWS Well-Architected Framework.
- Conducting an extensive inventory/evaluation of existing environments and workflows.
- Designing and developing scalable data pipelines using AWS services to ensure efficient data flow and processing.
- Integrating/combining diverse data sources to maintain data consistency and reliability.
- Working closely with data engineers and other stakeholders to understand data requirements and ensure seamless data integration.
- Building and maintaining CI/CD pipelines.

Kindly acknowledge this mail with your updated resume.

Posted Date not available

Apply

4.0 - 9.0 years

4 - 8 Lacs

hyderabad

Work from Office

Must-Have Skills:
- Extensive experience with AWS services: IAM, S3, Glue, CloudFormation, and CloudWatch.
- In-depth understanding of AWS IAM policy evaluation for permissions and access control (see the sketch below).
- Proficient in using Bitbucket, Confluence, GitHub, and Visual Studio Code.
- Proficient in policy languages, particularly Rego scripting.

Good-to-Have Skills:
- Experience with the Wiz tool for security and compliance.
- Good programming skills in Python.
- Advanced knowledge of additional AWS services: ECS, EKS, Lambda, SNS, and SQS.

Roles & Responsibilities:
- Senior Developer on the Wiz team specializing in Rego and AWS.

Experience bands: Project Manager - one to three years; AWS CloudFormation - four to six years; AWS IAM - four to six years. PSP-defined SCU in Data Engineer.
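
Rego itself is evaluated with OPA and is not shown here; as an adjacent illustration of the "IAM policy evaluation" must-have, here is a boto3 sketch that simulates a custom policy against specific actions. The bucket and policy are hypothetical.

```python
import json
import boto3

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow",
                   "Action": "s3:GetObject",
                   "Resource": "arn:aws:s3:::app-bucket/*"}],  # hypothetical bucket
}

resp = iam.simulate_custom_policy(
    PolicyInputList=[json.dumps(policy)],
    ActionNames=["s3:GetObject", "s3:PutObject"],
)
for result in resp["EvaluationResults"]:
    # Expected: s3:GetObject -> allowed; s3:PutObject -> implicitDeny
    print(result["EvalActionName"], result["EvalDecision"])
```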

Posted Date not available

Apply

8.0 - 12.0 years

10 - 14 Lacs

pune

Hybrid

So, what's the role all about?
We are looking for a highly skilled and experienced Senior Specialist Software Engineer with strong expertise in C++ and .NET technologies to join our software development team. In this role, you will be responsible for designing, developing, and maintaining robust, scalable, and high-performance software applications aligned with business requirements and technical specifications.

How will you make an impact?
- Apply a strong understanding of software development best practices, principles, and standards throughout the development lifecycle.
- Write clean, efficient, and high-quality code that adheres to coding standards and software engineering best practices.
- Stay current with the latest trends, technologies, and methodologies in software development and incorporate them into project work.
- Provide technical guidance and support to team members, helping to resolve complex technical challenges.
- Conduct thorough code reviews and provide constructive feedback to ensure code quality and maintainability.
- Demonstrate deep knowledge of modern .NET technologies and C++ standards, along with a solid understanding of object-oriented design principles, design patterns, and software architecture.
- Work on large-scale applications and manage complex codebases effectively, leveraging strong knowledge of algorithms and data structures.
- Optimize application performance and use profiling and debugging tools to identify and address bottlenecks and issues.
- Utilize AWS cloud services for application development, deployment, and monitoring, including services such as EC2, S3, Lambda, CloudWatch, RDS, and ECS/EKS.
- Design and implement cloud-native or cloud-migrated solutions using AWS architecture best practices.
- Collaborate effectively with cross-functional teams and exhibit strong communication and interpersonal skills.
- Manage and track project timelines to ensure timely delivery of milestones and project goals.
- Promote and enforce adherence to software development best practices within the team.
- Mentor and coach junior developers, supporting their professional development and technical growth.

Have you got what it takes?
- Bachelor's degree in Computer Science, Software Engineering, or a related field.
- 8 to 12 years of professional experience in software development using .NET and C++ technologies.
- Strong understanding of Object-Oriented Programming (OOP) principles and experience applying design patterns in real-world scenarios.
- Hands-on experience in telephony systems, including VoIP, media streaming, SIP signaling, and RTP protocols.
- Deep knowledge of software development best practices, including design principles, testing strategies, version control, and continuous integration.
- Experience in database design and development using SQL Server or similar relational database systems.
- Proficient with development tools such as Visual Studio, Git, and JIRA.
- Strong analytical and problem-solving skills, with a focus on performance and scalability.
- Excellent verbal and written communication skills, with the ability to explain technical concepts clearly to both technical and non-technical stakeholders.
- Proven ability to work independently as well as collaboratively in a team-oriented environment.
- Self-motivated, detail-oriented, and committed to continuous learning and improvement.

Nice to Have:
- Experience working with public cloud platforms, preferably AWS.
- Hands-on experience in developing and deploying applications.
- Practical understanding of microservices architecture and distributed systems.
- Familiarity with Contact Center as a Service (CCaaS) platforms and Automatic Call Distribution (ACD) systems.
- Working knowledge of Agile/Scrum software development methodologies.
- Experience with C++, C#, .NET, and .NET Core for modern application development.

What's in it for you?
Join an ever-growing, market-disrupting, global company where the teams, comprised of the best of the best, work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NICE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NICEr!

Enjoy NICE-FLEX!
At NICE, we work according to the NICE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.

Requisition ID: 8260
Reporting into: Tech Manager, Engineering, CX
Role Type: Individual Contributor

Posted Date not available

Apply

7.0 - 11.0 years

7 - 17 Lacs

hyderabad

Work from Office

Role & responsibilities:
- Deploy and support automated AWS cloud-based tools and environments in support of application teams.
- Analyze and respond to incidents and problems, including developing automated monitoring and remediation to maintain uptime and expected service levels. This includes cloud infrastructure, applications, middleware, and other third-party software.
- Analyze and resolve problems associated with operating systems and middleware, for example Red Hat Linux, JBoss, Apache, Tomcat, Windows Server, IIS, etc.
- Manage, configure, respond to, and resolve AWS security alerts, including vulnerability and patch management.
- Design, generate, and interpret operational reports related to system health status, capacity management, and system performance management.
- Determine root cause for incidents, correlate recurring incidents to systemic problems, and drive towards resolution.
- Contribute to the build-out of cloud infrastructure, for example working with services such as load balancers, gateways, firewalls, subnets, security groups, and storage options.
- Use scripting and automation tools to increase efficiency, performance, and cost reduction, for example CloudFormation, Terraform, Unix shell, Python, PowerShell, Ansible, etc.
- Participate in the development of Systems Engineering departmental architecture, standards, and guidelines.
- Work closely with application teams following Agile methods and principles.
- Contribute and collaborate to design, document, and publish engineering standards, principles, guidelines, and best practices.
- Seek opportunities to increase efficiency through research and investigation, application team input, automation options, POCs, etc.
- Adhere to ethical standards and comply with the laws and regulations applicable to your job function.

Key Skills (Must-Have):
- Experience with core AWS services like EC2, S3, SNS, Lambda, CloudWatch, and CloudTrail (see the monitoring sketch below).
- Experience in the design, development, and implementation of AWS-based infrastructure solutions using AWS APIs and Python with boto3.
- Strong scripting experience in Python and PowerShell/Bash.
- Windows and Linux system administration: OS, middleware, application layer.
- Server, network, and storage performance benchmarking and optimization.
- In-depth understanding of the operational dependencies of applications, networks, systems, security, and policy.
- Experience with cloud orchestration tools like AWS CloudFormation and/or Terraform, with an emphasis on creating modular architecture.
- Experience with AWS IAM.
- Proficient in using Git branching, push/pull requests, and advanced Git workflows.

Preferred candidate profile:
- Experience with Jenkins, Ansible, or similar tools.
- Experience with application build technologies.
- Demonstrated knowledge of DevOps principles; hands-on experience required.
- Strong networking knowledge, preferably with DNS, subnets, routing, security groups, whitelisting, firewalls, and various networking infrastructure.
- CDK, Control Tower, AWS Control Tower Customization Solution.
- Experience in containerization and orchestration using Docker, Kubernetes, or Fargate/EKS/ECS.
- Familiar with analytics and log aggregation tools such as Splunk or Microsoft BI.

Job Type: Permanent
Role: Cloud Engineer
Location: Hyderabad
Experience: 7+ yrs
Notice Period: Immediate - 15 days
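
For the automated-monitoring item in the must-have list, here is a boto3 example creating a CloudWatch CPU alarm; the instance id and SNS topic ARN are hypothetical.

```python
import boto3

cw = boto3.client("cloudwatch")
cw.put_metric_alarm(
    AlarmName="web-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,                 # five-minute datapoints
    EvaluationPeriods=2,        # must breach twice in a row to alarm
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:ops-alerts"],
)
```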

Posted Date not available

Apply

8.0 - 13.0 years

20 - 35 Lacs

pune

Work from Office

Position: Senior AWS Cloud Engineer
Location: Smartworks, 43EQ, Balewadi High Street, Pune
Shift: 4:30 PM IST to 1:30 AM IST (first 3 months), flexible to regular IST hours thereafter

About Reliable Group:
Reliable Group is a US-based company headquartered in New York, with two offices in India: Navi Mumbai (Airoli) and Smartworks, 43EQ, Balewadi High Street, Pune. We operate across three key business verticals:
On-Demand: Providing specialized technology talent for global clients.
GCC (Global Capability Centers): Partnering with enterprises to build and scale their India operations.
Product Development: Our in-house AI/ML product company develops AI chatbots and intelligent solutions for US healthcare and insurance companies.

About This Opportunity:
This role is for one of Reliable Group's biggest GCC accounts (RSC India), which we are building in Pune. We are on a mission to hire 1,000+ people for this account over the next phase. You will join the founding team for this GCC and play a critical role in shaping its AWS cloud infrastructure from the ground up. The client is the second-largest healthcare company in the USA, ranked in the Fortune 50, offering a unique opportunity to work on high-impact, enterprise-scale cloud solutions in the healthcare domain.

Role Overview:
We are seeking a highly skilled Senior AWS Cloud Engineer with proven experience in building AWS environments from the ground up, not just consuming existing services. This role requires an AWS builder mindset, capable of designing, provisioning, and managing multi-account AWS architectures, networking, security, and database platforms end-to-end.

Key Responsibilities:
AWS Environment Provisioning: Design and provision multi-account AWS environments using best practices (Control Tower, Organizations). Set up and configure networking (VPC, Transit Gateway, Private Endpoints, subnets, routing, firewalls). Provision and manage AWS database platforms (RDS, Aurora, DynamoDB) with high availability and security. Manage the full AWS account lifecycle, including IAM roles, policies, and access controls (see the Organizations sketch after this listing).
Infrastructure as Code (IaC): Develop and maintain AWS infrastructure using Terraform and AWS CloudFormation. Automate account provisioning, networking, and security configuration.
Security & Compliance: Implement AWS security best practices, including IAM governance, encryption, and compliance automation. Use tools like AWS Config, GuardDuty, Security Hub, and Vault to enforce standards.
Automation & CI/CD: Create automation scripts in Python, Bash, or PowerShell for provisioning and management tasks. Integrate AWS infrastructure with CI/CD pipelines (Jenkins, GitHub Actions, GitLab CI/CD).
Monitoring & Optimization: Implement monitoring solutions (CloudWatch, Prometheus, Grafana) for infrastructure health and performance. Optimize cost, performance, and scalability of AWS environments.

Required Skills & Experience:
8+ years of experience in cloud engineering, with 4+ years focused on AWS provisioning.
Strong expertise in: AWS multi-account setup (Control Tower/Organizations); VPC design and networking (Transit Gateway, Private Endpoints, routing, firewalls); IAM policies, role-based access control, and security hardening; database provisioning (RDS, Aurora, DynamoDB).
Proficiency in Terraform and AWS CloudFormation.
Hands-on experience with scripting (Python, Bash, PowerShell).
Experience with CI/CD pipelines and automation tools.
Familiarity with monitoring and logging tools.
Preferred Certifications:
AWS Certified Solutions Architect – Professional
AWS Certified DevOps Engineer – Professional
HashiCorp Certified: Terraform Associate

Work Schedule:
Initial 3 months: 4:30 PM IST to 1:30 AM IST to work with the US team. After ramp-up: option to transition to regular India working hours.
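The "building AWS environments from the ground up" requirement centers on programmatic account provisioning. Below is a minimal boto3 sketch using the AWS Organizations API, which this posting names; the account email/name and the OU/root ids are placeholder assumptions, not real values.

```python
"""Minimal sketch of automated member-account provisioning with boto3 and
AWS Organizations. Account creation is asynchronous, so we poll its status."""
import time
import boto3

org = boto3.client("organizations")

def create_member_account(email: str, name: str) -> str:
    """Kick off account creation and block until it succeeds or fails."""
    response = org.create_account(Email=email, AccountName=name)
    request_id = response["CreateAccountStatus"]["Id"]
    while True:
        status = org.describe_create_account_status(
            CreateAccountRequestId=request_id
        )["CreateAccountStatus"]
        if status["State"] == "SUCCEEDED":
            return status["AccountId"]
        if status["State"] == "FAILED":
            raise RuntimeError(f"Creation failed: {status.get('FailureReason')}")
        time.sleep(10)  # poll politely; creation can take several minutes

if __name__ == "__main__":
    account_id = create_member_account("aws-dev@example.com", "gcc-dev")
    # Move the new account into a workload OU (ids below are placeholders).
    org.move_account(
        AccountId=account_id,
        SourceParentId="r-examplerootid",
        DestinationParentId="ou-example-workloads",
    )
    print(f"Provisioned account {account_id}")
```

In a Control Tower landing zone the same outcome is usually reached through Account Factory rather than raw API calls, but the underlying lifecycle (create, poll, place in an OU) is the same.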

Posted Date not available

Apply

5.0 - 10.0 years

20 - 25 Lacs

pune

Work from Office

Lead Software Engineer (AWS Cloud, Platform Engineering)
We are seeking an experienced and motivated Lead Software Engineer to join Mastercard's AWS Platform Engineering Team. This role is critical in designing, building, and maintaining a scalable, secure, and highly available cloud platform on AWS. You will collaborate with cross-functional teams to ensure optimal platform performance, cost efficiency, and alignment with best practices. The ideal candidate has a strong understanding of AWS services, DevOps practices, and infrastructure automation, and focuses on enabling application teams to deliver value rapidly and securely.

The Role:
Design, implement, and maintain a scalable multi-account AWS platform, leveraging services like AWS Organizations, VPC, IAM, EKS, EC2, S3, RDS, Glue, EMR, MSK, etc.
Develop and manage infrastructure using tools like CloudFormation/CDK (a CDK sketch follows this listing).
Manage secure connectivity using technologies like AWS PrivateLink, Transit Gateway, and Direct Connect.
Implement and maintain secure access controls and guardrails using AWS Control Tower, Service Control Policies (SCPs), and IAM.
Engage with and improve the lifecycle of the AWS platform and services, from development to deployment, operation, and refinement.
Scale systems sustainably through mechanisms like automation, and evolve systems by pushing for changes that improve reliability and velocity.
Practice sustainable incident response and blameless postmortems.
Proven experience in leading engineering teams, mentoring engineers, and driving technical excellence.
Ability to lead architecture discussions, conduct code reviews, and foster a collaborative engineering culture.

All About You:
5+ years of experience in AWS cloud engineering or similar roles.
Strong understanding of Object-Oriented Programming (OOP) principles and experience applying them in languages like Python, TypeScript, and Java.
Fluent in the AWS Cloud Development Kit (CDK).
Proficient with AWS services, especially EKS, EC2, RDS, Lambda, API Gateway, S3, Route 53, MSK, Glue, EMR, etc.
Strong knowledge of networking in AWS (VPC, Direct Connect, PrivateLink, Transit Gateway, etc.).
Experience with CI/CD tools like AWS CodePipeline, Jenkins, Bitbucket/GitHub, Artifactory, SonarQube, etc.
Strong knowledge of best practices around logging, monitoring, and alerting solutions.
Experience with software deployment and configuration automation.
Expertise in designing, analyzing, and troubleshooting large-scale systems.
Ability to debug, optimize code, and automate routine tasks.
Systematic problem-solving approach, with effective communication skills and a sense of drive.
Hands-on experience with AWS Control Tower, including setting up guardrails, managing Service Control Policies (SCPs), and configuring Landing Zones.
Knowledge of security best practices and frameworks.
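Since the posting asks for CDK fluency, here is a minimal AWS CDK v2 sketch in Python (the posting lists Python among its languages): a stack that provisions a two-AZ VPC and deploys a small Lambda function into it. The stack and resource names are illustrative assumptions, not Mastercard's actual setup.

```python
"""Minimal AWS CDK (v2, Python) sketch: a VPC plus an inline Lambda,
illustrating the CloudFormation/CDK infrastructure work this role describes."""
from aws_cdk import App, Stack, Duration
from aws_cdk import aws_ec2 as ec2
from aws_cdk import aws_lambda as lambda_
from constructs import Construct

class PlatformStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Two-AZ VPC with CDK's default public/private subnet layout.
        vpc = ec2.Vpc(self, "PlatformVpc", max_azs=2)

        # Small inline Lambda placed in the VPC's private subnets.
        lambda_.Function(
            self,
            "HealthCheckFn",
            runtime=lambda_.Runtime.PYTHON_3_11,
            handler="index.handler",
            timeout=Duration.seconds(30),
            vpc=vpc,
            code=lambda_.Code.from_inline(
                "def handler(event, context):\n    return {'status': 'ok'}\n"
            ),
        )

app = App()
PlatformStack(app, "PlatformStack")
app.synth()
```

Running `cdk synth` against this app emits the equivalent CloudFormation template, and `cdk deploy` provisions it, which is the develop-and-manage loop the role calls out.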

Posted Date not available

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.
