
290 AWS IAM Jobs - Page 6

JobPe aggregates listings for easy application access, but you apply directly on the original job portal.

6.0 - 11.0 years

5 - 9 Lacs

Bengaluru

Work from Office


Req ID: 324967. We are currently seeking a Senior Cloud Engineer to join our team in Bangalore, Karnataka (IN-KA), India (IN). Senior Cloud Engineer - Grade 8. At NTT DATA, we know that with the right people on board, anything is possible. The quality, integrity, and commitment of our employees are key factors in our company's growth, market presence and our ability to help our clients stay a step ahead of the competition. By hiring the best people and helping them grow both professionally and personally, we ensure a bright future for NTT DATA and for the people who work here.

Preferred Experience:
- Solid understanding of cloud computing, networking, and storage principles, with a focus on Azure.
- Cloud administration, maintenance, and troubleshooting experience; willing to work on multiple cloud platforms.
- Works independently on issue resolutions, service requests and change requests following the ITIL process.
- Exposure to container deployments and container orchestration tools.
- Good experience troubleshooting cloud platform-level issues.
- Strong scripting skills; experience in IaC and automated deployments using Azure Pipelines or Terraform is required.
- Ability to work closely with senior managers/consultants on new project requirements and to work independently on technical implementations.
- Able to work on-call rotations and provide shift-hours support at L2 level.
- Able to work independently in a project scenario and do POCs.
- Experience updating KB articles, Problem Management articles, and SOPs/runbooks.
- Passion for delivering timely and outstanding customer service.
- Able to help Level 1 staff in support.
- Great written and oral communication skills with internal and external customers.

Basic Qualifications:
- 6+ years of overall operational experience.
- 4+ years of Azure/AWS experience.
- 3+ years of experience working in a diverse cloud support environment in a 24x7 production support model.
- 1+ years of DevOps/scripting/APIs experience.

Preferred Certifications:
- Azure Administrator Associate and/or Expert certification is preferred.
- DevOps/Terraform certification is a plus.
- Four-year BS/BA degree in Information Technology or equivalent experience.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

10 - 14 Lacs

Bengaluru

Work from Office


Req ID: 306668. We are currently seeking a Cloud Solution Delivery Sr Advisor to join our team in Bangalore, Karnataka (IN-KA), India (IN).

Position Overview
We are seeking a highly skilled and experienced Lead Data Engineer to join our dynamic team. The ideal candidate will have a strong background in implementing data solutions using AWS infrastructure and a variety of core and supplementary technologies, leading teams and directing engineering workloads. This role requires a deep understanding of data engineering, cloud services, and the ability to implement high-quality solutions.

Key Responsibilities
Lead and direct a small team of engineers engaged in:
- Engineering end-to-end data solutions using AWS services, including Lambda, S3, Snowflake, DBT, and Apache Airflow
- Cataloguing data
- Collaborating with cross-functional teams to understand business requirements and translate them into technical solutions
- Providing best-in-class documentation for downstream teams to develop, test and run data products built using our tools
- Testing our tooling, and providing a framework for downstream teams to test their utilisation of our products
- Helping to deliver CI, CD and IaC for both our own tooling, and as templates for downstream teams
- Using DBT projects to define re-usable pipelines (see the sketch after this listing)

Required Skills and Qualifications
- Bachelor's degree in Computer Science, Engineering, or related field
- 5+ years of experience in data engineering
- 2+ years of experience leading a team of data engineers
- Experience in AWS cloud services
- Expertise with Python and SQL
- Experience using Git/GitHub for source control management
- Experience with Snowflake
- Strong understanding of lakehouse architectures and best practices
- Strong problem-solving skills and ability to think critically
- Excellent communication skills to convey complex technical concepts to both technical and non-technical stakeholders
- Strong use of version control and proven ability to govern a team in the best-practice use of version control
- Strong understanding of Agile and proven ability to govern a team in the best-practice use of Agile methodologies

Preferred Skills and Qualifications
- An understanding of lakehouses
- An understanding of Apache Iceberg tables
- An understanding of data cataloguing
- Knowledge of Apache Airflow for data orchestration
- An understanding of DBT
- SnowPro Core certification
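The DBT-plus-Airflow orchestration this listing asks for can be illustrated with a short sketch. This is not the employer's pipeline; it is a minimal, hypothetical Airflow DAG (using the Airflow 2.4+ `schedule` argument) that runs and tests a DBT project daily, with the DAG id and project path invented for the example.

    # A minimal sketch, assuming Airflow 2.4+ and the dbt CLI on the worker.
    # The DAG id and project directory are hypothetical.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    with DAG(
        dag_id="daily_dbt_refresh",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        # Build the re-usable DBT models that downstream teams consume.
        dbt_run = BashOperator(
            task_id="dbt_run",
            bash_command="dbt run --project-dir /opt/dbt/analytics",
        )
        # Validate the models before exposing them downstream.
        dbt_test = BashOperator(
            task_id="dbt_test",
            bash_command="dbt test --project-dir /opt/dbt/analytics",
        )
        dbt_run >> dbt_test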

Posted 2 weeks ago

Apply

6.0 - 11.0 years

5 - 9 Lacs

Bengaluru

Work from Office


Req ID: 324966. We are currently seeking a Senior Cloud Engineer to join our team in Bangalore, Karnataka (IN-KA), India (IN). Senior Cloud Engineer - Grade 8. At NTT DATA, we know that with the right people on board, anything is possible. The quality, integrity, and commitment of our employees are key factors in our company's growth, market presence and our ability to help our clients stay a step ahead of the competition. By hiring the best people and helping them grow both professionally and personally, we ensure a bright future for NTT DATA and for the people who work here.

Preferred Experience:
- Solid understanding of cloud computing, networking, and storage principles, with a focus on Azure.
- Cloud administration, maintenance, and troubleshooting experience; willing to work on multiple cloud platforms.
- Works independently on issue resolutions, service requests and change requests following the ITIL process.
- Exposure to container deployments and container orchestration tools.
- Good experience troubleshooting cloud platform-level issues.
- Strong scripting skills; experience in IaC and automated deployments using Azure Pipelines or Terraform is required.
- Ability to work closely with senior managers/consultants on new project requirements and to work independently on technical implementations.
- Able to work on-call rotations and provide shift-hours support at L2 level.
- Able to work independently in a project scenario and do POCs.
- Experience updating KB articles, Problem Management articles, and SOPs/runbooks.
- Passion for delivering timely and outstanding customer service.
- Able to help Level 1 staff in support.
- Great written and oral communication skills with internal and external customers.

Basic Qualifications:
- 6+ years of overall operational experience.
- 4+ years of Azure/AWS experience.
- 3+ years of experience working in a diverse cloud support environment in a 24x7 production support model.
- 1+ years of DevOps/scripting/APIs experience.

Preferred Certifications:
- Azure Administrator Associate and/or Expert certification is preferred.
- DevOps/Terraform certification is a plus.
- Four-year BS/BA degree in Information Technology or equivalent experience.

Posted 2 weeks ago

Apply

7.0 - 12.0 years

16 - 20 Lacs

Pune

Work from Office


Req ID: 301930. We are currently seeking a Digital Solution Architect Lead Advisor to join our team in Pune, Maharashtra (IN-MH), India (IN).

Position Overview
We are seeking a highly skilled and experienced Data Solution Architect to join our dynamic team. The ideal candidate will have a strong background in designing and implementing data solutions using AWS infrastructure and a variety of core and supplementary technologies. This role requires a deep understanding of data architecture, cloud services, and the ability to drive innovative solutions to meet business needs.

Key Responsibilities
- Architect end-to-end data solutions using AWS services, including Lambda, SNS, S3, and EKS
- Design and implement data streaming pipelines using Kafka/Confluent Kafka (see the sketch after this listing)
- Develop data processing applications using Python
- Ensure data security and compliance throughout the architecture
- Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions
- Optimize data flows for performance, cost-efficiency, and scalability
- Implement data governance and quality control measures
- Provide technical leadership and mentorship to development teams
- Stay current with emerging technologies and industry trends

Required Skills and Qualifications
- Bachelor's degree in Computer Science, Engineering, or related field
- 7+ years of experience in data architecture and engineering
- Strong expertise in AWS cloud services, particularly Lambda, SNS, S3, and EKS
- Proficiency in Kafka/Confluent Kafka and Python
- Experience with Snyk for security scanning and vulnerability management
- Solid understanding of data streaming architectures and best practices
- Strong problem-solving skills and ability to think critically
- Excellent communication skills to convey complex technical concepts to both technical and non-technical stakeholders
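As a hedged illustration of the Kafka/Confluent streaming work described above (not the employer's actual code), here is a minimal Python producer using the confluent-kafka client; the broker address, topic, and payload are hypothetical.

    # A minimal sketch, assuming the confluent-kafka package.
    # Broker, topic, and event fields are hypothetical placeholders.
    import json

    from confluent_kafka import Producer

    producer = Producer({"bootstrap.servers": "broker-1:9092"})

    def on_delivery(err, msg):
        # Surface delivery failures instead of silently dropping events.
        if err is not None:
            print(f"Delivery failed: {err}")

    event = {"order_id": 42, "status": "created"}
    producer.produce(
        topic="orders",
        key=str(event["order_id"]),
        value=json.dumps(event),
        on_delivery=on_delivery,
    )
    producer.flush()  # block until outstanding messages are delivered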

Posted 2 weeks ago

Apply

7.0 - 12.0 years

13 - 18 Lacs

Bengaluru

Work from Office


We are currently seeking a Lead Data Architect to join our team in Bangalore, Karnataka (IN-KA), India (IN).

Position Overview
We are seeking a highly skilled and experienced Data Architect to join our dynamic team. The ideal candidate will have a strong background in designing and implementing data solutions using AWS infrastructure and a variety of core and supplementary technologies. This role requires a deep understanding of data architecture, cloud services, and the ability to drive innovative solutions to meet business needs.

Key Responsibilities
- Architect end-to-end data solutions using AWS services, including Lambda, SNS, S3, EKS, Kafka and Confluent, all within a larger, overarching programme ecosystem
- Architect data processing applications using Python, Kafka, Confluent Cloud and AWS
- Ensure data security and compliance throughout the architecture
- Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions
- Optimize data flows for performance, cost-efficiency, and scalability
- Implement data governance and quality control measures
- Ensure delivery of CI, CD and IaC for NTT tooling, and as templates for downstream teams
- Provide technical leadership and mentorship to development teams and lead engineers
- Stay current with emerging technologies and industry trends

Required Skills and Qualifications
- Bachelor's degree in Computer Science, Engineering, or related field
- 7+ years of experience in data architecture and engineering
- Strong expertise in AWS cloud services, particularly Lambda, SNS, S3, and EKS
- Strong experience with Confluent
- Strong experience with Kafka
- Solid understanding of data streaming architectures and best practices
- Strong problem-solving skills and ability to think critically
- Excellent communication skills to convey complex technical concepts to both technical and non-technical stakeholders
- Knowledge of Apache Airflow for data orchestration

Preferred Qualifications
- An understanding of cloud networking patterns and practices
- Experience working on a library or other long-term product
- Knowledge of the Flink ecosystem
- Experience with Terraform
- Deep experience with CI/CD pipelines
- Strong understanding of the JVM language family
- Understanding of GDPR and the correct handling of PII
- Expertise in technical interface design
- Use of Docker

Responsibilities
- Design and implement scalable data architectures using AWS services, Confluent and Kafka
- Develop data ingestion, processing, and storage solutions using Python, AWS Lambda, Confluent and Kafka
- Ensure data security and implement best practices using tools like Snyk
- Optimize data pipelines for performance and cost-efficiency
- Collaborate with data scientists and analysts to enable efficient data access and analysis
- Implement data governance policies and procedures
- Provide technical guidance and mentorship to junior team members
- Evaluate and recommend new technologies to improve data architecture

Posted 2 weeks ago

Apply

2.0 - 6.0 years

1 - 5 Lacs

Noida

Work from Office


Req ID: 324014. We are currently seeking a Tableau Admin with AWS Experience to join our team in Noida, Uttar Pradesh (IN-UP), India (IN).

Tableau Admin with AWS Experience
We are seeking a skilled Tableau Administrator with experience in AWS to join our team. The ideal candidate will be responsible for managing and optimizing our Tableau Server environment hosted on AWS, ensuring efficient operation, data security, and seamless integration with other data sources and analytics tools.

Key Responsibilities
- Manage, configure, and administer Tableau Server on AWS, including setting up sites and managing user access and permissions.
- Monitor server activity/performance, conduct regular system maintenance, and troubleshoot issues to ensure optimal performance and minimal downtime.
- Collaborate with data engineers and analysts to optimize data sources and dashboard performance.
- Implement and manage security protocols, ensuring compliance with data governance and privacy policies.
- Automate monitoring and server management tasks using AWS and Tableau APIs (see the sketch after this listing).
- Assist in the design and development of complex Tableau dashboards. Provide technical support and training to Tableau users.
- Stay updated on the latest Tableau and AWS features and best practices, recommending and implementing improvements.

Qualifications
- Proven experience as a Tableau Administrator, with strong skills in Tableau Server and Tableau Desktop.
- Experience with AWS, particularly with services relevant to hosting and managing Tableau Server (e.g., EC2, S3, RDS).
- Familiarity with SQL and experience working with various databases. Knowledge of data integration, ETL processes, and data warehousing principles.
- Strong problem-solving skills and the ability to work in a fast-paced environment.
- Excellent communication and collaboration skills.
- Relevant certifications in Tableau and AWS are a plus.

A Tableau Administrator, also known as a Tableau Server Administrator, is responsible for managing and maintaining Tableau Server, a platform that enables organizations to create, share, and collaborate on data visualizations and dashboards. Here's a typical job description for a Tableau Admin:
1. Server Administration: Install, configure, and maintain Tableau Server to ensure its reliability, performance, and security.
2. User Management: Manage user accounts, roles, and permissions on Tableau Server, ensuring appropriate access control.
3. Security: Implement security measures, including authentication, encryption, and access controls, to protect sensitive data and dashboards.
4. Data Source Connections: Set up and manage connections to various data sources, databases, and data warehouses for data extraction.
5. License Management: Monitor Tableau licensing, allocate licenses as needed, and ensure compliance with licensing agreements.
6. Backup and Recovery: Establish backup and disaster recovery plans to safeguard Tableau Server data and configurations.
7. Performance Optimization: Monitor server performance, identify bottlenecks, and optimize configurations to ensure smooth dashboard loading and efficient data processing.
8. Scaling: Scale Tableau Server resources to accommodate increasing user demand and data volume.
9. Troubleshooting: Diagnose and resolve issues related to Tableau Server, data sources, and dashboards.
10. Version Upgrades: Plan and execute server upgrades, apply patches, and stay current with Tableau releases.
11. Monitoring and Logging: Set up monitoring tools and logs to track server health, user activity, and performance metrics.
12. Training and Support: Provide training and support to Tableau users, helping them with dashboard development and troubleshooting.
13. Collaboration: Collaborate with data analysts, data scientists, and business users to understand their requirements and assist with dashboard development.
14. Documentation: Maintain documentation for server configurations, procedures, and best practices.
15. Governance: Implement data governance policies and practices to maintain data quality and consistency across Tableau dashboards.
16. Integration: Collaborate with IT teams to integrate Tableau with other data management systems and tools.
17. Usage Analytics: Generate reports and insights on Tableau usage and adoption to inform decision-making.
18. Stay Current: Keep up-to-date with Tableau updates, new features, and best practices in server administration.

A Tableau Administrator plays a vital role in ensuring that Tableau is effectively utilized within an organization, allowing users to harness the power of data visualization and analytics for informed decision-making.
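As a hedged example of automating Tableau Server management via its APIs (not this team's code), here is a minimal inventory sketch using the tableauserverclient Python library; the server URL, site, and token names are hypothetical.

    # A minimal sketch, assuming the tableauserverclient package; all
    # connection details below are hypothetical placeholders.
    import tableauserverclient as TSC

    auth = TSC.PersonalAccessTokenAuth(
        "token-name", "token-secret", site_id="analytics"
    )
    server = TSC.Server("https://tableau.example.com", use_server_version=True)

    with server.auth.sign_in(auth):
        # Page through every workbook on the site and report last-update
        # times, e.g. to flag stale content for cleanup.
        for workbook in TSC.Pager(server.workbooks):
            print(workbook.name, workbook.updated_at)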

Posted 2 weeks ago

Apply

8.0 - 12.0 years

10 - 14 Lacs

Gurugram

Work from Office


About The Role: AWS Cloud Engineer

Required Skills and Qualifications:
- 4-7 years of hands-on experience with AWS services, including EC2, S3, Lambda, ECS, EKS, RDS/DynamoDB, and API Gateway.
- Strong working knowledge of Python and JavaScript.
- Strong experience with Terraform for infrastructure as code.
- Expertise in defining and managing IAM roles, policies, and configurations (see the sketch after this listing).
- Experience with networking, security, and monitoring within AWS environments.
- Experience with containerization technologies such as Docker and orchestration tools like Kubernetes (EKS).
- Strong analytical, troubleshooting, and problem-solving skills.
- Experience with AI/ML technologies and services like Textract is preferred.
- AWS certifications (AWS Developer, Machine Learning - Specialty) are a plus.

Deliverables:
No. | Performance Parameter | Measure
1 | Process | No. of cases resolved per day, compliance to process and quality standards, meeting process-level SLAs, Pulse score, customer feedback
2 | Self-Management | Productivity, efficiency, absenteeism, training hours, no. of technical trainings completed
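Since the role centers on defining and managing IAM roles and policies, a minimal boto3 sketch of that task may help; it is illustrative only, the role name is hypothetical, and credentials are assumed to come from the environment.

    # A minimal sketch, assuming boto3 and ambient AWS credentials.
    import json

    import boto3

    iam = boto3.client("iam")

    # Trust policy allowing the Lambda service to assume the role.
    trust_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "lambda.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }],
    }

    role = iam.create_role(
        RoleName="demo-lambda-role",  # hypothetical name
        AssumeRolePolicyDocument=json.dumps(trust_policy),
    )

    # Grant basic CloudWatch Logs permissions via an AWS-managed policy.
    iam.attach_role_policy(
        RoleName="demo-lambda-role",
        PolicyArn="arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole",
    )
    print(role["Role"]["Arn"])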

Posted 2 weeks ago

Apply

3.0 - 8.0 years

9 - 13 Lacs

Hyderabad

Work from Office


Project Role: Software Development Lead
Project Role Description: Develop and configure software systems either end-to-end or for a specific stage of the product lifecycle. Apply knowledge of technologies, applications, methodologies, processes and tools to support a client, project or entity.
Must-have skills: Business Analysis
Good-to-have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: Bachelor of Engineering in Electronics or any related stream

Summary: Working closely with stakeholders across departments, the Business Analyst gathers and documents requirements, conducts data analysis, and supports project implementation to ensure alignment with business objectives.

Roles & Responsibilities:
1. Collaborate with stakeholders to gather, document, and validate business and technical requirements related to AWS cloud-based systems.
2. Analyze current infrastructure, applications, and workflows to identify opportunities for migration, optimization, and cost-efficiency on AWS.
3. Assist in creating business cases for cloud adoption or enhancements, including ROI and TCO analysis.
4. Support cloud transformation initiatives by developing detailed functional specifications and user stories.
5. Liaise with cloud architects, DevOps engineers, and developers to ensure solutions are aligned with requirements and business goals.
6. Conduct gap analyses, risk assessments, and impact evaluations for proposed AWS solutions.
7. Prepare reports, dashboards, and presentations to communicate findings and recommendations to stakeholders.
8. Ensure compliance with AWS best practices and relevant security, governance, and regulatory requirements.

Professional & Technical Skills:
1. Proven experience (3+ years) as a Business Analyst, preferably in cloud computing environments.
2. Solid understanding of AWS services (EC2, S3, RDS, Lambda, IAM, etc.) and cloud architecture.
3. Familiarity with Agile and DevOps methodologies.
4. Strong analytical, problem-solving, and documentation skills.
5. Excellent communication and stakeholder management abilities.
6. AWS certification (e.g., AWS Certified Cloud Practitioner or Solutions Architect Associate) is a plus.
7. Well-developed analytical skills; rigorous but pragmatic, able to justify decisions with solid rationale.

Additional Information:
- The candidate should have a minimum of 3 years of experience as a Business Analyst.
- This position is based at our Hyderabad office.
- A Bachelor of Engineering in Electronics or any related stream is required.

Qualification: Bachelor of Engineering in Electronics or any related stream

Posted 2 weeks ago

Apply

5.0 - 10.0 years

9 - 13 Lacs

Hyderabad

Work from Office


Project Role: Software Development Lead
Project Role Description: Develop and configure software systems either end-to-end or for a specific stage of the product lifecycle. Apply knowledge of technologies, applications, methodologies, processes and tools to support a client, project or entity.
Must-have skills: Microsoft ASP.NET
Good-to-have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As a Software Development Lead, you will be responsible for developing and configuring software systems, applying knowledge of technologies, methodologies, and tools to support projects or clients in Hyderabad.

Roles & Responsibilities:
- Lead the design, development, and maintenance of .NET applications.
- Architect and implement cloud-native solutions using AWS services.
- Mentor and guide junior developers, fostering best practices and code quality.
- Collaborate with cross-functional teams to deliver high-quality software solutions.
- Participate in code reviews and ensure adherence to coding standards.
- Troubleshoot, debug, and optimize application performance.

Professional & Technical Skills:
- Experience in .NET development, with at least 3 years in a lead role.
- Strong proficiency in C#, ASP.NET Core, and .NET Core.
- Experience with Web API development and RESTful services.
- Familiarity with front-end technologies (HTML, CSS, JavaScript).
- Knowledge of database technologies (SQL Server, Entity Framework).
- Solid understanding of AWS services and cloud architecture.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Microsoft ASP.NET.
- This position is based at our Hyderabad office.
- 15 years of full-time education is required.

Qualification: 15 years full time education

Posted 2 weeks ago

Apply

15.0 - 20.0 years

10 - 14 Lacs

Hyderabad

Work from Office


Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must-have skills: Amazon Web Services (AWS)
Good-to-have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure that application development aligns with business objectives, overseeing project timelines, and facilitating communication among stakeholders to drive project success. You will also engage in problem-solving activities, ensuring that the applications meet the required standards and specifications while fostering a collaborative environment for your team members.

Roles & Responsibilities:
- Collaborate with and manage the team to perform.
- Be responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge.
- Continuously assess and improve application performance and user experience.

A resource with six years of experience and expertise in AWS is expected to take on a variety of responsibilities that leverage their technical skills and industry knowledge:
- Cloud Architecture Design: Developing and implementing scalable and secure cloud architectures tailored to meet business needs.
- Deployment and Management: Overseeing the deployment of applications and services on AWS, ensuring optimal performance and reliability.
- Cost Optimization: Analyzing cloud usage and implementing strategies to optimize costs while maintaining service quality.
- Security Compliance: Ensuring that all AWS services comply with security best practices and organizational policies.
- Collaboration and Mentorship: Working closely with cross-functional teams and mentoring junior staff to enhance their AWS skills and knowledge.
- Troubleshooting and Support: Providing technical support and troubleshooting for AWS-related issues, ensuring minimal downtime and disruption.
- Continuous Learning: Staying updated with the latest AWS features and industry trends to continuously improve cloud solutions and practices.
This role is pivotal in driving cloud initiatives and ensuring that the organization maximizes its investment in AWS technologies.

Professional & Technical Skills:
- Must-have skills: Proficiency in Amazon Web Services (AWS).
- Strong understanding of cloud architecture and deployment strategies.
- Experience with application lifecycle management and DevOps practices.
- Familiarity with containerization technologies such as Docker and Kubernetes.
- Ability to troubleshoot and resolve application issues efficiently.
- A professional with six years of AWS experience possesses a robust skill set spanning cloud architecture design and the deployment and management of scalable applications, with proficiency in core AWS services such as EC2, S3, RDS, and Lambda to optimize solutions for performance and cost-efficiency. Their experience typically extends to implementing security best practices, ensuring high availability, automating processes with tools like CloudFormation and Terraform, using monitoring and logging services such as CloudWatch, and collaborating effectively with cross-functional teams, allowing them to contribute across cloud architecture and operations.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Amazon Web Services (AWS).
- This position is based at our Hyderabad office.
- 15 years of full-time education is required.

Qualification: 15 years full time education

Posted 2 weeks ago

Apply

8.0 - 13.0 years

1 - 4 Lacs

Pune

Work from Office


Roles & Responsibilities:
- Provides expert-level development, system analysis, design, and implementation of applications using AWS services, specifically using Python for Lambda.
- Translates technical specifications and/or design models into code for new or enhancement projects (for internal or external clients).
- Develops code that reuses objects, is well-structured, includes sufficient comments, and is easy to maintain.
- Provides follow-up production support when needed; submits change control requests and documents.
- Participates in design, code, and test inspections throughout the life cycle to identify issues and ensure methodology compliance.
- Participates in systems analysis activities, including system requirements analysis and definition (e.g. prototyping), and in other meetings such as those for use case creation and analysis.
- Performs unit testing and writes appropriate unit test plans to ensure requirements are satisfied.
- Assists in integration, systems acceptance, and other related testing as needed.
- Ensures developed code is optimized to meet client performance specifications associated with page rendering time by completing page performance tests.

Technical Skills Required:
- Experience building large-scale batch and data pipelines with data processing frameworks on the AWS cloud platform using PySpark (on EMR) and Glue ETL (see the sketch after this listing).
- Deep experience developing data processing and data manipulation tasks using PySpark, such as reading data from external sources, merging data, performing data enrichment, and loading into target data destinations.
- Experience deploying and operationalizing code using CI/CD tools, Bitbucket and Bamboo.
- Strong AWS cloud computing experience; extensive experience with Lambda, S3, EMR, and Redshift.
- Should have worked on Data Warehouse/Database technologies for at least 8 years.
- Any AWS certification will be an added advantage.
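As a hedged illustration of the PySpark read-merge-enrich-load pattern the listing describes (not the employer's code), here is a minimal job; the bucket paths and column names are hypothetical.

    # A minimal sketch, assuming PySpark on a cluster with S3 access.
    # Bucket names and columns are hypothetical placeholders.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders-enrichment").getOrCreate()

    orders = spark.read.parquet("s3://demo-raw/orders/")
    customers = spark.read.parquet("s3://demo-ref/customers/")

    # Enrich orders with customer attributes and a load timestamp.
    enriched = (
        orders.join(customers, on="customer_id", how="left")
              .withColumn("load_ts", F.current_timestamp())
    )

    enriched.write.mode("overwrite").parquet("s3://demo-curated/orders/")
    spark.stop()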

Posted 2 weeks ago

Apply

8.0 - 13.0 years

9 - 14 Lacs

Bengaluru

Work from Office


- 8+ years of experience combined between backend and data platform engineering roles; has worked on large-scale distributed systems.
- 5+ years of experience building data platforms with (one of) Apache Spark, Flink, or similar frameworks.
- 7+ years of experience programming with Java.
- Experience building large-scale data/event pipelines.
- Experience with relational SQL and NoSQL databases, including Postgres/MySQL, Cassandra, and MongoDB.
- Demonstrated experience with EKS, EMR, S3, IAM, KDA, Athena, Lambda, networking, ElastiCache, and other AWS services.

Posted 2 weeks ago

Apply

4.0 - 9.0 years

9 - 14 Lacs

Noida, Bhubaneswar, Pune

Work from Office


- 4+ years of experience as an IoT developer.
- Must have experience on AWS Cloud: IoT Core, Kinesis, DynamoDB, API Gateway.
- Expertise in creating applications by integrating with various AWS services.
- Must have worked on at least one IoT implementation on AWS (see the sketch after this listing).
- Ability to work in Agile delivery.
- Skills: Java, AWS Certified Developer, MQTT, AWS IoT Core, Node.js.
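For context on the MQTT/AWS IoT Core skills listed above, here is a minimal, hypothetical publish sketch using the generic paho-mqtt client (1.x API); AWS IoT Core accepts MQTT over mutual TLS on port 8883, and the endpoint, certificate paths, and topic shown are placeholders, not real values. AWS's own device SDKs wrap this same pattern.

    # A minimal sketch, assuming paho-mqtt 1.x and device certificates
    # provisioned in AWS IoT Core. All identifiers are hypothetical.
    import json

    import paho.mqtt.client as mqtt

    client = mqtt.Client(client_id="sensor-001")
    client.tls_set(
        ca_certs="AmazonRootCA1.pem",
        certfile="device.pem.crt",
        keyfile="private.pem.key",
    )
    client.connect("abc123-ats.iot.us-east-1.amazonaws.com", 8883)

    client.loop_start()
    client.publish(
        "devices/sensor-001/telemetry",
        json.dumps({"temp_c": 21.4}),
        qos=1,
    )
    client.loop_stop()
    client.disconnect()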

Posted 2 weeks ago

Apply

7.0 - 10.0 years

10 - 14 Lacs

Gurugram, Bengaluru

Work from Office


We are looking for an experienced Senior Big Data Developer to join our team and help build and optimize high-performance, scalable, and resilient data processing systems. You will work in a fast-paced startup environment, handling highly loaded systems and developing data pipelines that process billions of records in real time. As a key member of the Big Data team, you will be responsible for architecting and optimizing distributed systems, leveraging modern cloud-native technologies, and ensuring high availability and fault tolerance in our data infrastructure.

Primary Responsibilities:
- Design, develop, and maintain real-time and batch processing pipelines using Apache Spark, Kafka, and Kubernetes (see the sketch after this listing).
- Architect high-throughput distributed systems that handle large-scale data ingestion and processing.
- Work extensively with AWS services, including Kinesis, DynamoDB, ECS, S3, and Lambda.
- Manage and optimize containerized workloads using Kubernetes (EKS) and ECS.
- Implement Kafka-based event-driven architectures to support scalable, low-latency applications.
- Ensure high availability, fault tolerance, and resilience of data pipelines.
- Work with MySQL, Elasticsearch, Aerospike, Redis, and DynamoDB to store and retrieve massive datasets efficiently.
- Automate infrastructure provisioning and deployment using Terraform, Helm, or CloudFormation.
- Optimize system performance, monitor production issues, and ensure efficient resource utilization.
- Collaborate with data scientists, backend engineers, and DevOps teams to support advanced analytics and machine learning initiatives.
- Continuously improve and modernize the data architecture to support growing business needs.

Required Skills:
- 7-10+ years of experience in big data engineering or distributed systems development.
- Expert-level proficiency in Scala, Java, or Python.
- Deep understanding of Kafka, Spark, and Kubernetes in large-scale environments.
- Strong hands-on experience with AWS (Kinesis, DynamoDB, ECS, S3, etc.).
- Proven experience working with highly loaded, low-latency distributed systems.
- Experience with Kafka, Kinesis, Flink, or other streaming technologies for event-driven architectures.
- Expertise in SQL and database optimizations for MySQL, Elasticsearch, and NoSQL stores.
- Strong experience in automating infrastructure using Terraform, Helm, or CloudFormation.
- Experience managing production-grade Kubernetes clusters (EKS).
- Deep knowledge of performance tuning, caching strategies, and data consistency models.
- Experience working in a startup environment, adapting to rapid changes and building scalable solutions from scratch.

Nice to Have:
- Experience with machine learning pipelines and AI-driven analytics.
- Knowledge of workflow orchestration tools such as Apache Airflow.
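As a hedged sketch of the real-time Spark-plus-Kafka pipeline shape this listing describes (not the employer's system), the following Structured Streaming job consumes a Kafka topic and writes micro-batches to S3; the broker, topic, and paths are hypothetical, and the job assumes the spark-sql-kafka package is on the classpath.

    # A minimal sketch, assuming PySpark with the spark-sql-kafka connector.
    # Broker, topic, and S3 paths are hypothetical placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col

    spark = SparkSession.builder.appName("events-stream").getOrCreate()

    events = (
        spark.readStream.format("kafka")
             .option("kafka.bootstrap.servers", "broker-1:9092")
             .option("subscribe", "events")
             .load()
             .select(col("key").cast("string"), col("value").cast("string"))
    )

    # Write each micro-batch to S3; the checkpoint gives at-least-once
    # recovery after restarts.
    query = (
        events.writeStream.format("parquet")
              .option("path", "s3://demo-stream/events/")
              .option("checkpointLocation", "s3://demo-stream/checkpoints/events/")
              .start()
    )
    query.awaitTermination()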

Posted 2 weeks ago

Apply

4.0 - 6.0 years

6 - 8 Lacs

Hyderabad

Work from Office


What you will do: In this vital role, we are looking for a highly motivated, expert Senior Data Engineer who can own the design and development of complex data pipelines, solutions, and frameworks. The ideal candidate will be responsible for designing, developing, and optimizing data pipelines, data integration frameworks, and metadata-driven architectures that enable seamless data access and analytics. This role calls for deep expertise in big data processing, distributed computing, data modeling, and governance frameworks to support self-service analytics, AI-driven insights, and enterprise-wide data management.

Roles & Responsibilities:
- Design, develop, and maintain scalable ETL/ELT pipelines to support structured, semi-structured, and unstructured data processing across the Enterprise Data Fabric.
- Implement real-time and batch data processing solutions, integrating data from multiple sources into a unified, governed data fabric architecture.
- Optimize big data processing frameworks using Apache Spark, Hadoop, or similar distributed computing technologies to ensure high availability and cost efficiency.
- Work with metadata management and data lineage tracking tools to enable enterprise-wide data discovery and governance.
- Ensure data security, compliance, and role-based access control (RBAC) across data environments.
- Optimize query performance, indexing strategies, partitioning, and caching for large-scale data sets.
- Develop CI/CD pipelines for automated data pipeline deployments, version control, and monitoring.
- Implement data virtualization techniques to provide seamless access to data across multiple storage systems.
- Collaborate with cross-functional teams, including data architects, business analysts, and DevOps teams, to align data engineering strategies with enterprise goals.
- Stay up to date with emerging data technologies and best practices, ensuring continuous improvement of the Enterprise Data Fabric architecture.

What we expect of you: We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
- Master's degree and 4 to 6 years of Computer Science, IT, or related field experience, OR Bachelor's degree and 6 to 8 years of Computer Science, IT, or related field experience.
- AWS Certified Data Engineer preferred.
- Databricks certification preferred.
- Scaled Agile SAFe certification preferred.

Preferred Qualifications:

Must-Have Skills:
- Hands-on experience in data engineering technologies such as Databricks, PySpark, SparkSQL, Apache Spark, AWS, Python, SQL, and Scaled Agile methodologies.
- Proficiency in workflow orchestration and performance tuning on big data processing.
- Strong understanding of AWS services.
- Experience with Data Fabric, Data Mesh, or similar enterprise-wide data architectures.
- Ability to quickly learn, adapt, and apply new technologies.
- Strong problem-solving and analytical skills.
- Excellent communication and collaboration skills.
- Experience with Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices.

Good-to-Have Skills:
- Deep expertise in the biotech and pharma industries.
- Experience writing APIs to make data available to consumers.
- Experience with SQL/NoSQL databases and vector databases for large language models.
- Experience with data modeling and performance tuning for both OLAP and OLTP databases.
- Experience with software engineering best practices, including version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps.
Soft Skills: Excellent analytical and troubleshooting skills. Strong verbal and written communication skills Ability to work effectively with global, virtual teams High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. Ability to learn quickly, be organized and detail oriented. Strong presentation and public speaking skills.

Posted 3 weeks ago

Apply

5.0 - 10.0 years

4 - 8 Lacs

Bengaluru

Work from Office


We are looking for an experienced Senior BT Reliability Engineer to join our Business Technology team to maintain and continually improve our cloud-based services. The Site Reliability Engineering team in Bangalore is brand new and builds foundational back-end infrastructure services and tooling for Okta's corporate teams. We enable teams to build infrastructure at scale and automate their software reliably and predictably. SREs are team players and innovators who build and operate technology using best practices and an agile mindset. We are looking for a smart, innovative, and passionate engineer for this role, someone who has a passion for designing and implementing complex cloud-based infrastructure. This is a new team, and the ideal candidate welcomes the challenge of building something new. They enjoy seeing their designs run at scale with automation, testing, and an excellent operational mindset. If you exemplify the ethic of "If you have to do something more than once, automate it," we want to hear from you!

Responsibilities
- Build and run development tools, pipelines, and infrastructure with a security-first mindset.
- Actively participate in Agile ceremonies, write stories, and support team members through demos, knowledge sharing, and architecture sessions.
- Promote and apply best practices for building secure, scalable, and reliable cloud infrastructure.
- Develop and maintain technical documentation, network diagrams, runbooks, and procedures.
- Design, build, run, and monitor Okta's IT infrastructure and cloud services.
- Drive initiatives to evolve our current cloud platforms to increase efficiency and keep them in line with current security standards and best practices.
- Recommend, develop, implement, and manage appropriate policy, standards, processes, and procedural updates.
- Work with software engineers to ensure that development follows established processes and works as intended.
- Create and maintain centralized technical processes, including container and image management.
- Provide excellent customer service to our internal users and be an advocate for SRE services and DevOps practices.

Qualifications
- 5+ years of experience as an SRE, DevOps engineer, Systems Engineer, or equivalent.
- Demonstrated ability to develop complex applications for cloud infrastructure at scale and deliver projects on schedule and within budget.
- Proficient in managing AWS multi-account environments and AWS authentication and governance using the org management suite, including, but not limited to, AWS Organizations, AWS IAM, AWS Identity Center, and StackSets (see the sketch after this listing).
- Proficient with automating systems and infrastructure via Terraform.
- Proficient in developing applications running on AWS or other cloud infrastructure resources, including compute, storage, networking, and virtualization.
- Proficient with Git and building deployment pipelines using commercial tools, especially GitHub Actions.
- Proficient with developing tooling and automation using Python.
- Proficient with AWS container-based workloads and concepts, especially EKS, ECS, and ECR.
- Experience with monitoring tools, especially Splunk, CloudWatch, and Grafana.
- Experience with reliability engineering concepts and security best practices on public cloud platforms.
- Experience with image creation and management, especially for container- and EC2-based workloads.
- Knowledgeable in Linux system administration.
- Familiar with configuration management tools, such as Ansible and SSM.
- Familiar with GitHub Actions Runner Controller self-hosted runners.
- Good communication skills, with the ability to influence others and communicate complex technical concepts to different audiences.
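As a hedged illustration of the AWS multi-account automation this listing emphasizes (not Okta's tooling), here is a minimal boto3/STS sketch that assumes a role in a member account and lists its IAM roles; the account id and role name are hypothetical.

    # A minimal sketch, assuming boto3 and permission to call sts:AssumeRole.
    # The account id and role name below are hypothetical placeholders.
    import boto3

    def session_for_account(account_id: str, role_name: str) -> boto3.Session:
        """Return a session holding temporary credentials for the account."""
        sts = boto3.client("sts")
        creds = sts.assume_role(
            RoleArn=f"arn:aws:iam::{account_id}:role/{role_name}",
            RoleSessionName="sre-automation",
        )["Credentials"]
        return boto3.Session(
            aws_access_key_id=creds["AccessKeyId"],
            aws_secret_access_key=creds["SecretAccessKey"],
            aws_session_token=creds["SessionToken"],
        )

    member = session_for_account("111122223333", "OrganizationAccountAccessRole")
    for role in member.client("iam").list_roles()["Roles"]:
        print(role["RoleName"])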

Posted 3 weeks ago

Apply

6.0 - 9.0 years

13 - 17 Lacs

Hyderabad

Work from Office


At Amgen, our shared mission—to serve patients—drives all that we do. It is key to our becoming one of the world’s leading biotechnology companies, reaching over 10 million patients worldwide. Become the professional you are meant to be in this important role.

The Financial Insights & Technology (FIT) team was created to: (1) maintain and build upon work/improvements enabled by various initiatives, (2) implement standardized reporting capabilities, (3) implement and maintain the policies, processes and systems needed to drive an efficient and effective reporting environment, and (4) explore new ways to evolve reporting, in the spirit of continuous improvement, to improve financial insights across the global finance organization. The Data Analytics Manager will be part of the FIT Data Analytics & Processes team within Corporate Finance. This role will be based in India-Hyderabad.

What will you do: In this vital role as a Data Analytics Manager, you will play a meaningful part in fully understanding financial data and the associated systems architecture in order to design data integrations for reporting and analysis to support the global Amgen organization. Key responsibilities include but are not limited to:
- Developing a robust understanding of Amgen’s financial data and systems in order to support data requests and integrations for different initiatives.
- Working in close partnership with clients or team members to design, develop and augment the financial datasets.
- Providing client-facing project management support and completing hands-on Databricks/Prophecy development, as well as Power BI or Tableau, as time and priorities allow.
- Designing and developing the underlying ETL data processes used to build various financial datasets used for reporting and/or dashboards.
- Identifying data enhancements or process improvements to optimize the financial datasets and processes.
- Understanding the regularly scheduled financial dataset refresh processes in Databricks and the Tableau or Power BI dashboards on an ongoing basis.
- Involvement in the Financial Data Product Team as a finance data subject matter expert.

Key elements to success in this role include understanding Amgen’s financial systems and data, the ability to define business requirements, and understanding how to design datasets compatible with Power BI, Tableau, or other analytic tool reporting requirements.

What we expect of you: We are all different, yet we all use our outstanding contributions to serve patients. The Data Analytics Manager professional we seek is a go-getter with these qualifications.

Basic Qualifications
- Doctorate degree, OR Master’s degree and 2 years of Finance experience, OR Bachelor’s degree and 4 years of Finance experience, OR Associate’s degree and 10 years of Finance experience, OR high school diploma / GED and 12 years of Finance experience.

Preferred Qualifications
- Experience performing data analysis across one or more areas of the business to derive business logic for data integration.
- Experience working with business partners to identify complex functionality and translate it into requirements.
- Experience with financial statements; Amgen Finance experience preferred.
- Experience with data analysis, data modeling, and data visualization solutions such as Power BI, Tableau, Databricks, and Alteryx.
- Familiarity with Hyperion Planning, SAP, scripting languages like SQL or Python, Databricks Prophecy, and AWS services like S3.
- Able to work in matrixed teams, across geographic and functional reporting lines.
- Excellent analytical and problem-solving skills.
- Excellent facilitation, influencing, and negotiation skills.
- Proficient in MS Office Suite.

Posted 3 weeks ago

Apply

1.0 - 4.0 years

2 - 5 Lacs

Hyderabad

Work from Office


ABOUT AMGEN
Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what’s known today.

ABOUT THE ROLE
Role Description: Let’s do this. Let’s change the world. We are looking for a highly motivated, expert Data Engineer who can own the design, development and maintenance of complex data pipelines, solutions and frameworks. The ideal candidate will be responsible for designing, developing, and optimizing data pipelines, data integration frameworks, and metadata-driven architectures that enable seamless data access and analytics. This role calls for deep expertise in big data processing, distributed computing, data modeling, and governance frameworks to support self-service analytics, AI-driven insights, and enterprise-wide data management.

Roles & Responsibilities:
- Design, develop, and maintain complex ETL/ELT data pipelines in Databricks using PySpark, Scala, and SQL to process large-scale datasets.
- Understand the biotech/pharma or related domains and build highly efficient data pipelines to migrate and deploy complex data across systems.
- Design and implement solutions to enable unified data access, governance, and interoperability across hybrid cloud environments.
- Ingest and transform structured and unstructured data from databases (PostgreSQL, MySQL, SQL Server, MongoDB, etc.), APIs, logs, event streams, images, PDFs, and third-party platforms.
- Ensure data integrity, accuracy, and consistency through rigorous quality checks and monitoring.
- Bring expertise in data quality, data validation, and verification frameworks.
- Innovate, explore, and implement new tools and technologies to enhance efficient data processing.
- Proactively identify and implement opportunities to automate tasks and develop reusable frameworks.
- Work in an Agile and Scaled Agile (SAFe) environment, collaborating with cross-functional teams, product owners, and Scrum Masters to deliver incremental value.
- Use JIRA, Confluence, and Agile DevOps tools to manage sprints, backlogs, and user stories.
- Support continuous improvement, test automation, and DevOps practices in the data engineering lifecycle.
- Collaborate and communicate effectively with product teams and cross-functional teams to understand business requirements and translate them into technical solutions.

Must-Have Skills:
- Hands-on experience in data engineering technologies such as Databricks, PySpark, SparkSQL, Apache Spark, AWS, Python, SQL, and Scaled Agile methodologies.
- Proficiency in workflow orchestration and performance tuning on big data processing.
- Strong understanding of AWS services.
- Ability to quickly learn, adapt, and apply new technologies.
- Strong problem-solving and analytical skills.
- Excellent communication and teamwork skills.
- Experience with Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices.

Good-to-Have Skills:
- Data engineering experience in the biotechnology or pharma industry.
- Experience writing APIs to make data available to consumers.
- Experience with SQL/NoSQL databases and vector databases for large language models.
- Experience with data modeling and performance tuning for both OLAP and OLTP databases.
- Experience with software engineering best practices, including version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps.

Education and Professional Certifications
- Minimum 5 to 8 years of Computer Science, IT, or related field experience.
- AWS Certified Data Engineer preferred.
- Databricks certification preferred.
- Scaled Agile SAFe certification preferred.

Soft Skills:
- Excellent analytical and troubleshooting skills.
- Strong verbal and written communication skills.
- Ability to work effectively with global, virtual teams.
- High degree of initiative and self-motivation.
- Ability to manage multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
- Ability to learn quickly, be organized and detail-oriented.
- Strong presentation and public speaking skills.

EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.

Posted 3 weeks ago

Apply

3.0 - 6.0 years

16 - 20 Lacs

Hyderabad

Work from Office


Overview
We are seeking an experienced multi-cloud Cloud Engineer to join PepsiCo's Global Cloud Architecture and Engineering Team. The ideal candidate will have a strong understanding of public cloud (Azure & AWS) and private cloud infrastructure, with a focus on AWS Outposts and Azure Stack (including Azure Local). The successful candidate will be responsible for design, implementation and escalation support for public and private cloud environments, ensuring alignment with our overall cloud strategy and architecture.

Responsibilities
- Design and Implementation: Design and implement multi-cloud (Azure & AWS) and private cloud infrastructure using AWS Outposts and Azure Stack (including Azure Local), ensuring seamless integration with existing infrastructure and applications.
- Cloud Architecture: Collaborate with the Cloud Solution Architecture team to define standards and patterns for private cloud solutions aligned with our overall cloud architecture and engineering standards.
- Engineering and Deployment: Develop and deploy multi-cloud solutions, including infrastructure as code (IaC) and automation scripts.
- Troubleshooting and Optimization: Troubleshoot and resolve issues related to public and private cloud infrastructure, including performance optimization and security.
- Documentation and Knowledge Management: Develop and maintain documentation for private cloud infrastructure, including architecture diagrams and technical guides.
- Stakeholder Management: Collaborate with stakeholders across the organization to ensure private cloud solutions meet business requirements and expectations.

Qualifications
- Minimum 9 years of experience in public (Azure & AWS) and private cloud engineering, with a focus on AWS Outposts and Azure Stack (including Azure Local).
- Strong understanding of private cloud infrastructure, including architecture, security, and performance optimization.
- Experience with AWS and Azure cloud platforms, including migration and deployment of workloads.
- Strong scripting skills in languages such as Python, PowerShell, or Bash.
- Experience with the ITIL framework and DevOps practices.
- Strong communication and collaboration skills, with the ability to work with cross-functional teams.
- Bachelor's degree in Computer Science, Information Technology, or related field.

Nice to Have:
- Experience with Kubernetes and containerization.
- Familiarity with CI/CD pipelines and automation tools (e.g. Ansible).
- Knowledge of networking and storage technologies (e.g. Cisco, NetApp).
- Experience with cloud security and compliance (e.g. AWS IAM, Azure Security Center).
- Certification in AWS or Azure (e.g. AWS Certified Solutions Architect, Azure Certified Solutions Architect).

Posted 3 weeks ago

Apply

10.0 - 20.0 years

30 - 40 Lacs

Hyderabad

Work from Office


Overview
We are looking for a seasoned Senior Manager of Site Reliability Engineering (SRE) to lead our AWS-focused SRE initiatives. In this role, you will be responsible for overseeing the reliability, scalability, and performance of critical applications and infrastructure hosted on AWS. You will lead a team of experienced SREs, drive strategic operational improvements, and ensure the seamless functioning of our cloud ecosystem to meet business and customer needs.

Responsibilities
- Leadership and Team Management: Lead and mentor a team of SRE professionals, fostering a culture of innovation, collaboration, and accountability. Develop and implement career development plans, provide coaching, and facilitate knowledge-sharing within the team.
- Operational Excellence: Drive the adoption of SRE principles, including SLAs, SLOs, and error budgets, to enhance system reliability and performance (see the sketch after this listing). Oversee incident management processes, ensuring timely resolution and comprehensive root cause analysis. Establish and monitor operational KPIs to measure and improve system availability and performance.
- Automation and Tooling: Champion the use of automation to reduce manual processes, improve efficiency, and enhance system reliability. Implement and optimize Infrastructure as Code (IaC) using tools like Terraform, CloudFormation, or CDK.
- AWS Infrastructure Management: Design, build, and maintain scalable and secure AWS-based infrastructure to support current and future workloads. Leverage AWS services such as EC2, RDS, Lambda, S3, CloudWatch, and others to enhance operational capabilities.
- Collaboration and Stakeholder Engagement: Partner with engineering, product, and DevOps teams to align SRE initiatives with business objectives. Act as a key liaison between the SRE team and executive stakeholders, communicating updates on reliability and risks.
- Risk and Security Management: Ensure compliance with security standards and best practices within AWS environments. Identify risks related to cloud infrastructure and implement strategies for mitigation.

Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).
- 10+ years of experience in cloud-based infrastructure and operations, with at least 4 years in a leadership role.
- Deep expertise in AWS services, architecture, and tools, including hands-on experience with core AWS services (e.g., EC2, ECS, Lambda, S3, VPC, IAM).
- Proficiency in automation scripting (e.g., Python, Bash) and Infrastructure as Code (e.g., Terraform, CloudFormation).
- Strong knowledge of monitoring and observability tools like CloudWatch, Prometheus, Grafana, or Datadog.
- Proven experience managing large-scale production environments, incident response, and operational scaling.
- Hands-on experience with CI/CD pipelines and DevOps methodologies.

Preferred Qualifications
- AWS certifications, such as AWS Certified Solutions Architect (Professional) or AWS Certified DevOps Engineer.
- Experience with Kubernetes (EKS) and containerization technologies like Docker.
- Familiarity with FinOps principles for cost optimization in AWS environments.
- Strong analytical skills and a data-driven approach to decision-making.
- Exceptional communication, leadership, and stakeholder management abilities.
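To make the "SLOs and error budgets" responsibility concrete, here is a tiny illustrative calculation of an error budget and its consumption; the SLO target and downtime figures are hypothetical.

    # A minimal sketch of error-budget arithmetic; the target and the
    # observed downtime below are hypothetical.
    SLO = 0.999                    # 99.9% availability objective
    WINDOW_MINUTES = 30 * 24 * 60  # 30-day rolling window

    error_budget = (1 - SLO) * WINDOW_MINUTES  # 43.2 minutes allowed
    downtime = 12.0                            # minutes of downtime observed

    consumed = downtime / error_budget
    print(f"Budget: {error_budget:.1f} min; consumed: {consumed:.0%}")
    # -> Budget: 43.2 min; consumed: 28%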

Posted 3 weeks ago

Apply

4.0 - 7.0 years

18 - 22 Lacs

Hyderabad

Work from Office


Overview
As PepsiCo continues to scale its multi-cloud strategy (Azure, AWS, GCP, and optionally Alibaba), the Cloud Foundation & Governance Analyst will support the standardization, automation, and secure enablement of core cloud infrastructure services. This role is key to ensuring the foundational building blocks - networking, security, and landing zones - are implemented in a compliant, scalable, and reusable manner across business units and global regions. Familiarity with SAP-on-cloud environments is a strong plus.

Responsibilities
Cloud Foundation Design & Governance
- Support the design, implementation, and lifecycle management of PepsiCo's standardized landing zones (Azure/AWS/GCP) across development, test, and production environments.
- Manage enterprise cloud constructs: VNETs/VPCs, subnets, hybrid connectivity (ExpressRoute, VPN, Direct Connect), and DNS.
- Help define and enforce naming standards, resource hierarchies, tagging strategies, and identity integrations (AAD/Entra ID, AWS IAM, GCP IAM).
- Collaborate with enterprise security and compliance teams to integrate cloud-native controls with PepsiCo's regulatory obligations (SOX, GDPR, HIPAA, etc.).

Infrastructure as Code & Automation
- Contribute to IaC modules for repeatable deployment of foundational components (Terraform, Bicep, ARM, CloudFormation).
- Implement guardrails via policy-as-code (Azure Policy, AWS SCPs, GCP Org Policies).
- Automate provisioning and CI/CD integrations for self-service cloud onboarding platforms.
- Maintain documentation and templates aligned to PepsiCo's enterprise standards.

Security & Risk Enablement
- Ensure integration with CSPM tooling such as Wiz (preferred), Defender for Cloud, and Prisma Cloud, and enforce posture controls at the foundation layer.
- Monitor and drive remediation of misconfigurations, identity risks, and insecure deployments at the foundational level.
- Partner with Security and Risk teams for cloud readiness assessments, hardening baselines, and audit prep.

SAP Cloud Enablement (Nice to Have)
- Support infrastructure provisioning and security patterns for SAP workloads (S/4HANA, BW/4HANA) on Azure or AWS.
- Understand SAP landscape requirements such as HA/DR, large-scale VMs, shared storage, and dedicated network zoning.
- Coordinate with SAP Basis and application teams to ensure cloud foundation services meet performance and compliance expectations.

Qualifications
- Bachelor's degree in Computer Science, Information Technology, or related field.
- Minimum of 9 years in IT and 5 years in cloud platform engineering, cloud infrastructure, or enterprise IT operations.
- Hands-on experience with Azure (required) and at least one of AWS or GCP.
- Deep understanding of cloud-native networking, hybrid connectivity, identity, and encryption services.
- Experience implementing and managing landing zones or enterprise-scale cloud foundations.
- Familiarity with IaC and DevOps principles; Git, CI/CD, and artifact repositories.
- Security and compliance alignment in regulated enterprise environments.

Posted 3 weeks ago

Apply

0.0 - 1.0 years

2 - 6 Lacs

Bengaluru

Work from Office

Naukri logo

Client - Cisco
Experience - 0-1 Year
Location - Bangalore only (WFO all 5 days)

Job Summary: We are looking for an AWS Engineer with approximately 6 months of experience to join our growing team. The ideal candidate will have hands-on experience in designing, deploying, and managing AWS cloud infrastructure. You will work closely with development and operations teams to ensure the reliability, scalability, and security of our cloud-based applications.

Responsibilities:
Design, implement, and maintain AWS cloud infrastructure using best practices.
Deploy and manage applications on AWS services such as EC2, S3, RDS, VPC, and Lambda.
Implement and maintain CI/CD pipelines for automated deployments.
Monitor and troubleshoot AWS infrastructure and applications to ensure high availability and performance.
Implement security best practices and ensure compliance with security policies.
Automate infrastructure tasks using infrastructure-as-code tools (e.g., CloudFormation, Terraform; see the sketch after this listing).
Collaborate with development and operations teams to resolve technical issues.
Document infrastructure configurations and operational procedures.
Participate in on-call rotations as needed.
Optimize AWS costs and resource utilization.

Required Skills and Qualifications:
Bachelor's degree in Computer Science, Information Technology, or a related field.
6 months of hands-on experience with AWS cloud services.
Proficiency in AWS services such as EC2, S3, RDS, VPC, IAM, and Lambda.
Experience with infrastructure-as-code tools (e.g., CloudFormation, Terraform).
Experience with CI/CD pipelines and tools (e.g., Jenkins, AWS CodePipeline).
Excellent problem-solving and troubleshooting skills.
Strong communication and collaboration skills.
Desire to learn new technologies.
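For illustration, a minimal boto3 sketch of the CloudFormation automation this role calls for: deploying a small template and waiting for completion. The stack name and the template (a single encrypted S3 bucket) are hypothetical placeholders.

# Illustrative only: deploying a minimal CloudFormation template with boto3.
import boto3

TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  DemoBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: AES256
"""

def deploy_stack(stack_name: str = "demo-infra-stack") -> None:
    cf = boto3.client("cloudformation")
    cf.create_stack(StackName=stack_name, TemplateBody=TEMPLATE)
    # Block until CloudFormation reports CREATE_COMPLETE (raises on failure).
    cf.get_waiter("stack_create_complete").wait(StackName=stack_name)
    print(f"Stack {stack_name} created")

if __name__ == "__main__":
    deploy_stack()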

Posted 3 weeks ago

Apply

3.0 - 5.0 years

9 - 14 Lacs

Hyderabad

Work from Office

Naukri logo

Software Engineering Senior Analyst – HIH - Evernorth

About Evernorth: Evernorth Health Services, a division of The Cigna Group (NYSE: CI), creates pharmacy, care, and benefits solutions to improve health and increase vitality. We relentlessly innovate to make the prediction, prevention, and treatment of illness and disease more accessible to millions of people.

Responsibilities:
Design and implement software for the provider experience group on various initiatives.
Provide support to our end-users by resolving their issues, responding to queries, and helping them analyze/interpret the results from the models.
Develop, code, and unit test across a variety of cloud services, write infrastructure code using Terraform, and build ETL and test automation pipelines using Python/PySpark (a sketch follows this listing). Participate in peer code reviews.
Develop reusable infrastructure code for commonly occurring work across multiple processes and services.
Participate in planning and technical design discussions with other developers, managers, and architects to meet application requirements and performance goals.
Manage the pipeline using Jenkins to move the application to higher environments such as System Testing, User Acceptance Testing, Release Testing, and User Training environments.
Contribute to production support to resolve application production issues.
Follow the guidelines of the Cloud COE and other teams for production deployment and maintenance activities for all applications running in AWS.
Manage application demos to business users and Product Owners regularly in Sprint and PI demos.
Work with business users and Product Owners to understand business requirements.
Participate in Program Increment (PI) planning and user story grooming with Scrum Masters, developers, QA Analysts, and Product Owners.
Participate in daily stand-up meetings to provide daily work status updates to the Scrum Master and Product Owner, following Agile methodology.
Write Structured Query Language (SQL) stored procedures and SQL queries for create, read, update, and delete (CRUD) database operations.
Write and maintain technical and design documents.
Understand best practices for using the Guarantee Management's tools and applications.

Required Skills:
Excellent debugging, analytical, and problem-solving skills.
Excellent communication skills.

Required Experience & Education:
Bachelor's in Computer Science or a related field, or equivalent relevant work experience and technical knowledge.
3 - 5 years of total related experience.
Experience as a Full Stack Python/PySpark Developer, with hands-on experience in AWS cloud services.
Experienced in software development in Java and the open-source tech stack.
Strong proficiency in client-side languages and frameworks (React or Angular), plus NodeJS.
Hands-on experience in AWS cloud development.
Experience with CI/CD tools such as AWS CloudFormation, Jenkins, Conduits, and GitHub.
Experience with microservice architecture.
Exposure to SOLID principles, architectural patterns, and development best practices.
Experience in unit test automation, Test-Driven Development, and the use of mocking frameworks.
Experience working in Agile/Scrum teams.
Hands-on experience with infrastructure as code using Terraform.
SQL and NoSQL experience.

Desired Experience:
Experience building in Event-Driven Architecture a plus.
Security engineering or knowledge of AWS IAM principles a plus.
Kafka knowledge a plus.
NoSQL solutions a plus.

Location & Hours of Work: Full-time position, working 40 hours per week. Expected overlap with US hours as appropriate. Primarily based in the Innovation Hub in Hyderabad, India, in a hybrid working model (3 days WFO and 2 days WFH). Join us in driving growth and improving lives.
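For illustration, a minimal PySpark ETL skeleton of the kind the responsibilities describe (extract from S3, transform, load as partitioned Parquet); the bucket paths and column names are hypothetical.

# Illustrative PySpark ETL skeleton (paths and column names are hypothetical).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("claims-etl-sketch").getOrCreate()

# Extract: raw CSV input at an assumed S3 location.
raw = spark.read.option("header", "true").csv("s3://example-bucket/raw/claims/")

# Transform: de-duplicate, fix types, derive a partition column.
cleaned = (
    raw.dropDuplicates(["claim_id"])
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount") > 0)
       .withColumn("claim_year", F.year(F.to_date(F.col("service_date"))))
)

# Load: partitioned Parquet for downstream consumers.
cleaned.write.mode("overwrite").partitionBy("claim_year").parquet(
    "s3://example-bucket/curated/claims/")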

Posted 3 weeks ago

Apply

3.0 - 7.0 years

2 - 6 Lacs

Hyderabad, Pune, Gurugram

Work from Office

Naukri logo

Location: Pune, Hyderabad, Gurgaon, Bangalore [Hybrid]
Required skills: Python, PySpark, SQL; AWS services including AWS Glue, S3, IAM, Athena, AWS CloudFormation, AWS CodePipeline, AWS Lambda, Transfer Family, AWS Lake Formation, and CloudWatch; CI/CD automation of AWS CloudFormation stacks.

Posted 3 weeks ago

Apply

6.0 - 10.0 years

1 - 4 Lacs

Pune

Work from Office

Naukri logo

Job Information
Job Opening ID: ZR_1594_JOB
Date Opened: 29/11/2022
Industry: Technology
Work Experience: 6-10 years
Job Title: AWS Glue Engineer
City: Pune
Province: Maharashtra
Country: India
Postal Code: 411001
Number of Positions: 4

Roles & Responsibilities:
Provides expert-level development, system analysis, design, and implementation of applications using AWS services, specifically using Python for Lambda.
Translates technical specifications and/or design models into code for new or enhancement projects (for internal or external clients).
Develops code that reuses objects, is well-structured, includes sufficient comments, and is easy to maintain.
Provides follow-up production support when needed. Submits change control requests and documents.
Participates in design, code, and test inspections throughout the life cycle to identify issues and ensure methodology compliance.
Participates in systems analysis activities, including system requirements analysis and definition, e.g. prototyping.
Participates in other meetings, such as those for use case creation and analysis.
Performs unit testing and writes appropriate unit test plans to ensure requirements are satisfied.
Assists in integration, systems acceptance, and other related testing as needed.
Ensures developed code is optimized to meet client performance specifications associated with page rendering time by completing page performance tests.

Technical Skills Required:
Experience in building large-scale batch and data pipelines with data processing frameworks on the AWS cloud platform using PySpark (on EMR) and Glue ETL.
Deep experience in developing data processing and manipulation tasks using PySpark, such as reading data from external sources, merging data, performing data enrichment, and loading into target data destinations (see the sketch after this listing).
Experience in deploying and operationalizing code using CI/CD tools such as Bitbucket and Bamboo.
Strong AWS cloud computing experience, with extensive experience in Lambda, S3, EMR, and Redshift.
Should have worked on Data Warehouse/Database technologies for at least 8 years.
Any AWS certification will be an added advantage.
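By way of illustration, a minimal AWS Glue job sketch matching the pipeline tasks described above (read from the Glue Data Catalog, merge/enrich, load to a target); the database, table, and S3 path names are hypothetical.

# Illustrative Glue ETL job skeleton (catalog and path names are hypothetical).
import sys
from awsglue.transforms import Join
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read source tables registered in the Glue Data Catalog.
orders = glue_context.create_dynamic_frame.from_catalog(
    database="example_db", table_name="orders")
customers = glue_context.create_dynamic_frame.from_catalog(
    database="example_db", table_name="customers")

# Enrich orders with customer attributes via a join on the shared key.
enriched = Join.apply(orders, customers, "customer_id", "customer_id")

# Load into the target S3 destination as Parquet.
glue_context.write_dynamic_frame.from_options(
    frame=enriched,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/curated/orders/"},
    format="parquet",
)
job.commit()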

Posted 3 weeks ago

Apply