
1649 DynamoDB Jobs - Page 50

JobPe aggregates these listings for easy access, but applications are submitted directly on the original job portal.

12.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Source: LinkedIn

About the Role We are seeking a Director of Software Engineering to lead our engineering team in Noida. This role requires a strategic and hands-on leader with deep expertise in Java and Amazon Web Services (AWS), with experience in modernizing platforms, cloud-native migrations, and hybrid strategies. The ideal candidate will have a strong product mindset, extensive experience in building scalable cloud-native applications, and the ability to drive engineering excellence in a fast-paced environment.
Key Responsibilities
• Technical Leadership: Define and implement best practices for Java-based architectures and scalable backend systems.
• Team Management: Lead, mentor, and grow a high-performing team of software engineers and engineering managers.
• Cloud & Infrastructure: Design, deploy, and optimize AWS-based solutions, leveraging services such as EC2, Lambda, S3, RDS, and DynamoDB.
• Performance & Scalability: Ensure high availability, security, and performance of distributed systems on AWS and in our data centers.
• APIs: Architect, design, and document RESTful APIs as a product for both internal and external customers.
• Agile Development: Foster an engineering culture of excellence with a focus on product delivery with quality and technological advantage.
• Technology Roadmap: Stay ahead of industry trends, identifying opportunities for modernization and innovation.
• Stakeholder Collaboration: Work closely with leadership, product, and operations teams to align engineering efforts with business goals.
Required Qualifications
• Experience: 12+ years in software engineering, with at least 5 years in a leadership role.
• Technical Expertise:
• Strong background in Java, the JDK, and its ecosystem.
• Hands-on expertise in both data center and AWS architectures, deployments, and automation.
• Strong experience with SQL/NoSQL databases (Oracle, PostgreSQL, MySQL, DynamoDB).
• Proficiency in RESTful APIs, event-driven architecture (Kafka, SNS/SQS), and service design (a minimal sketch of this pattern follows the listing).
• Strong grasp of security best practices, IAM roles, and compliance standards on AWS.
• Leadership & Strategy: Proven track record of scaling engineering teams and aligning technology with business goals.
• Problem-Solving Mindset: Ability to diagnose complex technical issues and optimize outcomes.
Preferred Qualifications
• Experience in high-scale SaaS applications using Java and AWS.
• Knowledge of AI/ML services on AWS (SageMaker, Bedrock) and data engineering pipelines.
• Agile & DevOps: Experience implementing DevOps pipelines, CI/CD, and Infrastructure as Code (Terraform, CloudFormation).
• Background in fintech, e-commerce, or enterprise software is a plus.
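The listing above asks for event-driven experience with SNS/SQS. As a rough illustration (the role itself is Java-centric, and the topic and queue identifiers below are placeholders, not details from the posting), the same publish/consume flow looks like this in Python with boto3:

```python
# Minimal SNS -> SQS fan-out sketch (hypothetical resource names).
# A producer publishes an order event to an SNS topic; a worker long-polls
# an SQS queue that is assumed to be subscribed to that topic.
import json
import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

TOPIC_ARN = "arn:aws:sns:ap-south-1:123456789012:order-events"                 # placeholder
QUEUE_URL = "https://sqs.ap-south-1.amazonaws.com/123456789012/order-worker"   # placeholder


def publish_order_event(order_id: str, status: str) -> None:
    """Publish a domain event for downstream consumers."""
    sns.publish(
        TopicArn=TOPIC_ARN,
        Message=json.dumps({"orderId": order_id, "status": status}),
        MessageAttributes={
            "eventType": {"DataType": "String", "StringValue": "OrderStatusChanged"}
        },
    )


def poll_once() -> None:
    """Long-poll the queue, process each message, then delete it."""
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=10)
    for msg in resp.get("Messages", []):
        envelope = json.loads(msg["Body"])       # SNS envelope (assumes raw delivery is off)
        event = json.loads(envelope["Message"])  # original payload
        print("processing", event)
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])


if __name__ == "__main__":
    publish_order_event("ord-123", "SHIPPED")
    poll_once()
```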

Posted 3 weeks ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Location- Hyderabad, India - Hybrid Software Engineer II Why Deliveroo? Our mission is to transform the way you shop and eat, bringing the neighbourhood to your door by connecting consumers, restaurants, shops and riders. We are transforming the way the world eats and shops by making access to food and products more convenient and enjoyable. We give people the opportunity to buy what they want, as they want it, when and where they want it. We are a technology-driven company at the forefront of the most rapidly expanding industry in the world. We are still a small team, making a very large impact, looking to answer some of the most interesting questions out there. We move fast, value autonomy and ownership, and we are always looking for new ideas. What you'll do? As a Software Engineer at Deliveroo, your individual work contributes to achieving goals in across your team. While you will work with your team and you may lead projects, some of your work will contribute outside of your direct remit. You will report to managers and group leads and together deliver the results. Expectations Technical Execution: You will improve code structure, have an impact on architecture, and review code of any scope produced by your team. You'll aim to simplify the maintenance and operation of production systems, visibility, operational readiness, and health of your team's systems. Collaboration & Leadership As well as leading from the front regarding technical execution, you'll build relationships with other engineering teams and, identify collaboration opportunities. You'll own larger pieces of work, assist with design and technical / implementation choices and influence the roadmap within your team. You will take an active role in the hiring process and conducting engineering interviews. This will also extend to the current team where you will support the personal growth of colleagues, encouraging efficiency in their roles. We want to emphasise that we don't expect you to meet all of the below but would love you to have experience in some of these areas. Pride in readable, well-designed, well-tested software Experience writing web-based applications in any language, and an interest in learning (Go, Ruby/Rails, Python, Scala, or Rust) Experience with relational databases (PostgreSQL, MySQL) Experience with web architecture at scale (20krpm and above) Experience with "NoSQL" data backends and other (Redis, DynamoDB, ElasticSearch, Memcache) Experience solving logistical problems with software Workplace & Benefits At Deliveroo we know that people are the heart of the business and we prioritise their welfare. Benefits differ by country, but we offer many benefits in areas including healthcare, well-being, parental leave, pensions, and generous annual leave allowances, including time off to support a charitable cause of your choice. Benefits are country-specific, please ask your recruiter for more information. Diversity At Deliveroo, we believe a great workplace is one that represents the world we live in and how beautifully diverse it can be. That means we have no judgement when it comes to any one of the things that make you who you are - your gender, race, sexuality, religion or a secret aversion to coriander. All you need is a passion for (most) food and a desire to be part of one of the fastest-growing businesses in a rapidly growing industry. We are committed to diversity, equity and inclusion in all aspects of our hiring process. 
We recognise that some candidates may require adjustments to apply for a position or fairly participate in the interview process. If you require any adjustments, please don't hesitate to let us know. We will make every effort to provide the necessary adjustments to ensure you have an equitable opportunity to succeed.

Posted 3 weeks ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Telstra is a leading telecommunications and technology company with a proudly Australian heritage and a longstanding, growing international business. Today, Telstra International has over 3,200 employees based in more than 30 countries outside of Australia, providing services to thousands of businesses, government, carrier, and OTT customers. Over several decades, Telstra has established the largest wholly owned subsea cable network in the Asia-Pacific, with a unique and diverse set of infrastructure that offers access to the most intra-Asia lit capacity. Telstra empowers businesses with innovative technology solutions including data and IP networks, and network application services such as managed networks, security, unified communications, cloud, industry solutions, integrated software applications, and services. Minimum Experience: 8+ years Responsibilities: Design and build Java/Spring Boot applications leveraging microservice architecture. Design technical architecture solutions, including integration and authentication across systems. Manage the development lifecycle to ensure the delivery of highly secure solutions optimized for performance and scalability. Articulate design considerations, trade-offs, benefits, and recommendations for technical architecture. Monitor the process of software configuration, development, and testing to assure quality deliverables. Exhibit critical thinking, a strong sense of accountability for product delivery, and a passion for developing quality software. Demonstrate good communication skills and the ability to work effectively as a team player. Experience working (or willingness to work) with a geographically distributed team. Provide training and educate other team members about core capabilities to help them deliver high-quality solutions and documentation. Essential Skills: Strong knowledge of OOP concepts and design patterns. Strong understanding of data structures and algorithms. Hands-on experience in backend development using Java 8+. Hands-on experience with the Spring ecosystem (Spring, Spring Data, Spring JPA, Spring Integration, Spring Cloud, Spring Boot). Unit testing using JUnit 5/Spock and integration testing using Spring Boot. End-to-end testing using Cucumber and mock containers. Understanding of package managers Maven/Gradle. Understanding of microservices design and interaction patterns. Hands-on experience in creating OCI image building using Docker/Buildah. Understanding of cloud deployment and orchestration frameworks. Understanding of security frameworks like OAuth/OpenID Connect. Involvement in the design and implementation of secure, scalable, fault-tolerant systems in the cloud. Hands-on experience with AWS/Azure (EC2, S3, SQS, SNS, Kinesis). Understanding of async messaging systems like MQ/Kafka/Apache Pulsar. Understanding of application logging and monitoring (Splunk/New Relic/Open Telemetry/Prometheus). Experience with SQL and NoSQL databases. Understanding of CI/CD processes with hands-on experience with Bamboo/GitLab/Jenkins. Desirable Skills: Experience in cloud technologies (primarily AWS): RDS, DynamoDB, S3, SQS, SNS, Kinesis. Understanding of security (authentication and authorization). Experience in UI development using React/Angular. Basic understanding of Terraform. Understanding of change management principles and experience in production support. Show more Show less

Posted 3 weeks ago

Apply

0.0 years

0 Lacs

Gachibowli, Hyderabad, Telangana

On-site

Source: Indeed

Location: IN - Hyderabad Telangana Goodyear Talent Acquisition Representative: M Bhavya Sree Sponsorship Available: No Relocation Assistance Available: No Duties and Responsibilities: Develop and support Data Driven applications Help design and develop back-end services and APIs for data-driven applications and simulations. Work with our technical partners to collaborate on system requirements and data integration needs for our new applications. Support the deployment and scaling of new back-end technologies and cloud-native architectures within the organization. Work closely with our data scientists to support model deployment into production environments. Develop and maintain server-side components for digital tools and products using Python or other modern back-end technologies and frameworks. Build scalable, secure, and efficient services that support a seamless experience across multiple platforms. Design, implement, and maintain robust database systems (SQL and NoSQL), ensuring high availability and performance for critical applications. Contribute to DevOps practices including CI/CD pipelines, infrastructure as code, containerization (Docker), and orchestration (Kubernetes). Learn about the tire industry and tire manufacturing processes from subject matter experts. Be a part of cross-functional teams working together to deliver impactful results. Skills Required: Significant experience in server-side development using Python Strong understanding of RESTful API design, microservices architecture, and service-oriented design Experience with relational and non-relational databases such as PostgreSQL, MySQL, MongoDB, or DynamoDB Application of software design skills and methodologies (algorithms, data structures, design patterns, software architecture and testing) Hands-on experience working with cloud platforms such as AWS, Microsoft Azure, or Google Cloud Platform Good teamwork skills - ability to work in a team environment and deliver results on time. Strong communication skills - capable of conveying information concisely to diverse audiences. Exposure to DevOps practices including CI/CD pipelines (e.g., GitHub Actions, Jenkins), containerization (e.g., Docker), and orchestration tools (e.g., Kubernetes) Familiarity with front-end technologies like React, HTML, CSS, and JavaScript for API integration purposes Goodyear is an Equal Employment Opportunity and Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to that individual's race, color, religion or creed, national origin or ancestry, sex (including pregnancy), sexual orientation, gender identity, age, physical or mental disability, ethnicity, citizenship, or any other characteristic protected by law. Goodyear is one of the world’s largest tire companies. It employs about 68,000 people and manufactures its products in 53 facilities in 20 countries around the world. Its two Innovation Centers in Akron, Ohio and Colmar-Berg, Luxembourg strive to develop state-of-the-art products and services that set the technology and performance standard for the industry. For more information about Goodyear and its products, go to www.goodyear.com/corporate #Li-Hybrid
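To ground the back-end duties described above (Python services over SQL/NoSQL stores such as DynamoDB), here is a minimal sketch. The framework choice (FastAPI), table name, and key schema are assumptions for illustration, not details from the posting:

```python
# Minimal read/write API over a DynamoDB table (assumed partition key: "sku").
import boto3
from fastapi import FastAPI, HTTPException

app = FastAPI()
table = boto3.resource("dynamodb").Table("products")  # hypothetical table name


@app.get("/products/{sku}")
def get_product(sku: str) -> dict:
    """Fetch one item by its partition key."""
    resp = table.get_item(Key={"sku": sku})
    item = resp.get("Item")
    if item is None:
        raise HTTPException(status_code=404, detail="not found")
    return item


@app.put("/products/{sku}")
def upsert_product(sku: str, payload: dict) -> dict:
    """Store the JSON body under the given SKU; put_item overwrites by key.
    Note: numeric attributes must be Decimal (not float) for DynamoDB."""
    table.put_item(Item={"sku": sku, **payload})
    return {"sku": sku, "stored": True}
```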

Posted 3 weeks ago

Apply

0.0 years

0 Lacs

Gachibowli, Hyderabad, Telangana

On-site

Source: Indeed

Location: IN - Hyderabad Telangana Goodyear Talent Acquisition Representative: M Bhavya Sree Sponsorship Available: No Relocation Assistance Available: No Duties and Responsibilities: Develop and support data-driven applications with integrated front-end and back-end services. Create responsive web and mobile interfaces using Python, JavaScript, HTML5/CSS3, and modern frameworks. Build efficient back-end services, APIs, and server-side components using Python etc. Design and maintain SQL/NoSQL databases for secure, high-performance data management. Implement CI/CD pipelines, containerization (Docker), and orchestration (Kubernetes) for deployment. Collaborate with cross-functional teams, data scientists, and subject matter experts to align solutions with business goals. Learn tire industry processes and apply them to technical development. Requirements: Significant experience in front-end development using modern frameworks and languages such as React, Angular, JavaScript/TypeScript, HTML5, and CSS3 Significant experience in server-side development using Python Strong understanding of RESTful API design, microservices architecture, and service-oriented design Understanding of modern cloud platforms such as AWS, Azure, or GCP, particularly as they relate to front-end deployment and performance Experience visualizing data sourced from relational or NoSQL databases (e.g., PostgreSQL, MongoDB, DynamoDB) Ability to translate feature requirements and technical design into a working implementation Good teamwork skills - ability to work in a team environment and deliver results on time. Strong communication skills - capable of conveying information concisely to diverse audiences. Application of software design skills and methodologies (algorithms, design patterns, performance optimization, responsive design and testing) and modern DevOps practices (GitHub Actions, CI/CD) Goodyear is an Equal Employment Opportunity and Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to that individual's race, color, religion or creed, national origin or ancestry, sex (including pregnancy), sexual orientation, gender identity, age, physical or mental disability, ethnicity, citizenship, or any other characteristic protected by law. Goodyear is one of the world’s largest tire companies. It employs about 68,000 people and manufactures its products in 53 facilities in 20 countries around the world. Its two Innovation Centers in Akron, Ohio and Colmar-Berg, Luxembourg strive to develop state-of-the-art products and services that set the technology and performance standard for the industry. For more information about Goodyear and its products, go to www.goodyear.com/corporate #Li-Hybrid

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana

On-site

Source: Indeed

Location Hyderabad, Telangana, India Category Technology Careers Job Id JREQ188644 Job Type Full time Hybrid
Job Description The role of the Lead – Site Reliability Engineer is to be hands-on and provide mentorship to other team members on core SRE principles and tools. The lead SRE will participate in end-to-end operational aspects of the production environment. The individual concerned will be able to work on cloud systems, networks, and databases, and help drive incident lifecycle management. As a member of the SRE team, you will also be working closely with the Architects, DevOps, Product, and development teams to ensure we get the most out of the software on the AWS platform. This role requires a highly skilled technology professional with excellent communication skills, a strategic mindset, and strong analytical and troubleshooting skills on the AWS Cloud Platform. Other responsibilities include working with internal business partners to gather requirements, prototyping, architecting, implementing/updating solutions, building and executing test plans, performing quality reviews, managing operations, and triaging and fixing operational issues. Site Reliability Engineers must be able to adjust to constant business change; common types of changes include new requirements, evolving goals and strategies, and emerging technologies.
About the Role: Be hands-on and provide mentorship to a growing SRE team on core SRE principles and tools. Foster a sense of automation in issue resolution; everything possible should be automated, and only when automation can't resolve an issue should people get involved in the resolution. Lead efforts for updating production with new versions/infrastructures as they become available. Lead capacity planning efforts in collaboration with Architects and DevOps engineers to determine changes to infrastructure that are needed to support new load and performance characteristics. Lead engagement with software developers, DevOps, and other infrastructure engineers to integrate software development and delivery from inception to full operation, ensuring robust released software and systems. Ensure the highest level of uptime to meet the customer SLA by implementing system-wide corrections to prevent recurrence of issues. Mentor other SRE team members to further develop their soft and hard skills. Triage, troubleshoot, and resolve issues using golden signals, and go beyond golden signals with additional practices such as chaos engineering and synthetic monitoring to detect failure points; lead game days to test the team's resiliency in incident response and remediation. Lead SRE team members to create and maintain recovery procedures and RCAs in collaboration with other engineering teams. Ensure incidents assigned to the team are managed within agreed SLAs. Ensure alarms are documented in up-to-date Knowledge Base articles. Ensure production infrastructure is up to date with server/security patches and certificates. Continuously improve system and application monitoring and automation. Identify and automate manual workarounds and process improvements. Proactively monitor the availability, latency, scalability, and efficiency of all services. Perform periodic on-call duty as part of the SRE team.
About You: Skilled with cloud operations/administration in Amazon AWS. Tax/Accounting domain experience. Bachelor's or Master's degree in a Computer Science discipline.
5+ years’ experience focussed on Site Reliability Engineering or related position in AWS Cloud Platform. At least 2 AWS Certifications are must. (AWS Sysops Admin and Architects certifications preferred). Experience working with SQL, Windows Servers, Load balancers, Linux Deep experience with AWS, Docker and Kubernetes, CloudFormation, CloudWatch, CodeDeploy, DynamoDB, Lambda, SQS, Amazon FSX, Elastic Search and networking concepts are must. Program at a high level in at least one language such as: Java, C#, Javascript, Python or Ruby. Integration experience with PagerDuty, ServiceNow, Datadog, CloudWatch. Good understanding of Site Reliability Engineering (SRE) philosophies, technologies, platforms and tools, SLO management, incident resolution, and automation; Ability to explain technical concepts in clear, non-technical language Working knowledge of infrastructure components (e.g. routers, load balancers, cloud products, container systems, compute, storage, and networks) Knowledge of security and compliance standards such as SOC/PCI is a plus #LI-NR1 What’s in it For You? Hybrid Work Model: We’ve adopted a flexible hybrid working environment (2-3 days a week in the office depending on the role) for our office-based roles while delivering a seamless experience that is digitally and physically connected. Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset. This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance. Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow’s challenges and deliver real-world solutions. Our Grow My Way programming and skills-first approach ensures you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future. Industry Competitive Benefits: We offer comprehensive benefit plans to include flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing. Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more. We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together. Social Impact: Make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives. Making a Real-World Impact: We are one of the few companies globally that helps its customers pursue justice, truth, and transparency. Together, with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world. About Us Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media. 
Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world leading provider of trusted journalism and news. We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments. At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward. As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals. To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace. We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law.
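One concrete slice of the monitoring and automation work described in this listing is codifying alarms on golden signals instead of creating them by hand. A minimal sketch using boto3 follows; the load balancer, thresholds, and notification topic are hypothetical:

```python
# Create a latency (golden-signal) alarm on an ALB target via CloudWatch.
# All names, ARNs, and thresholds below are placeholders for illustration.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="checkout-p95-latency-high",                      # hypothetical
    Namespace="AWS/ApplicationELB",
    MetricName="TargetResponseTime",
    Dimensions=[{"Name": "LoadBalancer", "Value": "app/checkout-alb/0123456789abcdef"}],
    ExtendedStatistic="p95",                                     # percentile statistic
    Period=60,                                                   # evaluate 1-minute windows
    EvaluationPeriods=5,                                         # 5 consecutive breaches
    Threshold=0.5,                                               # seconds
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:sre-oncall"],  # placeholder topic
    AlarmDescription="p95 latency above 500 ms for 5 consecutive minutes",
)
```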

Posted 3 weeks ago

Apply

0.0 - 5.0 years

0 Lacs

Gurugram, Haryana

On-site

Source: Indeed

Senior Data Engineer(GCP, Python) Gurgaon, India Information Technology 314204 Job Description About The Role: Grade Level (for internal use): 10 S&P Global Mobility The Role: Senior Data Engineer Department overview Automotive Insights at S&P Mobility, leverages technology and data science to provide unique insights, forecasts and advisory services spanning every major market and the entire automotive value chain—from product planning to marketing, sales and the aftermarket. We provide the most comprehensive data spanning the entire automotive lifecycle—past, present and future. With over 100 years of history, unmatched credentials, and the largest base of customers than any other provider, we are the industry benchmark for clients around the world, helping them make informed decisions to capitalize on opportunity and avoid risk. Our solutions are used by nearly every major OEM, 90% of the top 100 tier one suppliers, media agencies, governments, insurance companies, and financial stakeholders to provide actionable insights that enable better decisions and better results. Position summary S&P Global is seeking an experienced and driven Senior data Engineer who is passionate about delivering high-value, high-impact solutions to the world’s most demanding, high-profile clients. The ideal candidate must have at least 5 years of experience in developing and deploying data pipelines on Google Cloud Platform (GCP). They should be passionate about building high-quality, reusable pipelines using cutting-edge technologies. This role involves designing, building, and maintaining scalable data pipelines, optimizing workflows, and ensuring data integrity across multiple systems. The candidate will collaborate with data scientists, analysts, and software engineers to develop robust and efficient data solutions. Responsibilities Design, develop, and maintain scalable ETL/ELT pipelines. Optimize and automate data ingestion, transformation, and storage processes. Work with structured and unstructured data sources, ensuring data quality and consistency. Develop and maintain data models, warehouses, and databases. Collaborate with cross-functional teams to support data-driven decision-making. Ensure data security, privacy, and compliance with industry standards. Troubleshoot and resolve data-related issues in a timely manner. Monitor and improve system performance, reliability, and scalability. Stay up-to-date with emerging data technologies and recommend improvements to our data architecture and engineering practices. What you will need: Strong programming skills using python. 5+ years of experience in data engineering, ETL development, or a related role. Proficiency in SQL and experience with relational (PostgreSQL, MySQL, etc.) and NoSQL (DynamoDB, MongoDB etc…) databases. Proficiency building data pipelines in Google cloud platform(GCP) using services like DataFlow, Cloud Batch, BigQuery, BigTable, Cloud functions, Cloud Workflows, Cloud Composer etc.. Strong understanding of data modeling, data warehousing, and data governance principles. Should be capable of mentoring junior data engineers and assisting them with technical challenges. Familiarity with orchestration tools like Apache Airflow. Familiarity with containerization and orchestration. Experience with version control systems (Git) and CI/CD pipelines. Excellent problem-solving skills and ability to work in a fast-paced environment. Excellent communication skills. Hands-on experience with snowflake is a plus. 
Experience with big data technologies is a plus. Experience in AWS is a plus. Should be able to convert business queries into technical documentation. Education and Experience Bachelor’s degree in Computer Science, Information Systems, Information Technology, or a similar major or Certified Development Program 5+ years of experience building data pipelines using python & GCP (Google Cloud platform). About Company Statement: S&P Global delivers essential intelligence that powers decision making. We provide the world’s leading organizations with the right data, connected technologies and expertise they need to move ahead. As part of our team, you’ll help solve complex challenges that equip businesses, governments and individuals with the knowledge to adapt to a changing economic landscape. S&P Global Mobility turns invaluable insights captured from automotive data to help our clients understand today’s market, reach more customers, and shape the future of automotive mobility. About S&P Global Mobility At S&P Global Mobility, we provide invaluable insights derived from unmatched automotive data, enabling our customers to anticipate change and make decisions with conviction. Our expertise helps them to optimize their businesses, reach the right consumers, and shape the future of mobility. We open the door to automotive innovation, revealing the buying patterns of today and helping customers plan for the emerging technologies of tomorrow. For more information, visit www.spglobal.com/mobility. What’s In It For You? Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values: Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. 
Our benefits include: Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. - Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf - 20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.1 - Middle Professional Tier I (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning) Job ID: 314204 Posted On: 2025-05-30 Location: Gurgaon, Haryana, India
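As a rough sketch of the GCP pipeline work described in this listing, the following Airflow DAG runs a daily BigQuery rollup; the project, dataset, and table names are placeholders, and a production pipeline would add retries, data-quality checks, and alerting:

```python
# Minimal daily ELT DAG (Airflow 2.4+): run a parameterised BigQuery rollup.
# Project, dataset, and table names are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
from google.cloud import bigquery


def load_daily_partition(ds: str) -> None:
    """Aggregate one day of raw events into a reporting table (ds = YYYY-MM-DD)."""
    client = bigquery.Client(project="my-analytics-project")  # placeholder project
    sql = f"""
        INSERT INTO `my-analytics-project.reporting.daily_sales`
        SELECT DATE(event_ts) AS day, region, SUM(amount) AS revenue
        FROM `my-analytics-project.raw.sales_events`
        WHERE DATE(event_ts) = '{ds}'
        GROUP BY day, region
    """
    client.query(sql).result()  # block until the load job finishes


with DAG(
    dag_id="daily_sales_rollup",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(
        task_id="load_daily_partition",
        python_callable=load_daily_partition,
        op_kwargs={"ds": "{{ ds }}"},  # templated execution date
    )
```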

Posted 3 weeks ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: LinkedIn

Key Responsibilities: Design, develop, and maintain robust server-side applications using Node.js. Build scalable and secure RESTful APIs and backend logic to support frontend and mobile apps. Develop reusable, testable, and efficient code with adherence to best coding practices. Integrate AWS services (Lambda, S3, EC2, RDS, DynamoDB, API Gateway, etc.) into the application infrastructure. Write and optimize complex SQL queries, stored procedures, and data models for relational databases. Collaborate with frontend developers, DevOps, and QA teams for end-to-end system development and deployment. Troubleshoot, debug, and optimize performance bottlenecks in production and staging environments. Ensure code quality through code reviews, unit testing, and CI/CD pipeline integration. Maintain clear documentation of APIs, technical processes, and deployments.
Skillset: Node.js: deep understanding of event-driven architecture, asynchronous programming, and Express.js. JavaScript (ES6+): clean, modular coding practices, familiarity with functional programming concepts. SQL: advanced knowledge of MySQL, PostgreSQL, or SQL Server; schema design, joins, indexing, performance optimization. AWS Services: hands-on experience with Lambda (serverless functions), S3 (object storage), EC2 (compute instances), API Gateway, RDS / DynamoDB, CloudWatch (logging and monitoring), and IAM (security and access management).
Good to Have: TypeScript, NoSQL databases (e.g., MongoDB), GraphQL, Docker & Kubernetes, Redis or in-memory caching, CI/CD using Jenkins, GitHub Actions, or AWS.
Soft Skills: Strong analytical and problem-solving skills. Effective communication and collaboration within cross-functional teams. Proactive attitude and ability to work in agile/scrum environments. Adaptability to learn and apply new technologies quickly. (ref:hirist.tech)
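The posting above targets Node.js, but the Lambda-behind-API-Gateway-plus-DynamoDB pattern it describes is runtime-agnostic. A minimal sketch in Python, where every name below is a placeholder rather than something from the posting:

```python
# Lambda handler behind API Gateway (proxy integration) that writes to DynamoDB.
# Table name and payload shape are hypothetical.
import json
import os
import uuid

import boto3

table = boto3.resource("dynamodb").Table(os.environ.get("TABLE_NAME", "orders"))


def handler(event, context):
    """POST /orders -> persist the request body and return the generated id."""
    body = json.loads(event.get("body") or "{}")
    item = {
        "orderId": str(uuid.uuid4()),
        "customerId": body.get("customerId"),
        "status": "NEW",
    }
    table.put_item(Item=item)
    return {
        "statusCode": 201,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(item),
    }
```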

Posted 3 weeks ago

Apply

0 years

0 Lacs

Greater Kolkata Area

On-site

Source: LinkedIn

About The Role We are looking for a highly skilled Rancher AWS Senior Engineer to join our team on a full-time contractual basis. The ideal candidate will possess deep expertise in AWS cloud services, Rancher, Kubernetes, and modern DevOps/DevSecOps practices. You will be instrumental in building and managing scalable, secure, and reliable cloud infrastructure, with a focus on automation.
Responsibilities: Design, deploy, and manage Kubernetes clusters using Rancher on AWS. Develop and maintain CI/CD pipelines using Jenkins, GitHub, and Harness. Automate infrastructure provisioning and configuration using Infrastructure as Code (IaC) principles. Work on containerization using Docker and orchestrate workloads using Kubernetes. Implement DevSecOps practices for secure and compliant deployments. Integrate and manage AWS services such as Lambda, API Gateway, and DynamoDB. Collaborate with cross-functional teams to define infrastructure requirements and implement solutions. Monitor system performance, troubleshoot issues, and ensure system availability.
Skills: Cloud Platforms: AWS (strong hands-on experience). Containerization & Orchestration: Rancher, Kubernetes, Docker. CI/CD Tools: Jenkins, GitHub, Harness. Serverless & API Management: AWS Lambda, API Gateway. Database: DynamoDB. DevOps & DevSecOps: Proven experience in automating and securing cloud infrastructure. (ref:hirist.tech)

Posted 3 weeks ago

Apply

0 years

0 Lacs

Bengaluru East, Karnataka, India

On-site

Source: LinkedIn

Skills: React, JavaScript; Cypress/Jest; API consumption; Git, Scrum, pair programming, peer reviewing; CI/CD with Jenkins pipelines; Kubernetes, Docker; Amazon Web Services and cloud deployments (S3, SNS, SQS, RDS, DynamoDB, etc.), using tools such as Terraform or the AWS CLI.
Power Programmer is an important initiative within Global Delivery to develop a team of Full Stack Developers who will be working on complex engineering projects, platforms, and marketplaces for our clients using emerging technologies. They will be ahead of the technology curve and will be constantly enabled and trained to be polyglots. They are go-getters with a drive to solve end-customer challenges and will spend most of their time designing and coding. End-to-end contribution to technology-oriented development projects. Providing solutions with minimum system requirements and in Agile mode. Collaborate with Power Programmers, the Open Source community, and Tech User groups. Custom development of new platforms and solutions. Opportunities: Work on large-scale digital platforms and marketplaces. Work on complex engineering projects using cloud-native architecture. Work with innovative Fortune 500 companies in cutting-edge technologies. Co-create and develop new products and platforms for our clients. Contribute to Open Source and continuously upskill in the latest technology areas. Incubate tech user groups.

Posted 3 weeks ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Java Developer with AWS - Manager (7+ Years Experience) Key Responsibilities: Design, develop, and maintain scalable Java-based applications on AWS. Implement microservices using Spring Boot and integrate with AWS services like Lambda, S3, and DynamoDB. Collaborate with cross-functional teams in an Agile environment to deliver high-quality software solutions. Ensure application performance, scalability, and security. Strategy Development: Define and implement a robust strategy for identifying, analyzing, and resolving software bottlenecks across various systems and applications, including modernized cloud-based applications. Performance Optimization: Lead efforts to optimize software performance, ensuring systems are running efficiently and effectively, particularly in cloud environments. Technical Leadership: Provide technical guidance and mentorship to engineering teams, fostering a culture of continuous improvement and innovation. Experience in TDD/BDD. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 3 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Country India Location: Building No 12D, Floor 5, Raheja Mindspace, Cyberabad, Madhapur, Hyderabad - 500081, Telangana, India Job Description Job Title – Senior Engineer (Node.js, React, TypeScript/JavaScript & AWS) Preferred Location: Hyderabad, India Full time/Part Time - Full Time Build a career with confidence Carrier Global Corporation, global leader in intelligent climate and energy solutions, is committed to creating solutions that matter for people and our planet for generations to come. From the beginning, we've led in inventing new technologies and entirely new industries. Today, we continue to lead because we have a world-class, diverse workforce that puts the customer at the center of everything we do. Role Summary Lead Engineer (Full Stack) is a crucial role in the product development team at Carrier. This role would focus on design and development of backend and frontend modules by following Carrier software development standards. Role Responsibilities Design and develop AWS IoT/cloud-based applications using TypeScript, Node.js, ReactJS. Work closely with onsite, offshore, and cross-functional teams, Product Management, frontend developers, and SQA teams to effectively use technologies to build and deliver high-quality software on time. Work closely with solutions architects on low-level design. Effectively plan and delegate the sprint work to the development team while also contributing individually. Proactively identify risks and failure modes early in the development lifecycle and develop POCs to mitigate the risks early in the program. This individual is self-directed, highly motivated, and organized with strong analytical thinking and problem-solving skills, and an ability to work on multiple projects and function in a team environment. Should be able to help and direct junior developers in the right direction if needed. Participate in peer code reviews to ensure that respective developers are following the highest standards in implementing the product. Participate in PI planning and identify any challenges on the technology side to implement a specific Epic/Story. Keep an eye on NFRs and ensure our product is meeting all required compliances as per Carrier standards. Minimum Requirements 6-10 years of overall experience in the software domain. At least 4 years of experience in cloud-native applications in AWS. Solid working knowledge of TypeScript, NodeJS, ReactJS. Experience in executing CI/CD processes. Experience in developing APIs [REST, GraphQL, Websockets]. Knowledge of AWS IoT Core and in-depth knowledge of AWS cloud-native services including Kinesis, DynamoDB, Lambda, API Gateway, Timestream, SQS, SNS, CloudWatch. Solid understanding of creating AWS infra using serverless framework/CDK. Experience in implementing alerts and monitoring to support smooth operations. Solid understanding of the Jest framework (unit testing) and integration tests. Experience in cloud cost optimization and securing AWS services. Benefits We are committed to offering competitive benefits programs for all of our employees, and enhancing our programs when necessary. Have peace of mind and body with our health insurance. Make yourself a priority with flexible schedules and leave policy. Drive forward your career through professional development opportunities. Achieve your personal goals with our Employee Assistance Program. Our commitment to you Our greatest assets are the expertise, creativity and passion of our employees.
We strive to provide a great place to work that attracts, develops and retains the best talent, promotes employee engagement, fosters teamwork and ultimately drives innovation for the benefit of our customers. We strive to create an environment where you feel that you belong, with diversity and inclusion as the engine to growth and innovation. We develop and deploy best-in-class programs and practices, providing enriching career opportunities, listening to employee feedback and always challenging ourselves to do better. This is The Carrier Way. Join us and make a difference. Apply Now! Carrier is an Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or veteran status, age or any other federally protected class. Job Applicant's Privacy Notice Click on this link to read the Job Applicant's Privacy Notice
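The requirement above to create AWS infrastructure with the serverless framework or CDK comes down to declaring resources in code. A minimal CDK sketch is shown below in Python (the role itself leans TypeScript), with all resource names hypothetical:

```python
# Minimal CDK v2 stack: a DynamoDB telemetry table plus a Lambda consumer.
# All construct and resource names are hypothetical placeholders.
from aws_cdk import App, Stack, RemovalPolicy
from aws_cdk import aws_dynamodb as dynamodb
from aws_cdk import aws_lambda as _lambda
from constructs import Construct


class TelemetryStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Device telemetry keyed by device id and timestamp, on-demand capacity.
        table = dynamodb.Table(
            self, "TelemetryTable",
            partition_key=dynamodb.Attribute(name="deviceId", type=dynamodb.AttributeType.STRING),
            sort_key=dynamodb.Attribute(name="ts", type=dynamodb.AttributeType.NUMBER),
            billing_mode=dynamodb.BillingMode.PAY_PER_REQUEST,
            removal_policy=RemovalPolicy.DESTROY,  # dev/test only
        )

        # Lambda that ingests events; expects ./lambda/index.py with a handler().
        fn = _lambda.Function(
            self, "IngestFn",
            runtime=_lambda.Runtime.PYTHON_3_11,
            handler="index.handler",
            code=_lambda.Code.from_asset("lambda"),
            environment={"TABLE_NAME": table.table_name},
        )
        table.grant_read_write_data(fn)


app = App()
TelemetryStack(app, "TelemetryStack")
app.synth()
```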

Posted 3 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Country India Location: Building No 12D, Floor 5, Raheja Mindspace, Cyberabad, Madhapur, Hyderabad - 500081, Telangana, India Job Title – Senior Engineer Location: Hyderabad, India Full time/Part Time - Full Time Build a career with confidence Carrier Global Corporation, global leader in intelligent climate and energy solutions is committed to creating solutions that matter for people and our planet for generations to come. From the beginning, we've led in inventing new technologies and entirely new industries. Today, we continue to lead because we have a world-class, diverse workforce that puts the customer at the center of everything we do. Role Summary Senior Engineer (Full Stack) is crucial role in product development team at Carrier. This role would focus on design and development of Backend & Frontend modules by following Carrier software development standards . Role Responsibilities Design, develop AWS IoT/Cloud-based applications using Typescript, Node.Js, ReactJS Work closely with onsite, offshore, and cross functional teams, Product Management, frontend developers, SQA teams to effectively use technologies to build and deliver high quality and on-time delivery Work closely with solutions architects on low level design. Effectively plan and delegate the sprint work to the development team while also contributing individually. Proactively Identify risks and failure modes early in the development lifecycle and develop POCs to mitigate the risks early in the program This individual is self-directed, highly motivated, and organized with strong analytical thinking and problem-solving skills, and an ability to work on multiple projects and function in a team environment. Should be able to help and direct junior developers in a right direction if needed Participate in peer code reviews to ensure that respective developers are following highest standards in implementing the product. Participate in PI planning and identify any challenges in terms of technology side to implement specific Epic/Story. Keep an eye on NFR’s and ensure our product is meeting all required compliances as per Carrier standards. Minimum Requirements 3-7 years of overall experience in Software domain At least 2 years of experience in Cloud native applications in AWS Solid working knowledge of Typescript, NodeJS, ReactJS Experience in executing CI/CD processes Experience in developing APIs [REST, GraphQL, Websockets]. Knowledge of (AWS IoT Core) and In-depth knowledge of AWS cloud native services including Kinesis, DynamoDB, Lambda, API Gateway, Timestream, SQS, SNS, Cloudwatch Solid understanding of creating AWS infra using serverless framework/CDK. Solid understanding of Jest framework (unit testing) and integration tests. Knowledge in cloud cost optimization and securing AWS services. Benefits We are committed to offering competitive benefits programs for all of our employees, and enhancing our programs when necessary. Have peace of mind and body with our health insurance Make yourself a priority with flexible schedules and leave Policy Drive forward your career through professional development opportunities Achieve your personal goals with our Employee Assistance Program. Our commitment to you Our greatest assets are the expertise, creativity and passion of our employees. We strive to provide a great place to work that attracts, develops and retains the best talent, promotes employee engagement, fosters teamwork and ultimately drives innovation for the benefit of our customers. 
We strive to create an environment where you feel that you belong, with diversity and inclusion as the engine to growth and innovation. We develop and deploy best-in-class programs and practices, providing enriching career opportunities, listening to employee feedback and always challenging ourselves to do better. This is The Carrier Way. Join us and make a difference. Apply Now! Carrier is an Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or veteran status, age or any other federally protected class. Job Applicant's Privacy Notice Click on this link to read the Job Applicant's Privacy Notice

Posted 3 weeks ago

Apply

4.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Job Description
Alimentation Couche-Tard Inc. (ACT) is a global Fortune 200 company and a leader in the convenience store and fuel space, with over 16,700 stores in 31 countries serving more than 9 million customers each day. The India Data & Analytics Global Capability Centre is an integral part of ACT's Global Data & Analytics Team, and the Senior Data Scientist will be a key player on this team helping to grow analytics globally at ACT. The hired candidate will partner with multiple departments, including Global Marketing, Merchandising, Global Technology, and Business Units.

About The Role
The incumbent will be responsible for delivering advanced analytics projects that drive business results, including interpreting business needs, selecting the appropriate methodology, data cleaning, exploratory data analysis, model building, and creation of polished deliverables.

Responsibilities

Analytics & Strategy
Analyse large-scale structured and unstructured data; develop deep-dive analyses and machine learning models in retail, marketing, merchandising, and other areas of the business.
Utilize data mining, statistical and machine learning techniques to derive business value from store, product, operations, financial, and customer transactional data.
Apply multiple algorithms or architectures and recommend the best model with an in-depth description to evangelize data-driven business decisions.
Utilize the cloud setup to extract processed data for statistical modelling and big data analysis, and visualization tools to represent large sets of time series/cross-sectional data.

Operational Excellence
Follow industry standards in coding solutions and follow the programming life cycle to ensure standard practices across the project.
Structure hypotheses, build thoughtful analyses, develop underlying data models and bring clarity to previously undefined problems.
Partner with Data Engineering to build, design and maintain core data infrastructure, pipelines and data workflows to automate dashboards and analyses.

Stakeholder Engagement
Work collaboratively across multiple sets of stakeholders – business functions, data engineers and data visualization experts – to deliver on project deliverables.
Articulate complex data science models to business teams and present the insights in easily understandable and innovative formats.

Job Requirements

Education
Bachelor's degree required, preferably with a quantitative focus (Statistics, Business Analytics, Data Science, Math, Economics, etc.).
Master's degree preferred (MBA/MS Computer Science/M.Tech Computer Science, etc.).

Relevant Experience
3–4 years of relevant working experience in a data science/advanced analytics role.

Behavioural Skills
Delivery excellence, business disposition, social intelligence, innovation and agility.

Knowledge
Functional analytics (supply chain analytics, marketing analytics, customer analytics).
Statistical modelling using analytical tools (R, Python, KNIME, etc.) and big data technologies.
Knowledge of statistics and experimental design (A/B testing, hypothesis testing, causal inference).
Practical experience building scalable ML models, feature engineering, model evaluation metrics, and statistical inference.
Practical experience deploying models using MLOps tools and practices (e.g., MLflow, DVC, Docker, etc.).
Strong coding proficiency in Python (Pandas, Scikit-learn, PyTorch/TensorFlow, etc.).
Big data technologies and frameworks (AWS, Azure, GCP, Hadoop, Spark, etc.).
Enterprise reporting systems, relational (MySQL, Microsoft SQL Server, etc.) and non-relational (MongoDB, DynamoDB) database management systems, and data engineering tools.
Business intelligence and reporting (Power BI, Tableau, Alteryx, etc.).
Microsoft Office applications (MS Excel, etc.).
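Since this role emphasises experimental design (A/B testing, hypothesis testing) alongside Python, here is a small illustrative sketch of a two-proportion z-test on conversion rates. The counts are made up for illustration and the 0.05 threshold is simply a common convention, not a prescription from the posting.

```python
# Hypothetical A/B test: two-sided z-test on conversion rates.
# Counts and sample sizes are invented for illustration.
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

conversions = np.array([410, 468])    # control, variant
visitors = np.array([10000, 10000])   # sample size per arm

stat, p_value = proportions_ztest(count=conversions, nobs=visitors)
print(f"z = {stat:.3f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject H0: the variant's conversion rate differs from control.")
else:
    print("Fail to reject H0: no statistically significant difference detected.")
```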

Posted 3 weeks ago

Apply

0 years

0 Lacs

Kolkata, West Bengal, India

On-site


At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY-Consulting – AWS Staff-Senior

The opportunity
We are looking for a skilled AWS Data Engineer to join our growing data team. This role involves building and managing scalable data pipelines that ingest, process, and store data from various sources using modern AWS technologies. You will work with both batch and streaming data and contribute to a robust, scalable data architecture to support analytics, BI, and data science use cases. As a problem-solver with the keen ability to diagnose a client’s unique needs, one should be able to see the gap between where clients currently are and where they need to be. The candidate should be capable of creating a blueprint to help clients achieve their end goal.

Key Responsibilities:
Design and implement data ingestion pipelines from various sources including on-premise Oracle databases, batch files, and Confluent Kafka.
Develop Python producers and AWS Glue jobs for batch data processing.
Build and manage Spark streaming applications on Amazon EMR.
Architect and maintain Medallion Architecture-based data lakes on Amazon S3.
Develop and maintain data sinks in Redshift and Oracle.
Automate and orchestrate workflows using Apache Airflow.
Monitor, debug, and optimize data pipelines for performance and reliability.
Collaborate with cross-functional teams including data analysts, scientists, and DevOps.

Required Skills and Experience:
Good programming skills in Python and Spark (PySpark).
Hands-on experience with Amazon S3, Glue, EMR.
Good SQL knowledge on Amazon Redshift and Oracle.
Proven experience in handling streaming data with Kafka and building real-time pipelines.
Good understanding of data modeling, ETL frameworks, and performance tuning.
Experience with workflow orchestration tools like Airflow.

Nice-to-Have Skills:
Infrastructure as Code using Terraform.
Experience with AWS services like SNS, SQS, DynamoDB, DMS, Athena, and Lake Formation.
Familiarity with DataSync for file movement and medallion architecture for data lakes.
Monitoring and alerting using CloudWatch, Datadog, or Splunk.

Qualifications: BTech / MTech / MCA / MBA

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
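The responsibilities above centre on Glue/PySpark batch processing over a medallion-style S3 data lake. Below is a minimal illustrative sketch, not an EY artifact, of such a bronze-to-silver batch step; the bucket names, paths and columns are invented for illustration only.

```python
# A minimal sketch of a PySpark batch job of the kind an AWS Glue job or EMR step
# might run: read raw files from a "bronze" S3 prefix, apply light cleansing, and
# write partitioned Parquet to a "silver" prefix. Paths and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze-to-silver-orders").getOrCreate()

bronze = spark.read.json("s3://example-datalake/bronze/orders/")  # hypothetical path

silver = (
    bronze
    .dropDuplicates(["order_id"])                      # remove replayed records
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("order_date", F.to_date("order_ts"))   # partition key
    .filter(F.col("amount") > 0)                       # basic validity filter
)

(silver.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-datalake/silver/orders/"))
```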

Posted 3 weeks ago

Apply

1.0 - 4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Position Description
Python Developer with AWS

At CGI, we're a team of builders. We call our employees members because all who join CGI are building their own company - one that has grown to 72,000 professionals located in 40 countries. Founded in 1976, CGI is a leading IT and business process services firm committed to helping clients succeed. We have the global resources, expertise, stability and dedicated professionals needed to achieve results for our clients - and for our members. Come grow with us. Learn more at www.cgi.com.

This is a great opportunity to join a winning team. CGI offers a competitive compensation package with opportunities for growth and professional development. Benefits for full-time, permanent members start on the first day of employment and include a paid time-off program and profit participation and stock purchase plans. We wish to thank all applicants for their interest and effort in applying for this position; however, only candidates selected for interviews will be contacted. No unsolicited agency referrals please.

Job Title: Python Developer with AWS
CGI Position: Software Engineer
Experience: 1-4 years
Main location: Bangalore (1st preference), Gurgaon
Employment Type: Full Time

Your future duties and responsibilities
Hands-on experience in Python for data engineering and delivering data-related projects using AWS technologies such as S3, Lambda, DynamoDB, RDS PostgreSQL, Step Functions, Kinesis, Docker and Kubernetes.
Working knowledge of the AWS environment (S3, CLI, Lambda, RDS PostgreSQL).
The incumbent will be responsible for understanding and developing data flows for contact centres based on Amazon Connect, using Python, Lambda, S3, Kinesis and PostgreSQL, and will work towards creating a positive and innovation-friendly environment.
Programming: Python, SQL (intermediate), boto3, AWS SDK.
Design and develop data flows using AWS Lambda, S3, Kinesis and PostgreSQL.
Experience with Amazon Connect to build call-centre contact flows will be an added advantage.

Accountability / Core Responsibility
Strong experience in delivering projects using Python.
Exposure to working in a global environment, having delivered at least 1-2 projects in Python.
Delivery collaboration and coordination with multiple business partners.
Must have good experience in leading projects.
Good to have: industry knowledge of the insurance industry, with proven experience across multiple clients.
Good to have: implemented the developed methodology on the cloud, using AWS services like S3, Lambda, SQS, SNS, RDS PostgreSQL and DynamoDB.
Good to have: call-centre experience using Amazon Connect to build contact flows (IVR flows).

Required Qualifications To Be Successful In This Role
Bachelor of Technology (B.Tech) in Computer Engineering or Master of Computer Applications (MCA).

Together, as owners, let's turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect and belonging. Here, you'll reach your full potential because you are invited to be an owner from day 1 as we work together to bring our Dream to life. That's why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company's strategy and direction. Your work creates value. You'll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You'll shape your career by joining a company built to grow and last. You'll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons.

Come join our team - one of the largest IT and business consulting services firms in the world.
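To make the "data flows with Lambda, S3 and Kinesis" requirement concrete, here is a minimal illustrative sketch, not CGI's implementation, of a Lambda handler that decodes Kinesis records and lands them in S3 with boto3. The bucket name and key layout are assumptions.

```python
# Hypothetical AWS Lambda handler triggered by a Kinesis stream: each record is
# base64-decoded, parsed as JSON, and written to S3 as an individual object.
import base64
import json
import uuid

import boto3

s3 = boto3.client("s3")
BUCKET = "example-contact-centre-raw"  # hypothetical bucket name


def handler(event, context):
    """Write each decoded Kinesis record to S3 and report how many were processed."""
    records = event.get("Records", [])
    for record in records:
        payload = base64.b64decode(record["kinesis"]["data"])
        body = json.loads(payload)
        key = f"contact-flows/{uuid.uuid4()}.json"
        s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(body).encode("utf-8"))
    return {"processed": len(records)}
```

In a real deployment the batching, error handling and key layout would follow the team's own conventions; this only shows the shape of the flow.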

Posted 3 weeks ago

Apply

5.0 - 8.0 years

22 - 25 Lacs

Bengaluru

Work from Office


Requirements
Years of Experience: 5+ years in GIS development, with a strong emphasis on web-based GIS applications and tools.

Job Description:
We are looking for an experienced and innovative Senior GIS Developer to join our team. The ideal candidate will have an in-depth understanding of Geographic Information Systems (GIS) principles, coupled with expertise in web development technologies. You will play a critical role in designing, implementing, and maintaining GIS applications, leveraging modern web frameworks and tools. This position offers the opportunity to work on cutting-edge GIS projects and contribute to the advancement of spatial data analysis and visualization.

Required Skills:
GIS Fundamentals: Strong understanding of Geographic Information Systems (GIS) principles and concepts. Expertise in spatial data management, map projections, and coordinate systems. Proficiency with GIS tools and libraries, including Leaflet, ArcGIS, or Mapbox. Familiarity with GIS software (e.g., ArcGIS, QGIS) and geospatial data formats (e.g., shapefiles, GeoJSON).
Web Development Skills: Proficiency in JavaScript libraries and frameworks. Strong front-end development skills in HTML, CSS, and responsive design principles. Experience with web frameworks such as Angular (preferred) or similar frameworks.

Desired Skills:
Server-Side Development: Familiarity with server-side languages like Node.js or Python. Experience working with RESTful APIs and web services.
Database Management: Knowledge of spatial databases such as PostGIS, or experience with SQL/NoSQL databases like PostgreSQL, MongoDB, or DynamoDB.
DevOps and Cloud Technologies: Familiarity with CI/CD pipelines for efficient deployment processes. Basic understanding of cloud platforms like AWS, Azure, or Google Cloud.
Testing and Quality Assurance: Experience with automated testing tools and frameworks.
Soft Skills: Excellent analytical and problem-solving skills. Strong communication and collaboration abilities. Ability to work effectively in an agile environment.

Responsibilities:
Design, develop, and maintain GIS applications with a focus on performance, scalability, and usability.
Implement innovative solutions for spatial data visualization and analysis.
Collaborate with cross-functional teams, including product owners, designers, and developers, to deliver high-quality GIS solutions.
Optimize application performance and ensure seamless integration with GIS tools and libraries.
Stay updated on emerging GIS and web development technologies and trends.
Provide technical guidance and mentorship to junior developers in the team.

This role is ideal for someone passionate about GIS and web technologies, looking to take on a leadership position in a challenging and rewarding environment.

Posted 3 weeks ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Position Title: Data Architect
Employment Type: Full time (hybrid)
Location: Hyderabad

Position Summary:
As a Data Architect, you will play a pivotal role in leading and mentoring data engineering teams, architecting and designing robust data solutions on AWS, and serving as the primary technical point of contact for clients. You will leverage your deep expertise in AWS cloud technologies, data engineering best practices, and leadership skills to drive successful project delivery and ensure client satisfaction.

Key Responsibilities:
Technical Leadership: Provide technical guidance and mentorship to data engineering teams, fostering a culture of innovation and continuous improvement.
Solution Architecture: Design and architect scalable, high-performance data solutions on AWS, leveraging services such as S3, Glue, Lambda, EMR, Redshift, DynamoDB, Kinesis, and Athena to meet client requirements.
Client Engagement: Serve as the primary technical point of contact for clients, understanding their business needs and translating them into effective data solutions.
Project Management: Lead and manage end-to-end project delivery, ensuring projects are completed on time, within budget, and to the highest quality standards.
Data Modeling: Develop and maintain comprehensive data models that support analytics, reporting, and machine learning use cases.
Performance Optimization: Continuously monitor and optimize data pipelines and systems to ensure optimal performance and cost-efficiency, utilizing tools like CloudWatch and AWS Cost Explorer.
Technology Evangelism: Stay abreast of the latest AWS technologies and industry trends, and advocate for their adoption within the organization.
Team Collaboration: Foster strong collaboration with data scientists, analysts, and other stakeholders to ensure alignment and successful project outcomes.

Technical Skills and Expertise:
AWS Services: Deep understanding of core AWS services for data engineering, including S3, Glue, Lambda, EMR, Redshift, DynamoDB, Kinesis, Athena, CloudWatch, and IAM.
Programming Languages: Proficiency in Python, SQL, and PySpark for data processing, transformation, and analysis.
Data Warehousing and ETL: Expertise in designing and implementing data warehousing solutions, ETL processes, and data modeling techniques.
Infrastructure as Code (IaC): Experience with tools like CloudFormation or Terraform to automate and manage infrastructure provisioning.
Big Data Technologies: Familiarity with big data frameworks like Apache Spark and Hadoop for handling large-scale data processing.

Requirements:
Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field.
10+ years of experience in data engineering, with at least 3+ years in a technical leadership role.
Deep expertise in AWS cloud technologies and data services.
Proven track record in architecting and delivering complex data solutions on AWS.
Strong leadership, communication, and client-facing skills.
Experience in data modeling, ETL processes, and data warehousing.
Excellent problem-solving, analytical, and decision-making skills.
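As one concrete illustration of the AWS services this role touches (Athena over an S3 data lake), the sketch below runs an ad-hoc Athena query with boto3 and polls for completion. The database, table and results location are hypothetical, and production code would add backoff, timeouts and pagination.

```python
# Hypothetical ad-hoc Athena query against an S3-backed data lake using boto3.
import time

import boto3

athena = boto3.client("athena")


def run_athena_query(sql: str,
                     database: str = "analytics_db",                 # assumed database
                     output: str = "s3://example-athena-results/"):  # assumed results bucket
    qid = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": database},
        ResultConfiguration={"OutputLocation": output},
    )["QueryExecutionId"]

    # Poll until the query finishes (simplified; real code would back off and time out).
    while True:
        state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)

    if state != "SUCCEEDED":
        raise RuntimeError(f"Athena query ended in state {state}")
    return athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]


rows = run_athena_query(
    "SELECT order_date, COUNT(*) AS orders FROM silver_orders GROUP BY 1 LIMIT 10"
)
```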

Posted 3 weeks ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Location: Hyderabad, India - Hybrid

About The Role
As a Senior Software Engineer at Deliveroo, your individual work contributes to achieving goals across multiple teams. While you will work with your team and lead projects, some of your work will contribute outside of your direct remit. You will report to managers and group leads and together deliver the results.

Technical Execution
What you'll be doing
You will improve code structure and architecture, and review code of any scope produced by your team. This also includes work to maximise the efficiency of your team by leading team project planning, foreseeing dependencies and risks, and constructively partnering with other disciplines (e.g. PM, Experience).
You'll aim to simplify the maintenance and operation of production systems, promoting visibility, operational readiness, and health of your team's systems.

Collaboration & Leadership
As well as leading from the front regarding technical execution, you'll build relationships with other engineering teams and identify collaboration opportunities.
You'll break down large pieces of work, guide design and technical/implementation choices, and influence the roadmap within your team.
You will take an active role in the hiring process and conduct engineering interviews. This will also extend to the current team, where you will support the personal growth of colleagues, encouraging efficiency in their roles.

Requirements
We want to emphasise that we don't expect you to meet all of the below but would love you to have experience in some of these areas.
Pride in readable, well-designed, well-tested software.
Experience writing web-based applications in any language, and an interest in learning (Go, Ruby/Rails, Python, Scala, or Rust).
Familiarity and practical experience with relational databases (PostgreSQL, MySQL).
Familiarity and practical experience with web architecture at scale (20krpm and above).
Familiarity and practical experience with "NoSQL" data backends such as Redis, DynamoDB, ElasticSearch, Memcache.

Why Deliveroo?
Our mission is to transform the way you shop and eat, bringing the neighbourhood to your door by connecting consumers, restaurants, shops and riders. We are transforming the way the world eats and shops by making access to food and products more convenient and enjoyable. We give people the opportunity to buy what they want, as they want it, when and where they want it. We are a technology-driven company at the forefront of the most rapidly expanding industry in the world. We are still a small team, making a very large impact, looking to answer some of the most interesting questions out there. We move fast, value autonomy and ownership, and we are always looking for new ideas.

Workplace & Benefits
At Deliveroo we know that people are the heart of the business and we prioritise their welfare. Benefits differ by country, but we offer many benefits in areas including healthcare, well-being, parental leave, pensions, and generous annual leave allowances, including time off to support a charitable cause of your choice. Benefits are country-specific, please ask your recruiter for more information.

Diversity
At Deliveroo, we believe a great workplace is one that represents the world we live in and how beautifully diverse it can be. That means we have no judgement when it comes to any one of the things that make you who you are - your gender, race, sexuality, religion or a secret aversion to coriander. All you need is a passion for (most) food and a desire to be part of one of the fastest-growing businesses in a rapidly growing industry. We are committed to diversity, equity and inclusion in all aspects of our hiring process. We recognise that some candidates may require adjustments to apply for a position or fairly participate in the interview process. If you require any adjustments, please don't hesitate to let us know. We will make every effort to provide the necessary adjustments to ensure you have an equitable opportunity to succeed.

Posted 3 weeks ago

Apply

0 years

0 Lacs

Kanayannur, Kerala, India

On-site


At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY-Consulting – AWS Staff-Senior

The opportunity
We are looking for a skilled AWS Data Engineer to join our growing data team. This role involves building and managing scalable data pipelines that ingest, process, and store data from various sources using modern AWS technologies. You will work with both batch and streaming data and contribute to a robust, scalable data architecture to support analytics, BI, and data science use cases. As a problem-solver with the keen ability to diagnose a client’s unique needs, one should be able to see the gap between where clients currently are and where they need to be. The candidate should be capable of creating a blueprint to help clients achieve their end goal.

Key Responsibilities:
Design and implement data ingestion pipelines from various sources including on-premise Oracle databases, batch files, and Confluent Kafka.
Develop Python producers and AWS Glue jobs for batch data processing.
Build and manage Spark streaming applications on Amazon EMR.
Architect and maintain Medallion Architecture-based data lakes on Amazon S3.
Develop and maintain data sinks in Redshift and Oracle.
Automate and orchestrate workflows using Apache Airflow.
Monitor, debug, and optimize data pipelines for performance and reliability.
Collaborate with cross-functional teams including data analysts, scientists, and DevOps.

Required Skills and Experience:
Good programming skills in Python and Spark (PySpark).
Hands-on experience with Amazon S3, Glue, EMR.
Good SQL knowledge on Amazon Redshift and Oracle.
Proven experience in handling streaming data with Kafka and building real-time pipelines.
Good understanding of data modeling, ETL frameworks, and performance tuning.
Experience with workflow orchestration tools like Airflow.

Nice-to-Have Skills:
Infrastructure as Code using Terraform.
Experience with AWS services like SNS, SQS, DynamoDB, DMS, Athena, and Lake Formation.
Familiarity with DataSync for file movement and medallion architecture for data lakes.
Monitoring and alerting using CloudWatch, Datadog, or Splunk.

Qualifications: BTech / MTech / MCA / MBA

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
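This posting also calls for Python producers feeding Confluent Kafka. As a rough illustration only, a producer using the confluent-kafka client might look like the following; the broker address, topic name and message shape are assumptions, not part of the role description.

```python
# Hypothetical Kafka producer: publish a single JSON event with a delivery callback.
import json

from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "broker.example.internal:9092"})  # assumed broker


def delivery_report(err, msg):
    # Called once per message to confirm delivery or surface an error.
    if err is not None:
        print(f"Delivery failed for key {msg.key()}: {err}")


event = {"order_id": "A-1001", "amount": 249.0, "currency": "INR"}
producer.produce(
    topic="orders.raw",                                # assumed topic
    key=event["order_id"],
    value=json.dumps(event).encode("utf-8"),
    callback=delivery_report,
)
producer.flush()  # Block until all queued messages are delivered.
```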

Posted 3 weeks ago

Apply

0 years

0 Lacs

Trivandrum, Kerala, India

On-site


At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY-Consulting – AWS Staff-Senior

The opportunity
We are looking for a skilled AWS Data Engineer to join our growing data team. This role involves building and managing scalable data pipelines that ingest, process, and store data from various sources using modern AWS technologies. You will work with both batch and streaming data and contribute to a robust, scalable data architecture to support analytics, BI, and data science use cases. As a problem-solver with the keen ability to diagnose a client’s unique needs, one should be able to see the gap between where clients currently are and where they need to be. The candidate should be capable of creating a blueprint to help clients achieve their end goal.

Key Responsibilities:
Design and implement data ingestion pipelines from various sources including on-premise Oracle databases, batch files, and Confluent Kafka.
Develop Python producers and AWS Glue jobs for batch data processing.
Build and manage Spark streaming applications on Amazon EMR.
Architect and maintain Medallion Architecture-based data lakes on Amazon S3.
Develop and maintain data sinks in Redshift and Oracle.
Automate and orchestrate workflows using Apache Airflow.
Monitor, debug, and optimize data pipelines for performance and reliability.
Collaborate with cross-functional teams including data analysts, scientists, and DevOps.

Required Skills and Experience:
Good programming skills in Python and Spark (PySpark).
Hands-on experience with Amazon S3, Glue, EMR.
Good SQL knowledge on Amazon Redshift and Oracle.
Proven experience in handling streaming data with Kafka and building real-time pipelines.
Good understanding of data modeling, ETL frameworks, and performance tuning.
Experience with workflow orchestration tools like Airflow.

Nice-to-Have Skills:
Infrastructure as Code using Terraform.
Experience with AWS services like SNS, SQS, DynamoDB, DMS, Athena, and Lake Formation.
Familiarity with DataSync for file movement and medallion architecture for data lakes.
Monitoring and alerting using CloudWatch, Datadog, or Splunk.

Qualifications: BTech / MTech / MCA / MBA

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
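For the Spark-streaming-on-EMR part of this role, a minimal Structured Streaming sketch of the kind described (Kafka in, Parquet on S3 out) could look like this. The broker, topic, schema and paths are hypothetical, and the job assumes the Spark Kafka connector is available on the cluster.

```python
# Hypothetical Spark Structured Streaming job: consume JSON events from Kafka and
# append them as Parquet to a bronze S3 prefix, with checkpointing for recovery.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("orders-stream").getOrCreate()

schema = StructType([
    StructField("order_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("currency", StringType()),
])

raw = (spark.readStream.format("kafka")
       .option("kafka.bootstrap.servers", "broker.example.internal:9092")  # assumed broker
       .option("subscribe", "orders.raw")                                  # assumed topic
       .load())

orders = (raw
          .select(F.from_json(F.col("value").cast("string"), schema).alias("o"))
          .select("o.*"))

query = (orders.writeStream
         .format("parquet")
         .option("path", "s3://example-datalake/bronze/orders_stream/")
         .option("checkpointLocation", "s3://example-datalake/checkpoints/orders_stream/")
         .outputMode("append")
         .start())
query.awaitTermination()
```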

Posted 3 weeks ago

Apply

4.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


The Data Engineer is accountable for developing high-quality data products to support the Bank's regulatory requirements and data-driven decision making. A Data Engineer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence, they will contribute to business outcomes on an agile team.

Responsibilities
Developing and supporting scalable, extensible, and highly available data solutions.
Deliver on critical business priorities while ensuring alignment with the wider architectural vision.
Identify and help address potential risks in the data supply chain.
Follow and contribute to technical standards.
Design and develop analytical data models.

Required Qualifications & Work Experience
First Class Degree in Engineering/Technology (4-year graduate course).
3 to 4 years' experience implementing data-intensive solutions using agile methodologies.
Experience of relational databases and using SQL for data querying, transformation and manipulation.
Experience of modelling data for analytical consumers.
Ability to automate and streamline the build, test and deployment of data pipelines.
Experience in cloud-native technologies and patterns.
A passion for learning new technologies, and a desire for personal growth, through self-study, formal classes, or on-the-job training.
Excellent communication and problem-solving skills.

Technical Skills (Must Have)
ETL: Hands-on experience building data pipelines. Proficiency in at least one of the data integration platforms such as Ab Initio, Apache Spark, Talend and Informatica.
Big Data: Exposure to 'big data' platforms such as Hadoop, Hive or Snowflake for data storage and processing.
Data Warehousing & Database Management: Understanding of data warehousing concepts, relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design.
Data Modeling & Design: Good exposure to data modeling techniques; design, optimization and maintenance of data models and data structures.
Languages: Proficient in one or more programming languages commonly used in data engineering, such as Python, Java or Scala.
DevOps: Exposure to concepts and enablers - CI/CD platforms, version control, automated quality control management.

Technical Skills (Valuable)
Ab Initio: Experience developing Co>Op graphs; ability to tune for performance. Demonstrable knowledge across the full suite of Ab Initio toolsets, e.g., GDE, Express>IT, Data Profiler, Conduct>IT, Control>Center, Continuous>Flows.
Cloud: Good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc. Demonstrable understanding of underlying architectures and trade-offs.
Data Quality & Controls: Exposure to data validation, cleansing, enrichment and data controls.
Containerization: Fair understanding of containerization platforms like Docker and Kubernetes.
File Formats: Exposure to working with event/file/table formats such as Avro, Parquet, Protobuf, Iceberg, Delta.
Others: Basics of job schedulers like Autosys; basics of entitlement management.
Certification on any of the above topics would be an advantage.
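The "Data Quality & Controls" and PySpark items above can be illustrated with a small, hypothetical validation step of the kind run before publishing a dataset downstream; the path and column names are invented for illustration.

```python
# Hypothetical data-quality gate: simple completeness and validity checks on a
# Parquet dataset with PySpark; the job fails loudly if any check is violated.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.read.parquet("s3://example-datalake/silver/trades/")  # assumed path

total = df.count()
checks = {
    "null_trade_id": df.filter(F.col("trade_id").isNull()).count(),
    "negative_notional": df.filter(F.col("notional") < 0).count(),
    "duplicate_trade_id": total - df.dropDuplicates(["trade_id"]).count(),
}

failed = {name: n for name, n in checks.items() if n > 0}
if failed:
    raise ValueError(f"Data-quality checks failed on {total} rows: {failed}")
print(f"All checks passed on {total} rows.")
```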
------------------------------------------------------
Job Family Group: Technology
Job Family: Applications Development
Time Type: Full time
------------------------------------------------------

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law.

If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.

Posted 3 weeks ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY-Consulting – AWS Staff-Senior

The opportunity
We are looking for a skilled AWS Data Engineer to join our growing data team. This role involves building and managing scalable data pipelines that ingest, process, and store data from various sources using modern AWS technologies. You will work with both batch and streaming data and contribute to a robust, scalable data architecture to support analytics, BI, and data science use cases. As a problem-solver with the keen ability to diagnose a client’s unique needs, one should be able to see the gap between where clients currently are and where they need to be. The candidate should be capable of creating a blueprint to help clients achieve their end goal.

Key Responsibilities:
Design and implement data ingestion pipelines from various sources including on-premise Oracle databases, batch files, and Confluent Kafka.
Develop Python producers and AWS Glue jobs for batch data processing.
Build and manage Spark streaming applications on Amazon EMR.
Architect and maintain Medallion Architecture-based data lakes on Amazon S3.
Develop and maintain data sinks in Redshift and Oracle.
Automate and orchestrate workflows using Apache Airflow.
Monitor, debug, and optimize data pipelines for performance and reliability.
Collaborate with cross-functional teams including data analysts, scientists, and DevOps.

Required Skills and Experience:
Good programming skills in Python and Spark (PySpark).
Hands-on experience with Amazon S3, Glue, EMR.
Good SQL knowledge on Amazon Redshift and Oracle.
Proven experience in handling streaming data with Kafka and building real-time pipelines.
Good understanding of data modeling, ETL frameworks, and performance tuning.
Experience with workflow orchestration tools like Airflow.

Nice-to-Have Skills:
Infrastructure as Code using Terraform.
Experience with AWS services like SNS, SQS, DynamoDB, DMS, Athena, and Lake Formation.
Familiarity with DataSync for file movement and medallion architecture for data lakes.
Monitoring and alerting using CloudWatch, Datadog, or Splunk.

Qualifications: BTech / MTech / MCA / MBA

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
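The orchestration piece of this role uses Apache Airflow. A bare-bones, Airflow 2.x-style DAG wiring an extract, transform and load step in sequence might look like the sketch below; the DAG id, schedule and task bodies are placeholders, not an actual EY workflow.

```python
# Hypothetical Airflow DAG chaining three placeholder tasks for a daily batch run.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_from_oracle(**_):
    print("Extract batch files / Oracle tables to the bronze S3 prefix")


def run_glue_transform(**_):
    print("Trigger the Glue/PySpark job that builds the silver layer")


def load_to_redshift(**_):
    print("COPY curated data into Redshift")


with DAG(
    dag_id="daily_orders_pipeline",      # assumed DAG id
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",          # assumed schedule
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_from_oracle)
    transform = PythonOperator(task_id="transform", python_callable=run_glue_transform)
    load = PythonOperator(task_id="load", python_callable=load_to_redshift)

    extract >> transform >> load
```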

Posted 3 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies