
2080 DynamoDB Jobs - Page 14

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

5.0 years

0 Lacs

Udaipur, Rajasthan, India

On-site

For a quick response, please fill out the form: Job Application Form 34043 - Data Scientist - Senior I - Udaipur
https://docs.google.com/forms/d/e/1FAIpQLSeBy7r7b48Yrqz4Ap6-2g_O7BuhIjPhcj-5_3ClsRAkYrQtiA/viewform

3–5 years of experience in Data Engineering or similar roles
Strong foundation in cloud-native data infrastructure and scalable architecture design
Build and maintain reliable, scalable ETL/ELT pipelines using modern cloud-based tools
Design and optimize Data Lakes and Data Warehouses for real-time and batch processing
Ingest, transform, and organize large volumes of structured and unstructured data
Collaborate with analysts, data scientists, and backend engineers to define data needs
Monitor, troubleshoot, and improve pipeline performance, cost-efficiency, and reliability
Implement data validation, consistency checks, and quality frameworks
Apply data governance best practices and ensure compliance with privacy and security standards
Use CI/CD tools to deploy workflows and automate pipeline deployments
Automate repetitive tasks using scripting, workflow tools, and scheduling systems
Translate business logic into data logic while working cross-functionally
Strong in Python and familiar with libraries like pandas and PySpark
Hands-on experience with at least one major cloud provider (AWS, Azure, GCP)
Experience with ETL tools like AWS Glue, Azure Data Factory, GCP Dataflow, or Apache NiFi
Proficient with storage systems like S3, Azure Blob Storage, GCP Cloud Storage, or HDFS
Familiar with data warehouses like Redshift, BigQuery, Snowflake, or Synapse
Experience with serverless computing like AWS Lambda, Azure Functions, or GCP Cloud Functions
Familiar with data streaming tools like Kafka, Kinesis, Pub/Sub, or Event Hubs
Proficient in SQL, and knowledge of relational (PostgreSQL, MySQL) and NoSQL (MongoDB, DynamoDB) databases
Familiar with big data frameworks like Hadoop or Apache Spark
Experience with orchestration tools like Apache Airflow, Prefect, GCP Workflows, or ADF Pipelines
Familiarity with CI/CD tools like GitLab CI, Jenkins, Azure DevOps
Proficient with Git, GitHub, or GitLab workflows
Strong communication, collaboration, and problem-solving mindset
Experience with data observability or monitoring tools (bonus points)
Contributions to internal data platform development (bonus points)
Comfort working in data mesh or distributed data ownership environments (bonus points)
Experience building data validation pipelines with Great Expectations or similar tools (bonus points)
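As a rough, illustrative sketch (not part of the posting) of the ETL-and-validation work described above, a minimal batch step in Python with pandas and boto3 might look like the following; the file layout, column names, and bucket name are hypothetical:

```python
import pandas as pd
import boto3

def run_etl(source_path: str, bucket: str, key: str) -> None:
    # Extract: load a raw CSV extract (hypothetical layout with order_id, order_date, amount).
    df = pd.read_csv(source_path, parse_dates=["order_date"])

    # Validate: simple consistency checks before anything moves downstream.
    if df["order_id"].isna().any():
        raise ValueError("order_id contains nulls - failing the batch")
    if (df["amount"] < 0).any():
        raise ValueError("negative amounts found - failing the batch")

    # Transform: derive a partition column and aggregate per day.
    df["order_day"] = df["order_date"].dt.date
    daily = df.groupby("order_day", as_index=False)["amount"].sum()

    # Load: write Parquet (needs pyarrow installed) and push it to the data lake bucket.
    daily.to_parquet("/tmp/daily_totals.parquet", index=False)
    boto3.client("s3").upload_file("/tmp/daily_totals.parquet", bucket, key)

if __name__ == "__main__":
    run_etl("orders.csv", "example-data-lake", "curated/daily_totals.parquet")
```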

Posted 1 week ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description

The role of the Lead – Site Reliability Engineer is to be hands-on and provide mentorship to other team members on core SRE principles and tools. The lead SRE will participate in end-to-end operational aspects of the Production environment. The individual concerned will be able to work on cloud systems, networks, and databases and help drive incident lifecycle management. As a member of the SRE team, you will also be working closely with the Architects, DevOps, Product, and development teams to ensure we get the most out of the software on the AWS platform. This role requires a highly skilled technology professional with excellent communication skills, a strategic mindset, and strong analytical and troubleshooting skills on the AWS Cloud Platform. Other responsibilities include working with internal business partners to gather requirements, prototyping, architecting, implementing/updating solutions, building and executing test plans, performing quality reviews, managing operations, and triaging and fixing operational issues. Site Reliability Engineers must be able to adjust to constant business change; common types of changes include new requirements, evolving goals and strategies, and emerging technologies.

About the Role: Be hands-on and provide mentorship to a growing SRE team on core SRE principles and tools. Foster a sense of automation in issue resolution; everything possible should be automated, and only when automation can't resolve an issue should people get involved in the resolution. Lead efforts for updating production with new versions/infrastructures as they become available. Lead capacity planning efforts in collaboration with Architects and DevOps engineers to determine changes to infrastructure that are needed to support new load and performance characteristics. Lead engagement with software developers, DevOps, and other infrastructure engineers to integrate software development and delivery from inception to full operation, ensuring robust released software and systems. Ensure the highest level of uptime to meet the customer SLA by implementing system-wide corrections to prevent recurrence of issues. Mentor other SRE team members to further develop their soft and hard skills. Triage, troubleshoot, and resolve issues using golden signals. Go beyond golden signals with additional practices such as chaos engineering and synthetic monitoring to detect failure points, and lead Game Days to test the team's resiliency in incident response and remediation. Lead SRE team members to create and maintain Recovery Procedures and RCAs in collaboration with other engineering teams. Ensure incidents assigned to the team are managed within agreed SLAs. Ensure alarms are documented in up-to-date Knowledge Base Articles. Ensure Production infrastructure is up to date with server/security patches and certificates. Drive continuous improvement of system and application monitoring and automation. Identify and automate manual workarounds and process improvements. Proactively monitor the availability, latency, scalability, and efficiency of all services. Perform periodic on-call duty as part of the SRE team.

About You: Skilled with cloud operations/administration in Amazon AWS. Tax/Accounting domain experience. Bachelor's or Master's degree in a Computer Science discipline. 5+ years' experience focused on Site Reliability Engineering or a related position on the AWS Cloud Platform. At least 2 AWS certifications are a must (AWS SysOps Administrator and Architect certifications preferred).
Experience working with SQL, Windows Servers, load balancers, and Linux. Deep experience with AWS, Docker and Kubernetes, CloudFormation, CloudWatch, CodeDeploy, DynamoDB, Lambda, SQS, Amazon FSx, Elasticsearch, and networking concepts is a must. Ability to program at a high level in at least one language such as Java, C#, JavaScript, Python, or Ruby. Integration experience with PagerDuty, ServiceNow, Datadog, CloudWatch. Good understanding of Site Reliability Engineering (SRE) philosophies, technologies, platforms and tools, SLO management, incident resolution, and automation. Ability to explain technical concepts in clear, non-technical language. Working knowledge of infrastructure components (e.g. routers, load balancers, cloud products, container systems, compute, storage, and networks). Knowledge of security and compliance standards such as SOC/PCI is a plus.

What's in it For You?

Hybrid Work Model: We've adopted a flexible hybrid working environment (2-3 days a week in the office depending on the role) for our office-based roles while delivering a seamless experience that is digitally and physically connected.

Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset. This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance.

Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow's challenges and deliver real-world solutions. Our Grow My Way programming and skills-first approach ensures you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future.

Industry Competitive Benefits: We offer comprehensive benefit plans to include flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing.

Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more. We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together.

Social Impact: Make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives.

Making a Real-World Impact: We are one of the few companies globally that helps its customers pursue justice, truth, and transparency. Together, with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world.

About Us

Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media.
Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world leading provider of trusted journalism and news. We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments. At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward. As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals. To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace. We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law. More information on requesting an accommodation here. Learn more on how to protect yourself from fraudulent job postings here. More information about Thomson Reuters can be found on thomsonreuters.com.
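To make the "golden signals" duty above concrete, here is a small, hedged sketch (mine, not from the posting) of how an SRE might define a latency alarm with boto3 and CloudWatch; the alarm name, load balancer dimension, threshold, and SNS topic ARN are all placeholders:

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Alarm on average response latency for an ALB target group (latency is one of the
# four golden signals). Every name and threshold below is an example only.
cloudwatch.put_metric_alarm(
    AlarmName="example-api-high-latency",
    Namespace="AWS/ApplicationELB",
    MetricName="TargetResponseTime",
    Dimensions=[{"Name": "LoadBalancer", "Value": "app/example-alb/0123456789abcdef"}],
    Statistic="Average",
    Period=60,                       # evaluate one-minute datapoints...
    EvaluationPeriods=5,             # ...over five consecutive minutes
    Threshold=1.5,                   # seconds
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:example-oncall-topic"],
)
```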

Posted 1 week ago

Apply

4.0 years

0 Lacs

India

Remote

Position: Full Stack Developer
Experience: 4+ Years
Location: Remote
Mandatory Skills: .NET (Core / MVC), React.js, AWS (EC2, Lambda, S3, RDS, DynamoDB, etc.)

Key Responsibilities: Design, develop, and maintain backend services using .NET Core/MVC and frontend components using React.js. Build and scale backend systems on AWS cloud infrastructure using services like EC2, Lambda, S3, RDS, DynamoDB, etc. Collaborate closely with frontend developers to ensure seamless integration across the stack. Write clean, efficient, and maintainable code that adheres to best practices and coding standards. Debug, optimize, and troubleshoot backend issues to ensure optimal performance and reliability. Manage AWS services with a focus on uptime, scalability, cost-efficiency, and security. Conduct code reviews and provide technical mentorship to junior developers. Participate in setting up and maintaining CI/CD pipelines. Stay updated on current trends and best practices in full stack development and cloud computing.

Required Skills & Qualifications: 4+ years of experience in backend development using .NET (.NET Core / MVC). Minimum 2 years of hands-on experience with React.js. At least 1 year of experience with AWS cloud services. Strong experience building and maintaining RESTful APIs and working with microservices architecture. Proficiency with AWS SDKs, CLI tools, and cloud resource management. Understanding of cloud security best practices. Familiarity with containerization technologies (e.g., Docker) and orchestration tools (e.g., Kubernetes). Proficient with Git and working in Agile environments. Solid understanding of both SQL and NoSQL databases. Ability to work independently in a remote team environment.

Preferred Qualifications: AWS Certifications (e.g., AWS Certified Developer, Solutions Architect). Experience with monitoring and logging tools (e.g., CloudWatch, ELK stack). Exposure to serverless computing and event-driven architecture. Familiarity with Infrastructure as Code (IaC) tools such as Terraform or CloudFormation. Experience with DevOps practices and tools, including CI/CD pipelines.

Posted 1 week ago

Apply

7.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Clearwater Analytics' mission is to become the world's most trusted and comprehensive technology platform for investment reporting, accounting, and analytics. With our team, you will partner with the most sophisticated and innovative institutional investors around the world. If you are infectiously passionate about what you do, intensely committed to clients, and driven by continuous innovation and improvement... We want you to apply! A career in Software Development will provide you with the opportunity to participate in all phases of the software development lifecycle, including design, implementation, testing, and deployment of quality software. With the use of advanced technology, you and your team will work in an agile environment producing designs and code that our customers will use every day.

Responsibilities: Developing quality software that is used by some of the world's largest technology firms, fixed income asset managers, and custodian banks. Participate in Agile meetings to contribute to development strategies and the product roadmap. Owning critical processes that are highly available and scalable. Producing tremendous feature enhancements and reacting quickly to emerging technologies. Encouraging collaboration and stimulating creativity. Helping mentor entry-level developers. Contributing to design and architectural decisions. Providing leadership and expertise to our ever-growing workforce. Testing and validating, in development and production, code that they own, deploy, and monitor. Understanding, responding to, and addressing customer issues with empathy and in a timely manner. Can independently move a major feature or service through an entire lifecycle of design, development, deployment, and maintenance. Deep knowledge in multiple teams' domains; broad understanding of CW systems. Creates documentation of system requirements and behavior across domains. Willingly takes on unowned and undesirable work that helps team velocity and quality. Is in touch with client needs and understands their usage. Consulted on quality, scaling, and performance requirements before development on new features begins. Understands, finds, and proposes solutions for systemic problems. Leads the technical breakdown of deliverables and capabilities into features and stories. Expert in unit testing techniques and design for testability; contributes to automated system testing requirements and design. Improves code quality and architecture to ensure testability and maintainability. Understands, designs, and tests for impact/performance on dependencies and adjacent components and services. Builds and maintains code in the context and awareness of the larger system. Helps less experienced engineers troubleshoot and solve problems. Active in mentoring and training of others inside and outside their division.

Requirements: Strong problem-solving skills. Experience with an object-oriented or functional language. Bachelor's degree in Computer Science or a related field. 7+ years of professional experience in industry-leading programming languages (Java/Python). Background in SDLC & Agile practices. Experience in monitoring production systems. Experience with Machine Learning. Experience working with Cloud Platforms (AWS/Azure/GCP). Experience working with messaging systems such as Cloud Pub/Sub, Kafka, or SQS/SNS. Must be able to communicate (speak, read, comprehend, write) in English.
Desired Experience or Skills: Ability to build scalable backend services (Microservices, polyglot storage, messaging systems, data processing pipelines). Possess strong analytical skills, with excellent problem-solving abilities in the face of ambiguity. Excellent written and verbal skills. Ability to contribute to software design documentation, presentation, sequence diagrams and present complex technical designs in a concise manner. Professional experience in building distributed software systems, specializing in big data and NoSQL database technologies (Hadoop, Spark, DynamoDB, HBase, Hive, Cassandra, Vertica). Ability to work with relational and NoSQL databases Strong problem-solving skills. Strong organizational, interpersonal, and communication skills. Detail oriented. Motivated, team player.
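The messaging-systems experience this role asks for (Kafka, SQS/SNS, Pub/Sub) often boils down to a simple produce/consume loop. A minimal, illustrative boto3 SQS sketch follows; the queue URL and payload fields are hypothetical, not from the posting:

```python
import json
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/example-index-events"  # placeholder

def publish_event(event: dict) -> None:
    # Producer side: enqueue a JSON payload for downstream processing.
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(event))

def drain_once(max_messages: int = 10) -> None:
    # Consumer side: long-poll, process, then delete each message explicitly.
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=max_messages,
        WaitTimeSeconds=10,
    )
    for msg in resp.get("Messages", []):
        payload = json.loads(msg["Body"])
        print("processing", payload)
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])

if __name__ == "__main__":
    publish_event({"index": "EXAMPLE500", "action": "rebalance"})
    drain_once()
```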

Posted 1 week ago

Apply

5.0 years

0 Lacs

India

On-site

Job Title: Sr Golang Backend Developer Employment Type: Full-Time Location - Ahmedabad On-site Experience Required: 5+ Years 🌟 Join Techiebutler as a Senior Golang Backend Engineer! 🌟 Are you a seasoned backend engineer passionate about building scalable, high-performance systems with cutting-edge technologies? Do you thrive in a collaborative environment where innovation and excellence are the norm? If so, we want you to be a key player in shaping the future of our backend architecture! At Techiebutler, we are on a mission to revolutionize the industry with innovation and ingenuity. As we refine and unify our tech stack, we’re looking for a Senior Golang Backend Engineer to drive technical excellence and lead backend development initiatives. Your Role: As our Senior Golang Backend Engineer, you will take the lead in designing, developing, and optimizing backend services that power our products. You'll leverage your expertise in Go, microservices, cloud technologies, and distributed systems to build robust and scalable solutions. Your key responsibilities will include: Designing & Developing scalable and high-performance backend services using Go. Optimizing Systems for reliability, efficiency, and maintainability. Establishing Technical Standards to ensure best practices in development, testing, and deployment. Mentoring & Code Reviews to uplift team capabilities and improve overall code quality. Monitoring & Troubleshooting using observability tools like DataDog, Prometheus, or New Relic. Cross-Team Collaboration on API design, integration, and architecture decisions. What We’re Looking For: Experience: Min 5+ years of backend development experience Golang Expertise : At least 3 years of hands-on experience with Go Cloud & Serverless: Proficiency with AWS services, including Lambda, DynamoDB, and SQS Containerisation & Orchestration: Hands-on experience with Docker and Kubernetes for deploying and managing services Microservices & Distributed Systems : Strong experience in designing, implementing, and maintaining microservices architectures Concurrency & Performance: Deep understanding of concurrent programming patterns and performance optimization techniques Domain-Driven Design (DDD): Practical experience applying DDD principles in software design Testing & Quality Assurance : Expertise in automated testing frameworks, TDD, and BDD CI/CD & DevOps : Familiarity with GitLab CI, GitHub Actions, and Jenkins for continuous integration and deployment Monitoring & Observability: Experience with centralized logging (ELK Stack) and distributed tracing (OpenTelemetry) Collaboration & Communication: Strong ability to work in cross-functional teams, participate in code reviews, and articulate technical concepts effectively Agile Development: Experience working in Agile/Scrum environments for iterative development and continuous improvement Why Join Us? Work with cutting-edge technologies and shape the future of our platform. A collaborative and inclusive work environment that values innovation and teamwork. Competitive salary. Career growth and development opportunities in a fast-paced tech company. If you're excited to work on impactful projects, solve challenging problems, and contribute to a high-performing team, we want to hear from you! Apply now and be part of our journey!

Posted 1 week ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Summary
We are seeking an experienced software engineer to join our team at Almonds-AI. The ideal candidate will have a strong background in Node.js, AWS services, and database management systems.

Key Responsibilities
Design, develop, and maintain robust server-side applications using Node.js. Build scalable and secure RESTful APIs and backend logic to support frontend and mobile apps. Develop reusable, testable, and efficient code with adherence to best coding practices. Integrate AWS services (Lambda, S3, EC2, RDS, DynamoDB, API Gateway, etc.) into the application infrastructure. Write and optimize complex SQL queries, stored procedures, and data models for relational databases. Collaborate with frontend developers, DevOps, and QA teams for end-to-end system development and deployment. Troubleshoot, debug, and optimize performance bottlenecks in production and staging environments. Ensure code quality through code reviews, unit testing, and CI/CD pipeline integration.

Skillset
Node.js: deep understanding of event-driven architecture, asynchronous programming, and Express.js.
JavaScript (ES6+): clean, modular coding practices, familiarity with functional programming concepts.
SQL: advanced knowledge of MySQL, PostgreSQL, or SQL Server; schema design, joins, indexing, performance optimization.
AWS Services: Lambda (serverless functions), S3 (object storage), EC2 (compute instances), API Gateway, RDS / DynamoDB, CloudWatch (logging and monitoring), IAM (security and access control).

Desirable Skills
TypeScript, NoSQL databases (e.g., MongoDB), GraphQL, Docker & Kubernetes, Redis or in-memory caching, CI/CD using Jenkins, GitHub Actions, or AWS CodePipeline.

Requirements
Strong analytical and problem-solving skills. Effective communication and collaboration within cross-functional teams. Proactive attitude and ability to work in agile/scrum environments. Adaptability to learn and apply new technologies quickly.

Posted 1 week ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

When you join Verizon

You want more out of a career. A place to share your ideas freely, even if they're daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work, and play by connecting them to what brings them joy. We do what we love, driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together, lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the V Team Life.

What You'll Be Doing...

You will be part of a World Class Container Platform team that builds and operates highly scalable Kubernetes-based container platforms (EKS, OCP, OKE, and GKE) at a large scale for Global Technology Solutions at Verizon, a top 20 Fortune 500 company. This individual will have a high level of technical expertise and daily hands-on implementation working in a product team developing services in two-week sprints using agile principles. This entails programming and orchestrating the deployment of feature sets into the Kubernetes CaaS platform along with building Docker containers via a fully automated CI/CD pipeline utilizing AWS, Jenkins, Ansible playbooks, CI/CD tools and processes (Jenkins, JIRA, GitLab, ArgoCD), Python, Shell Scripts, or any other scripting technologies. You will have autonomous control over day-to-day activities allocated to the team as part of agile development of new services. Automation and testing of different platform deployments, maintenance, and decommissioning. Full Stack Development. Participate in POC (Proof of Concept) technical evaluations for new technologies for use in the cloud.

What we're looking for...

You'll need to have: Bachelor's degree or four or more years of experience. GitOps CI/CD workflows (ArgoCD, Flux) and working in an Agile Ceremonies model. Address Jira tickets opened by platform customers. Strong expertise in SDLC and Agile Development. Experience in design, development, and implementation of scalable React/Node based applications (full stack developer). Experience with development of HTTP/RESTful APIs, Microservices. Experience with Serverless Lambda Development, AWS EventBridge, AWS Step Functions, DynamoDB, Python. Database experience (RDBMS, NoSQL, etc.). Familiarity integrating with existing web application portals. Strong backend development experience with languages including Golang (preferred), Spring Boot, and Python. Experience with GitLab CI/CD, Jenkins, Helm, Terraform, Artifactory. Strong development of K8S tools/components which may include standalone utilities/plugins, cert-manager plugins, etc. Development and working experience with Service Mesh lifecycle management and configuring, troubleshooting applications deployed on Service Mesh and Service Mesh-related issues. Strong Terraform and/or Ansible and Bash scripting experience. Effective code review, quality, performance tuning experience, test-driven development. Certified Kubernetes Application Developer (CKAD). Excellent cross-collaboration and communication skills.

Even better if you have one or more of the following: Working experience with security tools such as Sysdig, Crowdstrike, Black Duck, Xray, etc. Experience with OWASP rules and mitigating security vulnerabilities using security tools like Fortify, Sonarqube, etc.
Experience with monitoring tools like New Relic (NRDOT), OTLP. Certified Kubernetes Administrator (CKA). Certified Kubernetes Security Specialist (CKS). Red Hat Certified OpenShift Administrator. Development experience with the Operator SDK. Experience creating validating and/or mutating webhooks. Familiarity with creating custom EnvoyFilters for the Istio service mesh and with cost optimization tools like Kubecost or CloudHealth to implement right-sizing recommendations.

If Verizon and this role sound like a fit for you, we encourage you to apply even if you don't meet every "even better" qualification listed above.

Where you'll be working: In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager.

Scheduled Weekly Hours: 40

Diversity and Inclusion: We're proud to be an equal opportunity employer. At Verizon, we know that diversity makes us stronger. We are committed to a collaborative, inclusive environment that encourages authenticity and fosters a sense of belonging. We strive for everyone to feel valued, connected, and empowered to reach their potential and contribute their best. Check out our diversity and inclusion page to learn more.

Locations: Hyderabad, India; Chennai, India
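For context on the serverless Lambda / DynamoDB / Python development listed above, here is a small illustrative handler (a sketch of the general pattern, not Verizon's code); the table name, key schema, and environment variable are assumptions:

```python
import json
import os
import boto3

# The table name would normally come from the function's environment configuration.
TABLE_NAME = os.environ.get("TICKETS_TABLE", "example-platform-tickets")
table = boto3.resource("dynamodb").Table(TABLE_NAME)

def handler(event, context):
    # An API Gateway proxy integration delivers the request body as a JSON string.
    body = json.loads(event.get("body") or "{}")

    item = {
        "ticket_id": body["ticket_id"],        # partition key (assumed schema)
        "status": body.get("status", "OPEN"),
        "summary": body.get("summary", ""),
    }
    table.put_item(Item=item)

    return {
        "statusCode": 201,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"created": item["ticket_id"]}),
    }
```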

Posted 1 week ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

When you join Verizon

You want more out of a career. A place to share your ideas freely, even if they're daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work, and play by connecting them to what brings them joy. We do what we love, driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together, lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the V Team Life.

What You'll Be Doing...

You will be part of a World Class Container Platform team that builds and operates highly scalable Kubernetes-based container platforms (EKS, OCP, OKE, and GKE) at a large scale for Global Technology Solutions at Verizon, a top 20 Fortune 500 company. This individual will have sound technical expertise and daily hands-on implementation working in a product team developing services in two-week sprints using agile principles. This entails programming and orchestrating the deployment of feature sets into the Kubernetes CaaS platform along with building Docker containers via a fully automated CI/CD pipeline utilizing AWS, Jenkins, Ansible playbooks, CI/CD tools and processes (Jenkins, JIRA, GitLab, ArgoCD), Python, Shell Scripts, or any other scripting technologies. You will have autonomous control over day-to-day activities allocated to the team as part of agile development of new services. Automation and testing of different platform deployments, maintenance, and decommissioning. Full Stack Development.

What we're looking for...

You'll need to have: Bachelor's degree or two or more years of experience. Address Jira tickets opened by platform customers. GitOps CI/CD workflows (ArgoCD, Flux) and working in an Agile Ceremonies model. Expertise in SDLC and Agile Development. Design, develop, and implement scalable React/Node based applications (full stack developer). Experience with development of HTTP/RESTful APIs, Microservices. Experience with Serverless Lambda Development, AWS EventBridge, AWS Step Functions, DynamoDB, Python, RDBMS, NoSQL, etc. Experience with OWASP rules and mitigating security vulnerabilities using security tools like Fortify, Sonarqube, etc. Familiarity integrating with existing web application portals, and backend development experience with languages including Golang (preferred), Spring Boot, and Python. Experience with GitLab, GitLab CI/CD, Jenkins, Helm, Terraform, Artifactory. Development of K8S tools/components which may include standalone utilities/plugins, cert-manager plugins, etc. Development and working experience with Service Mesh lifecycle management and configuring, troubleshooting applications deployed on Service Mesh and Service Mesh-related issues. Experience with Terraform and/or Ansible. Bash scripting experience. Effective code review, quality, performance tuning experience, test-driven development. Certified Kubernetes Application Developer (CKAD). Excellent cross-collaboration and communication skills.

Even better if you have one or more of the following: GitOps CI/CD workflows (ArgoCD, Flux) and working in an Agile Ceremonies model. Working experience with security tools such as Sysdig, Crowdstrike, Black Duck, Xray, etc.
Networking of Microservices. Solid understanding of Kubernetes networking and troubleshooting. Experience with monitoring tools like New Relic. Working experience with Kiali and Jaeger lifecycle management, and assisting app teams on how they could leverage these tools for their observability needs. K8S SRE tools for troubleshooting. Certified Kubernetes Administrator (CKA). Certified Kubernetes Security Specialist (CKS). Red Hat Certified OpenShift Administrator.

Your benefits package will vary depending on the country in which you work. *Subject to business approval.

Where you'll be working: In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager.

Scheduled Weekly Hours: 40

Diversity and Inclusion: We're proud to be an equal opportunity employer. At Verizon, we know that diversity makes us stronger. We are committed to a collaborative, inclusive environment that encourages authenticity and fosters a sense of belonging. We strive for everyone to feel valued, connected, and empowered to reach their potential and contribute their best. Check out our diversity and inclusion page to learn more.

Locations: Hyderabad, India; Chennai, India

Posted 1 week ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Hello Visionary! We know that the only way a business thrives is if our people are growing. That's why we always put our people first. Our global, diverse team would be happy to support you and challenge you to grow in new ways. Who knows where our shared journey will take you?

We are looking for a Software Engineer – Python + AWS

You'll make a difference by: Establishing, maintaining, and evolving concepts in continuous integration and deployment (CI/CD) pipelines for existing and new services. Collaborating with Engineering and Operations teams to improve automation of workflows, effective monitoring, infrastructure, code testing, scripting capabilities, and deployment with lower costs and reduced non-conformance costs. System troubleshooting and problem resolution across various applications. Participating in the on-call rotation. Conducting root cause analysis of incidents. Implementing enhancements to the monitoring solution to minimize false positives and identify service health regressions. Communicating findings in verbal and written format to the application team. Generating weekly data reports summarizing the health of the application.

You'll win us over by: You must have a BE / B.Tech / MCA / ME / M.Tech qualification with 3-5 years of confirmed ability. Must have experience in Windows & Linux and networking & security topics (for example: IAM, authorization). Awareness of DevOps principles, Design Patterns, Enterprise Architecture Patterns, Microservice Architecture, and the ability to learn/use a wide variety of open-source technologies and tools. You are an expert and love to work on large projects in an agile way (SAFe framework). Experience in AWS services: Serverless Services (Lambda, DynamoDB, API Gateway), Container Services (like ECS, ECR), Monitoring Services (like CloudWatch, X-Ray), Orchestration Tools (Kubernetes, Docker), Security Services (IAM, Secrets Manager), Network Services (VPC), EC2, Backup, S3, CDK, CloudFormation, Step Functions. Experience in scripting languages: Python, Bash.

Create a better #TomorrowWithUs! This role, based in Bangalore, is an individual contributor position. You may be required to visit other locations within India and internationally. In return, you'll have the opportunity to work with teams shaping the future. At Siemens, we are a collection of over 312,000 minds building the future, one day at a time, worldwide. We are dedicated to equality and welcome applications that reflect the diversity of the communities we serve. All employment decisions at Siemens are based on qualifications, merit, and business need. Bring your curiosity and imagination, and help us shape tomorrow. Find out more about Siemens careers at: www.siemens.com/careers
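Since the posting names the AWS CDK alongside Lambda and DynamoDB, here is a brief, hedged sketch of how such a serverless stack can be declared in Python with CDK v2; all construct IDs, the asset path, and the key name are invented for illustration:

```python
from aws_cdk import App, Stack, aws_lambda as _lambda, aws_dynamodb as dynamodb
from constructs import Construct

class ServiceStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # On-demand DynamoDB table with a simple string partition key (assumed schema).
        table = dynamodb.Table(
            self, "JobsTable",
            partition_key=dynamodb.Attribute(name="job_id", type=dynamodb.AttributeType.STRING),
            billing_mode=dynamodb.BillingMode.PAY_PER_REQUEST,
        )

        # Lambda function whose handler code lives in a local lambda_src/ directory.
        fn = _lambda.Function(
            self, "ApiHandler",
            runtime=_lambda.Runtime.PYTHON_3_11,
            handler="app.handler",
            code=_lambda.Code.from_asset("lambda_src"),
            environment={"TABLE_NAME": table.table_name},
        )
        table.grant_read_write_data(fn)  # least-privilege IAM grant for the function

app = App()
ServiceStack(app, "ExampleServiceStack")
app.synth()
```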

Posted 1 week ago

Apply

2.0 - 4.0 years

4 - 6 Lacs

Mumbai, Maharashtra

Work from Office

About the Role: Grade Level (for internal use): 10

S&P Global Dow Jones Indices

The Role: Development Engineer – Python Full Stack

S&P Dow Jones Indices, a global leader in providing investable and benchmark indices to the financial markets, is looking for a Development Engineer with full stack experience to join our technology team. This is mostly a back-end development role but will also support UI development work.

The Team: You will be part of a global technology team comprising Dev, QA, and BA teams and will be responsible for analysis, design, development, and testing.

Responsibilities and Impact: You will be working on one of the key systems that is responsible for calculating re-balancing weights and asset selections for S&P indices. Ultimately, the output of this team is used to maintain some of the most recognized and important investable assets globally. Development of RESTful web services and databases; supporting UI development requirements. Interfacing with various AWS infrastructure and services, deploying to a Docker environment. Coding, documentation, testing, debugging, and tier-3 support. Work directly with stakeholders and the technical architect to formalize/document requirements for both supporting the existing application and new initiatives. Perform application and system performance tuning and troubleshoot performance issues. Coordinate closely with the QA team and the scrum master to optimize team velocity and task flow. Help establish and maintain technical standards via code reviews and pull requests.

What's in it for you: This is an opportunity to work on a team of highly talented and motivated engineers at a highly respected company. You will work on new development as well as enhancements to existing functionality.

What We're Looking For: Basic Qualifications: 7-10 years of IT experience in application development and support, primarily in back-end API and database development roles with at least some UI development experience. Bachelor's degree in Computer Science, Information Systems, Engineering, or, in lieu, a demonstrated equivalence in work experience. Proficiency in modern Python (3.10+; minimum 4 years of dedicated, recent Python experience). AWS services experience including API Gateway, ECS / Docker, DynamoDB, S3, Kafka, SQS. SQL database experience, with at least 1 year of Postgres. Python libraries experience including Pydantic, SQLAlchemy, and at least one of Flask, FastAPI, or Sanic, focusing on creating RESTful endpoints for data services. JavaScript / TypeScript experience and at least one of Vue 3, React, or Angular. Strong unit testing skills with PyTest or UnitTest, and API testing using Postman or Bruno. CI/CD build process experience using Jenkins. Experience with software testing (unit testing, integration testing, test-driven development). Strong work ethic and good communication skills.

Additional Preferred Qualifications: Basic understanding of financial markets (stocks, funds, indices, etc.). Experience working in mission-critical enterprise organizations. A passion for creating high quality code and broad unit test coverage. Ability to understand complex business problems, break them into smaller executable parts, and delegate.

About S&P Dow Jones Indices: At S&P Dow Jones Indices, we provide iconic and innovative index solutions backed by unparalleled expertise across the asset-class spectrum. By bringing transparency to the global capital markets, we empower investors everywhere to make decisions with conviction.
We're the largest global resource for index-based concepts, data, and research, and home to iconic financial market indicators, such as the S&P 500 and the Dow Jones Industrial Average. More assets are invested in products based upon our indices than any other index provider in the world. With over USD 4 trillion in passively managed assets linked to our indices and over USD 13 trillion benchmarked to our indices, our solutions are widely considered indispensable in tracking market performance, evaluating portfolios, and developing investment strategies. S&P Dow Jones Indices is a division of S&P Global (NYSE: SPGI). S&P Global is the world's foremost provider of credit ratings, benchmarks, analytics, and workflow solutions in the global capital, commodity, and automotive markets. With every one of our offerings, we help many of the world's leading organizations navigate the economic landscape so they can plan for tomorrow, today.

What's In It For You

Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments, and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People:

Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.
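The posting's core stack (Pydantic plus one of Flask/FastAPI/Sanic for RESTful data services) can be illustrated with a short, hedged FastAPI sketch; the index codes, route, and in-memory store below are invented stand-ins for the real databases:

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="example-index-weights")

# Hypothetical in-memory store standing in for Postgres/DynamoDB.
WEIGHTS: dict[str, dict[str, float]] = {"EXAMPLE500": {"AAPL": 0.07, "MSFT": 0.065}}

class WeightResponse(BaseModel):
    index_code: str
    constituents: dict[str, float]

@app.get("/indices/{index_code}/weights", response_model=WeightResponse)
def get_weights(index_code: str) -> WeightResponse:
    # Return the current rebalancing weights for one index, or 404 if unknown.
    if index_code not in WEIGHTS:
        raise HTTPException(status_code=404, detail="unknown index")
    return WeightResponse(index_code=index_code, constituents=WEIGHTS[index_code])
```

A matching PyTest using FastAPI's TestClient would exercise the endpoint the same way the posting's "API testing using Postman or Bruno" requirement implies.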

Posted 1 week ago

Apply

5.0 years

0 Lacs

Kochi, Kerala, India

On-site

Full Stack Engineer - Java Angular
Location - Kochi
Years of Exp - 5+ Years

We are looking for dynamic and creative full stack web developers who have the knowledge and experience to help shape our category-leading solutions. You will be part of a growing team which is responsible for developing and maintaining products and services that aid clients in the property insurance industry in quickly accessing reliable, current risk data, enabling more informed business decisions while enhancing the customer experience.

What the job is like: You will work closely with a cross-functional team of developers, QA engineers, product owners, and designers to build cutting edge single page web applications. Work with a modern technology stack including Angular, TypeScript, Sass, HTML, Node/npm, Java, PostgreSQL, MongoDB, and AWS. Designing, coding, testing, documenting, and debugging single page web applications to ensure they remain in "category killer" status. Install, configure, and administer your own workstation software, including editors, development servers, and client applications.

Skills Required: Minimum of a Bachelor's Degree in Computer Science / Software Engineering or an equivalent degree from a four-year college or university with a minimum 3.0 GPA, with 2-4 years of related work experience. Work experience with Java, JavaScript, Angular, TypeScript, HTML, CSS, Sass, XML, SQL. Ability to create simple and well-designed solutions to complex software problems. Dedication to excellence and championship work ethic. Knowledge of internet client/server technologies and experience building enterprise single page web applications. Knowledge of PC and web-based software testing on both client and server sides. Team-player mindset with strong communication and problem-solving skills.

What would put you above everyone else: Experience with the following: Multi-layered Software Architectures, JUnit, Agile Development Methodology, Jasmine, Spring MVC, Apache Maven, Apache Tomcat, Relational databases (PostgreSQL, etc.), NoSQL databases (MongoDB, Cassandra, Amazon DynamoDB, Amazon Aurora, etc.), Angular, TypeScript, Sass, Node/npm, REST, AWS Lambda, Jira, Bitbucket, Concourse.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Description We have an opportunity to impact your career and provide an adventure where you can push the limits of what's possible. As a Lead Software Engineer at JPMorgan Chase within the Consumer and Community Banking, you are an integral part of an agile team that works to enhance, build, and deliver trusted market-leading technology products in a secure, stable, and scalable way. As a core technical contributor, you are responsible for conducting critical technology solutions across multiple technical areas within various business functions in support of the firm’s business objectives. Job Responsibilities Lead a team of cloud engineers, fostering a culture of innovation and continuous improvement. Collaborate with technical teams and business stakeholders to propose and implement cloud solutions that meet current and future needs. Define and drive the technical target state of cloud products, ensuring alignment with strategic goals. Participate in architecture governance bodies to ensure compliance with best practices and standards. Evaluate and provide feedback on new cloud technologies, recommending solutions for future state architecture. Oversee the design, development, and deployment of cloud-based solutions on AWS, utilizing services such as EC2, S3, Lambda, and RDS. Integrate DevOps practices, including Infrastructure as Code (IaC) using tools like Terraform and AWS CloudFormation, and Configuration Management with Ansible or Chef. Establish and maintain Continuous Integration/Continuous Deployment (CI/CD) pipelines using Jenkins, GitLab CI, or AWS CodePipeline. Identify opportunities to automate remediation of recurring issues to improve operational stability of cloud applications and systems. Lead evaluation sessions with external vendors, startups, and internal teams to assess architectural designs and technical credentials. Required Qualifications, Capabilities, And Skills Formal training or certification in cloud engineering concepts with 5+ years of applied experience. Hands on experience in programming, analytical & logical skills of C# & .net core. Hands on experience in the background tasks with hosted services. Experience in developing secure, high performance, highly available and complex API's. Experience in AWS services like S3, SNS, SQS, Lambda, DynamoDB. Experience in Micro-services architecture. Advanced proficiency in one or more programming languages. Expertise in automation and continuous delivery methods. Proficient in all aspects of the Software Development Life Cycle, with a focus on cloud technologies. Advanced understanding of agile methodologies such as CI/CD, Application Resiliency, and Security. Demonstrated proficiency in cloud applications and technical processes within a technical discipline (e.g., cloud, artificial intelligence, machine learning, mobile, etc.). Practical cloud-native experience, particularly with AWS services and architecture, including VPC, IAM, and CloudWatch. Preferred Qualifications, Capabilities, And Skills In-depth knowledge of the financial services industry and their IT systems. Advanced knowledge of cloud software, applications, and architecture disciplines. Ability to evaluate current and emerging cloud technologies to recommend the best solutions for the future state architecture.

Posted 1 week ago

Apply

5.0 years

0 Lacs

India

Remote

🚀 We're Hiring: Senior Backend Engineer (Remote | Full-Time)
Experience: 3–5 years (backend focused)

Join our small, fast-paced team where autonomy and deep technical ownership are the norm. We're building the backend architecture of a modern B2B platform using cutting-edge tech like .NET 8, event sourcing, CQRS, and AWS serverless infrastructure.

What You'll Work On:
🔧 Build scalable backend services with .NET 8, C#, PostgreSQL, DynamoDB
🧠 Design event-sourced architectures and implement CQRS in real-world production
✅ Write clean, well-tested code using a TDD-first approach
☁️ Operate AWS serverless components like Lambda, SQS, SNS, API Gateway, IAM
🌍 Contribute to multi-tenant and multi-region architecture
🤝 Collaborate closely with product, security & infra teams

Must-Have Skills:
✔️ 3+ years of backend experience with C#/.NET
✔️ Strong understanding of enterprise architecture (DDD, CQRS, microservices)
✔️ Passion for TDD and clean code
✔️ Hands-on with PostgreSQL, DynamoDB, and AWS services

Nice to Have:
✨ Experience with event sourcing in production
💳 Fintech, payments, or secure data handling experience
🌐 Exposure to financial APIs or B2B platforms
🏗️ Worked on multi-tenant or multi-region systems

📩 If you thrive in ownership-driven environments and love solving real problems with elegant solutions, we'd love to talk to you. Apply now or reach out directly!

#backendjobs #dotnet #aws #eventSourcing #CQRS #remotejobs #softwareengineering #hiring
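Event sourcing and CQRS, which this posting centers on, boil down to: commands validate and emit events, state is rebuilt by replaying events, and queries read from separate projections. The posting's stack is .NET 8; the sketch below uses Python purely to illustrate the idea, with hypothetical account events:

```python
from dataclasses import dataclass, field
from typing import List

# --- Events: the write model's immutable source of truth ------------------
@dataclass(frozen=True)
class FundsDeposited:
    account_id: str
    amount: int  # minor units

@dataclass(frozen=True)
class FundsWithdrawn:
    account_id: str
    amount: int

# --- Aggregate: state is derived by applying events in order --------------
@dataclass
class Account:
    account_id: str
    balance: int = 0
    pending_events: List[object] = field(default_factory=list)

    def apply(self, event) -> None:
        if isinstance(event, FundsDeposited):
            self.balance += event.amount
        elif isinstance(event, FundsWithdrawn):
            self.balance -= event.amount

    # Command handler (write side in CQRS): validate, then emit a new event.
    def withdraw(self, amount: int) -> None:
        if amount > self.balance:
            raise ValueError("insufficient funds")
        event = FundsWithdrawn(self.account_id, amount)
        self.apply(event)
        self.pending_events.append(event)

def replay(account_id: str, history: List[object]) -> Account:
    # Rebuild current state from the stored event stream.
    account = Account(account_id)
    for event in history:
        account.apply(event)
    return account

# The query side would read from a separately maintained projection;
# here we only demonstrate replay and a command.
acct = replay("acct-1", [FundsDeposited("acct-1", 10_000)])
acct.withdraw(2_500)
print(acct.balance)  # 7500
```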

Posted 1 week ago

Apply

5.0 - 10.0 years

15 - 22 Lacs

Pune, Chennai, Bengaluru

Hybrid

5-8 years of experience in backend development with a strong focus on Python. Proven experience with AWS serverless technologies, including Lambda, DynamoDB, and other related services. Hands-on experience with Terraform for infrastructure as code.
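To ground the Python + AWS serverless requirement above, here is a small, hedged boto3 sketch of a DynamoDB query of the kind such a backend typically performs; the table name, key names, and values are illustrative, and a real deployment would provision the table through Terraform as the posting suggests:

```python
import boto3
from boto3.dynamodb.conditions import Key

# Table and attribute names are placeholders, not from the posting.
table = boto3.resource("dynamodb").Table("example-orders")

def orders_for_customer(customer_id: str, since: str) -> list[dict]:
    # Query one customer's partition, newest first, bounded by the sort key.
    resp = table.query(
        KeyConditionExpression=Key("customer_id").eq(customer_id)
        & Key("created_at").gte(since),
        ScanIndexForward=False,
    )
    return resp["Items"]

if __name__ == "__main__":
    print(orders_for_customer("cust-123", "2024-01-01"))
```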

Posted 1 week ago

Apply

6.0 - 11.0 years

20 - 35 Lacs

Bengaluru

Remote

First American (India) Private Limited ("FAI") is a Global Capability Centre (GCC) of First American Financial Corporation (FAF: NYSE) - a leading provider of title insurance, settlement services, and risk solutions for real estate transactions since 1889. First American also provides title plant management, home warranty, and banking, trust & wealth management services. With campuses in Bangalore, Hyderabad, and Salem, FAI delivers Software Development, IT infrastructure, Data & Analytics, back-office, and knowledge-processing operations to support First American's global operations in the US, UK, Australia & Canada.

We are seeking accomplished .NET full-stack developers to join our team at FAI. The selected candidate will play a vital role in designing, coding, testing, and maintaining software applications that support the operations of our organisation. As an integral member of our software development team, you will also be responsible for developing back-end components to aid our front-end developers. We have a couple of positions open for different tech stacks, as mentioned below.

Position 1: Required Skills: He/She is expected to have good hands-on experience in MS Office, MS Visio, and any prototyping tools. He/She must possess good experience in SDLC and Agile/Scrum methodologies. Tech Stack - C#, .NET, React.js, Web API, Azure, TypeScript, JS. Excellent communication, good analysis and problem-solving skills, strong technical skills, design, coding, code review and troubleshooting, and the ability to coach/mentor a team.

Position 2: Required Skills:
Front End: Must Have - Angular; Nice to Have - Pug, ASP.NET Web Forms, MVC
Back End: Must Have - C#, Node.js, TypeScript, .NET Framework, .NET Core, Web API
DB: Must Have - SQL, DynamoDB; Nice to Have - PostgreSQL, OpenSearch
Cloud: Must Have - AWS
Other Tools: Must Have - JIRA; Nice to Have - Selenium, Cypress, Pluralsight Flow, Slack, Confluence

Experience: Five plus years
Work Location: Remote

If you are interested, please share your updated CV and fill in the below details:
Total Experience:
The core technical skills you are working on:
Current CTC:
Expected CTC:
Notice Period:
Current Location:
Availability for Interview:
Position suitable:

Best Regards
Padmavathi
HR Consultant, Talent Acquisition
Mobile/WhatsApp: +91 8892771029
Email: pavenkataramaiah@firstam.com
http://www.firstam.co.in

Posted 1 week ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About Us: MUFG Bank, Ltd. is Japan’s premier bank, with a global network spanning in more than 40 markets. Outside of Japan, the bank offers an extensive scope of commercial and investment banking products and services to businesses, governments, and individuals worldwide. MUFG Bank’s parent, Mitsubishi UFJ Financial Group, Inc. (MUFG) is one of the world’s leading financial groups. Headquartered in Tokyo and with over 360 years of history, the Group has about 120,000 employees and offers services including commercial banking, trust banking, securities, credit cards, consumer finance, asset management, and leasing. The Group aims to be the world’s most trusted financial group through close collaboration among our operating companies and flexibly respond to all the financial needs of our customers, serving society, and fostering shared and sustainable growth for a better world. MUFG’s shares trade on the Tokyo, Nagoya, and New York stock exchanges. MUFG Global Service Private Limited: Established in 2020, MUFG Global Service Private Limited (MGS) is 100% subsidiary of MUFG having offices in Bengaluru and Mumbai. MGS India has been set up as a Global Capability Centre / Centre of Excellence to provide support services across various functions such as IT, KYC/ AML, Credit, Operations etc. to MUFG Bank offices globally. MGS India has plans to significantly ramp-up its growth over the next 18-24 months while servicing MUFG’s global network across Americas, EMEA and Asia Pacific About the Role: Position Title: Data Engineer or Data Visualization Analyst Corporate Title: AVP Reporting to: Vice President Job Profile Position details: Cloud Data Engineer with a strong technology background and hands-on experience working in an enterprise environment, designing, and implementing data warehouses, data lakes, and data marts for large financial institutions. Alternatively, Data Visualization Analyst with BI & Analytics experience with exposure to or experience in Data Engineering or Data Pipelines can be considered. In this role you will work with technology and business leads to build or enhance critical enterprise applications both on-prem and in the cloud (AWS) along with Modern Data Platform such as Snowflake and Data Virtualization tool such as Starburst. Successful candidates will possess in-depth knowledge of current and emerging technologies and demonstrate a passion for designing and building elegant solutions and for continuous self-improvement. Roles and Responsibilities: Manage data analysis and data integration of disparate systems Create a semantic layer for data virtualization that connects to heterogenous data repositories Develop reports and dashboards using Tableau, Power BI, or similar BI tools as assigned Assist Data Management Engineering team (either for Data Pipelines Engineering or Data Service & Data Access Engineering) for ETL or BI design and other framework related items Work with business users to translate functional specifications into technical designs for implementation and deployment. Extract, transform, and load large volumes of structured and unstructured data from various sources into AWS data lakes or modern data platforms. Assist with Data Quality Controls as assigned Work with cross functional team members to develop prototypes, produce design artifacts, develop components, perform, and support SIT and UAT testing, triaging and bug fixing. Optimize and fine-tune data pipelines jobs for performance and scalability. 
Implement data quality and data validation processes to ensure data accuracy and integrity. Provide problem-solving expertise and complex analysis of data to develop business intelligence integration designs. Convert physical data integration models and other design specifications to source code. Ensure high quality and optimum performance of data integration systems to meet business solutions.

Job Requirements: Bachelor's Degree (or foreign equivalent degree) in Information Technology, Information Systems, Computer Science, Software Engineering, or a related field. Experience in the financial services or banking industry is preferred. 5+ years of experience working as a Data Engineer, with a focus on building data pipelines and processing large datasets. AWS certifications on Data-related specialties are a plus.

Business Acumen – 15%: Knowledge of Banking & Financial Services Products (such as Loans, Deposits, Forex, etc.). Knowledge of Operational/MIS Reports, Risk and Regulatory Reporting for a US Bank is a plus.

Data Skills – 25%: Must have proficiency in Data Warehousing concepts, Data Lake & Data Mesh concepts, Data Modeling, Databases, Data Governance, Data Security/Protection, and Data Access. Solid understanding of data modeling, database design, and ETL principles. Familiarity with data governance, data security, and compliance practices in cloud environments.

Tech Skills – 50%: 5+ years of expertise in HiveQL and Python programming, with experience in Spark and Scala for big data processing and analysis. 2+ years of strong proficiency in AWS services, including AWS Glue, Redshift, EMR, RDS, Kinesis, S3, Athena, DynamoDB, Step Functions, and Lambda. 3+ years of experience with Data Visualization Tools such as Tableau or Power BI. 2+ years of experience with ETL technologies such as Informatica PowerCenter or SSIS, coupled with the most recent 2-3 years on cloud ETL technologies. 2+ years of experience in dealing with data pipelines associated with modern data platforms such as Snowflake or Databricks. Exposure to Data Virtualization tools such as Starburst or Denodo is a plus. Strong problem-solving skills and the ability to optimize and fine-tune data pipelines and Spark jobs for performance. Experience working with data lakes, data warehouses, and distributed computing systems. Experience with the Modern Data Stack and Cloud Technologies is a must.

Human Skills – 10%: Excellent communication and collaboration skills, with the ability to work effectively in a team environment.
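To give a concrete feel for the Spark, S3, and data-quality work this role describes, here is a brief, hedged PySpark sketch (paths, column names, and checks are invented for illustration; a real pipeline would run on Glue or EMR as the posting indicates):

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example-trades-curation").getOrCreate()

# Read raw extracts from a landing-zone prefix (illustrative path).
raw = spark.read.parquet("s3://example-landing/trades/")

# Basic data-quality gate: fail the batch if keys are missing or duplicated.
null_keys = raw.filter(F.col("trade_id").isNull()).count()
dupes = raw.groupBy("trade_id").count().filter(F.col("count") > 1).count()
if null_keys or dupes:
    raise RuntimeError(f"DQ failure: {null_keys} null keys, {dupes} duplicate keys")

# Light transformation, then write a partitioned curated layer for BI tools.
curated = (
    raw.withColumn("trade_date", F.to_date("trade_timestamp"))
       .withColumn("notional_usd", F.col("quantity") * F.col("price_usd"))
)
curated.write.mode("overwrite").partitionBy("trade_date").parquet("s3://example-curated/trades/")
```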

Posted 1 week ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Join our digital revolution in NatWest Digital X
In everything we do, we work to one aim: to make digital experiences which are effortless and secure. So we organise ourselves around three principles: engineer, protect, and operate. We engineer simple solutions, we protect our customers, and we operate smarter. Our people work differently depending on their jobs and needs. From hybrid working to flexible hours, we have plenty of options that help our people to thrive. This role is based in India and as such all normal working days must be carried out in India.
Job Description
Join us as a Solution Architect
This is an opportunity for an experienced Solution Architect to help us define the high-level technical architecture and design for a key data analytics and insights platform that powers the personalised customer engagement initiatives of the business.
You’ll define and communicate a shared technical and architectural vision of end-to-end designs that may span multiple platforms and domains.
Take on this exciting new challenge and hone your technical capabilities while advancing your career and building your network across the bank.
We're offering this role at vice president level.
What you'll do
We’ll look to you to influence and promote collaboration across platform and domain teams on solution delivery. Partnering with platform and domain teams, you’ll elaborate the solution and its interfaces, validating technology assumptions, evaluating implementation alternatives, and creating the continuous delivery pipeline. You’ll also provide analysis of options and deliver end-to-end solution designs using the relevant building blocks, as well as producing designs for features that allow frequent incremental delivery of customer value.
On top of this, you’ll be:
Owning the technical design and architecture development that aligns with bank-wide enterprise architecture principles, security standards, and regulatory requirements
Participating in activities to shape requirements, validating designs and prototypes to deliver change that aligns with the target architecture
Promoting adaptive design practices to drive collaboration of feature teams around a common technical vision using continuous feedback
Making recommendations of potential impacts to existing and prospective customers of the latest technology and customer trends
Engaging with the wider architecture community within the bank to ensure alignment with enterprise standards
Presenting solutions to governance boards and design review forums to secure approvals
Maintaining up-to-date architectural documentation to support audits and risk assessment
The skills you'll need
As a Solution Architect, you’ll bring expert knowledge of application architecture and of business, data, or infrastructure architecture, with working knowledge of industry architecture frameworks such as TOGAF or ArchiMate. You’ll also need an understanding of Agile and contemporary methodologies, with experience of working in Agile teams. A certification in cloud solutions such as AWS Solutions Architect is desirable, while an awareness of agentic AI-based application architectures using LLMs such as OpenAI models and agentic frameworks such as LangGraph or CrewAI will be advantageous.
Furthermore, you’ll need:
Strong experience in solution design, enterprise architecture patterns, and cloud-native applications, including the ability to produce multiple views to highlight different architectural concerns
Familiarity with big data processing in the banking industry
Hands-on experience with AWS services, including but not limited to S3, Lambda, EMR, DynamoDB, and API Gateway
An understanding of big data processing using frameworks or platforms like Spark, EMR, Kafka, Apache Flink, or similar
Knowledge of real-time data processing, event-driven architectures, and microservices
A conceptual understanding of data modelling and analytics, machine learning, or deep-learning models
The ability to communicate complex technical concepts clearly to peers and leadership-level colleagues

Posted 1 week ago

Apply

3.0 years

6 - 24 Lacs

Chennai, Tamil Nadu, India

On-site

Backend Node.js Engineer (AWS)
Industry: Information Technology Services – Cloud Application Development and Managed AWS Solutions.
Role & Responsibilities
Design and code server-side modules in Node.js aligned to twelve-factor principles.
Build event-driven microservices on AWS using Lambda, API Gateway, DynamoDB, and SQS.
Optimise REST and GraphQL APIs for high throughput, low latency, and security best practices.
Create automated test suites and integrate builds into CI/CD pipelines using CodePipeline and Jenkins.
Instrument services with CloudWatch metrics, logs, and alarms to ensure 99.9% uptime.
Collaborate with product, DevOps, and QA to deliver sprint commitments and mentor junior engineers.
Skills & Qualifications
Must-Have
3+ years of professional Node.js back-end development.
Hands-on experience with the AWS serverless stack: Lambda, API Gateway, DynamoDB.
Strong proficiency with Express.js or Fastify frameworks.
Expertise in building and securing RESTful APIs.
Working knowledge of Git, Docker, and CI/CD workflows.
On-site availability at the India delivery center.
Preferred
TypeScript experience in production.
Exposure to GraphQL and event sourcing.
Understanding of IaC using CloudFormation or Terraform.
Experience with performance profiling and APM tools.
Knowledge of container orchestration on ECS or EKS.
Benefits & Culture
High-impact work on greenfield cloud products for global brands.
Cohesive team culture focused on craftsmanship, learning, and innovation.
Competitive salary, annual performance bonus, and AWS certification sponsorship.
Skills: terraform,fastify,node.js,microservices,eks,ci/cd,api gateway,express.js,aws,restful apis,aws lambda,typescript,sqs,graphql,git,cloudformation,dynamodb,ecs,docker,lambda

Posted 1 week ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Our Company
Changing the world through digital experiences is what Adobe’s all about. We give everyone—from emerging artists to global brands—everything they need to design and deliver exceptional digital experiences! We’re passionate about empowering people to create beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen. We’re on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours!
Platform Development and Evangelism: Build scalable AI platforms that are customer-facing. Evangelize the platform with customers and internal stakeholders. Ensure platform scalability, reliability, and performance to meet business needs.
Machine Learning Pipeline Design: Design ML pipelines for experiment management, model management, feature management, and model retraining. Implement A/B testing of models. Design APIs for model inferencing at scale. Proven expertise with MLflow, SageMaker, Vertex AI, and Azure AI.
LLM Serving and GPU Architecture: Serve as an SME in LLM serving paradigms. Possess deep knowledge of GPU architectures. Expertise in distributed training and serving of large language models. Proficient in model and data parallel training using frameworks like DeepSpeed and serving frameworks like vLLM.
Model Fine-Tuning and Optimization: Demonstrate proven expertise in model fine-tuning and optimization techniques. Achieve better latencies and accuracies in model results. Reduce training and resource requirements for fine-tuning LLM and LVM models.
LLM Models and Use Cases: Have extensive knowledge of different LLM models. Provide insights on the applicability of each model based on use cases. Proven experience in delivering end-to-end solutions from engineering to production for specific customer use cases.
DevOps and LLMOps Proficiency: Proven expertise in DevOps and LLMOps practices. Knowledgeable in Kubernetes, Docker, and container orchestration. Deep understanding of LLM orchestration frameworks like Flowise, LangFlow, and LangGraph.
Skill Matrix
LLM: Hugging Face OSS LLMs, GPT, Gemini, Claude, Mixtral, Llama
LLM Ops: MLflow, LangChain, LangGraph, LangFlow, Flowise, LlamaIndex, SageMaker, AWS Bedrock, Vertex AI, Azure AI
Databases/Data warehouse: DynamoDB, Cosmos, MongoDB, RDS, MySQL, PostgreSQL, Aurora, Spanner, Google BigQuery
Cloud Knowledge: AWS/Azure/GCP
DevOps (Knowledge): Kubernetes, Docker, FluentD, Kibana, Grafana, Prometheus
Cloud Certifications (Bonus): AWS Professional Solution Architect, AWS Machine Learning Specialty, Azure Solutions Architect Expert
Proficient in Python, SQL, JavaScript
Adobe is proud to be an Equal Employment Opportunity employer. We do not discriminate based on gender, race or color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. Learn more about our vision here. Adobe aims to make Adobe.com accessible to any and all users. If you have a disability or special need that requires accommodation to navigate our website or complete the application process, email accommodations@adobe.com or call (408) 536-3015.
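For illustration only: a minimal MLflow sketch of the experiment-management work the pipeline-design responsibilities above describe. The experiment name, parameters, and metric values are hypothetical, not Adobe's implementation.

```python
# Minimal illustrative sketch (hypothetical names and values): logging a fine-tuning
# run's parameters and metrics to MLflow so experiments are comparable.
import mlflow

mlflow.set_experiment("llm-finetune-demo")  # hypothetical experiment name

with mlflow.start_run(run_name="lora-rank-16"):
    # Parameters that distinguish this run from others in the experiment
    mlflow.log_params({"base_model": "llama-3-8b", "lora_rank": 16, "lr": 2e-4})

    # ... training loop would go here; losses below are placeholders ...
    for epoch, loss in enumerate([1.92, 1.41, 1.18], start=1):
        mlflow.log_metric("train_loss", loss, step=epoch)

    # Record the final evaluation metric used for model selection
    mlflow.log_metric("eval_accuracy", 0.87)
```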

Posted 1 week ago

Apply

4.5 years

0 Lacs

Kochi, Kerala, India

On-site

Full stack Java Angular
Location: Kochi
Years of Exp: 4.5+ years
Salary: 15 LPA
Notice period: Immediate only
Skills Required:
Minimum of a Bachelor’s Degree in Computer Science / Software Engineering or an equivalent degree from a four-year college or university with a minimum 3.0 GPA and 2-4 years of related work experience
Work experience with Java, JavaScript, Angular, TypeScript, HTML, CSS, Sass, XML, SQL
Ability to create simple and well-designed solutions to complex software problems
Dedication to excellence and championship work ethic
Knowledge of internet client/server technologies and experience building enterprise single-page web applications
Knowledge of PC and web-based software testing on both client and server sides
Team-player mindset with strong communication and problem-solving skills
What would put you above everyone else: Experience with the following:
Multi-layered software architectures
Agile Development Methodology
Spring MVC
Apache Maven
Apache Tomcat
Angular
TypeScript
Sass
Node/npm
REST
JUnit
Jasmine
Relational databases (PostgreSQL, etc.)
NoSQL databases (MongoDB, Cassandra, Amazon DynamoDB, Amazon Aurora, etc.)
AWS Lambda
Jira
Bitbucket
Concourse

Posted 1 week ago

Apply

0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Job Summary
We are seeking a talented and experienced Data Engineer to join our team. The ideal candidate will be responsible for designing, building, and maintaining scalable data pipelines and systems to support analytics and data-driven decision-making. This role requires expertise in data processing, data modeling, and big data technologies.
Key Responsibilities:
Design and develop data pipelines to collect, transform, and load data into data lakes and data warehouses.
Optimize ETL workflows to ensure data accuracy, reliability, and scalability.
Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements.
Implement and manage cloud-based data platforms (e.g., AWS, Azure, or Google Cloud Platform).
Develop data models to support analytics and reporting.
Monitor and troubleshoot data systems to ensure high performance and minimal downtime.
Ensure data quality and security through governance best practices.
Document workflows, processes, and architecture to facilitate collaboration and scalability.
Stay updated with emerging data engineering technologies and trends.
Required Skills and Qualifications:
Strong proficiency in SQL and Python for data processing and transformation.
Hands-on experience with big data technologies like Apache Spark, Hadoop, or Kafka.
Knowledge of data warehousing concepts and tools such as Snowflake, BigQuery, or Redshift.
Experience with workflow orchestration tools like Apache Airflow or Prefect.
Familiarity with cloud platforms (AWS, Azure, GCP) and their data services.
Understanding of data governance, security, and compliance best practices.
Strong analytical and problem-solving skills.
Excellent communication and collaboration abilities.
Preferred Qualifications
Certification in cloud platforms (AWS, Azure, or GCP).
Experience with NoSQL databases like MongoDB, Cassandra, or DynamoDB.
Familiarity with DevOps practices and tools like Docker, Kubernetes, and Terraform.
Exposure to machine learning pipelines and tools like MLflow or Kubeflow.
Knowledge of data visualization tools like Power BI, Tableau, or Looker.
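For illustration only: a minimal Apache Airflow sketch of the workflow orchestration mentioned in the listing above, chaining extract, transform, and load tasks in a daily DAG. Task logic and names are hypothetical placeholders, assuming Airflow 2.x.

```python
# Minimal illustrative sketch (hypothetical names): a daily ETL DAG in Apache Airflow
# with extract -> transform -> load tasks chained in sequence.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**context):
    # Pull raw records from a source system (stubbed here)
    return [{"order_id": 1, "amount": 250.0}]

def transform(ti, **context):
    rows = ti.xcom_pull(task_ids="extract")
    # Apply a simple business rule as the "transform" step
    return [r for r in rows if r["amount"] > 0]

def load(ti, **context):
    rows = ti.xcom_pull(task_ids="transform")
    print(f"Would load {len(rows)} rows into the warehouse")

with DAG(
    dag_id="daily_orders_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> transform_task >> load_task
```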

Posted 1 week ago

Apply

3.0 years

6 - 24 Lacs

Pune, Maharashtra, India

On-site

Backend Node.js Engineer (AWS)
Industry: Information Technology Services – Cloud Application Development and Managed AWS Solutions.
Role & Responsibilities
Design and code server-side modules in Node.js aligned to twelve-factor principles.
Build event-driven microservices on AWS using Lambda, API Gateway, DynamoDB, and SQS.
Optimise REST and GraphQL APIs for high throughput, low latency, and security best practices.
Create automated test suites and integrate builds into CI/CD pipelines using CodePipeline and Jenkins.
Instrument services with CloudWatch metrics, logs, and alarms to ensure 99.9% uptime.
Collaborate with product, DevOps, and QA to deliver sprint commitments and mentor junior engineers.
Skills & Qualifications
Must-Have
3+ years of professional Node.js back-end development.
Hands-on experience with the AWS serverless stack: Lambda, API Gateway, DynamoDB.
Strong proficiency with Express.js or Fastify frameworks.
Expertise in building and securing RESTful APIs.
Working knowledge of Git, Docker, and CI/CD workflows.
On-site availability at the India delivery center.
Preferred
TypeScript experience in production.
Exposure to GraphQL and event sourcing.
Understanding of IaC using CloudFormation or Terraform.
Experience with performance profiling and APM tools.
Knowledge of container orchestration on ECS or EKS.
Benefits & Culture
High-impact work on greenfield cloud products for global brands.
Cohesive team culture focused on craftsmanship, learning, and innovation.
Competitive salary, annual performance bonus, and AWS certification sponsorship.
Skills: terraform,fastify,node.js,microservices,eks,ci/cd,api gateway,express.js,aws,restful apis,aws lambda,typescript,sqs,graphql,git,cloudformation,dynamodb,ecs,docker,lambda

Posted 1 week ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Description
About Amazon.com: Amazon.com strives to be Earth's most customer-centric company where people can find and discover virtually anything they want to buy online. By giving customers more of what they want - low prices, vast selection, and convenience - Amazon.com continues to grow and evolve as a world-class e-commerce platform. Amazon's evolution from Web site to e-commerce partner to development platform is driven by the spirit of innovation that is part of the company's DNA. The world's brightest technology minds come to Amazon.com to research and develop technology that improves the lives of shoppers and sellers around the world.
About Team
The RBS team is an integral part of the Amazon online product lifecycle and buying operations. The team is designed to ensure Amazon remains competitive in the online retail space with the best price, wide selection, and good product information. The team’s primary role is to create and enhance retail selection on the worldwide Amazon online catalog. The tasks handled by this group have a direct impact on customer buying decisions and online user experience.
Overview Of The Role
The ideal candidate will be a self-starter who is passionate about discovering and solving complicated problems, learning complex systems, working with numbers, and organizing and communicating data and reports. You will be detail-oriented and organized, capable of handling multiple projects at once, and capable of dealing with ambiguity and rapidly changing priorities. You will have expertise in process optimization and systems thinking and will be required to engage directly with multiple internal teams to drive business projects and automation for the RBS team. Candidates must be successful both as individual contributors and in a team environment, and must be customer-centric. Our environment is fast-paced and requires someone who is flexible, detail-oriented, and comfortable working in a deadline-driven environment.
Responsibilities Include
Work across teams and the Ops organization at country, regional, and/or cross-regional level to drive improvements and enable solutions for customers, delivering cost savings in process workflows, systems configuration, and performance metrics.
Basic Qualifications
Bachelor's degree in Computer Science, Information Technology, or a related field
Proficiency in automation using Python
Excellent oral and written communication skills
Experience with SQL, ETL processes, or data transformation
Preferred Qualifications
Experience with scripting and automation tools
Familiarity with Infrastructure as Code (IaC) tools such as AWS CDK
Knowledge of AWS services such as SQS, SNS, CloudWatch and DynamoDB
Understanding of DevOps practices, including CI/CD pipelines and monitoring solutions
Understanding of cloud services, serverless architecture, and systems integration
Key job responsibilities
As a Business Intelligence Engineer in the team, you will collaborate closely with business partners to architect, design, and implement BI projects and automations.
Responsibilities
Design, development, and ongoing operations of scalable, performant data warehouse (Redshift) tables, data pipelines, reports, and dashboards.
Development of moderately to highly complex data processing jobs using appropriate technologies (e.g. SQL, Python, Spark, AWS Lambda, etc.)
Development of dashboards and reports.
Collaborating with stakeholders to understand business domains, requirements, and expectations.
Additionally, working with owners of data source systems to understand capabilities and limitations. Deliver minimally to moderately complex data analysis; collaborating as needed with Data Science as complexity increases. Actively manage the timeline and deliverables of projects, anticipate risks and resolve issues. Adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation. Internal Job Description Retail Business Service, ARTS is a growing team that supports the Retail Efficiency and Paid Services business and tech teams. There is ample growth opportunity in this role for someone who exhibits Ownership and Insist on the Highest Standards, and has strong engineering and operational best practices experience. Basic Qualifications 5+ years of relevant professional experience in business intelligence, analytics, statistics, data engineering, data science or related field. Experience with Data modeling, SQL, ETL, Data Warehousing and Data Lakes. Strong experience with engineering and operations best practices (version control, data quality/testing, monitoring, etc.) Expert-level SQL. Proficiency with one or more general purpose programming languages (e.g. Python, Java, Scala, etc.) Knowledge of AWS products such as Redshift, Quicksight, and Lambda. Excellent verbal/written communication & data presentation skills, including ability to succinctly summarize key findings and effectively communicate with both business and technical teams. Preferred Qualifications Experience with data-specific programming languages/packages such as R or Python Pandas. Experience with AWS solutions such as EC2, DynamoDB, S3, and EMR. Knowledge of machine learning techniques and concepts. Basic Qualifications 3+ years of analyzing and interpreting data with Redshift, Oracle, NoSQL etc. experience Experience with data visualization using Tableau, Quicksight, or similar tools Experience with data modeling, warehousing and building ETL pipelines Experience in Statistical Analysis packages such as R, SAS and Matlab Experience using SQL to pull data from a database or data warehouse and scripting experience (Python) to process data for modeling Preferred Qualifications Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift Experience in data mining, ETL, etc. and using databases in a business environment with large-scale, complex datasets Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - ADCI MAA 15 SEZ Job ID: A3000872
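For illustration only: a minimal Python sketch of the kind of lightweight data-processing job the listing above groups under "SQL, Python, Spark, AWS Lambda, etc." — summarising a warehouse extract with pandas and writing the result to S3. Bucket, key, and column names are hypothetical.

```python
# Minimal illustrative sketch (hypothetical bucket/paths): summarising a CSV extract
# with pandas and writing the aggregate back to S3 for a reporting layer to pick up.
import io

import boto3
import pandas as pd

s3 = boto3.client("s3")

# Read a CSV extract previously exported from the warehouse
obj = s3.get_object(Bucket="example-bi-extracts", Key="orders/2024-06-01.csv")
orders = pd.read_csv(io.BytesIO(obj["Body"].read()))

# Aggregate to the grain a dashboard needs (revenue per marketplace)
summary = (
    orders.groupby("marketplace", as_index=False)["revenue"].sum()
          .sort_values("revenue", ascending=False)
)

# Write the summary back for downstream reporting
buf = io.StringIO()
summary.to_csv(buf, index=False)
s3.put_object(Bucket="example-bi-extracts", Key="summaries/2024-06-01.csv", Body=buf.getvalue())
```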

Posted 1 week ago

Apply

50.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description
The ideal candidate will have a strong background in AWS services, Azure DevOps, and various monitoring tools. You will be responsible for managing and optimizing our cloud infrastructure, building and maintaining CI/CD pipelines, and ensuring the reliability and performance of our applications.
Key Responsibilities:
AWS Services Management: Handle AWS services including Cognito, DynamoDB, API Gateway, Lambda, EC2, S3, and CloudWatch.
Serverless Framework: Utilize the Serverless Framework to build and manage serverless infrastructure for APIs.
Azure DevOps: Build and maintain CI/CD pipelines using Azure DevOps.
Monitoring Tools: Implement and manage monitoring tools such as Splunk and CloudWatch to ensure system reliability and performance.
Scripting: Develop and maintain shell and Python scripts for automation and system management.
Linux Administration: Manage and optimize Linux-based systems.
Version Control: Use Git for version control and collaboration.
Qualifications
Bachelor's degree in Computer Science, Information Technology, or a related field.
Proven experience as a DevOps Engineer or similar role.
Strong knowledge of AWS services and Azure DevOps.
Proficiency in shell scripting and Linux administration.
Experience with monitoring tools like Splunk and CloudWatch.
Familiarity with version control systems, particularly Git.
Excellent problem-solving skills and attention to detail.
Strong communication and teamwork skills.
Preferred Skills
Experience with containerization technologies such as Docker and Kubernetes.
Knowledge of infrastructure as code (IaC) tools like Terraform or CloudFormation.
Understanding of security best practices in cloud environments.
About Us
For over 50 years, Verisk has been the leading data analytics and technology partner to the global insurance industry by delivering value to our clients through expertise and scale. We empower communities and businesses to make better decisions on risk, faster. At Verisk, you'll have the chance to use your voice and build a rewarding career that's as unique as you are, with work flexibility and the support, coaching, and training you need to succeed. For the eighth consecutive year, Verisk is proudly recognized as a Great Place to Work® for outstanding workplace culture in the US, fourth consecutive year in the UK, Spain, and India, and second consecutive year in Poland. We value learning, caring and results and make inclusivity and diversity a top priority. In addition to our Great Place to Work® Certification, we’ve been recognized by The Wall Street Journal as one of the Best-Managed Companies and by Forbes as a World’s Best Employer and Best Employer for Women, testaments to the value we place on workplace culture. We’re 7,000 people strong. We relentlessly and ethically pursue innovation. And we are looking for people like you to help us translate big data into big ideas. Join us and create an exceptional experience for yourself and a better tomorrow for future generations.
Verisk Businesses Underwriting Solutions — provides underwriting and rating solutions for auto and property, general liability, and excess and surplus to assess and price risk with speed and precision Claims Solutions — supports end-to-end claims handling with analytic and automation tools that streamline workflow, improve claims management, and support better customer experiences Property Estimating Solutions — offers property estimation software and tools for professionals in estimating all phases of building and repair to make day-to-day workflows the most efficient Extreme Event Solutions — provides risk modeling solutions to help individuals, businesses, and society become more resilient to extreme events. Specialty Business Solutions — provides an integrated suite of software for full end-to-end management of insurance and reinsurance business, helping companies manage their businesses through efficiency, flexibility, and data governance Marketing Solutions — delivers data and insights to improve the reach, timing, relevance, and compliance of every consumer engagement Life Insurance Solutions – offers end-to-end, data insight-driven core capabilities for carriers, distribution, and direct customers across the entire policy lifecycle of life and annuities for both individual and group. Verisk Maplecroft — provides intelligence on sustainability, resilience, and ESG, helping people, business, and societies become stronger Verisk Analytics is an equal opportunity employer. All members of the Verisk Analytics family of companies are equal opportunity employers. We consider all qualified applicants for employment without regard to race, religion, color, national origin, citizenship, sex, gender identity and/or expression, sexual orientation, veteran's status, age or disability. Verisk’s minimum hiring age is 18 except in countries with a higher age limit subject to applicable law. https://www.verisk.com/company/careers/ Unsolicited resumes sent to Verisk, including unsolicited resumes sent to a Verisk business mailing address, fax machine or email address, or directly to Verisk employees, will be considered Verisk property. Verisk will NOT pay a fee for any placement resulting from the receipt of an unsolicited resume. Verisk Employee Privacy Notice
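For illustration only: a minimal boto3 sketch of the monitoring automation described in the role above — creating a CloudWatch alarm on a Lambda function's error metric. The function, alarm, and SNS topic names are hypothetical.

```python
# Minimal illustrative sketch (hypothetical names): alarm on a Lambda function's
# error count so on-call engineers are notified via an SNS topic.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="example-api-lambda-errors",
    Namespace="AWS/Lambda",
    MetricName="Errors",
    Dimensions=[{"Name": "FunctionName", "Value": "example-api-handler"}],
    Statistic="Sum",
    Period=300,                      # evaluate in 5-minute windows
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:example-alerts"],  # hypothetical SNS topic
)
```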

Posted 1 week ago

Apply

20.0 years

0 Lacs

India

On-site

Impelsys Overview Impelsys is a market leader in providing cutting-edge digital transformation solutions & services – leveraging digital, AI, ML, cloud, analytics, and emerging technologies. We deliver custom solutions that meet customers’ technology needs wherever they are in their digital lifecycle. With 20+ years of experience, we have helped our clients to build, deploy & streamline their digital infrastructure by providing innovative solutions & value-driven services to transform their business to thrive in a digital economy. We offer expertise in providing Products & Platforms, Enterprise Technology Services, and Learning & Content Services to drive business success. Some of our marquee customers include Elsevier, Wolters Kluwer, Pearson, American Heart Association, Disney, World Bank, International Monetary Fund, BBC, Encyclopedia Britannica, McGraw-Hill, Royal College of Nursing, and Wiley. Our technology stack is varied and cutting edge. We have moved from monolithic applications to distributed architecture to now, microservices based architecture. Our platform runs on Java, LAMP and AngularJS. Our mobile apps are native apps as well as apps built using React, Xamarin and Ionic. Our bespoke development services' TRM includes AngularJS, jQuery, Bootstrap, Cordova, Kafka, PNGINX, Propel, MongoDB, MySQL, DynamoDB, and Docker among others. Impelsys is a Great Place to Work certified company & has a global footprint of 1,100+ employees, with its delivery centers in New York, USA, Amsterdam, Porto and Bangalore & Mangalore in India. Overview: Contribute to our client’s publishing ecosystem by supporting, configuring, and developing content management systems to sustain the publishing environment. Knowledge in XML technologies, XSLT, XQuery, XPath and related technologies and Schematron, Content management, and full-stack systems is essential to support the development process. Collaboration with a team of internal and external resources to configure application software and databases, writing, testing, and deploying code to support end users is critical. A key function of the role includes translating end user requirements to deliver efficient solutions that align with business objectives. 
Essential Job Functions and Responsibilities:
The job functions include, but are not limited to, the following:
Work effectively within a small team to maximize productivity and efficiency by coordinating seamlessly across global time zones, collaborating with both internal and external team members
Provide technical support for relational and XML-based content systems to manipulate data and support business objectives
Manage integrations between RSuite, MarkLogic, and MySQL databases
Gather and interpret Voice of the Customer (VoC) feedback to ensure our systems align with customer needs and develop solutions to further support end users
Write clear technical specifications and comprehensive documentation
Proficiently develop XQuery and XSLT code to enhance system functionality
Maintain and extend DTDs, schemas (XSD), and Schematron
Streamline testing, code review, and deployment processes using automation technologies such as Postman and Jenkins
Deploy and test code across development, staging, and production environments
Ensure change requests are implemented accurately and on schedule while keeping customers advised of ongoing development priorities
Conduct in-depth analysis of requirements and enhancement requests from end users and align requirements with business objectives
Find and correct XML database inconsistencies and design and implement solutions to reduce degradation of data
Implement medium to large system improvements utilizing XQuery and XSLT code to reduce technical debt
Administer the MarkLogic, MySQL, and RSuite application environment on both Windows and Linux servers
Demonstrate ownership and an ability to solve complex problems by researching and implementing solutions
Embrace a continuous improvement mindset by researching new technologies and recommending solutions that enhance the content management publishing workflow
Ability to work independently and as part of a team
Knowledge of web services and APIs
Linux administration
Qualifications and Education:
Any combination equivalent to, but not limited to, the following:
Three to five years of working with content management systems and publishing workflows.
Solid understanding of and a minimum of three years of experience working with XML, XQuery, and XSLT.
Proficiency in metadata modeling within a content management system.
Comfortable with Windows and Linux server administration.
Exposure to any of the following technologies is a plus: MarkLogic, RSuite, Java, Docker, NiFi, JSON, JavaScript, and frontend technologies like Angular.
Comfortable using XML-based tools and editors, including Schematron, XForms, and oXygen.
Knowledge of scripting languages, databases, as well as declarative and object-oriented programming.
Experience with DevOps tools, specifically using Git, as well as automated deployment/testing methodologies such as Jenkins.
Ability to engage with stakeholders and translate their requirements into technical solutions.
Bachelor's degree or equivalent experience in Information Technology, Computer Sciences, or a related field.
Language, Analytical Skills and Person Specifications
Any combination equivalent to, but not limited to, the following:
Effective communication skills, both oral and written, are required. Must be effective at understanding and communicating with an array of stakeholders: project management, programmers and tech staff, upper management, other [client name] staff, external contractors, vendors, clients, and customers.
Excellent Leadership and Teamwork.
Working effectively with internal and external team members at various levels to achieve results through a cooperative, goal-oriented approach.
Problem-solving and Analytical Skills. Must be able to effectively analyze and troubleshoot issues, work with others to overcome obstacles, and identify and quickly deploy solutions.
Multitasking. Ability to manage multiple projects, switching quickly from task to task as needed.
Results Focus and Accountability. Achieving results within project schedules and deadlines, setting challenging goals, prioritizing tasks, accepting accountability, and providing leadership.
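For illustration only, and using Python's lxml library rather than the MarkLogic/XQuery stack named in the posting: a minimal sketch of applying an XSLT stylesheet to an XML document, the kind of transform work this role describes. File names are hypothetical.

```python
# Minimal illustrative sketch (hypothetical file names): applying an XSLT stylesheet
# to an XML document with lxml and serialising the result for downstream publishing.
from lxml import etree

# Parse the source document and the stylesheet
source = etree.parse("chapter.xml")
stylesheet = etree.parse("chapter-to-html.xsl")

# Compile the stylesheet and run the transformation
transform = etree.XSLT(stylesheet)
result = transform(source)

# Serialise the transformed output
with open("chapter.html", "wb") as out:
    out.write(etree.tostring(result, pretty_print=True))
```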

Posted 1 week ago

Apply