Home
Jobs

2074 DynamoDB Jobs - Page 15

Set up a Job Alert
JobPe aggregates listings so they are easy to find in one place; you apply directly on the employer's own job portal.

0.0 - 8.0 years

0 - 0 Lacs

Gorwa, Vadodara, Gujarat

On-site

About Smart Node Smart Node is one of India’s leading brands in wireless home automation, transforming homes, offices, and hotels into smart spaces. With over 8 years of innovation, we are on a mission to simplify lives through cutting-edge automation technology made in India. Join our passionate and fast-growing team to be part of the next big revolution in smart living. Role Overview We are looking for a Cloud Backend Developer with strong backend capabilities and a deep understanding of cloud computing to support new product integrations and improve cloud-side application logic. This is a core development role focused on building APIs, backend services, cloud integrations, and data handling for smart IoT products. You will work closely with our CTO and other engineering teams to shape the digital brain of our automation ecosystem. Key Responsibilities * Build and maintain backend services and microservices for cloud-based IoT applications * Develop RESTful APIs for device-to-cloud and cloud-to-app communication * Integrate third-party devices into our platform * Handle user authentication, cloud session management, and media control flow * Work on data handling, event triggers, and notification services * Collaborate with mobile and firmware teams to ensure seamless product integration * Write clean, secure, and scalable code with proper documentation Must-Have Skills * Strong proficiency in Node.js and Python for backend development * Deep understanding of cloud computing concepts (serverless, pub/sub, etc.) * Experience with cloud services like AWS (Lambda, S3, API Gateway, DynamoDB), GCP, or Azure * Strong experience in API design, JWT/Auth flows, and asynchronous event handling * Familiarity with IoT protocols (MQTT, WebSocket) * Solid knowledge of MongoDB, SQL, or other modern databases * Understanding of media streaming logic (RTSP/WebRTC is a plus) Nice-to-Have * Experience with IoT platforms or home automation ecosystems * Exposure to video stream handling through cloud * Knowledge of OAuth2, SSO, and third-party authentication systems * Exposure to AI/ML model integration into cloud or server systems * Experience with LangChain, vector DBs, or AI agents Reporting To CTO Why Join Smart Node? * Be part of India’s fast-growing smart home automation industry * Work directly with the leadership team * Work in a fast-paced, innovation-driven startup culture * Opportunity to work on cross-functional tech with hardware, firmware, and mobile teams * A fast-paced, product-focused environment with rapid iterations and live deployments * “You build, You Own” philosophy. * See your work impact real products, customers, and projects. Job Type: Full-time Pay: ₹35,000.00 - ₹70,000.00 per month Benefits: Provident Fund Location Type: In-person Schedule: Day shift Work Location: In person Speak with the employer +91 9023725596
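To give a feel for the device-to-cloud work described above, here is a minimal illustrative sketch (not code from Smart Node) of an AWS Lambda handler that persists a device state update into DynamoDB; the table name, field names, and payload shape are assumptions.

```python
# Illustrative sketch only: a Lambda handler that stores a device state
# update (e.g. forwarded by an AWS IoT rule) in DynamoDB.
# "SmartDeviceState", "device_id" and "state" are hypothetical names.
import json
import time
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("SmartDeviceState")  # hypothetical table name

def lambda_handler(event, context):
    # The upstream rule is assumed to deliver a JSON payload with these keys.
    item = {
        "device_id": event["device_id"],
        "reported_at": int(time.time()),
        "state": event.get("state", {}),
    }
    table.put_item(Item=item)
    return {"statusCode": 200, "body": json.dumps({"stored": item["device_id"]})}
```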

Posted 1 week ago

Apply

9.0 - 12.0 years

0 Lacs

India

On-site

Job Summary We are looking for a highly skilled Technical Architect with expertise in AWS, Generative AI, AI/ML, and scalable production-level architectures. The ideal candidate should have experience handling multiple clients, leading technical teams, and designing end-to-end cloud-based AI solutions, with an overall experience of 9-12 years. This role involves architecting AI/ML/GenAI-driven applications and ensuring best practices in cloud deployment, security, and scalability while collaborating with cross-functional teams.

Key Responsibilities

Technical Leadership & Architecture: Design and implement scalable, secure, and high-performance architectures on AWS for AI/ML applications. Architect multi-tenant, enterprise-grade AI/ML solutions using AWS services like SageMaker, Bedrock, Lambda, API Gateway, DynamoDB, ECS, S3, OpenSearch, and Step Functions. Lead full lifecycle development of AI/ML/GenAI solutions—from PoC to production—ensuring reliability and performance. Define and implement best practices for MLOps, DataOps, and DevOps on AWS.

AI/ML & Generative AI Expertise: Design Conversational AI, RAG (Retrieval-Augmented Generation), and Generative AI architectures using models like Claude (Anthropic), Mistral, Llama, and Titan. Optimize LLM inference pipelines, embeddings, vector search, and hybrid retrieval strategies for AI-based applications. Drive ML model training, deployment, and monitoring using AWS SageMaker and AI/ML pipelines.

Cloud & Infrastructure Management: Architect event-driven, serverless, and microservices architectures for AI/ML applications. Ensure high availability, disaster recovery, and cost optimization in cloud deployments. Implement IAM, VPC, security best practices, and compliance.

Team & Client Engagement: Lead and mentor a team of ML Engineers, Python Developers, and Cloud Engineers. Collaborate with business stakeholders, product teams, and multiple clients to define requirements and deliver AI/ML/GenAI-driven solutions. Conduct technical workshops, training sessions, and knowledge-sharing initiatives.

Multi-Client & Business Strategy: Manage multiple client engagements, delivering AI/ML/GenAI solutions tailored to their business needs. Define AI/ML/GenAI roadmaps, proof-of-concept strategies, and go-to-market AI solutions. Stay updated on cutting-edge AI advancements and drive innovation in AI/ML offerings.

Key Skills & Technologies

Cloud & DevOps: AWS services: Bedrock, SageMaker, Lambda, API Gateway, DynamoDB, S3, ECS, Fargate, OpenSearch, RDS. MLOps: SageMaker Pipelines, CI/CD (CodePipeline, GitHub Actions, Terraform, CDK). Security: IAM, VPC, CloudTrail, GuardDuty, KMS, Cognito.

AI/ML & GenAI: LLMs & Generative AI: Bedrock (Claude, Mistral, Titan), OpenAI, Llama. ML frameworks: TensorFlow, PyTorch, LangChain, Hugging Face. Vector DBs: OpenSearch, Pinecone, FAISS. RAG pipelines, prompt engineering, fine-tuning.

Software Architecture & Scalability: Serverless and microservices architecture. API design and GraphQL. Event-driven systems (SNS, SQS, EventBridge, Step Functions). Performance optimization and auto scaling.
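As a rough illustration of the RAG pattern this role designs (not the client's actual pipeline), the sketch below takes pre-retrieved context chunks and calls a Claude model through Amazon Bedrock; the model ID, prompt format, and the upstream retrieval step are assumptions.

```python
# Hedged sketch of a retrieval-augmented generation call via Amazon Bedrock.
# The retriever is assumed to have already produced `retrieved_chunks`.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def answer_with_context(question: str, retrieved_chunks: list[str]) -> str:
    context = "\n\n".join(retrieved_chunks)
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,
        "messages": [
            {"role": "user",
             "content": f"Answer using only this context:\n{context}\n\nQuestion: {question}"}
        ],
    }
    resp = bedrock.invoke_model(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # hypothetical model choice
        body=json.dumps(body),
    )
    payload = json.loads(resp["body"].read())
    return payload["content"][0]["text"]
```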

Posted 1 week ago

Apply

5.0 years

0 Lacs

India

Remote

We are currently looking for Python Developers with strong hands-on experience in Python and SQL. In addition, candidates must have development experience in AWS (not just deployment), with proficiency in at least two of the following services: Lambda, SNS, SQS, S3, Glue, Athena, API Gateway, EC2, Deployment, CloudFormation, CloudFront, EventBridge. Mandatory skill set – Python, SQL, development in AWS (Django not required). Experience – 5+ years. Notice period – Immediate. Work location – Remote. We are seeking a Lead Python Developer to join our dynamic team. The ideal candidate will have a strong background in Python programming and a sound understanding of web application development, with a focus on utilizing AWS services for building scalable and efficient solutions. You will be responsible for delivering senior-level, innovative, compelling, and coherent software solutions for our consumer, internal operations, and value chain constituents across a wide variety of enterprise applications through the creation of discrete business services and their supporting components. Job Description / Duties & Responsibilities: Take shared ownership of the product. Communicate effectively both verbally and in writing. Take direction from the team lead and upper management. Work with little to no supervision while performing duties. Work collaboratively in a small team. Excel in a rapid-iteration environment with short turnaround times. Deal positively with high levels of uncertainty, ambiguity, and shifting priorities. Accept a wide variety of tasks and pitch in wherever needed. Constructively present, discuss, and debate alternatives. Job Specification / Skills and Competencies: Design, develop, and deliver solutions that meet business-line and enterprise requirements. Lead a team of Python developers, providing technical guidance, mentorship, and support in project execution. Participate in rapid prototyping and POC development efforts. Advance the overall enterprise technical architecture and implementation best practices. Assist in efforts to develop and refine functional and non-functional requirements. Participate in iteration and release planning. Perform functional and non-functional testing. Inform efforts to develop and refine functional and non-functional requirements. Demonstrate knowledge of, adherence to, monitoring of, and responsibility for compliance with state and federal regulations and laws as they pertain to this position. Strong ability to produce high-quality, properly functioning deliverables the first time. Deliver work product according to established deadlines. Estimate tasks with a level of granularity and accuracy commensurate with the information provided. Architect, design, and implement high-performance and scalable Python back-end applications. Proficiency in Python to develop backend services and APIs. Experience with web frameworks such as FastAPI, Flask, or Django for building RESTful APIs. Exposure to the utility domain (metering services) is an advantage. Experience with AWS services such as API Gateway, Lambda, Step Functions, and S3. Knowledge of implementing authentication and authorization mechanisms using AWS Cognito and other relevant services. Good understanding of databases including PostgreSQL, MongoDB, AWS Aurora, and DynamoDB. Experience in automated CI/CD implementation using Terraform is required. Deep understanding of one or more source/version control systems (Git/Bitbucket) and of developing branching and merging strategies.
Working understanding of Web APIs, REST, JSON, etc., and of unit test creation. A Bachelor's degree is required, and/or a minimum of 5+ years of related work experience. Adherence to Information Security Management policies and procedures is expected. Skills: AWS Aurora, DynamoDB, MongoDB, Flask, JSON, Python, S3, PostgreSQL, SQL, API Gateway, Terraform, Lambda, FastAPI, AWS, Git
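For a sense of "development in AWS, not just deployment", here is a hedged sketch of running a SQL query against Athena from Python with boto3; the database name and results bucket are placeholders, not details from this listing.

```python
# Hedged sketch: submit an Athena query, wait for it, and return rows as dicts.
# "analytics_db" and the results bucket are placeholders.
import time
import boto3

athena = boto3.client("athena")

def run_athena_query(sql: str) -> list[dict]:
    qid = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": "analytics_db"},                  # placeholder
        ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},   # placeholder
    )["QueryExecutionId"]

    # Poll until the query finishes.
    while True:
        state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)

    if state != "SUCCEEDED":
        raise RuntimeError(f"Athena query ended in state {state}")

    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    header = [c["VarCharValue"] for c in rows[0]["Data"]]
    return [dict(zip(header, [c.get("VarCharValue") for c in r["Data"]])) for r in rows[1:]]
```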

Posted 1 week ago

Apply

5.0 years

0 Lacs

Indore, Madhya Pradesh, India

Remote

About Zehntech Zehntech is one of the fast-growing IT solution and product development organizations in central India. Zehntech offers a wide range of software development, maintenance, and support services to global clients from its Indore development center. We believe that an organization's real strength comes from its people. We are expanding our team and looking for a talented and ambitious Mobile Application Development Lead (Flutter + Node). Zehntech provides excellent benefits, learning opportunities, and career growth in software SaaS product development, Big Data, and IoT. You will be working with the team to develop and maintain high-quality mobile applications, Node.js APIs, and microservices. If you're passionate about mobile platforms, API development, and translating code into user-friendly apps, we would like to meet you. As a Mobile Application Development Lead (Flutter + Node), you will own the architecture, design, and delivery of complex, cross-platform mobile applications using Flutter. You will collaborate with cross-functional teams, manage delivery timelines, guide developers, and ensure the highest standards of code quality and app performance. In addition to mobile development, you will take ownership of backend API development using Node.js, ensuring secure, scalable, and efficient server-side integrations. Experience: 5+ Years Responsibilities: Lead the end-to-end mobile app development lifecycle using Flutter and Dart. Design, develop, and maintain scalable backend services and RESTful APIs using Node.js and Express. Define and enforce architecture standards and design principles across mobile and backend (e.g., MVVM, Clean Architecture). Model and manage relational data using PostgreSQL, optimizing schema design, indexing, and query performance. Collaborate with product managers, designers, and stakeholders to translate business requirements into robust technical solutions across the full stack. Design and optimize a scalable, maintainable, and testable codebase for both frontend and backend systems. Evaluate and integrate platform-specific features using native Android/iOS modules via platform channels. Implement secure authentication and authorization flows using JWT, OAuth, or similar technologies. Architect and manage microservices or serverless components using AWS services (e.g., Lambda, S3, RDS, CloudWatch). Integrate third-party services and external APIs, including Firebase, analytics, messaging, and storage. Implement CI/CD pipelines, version control strategies, and automated deployment systems across mobile and backend environments. Ensure performance tuning, memory optimization, and smooth user experiences even in data-heavy apps. Handle deep linking, offline sync strategies, local storage, and advanced Firebase integrations. Drive quality assurance through code audits, automated testing, and user feedback incorporation. Review pull requests, guide team members, and maintain coding best practices. Actively mentor junior developers and conduct technical training for the mobile team. Stay updated with the latest trends in Flutter, Node.js, PostgreSQL, and AWS, and recommend improvements accordingly. Requirements: Must Have: 5+ years of overall experience in mobile application development and Node.js. Minimum 5 years of hands-on experience with Flutter and Node.js. Deep understanding of state management approaches (Riverpod, Bloc, Provider). Experience with Flutter Web. Proficient in the Firebase suite: Firestore, Hosting, Analytics, Messaging, Remote Config.
Strong command over REST APIs, JSON parsing, authentication methods, and security standards. Expertise in CI/CD tools (Jenkins, AWS CodeBuild, GitHub Actions) for automated build/release workflows. Knowledge of offline-first architectures, background services, isolates, and performance debugging. Proficiency in Git branching strategies, release planning, and cross-team code collaboration. Strong backend development experience with Node.js and Express.js. Solid understanding of PostgreSQL database design, optimization, and query performance. Experience working with AWS services like Lambda, EC2, S3, CloudFront, RDS and CloudWatch for scalable infrastructure. Experience with serverless architectures and managing microservices in cloud environments. Comfortable working with deployment automation, logging, and performance monitoring in cloud-based systems. Good to Have: Experience with AWS services: ECR, ECS, Step Functions, DynamoDB, WAF. Experience with native development (Java/Kotlin, Swift/Obj-C) for Android and iOS. Familiarity with writing and maintaining custom Flutter packages and plugins. Experience with GraphQL APIs or WebSocket. Knowledge of Unit Testing, Integration Testing, and Performance Benchmarking. Prior experience in publishing and managing apps on the Play Store and App Store. Contribution to open-source Flutter projects or plugin development. What you will love about us: Great company culture | Work that stays at work | Preparing for future | Learning & Development | 5 Days working | Certification Allowance | Best in Class Holiday Policies

Posted 1 week ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Greetings from People-Prime! About Company: They balance innovation with an open, friendly culture and the backing of a long-established parent company, known for its ethical reputation. We guide customers from what's now to what's next by unlocking the value of their data and applications to solve their digital challenges, achieving outcomes that benefit both business and society. About Client: Our client is a global digital solutions and technology consulting company headquartered in Mumbai, India. The company generates annual revenue of over $4.29 billion (₹35,517 crore), reflecting a 4.4% year-over-year growth in USD terms. It has a workforce of around 86,000 professionals operating in more than 40 countries and serves a global client base of over 700 organizations. Our client operates across several major industry sectors, including Banking, Financial Services & Insurance (BFSI), Technology, Media & Telecommunications (TMT), Healthcare & Life Sciences, and Manufacturing & Consumer. In the past year, the company achieved a net profit of $553.4 million (₹4,584.6 crore), marking a 1.4% increase from the previous year. It also recorded a strong order inflow of $5.6 billion, up 15.7% year-over-year, highlighting growing demand across its service lines. Key focus areas include Digital Transformation, Enterprise AI, Data & Analytics, and Product Engineering—reflecting its strategic commitment to driving innovation and value for clients across industries. Job Title: Java Developer. Location: Pune, Hyderabad (Hybrid). Experience: 5+. Job Type: Contract to hire. Notice Period: Immediate joiners. Job description: Strong hands-on experience in Java and hands-on experience with AWS cloud architecture (e.g., EC2, Lambda, S3, DynamoDB, RDS, API Gateway, EventBridge, SQS, SNS, Fargate, etc.). Good to have: Spring, Spring Boot development, TypeScript, and a good understanding of serverless computing. Detailed JD: We are looking for a talented, experienced Senior Software Engineer with expertise in AWS cloud services, TypeScript, and Java development for our engineering team. Responsibilities: • Implement cloud applications using AWS services, TypeScript, and Java. • Write clean, maintainable, and efficient code while adhering to best practices and coding standards. • Work closely with product managers and engineers to define and refine requirements. • Provide technical guidance and mentorship to junior engineers on the team. • Troubleshoot and resolve complex technical issues and performance bottlenecks. • Create and maintain technical documentation for code and processes. • Stay up-to-date with industry trends and emerging technologies to continuously improve our development practices. Mandatory Skills: • 5+ years of software development experience with a focus on AWS cloud development and distributed application development with Java & J2EE. • 1+ years of experience in AWS development using TypeScript; if you have not worked with TypeScript, willingness to learn it, as TypeScript is the preferred language for AWS development per Principal standards. • Hands-on experience building and deploying applications on AWS cloud infrastructure (e.g., EC2, Lambda, S3, DynamoDB, RDS, API Gateway, EventBridge, SQS, SNS, Fargate, etc.). • Strong hands-on experience in Java/J2EE, Spring, and Spring Boot development, and a good understanding of serverless computing. • Experience with REST APIs and Java shared libraries. Good to have: • AWS Cloud Practitioner, AWS Certified Developer, or AWS Certified Solutions Architect is a plus.
Requirements: • Strong knowledge of Java development and versioning tools such as IntelliJ, Git, and Maven. • Installation, configuration, and integration of tools for creating the required development environment. • Experience handling install failures, installing updates, and supporting local issues is a plus. • Understanding of application server technology. • Strong analytical and problem-solving skills with keen attention to detail. • Excellent verbal and written communication skills with the ability to articulate complex technical concepts to various audiences. • Experience working in agile development environments and familiarity with CI/CD pipelines. • Consistently raises the bar by going beyond day-to-day performance expectations. Qualifications: • Bachelor's degree in engineering or a related field.

Posted 1 week ago

Apply

8.0 years

0 Lacs

India

Remote

About the Company At Atlassian, we're motivated by a common goal: to unleash the potential of every team. Our software products help teams all over the planet and our solutions are designed for all types of work. Team collaboration through our tools makes what may be impossible alone, possible together. We believe that the unique contributions of all Atlassians create our success. About the Role Engineering | India | Remote | Atlassians can choose where they work – whether in an office, from home, or a combination of the two. That way, Atlassians have more control over supporting their family, personal goals, and other priorities. We can hire people in any country where we have a legal entity. Interviews and onboarding are conducted virtually, a part of being a distributed-first company. Responsibilities Build and ship features and capabilities daily in highly scalable, cross-geo distributed environment Be part of an amazing open and collaborative work environment with other experienced engineers, architects, product managers, and designers Review code with best practices of readability, testing patterns, documentation, reliability, security, and performance considerations in mind Mentor and level up the skills of your teammates by sharing your expertise in formal and informal knowledge sharing sessions Ensure full visibility, error reporting, and monitoring of high performing backend services Participate in Agile software development including daily stand-ups, sprint planning, team retrospectives, show and tell demo sessions Qualifications 8+ years of experience building and developing backend applications Bachelor's or Master's degree with a preference for Computer Science degree Experience crafting and implementing highly scalable and performant RESTful micro-services Proficiency in any modern object-oriented programming language (e.g., Java, Kotlin, Go, Scala, Python, etc.) Fluency in any one database technology (e.g. RDBMS like Oracle or Postgres and/or NoSQL like DynamoDB or Cassandra) Real passion for collaboration and strong interpersonal and communication skills Broad knowledge and understanding of SaaS, PaaS, IaaS industry with hands-on experience of public cloud offerings (AWS, GAE, Azure) Familiarity with cloud architecture patterns and an engineering discipline to produce software with quality Required Skills Proficiency in any modern object-oriented programming language (e.g., Java, Kotlin, Go, Scala, Python, etc.) Preferred Skills Fluency in any one database technology (e.g. RDBMS like Oracle or Postgres and/or NoSQL like DynamoDB or Cassandra) Pay range and compensation package Atlassian offers a variety of perks and benefits to support you, your family and to help you engage with your local community. Our offerings include health coverage, paid volunteer days, wellness resources, and so much more. Visit go.atlassian.com/perksandbenefits to learn more. Equal Opportunity Statement We never discriminate based on race, religion, national origin, gender identity or expression, sexual orientation, age, or marital, veteran, or disability status. All your information will be kept confidential according to EEO guidelines. To provide you the best experience, we can support with accommodations or adjustments at any stage of the recruitment process. Simply inform our Recruitment team during your conversation with them.

Posted 1 week ago

Apply

6.0 years

0 Lacs

India

On-site

We seek a Senior Backend Software Engineer with robust Quality Assurance experience to join our team working on payments solutions in e-commerce. You will work closely with the product owner, area architect, and your team members to clarify business needs and technical requirements and define how to support them best. In close collaboration with other teams, your team will introduce new features and improvements to the process to provide a better experience for more than 11 million of our customers! As our Senior Backend Software Engineer and QA expert: with your Team and Product Owner, you will work in a challenging Agile environment in close collaboration with other teams. you will participate in the team’s technical/architectural discussions and decisions. you will develop and continuously deliver applications for one of the most successful pet e-commerce platforms in Europe. you will participate in the complete software development life cycle from discovery through coding, testing, deployment, and maintenance. daily, you will learn and grow your skills, striving for mastery using state-of-the-art technologies and practices such as AWS, Microservices, Docker and much more! Assess, define, implement and maintain, with the support of the team and of the QA Center of Practice team, the testing approach toward complete automated testing and continuous validation of your applications. Project Overview: There is an urgent initiative of integration with Klarna that requires a team of 3 engineers to work side by side with client's Payment team. MUST HAVE Qualifications • Excellent English verbal and written communication skills • 6+ years of experience with Java 11+ and Spring framework. • 2+ years of TDD/BDD practical application. • 3+ years of working within Agile practices and knowledge of Agile values & principles. • Focus on automated testing and good experience with different levels of tests (unit tests, integration tests, end-to-end tests) • You can understand the architectural landscape and technically investigate and implement new features independently. • Passionate about writing clean, modular, testable code designed with architectural principles in mind and proper use of design patterns • Strong experience with DevOps tools and practices (container orchestration, CI, monitoring and alerting, AWS & Kubernetes) • Experience working with Microservices. • Experience with relational/non-relational databases. Nice to have • Experience integrating with Klarna • Recent and hands-on experience with end-to-end payment processes and systems and integration with PayPal via providers, especially Adyen • Understanding frontend technologies like React, NPM, Javascript, and Angular is a plus. Your responsibilities • Participate in solution investigation, estimations, planning, and alignment with other teams; • Design, implement, deliver and support backend solutions (restful web services) using micro-services architecture. • Work in close collaboration with QA CoP (Center of Practice) teams to integrate quality assurance tooling and processes within your team’s application. • Promote and implement test automation (e.g: unit tests, integration tests, e2e tests and performance tests) • Build and maintain CI/CD pipelines for continuous integration, development, testing and deployment. • Deploy applications on the cloud using technologies such as Docker, Kubernetes, AWS and Terraform. • Work closely with the team in an agile and collaborative environment. 
This will involve code reviews, pair programming, knowledge sharing, and incident coordination. • Maintain existing applications and reduce technical debt. Why this position: Here are several aspects of our team and work environment that contribute to a positive and fulfilling experience: • We work in self-organized teams following Scrum methodology, respecting and valuing everyone's opinion. • We learn from each other and share knowledge through pair programming, code reviews, and many training opportunities. • We deliver value by developing new features, maintaining existing products, and improving all the infrastructure we need, so we would also like to have a colleague who is cross-functional. • We are committed to delivering high-quality products and believe that continuous delivery, clean code, and a DevOps mindset are key to achieving this goal. • Our team values a friendly and collaborative environment and greatly encourages open communication and teamwork. We believe in a team approach and do not assign blame; instead, we work together to build and maintain the system. • We embrace diversity, having colleagues from over 50 countries. This means our working language is English. Technologies we leverage: • Kotlin, Java 11+, Spring framework (Boot, Hibernate) • Oracle, PostgreSQL • CI/CD with Jenkins pipeline • InfluxDB, Grafana, Sensu, ELK stack • Infrastructure as code, one-click deployment, C4 diagrams • Mesos/Marathon, Docker, Kubernetes • Amazon Web Services and cloud deployments (S3, SNS, SQS, RDS, DynamoDB, etc.), using tools such as Terraform or the AWS CLI • Git, Scrum, Pair Programming, Peer Reviewing

Posted 1 week ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About the Team: The backend engineering team develops the digital backbone of the bank that drives all user experiences. They create robust, scalable, and secure backend systems and are responsible for designing, developing, deploying, and monitoring all backend services in production. The team develops and maintains a strategic technology roadmap and ensures the application teams implement best practices for optimal performance, scalability, and availability of our systems. The team works very closely with all the application development teams to ensure that solutions are aligned with business and technical requirements, and to enhance the overall quality and performance of the systems. Get to know the Role: We are seeking a talented and passionate Backend Engineer to join our team. You will have opportunities to work on multiple backend service clusters as well as participate in machine learning pipelines. It is very important that our team members take the initiative to identify problems and have the right mindset and skill sets to solve them. The day-to-day activities/Responsibilities: Design and write cutting-edge Go code to improve the availability, scalability, latency, and efficiency of Digibank's range of services. Work with the engineering team to explore and create new designs and architectures geared towards scale and performance. Participate in code and design reviews to maintain our high development standards. Engage in service capacity and demand planning, software performance analysis, tuning, and optimization. Collaborate with product and experience teams to define and prototype feature specifications. Work closely with the infrastructure team in building and scaling back-end services as well as performing root cause analysis investigations. Design, build, analyze, and fix large-scale systems. Learn full-stack performance tuning and optimization. Debug and modify complex, production software. The must-haves/Qualifications: A degree in Computer Science, Software Engineering, Information Technology, or related fields with strong Computer Science fundamentals in algorithms and data structures. 4-8 years of experience in software engineering in a distributed systems environment. Excellent communication and sharp analytical abilities with proven design skills; able to think critically about the current system in terms of growth and stability. You can be a good coder in any language (C++, C, Java, Scala, Rust, Haskell, OCaml, Erlang, Python, Ruby, PHP, Node.JS, C#, etc.), but must be willing to work with Golang. Our Tech Stack: Our core services tech stack consists of Golang with Redis, MySQL, DynamoDB, and Elasticsearch data stores, as well as HAProxy load balancers. They all run on AWS cloud infrastructure with auto-scaling abilities. Our mobile app platform coverage includes native iOS and Android, written in Swift and RxJava. Our Command Center front-end is built on Rails, HTML5, CSS, and Javascript. We use GitHub for our code repository and we adhere to the basic Continuous Delivery tenets, utilising a host of tools to support our release pipeline and code quality. These include Travis CI, New Relic, PullReview, Code Climate, Papertrail, Gemnasium, JFrog, and Jenkins.

Posted 1 week ago

Apply

10.0 years

0 Lacs

India

Remote

We're Hiring: Senior Backend Engineer – Python & Microservices (Remote | IC Role) Are you a seasoned backend engineer passionate about building scalable, cloud-native systems? We're looking for a Senior Backend Engineer (IC) with 10+ years of experience to join our remote team and help architect and develop high-performance enterprise platforms. In this hands-on role, you'll design and build robust microservices using Python (Flask, FastAPI, Django), scalable RESTful APIs, and event-driven systems powered by Kafka, SQS, or RabbitMQ. You'll work across the stack—from NoSQL and SQL databases (MongoDB, PostgreSQL, DynamoDB) to CI/CD automation using Terraform, GitHub Actions, and Jenkins. You'll collaborate with cross-functional teams, lead architecture discussions, own full service lifecycles, and implement cloud-native solutions on AWS, Kubernetes, and Docker. Familiarity with monitoring and observability tools like Datadog, CloudWatch, and Grafana is important, along with experience in secure API design, including OAuth2 and rate limiting. Bonus points for experience integrating AI/ML, chat platforms, or ticketing systems. Location: 100% Remote Employment Type: Full-time | Individual Contributor Tech Stack: Python, AWS, Docker, Microservices, CI/CD, SQL/NoSQL, Kafka
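A minimal sketch of the event-driven pattern mentioned above, assuming SQS as the broker (the listing also allows Kafka or RabbitMQ); the queue URL and message schema are illustrative, not from this posting.

```python
# Hedged sketch: long-poll an SQS queue, process each message, then delete it.
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/example-events"  # placeholder

def handle_event(event: dict) -> None:
    # Business logic would go here; printing keeps the sketch self-contained.
    print("processing event", event.get("type"))

def poll_once() -> None:
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=20,  # long polling to reduce empty receives
    )
    for msg in resp.get("Messages", []):
        handle_event(json.loads(msg["Body"]))
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])

if __name__ == "__main__":
    poll_once()
```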

Posted 1 week ago

Apply

0.0 - 4.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Category: Infrastructure/Cloud Main location: India, Karnataka, Bangalore Position ID: J0625-0730 Employment Type: Full Time Position Description: Company Profile: Founded in 1976, CGI is among the largest independent IT and business consulting services firms in the world. With 94,000 consultants and professionals across the globe, CGI delivers an end-to-end portfolio of capabilities, from strategic IT and business consulting to systems integration, managed IT and business process services and intellectual property solutions. CGI works with clients through a local relationship model complemented by a global delivery network that helps clients digitally transform their organizations and accelerate results. CGI Fiscal 2024 reported revenue is CA$14.68 billion and CGI shares are listed on the TSX (GIB.A) and the NYSE (GIB). Learn more at cgi.com. Job Title: AWS Cloud DevOps with API Position: Senior Systems Engineer Experience: 6- 9 Years Category: Senior Systems Engineer Shift: US Main location: India, Karnataka, Bangalore Position ID: J0625-0730 Employment Type: Full Time Education Qualification: Bachelor’s degree in computer science or related field or higher with minimum 4 years of relevant experience. Position Description: The Skills that are Key to this role API Development: Knowledgeable in API development, lifecycle management, and gateways like Envoy. Strong understanding in API testing tools Cloud Expertise: Proficient in AWS and its various services such as EKS, S3, DynamoDB, EC2, Route 53, Lambda, etc. Ability to automate with various scripting languages (Python, Shell scripting, GO…) Understanding of infrastructure as code tools (IAM, ARM, Terraform, Chef, …) Solid understanding of Cloud Computing and DevOps concepts including CI/CD pipelines Hands-on Kubernetes skills and knowledge. Understanding of Kubernetes cluster rehydration Hands on experience with one or more observability tools (Prometheus, Grafana, ELK/OpenSearch, OpenTelemetry, Datadog, etc…) Experienced in Instrumentation with systems skills on building and operating, monitoring, logging, alerting services of distributed systems at scale Proven experience in implementing advanced observability practices and techniques at scale. Proven experience in maintaining scalability and resiliency of complex environment. 
Ability to triage, execute root cause analysis, and be decisive under pressure Experience managing and interpreting large datasets using query languages and visualization tools Proficient communication skills with an ability to reach both technical and non-technical audiences Ability to learn new software, methods and practices and bring them to our developers Ability to work with a variety of individuals and groups, both in person and virtually, in a constructive and collaborative manner and build and maintain effective relationships Proven experience performing chaos testing to build confidence in the system's capability to withstand turbulent conditions in production On-call support experience Understanding of Agile Methodology Your future duties and responsibilities: Behavioral: Analytical skills and research capabilities Ability to evaluate and propose best-of-breed tools and engineering best practices Deeply self-motivated with the ability to work independently, coordinating activities within cross-regional and multi-functional teams A passion for excellence, innovation, and teamwork; eager to learn and adapt every day Proven track record of quickly learning, adapting, and thriving in a fast-paced, dynamic, and deadline-driven environment Excellent communication skills Preferred: Experience in building APIs, cloud expertise, Kubernetes skills and knowledge, observability skills and knowledge, Terraform/OpenTofu CGI is an equal opportunity employer. In addition, CGI is committed to providing accommodation for people with disabilities in accordance with provincial legislation. Please let us know if you require reasonable accommodation due to a disability during any aspect of the recruitment process and we will work with you to address your needs. Skills: Kubernetes, Terraform, Google Cloud Platform What you can expect from us: Together, as owners, let's turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect and belonging. Here, you'll reach your full potential because… You are invited to be an owner from day 1 as we work together to bring our Dream to life. That's why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company's strategy and direction. Your work creates value. You'll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise.
You'll shape your career by joining a company built to grow and last. You'll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team—one of the largest IT and business consulting services firms in the world.
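As an illustration of the instrumentation and observability work listed above (not CGI's actual tooling), the sketch below exposes request-count and latency metrics with the Python prometheus_client library; metric names and the port are assumptions.

```python
# Hedged sketch: expose Prometheus metrics from a small Python service.
import random
import time
from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("api_requests_total", "Total API requests", ["endpoint"])
LATENCY = Histogram("api_request_seconds", "Request latency in seconds", ["endpoint"])

def handle_request(endpoint: str) -> None:
    REQUESTS.labels(endpoint=endpoint).inc()
    with LATENCY.labels(endpoint=endpoint).time():
        time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work

if __name__ == "__main__":
    start_http_server(9100)  # Prometheus scrapes http://host:9100/metrics
    while True:
        handle_request("/devices")
```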

Posted 1 week ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Company Description Rainmakers is a fast-growing, innovation-led company based in Noida. We are committed to helping businesses succeed in today's fast-paced and competitive marketplace by providing cutting-edge technology solutions. This role is based in Pune. Role Description This is a full-time on-site role for a Lead Java Developer for a client of Rainmakers. The Lead Java Developer will be responsible for software development, microservices architecture, programming, utilizing the Spring Framework, and Java development tasks on a day-to-day basis. Key Responsibilities: Design, develop, and maintain high-quality Java APIs to seamlessly integrate with mobile applications. Tackle and solve complex technical problems. Work across Optus technology stacks, forging strong relationships with squad members, peers across domains, and external vendors/third parties. Participate in the full software development lifecycle, including requirements analysis, design, coding, testing, and deployment. Proactively contribute to the continuous improvement of processes, tooling, and standards. Lead the squad's technical strategy, set the technical direction, and ensure the quality of delivery. Qualifications Comprehensive knowledge of the Spring Framework, including Spring Boot 3.x, Spring MVC, Spring Security, Spring OAuth2, Spring AOP, and Spring Data. Experience with microservices and Docker. Strong hands-on experience in design and architecture in the microservices space. Extensive Java 8/11/17 and/or Kotlin commercial experience. Comfortable working with a wide range of open-source tools and IDEs. Proficient in Linux, Unix, and Mac systems. Commercial experience with cloud services and API gateways such as Apigee Edge/Apigee X. Experience with Spring WebFlux and Project Reactor. Knowledge of AWS infrastructure or other cloud platforms (Azure/GCP). Experience with NoSQL databases (DynamoDB, Cosmos DB). Understanding of security practices, OWASP, and PCI DSS compliance. Experience in Java application performance tuning. Generalist experience across the full stack, including Java backend APIs, ReactJS, and Android. Knowledge of Kotlin/Node.js is a bonus. Team leadership experience is a bonus.

Posted 1 week ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

This job is with Amazon, an inclusive employer and a member of myGwork – the largest global platform for the LGBTQ+ business community. Please do not contact the recruiter directly. Description IES Prime is building a team to take Prime experience of customers to the next level by building capabilities that are relevant for Prime as well as non-Prime customers in IN and other EMs. Our Development team plays a pivotal role in this program, with the mission to build a comprehensive solution for the India Prime business. This is a rare opportunity to be part of a team that will be responsible for building a successful, sustainable and strategic business for Amazon Prime and to expand the coverage of recurring payments for Prime in India and take it to new emerging markets. The candidate will be instrumental in shaping the product direction and will be actively involved in defining key product features that impact the business. You will work with Sr. and Principal Engineers at Amazon Prime to evolve the design and architecture of the products owned by this team. You will be responsible to set up and hold a high software quality bar besides providing technical direction to a highly technical team of Software Engineers. As part of this team you will work to ensure Amazon.in Prime experience is seamless and has the best shopping experience. It's a great opportunity to develop and enhance experiences for Mobile devices first. You will work on analyzing the latency across the various Amazon.in pages using RedShift, DynamoDB, S3, Java, and Spark. You will get the opportunity to code on almost all key pages on retail website building features and improving business metrics. You will also contribute reducing latency for customers by reducing the bytes on wire and adapting the UX based on network bandwidth. You will be part of a team that obsesses about the performance of our customer's experience and enjoy flexibility to pursue what makes sense. Come enjoy an exploratory and research oriented team of Cowboys working in a fast paced environment, who are always eager to take on big challenges. Basic Qualifications 3+ years of non-internship professional software development experience 2+ years of non-internship design or architecture (design patterns, reliability and scaling) of new and existing systems experience 3+ years of Video Games Industry (supporting title Development, Release, or Live Ops) experience Experience programming with at least one software programming language Preferred Qualifications 3+ years of full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations experience Bachelor's degree in computer science or equivalent Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Posted 1 week ago

Apply

4.0 - 9.0 years

8 - 18 Lacs

Bengaluru

Work from Office

Job Title: Backend Developer (FARM Stack Python, FastAPI/Django, MongoDB) Location: South Bengaluru, Karnataka, India Employment Type: Full-Time Experience Required: 4 to 9 years Work Mode: Work from Office only (No Work from Home or Hybrid option) We are seeking a skilled and experienced Backend Developer to join a fast-paced, mission-driven technology team that is transforming how Indians discover and own properties. You will be part of a platform built using the FARM stack (FastAPI/Django, React, MongoDB), contributing to the design and development of scalable backend architectures, cloud-native systems, and microservices. You will work on high-impact solutions using AWS services, Redis for performance optimization, and OpenSearch for advanced data search. This role is ideal for developers passionate about backend engineering, clean architecture, and meaningful technology innovation. Responsibilities: Build and maintain backend services using Python, FastAPI or Django Architect and implement scalable systems using microservices Design and optimize MongoDB schemas and queries Work with AWS (Lambda, API Gateway, DynamoDB, S3, SQS, SNS) Integrate Redis for caching and session management Implement OpenSearch and vector search for advanced search use cases Write and maintain unit tests following TDD principles using tools like pytest Create and maintain Swagger documentation for APIs Use Git for version control and follow best practices for branching and collaboration Contribute to Agile ceremonies including sprint planning and retrospectives Required Skills: 4 to 9 years of backend development experience in Python Strong experience with FastAPI or Django frameworks Deep understanding of MongoDB and NoSQL schema design Experience building microservices and distributed systems Hands-on experience with AWS cloud and serverless architecture Familiarity with Redis, OpenSearch, and vector-based search Proficiency in unit testing, Git workflows, Swagger, and CI/CD pipelines Experience working in Agile teams Preferred Skills: Docker and Kubernetes CI/CD tools like Jenkins, GitLab CI, or CircleCI Knowledge of API security best practices Experience working with large datasets and high availability systems Please Note: While initial HR and technical rounds will be conducted online, attending a final offline (in-person) interview in Bengaluru is mandatory for shortlisted candidates. The client will not confirm selection or issue an offer letter without this in-person interaction. If you are certain you cannot travel to Bengaluru for the offline interview, we kindly request you not to proceed with the online rounds, as the selection process cannot be completed without a face-to-face meeting.
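A hedged sketch of the Redis-for-caching idea in this listing: a FastAPI route that checks Redis before falling back to the primary store; the key format, TTL, and the stand-in fetch function are illustrative assumptions, not the platform's actual code.

```python
# Hedged sketch: cache-aside lookup with Redis in front of the primary store.
import json
from fastapi import FastAPI
import redis

app = FastAPI()
cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

def fetch_property_from_db(property_id: str) -> dict:
    # Stand-in for a MongoDB query in the real service.
    return {"id": property_id, "city": "Bengaluru", "status": "available"}

@app.get("/properties/{property_id}")
def get_property(property_id: str) -> dict:
    key = f"property:{property_id}"
    cached = cache.get(key)
    if cached:
        return json.loads(cached)
    doc = fetch_property_from_db(property_id)
    cache.setex(key, 300, json.dumps(doc))  # cache for 5 minutes
    return doc
```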

Posted 1 week ago

Apply

5.0 - 31.0 years

9 - 15 Lacs

Bengaluru/Bangalore

On-site

Job Title: NoSQL Database Administrator (DBA) Department: IT / Data Management Job Purpose: The NoSQL Database Administrator will be responsible for designing, deploying, securing, and optimizing NoSQL databases to ensure high availability, reliability, and scalability of mission-critical applications. The role involves close collaboration with developers, architects, and security teams, especially in compliance-driven environments such as UIDAI. Key Responsibilities: Collaborate with developers and solution architects to design and implement efficient and scalable NoSQL database schemas. Ensure database normalization, denormalization where appropriate, and implement indexing strategies to optimize performance. Evaluate and deploy replication architectures to support high availability and fault tolerance. Monitor and analyze database performance using tools like NoSQL Enterprise Monitor and custom monitoring scripts. Troubleshoot performance bottlenecks and optimize queries using query analysis, index tuning, and rewriting techniques. Fine-tune NoSQL server parameters, buffer pools, caches, and system configurations to improve throughput and minimize latency. Implement and manage Role-Based Access Control (RBAC), authentication, authorization, and auditing to maintain data integrity, confidentiality, and compliance. Act as a liaison with UIDAI-appointed GRCP and security audit agencies, ensuring all security audits are conducted on time, and provide the necessary documentation and artifacts to address risks and non-conformities. Participate in disaster recovery planning, backup management, and failover testing. Key Skills & Qualifications: Educational Qualifications: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. Technical Skills: Proficiency in NoSQL databases such as MongoDB, Cassandra, Couchbase, DynamoDB, or similar. Strong knowledge of database schema design, data modeling, and performance optimization. Experience in setting up replication, sharding, clustering, and backup strategies. Familiarity with performance monitoring tools and writing custom scripts for health checks. Hands-on experience with database security, RBAC, encryption, and auditing mechanisms. Strong troubleshooting skills related to query optimization and server configurations. Compliance & Security: Experience with data privacy regulations and security standards, particularly in compliance-driven sectors like UIDAI. Ability to coordinate with government and regulatory security audit teams. Behavioral Skills: Excellent communication and stakeholder management. Strong analytical, problem-solving, and documentation skills. Proactive and detail-oriented with a focus on system reliability and security. Key Interfaces: Internal: Developers, Solution Architects, DevOps, Security Teams, Project Managers. External: UIDAI-appointed GRCP, third-party auditors, security audit agencies. Key Challenges: Maintaining optimal performance and uptime in a high-demand, compliance-driven environment. Ensuring security, scalability, and availability of large-scale NoSQL deployments. Keeping up with evolving data security standards and audit requirements.
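To illustrate two routine tasks named above (an indexing strategy and role-based access control), here is a hedged pymongo sketch; the connection URI, database, collection, and role names are placeholders, not details from the employer.

```python
# Hedged sketch of two DBA tasks in MongoDB: a compound index and a read-only role.
from pymongo import MongoClient, ASCENDING

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
db = client["identity_db"]                          # placeholder database

# Indexing strategy: support queries that filter on status and sort by created_at.
db["enrolments"].create_index(
    [("status", ASCENDING), ("created_at", ASCENDING)],
    name="status_created_at_idx",
)

# Role-based access control: a role limited to read-only access on one collection.
db.command(
    "createRole", "enrolment_reader",
    privileges=[{"resource": {"db": "identity_db", "collection": "enrolments"},
                 "actions": ["find"]}],
    roles=[],
)
```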

Posted 1 week ago

Apply

0 years

0 Lacs

India

Remote

Hi folks, please check the JD below and share your updated resume with me at naresh@sapphiresoftwaresolutions.com, and ping me on WhatsApp (+91 970-529-6474) along with your resume. Sr. AWS DevOps Engineer. Remote (offshore in India). 1 opening. Night shift. ASAP (can get interviews this week). Role: DevOps Engineer Roles & Responsibilities: Involved in migrating on-premise applications to the AWS cloud, utilizing services like EKS, EC2, S3, RDS, IAM, VPC, Lambda, Security Groups, EBS, Auto Scaling groups, SNS, ALB, Route53, and CloudFormation templates. Developed and deployed AWS CloudFormation templates to launch infrastructure to Dev, QA, Pre-Production, and Production environments in the AWS cloud. Created EKS clusters using CloudFormation templates by deploying CodePipeline and TeamCity pipelines. Configured kubectl to interact with Kubernetes infrastructure and used AWS CloudFormation Templates (CFT) to launch a cluster of worker nodes on Amazon EC2 instances. Created namespaces in EKS to organize resource deployments. Upgraded kube-proxy to resolve compliance issues. Installed, configured, and updated the metrics server, Kubernetes dashboard, Ingress, Datadog agents, Reloader, Istio, and Splunk as part of infrastructure setup. Updated the CloudFormation templates to upgrade the EC2 instance type and increase the EKS node count. Created Application Load Balancers and mapped ALBs to Route53 DNS. Created IAM policies and attached them to IAM roles. Created TeamCity pipelines to set up infrastructure and deployed microservices into EKS clusters in various environments. Configured Route53 using a Lambda function with a weighted routing policy and routed traffic to different regions by switching the weights using the Lambda function. Disaster recovery environments were set up for Dev, QA, Pre-Production, and Production. Performed DR testing by bringing down the services and routing traffic to the DR region. Using CloudFormation scripts, created DynamoDB tables and imported data from S3 into the DynamoDB tables. Scaled instances by dynamically allocating memory to pods to meet requirements. Resolved critical and high security issues raised by the compliance team to meet standards. Validated firewall connectivity in multiple environments and ensured that traffic flows without interruption. Worked with the Datadog team to troubleshoot and resolve missing agent logs, and updated Datadog agents. Maintained JIRA for updating project defects and tasks, ensuring the successful completion of tasks in each sprint.
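The weighted-routing failover described above could look roughly like the following boto3 sketch, which shifts Route 53 weights from a primary region to a DR region; the hosted zone ID, record name, and ALB DNS names are placeholders, not details from this role.

```python
# Hedged sketch: a Lambda that moves traffic to the DR region by swapping
# Route 53 weighted-record weights (100/0 -> 0/100).
import boto3

route53 = boto3.client("route53")
HOSTED_ZONE_ID = "Z123EXAMPLE"      # placeholder
RECORD_NAME = "app.example.com."    # placeholder

def weighted_change(set_identifier: str, alb_dns: str, weight: int) -> dict:
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": RECORD_NAME,
            "Type": "CNAME",
            "SetIdentifier": set_identifier,
            "Weight": weight,
            "TTL": 60,
            "ResourceRecords": [{"Value": alb_dns}],
        },
    }

def lambda_handler(event, context):
    route53.change_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        ChangeBatch={"Changes": [
            weighted_change("primary", "primary-alb.us-east-1.elb.amazonaws.com", 0),
            weighted_change("dr", "dr-alb.us-west-2.elb.amazonaws.com", 100),
        ]},
    )
    return {"routed_to": "dr"}
```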

Posted 1 week ago

Apply

7.0 - 12.0 years

15 - 25 Lacs

Hyderabad, Bengaluru, Mumbai (All Areas)

Hybrid

Job Summary: We are seeking a highly motivated and experienced Senior Data Engineer to join our team. This role requires a deep curiosity about our business and a passion for technology and innovation. You will be responsible for designing and developing robust, scalable data engineering solutions that drive our business intelligence and data-driven decision-making processes. If you thrive in a dynamic environment and have a strong desire to deliver top-notch data solutions, we want to hear from you. Key Responsibilities: Collaborate with agile teams to design and develop cutting-edge data engineering solutions. Build and maintain distributed, low-latency, and reliable data pipelines ensuring high availability and timely delivery of data. Design and implement optimized data engineering solutions for Big Data workloads to handle increasing data volumes and complexities. Develop high-performance real-time data ingestion solutions for streaming workloads. Adhere to best practices and established design patterns across all data engineering initiatives. Ensure code quality through elegant design, efficient coding, and performance optimization. Focus on data quality and consistency by implementing monitoring processes and systems. Produce detailed design and test documentation, including Data Flow Diagrams, Technical Design Specs, and Source to Target Mapping documents. Perform data analysis to troubleshoot and resolve data-related issues. Automate data engineering pipelines and data validation processes to eliminate manual interventions. Implement data security and privacy measures, including access controls, key management, and encryption techniques. Stay updated on technology trends, experimenting with new tools, and educating team members. Collaborate with analytics and business teams to improve data models and enhance data accessibility. Communicate effectively with both technical and non-technical stakeholders. Qualifications: Education: Bachelors degree in Computer Science, Computer Engineering, or a related field. Experience: Minimum of 5+ years in architecting, designing, and building data engineering solutions and data platforms. Proven experience in building Lakehouse or Data Warehouses on platforms like Databricks or Snowflake. Expertise in designing and building highly optimized batch/streaming data pipelines using Databricks. Proficiency with data acquisition and transformation tools such as Fivetran and DBT. Strong experience in building efficient data engineering pipelines using Python and PySpark. Experience with distributed data processing frameworks such as Apache Hadoop, Apache Spark, or Flink. Familiarity with real-time data stream processing using tools like Apache Kafka, Kinesis, or Spark Structured Streaming. Experience with various AWS services, including S3, EC2, EMR, Lambda, RDS, DynamoDB, Redshift, and Glue Catalog. Expertise in advanced SQL programming and performance tuning. Key Skills: Strong problem-solving abilities and perseverance in the face of ambiguity. Excellent emotional intelligence and interpersonal skills. Ability to build and maintain productive relationships with internal and external stakeholders. A self-starter mentality with a focus on growth and quick learning. Passion for operational products and creating outstanding employee experiences.
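A minimal sketch of the streaming-ingestion work described above, assuming Spark Structured Streaming reading from Kafka and landing Parquet on S3; the broker, topic, and paths are placeholders, not details from this listing.

```python
# Hedged sketch: read a Kafka topic with Spark Structured Streaming and append
# the records to a Parquet landing zone (requires the Spark-Kafka connector).
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("events-ingest").getOrCreate()

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
          .option("subscribe", "orders")                      # placeholder topic
          .load()
          .select(col("key").cast("string"),
                  col("value").cast("string"),
                  col("timestamp")))

query = (events.writeStream
         .format("parquet")
         .option("path", "s3a://data-lake/raw/orders/")               # placeholder path
         .option("checkpointLocation", "s3a://data-lake/checkpoints/orders/")
         .outputMode("append")
         .start())

query.awaitTermination()
```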

Posted 1 week ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Company Description
Experian is a global data and technology company, powering opportunities for people and businesses around the world. We help to redefine lending practices, uncover and prevent fraud, simplify healthcare, create marketing solutions, and gain deeper insights into the automotive market, all using our unique combination of data, analytics and software. We also assist millions of people to realize their financial goals and help them save time and money. We operate across a range of markets, from financial services to healthcare, automotive, agribusiness, insurance, and many more industry segments. We invest in people and new advanced technologies to unlock the power of data. As a FTSE 100 Index company listed on the London Stock Exchange (EXPN), we have a team of 22,500 people across 32 countries. Our corporate headquarters are in Dublin, Ireland. Learn more at experianplc.com.

Job Description
We are seeking a skilled Backend Python Developer with expertise in API development, asynchronous programming, and AWS services. You should have hands-on experience with aioboto3 and an understanding of building scalable, high-performance backend systems. You will report to a Senior Manager and work in hybrid mode from Hyderabad, with 2 days a week in the office.

Responsibilities
* Design, develop, and maintain robust RESTful APIs using Python.
* Implement asynchronous programming patterns (async/await) for high-concurrency applications.
* Integrate and manage AWS services (S3, Lambda, DynamoDB, etc.) using aioboto3 and boto3 (a sketch follows this listing).
* Optimize backend performance and scalability.
* Collaborate with frontend developers, DevOps, and product teams to provide end-to-end solutions.
* Write clean, maintainable, and well-documented code.
* Participate in code reviews and contribute to best practices.

Requirements
* 3+ years of experience in backend development with Python.
* Proficiency in asynchronous programming (asyncio, async/await).
* Hands-on experience with aioboto3 and AWS SDKs.
* Experience with RESTful API design and implementation.
* Experience with AWS services (EC2, S3, Lambda, DynamoDB, etc.).
* Familiarity with CI/CD pipelines and containerization (Docker).
* Knowledge of database systems (SQL and NoSQL).

Preferred
* Experience with Flask/FastAPI or similar async frameworks.
* Familiarity with serverless architectures.

Qualifications
* B.Tech in Computer Science or equivalent, with around 3+ years of experience.

Additional Information
Our uniqueness is that we truly celebrate yours. Experian's culture and people are important differentiators. We take our people agenda very seriously and focus on what truly matters: DEI, work/life balance, development, authenticity, engagement, collaboration, wellness, reward & recognition, volunteering... the list goes on. Experian's strong people-first approach is award winning: Great Place To Work™ in 24 countries, FORTUNE Best Companies to Work For and Glassdoor Best Places to Work (globally 4.4 stars), to name a few. Check out Experian Life on social or our Careers Site to understand why. Experian is proud to be an Equal Opportunity and Affirmative Action employer. Innovation is a critical part of Experian's DNA and practices, and our diverse workforce drives our success. Everyone can succeed at Experian and bring their whole self to work, irrespective of their gender, ethnicity, religion, color, sexuality, physical ability or age. If you have a disability or special need that requires accommodation, please let us know at the earliest opportunity.

Benefits
Experian cares for employees' work-life balance, health, safety and wellbeing. In support of this endeavor, we offer best-in-class family well-being benefits, enhanced medical benefits and paid time off.

Experian Careers - Creating a better tomorrow together
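As a rough illustration of the async AWS integration this role calls for, here is a minimal aioboto3 sketch that writes to DynamoDB and uploads to S3 concurrently. The table name, bucket, keys, and item shape are hypothetical placeholders.

```python
import asyncio
import aioboto3

session = aioboto3.Session()

async def save_event(item: dict) -> None:
    # Write one record to a (hypothetical) DynamoDB table without blocking the event loop
    async with session.resource("dynamodb") as dynamodb:
        table = await dynamodb.Table("events")          # placeholder table name
        await table.put_item(Item=item)

async def upload_report(key: str, body: bytes) -> None:
    # Upload a generated report to a (hypothetical) S3 bucket
    async with session.client("s3") as s3:
        await s3.put_object(Bucket="example-reports", Key=key, Body=body)

async def main():
    # Both calls run concurrently on the same event loop
    await asyncio.gather(
        save_event({"pk": "user#42", "sk": "login#2024-01-01"}),
        upload_report("daily/2024-01-01.json", b"{}"),
    )

if __name__ == "__main__":
    asyncio.run(main())
```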

Posted 1 week ago

Apply

0 years

0 Lacs

Trivandrum, Kerala, India

Remote

Description
Data Engineer

Responsibilities:
* Deliver end-to-end data and analytics capabilities, including data ingest, data transformation, data science, and data visualization, in collaboration with Data and Analytics stakeholder groups.
* Design and deploy databases and data pipelines to support analytics projects.
* Develop scalable and fault-tolerant workflows (a minimal orchestration sketch follows this listing).
* Clearly document issues, solutions, findings and recommendations to be shared internally and externally.
* Learn and apply tools and technologies proficiently, including:
  - Languages: Python, PySpark, ANSI SQL, Python ML libraries
  - Frameworks/Platforms: Spark, Snowflake, Airflow, Hadoop, Kafka
  - Cloud Computing: AWS
  - Tools/Products: PyCharm, Jupyter, Tableau, Power BI
* Performance optimization for queries and dashboards.
* Develop and deliver clear, compelling briefings to internal and external stakeholders on findings, recommendations, and solutions.
* Analyze client data and systems to determine whether requirements can be met.
* Test and validate data pipelines, transformations, datasets, reports, and dashboards built by the team.
* Develop and communicate solution architectures and present solutions to both business and technical stakeholders.
* Provide end-user support to other data engineers and analysts.

Candidate Requirements
Expert experience in the following (should have / good to have):
* SQL, Python, PySpark, Python ML libraries. Other programming languages (R, Scala, SAS, Java, etc.) are a plus.
* Data and analytics technologies including SQL/NoSQL/graph databases, ETL, and BI.
* Knowledge of CI/CD and related tools such as GitLab, AWS CodeCommit, etc.
* AWS services including EMR, Glue, Athena, Batch, Lambda, CloudWatch, DynamoDB, EC2, CloudFormation, IAM and EDS.
* Exposure to Snowflake and Airflow.
* Solid scripting skills (e.g., bash/shell scripts, Python).

Proven work experience in the following:
* Data streaming technologies.
* Big Data technologies including Hadoop, Spark, Hive, Teradata, etc.
* Linux command-line operations.
* Networking knowledge (OSI network layers, TCP/IP, virtualization).
* Candidate should be able to lead the team, communicate with the business, and gather and interpret business requirements.
* Experience with agile delivery methodologies using Jira or similar tools.
* Experience working with remote teams.
* AWS Solutions Architect / Developer / Data Analytics Specialty certifications; Professional certification is a plus.
* Bachelor's degree in Computer Science or a relevant field; Master's degree is a plus.
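To make the "scalable and fault-tolerant workflows" point concrete, here is a minimal Airflow DAG sketch with two dependent PythonOperator tasks. The DAG id, schedule, and task logic are placeholders, not a prescribed design.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**context):
    # Placeholder: pull raw records from a source system
    return [{"id": 1, "value": 10}]

def transform(ti=None, **context):
    # Pull the upstream task's return value via XCom and derive a new column
    rows = ti.xcom_pull(task_ids="extract")
    return [{**r, "value_doubled": r["value"] * 2} for r in rows]

with DAG(
    dag_id="daily_ingest_example",       # placeholder DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)

    t_extract >> t_transform
```

Fault tolerance here comes from Airflow's scheduler: failed tasks can be retried or re-run for a given execution date without re-running the whole pipeline.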

Posted 1 week ago

Apply

6.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Job Description

Overview:
At ChiStats, we're looking for a Technical Lead to drive high-impact engineering outcomes for a forward-thinking client in the Insurance & Data Science sector. You'll join a team that thrives on innovation, solving real-world problems using cutting-edge Python stacks, scalable APIs, modern databases, and cloud-native infrastructure. If you thrive in lean, fast-moving environments where ownership matters and ideas turn into action, this is your playground.

Responsibilities
* Lead and mentor a development team, ensuring timely delivery, code quality, and skill growth.
* Manage customer communication, understand requirements, provide technical clarity, and ensure satisfaction.
* Develop and maintain scalable codebases, including new framework design, enhancements, and production-grade pipelines.
* Collaborate with cross-functional teams (DevOps, QA, Product) to align development goals with business objectives.
* Ensure code quality through reviews, automated testing, and adherence to best practices.
* Identify and resolve performance bottlenecks, bugs, and technical challenges in live systems.
* Document architecture, workflows, and key decisions, ensuring smooth knowledge transfer and maintainability.

Requirements
* Bachelor's/Master's degree in Computer Science, Engineering, or a related field.
* Excellent communication and leadership skills are a must for this role: effectively conveying ideas and creative concepts to clients, ensuring clear communication and alignment with their vision and objectives.
* 6+ years of professional experience in software development with a focus on Python.
* Proven experience in full-stack development, including front-end and back-end technologies.
* Proven experience in client handling and managing a tech team.
* Experience with server-side frameworks and libraries (e.g., Django, Flask).
* Proficiency in cloud services (preferably AWS) and containerization tools (Docker, Kubernetes).
* Solid understanding of database technologies, both relational (MySQL, PostgreSQL) and non-relational (MongoDB, DynamoDB).
* Experience in developing CI/CD pipelines and data pipelines with best practices.
* Strong problem-solving abilities with a proven capacity for critical and creative thinking.
* Good understanding of web technologies, including HTML, CSS, JavaScript, and related frameworks (Node.js, AngularJS, ReactJS).

Benefits

Company Description
ChiStats is a data science company helping clients leverage data to accelerate and augment business value. At the core, our team of passionate data scientists loves solving complex problems for clients and partners. We can do so through a solid research and development foundation (5 individual patents and 20 publications, and growing). At one end of the spectrum, we've helped Asia's largest Oil & Gas company with remote monitoring of pumps, generating early warning signals before asset failures. At the other end, we are working with the largest defense company in India on a strategic project with the Indian Navy in the area of signal processing. As part of recent recognitions, ChiStats is part of the NASSCOM CoE for IoT and AI incubation center, and ChiStats is a winner of the DRDO Dare to Dream Innovation Contest 2.0. ChiStats is at the forefront of leveraging data science and artificial intelligence to deliver impactful business solutions.

Why ChiStats?
At ChiStats, we believe our people are our greatest strength. That's why we offer a holistic benefits package designed to support your wellbeing, professional growth, and financial security, so you can thrive both at work and in life.

Flexible & Remote Work Options
We understand that life happens and we respect individual circumstances. Whether you're navigating a personal transition or simply value work-life balance, we offer the flexibility to help you succeed on your terms.

Financial Wellbeing
We care about your future as much as your present. Our benefits include:
* Provident Fund contributions
* Comprehensive healthcare cover

Recognition That Matters
Our ChiStats Honors Platform is designed to celebrate achievements, milestones, and day-to-day excellence, because recognition should be timely, meaningful, and part of our culture.

Growth Through Learning – Elevate
We invest in your continuous learning through our internal training platform, Elevate, which identifies personalized development needs and offers targeted upskilling opportunities across tech, tools, and soft skills.

Posted 1 week ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

Remote

Sun Life Global Solutions (SLGS)
Established in the Philippines in 1991 and in India in 2006, Sun Life Global Solutions (formerly Asia Service Centres), a microcosm of Sun Life, is poised to harness the regions' potential in a significant way - from India and the Philippines to the world. We are architecting and executing a BOLDER vision: being a Digital and Innovation Hub, shaping the Business, driving Transformation and superior Client experience by providing expert Technology, Business and Knowledge Services and advanced Solutions. We help our clients achieve lifetime financial security and live healthier lives – our core purpose and mission. Drawing on our collaborative and inclusive culture, we are reckoned as a 'Great Place to Work', 'Top 100 Best Places to Work for Women' and stand among the 'Top 11 Global Business Services Companies' across India and the Philippines.

The technology function at Sun Life Global Solutions is geared towards growing our existing business, deepening our client understanding, managing new-age technology systems, and demonstrating thought leadership. We are committed to building greater domain expertise and engineering ability, delivering end-to-end solutions for our clients, and taking a lead in intelligent automation. Tech services at Sun Life Global Solutions have evolved in areas such as application development and management, support, testing, digital, data engineering and analytics, infrastructure services and project management. We are constantly expanding our strength in information technology and are looking for fresh talent who can bring ideas and values aligned with our digital strategy.

Key Responsibilities:
* Design and develop end-to-end contact center solutions using Salesforce and Amazon Connect.
* Collaborate with business stakeholders to understand their contact center requirements and translate them into technical solutions.
* Lead the architecture and design of complex contact center systems, ensuring they meet business needs and align with industry best practices.
* Provide technical leadership and guidance to development teams throughout the project lifecycle.
* Ensure the scalability, security, and performance of contact center solutions.
* Stay updated with the latest trends and technologies in Salesforce, Amazon Connect and contact center solutions.

Key Requirements:
* Extensive experience with Salesforce and Amazon Connect.
* Strong understanding of cloud computing, data integration, and API management.
* Proven experience in using an Agile approach for development, with frequent sprints for business-benefit realization.
* Experience in leading the design and/or development of solutions with a virtual team in remote locations.
* Strong appreciation and proven achievement in selecting appropriate tools for platform delivery.
* Proficiency in Salesforce development and customization (Apex, Visualforce, Lightning Components).
* Expertise in Amazon Connect configuration and integration, including configuring and maintaining Amazon Connect contact center environments.
* Hands-on experience developing and implementing call flows, routing strategies, and IVR configurations.
* Ability to integrate Amazon Connect with AWS services such as Lambda, RDS, DynamoDB, S3, Polly, CloudWatch, Lex, and GenAI services like Bedrock and SageMaker (a sketch of a Connect-to-Lambda integration follows this listing).
* Ability to troubleshoot and resolve issues related to Amazon Connect, telephony, and system integrations.
* Can assist in scripting and automation to improve system efficiency.
* Works with business stakeholders to understand requirements and deliver technical solutions.
* Familiarity with and enthusiasm for DevOps, CI/CD, and SRE best practices.
* Familiarity with API management and development (REST, SOAP, GraphQL).
* Excellent communication and presentation skills, able to engage in conversations at all levels of senior management.
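As an illustration of the Amazon Connect–Lambda integration mentioned above, here is a minimal sketch of a Lambda handler invoked from a contact flow that looks the caller up in DynamoDB and returns contact attributes. The table, key, and attribute names are hypothetical; Amazon Connect passes the contact details in the event and expects a flat map of string values in the response, which the flow can then branch on.

```python
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("customer-profiles")   # placeholder table name

def lambda_handler(event, context):
    """Invoked from an Amazon Connect contact flow to look up the caller by phone number."""
    contact = event.get("Details", {}).get("ContactData", {})
    phone = contact.get("CustomerEndpoint", {}).get("Address", "")

    resp = table.get_item(Key={"phone": phone})   # placeholder partition key
    profile = resp.get("Item")

    # Return a flat map of string attributes; the flow can route VIP callers differently,
    # play a personalized prompt via Polly, etc.
    if profile:
        return {
            "customerFound": "true",
            "firstName": str(profile.get("first_name", "")),
            "tier": str(profile.get("tier", "standard")),
        }
    return {"customerFound": "false"}
```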

Posted 1 week ago

Apply

6.0 years

0 Lacs

India

Remote

Now Hiring: Senior Python Developer (Immediate Joiners Only)

Location: Trivandrum / Kochi / Remote
Experience: Minimum 6+ years overall, with 4+ years of relevant experience in Python, PostgreSQL, and AWS
Budget: Up to 22 LPA
Notice Period: Immediate only

We are seeking a highly skilled Senior Python Developer with a strong foundation in PostgreSQL and hands-on experience with AWS services such as Lambda, API Gateway, Step Functions, and Aurora.

Key Responsibilities:
* Design, develop, and deliver scalable backend solutions using Python and AWS (a sketch follows this listing)
* Lead and mentor a team of Python developers
* Drive best practices in web application development and CI/CD pipelines
* Participate in solution architecture, prototyping, and requirement analysis
* Ensure high-quality code through thorough testing and peer reviews

Technical Requirements:
* Proficiency in Python and frameworks like FastAPI, Flask, or Django
* Strong expertise in PostgreSQL, PL/pgSQL, and performance optimization
* Solid experience in AWS services: Lambda, API Gateway, Aurora, Step Functions, S3
* Exposure to authentication using AWS Cognito
* Experience with infrastructure as code (Terraform) and CI/CD workflows
* Familiarity with version control systems like Git or Bitbucket

Nice to Have:
* Exposure to the utilities domain (e.g., Metering Services)
* Knowledge of MongoDB, DynamoDB, or Oracle
* Strong understanding of REST APIs, JSON, and unit testing

Educational Qualification:
* Bachelor's Degree in Computer Science or a related field with relevant work experience

This is a great opportunity to work in a dynamic and collaborative environment, building cutting-edge applications that scale. If you're ready for your next challenge and meet the criteria, we'd love to hear from you.
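A minimal sketch of the kind of backend glue this role describes, assuming a FastAPI service that hands work off to an AWS Step Functions state machine via boto3. The state machine ARN, route, and payload model are hypothetical placeholders (and the Pydantic v2 API is assumed).

```python
import json
import os

import boto3
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
sfn = boto3.client("stepfunctions")

# Placeholder ARN; in practice this would come from configuration or Terraform outputs
STATE_MACHINE_ARN = os.environ.get(
    "STATE_MACHINE_ARN",
    "arn:aws:states:ap-south-1:123456789012:stateMachine:meter-readings",
)

class ReadingBatch(BaseModel):
    meter_id: str
    readings: list[float]

@app.post("/readings")
def submit_readings(batch: ReadingBatch):
    """Accept a batch of readings and hand processing off to a Step Functions workflow."""
    execution = sfn.start_execution(
        stateMachineArn=STATE_MACHINE_ARN,
        input=json.dumps(batch.model_dump()),
    )
    return {"executionArn": execution["executionArn"]}
```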

Posted 1 week ago

Apply

0 years

0 Lacs

India

Remote

Key Responsibilities:
* Design, implement, and manage cloud-native databases using AWS services such as Amazon RDS, Aurora, DynamoDB, ElastiCache, and OpenSearch.
* Implement and configure database migration solutions such as AWS DMS and SCT (Schema Conversion Tool).
* Monitor and optimize database and SQL query performance.
* Perform backup and restore, within and across accounts (a sketch follows this listing).
* Troubleshoot and resolve database-related issues, ensuring high availability and reliability.
* Develop and maintain IaC solutions using Terraform, AWS Cloud Development Kit (CDK), and other tools.
* Automate deployment pipelines for infrastructure provisioning and configuration while ensuring security and scalability with best practices.
* Implement CI/CD pipelines using Jenkins, AWS CodePipeline and Argo CD to streamline application deployment.
* Use Terraform, Lambda, Python, and TypeScript to write reusable IaC modules and scripts for managing infrastructure.
* Work with observability tools like Dynatrace and Splunk.
* Collaborate with project/development teams to ensure seamless integration between applications and infrastructure.
* Monitor and maintain cloud infrastructure for optimal performance and cost-efficiency.
* Assist in disaster recovery planning and ensure compliance with organizational policies.
* Stay updated with new AWS offerings and integrate them to enhance existing systems.

Primary Skills:
* In-depth knowledge of AWS database services: Aurora (MySQL/PostgreSQL), DynamoDB, ElastiCache, RDS, and OpenSearch.
* Experience in designing and managing distributed and scalable databases in the cloud.
* Knowledge of and experience with ITIL tools and processes; good understanding of incident, change and problem management.

Secondary Skills:
* Proficiency in IaC tools: Terraform, AWS CDK, AWS CloudFormation.
* Programming knowledge in TypeScript and Python, and use of Lambda coding to support IaC tools.
* Experience with CI/CD tools like Jenkins and Argo CD.
* Solid understanding of version control systems (e.g., Git) and automation practices.

Skills: Amazon RDS, Aurora, DynamoDB, ElastiCache, OpenSearch, AWS DMS, SCT, Terraform, AWS CDK, CloudFormation, Jenkins, Argo CD, Lambda, Python, TypeScript, Dynatrace, Splunk.
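For the backup-and-restore responsibility, a minimal boto3 sketch using DynamoDB on-demand backups is shown below. Table names are placeholders; cross-account restores would additionally involve copying the backup or using AWS Backup, which this sketch does not cover.

```python
from datetime import datetime, timezone

import boto3

dynamodb = boto3.client("dynamodb")

def backup_table(table_name: str) -> str:
    """Take an on-demand backup of a DynamoDB table and return its ARN."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M%S")
    resp = dynamodb.create_backup(
        TableName=table_name,
        BackupName=f"{table_name}-{stamp}",
    )
    return resp["BackupDetails"]["BackupArn"]

def restore_table(backup_arn: str, target_table: str) -> None:
    """Restore a backup into a new table (the target name must not already exist)."""
    dynamodb.restore_table_from_backup(
        TargetTableName=target_table,
        BackupArn=backup_arn,
    )

if __name__ == "__main__":
    arn = backup_table("orders")              # placeholder table name
    restore_table(arn, "orders-restored")     # placeholder target name
```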

Posted 1 week ago

Apply

5.0 - 9.0 years

7 - 11 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

Mandatory Skills
* Java Development: At least 5 years of experience in Java development.
* Frameworks: Proficiency in Spring Boot, Spring Security, and Microservices architecture.
* Cloud: Hands-on experience with AWS.
* Databases: Familiarity with MongoDB or DynamoDB.
* APIs: Expertise in building and integrating REST APIs.

Additional Skills
* Java Libraries/Frameworks: Knowledge of popular libraries like JPA, Spring, Kafka, etc.
* Software Development Overview: Clear understanding of all layers of software development.
* SQL and NoSQL Databases: Experience with both database types.
* Cloud and DevOps: Familiarity with tools like Kubernetes, Docker, Terraform, and Concourse.
* Agility: Open to picking up innovative technologies as required.
* Soft Skills: Excellent communication, analytical, and goal-oriented mindset.

Educational Qualifications
* Degree: B.Tech in Computer Science or equivalent.
* Alternative: Relevant professional experience to handle challenges effectively.

How to Apply
Interested candidates should share their resumes with the following details:

Posted 1 week ago

Apply

5.0 - 9.0 years

4 - 9 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

Job Description: Senior Java Developer

Experience Level: 4 to 6 years
Notice Period: 0 to 30 days
Work Mode: Work From Office (5 days a week)
Locations: Hyderabad, Ahmedabad, Indore, Jaipur, Bengaluru
Positions Open: 10

Mandatory Skills
* Java Development: At least 5 years of experience in Java development.
* Frameworks: Proficiency in Spring Boot, Spring Security, and Microservices architecture.
* Cloud: Hands-on experience with AWS.
* Databases: Familiarity with MongoDB or DynamoDB.
* APIs: Expertise in building and integrating REST APIs.

Additional Skills
* Java Libraries/Frameworks: Knowledge of popular libraries like JPA, Spring, Kafka, etc.
* Software Development Overview: Clear understanding of all layers of software development.
* SQL and NoSQL Databases: Experience with both database types.
* Cloud and DevOps: Familiarity with tools like Kubernetes, Docker, Terraform, and Concourse.
* Agility: Open to picking up innovative technologies as required.
* Soft Skills: Excellent communication, analytical, and goal-oriented mindset.

Educational Qualifications
* Degree: B.Tech in Computer Science or equivalent.
* Alternative: Relevant professional experience to handle challenges effectively.

How to Apply
Interested candidates should share their resumes with the following details:
* Current CTC
* Expected CTC
* Preferred Location: Hyderabad, Ahmedabad, Indore, Jaipur
* Notice Period

Contact Information
Email: neetu.raj@supremeconsultingservices.com, surya.jaipal@dattamsa.org
Phone/WhatsApp: 9032956160

Posted 1 week ago

Apply

7.0 - 12.0 years

9 - 14 Lacs

Hyderabad

Work from Office

Key Responsibilities:
* Design and implement robust backend systems using C#/.NET.
* Develop and maintain microservices architecture.
* Utilize event-driven architecture to enhance system responsiveness and scalability.
* Manage and optimize AWS infrastructure, including SQS, SNS, Lambda, and DynamoDB.
* Work with both relational and non-relational databases to ensure data integrity and performance.
* Implement and maintain CI/CD pipelines to streamline development and deployment processes.
* Collaborate with Agile teams to deliver high-quality software solutions.
* Integrate and manage Apache Kafka for real-time data processing.

Qualifications (at least 7+ years of experience in the areas below):
* Proven experience as a Principal Engineer or in a similar role with a strong backend focus.
* Expertise in C#/.NET development.
* In-depth knowledge of microservices and event-driven architecture.
* Extensive experience with AWS infrastructure (SQS, SNS, Lambda, DynamoDB).
* Proficiency in working with both relational and non-relational databases.
* Strong understanding of CI/CD pipelines and Agile methodology.
* Hands-on experience with Apache Kafka.
* Excellent problem-solving skills and attention to detail.
* Strong communication and collaboration skills.

Posted 1 week ago

Apply