Home
Jobs

1759 Dynamodb Jobs - Page 39

Filter Interviews
Min: 0 years
Max: 25 years
Min: ₹0
Max: ₹10000000
Setup a job Alert
Filter
JobPe aggregates results for easy application access, but you actually apply on the job portal directly.

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Job Summary We are looking for a Senior Python Developer with 5–8 years of professional experience in backend development, including strong proficiency with NoSQL databases. The ideal candidate will contribute to building robust, scalable, and high performance systems and APIs, while ensuring reliability and maintainability of backend services. Key Responsibilities • Design and implement backend services and RESTful APIs using Python. • Work with NoSQL databases like MongoDB, Cassandra, or DynamoDB for high performance data handling. • Collaborate with cross-functional teams including frontend developers, QA, DevOps, and product managers. • Write clean, reusable, and testable code with appropriate unit/integration tests. • Experience with frameworks such as FastAPI, Flask, or Django. • Solid understanding of REST APIs, asynchronous programming, and microservices architecture. • Monitor, troubleshoot, and resolve backend issues in production environment. Show more Show less

Posted 3 weeks ago

Apply

8.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

We are looking for a highly experienced Lead Software Engineer with a strong background in Python and AWS to join our team. As a Lead Software Engineer, you will be responsible for designing, developing, and maintaining software applications that meet customer needs and requirements. You will also be responsible for leading a team of developers and collaborating with cross-functional teams to ensure successful project delivery. Responsibilities Design, develop, and maintain software applications that meet customer needs and requirements Lead a team of developers and collaborate with cross-functional teams to ensure successful project delivery Create technical designs and documentation to support development and maintenance of software applications Participate in code reviews and ensure code quality and standards are met Design and build APIs for seamless communication between different components Present and organize demo session to demonstrate solutions Provide technical leadership and mentoring to team members Requirements 8-12 years of experience in software engineering Strong expertise in Python core, including decorators and other language features Experience with AWS services, including Lambda, DynamoDB, CloudFormation, and IAM Expertise in designing and building RESTful APIs for seamless communication between different components Proficiency with Microservices Architecture and Containerization, including Docker, AWS ECS, and ECR Experience in presenting and organizing demo sessions to demonstrate solutions Strong communication and collaboration skills to work with cross-functional teams Ability to take responsibility and ownership over scope of work and get things done B2+ English level proficiency Show more Show less

Posted 3 weeks ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Linkedin logo

About the Role: We’re looking for a motivated and passionate Python Developer Intern/Fresher to join our dynamic team. You will work on developing backend systems, APIs, and cloud-based services using modern Python frameworks and AWS tools. This is a great opportunity to gain hands-on experience in full-stack cloud-native application development. Responsibilities: Develop and maintain RESTful APIs using Django and Flask . Write clean, efficient, and scalable Python code for backend systems. Integrate backend services with PostgreSQL databases. Assist in deploying applications using AWS Lambda and API Gateway . Collaborate with frontend developers, product managers, and designers to deliver features. Work with CI/CD pipelines and version control (Git). Write unit and integration tests to ensure software quality. Troubleshoot and debug applications in development and production environments. Must-Have Skills: Strong knowledge of Python programming fundamentals. Familiarity with Django and/or Flask frameworks. Understanding of RESTful API design principles. Basic experience with PostgreSQL or any relational database. Exposure to AWS Lambda , API Gateway , and CloudWatch . Understanding of JSON, HTTP, and request/response lifecycle. Ability to write clean and maintainable code. Nice-to-Have Skills: Basic experience with: Docker or virtual environments Serverless framework or deployment automation IAM roles, S3, DynamoDB , or other AWS services Familiarity with Git, GitHub, and Agile development practices. Interest in scalable architecture and cloud-native solutions. Educational Background: Pursuing or recently completed a Bachelor’s/Master’s in Computer Science , Information Technology, or related fields. What You’ll Gain: Real-world experience working with production-level codebases and cloud platforms. Mentorship and guidance from experienced software engineers. Exposure to a collaborative startup/tech company culture. Opportunity for a pre-placement offer (PPO) based on performance (for interns). Stipend: INR 10,000 - 15,000 Internship Duration: 2 Months How to Apply: Send your resume, GitHub profile (if available), and any relevant project work to careers@meraken.com with the subject: Application for Python Developer Intern/Fresher Role . Show more Show less

Posted 3 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

Are you ready for the next step in your engineering career? Would you enjoy working on our cutting-edge products? About The Team The Product Information Manager team oversees the organization's Product Information Management system, ensuring accurate, consistent, and accessible product data across business units. Their work enables efficient data integration and supports strategic business initiatives, helping the company leverage product data for improved sales and marketing of packaged products. About The Role This role involves designing, developing, and delivering advanced solutions for the Propel Product Information Management (PIM) application built on Salesforce. The role requires a combination of technical expertise, problem-solving skills, and leadership to ensure the delivery of high-quality, scalable solutions while providing production support and engaging with key stakeholders. Responsibilities Design, develop, and enhance Propel PIM application features on Salesforce, ensuring scalability and performance. Write high-quality, maintainable code and provide mentorship to junior engineers. Perform code reviews to ensure adherence to best practices and technical standards. Monitor and resolve production issues, ensuring minimal disruption and timely resolution. Conduct root cause analyses and implement preventive measures to enhance system reliability. Work closely with product managers and stakeholders to gather requirements and translate them into actionable technical designs. Conduct demos for stakeholders to showcase new features and gather feedback for improvements. Provide technical guidance and communicate solutions effectively to both technical and non-technical audiences. Operate in Agile environments and collaborate with cross-functional teams to deliver high-quality solutions on schedule. Keep abreast of Salesforce updates, AWS advancements, and PIM-related technologies to incorporate new tools and methodologies into development processes. Requirements Minimum three to four years of software engineering experience, with a strong focus on Salesforce development and enterprise-grade software solutions. BS in Engineering, Computer Science, or equivalent experience. Proven experience working on complex systems and delivering solutions in collaboration with product teams and stakeholders. Experience handling production support issues and resolving critical application challenges under tight deadlines. Advanced development experience with Apex, Lightning Web Components (LWC), Visualforce, and Salesforce integrations. Strong understanding of Salesforce configuration and administration tools (e.g., Flows, Process Builder, Validation Rules, Custom Objects). Experience with AppExchange applications and extending the Salesforce platform. Expertise in Apex, Java, Python, JavaScript, and SQL. Familiarity with HTML, XML, and other relevant front-end and back-end technologies Experience with AWS services such as Lambda, S3, DynamoDB, and API Gateway. Hands-on experience with integration tools, APIs (REST/SOAP), and middleware platforms. Expertise in Agile methodologies, test-driven development (TDD), and CI/CD pipelines. Proficiency in using version control systems like Git and deployment automation tools. Strong debugging and troubleshooting skills for handling production issues. Excellent communication skills for demos, technical discussions, and stakeholder engagements. Work in a Way that Works for You We promote a healthy work/life balance with numerous wellbeing initiatives, shared parental leave, study assistance, and sabbaticals to help you meet your immediate responsibilities and long-term goals. Working for You Comprehensive Health Insurance for you, your immediate family, and parents. Enhanced Health Insurance Options at competitive rates. Group Life Insurance for financial security. Group Accident Insurance for extra protection. Flexible Working Arrangement for a harmonious work-life balance. Employee Assistance Program for personal and work-related support. Medical Screening for your well-being. Modern Family Benefits include maternity, paternity, and adoption support. Long-Service Awards recognize dedication and commitment. New Baby Gift celebrating parenthood. Various Paid Time Off options including Casual Leave, Sick Leave, Privilege Leave, Compassionate Leave, Special Sick Leave, and Gazetted Public Holidays. About The Business LexisNexis Legal & Professional® provides legal, regulatory, and business information and analytics that help customers increase their productivity, improve decision-making, achieve better outcomes, and advance the rule of law around the world. As a digital pioneer, the company was the first to bring legal and business information online with its Lexis® and Nexis® services.________________________________________ Show more Show less

Posted 3 weeks ago

Apply

4.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

Overview Cvent is a leading meetings, events, and hospitality technology provider with more than 4,800 employees and ~22,000 customers worldwide, including 53% of the Fortune 500. Founded in 1999, Cvent delivers a comprehensive event marketing and management platform for marketers and event professionals and offers software solutions to hotels, special event venues and destinations to help them grow their group/MICE and corporate travel business. Our technology brings millions of people together at events around the world. In short, we’re transforming the meetings and events industry through innovative technology that powers the human connection. The DNA of Cvent is our people, and our culture has an emphasis on fostering intrapreneurship - a system that encourages Cventers to think and act like individual entrepreneurs and empowers them to take action, embrace risk, and make decisions as if they had founded the company themselves. At Cvent, we value the diverse perspectives that each individual brings. Whether working with a team of colleagues or with clients, we ensure that we foster a culture that celebrates differences and builds on shared connections. In This Role, You Will Work on Internet scale applications, where performance, reliability, scalability and security are critical design goals - not after-thoughts. Create intuitive, interactive and easy-to-use web applications using rich client-side and REST based server-side code. Implement the nuts and bolts of Microservices Architecture, Service-Oriented Architecture (SOA) and Event-Driven Architecture (EDA) in real-life applications. Gain experience with different database technologies, ranging from traditional relational to the latest NoSQL products such as Couchbase, AWS DynamoDB. Collaborate with some of the best engineers in the industry to work on complex Software as a Service (SaaS) based applications Primary Skills Here's What You Need: 4 to 7 years of Software Development experience in developing and shipping software Excellent troubleshooting skills Proven ability to work in a fast paced, agile environment and result oriented culture Hands-on programming experience with Java including Object Oriented Design Experience with RESTful Web Services and API development using Spring/Dropwizard or any other framework Experience in contributing to the architecture and design (Design Patterns, Non-Functional Requirements (NFRs) including Performance, Scalability, Reliability, Security) Experience with one or more of the databases: SQL Server, MySQL, PostgreSQL, Oracle, Couchbase, Cassandra, AWS DynamoDB or other NoSQL technologies Experience of working with Queuing technologies such as RabbitMQ/Kafka/Active MQ Preferred Skills Experience in full stack development ranging from front-end user interfaces to backend systems Experience/knowledge into JavaScript + Angular/React Js/Typescript, Graph Query Language (GQL) Experience of working with Elasticsearch/Solr Experience with Cloud Computing platforms like AWS/GCP/Azure Cloud Show more Show less

Posted 3 weeks ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

Job Title: Data Engineer – C10/Officer (India) The Role The Data Engineer is accountable for developing high quality data products to support the Bank’s regulatory requirements and data driven decision making. A Data Engineer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence they will contribute to business outcomes on an agile team. Responsibilities Developing and supporting scalable, extensible, and highly available data solutions Deliver on critical business priorities while ensuring alignment with the wider architectural vision Identify and help address potential risks in the data supply chain Follow and contribute to technical standards Design and develop analytical data models Required Qualifications & Work Experience First Class Degree in Engineering/Technology (4-year graduate course) 3 to 4 years’ experience implementing data-intensive solutions using agile methodologies Experience of relational databases and using SQL for data querying, transformation and manipulation Experience of modelling data for analytical consumers Ability to automate and streamline the build, test and deployment of data pipelines Experience in cloud native technologies and patterns A passion for learning new technologies, and a desire for personal growth, through self-study, formal classes, or on-the-job training Excellent communication and problem-solving skills T echnical Skills (Must Have) ETL: Hands on experience of building data pipelines. Proficiency in at least one of the data integration platforms such as Ab Initio, Apache Spark, Talend and Informatica Big Data: Exposure to ‘big data’ platforms such as Hadoop, Hive or Snowflake for data storage and processing Data Warehousing & Database Management: Understanding of Data Warehousing concepts, Relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design Data Modeling & Design: Good exposure to data modeling techniques; design, optimization and maintenance of data models and data structures Languages: Proficient in one or more programming languages commonly used in data engineering such as Python, Java or Scala DevOps: Exposure to concepts and enablers - CI/CD platforms, version control, automated quality control management Technical Skills (Valuable) Ab Initio: Experience developing Co>Op graphs; ability to tune for performance. Demonstrable knowledge across full suite of Ab Initio toolsets e.g., GDE, Express>IT, Data Profiler and Conduct>IT, Control>Center, Continuous>Flows Cloud: Good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc. Demonstratable understanding of underlying architectures and trade-offs Data Quality & Controls: Exposure to data validation, cleansing, enrichment and data controls Containerization: Fair understanding of containerization platforms like Docker, Kubernetes File Formats: Exposure in working on Event/File/Table Formats such as Avro, Parquet, Protobuf, Iceberg, Delta Others: Basics of Job scheduler like Autosys. Basics of Entitlement management Certification on any of the above topics would be an advantage. ------------------------------------------------------ Job Family Group: Technology ------------------------------------------------------ Job Family: Digital Software Engineering ------------------------------------------------------ Time Type: Full time ------------------------------------------------------ Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster. Show more Show less

Posted 3 weeks ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

The Data Engineer is accountable for developing high quality data products to support the Bank’s regulatory requirements and data driven decision making. A Data Engineer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence they will contribute to business outcomes on an agile team. Responsibilities Developing and supporting scalable, extensible, and highly available data solutions Deliver on critical business priorities while ensuring alignment with the wider architectural vision Identify and help address potential risks in the data supply chain Follow and contribute to technical standards Design and develop analytical data models Required Qualifications & Work Experience First Class Degree in Engineering/Technology (4-year graduate course) 3 to 4 years’ experience implementing data-intensive solutions using agile methodologies Experience of relational databases and using SQL for data querying, transformation and manipulation Experience of modelling data for analytical consumers Ability to automate and streamline the build, test and deployment of data pipelines Experience in cloud native technologies and patterns A passion for learning new technologies, and a desire for personal growth, through self-study, formal classes, or on-the-job training Excellent communication and problem-solving skills T echnical Skills (Must Have) ETL: Hands on experience of building data pipelines. Proficiency in at least one of the data integration platforms such as Ab Initio, Apache Spark, Talend and Informatica Big Data: Exposure to ‘big data’ platforms such as Hadoop, Hive or Snowflake for data storage and processing Data Warehousing & Database Management: Understanding of Data Warehousing concepts, Relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design Data Modeling & Design: Good exposure to data modeling techniques; design, optimization and maintenance of data models and data structures Languages: Proficient in one or more programming languages commonly used in data engineering such as Python, Java or Scala DevOps: Exposure to concepts and enablers - CI/CD platforms, version control, automated quality control management Technical Skills (Valuable) Ab Initio: Experience developing Co>Op graphs; ability to tune for performance. Demonstrable knowledge across full suite of Ab Initio toolsets e.g., GDE, Express>IT, Data Profiler and Conduct>IT, Control>Center, Continuous>Flows Cloud: Good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc. Demonstratable understanding of underlying architectures and trade-offs Data Quality & Controls: Exposure to data validation, cleansing, enrichment and data controls Containerization: Fair understanding of containerization platforms like Docker, Kubernetes File Formats: Exposure in working on Event/File/Table Formats such as Avro, Parquet, Protobuf, Iceberg, Delta Others: Basics of Job scheduler like Autosys. Basics of Entitlement management Certification on any of the above topics would be an advantage. ------------------------------------------------------ Job Family Group: Technology ------------------------------------------------------ Job Family: Digital Software Engineering ------------------------------------------------------ Time Type: Full time ------------------------------------------------------ Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster. Show more Show less

Posted 3 weeks ago

Apply

6.0 - 11.0 years

8 - 12 Lacs

Bengaluru

Work from Office

Naukri logo

Staff Site Reliability Engineer Bengaluru, India Okta s Workforce Identity Cloud Security Engineering group is looking for an experienced and passionate Staff Site Reliability Engineer to join a team focused on designing and developing Security solutions to harden our cloud infrastructure. We embrace innovation and pave the way to transform bright ideas into excellent security solutions that help run large-scale, critical infrastructure. We encourage you to prescribe defense-in-depth measures, industry security standards and enforce the principle of least privilege to help take our Security posture to the next level. Our Infrastructure Security team has a niche skill-set that balances Security domain expertise with the ability to design, implement, rollout infrastructure across multiple cloud environments without adding friction to product functionality or performance. We are responsible for the ever-growing need to improve our customer safety and privacy by providing security services that are coupled with the core Okta product. This is a high-impact role in a security-centric, fast-paced organization that is poised for massive growth and success. You will act as a liaison between the Security org and the Engineering org to build technical leverage and influence the security roadmap. You will focus on engineering security aspects of the systems used across our services. Join us and be part of a company that is about to change the cloud computing landscape forever. As a Staff Engineer, you should be able to identify gaps, propose innovative solutions, and contribute to roadmaps while driving alignment across multiple teams within the organization. Additionally, you should serve as a role model, providing technical mentorship to junior team members and fostering a culture of learning and growth. You will work on: Designing, building, running, and monitoring Okta's production infrastructure Be an evangelist for security best practices and also lead initiatives/projects to strengthen our security posture for critical infrastructure Responding to production incidents and determining how we can prevent them in the future Triaging and troubleshooting complex production issues to ensure reliability and performance Identifying and automating manual processes Continuously evolving our monitoring tools and platform Promoting and applying best practices for building scalable and reliable services across engineeringDeveloping and maintaining technical documentation, runbooks, and procedures Supporting a 24x7 online environment as part of an on-call rotation Be a technical SME for a team that designs and builds Okta's production infrastructure, focusing on security at scale in the cloud. You are an ideal candidate if you: Are always willing to go the extra mile: see a problem, fix the problem. Are passionate about encouraging the development of engineering peers and leading by example. Have experience automating, securing, and running large-scale production IAM and containerized services in AWS (EC2, ECS, KMS, Kinesis, RDS), GCP (GKE, GCE) or other cloud providers. Have deep knowledge of CI/CD principles, Linux fundamentals, OS hardening, networking concepts, and IP protocols. Have a deep understanding and familiarity with configuration management tools like Chef and Terraform. Have expert-level abilities in operational tooling languages such as Ruby, Python, Go and shell, and use of source control. Experience with industry-standard security tools like Nessus, Qualys, OSQuery, Splunk, etc. Experience with Public Key Infrastructure (PKI) and secrets management Bonus points for: Experience conducting threat assessments, and assessing vulnerabilities in a high-availability setting. Understand MySQL, including replication and clustering strategies, and are familiar with data stores such as DynamoDB, Redis, and Elasticsearch. Minimum Required Knowledge, Skills, Abilities, and Qualities: 6+ years of experience architecting and running complex AWS or other cloud networking infrastructure resources 6+ years of experience with Chef and Terraform Unflappable troubleshooting skills Proven experience in collaborating across teams to deliver complex horizontal projects Strong Linux understanding and experience. Strong security background and knowledge. BS In computer science (or equivalent experience).

Posted 3 weeks ago

Apply

4.0 years

0 Lacs

Kochi, Kerala, India

On-site

Linkedin logo

Introduction In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology. Your Role And Responsibilities As Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using Spark Framework with Python or Scala on Hadoop and AWS Cloud Data Platform Responsibilities Experienced in building data pipelines to Ingest, process, and transform data from files, streams and databases. Process the data with Spark, Python, PySpark, Scala, and Hive, Hbase or other NoSQL databases on Cloud Data Platforms (AWS) or HDFS Experienced in develop efficient software code for multiple use cases leveraging Spark Framework / using Python or Scala and Big Data technologies for various use cases built on the platform Experience in developing streaming pipelines Experience to work with Hadoop / AWS eco system components to implement scalable solutions to meet the ever-increasing data volumes, using big data/cloud technologies Apache Spark, Kafka, any Cloud computing etc Preferred Education Bachelor's Degree Required Technical And Professional Expertise Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark / Python or Scala ; Minimum 3 years of experience on Cloud Data Platforms on AWS; Experience in AWS EMR / AWS Glue / DataBricks, AWS RedShift, DynamoDB Good to excellent SQL skills Exposure to streaming solutions and message brokers like Kafka technologies Preferred Technical And Professional Experience Certification in AWS and Data Bricks or Cloudera Spark Certified developers Show more Show less

Posted 3 weeks ago

Apply

6.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Linkedin logo

Job Summary Rust Support Engineer Candidate required to have Knowledge of Rust programming language Candidate should have Strong handson experience with AWS serverless technologiesLambda API Gateway DynamoDB S3 etc Good to have Experience with Predict Spring or any other Cloud POS solutions Candidate should have experience in Troubleshooting issues in a production environment Strong verbal and written communication skills Responsibilities 6 years of experience in production support Identify diagnose and resolve issues in RUST production environment Implement monitoring and logging solutions to proactively detect and address potential problems Develop highperformance secure and efficient applications using Rust Write clean maintainable and welldocumented code in Rust Communicate effectively with team members stakeholders and clients to understand requirements and provide updates Document technical specifications processes and solutions clearly and conciselyParticipate in code reviews team meetings and knowledgesharing sessions to foster a culture of continuous improvement Good to have Has worked on any cloud POS Integrate and enhance pointofsale functionalities using Predict Spring or other Cloud POS solutions Collaborate with crossfunctional teams to ensure seamless integration and operation of POS systems Show more Show less

Posted 3 weeks ago

Apply

4.0 - 6.0 years

6 - 8 Lacs

Hyderabad

Work from Office

Naukri logo

Manager Information Systems What you will do In this vital role you will responsible for designing, developing, and maintaining software applications and solutions that meet business needs and ensuring the availability and performance of critical systems and applications. This role involves working closely with product managers, designers, and other engineers to create high-quality, scalable software solutions and automating operations, monitoring system health, and responding to incidents to minimize downtime. Roles & Responsibilities: Lead the design, configure and deployment of the validation systems (such as Veeva Validation Management, ALM, and KNEAT) ensuring scalability, maintainability, and performance optimization. Implement standard methodologies, conduct code review and provide technical guidance, mentorship to junior developers. Take ownership of complex software projects from conception to deployment. Manage software delivery scope, risk, and timeline Collaborate closely with business collaborators to discuss requirements, ensuring alignment between technical capabilities and business objectives, while effectively communicating any technical limitations or constraints. Contribute to both front-end and back-end development using cloud technology. Create and maintain documentation on software architecture, design, deployment, disaster recovery, and operations. Ensure all documents are compliant with 21 CFR Part 11, Annex 11, and other relevant regulations. Identify and resolve technical challenges effectively. Stay updated with the latest trends and advancements. Work with Product Owners, Service Owners and/or delivery teams to ensure that delivery matches commitments, acting as a partner concern point and facilitating communication when service commitments are not met. Work closely with multi-functional teams, including product management, design, and QA, to deliver high-quality software on time. Develop talent, motivate the team, delegate effectively, champion diversity within the team and act as a role model of servant leadership. What we expect of you We are all different, yet we all use our unique contributions to serve patients. Basic Qualifications: Masters degree and 4 to 6 years of related field experience OR Bachelors degree and 6 to 8 years of related field experience OR Diploma and 10 to 12 years of related field experience Solid technical background, including understanding software development processes, databases, and cloud-based systems. Experience configuring the SaaS systems such as Veeva or ALM. Experience with Document management system, Service now suite, ALM, JIRA, Veeva Platform, GenAI. Agile/Scrum experience with demonstrated success managing product backlogs and delivering iterative product improvements. Preferred Qualifications: Understanding of Veeva Quality Vault/ALM/KNEAT. Curiosity of modern technology domain and learning agility Experience with the following technologies Veeva Vault, MuleSoft, AWS (Amazon Web Services) Services (DynamoDB, EC2, S3, etc.), Application Programming Interface (API) integration and Structured Query Language (SQL) will be a big plus. Excellent communication skills, with the ability to convey complex technical concepts. As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, well support your journey every step of the way.

Posted 3 weeks ago

Apply

0.0 - 5.0 years

3 - 6 Lacs

Hyderabad

Work from Office

Naukri logo

Associate IS Bus Sys Analyst What you will do In this vital role you will we are seeking a highly skilled Associate IS Bus Sys Analyst to join our team. You will be responsible for designing, developing, and maintaining software applications and solutions that meet business needs and ensuring the availability and performance of critical systems and applications. This role involves working closely with product managers, designers, and other engineers to create high-quality, scalable software solutions and automating operations, monitoring system health, and responding to incidents to minimize downtime. Roles & Responsibilities: Maintain existing code and configurationSupport and maintain SaaS applications Development & DeploymentDevelop, test, and deploy code based on designs created with the guidance of senior team members. Implement solutions following standard methodologies for code structure and efficiency. DocumentationGenerate clear and concise code documentation for new and existing features to ensure smooth handovers and easy future reference. Collaborative DesignWork closely with team members and collaborators to understand project requirements and translate them into functional technical designs. Code Reviews & Quality AssuranceParticipate in peer code reviews, providing feedback on adherence to standard methodologies, and ensuring high code quality and maintainability. Testing & DebuggingAssist in writing unit and integration tests to validate new features and functionalities. Support fix and debugging efforts for existing systems to resolve bugs and performance issues. Perform application support and administration tasks such as periodic review, manage incident response and resolution, and security reviews. Continuous LearningStay up-to-date with the newest technologies and standard methodologies, with a focus on expanding knowledge in cloud services, automation, and secure software development. What we expect of you Must-Have Skills: Solid technical background, including understanding software development processes, databases, and cloud-based systems. Experience configuring SaaS applications (such as Veeva). Experience working with databases (SQL/NoSQL). Strong foundational knowledge of testing methodologies. Good-to-Have Skills: Understanding of Veeva Quality Vault/ALM/KNEAT. Curiosity of modern technology domain and learning agility Experience with the following technologies Veeva Vault, MuleSoft, AWS (Amazon Web Services) Services (DynamoDB, EC2, S3, etc.), Application Programming Interface (API) integration and Structured Query Language (SQL) will be a big plus. Superb communication skills, with the ability to convey complex technical concepts. Qualification: Bachelors Degree and 0to 3 years of experience in software development processes, databases, and cloud-based systems Diploma and 4 to 7 years of experience in software development processes, databases, and cloud-based systems

Posted 3 weeks ago

Apply

1.0 - 6.0 years

3 - 7 Lacs

Hyderabad

Work from Office

Naukri logo

What you will do As a Business Intelligence Engineer, you will solve unique and complex problems at a rapid pace, utilizing the latest technologies to create solutions that are highly scalable. This role involves working closely with product managers, designers, and other engineers to create high-quality, scalable solutions and responding to requests for rapid releases of analytical outcomes. Design, develop, and maintain interactive dashboards, reports, and data visualizations using BI tools (e.g., Power BI, Tableau, Cognos, others). Analyse datasets to identify trends, patterns, and insights that inform business strategy and decision-making. Partner with leaders and stakeholders across Finance, Sales, Customer Success, Marketing, Product, and other departments to understand their data and reporting requirements. Stay abreast of the latest trends and technologies in business intelligence and data analytics, inclusive of AI use in BI. Elicit and document clear and comprehensive business requirements for BI solutions, translating business needs into technical specifications and solutions. Collaborate with Data Engineers to ensure efficient up-system transformations and create data models/views that will hydrate accurate and reliable BI reporting. Contribute to data quality and governance efforts to ensure the accuracy and consistency of BI data. What we expect of you Basic Qualifications: Masters degree and 1 to 3 years of Computer Science, IT or related field experience OR Bachelors degree and 3 to 5 years of Computer Science, IT or related field experience OR Diploma and 7 to 9 years of Computer Science, IT or related field experience Functional Skills: 1+ years of analyzing and interpreting data with Redshift, Oracle, NoSQL etc. experience Experience with data visualization using Tableau, Quicksight, or similar tools Experience with data modeling, warehousing and building ETL pipelines Experience using SQL to pull data from a database or data warehouse and scripting experience (Python) to process data for modeling Preferred Qualifications: Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift Experience in data mining, ETL, etc. and using databases in a business environment with large-scale, complex datasets AWS Developer certification (preferred) Soft Skills: Excellent analytical and troubleshooting skills Strong verbal and written communication skills Ability to work effectively with global, virtual teams High degree of initiative and self-motivation Ability to manage multiple priorities successfully Team-oriented, with a focus on achieving team goals Strong presentation and public speaking skills

Posted 3 weeks ago

Apply

6.0 years

0 Lacs

India

Remote

Linkedin logo

Job Title: Java developer Location: Remote India Must have-Java + Angular + HTML5, CSS, Agile implementation Experience: 6+ years Qualifications: Minimum of Bachelor's Degree in Computer Science / Software Engineering or equivalent degree from a four- year college or university with minimum 3.0 GPA with 2-4 years of related work experience Work experience with Java, JavaScript, Angular, TypeScript, HTML, CSS, Sass, XML, SQL Ability to create simple and well-designed solutions to complex software problems Dedication to excellence and championship work ethic Knowledge of internet client/server technologies and experience working building enterprise single page web applications • Knowledge of PC and web based software testing in both client and server sides Team-player mindset with strong communication and problem solving skills What would put you above everyone else: Experience with the following • Multi-layered Software Architectures • Agile Development Methodology • Spring MVC • Apache Maven • Apache Tomcat • Angular • TypeScript • Sass • Node/npm • REST • Junit • Jasmine • Relational databases (PostgreSQL, etc.) NoSQL databases (MongoDB, Cassandra, Amazon DynamoDB, Amazon Aurora PostgreSQL, etc.) • AWS Lambda • Jira • Bitbucket • Concourse Show more Show less

Posted 3 weeks ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

𝐊𝐞𝐲 𝐑𝐞𝐬𝐩𝐨𝐧𝐬𝐢𝐛𝐢𝐥𝐢𝐭𝐢𝐞𝐬: Assist in the design, development, and deployment of cloud-native applications using AWS services (e.g., Lambda, S3, DynamoDB, EC2, CloudFormation). Work under senior developers/architects to implement solutions aligned with cloud best practices. Write clean, scalable, and secure code in Python/JavaScript/Java (based on project requirements). Contribute to CI/CD pipelines using tools like Code Pipeline, Code Build, and GitHub Actions. Monitor and troubleshoot cloud workloads using AWS CloudWatch, X-Ray, and logging tools. Participate in cloud migration, data pipeline development, or ML model deployment (based on your specialization). Document processes, code, and architectural decisions. 𝐑𝐞𝐪𝐮𝐢𝐫𝐞𝐦𝐞𝐧𝐭𝐬: B.Tech/M.Tech in Computer Science, IT, Engineering, or related field. AWS Specialty Certification (required), any one or more of the following Security – Specialty Data Analytics – Specialty Machine Learning – Specialty Database – Specialty Strong understanding of AWS core services and basic hands-on experience (projects, labs, internships). Proficiency in at least one programming language (e.g., Python, JavaScript, Java). Familiarity with Git and agile development practices. Eagerness to learn, collaborate, and work in a cloud-native environment. 𝐆𝐨𝐨𝐝 𝐭𝐨 𝐇𝐚𝐯𝐞: Experience with AWS Free Tier or sandbox projects. Knowledge of Terraform or AWS CDK. Exposure to containerization (Docker, ECS, or EKS). Participation in cloud-related hackathons or open-source projects. Skills:- AWS Lambda, AWS CloudFormation, AWS Simple Notification Service (SNS), AWS RDS, Python, Java, CI/CD, Docker and DevOps Show more Show less

Posted 3 weeks ago

Apply

2.0 - 8.0 years

0 Lacs

Gurugram, Haryana, India

Remote

Linkedin logo

🚀 We’re Hiring! Full Stack Developers / Senior Developers (2-8 yrs exp) 👩‍💻👨‍💻 👉 Work with cutting-edge cloud & serverless tech 🌩️ 👉 Build products that empower our internal & external sales teams with real-time insights 📊 📝 Role: Full Stack Developer / Senior Developer 📍 Location: India (Remote / Hybrid / Onsite based on role) 👥 Experience: 2-8 years 🎯 What You’ll Do: 💻 Build and deploy end-to-end scalable software solutions 🌐 Develop highly responsive web apps using modern stacks 🔄 Optimize applications for scalability and performance 🚀 Implement cloud-first, serverless solutions on AWS 🔍 Participate in code reviews , mentor team members 🐞 Troubleshoot and resolve complex bugs quickly 🎨 Build frameworks that can be adopted by other SaaS apps 🤝 Collaborate with Product Managers, Architects & SMEs 🛠️ Tech Stack You’ll Work With: Mandatory: ✅ Node.js / Python / Go ✅ MySQL / MongoDB / DynamoDB / Redis ✅ RabbitMQ / Kafka ✅ Serverless stack on cloud (AWS) ✅ Kubernetes / Docker Additional: ✨ WebSockets, Messaging Queues (SQS, Kafka), HTML/CSS/JavaScript ✨ Angular (Bonus) ✨ GitHub Copilot or AI code generation tools (Bonus) 🌟 What We Expect: ✅ Strong grasp of Data Structures & Algorithms ✅ Expertise in both Functional & Object-Oriented Programming ✅ Experience in CI/CD Automations ✅ Passion for writing clean, high-quality, testable code ✅ Proactive mindset with continuous improvement spirit ✅ Strong problem-solving & analytical skills 🚀 Why Join Us? 🌍 Work on impactful products used by internal and external partners 🤝 Collaborate with a high-performing tech team 📈 Enjoy career growth , mentorship, and learning opportunities 🧠 Be part of a culture of continuous feedback & innovation ☁️ Opportunity to work with the latest Cloud & Serverless tech 📢 Ready to take the next step in your Full Stack journey? 👉 Apply now or tag someone who would be a great fit! 👉 DM us if you’d like to know more. #FullStackDeveloper #NodeJS #Python #GoLang #Kubernetes #Docker #Serverless #AWSCloud #MongoDB #DynamoDB #Kafka #RabbitMQ #HiringNow #TechCareersIndia #DeveloperJobs #JoinOurTeam 🚀 Show more Show less

Posted 3 weeks ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

Our Company Changing the world through digital experiences is what Adobe’s all about. We give everyone—from emerging artists to global brands—everything they need to design and deliver exceptional digital experiences! We’re passionate about empowering people to create beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen. We’re on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours! Key Responsibilities Platform Development and Evangelism: Build scalable AI platforms that are customer-facing. Evangelize the platform with customers and internal stakeholders. Ensure platform scalability, reliability, and performance to meet business needs.  Machine Learning Pipeline Design: Design ML pipelines for experiment management, model management, feature management, and model retraining. Implement A/B testing of models. Design APIs for model inferencing at scale. Proven expertise with MLflow, SageMaker, Vertex AI, and Azure AI. LLM Serving and GPU Architecture: Serve as an SME in LLM serving paradigms. Possess deep knowledge of GPU architectures. Expertise in distributed training and serving of large language models. Proficient in model and data parallel training using frameworks like DeepSpeed and service frameworks like vLLM. Model Fine-Tuning and Optimization: Demonstrate proven expertise in model fine-tuning and optimization techniques. Achieve better latencies and accuracies in model results. Reduce training and resource requirements for fine-tuning LLM and LVM models. LLM Models and Use Cases: Have extensive knowledge of different LLM models. Provide insights on the applicability of each model based on use cases. Proven experience in delivering end-to-end solutions from engineering to production for specific customer use cases. DevOps and LLMOps Proficiency: Proven expertise in DevOps and LLMOps practices. Knowledgeable in Kubernetes, Docker, and container orchestration. Deep understanding of LLM orchestration frameworks like Flowise, Langflow, and Langgraph. Skill Matrix LLM: Hugging Face OSS LLMs, GPT, Gemini, Claude, Mixtral, Llama LLM Ops: ML Flow, Langchain, Langraph, LangFlow, Flowise, LLamaIndex, SageMaker, AWS Bedrock, Vertex AI, Azure AI Databases/Datawarehouse: DynamoDB, Cosmos, MongoDB, RDS, MySQL, PostGreSQL, Aurora, Spanner, Google BigQuery. Cloud Knowledge: AWS/Azure/GCP Dev Ops (Knowledge): Kubernetes, Docker, FluentD, Kibana, Grafana, Prometheus Cloud Certifications (Bonus): AWS Professional Solution Architect, AWS Machine Learning Specialty, Azure Solutions Architect Expert Proficient in Python, SQL, Javascript Adobe is proud to be an Equal Employment Opportunity employer. We do not discriminate based on gender, race or color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. Learn more. Adobe aims to make Adobe.com accessible to any and all users. If you have a disability or special need that requires accommodation to navigate our website or complete the application process, email accommodations@adobe.com or call (408) 536-3015. Show more Show less

Posted 3 weeks ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

About Crunchyroll Founded by fans, Crunchyroll delivers the art and culture of anime to a passionate community. We super-serve over 100 million anime and manga fans across 200+ countries and territories, and help them connect with the stories and characters they crave. Whether that experience is online or in-person, streaming video, theatrical, games, merchandise, events and more, it’s powered by the anime content we all love. Join our team, and help us shape the future of anime! About the role: As a Senior Software Engineer on our User Experience Engineering support team, you will contribute to the design, development and optimization of our internal UX support tools. You will take ownership of key features and improvements, ensuring high-quality code and performance. You'll collaborate with Engineering, Program Management, Product, and QA teams across the globe to help shape our technology roadmap and achieve our goals. You'll be a part of an international team of 100+ client engineers, where your contributions will help maintain Crunchyroll's position as the premiere Anime streaming service. Responsibilities: Design, develop, and maintain both frontend and backend components of our user experience tools. Lead design and architectural discussions and make critical decisions regarding system design. Write clean, efficient, and well-documented code. Conduct code reviews and provide constructive feedback to team members. Troubleshoot and resolve complex technical issues. Mentor and provide guidance to junior and mid-level engineers. Collaborate with product managers, designers, and other stakeholders to define project requirements. Ensure the application's performance, scalability, and security. Required Skills: 6+ years of experience in software development. Extensive experience with JavaScript and TypeScript. Proven expertise in front-end, back-end, or full-stack development. Experience with backend development using Node.js and serverless architectures. Proficiency in writing unit and integration tests. Nice to Have: Experience with AWS services (Lambda, DynamoDB, S3, API Gateway, CloudFront). Knowledge of serverless architectures. Knowledge of Go programming language. Experience with SDET practices. About Our Values We want to be everything for someone rather than something for everyone and we do this by living and modeling our values in all that we do. We value Courage. We believe that when we overcome fear, we enable our best selves. Curiosity. We are curious, which is the gateway to empathy, inclusion, and understanding. Kaizen. We have a growth mindset committed to constant forward progress. Service. We serve our community with humility, enabling joy and belonging for others. Our commitment to diversity and inclusion Our mission of helping people belong reflects our commitment to diversity & inclusion. It's just the way we do business. We are an equal opportunity employer and value diversity at Crunchyroll. Pursuant to applicable law, we do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. Crunchyroll, LLC is an independently operated joint venture between US-based Sony Pictures Entertainment, and Japan's Aniplex, a subsidiary of Sony Music Entertainment (Japan) Inc., both subsidiaries of Tokyo-based Sony Group Corporation. Questions about Crunchyroll’s hiring process? Please check out our Hiring FAQs: https://help.crunchyroll.com/hc/en-us/articles/360040471712-Crunchyroll-Hiring-FAQs Please refer to our Candidate Privacy Policy for more information about how we process your personal information, and your data protection rights: https://tbcdn.talentbrew.com/company/22978/v1_0/docs/spe-jobs-privacy-policy-update-for-crpa-dec-21-22.pdf Please beware of recent scams to online job seekers. Those applying to our job openings will only be contacted directly from @crunchyroll.com email account. Show more Show less

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

As an Engineer Senior, Cloud Development, you'll be a key member of our Product Development Division who will focus on cloud services and IoT aspects of Shure products. As a growing team, we will consider engineers of various experience levels who are ready to change the future of audio! Responsibilities Design, develop and test software to be deployed in a cloud environment for managing Shure Cloud ecosystem applications Work as part of a cross-functional development team to design and implement cutting edge audio products and technologies Estimate, organize, and document tasks Practice established software development methodologies and principles. Continuous integration approach, focusing on separation of concerns, reusability, maintainability, minimized complexity, high cohesion and low coupling Model designs using UML and related methods; develop use cases to model real-time systems behavior Review the design and code developed by peer engineers More senior engineers will mentor junior and entry-level staff members Other duties as assigned Qualifications Demonstrated understanding of cloud software architectures A minimum of 5 years of experience is required. Demonstrated understanding of software design, analysis and programming using Node.js and Typescript/JavaScript SOA Experience defining, implementing, integrating and testing RESTful APIs SOA Experience on API integration is must Strong Experience working UI technologies. Angular preferred. Experience with CI/CD methodoligies. Experience with AWS services (APIGateway, DynamoDb, S3, Kinesis) Experience developing software in a serverless cloud environment (AWS preferable) Demonstrated expertise with debugging / performance profiling Experience with software version control and release Demonstrated attention to detail and ability to analyze complex interdependent variables Demonstrated verbal and written communication skills Ability to work effectively within a team environment and lead junior engineers Quality consciousness Minimum 3 years of cloud software development experience (AWS preferred) Preferably BS degree in Computer Science, Computer Engineering , or Electrical Engineering Who We Are Shure’s mission is to be the most trusted audio brand worldwide – and for nearly a century, our Core Values have aligned us to be just that. Founded in 1925, we are a leading global manufacturer of audio equipment known for quality, reliability, and durability. We engineer microphones, headphones, wireless audio systems, conferencing systems, and more. And quality doesn’t stop at our products. Our talented teams strive for perfection and innovate every chance they get. We offer an Associate-first culture, flexible work arrangements, and opportunity for all. Shure is headquartered in United States. We have more than 35 regional sales offices, engineering hubs, and manufacturing facilities throughout the Americas, EMEA, and Asia. THE MIX MATTERS Don’t check off every box in the job requirements? No problem! We recognize that every professional journey is unique and are committed to providing an equitable candidate experience for all prospective Shure Associates. If you’re excited about this role, believe you’ve got the skills to be successful, and share our passion for creating an inclusive, diverse, equitable, and accessible work environment, then apply! Show more Show less

Posted 3 weeks ago

Apply

5.0 - 8.0 years

8 - 10 Lacs

Hyderabad

Work from Office

Naukri logo

Position Overview The role of the QA & Testing Lead Analyst will provide critical support in system development across broader Pharmacy Quality Engineering organization, influencing Operations and Technology Product Management. This role will provide expertise in the engineering, design, installation and start-up of automated systems. As a member of our team, you will work in a high performance, high frequency, enterprise technology environment. This role works closely with client, IT management and staff to identify automated solutions, new or modified systems, reuse of existing machinery/equipment, or integration of purchased solutions or a combination of the available alternatives. The Automation Engineer Lead Analyst supports the organization in conceiving, planning, and delivering initiatives and uses deep professional knowledge and acumen to advise functional leaders. Responsibilities: Write test strategy and test case documents derived from user stories for one or more features. Test cases should include positive and negative scenarios, test data setup/configuration and expected results. Create test data files containing valid and invalid records to thoroughly test system logic and verify system flow. Collaborate with users to plan and execute end-to-end testing and user acceptance testing (UAT). Ensure successful completion and documentation of system tests, resolving any issues encountered. Contribute to other testing activities such as stress, load, and performance testing where required. Utilize Enterprise Zephyr to document and implement comprehensive test cases covering various application capabilities. Work on Test Automation design and test automation framework and scripts development using Java, Selenium, Test NG, Cucumber, Playwright with TypeScript, JavaScript. Utilize SQL skills to validate changes in backend database systems, including Oracle, MongoDB, and PostgreSQL. Provide estimates for testing effort-based user stories as part of sprint planning. Contribute and participate in other Agile scrum activities such as daily standups, backlog grooming, demos, and retrospectives. Ensure the best possible performance, quality, and responsiveness of the applications Help maintain code quality, organization, and automation Able to work on projects individually and directly with clients. Qualifications Required Skills: Testing types System Integration Testing Functional Testing Regression Testing E2E Testing Performance Testing experience (preferred) Technical Experience in testing UI, API and batch testing Test Automation design and hands on test automation framework and scripts development using Java, Selenium, Test NG, Cucumber, Playwright with TypeScript, JavaScript Strong SQL queries experience Integrating test automation in CI/CD pipeline Oracle, MongoDB, and PostgreSQL Enterprise Zephyr, Jira experience Technology Stack: Web and API applications QA Tester, Zephyr testing experience, excellent documentation skills, strong analytical skills, strong SQL experience. Test Automation design and hands on test automation framework and scripts development using Java, Selenium, Test NG, Cucumber, Playwright with TypeScript, JavaScript. Proven experience as a QA Tester or similar role, preferably in web and API application testing. Expertise in using Zephyr for test management and JIRA in Agile team environments. Strong analytical skills with the ability to quickly grasp complex system requirements. Proficiency in SQL with solid RDBMS querying capabilities, particularly Oracle, MongoDB, and PostgreSQL. Testing and triaging of defects and issues. Knowledge of defect tracking / task tools such as Jira and Confluence. Knowledge of build automation and deployment tools such as Jenkins as well as source code repository tools such as Git. Strong written and verbal communication skills with the ability to interact with all levels of the organization. Strong influencing/negotiation skills. Strong interpersonal/relationship skills. Familiarity with agile methodology. Experience within Agile development environment (Sprint planning, demos and retrospectives, and other sprint ceremonies). Familiarity with modern delivery practices such as continuous integration, behavior/test driven development, and specification by example. Required Experience & Education: College or University degree in Computer Science or a related discipline Minimum 5-8 years of work experience in software automation testing and quality engineering Preferred Experience / Qualifications: Healthcare domain knowledge Mobile Automation experience Knowledge in JavaScript and TypeScript programming languages Exposure to AWS services (DynamoDB, S3 Buckets, Lambdas, etc.) Excellent written and verbal communication skills Solid analytical skills, highly organized, self-motivated and a quick learner Flexible and willing to accept change in priorities as necessary

Posted 3 weeks ago

Apply

4.0 - 6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Zenoti provides an all-in-one, cloud-based software solution for the beauty and wellness industry. Our solution allows users to seamlessly manage every aspect of the business in a comprehensive mobile solution: online appointment bookings, POS, CRM, employee management, inventory management, built-in marketing programs and more. Zenoti helps clients streamline their systems and reduce costs, while simultaneously improving customer retention and spending. Our platform is engineered for reliability and scale and harnesses the power of enterprise-level technology for businesses of all sizes Zenoti powers more than 30,000 salons, spas, medspas and fitness studios in over 50 countries. This includes a vast portfolio of global brands, such as European Wax Center, Hand & Stone, Massage Heights, Rush Hair & Beauty, Sono Bello, Profile by Sanford, Hair Cuttery, CorePower Yoga and TONI&GUY. Our recent accomplishments include surpassing a $1 billion unicorn valuation, being named Next Tech Titan by GeekWire, raising an $80 million investment from TPG, ranking as the 316th fastest-growing company in North America on Deloitte’s 2020 Technology Fast 500™. We are also proud to be recognized as a Great Place to Work CertifiedTM for 2021-2022 as this reaffirms our commitment to empowering people to feel good and find their greatness. To learn more about Zenoti visit: https://www.zenoti.com What's the Opportunity? We're looking for a Senior Software Engineer (Python/AWS) to join our innovative team! This is a fantastic chance to make a significant impact by designing and building highly scalable and reliable software systems on AWS. You'll be working on services that make Zenoti a go-to SaaS provider for the beauty, wellness and fitness industry. If you're passionate about elegant software design, cloud-native solutions, and embracing AI-powered development tools like Cursor IDE and GitHub Copilot to boost your productivity, this role is for you! What will I be doing? As a Senior Software Engineer, you'll be a key player in our engineering lifecycle. You'll: Design and architect robust, scalable, and maintainable software systems, translating business needs into technical designs. Build with Python: Write, test, and deploy high-quality Python code for our production systems, focusing on clean, efficient, and well-documented solutions. Leverage AWS: Work hands-on with various AWS services (e.g., EC2, S3, Lambda, API Gateway, DynamoDB, RDS) to build and manage our cloud infrastructure. Boost productivity with AI Tools: Actively utilize AI-powered development tools like Cursor IDE, GitHub Copilot, and other intelligent assistants to accelerate development and improve code quality. Optimize and troubleshoot: Identify and resolve complex technical challenges, optimize system performance, and ensure the overall health of our applications. Collaborate: Work closely with product managers and fellow engineers in an Agile environment, contributing to our collective growth. What skills do I need? Bachelor's degree in Computer Science or IT, 4 to 6 years of overall experience as an AI Engineer or Data Scientist. Having minimum of 4 years experience in Python and AWS. Exposure in designing and building scalable solutions using Cloud technologies. Benefits Attractive Compensation Comprehensive medical coverage for yourself and your immediate family An environment where wellbeing is high on priority – access to regular yoga, meditation, breathwork, nutrition counseling, stress management, inclusion of family for most benefit awareness building sessions Opportunities to be a part of a community and give back: Social activities are part of our culture; You can look forward to regular engagement, social work, community give-back initiatives" Zenoti provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws. This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation and training. Show more Show less

Posted 3 weeks ago

Apply

5.0 - 10.0 years

0 Lacs

India

On-site

Linkedin logo

About Oportun Oportun (Nasdaq: OPRT) is a mission-driven fintech that puts its 2.0 million members' financial goals within reach. With intelligent borrowing, savings, and budgeting capabilities, Oportun empowers members with the confidence to build a better financial future. Since inception, Oportun has provided more than $16.6 billion in responsible and affordable credit, saved its members more than $2.4 billion in interest and fees, and helped its members save an average of more than $1,800 annually. Oportun has been certified as a Community Development Financial Institution (CDFI) since 2009. WORKING AT OPORTUN Working at Oportun means enjoying a differentiated experience of being part of a team that fosters a diverse, equitable and inclusive culture where we all feel a sense of belonging and are encouraged to share our perspectives. This inclusive culture is directly connected to our organization's performance and ability to fulfill our mission of delivering affordable credit to those left out of the financial mainstream. We celebrate and nurture our inclusive culture through our employee resource groups. Company Overview At Oportun, we are on a mission to foster financial inclusion for all by providing affordable and responsible lending solutions to underserved communities. As a purpose-driven financial technology company, we believe in empowering our customers with access to responsible credit that can positively transform their lives. Our relentless commitment to innovation and data-driven practices has positioned us as a leader in the industry, and we are actively seeking exceptional individuals to join our team as Senior Software Engineer to play a critical role in driving positive change. Position Overview We are seeking a highly skilled Platform Engineer with expertise in building self-serve platforms that combine real-time ML deployment and advanced data engineering capabilities. This role requires a blend of cloud-native platform engineering, data pipeline development, and deployment expertise. The ideal candidate will have a strong background in implementing data workflows, building platforms to enable self-serve for ML pipelines while enabling seamless deployments. Responsibilities Platform Engineering Design and build self-serve platforms that support real-time ML deployment and robust data engineering workflows. Create APIs and backend services using Python and FastAPI to manage and monitor ML workflows and data pipelines. Real-Time ML Deployment Implement platforms for real-time ML inference using tools like AWS SageMaker and Databricks. Enable model versioning, monitoring, and lifecycle management with observability tools such as New Relic. Data Engineering Build and optimise ETL/ELT pipelines for data preprocessing, transformation, and storage using PySpark and Pandas. Develop and manage feature stores to ensure consistent, high-quality data for ML model training and deployment. Design scalable, distributed data pipelines on platforms like AWS, integrating tools such as DynamoDB, PostgreSQL, MongoDB, and MariaDB. CI/CD and Automation Use CI/CD pipelines using Jenkins, GitHub Actions, and other tools for automated deployments and testing. Automate data validation and monitoring processes to ensure high-quality and consistent data workflows. Documentation and Collaboration Create and maintain detailed technical documentation, including high-level and low-level architecture designs. Collaborate with cross-functional teams to gather requirements and deliver solutions that align with business goals. Participate in Agile processes such as sprint planning, daily standups, and retrospectives using tools like Jira. Experience Required Qualifications 5-10 years experience in IT 5-8 years experience in platform backend engineering 1 year experience in DevOps & data engineering roles. Hands-on experience with real-time ML model deployment and data engineering workflows. Technical Skills Strong expertise in Python and experience with Pandas, PySpark, and FastAPI. Proficiency in container orchestration tools such as Kubernetes (K8s) and Docker. Advanced knowledge of AWS services like SageMaker, Lambda, DynamoDB, EC2, and S3. Proven experience building and optimizing distributed data pipelines using Databricks and PySpark. Solid understanding of databases such as MongoDB, DynamoDB, MariaDB, and PostgreSQL. Proficiency with CI/CD tools like Jenkins, GitHub Actions, and related automation frameworks. Hands-on experience with observability tools like New Relic for monitoring and troubleshooting. We are proud to be an Equal Opportunity Employer and consider all qualified applicants for employment opportunities without regard to race, age, color, religion, gender, national origin, disability, sexual orientation, veteran status or any other category protected by the laws or regulations in the locations where we operate. California applicants can find a copy of Oportun's CCPA Notice here: https://oportun.com/privacy/california-privacy-notice/. We will never request personal identifiable information (bank, credit card, etc.) before you are hired. We do not charge you for pre-employment fees such as background checks, training, or equipment. If you think you have been a victim of fraud by someone posing as us, please report your experience to the FBI’s Internet Crime Complaint Center (IC3). Show more Show less

Posted 3 weeks ago

Apply

8.0 years

0 Lacs

India

Remote

Linkedin logo

🚀 About Adalat AI Adalat AI is re-architecting India’s courts with an end-to-end justice tech stack. Our speech-to-text, OCR and case-flow tools already serve 10 states and 15 % of all courtrooms . Backed by global foundations and built by alumni of Harvard, MIT, Oxford and IIIT-H , we ship technology that accelerates justice the way UPI accelerated payments. 👩‍💻 About the Role We’re looking for a Lead Backend Engineer to own the core services that power real-time courtroom transcription, ML pipelines and secure document workflows. You’ll work side-by-side with our founders, ship fast, and set engineering standards for a stack that must scale to 10 000+ courtrooms, eight hours a day . 🔧 What You’ll Do Design, build and operate Golang micro-services, event-driven workflows and serverless functions Model complex legal data and build APIs consumed by web, mobile and ML teams Create cost-efficient, horizontally scalable systems that survive patchy bandwidth and high concurrency Partner with ML researchers to productionise ASR, semantic-search and feedback-loop pipelines Champion observability (Prometheus/NewRelic), CI/CD and zero-downtime deploys Mentor engineers and pitch PoCs for the next big product bets 🛠 Our Tech Golang · GCP · EventBridge / Kafka · PostgreSQL & DynamoDB · Kubernetes & Helm · Docker · GitHub Actions · GraphQL Prometheus / NewRelic ✅ You’re Probably a Fit If You Have 5–8 years building distributed systems in production Speak fluent Go (or similar) and know your way around queues, caches and sharded DBs Have shipped on Kubernetes, automated with CI/CD and debugged with proper metrics + traces Care about clean domain models, security, and the craft of writing maintainable code Bonus : GraphQL, full-text search, streaming pipelines, encryption at rest/in-transit 🌱 In Your First Year You’ll Roll out a backend that supports 10 000+ courtrooms running 8–10 hours daily Stand up ML infra for voice-first editing, semantic search and continuous model retraining Help courts create, manage and share highly-classified legal documents at national scale 🎁 Perks Remote-first, flexible hours & unlimited PTO Generous maternity / paternity leave Autonomy, ownership and a no-ego team of builders Direct access to Harvard/MIT/Oxford networks & L&D stipends 📬 How to Apply Email careers@adalat.ai with the subject “Backend Engineer (Lead)” —include a résumé or GitHub and a short note on the hardest system you’ve ever scaled. We know impostor syndrome is real—if you’re close on the reqs, please apply. Show more Show less

Posted 3 weeks ago

Apply

4.0 - 8.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Linkedin logo

A career in our Advisory Acceleration Centre is the natural extension of PwC’s leading class global delivery capabilities. We provide premium, cost effective, high quality services that support process quality and delivery capability in support for client engagements. Years of Experience: Candidates with 4-8 years of hands on experience Position Requirements Must Have : Experience in architecting and delivering highly scalable, distributed, cloud-based enterprise data solutions Strong expertise in end-to-end implementation of Cloud data engineering solutions like Enterprise Data lake, Data hub in AWS Proficient in Lambda or Kappa Architectures Should be aware of Data Management concepts and Data Modelling Strong AWS hands-on expertise with a programming background preferably Python/Scala Good knowledge of Big Data frameworks and related technologies - Experience in Hadoop and Spark is mandatory Strong experience in AWS compute services like AWS EMR, Glue and storage services like S3, Redshift & Dynamodb Good experience with any one of the AWS Streaming Services like AWS Kinesis, AWS SQS and AWS MSK Troubleshooting and Performance tuning experience in Spark framework - Spark core, Sql and Spark Streaming Strong understanding of DBT ELT Tool, and usage of DBT macros etc Good knowledge of Application DevOps tools (Git, CI/CD Frameworks) - Experience in Jenkins or Gitlab with rich experience in source code management like Code Pipeline, Code Build and Code Commit Experience with AWS CloudWatch, AWS Cloud Trail, AWS Account Config, AWS Config Rules Good knowledge in AWS Security and AWS Key management Strong understanding of Cloud data migration processes, methods and project lifecycle Good analytical & problem-solving skills Good communication and presentation skills Education : Any Graduate. Good analytical & problem-solving skills Good communication and presentation skills Show more Show less

Posted 3 weeks ago

Apply

10.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

Role Overview: The Cloud Security Engineer will have a profound impact on security operations, engineering, and architecture domains. The overall objective of this role is to support various teams such as product security, security operations, security engineering and various business initiatives and projects in building a secure cloud infrastructure in line with industry best practices. The Cloud Security Engineer will also implement a highly automated approach to monitoring and detecting incidents, as well as responding to them timely and effectively. Responsibilities: Advise internal customers on best practices in design and implementation of secure cloud systems Conduct reviews of various cloud platforms, services, and business initiatives to assess cyber risk Conduct Cloud Security Posture Management (CSPM) activities Design, develop, and implement security solutions to prevent exposure of cloud resources. Design, develop, and implement security requirements for cloud-based systems to meet business requirements with appropriate security controls Maintain, monitor, and deploy security baselines and automation solutions for hybrid cloud identity platform. Design and develop cloud-specific security procedures, standards, and policies. Provide support with security incidents, helping the Threat Management team prioritize and remediate appropriately Support requirements around SOC2 compliance alongside addressing the project requirements for the AWS platform and Lumino Lead continuous improvement and engineering maturity across cloud solutions. Location: Pune, India Education: Bachelor's degree in computer science, information systems and/or equivalent formal training or work experience. Nice to have Licenses/ Certifications: CISSP or equivalent security-related industry certifications AWS Certified Security - Specialty and/or AWS Associate or higher certification Certified Cloud Zero Trust (CCZT) Professional Certification Certified Cloud Security Professional (CCSP) HIPAA compliance-related certifications (e.g., Certified HIPAA Professional - CHP) Experience: Overall 10+ years of experience in IT, with at least 5 years focused on AWS security 5+ years of experience as an Information Security Administrator or Engineer 3+ years of experience in Cloud Security Architecture and/or Engineering. 2+ years of Application Security/Secure Software Development. Strong understanding of different cloud architecture models, hosting, and deployment models. Strong experience implementing security monitoring, logging, and alerting Practical knowledge of AWS foundation services related to compute, network, storage, content delivery, administration and security, deployment and management, automation technologies Strong knowledge of cloud security best practices and AWS Well-Architected Framework, especially the Security Pillar Familiarity with using AWS Cloud Services (EC2, DynamoDB, API Gateway, RDS, Lambda, CloudFront, CloudFormation, CloudWatch, Route 53, WAF, GuardDuty, Security Groups, AWS IAM, etc) Solid understanding of HIPAA regulations, as well as other compliance frameworks such as SOC 2, PCI-DSS, and GDPR Experience working with cloud security and governance tools, cloud access security brokers (CASBs), and server virtualization technologies. Experience with assessment, development, implementation, optimization, and documentation of a comprehensive and broad set of security technologies and processes (secure software development (Application Security), data protection, cryptography, key management, identity and access management (IAM), network security) within SaaS, IaaS, PaaS, and other cloud environments Basic experience with Azure Experience with deployment orchestration, automation, and security configuration management (Jenkins, Puppet, Chef, Cloudformation, Terraform, Ansible) would be a great plus Experience with services programming (AWS Lambda, Docker, etc.) would be a great plus NICE TO HAVE: Understanding of M365 suit and Azure security mechanisms Competences: Excellent written and oral communication skills. Excellent customer service skills and problem resolution. Experience in being able to manage and prioritize multiple tasks in an effective manner. Experience with service-oriented architecture for cloud-based services. Understanding of distributed denial of service attack intelligence gathering, concepts, mitigation tools, and techniques. Understanding of mobility security device and application risk and threat assessment. Understanding of nation and non-nation state actors, hacktivist groups, advanced threats, and the "kill chain" methodology. Familiarity with secure coding best practices. Strong communication & organizational skills, ability to multi-task, strong attention to details, excellent problem solving and follow-up skills required Travel Requirements: 10% Stay connected and receive alerts for jobs like this by joining our talent community. We're more than just a company - we're a community! Follow us on LinkedIn to see how we support and empower our employees and patients every day. Check our Glassdoor page for a glimpse behind the scenes and a sneak peek into You. Unlimited. , life, culture, and benefits at S+N. Explore our new website and learn more about our mission, our team, and the opportunities we offer. Show more Show less

Posted 3 weeks ago

Apply
cta

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies