Home
Jobs

1993 DynamoDB Jobs - Page 32

Set up a job alert
JobPe aggregates job listings for easy access, but you apply directly on the original job portal.

3.0 - 5.0 years

0 Lacs

Pune, Maharashtra, India

Remote

About Cybage: Founded in 1995, Cybage Software Pvt. Ltd. is a technology consulting organization specializing in outsourced product engineering services. As a leader in the technology and product engineering space, Cybage works with some of the world’s best independent software vendors. Our solutions are focused on modern technologies and are enabled by a scientific, data-driven system called Decision Mines™ for Digital Excellence. This unique model de-risks our approach, provides better predictability, and ensures better value per unit cost for our clients. An ISO 27001 certified company based in Pune, India, Cybage is partnered with more than 200 global software houses of fine repute. The array of services includes Product Engineering (OPD), Enterprise Business Solutions, Value Added Services, and Idea Incubation Services. Cybage specializes in the implementation of the Offshore Development Center (ODC) model. You will get an opportunity to be part of a highly skilled talent pool of more than 8,000 employees. Apart from Pune, we have operations hubs in GNR and Hyderabad, and we have also marked our presence in North America, Canada, the UK, Europe, Japan, Australia, and Singapore. We provide seamless services and dependable deliveries to our clients from diverse industry verticals such as Media and Entertainment, Travel and Hospitality, Online Retail, Healthcare, SCM, and Technology.

JD:
• Provide 24x7 application support for web- and Java-based applications
• Resolve application issues reported by users
• Resolve functionality and performance issues
• Create DB queries
• Identify bugs, report them to the development team, and apply workarounds

Mandatory skills:
• 3-5 years of experience in production support supporting Java applications
• Experience providing support to end users on application-related issues
• Working knowledge of Java (7, 8, 11) and remote debugging skills
• Exposure to Java web application development
• Working experience with Linux and shell scripting
• Experience with code repository tools
• Experience with databases like Oracle, DynamoDB, PostgreSQL
• Knowledge of microservices architecture
• Knowledge of the Travel and Hospitality domain
• Knowledge of WebLogic, TomEE, Tomcat, ActiveMQ, Jasper report servers, messaging services

Good to have:
• AWS cloud technology
• Jenkins/DevOps background
• Good knowledge of configuration management

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About the Company: Our client is a trusted global innovator of IT and business services. They help clients transform through consulting, industry solutions, business process services, digital & IT modernization, and managed services. Our client enables them, as well as society, to move confidently into the digital future. We are committed to our clients’ long-term success and combine global reach with local client attention to serve them in over 50 countries around the globe.

• Job Title: Amazon Connect
• Location: Bangalore
• Experience: 3+ yrs
• Job Type: Contract to hire
• Notice Period: Immediate joiners

Mandatory Skills:
• 6 years of experience in IT Support or Cloud Operations roles for Senior Dev
• 4 years of experience in IT Support or Cloud Operations roles for Senior Dev
• Hands-on experience with Amazon Connect, AWS Lambda, and DynamoDB
• Deep understanding of contact flows, queue management, and telephony configurations
• Experience with WebRTC, SIP trunking, and troubleshooting VoIP-related issues
• Strong knowledge of AWS CloudWatch, Amazon Connect metrics, and monitoring tools
• Experience with Salesforce integrations
• Familiarity with ITIL processes, especially incident and problem management
• Strong communication skills and ability to collaborate with cross-functional teams

Responsibilities:
• Diagnose and resolve technical issues: utilize product logs, diagnostics, and in-depth product knowledge to identify and fix intricate technical problems
• Conduct root cause analysis: investigate the underlying causes of technical issues and document the findings comprehensively
• Proactive maintenance tasks: engage in regular maintenance activities to ensure optimal system performance
• Monitor system performance: keep a close watch on system performance to detect and address any anomalies promptly
• Develop and update procedures: create and revise standard operating procedures and troubleshooting guides to streamline processes
• Communicate with stakeholders: provide clear and timely updates to stakeholders about the status of issues, next steps, and resolutions

Good to Have:
• AWS certifications, e.g., AWS Certified Cloud Practitioner or Solutions Architect Associate
• Exposure to CI/CD pipelines, scripting (Python, Bash), and infrastructure as code
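As a rough illustration of the Amazon Connect integration pattern this posting describes, the sketch below shows a Lambda handler that a contact flow might invoke to look up a caller in DynamoDB. The table name, key schema, attribute names, and the exact event fields are assumptions for illustration, not details from the posting.

```python
import os
import boto3

# Hypothetical table storing customer profiles keyed by phone number (assumption).
TABLE_NAME = os.environ.get("CUSTOMER_TABLE", "CustomerProfiles")
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(TABLE_NAME)


def lambda_handler(event, context):
    """Invoked by an Amazon Connect contact flow; returns flat key/value pairs
    that the flow can read back as external attributes."""
    # Amazon Connect passes caller details under Details.ContactData; the exact
    # path used here is an assumption and should be verified against a real event.
    address = (
        event.get("Details", {})
        .get("ContactData", {})
        .get("CustomerEndpoint", {})
        .get("Address", "")
    )

    item = table.get_item(Key={"phone_number": address}).get("Item")

    if not item:
        return {"customerFound": "false"}

    # Connect expects a flat dictionary of string values.
    return {
        "customerFound": "true",
        "customerName": str(item.get("name", "")),
        "accountTier": str(item.get("tier", "standard")),
    }
```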

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About the Company: They balance innovation with an open, friendly culture and the backing of a long-established parent company known for its ethical reputation. We guide customers from what’s now to what’s next by unlocking the value of their data and applications to solve their digital challenges, achieving outcomes that benefit both business and society.

About the Client: Our client is a global digital solutions and technology consulting company headquartered in Mumbai, India. The company generates annual revenue of over $4.29 billion (₹35,517 crore), reflecting 4.4% year-over-year growth in USD terms. It has a workforce of around 86,000 professionals operating in more than 40 countries and serves a global client base of over 700 organizations. Our client operates across several major industry sectors, including Banking, Financial Services & Insurance (BFSI), Technology, Media & Telecommunications (TMT), Healthcare & Life Sciences, and Manufacturing & Consumer. In the past year, the company achieved a net profit of $553.4 million (₹4,584.6 crore), marking a 1.4% increase from the previous year. It also recorded a strong order inflow of $5.6 billion, up 15.7% year-over-year, highlighting growing demand across its service lines. Key focus areas include Digital Transformation, Enterprise AI, Data & Analytics, and Product Engineering, reflecting its strategic commitment to driving innovation and value for clients across industries.

Job Title: Java + AWS
Location: Hyderabad / Pune
Experience: 4+ yrs
Job Type: Contract to hire

Senior Software Engineer (AWS/Java): We are looking for a talented, experienced senior software engineer with expertise in AWS cloud services, TypeScript, and Java development for our engineering team.

Responsibilities:
• Implement cloud applications using AWS services, TypeScript, and Java
• Write clean, maintainable, and efficient code while adhering to best practices and coding standards
• Work closely with product managers and engineers to define and refine requirements
• Provide technical guidance and mentorship to junior engineers on the team
• Troubleshoot and resolve complex technical issues and performance bottlenecks
• Create and maintain technical documentation for code and processes
• Stay up to date with industry trends and emerging technologies to continuously improve our development practices

Qualifications: Bachelor’s degree in engineering or a related field

Required Skills:
• 5+ years of software development experience with a focus on AWS cloud development and distributed application development with Java and J2EE
• 1+ years of experience in AWS development using TypeScript; if you have not worked with TypeScript, willingness to learn it, as TypeScript is the preferred language for AWS development per Principal standards
• Hands-on experience building and deploying applications on AWS cloud infrastructure (e.g., EC2, Lambda, S3, DynamoDB, RDS, API Gateway, EventBridge, SQS, SNS, Fargate, etc.)
• Strong hands-on experience in Java/J2EE, Spring, and Spring Boot development, and a good understanding of serverless computing
• Experience with REST APIs and Java shared libraries

Preferred Skills: AWS Cloud Practitioner, AWS Certified Developer, or AWS Certified Solutions Architect certification is a plus

Posted 2 weeks ago

Apply

10.0 years

0 Lacs

India

Remote

Job Title: Senior Backend Engineer – Python & Microservices
Location: Remote
Experience Required: 8–10+ years

🚀 About the Role: We’re looking for a Senior Backend Engineer (Python & Microservices) to join a high-impact engineering team focused on building scalable internal tools and enterprise SaaS platforms. You'll play a key role in designing cloud-native services, leading microservices architecture, and collaborating closely with cross-functional teams in a fully remote environment.

🔧 Responsibilities:
• Design and build scalable microservices using Python (Flask, FastAPI, Django)
• Develop production-grade RESTful APIs and background job systems
• Architect modular systems and drive microservice decomposition
• Manage SQL & NoSQL data models (PostgreSQL, MongoDB, DynamoDB, ClickHouse)
• Implement distributed data pipelines using Kafka, RabbitMQ, and SQS
• Apply best practices in rate limiting, security, performance optimisation, logging, and observability (Grafana, Datadog, CloudWatch)
• Deploy services in cloud environments (AWS preferred, Azure/GCP acceptable) using Docker, Kubernetes, and EKS
• Contribute to CI/CD and Infrastructure as Code (Jenkins, Terraform, GitHub Actions)

✅ Requirements:
• 8–10+ years of hands-on backend development experience
• Strong proficiency in Python (Flask, FastAPI, Django, etc.)
• Solid experience with microservices and containerised environments (Docker, Kubernetes, EKS)
• Expertise in REST API design, rate limiting, and performance tuning
• Familiarity with SQL & NoSQL (PostgreSQL, MongoDB, DynamoDB, ClickHouse)
• Experience with cloud platforms (AWS preferred; Azure/GCP also considered)
• CI/CD and IaC knowledge (GitHub Actions, Jenkins, Terraform)
• Exposure to distributed systems and event-based architectures (Kafka, SQS)
• Excellent written and verbal communication skills

🎯 Preferred Qualifications:
• Bachelor’s or Master’s degree in Computer Science or a related field
• Certifications in Cloud Architecture or System Design
• Experience integrating with tools like Zendesk, Openfire, or similar chat/ticketing platforms
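To make the microservices/REST work described above concrete, here is a minimal FastAPI sketch of the kind of endpoint such a role involves. The service name, model fields, and in-memory store are illustrative assumptions; a real service would back this with PostgreSQL or DynamoDB as the posting mentions.

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="orders-service")  # hypothetical microservice name

# In-memory store used purely for illustration.
_ORDERS: dict[int, dict] = {}


class OrderIn(BaseModel):
    customer_id: int
    amount: float


@app.get("/health")
def health() -> dict:
    """Liveness probe endpoint, handy behind Kubernetes/EKS."""
    return {"status": "ok"}


@app.post("/orders/{order_id}")
def create_order(order_id: int, order: OrderIn) -> dict:
    if order_id in _ORDERS:
        raise HTTPException(status_code=409, detail="order already exists")
    _ORDERS[order_id] = order.model_dump()
    return {"order_id": order_id, **_ORDERS[order_id]}


@app.get("/orders/{order_id}")
def get_order(order_id: int) -> dict:
    if order_id not in _ORDERS:
        raise HTTPException(status_code=404, detail="order not found")
    return {"order_id": order_id, **_ORDERS[order_id]}
```

Run locally with `uvicorn app:app --reload` (assuming the file is saved as app.py).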

Posted 2 weeks ago

Apply

0.0 - 3.0 years

0 Lacs

Kolkata, West Bengal

On-site

About the Role: We’re building cloud-native products that have to run fast today and scale smoothly tomorrow. To keep pace, we’re searching for a smart, efficient engineer who lives and breathes Node.js, designs crisp microservices, and knows how to make AWS Lambda sing. You’ll join a tight-knit team that prizes clean code, thoughtful architecture, and knowledge-sharing.

What You’ll Do:
• Design & build scalable backend services in Node.js, following microservices best practices
• Own end-to-end development of serverless functions (AWS Lambda), RESTful APIs, and event-driven workflows
• Architect data models in DynamoDB and optimize them for performance and cost (a brief key-design sketch follows this listing)
• Deploy & manage workloads on AWS (ECS, API Gateway, CloudWatch, IAM, etc.) using IaC tooling
• Document your APIs with Swagger/OpenAPI and champion clear, up-to-date docs across the team
• Collaborate closely with frontend (MERN) engineers, QA, and DevOps to ship features that delight users
• Mentor junior developers: review code, share patterns, and help raise the technical bar

Required Skills:
• 4+ years of professional backend development with Node.js, DynamoDB, and REST APIs
• 2+ years of working with microservices architecture
• Strong experience with AWS cloud services (Lambda, ECS, API Gateway, etc.)
• Experience with API documentation tools such as Swagger

Job Type: Full-time
Benefits: Paid sick time
Schedule: Day shift
Experience: backend development with Node.js, DynamoDB, and REST APIs: 3 years (Preferred)
Location: Kolkata, West Bengal (Required)
Work Location: In person
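Designing DynamoDB data models "for performance and cost" usually comes down to choosing partition and sort keys so that hot access paths become single Query calls rather than Scans. The role above is Node.js-focused, but the idea is language-agnostic; the sketch below uses Python/boto3 for consistency with the other sketches on this page, and the table name and key layout are hypothetical.

```python
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
# Hypothetical single-table layout: PK groups a customer's orders, SK sorts them by time.
orders = dynamodb.Table("Orders")


def put_order(customer_id: str, order_ts: str, total: int) -> None:
    orders.put_item(
        Item={
            "pk": f"CUST#{customer_id}",   # partition key: all of one customer's orders
            "sk": f"ORDER#{order_ts}",     # sort key: ISO timestamp enables range queries
            "total": total,
        }
    )


def recent_orders(customer_id: str, since_ts: str) -> list[dict]:
    # One Query call, no Scan: cheap and fast because access is keyed by design.
    resp = orders.query(
        KeyConditionExpression=Key("pk").eq(f"CUST#{customer_id}")
        & Key("sk").gt(f"ORDER#{since_ts}"),
        ScanIndexForward=False,  # newest first
    )
    return resp.get("Items", [])
```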

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

TEKsystems is seeking a Senior AWS + Data Engineer to join our dynamic team. The ideal candidate should have data engineering expertise with Hadoop and Scala/Python alongside AWS services. This role involves designing, developing, and maintaining scalable and reliable software solutions.

Job Title: Data Engineer – Spark/Scala (Batch Processing)
Location: Manyata (Hybrid)
Experience: 7+ yrs
Type: Full-Time

Mandatory Skills:
• 7-10 years’ experience in design, architecture, or development in Analytics and Data Warehousing
• Experience in building end-to-end solutions with a big data platform and Spark or Scala programming
• 5 years of solid experience in ETL pipeline building with the Spark or Scala programming framework, with knowledge of developing UNIX shell scripts and Oracle SQL/PL-SQL
• Experience with a big data platform for ETL development on the AWS cloud platform
• Proficiency in AWS cloud services, specifically EC2, S3, Lambda, Athena, Kinesis, Redshift, Glue, EMR, DynamoDB, IAM, Secrets Manager, Step Functions, SQS, SNS, CloudWatch
• Excellent skills in Python-based framework development are mandatory
• Experience with Oracle SQL database programming, SQL performance tuning, and relational model analysis
• Extensive experience with Teradata data warehouses and Cloudera Hadoop
• Proficiency across Enterprise Analytics/BI/DW/ETL technologies such as Teradata Control Framework, Tableau, OBIEE, SAS, Apache Spark, Hive
• Analytics & BI architecture appreciation and broad experience across all technology disciplines
• Experience working within a Data Delivery Life Cycle framework and Agile methodology
• Extensive experience in large enterprise environments handling large volumes of datasets with high SLAs
• Good knowledge of developing UNIX scripts, Oracle SQL/PL-SQL, and Autosys JIL scripts
• Well versed in AI-powered engineering tools like Cline and GitHub Copilot

Please send resumes to nvaseemuddin@teksystems.com or kebhat@teksystems.com
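As a rough sketch of the Spark-based batch ETL work described above, the snippet below reads raw data from S3, applies a simple transformation, and writes partitioned Parquet back to S3. The bucket paths, column names, and partitioning scheme are assumptions for illustration only.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-batch-etl").getOrCreate()

# Hypothetical S3 locations; in practice these would come from job parameters.
RAW_PATH = "s3://example-raw-bucket/orders/"
CURATED_PATH = "s3://example-curated-bucket/orders/"

raw = spark.read.parquet(RAW_PATH)

curated = (
    raw.dropDuplicates(["order_id"])                  # basic de-duplication
    .withColumn("order_date", F.to_date("order_ts"))  # derive a partition column
    .filter(F.col("amount") > 0)                      # drop obviously bad rows
)

(
    curated.write.mode("overwrite")
    .partitionBy("order_date")                        # partition for cheaper downstream scans
    .parquet(CURATED_PATH)
)

spark.stop()
```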

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Gurugram, Haryana

On-site

Engineer III, Database Engineering Gurgaon, India; Hyderabad, India Information Technology 316332 Job Description About The Role: Grade Level (for internal use): 10 Role: As a Senior Database Engineer, you will work on multiple datasets that enable S&P Capital IQ Pro to serve up value-added Ratings, Research and related information to institutional clients. The Team: Our team is responsible for gathering data from multiple sources spread across the globe using different mechanisms (ETL/GG/SQL Rep/Informatica/Data Pipeline) and converting it to a common format which can be used by client-facing UI tools and other data-providing applications. This application is the backbone of many S&P applications and is critical to our client needs. You will get to work on a wide range of technologies and tools like Oracle/SQL/.Net/Informatica/Kafka/Sonic. You will have the opportunity every day to work with people from a wide variety of backgrounds and will be able to develop a close team dynamic with coworkers from around the globe. We craft strategic implementations by using the broader capacity of the data and product. Do you want to be part of a team that executes cross-business solutions within S&P Global? Impact: Our team is responsible for delivering essential and business-critical data with applied intelligence to power the market of the future. This enables our customers to make decisions with conviction. Contribute significantly to the growth of the firm by developing innovative functionality in existing and new products, supporting and maintaining high-revenue productionized products, and achieving the above intelligently and economically using best practices. Career: This is the place to hone your existing database skills while having the chance to become exposed to fresh technologies. As an experienced member of the team, you will have the opportunity to mentor and coach developers who have recently graduated and collaborate with developers, business analysts and product managers who are experts in their domain. Your skills: You should be able to demonstrate outstanding knowledge and hands-on experience in the below areas: Complete SDLC: architecture, design, development and support of tech solutions. Play a key role in the development team to build high-quality, high-performance, scalable code. Engineer components and common services based on standard corporate development models, languages and tools. Produce technical design documents and conduct technical walkthroughs. Collaborate effectively with technical and non-technical stakeholders. Be part of a culture of continuously improving the technical design and code base. Document and demonstrate solutions using technical design docs, diagrams and stubbed code. Our Hiring Manager says: I’m looking for a person who gets excited about technology and is motivated by seeing how our individual contributions and teamwork on world-class web products affect the workflow of thousands of clients, resulting in revenue for the company. Qualifications Required: Bachelor’s degree in computer science, Information Systems or Engineering. 7+ years of experience with transactional databases like SQL Server, Oracle, PostgreSQL and other NoSQL databases like Amazon DynamoDB, MongoDB. Strong database development skills on SQL Server, Oracle. Strong knowledge of database architecture, data modeling and data warehousing. Knowledge of object-oriented design and design patterns. 
Familiar with various design and architectural patterns Strong development experience with Microsoft SQL Server Experience in cloud native development and AWS is a big plus Experience with Kafka/Sonic Broker messaging systems Nice to have: Experience in developing data pipelines using Java or C# is a significant advantage. Strong knowledge around ETL Tools – Informatica, SSIS Exposure with Informatica is an advantage. Familiarity with Agile and Scrum models Working Knowledge of VSTS. Working knowledge of AWS cloud is an added advantage. Understanding of fundamental design principles for building a scalable system. Understanding of financial markets and asset classes like Equity, Commodity, Fixed Income, Options, Index/Benchmarks is desirable. Additionally, experience with Scala, Python and Spark applications is a plus. About S&P Global Market Intelligence At S&P Global Market Intelligence, a division of S&P Global we understand the importance of accurate, deep and insightful information. Our team of experts delivers unrivaled insights and leading data and technology solutions, partnering with customers to expand their perspective, operate with confidence, and make decisions with conviction. For more information, visit www.spglobal.com/marketintelligence. What’s In It For You? Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values: Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include: Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. 
Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here. - Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf - 20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.1 - Middle Professional Tier I (EEO Job Group) Job ID: 316332 Posted On: 2025-06-16 Location: Gurgaon, Haryana, India

Posted 2 weeks ago

Apply

0.0 years

0 Lacs

Bengaluru, Karnataka

On-site

- 3+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience with data visualization using Tableau, QuickSight, or similar tools
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience with statistical analysis packages such as R, SAS, and Matlab
- Experience using SQL to pull data from a database or data warehouse and scripting experience (Python) to process data for modeling

Have you ever ordered a product on Amazon and, when that box with the smile arrived, wondered how it got to you so fast? Wondered where it came from and how much it cost Amazon? If so, Amazon’s Supply Chain Optimization Technology (SCOT) organization is for you. At SCOT, we solve deep technical problems and build innovative solutions in a fast-paced environment working with smart and passionate team members. (Learn more about SCOT: http://bit.ly/amazon-scot)

Key job responsibilities
• Analyze and synthesize large data streams across multiple systems/inputs
• Work with Product Managers to understand customer behaviors, spot system defects, and benchmark our ability to serve our customers, improving a wide range of internal products that impact inventory health
• Develop business insights based on data extraction, data analytics, trend deduction, and pattern recognition
• Present these business insights to senior management/executives
• Create advanced dashboards that help a large group of teams consume insights, make changes to business processes, and track progress
• Build analytical models that can help improve business outcomes at scale, enhancing current system capabilities

- Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift
- Experience in data mining, ETL, etc., and using databases in a business environment with large-scale, complex datasets

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
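A small sketch of the "SQL to pull data, Python to process it" workflow listed above: query Redshift (which speaks the PostgreSQL wire protocol) and post-process the result in pandas. The cluster endpoint, credentials handling, table, and metric are all hypothetical placeholders.

```python
import os

import pandas as pd
import psycopg2  # Redshift is PostgreSQL wire-compatible

# Placeholder connection details; credentials would normally come from Secrets Manager/IAM.
conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="readonly_user",
    password=os.environ.get("REDSHIFT_PASSWORD", ""),
)

# Hypothetical inventory-health query.
query = """
    SELECT region, snapshot_date, SUM(units_on_hand) AS units
    FROM inventory_snapshots
    WHERE snapshot_date >= CURRENT_DATE - 28
    GROUP BY region, snapshot_date
"""

df = pd.read_sql(query, conn)

# Python-side processing: week-over-week change per region.
df = df.sort_values(["region", "snapshot_date"])
df["wow_change"] = df.groupby("region")["units"].pct_change(periods=7)

print(df.tail())
```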

Posted 2 weeks ago

Apply

10.0 years

0 Lacs

Kerala, India

On-site

Senior Data Engineer – AWS Expert (Lead/Associate Architect Level)
📍 Location: Trivandrum or Kochi (On-site/Hybrid)
Experience: 10+ years (relevant experience in AWS of 5+ years is mandatory)

About the Role: We’re hiring a Senior Data Engineer with deep expertise in AWS services, strong hands-on experience in data ingestion, quality, and API development, and the leadership skills to operate at a Lead or Associate Architect level. This role demands a high level of technical ownership, especially in architecting scalable, reliable data pipelines and robust API integrations. You’ll collaborate with cross-functional teams across geographies, so a willingness to work night shifts overlapping with US hours (till 10 AM IST) is essential.

Key Responsibilities:
• Data Engineering Leadership: Design and implement scalable, end-to-end data ingestion and processing frameworks using AWS.
• AWS Architecture: Hands-on development using AWS Glue, Lambda, EMR, Step Functions, S3, ECS, and other AWS services.
• Data Quality & Validation: Build automated checks, validation layers, and monitoring for ensuring data accuracy and integrity (a brief validation sketch follows this listing).
• API Development: Develop secure, high-performance REST APIs for internal and external data integration.
• Collaboration: Work closely with product, analytics, and DevOps teams across geographies. Participate in Agile ceremonies and CI/CD pipelines using tools like GitLab.

What We’re Looking For:
• Experience: 5+ years in Data Engineering, with a proven track record in designing scalable AWS-based data systems
• Technical Mastery: Proficient in Python/PySpark, SQL, and building big data pipelines
• AWS Expert: Deep knowledge of core AWS services used for data ingestion and processing
• API Expertise: Experience designing and managing scalable APIs
• Leadership Qualities: Ability to work independently, lead discussions, and drive technical decisions

Preferred Qualifications:
• Experience with Kinesis, Firehose, SQS, and data lakehouse architectures
• Exposure to tools like Apache Iceberg, Aurora, Redshift, and DynamoDB
• Prior experience in distributed, multi-cluster environments

Working Hours: US time zone overlap required – must be available to work night shifts overlapping with US hours (up to 10:00 AM IST).

Work Location: Trivandrum or Kochi – on-site or hybrid options available for the right candidate.
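A minimal sketch of the kind of automated data-quality checks mentioned above, assuming a pandas DataFrame of ingested records. The column names, checks, and thresholds are illustrative assumptions, not requirements from the posting.

```python
import logging

import pandas as pd

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("data-quality")


def run_quality_checks(df: pd.DataFrame) -> bool:
    """Return True if the batch passes basic validation, logging each failure."""
    ok = True

    if df.empty:
        log.error("batch is empty")
        return False

    # Completeness: required columns must exist and contain no nulls (assumed schema).
    for col in ("order_id", "customer_id", "amount"):
        if col not in df.columns:
            log.error("missing required column: %s", col)
            ok = False
        elif df[col].isna().any():
            log.error("null values found in column: %s", col)
            ok = False

    # Uniqueness: the primary key should not repeat.
    if "order_id" in df.columns and df["order_id"].duplicated().any():
        log.error("duplicate order_id values detected")
        ok = False

    # Validity: amounts should be positive; flag rather than fail.
    if "amount" in df.columns and (df["amount"] <= 0).any():
        log.warning("non-positive amounts present; flagging for review")

    return ok


if __name__ == "__main__":
    sample = pd.DataFrame(
        {"order_id": [1, 2, 2], "customer_id": [10, 11, 12], "amount": [5.0, 0.0, 7.5]}
    )
    print("passed:", run_quality_checks(sample))
```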

Posted 2 weeks ago

Apply

0 years

0 Lacs

India

Remote

Company Description: Pixbox Labs is a collective of senior freelancers who love building beautiful, performant web applications from front to back. We specialize in design-driven, full-stack development, partnering with ambitious teams to craft digital products that feel as good as they look.

Role Description: This is a contract remote role for a Backend Developer. The Backend Developer will be responsible for implementing server-side logic, ensuring high performance and responsiveness to requests from the front-end, managing the interchange of data between the server and users, integrating front-end elements, and optimizing applications for speed and scalability. Tasks include database design, deployment, and continuous integration/delivery (CI/CD).

Tech stack:
• FastAPI, SQLAlchemy
• Database: SQLite, PostgreSQL, DynamoDB
• Storage: AWS S3
• Auth: JWT

Qualifications:
• Proficiency in backend programming languages such as Python (FastAPI)
• Experience with database technologies like MySQL, PostgreSQL, MongoDB, or others
• Skills in designing RESTful APIs and working with web services
• Knowledge of server, network, and hosting environments
• Understanding of security compliance
• Experience with version control systems, such as Git
• Strong analytical and problem-solving skills
• Good communication and teamwork skills
• Ability to work independently and remotely

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Bengaluru East, Karnataka, India

On-site

Skills: Python, Django, Lambda, PySpark, CRON, MySQL, Amazon Web Services (AWS)

We are hiring a Python Developer for one of our clients.

Job Title: Python Developer
Experience: 5+ Years
Job Type: 6 Months Contract + ext
Location: Bangalore (Hybrid)
Notice Period: Immediate Joiner Only

Job Description:
• Python development, backend experience
• Strong knowledge of AWS services (Glue, Lambda, DynamoDB, S3, PySpark)
• Excellent debugging skills to resolve production issues
• Experience with MySQL, NoSQL databases

Optional Skills:
• Experience with Django and CRON jobs
• Familiarity with data lakes, big data tools, and CI/CD

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Skills: Python, SQL, Django, AWS Lambda, CRON jobs, CI/CD, data lakes

Job Title: Python Developer
Experience: 5+ Years
Job Type: 6 Months Contract + ext
Location: Bangalore (Hybrid)
Notice Period: Immediate Joiner Only

Job Description:
• Python development, backend experience
• Strong knowledge of AWS services (Glue, Lambda, DynamoDB, S3, PySpark)
• Excellent debugging skills to resolve production issues
• Experience with MySQL, NoSQL databases

Optional Skills:
• Experience with Django and CRON jobs
• Familiarity with data lakes, big data tools, and CI/CD

If interested, please share your resume at heena@aliqan.com

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Skills: Python Development, Django, CRON, MySQL, NoSQL, Python, SQL, Glue

We are hiring a Python Developer for one of our clients.

Job Title: Python Developer
Experience: 5+ Years
Job Type: 6 Months Contract + ext
Location: Bangalore (Hybrid)
Notice Period: Immediate Joiner Only

Job Description:
• Python development, backend experience
• Strong knowledge of AWS services (Glue, Lambda, DynamoDB, S3, PySpark)
• Excellent debugging skills to resolve production issues
• Experience with MySQL, NoSQL databases

Optional Skills:
• Experience with Django and CRON jobs
• Familiarity with data lakes, big data tools, and CI/CD

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Skills: Python, Django, Lambda, PySpark, CRON, MySQL, Amazon Web Services (AWS)

We are hiring a Python Developer for one of our clients.

Job Title: Python Developer
Experience: 5+ Years
Job Type: 6 Months Contract + ext
Location: Bangalore (Hybrid)
Notice Period: Immediate Joiner Only

Job Description:
• Python development, backend experience
• Strong knowledge of AWS services (Glue, Lambda, DynamoDB, S3, PySpark)
• Excellent debugging skills to resolve production issues
• Experience with MySQL, NoSQL databases

Optional Skills:
• Experience with Django and CRON jobs
• Familiarity with data lakes, big data tools, and CI/CD

Posted 2 weeks ago

Apply

10.0 years

0 Lacs

India

Remote

Job Title: Senior Data Engineer – Data Quality, Ingestion & API Development
Work Location: Trivandrum / Kochi (Remote or Onsite)
Job Type: Full-time
Experience Level: Senior Level
Industry: IT Services | Cloud & Data Engineering
Shift: US Overlapping Hours
Notice Period: Immediate joiners only
Preferred Candidates: Based in Kerala, Tamil Nadu, or Karnataka

About the Role: We are seeking a highly skilled Senior Data Engineer to lead the development of robust data ingestion frameworks, enforce data quality, and create high-performance APIs using a modern AWS-based tech stack. This role requires a minimum of 10 years of experience, including 5+ years specifically working on AWS platforms, and demands strong hands-on skills in Python, PySpark, AWS Glue, Lambda, CI/CD, DynamoDB, and EMR. This is a high-impact engineering position for professionals who can thrive in a fast-paced, cloud-first environment, working in US overlapping hours.

Key Responsibilities:
• Data Ingestion & Engineering: Architect and develop scalable ETL/ELT pipelines using AWS Glue, Lambda, EMR, and Step Functions (see the sketch after this listing). Integrate diverse data sources into secure and optimized ingestion frameworks.
• Data Quality & Monitoring: Implement automated data validation, error handling, and quality checks to ensure integrity. Set up end-to-end monitoring, logging, and alerting systems for real-time issue resolution.
• API Development: Design, build, and document secure, high-performance RESTful APIs for seamless integrations.
• Collaboration & Agile Development: Work cross-functionally with stakeholders, data scientists, and DevOps teams. Actively participate in Agile ceremonies, code reviews, and CI/CD pipeline management (GitLab preferred).

Mandatory Skills & Experience:
• Total Experience: 10+ years in Data Engineering roles
• AWS Expertise: Minimum 5 years of experience with AWS services
• Core Skills: Python, PySpark; AWS Glue, Lambda, EMR; CI/CD pipelines (GitLab or equivalent); DynamoDB, S3, Step Functions
• Strong understanding of data quality frameworks and API-driven architectures

Preferred Qualifications:
• Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field
• Familiarity with data lakehouse architectures
• Exposure to Kinesis, Firehose, SQS is a plus

Additional Information:
Work Location: Trivandrum / Kochi (Hybrid or Onsite)
Shift: US Overlapping Hours
Notice Period: Immediate Joiners Only
Preferred Location: Kerala, Tamil Nadu, Karnataka
Employment Type: Full-Time

Ready to build scalable, cloud-native data platforms that power real-time insights? Apply now and become part of a dynamic, high-impact engineering team.
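One common way to wire the Glue/Lambda/Step Functions pieces above together is a small Lambda that kicks off a Glue job for each newly landed object. The sketch below assumes an S3 event trigger; the Glue job name and arguments are hypothetical, not details from the posting.

```python
import boto3

glue = boto3.client("glue")

# Hypothetical Glue job name; in a real pipeline this would be configuration.
GLUE_JOB_NAME = "orders-ingestion-job"


def lambda_handler(event, context):
    """Start a Glue job run, passing the S3 object that triggered us
    (standard S3 event notification shape assumed) as a job argument."""
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]

    response = glue.start_job_run(
        JobName=GLUE_JOB_NAME,
        Arguments={
            "--source_path": f"s3://{bucket}/{key}",
        },
    )
    return {"job_run_id": response["JobRunId"]}
```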

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Overview / Job Description

About the Business Unit: SaaSOps manages post-production support and the overall experience of Epsilon PeopleCloud products for our global clients. This function is responsible for product support, incident management, managed operations and the automation of processes. The team has successfully incubated and mainstreamed Site Reliability Engineering (SRE) as a practice to ensure reliable product operations on a global scale. Plus, the team is actively pioneering the adoption of AI in operations (AIOps) and recently launched AI-driven self-service capabilities to enhance operational efficiency and improve client experiences.

What you will do (roles and responsibilities):
• Collaborate with other software developers, business analysts and software architects to plan, design, develop, test, and maintain web-based business applications built on Microsoft and other similar frameworks and technologies
• Apply excellent skills and hands-on experience in developing frontend applications along with middleware and backend
• Maintain high standards of software quality within the team by establishing best practices and processes
• Think creatively to push beyond the boundaries of existing practices and mindsets
• Use knowledge to create new processes and improve existing ones in terms of design and performance
• Package and support deployment of releases
• Participate in, plan, and execute team-building and fun activities

Essential skills & experience:
• Bachelor’s degree in Computer Science or a related field, or equivalent experience
• 3-4 years of experience in software engineering
• Demonstrated experience driving delivery through strong delivery practices across complex programs of work
• Strong communication skills
• Detail-oriented and able to manage multiple tasks simultaneously
• Willingness to learn new skills and apply them to developing new-age applications
• Experience with web development technologies and frameworks including .NET Framework, REST APIs, MVC
• Working knowledge of database technologies such as SQL, Oracle, DynamoDB; basic Oracle SQL and PL/SQL is a must
• Proficiency in HTML, CSS, JavaScript, and jQuery
• Unit testing (NUnit)
• Cloud (AWS/Azure)
• Knowledge of version control tools like GitHub, VSTS, etc. is a must
• Agile development, DevOps (CI/CD)
• C# and .NET Framework, Web APIs
• Debugging and troubleshooting skills
• Ability to drive things independently with minimal supervision
• Developing web APIs in JSON and troubleshooting them during production issues

Desirable skills & experience:
• API testing through Postman or ReadyAPI
• Responsive web (Bootstrap)
• Experience with the Unix/Linux command line and bash shell is good to have
• Experience in AWS, Redshift or equivalent databases, Lambda functions, Snowflake DB types
• Proficient in Unix shell scripting and Python
• Knowledge of AWS EC2, S3, AMI, etc.
• Security scans and vulnerability fixes for web applications
• Kibana tool validations and analysis

Personal attributes:
• Professionalism and integrity
• Self-starter
• Excellent command of verbal and written English
• Well organized, with the ability to coordinate development across multiple team members
• Commitment to continuous learning and team/individual growth
• Ability to quickly adapt to a changing tech landscape
• Analysis and problem-solving skills

Additional Information: Epsilon is a global data, technology and services company that powers the marketing and advertising ecosystem. For decades, we’ve provided marketers from the world’s leading brands the data, technology and services they need to engage consumers with 1 View, 1 Vision and 1 Voice. 1 View of their universe of potential buyers. 1 Vision for engaging each individual. And 1 Voice to harmonize engagement across paid, owned and earned channels. Epsilon’s comprehensive portfolio of capabilities across our suite of digital media, messaging and loyalty solutions bridges the divide between marketing and advertising technology. We process 400+ billion consumer actions each day using advanced AI and hold many patents of proprietary technology, including real-time modeling languages and consumer privacy advancements. Thanks to the work of every employee, Epsilon has been consistently recognized as industry-leading by Forrester, Adweek and the MRC. Epsilon is a global company with more than 9,000 employees around the world.

Epsilon has a core set of 5 values that define our culture and guide us to create value for our clients, our people and consumers. We are seeking candidates that align with our company values, demonstrate them and make them meaningful in their day-to-day work:
• Act with integrity. We are transparent and have the courage to do the right thing.
• Work together to win together. We believe collaboration is the catalyst that unlocks our full potential.
• Innovate with purpose. We shape the market with big ideas that drive big outcomes.
• Respect all voices. We embrace differences and foster a culture of connection and belonging.
• Empower with accountability. We trust each other to own and deliver on common goals.

Because You Matter: YOUniverse. A work-world with you at the heart of it! At Epsilon, we believe people make the place. And everything we do is designed with you in mind. That’s why our work-world, aptly named ‘YOUniverse’, is focused on creating a nurturing environment that elevates your growth, wellbeing and work-life harmony. So, come be part of a people-centric workspace where care for you is at the core of all we do. Take a trip to YOUniverse and explore our unique benefits.

Epsilon is an Equal Opportunity Employer. Epsilon is committed to promoting diversity, inclusion, and equal employment opportunities by using reasonable efforts to attract, recruit, engage and retain qualified individuals of all ethnicities and backgrounds, including, but not limited to, women, people of color, LGBTQ individuals, people with disabilities and any other underrepresented groups, traits or characteristics.

Posted 2 weeks ago

Apply

4.0 - 6.0 years

6 - 9 Lacs

Mumbai

On-site

JOB DESCRIPTION

About the Advanced Analytics team: The central Advanced Analytics team at the Abbott Established Pharma Division’s (EPD) headquarters in Basel helps define and lead the transformation towards becoming a global, data-driven company with the help of data and advanced technologies (e.g., Machine Learning, Deep Learning, Generative AI, Computer Vision). To us, Advanced Analytics is an important lever to reach our business targets, now and in the future; it helps differentiate us from our competition and ensure sustainable revenue growth at optimal margins. Hence the central AA team is an integral part of the Strategy Management Office at EPD, which has a very close link and regular interactions with the EPD Senior Leadership Team.

Primary Job Function: With the above requirements in mind, EPD is looking to fill the role of Analytics Developer reporting to the Head of AA Product Development. The Analytics Developer will be responsible for developing and continuously refining enterprise-grade analytics products and applications leveraging the AWS cloud platform. This role involves leading Advanced Analytics product initiatives, ensuring delivery of robust cloud-based products, and driving innovation to support the business's advanced analytics needs.

Core Job Responsibilities:
• Build and maintain responsive web applications using React.js or cross-platform mobile apps using React Native
• Design and implement backend services using AWS Lambda, API Gateway, DynamoDB, RDS, S3, and other AWS services
• Integrate third-party APIs and manage authentication/authorization (e.g., Cognito, OAuth)
• Ensure high performance, scalability, and security of applications
• Collaborate with designers, product managers, and other developers to deliver high-quality features
• Write clean, maintainable, and testable code
• Participate in code reviews, testing, and deployment processes
• Monitor and troubleshoot production issues using AWS tools (e.g., CloudWatch, X-Ray)

Supervisory/Management Responsibilities: Direct Reports: None. Indirect Reports: None.

Position Accountability/Scope: The Analytics Developer is accountable for delivering targeted business impact per initiative in collaboration with key stakeholders. This role involves significant responsibility for the development and management of strategic AI/AA products and applications, enabling faster, better, and more informed decision-making within the business.

Minimum Education: Bachelor's/Master's degree in a relevant field (e.g., Computer Science, Engineering, etc.).

Minimum Experience:
• At least 4-6 years as a Full Stack Developer or in a similar role
• Proficiency in React Native and React.js
• Strong knowledge of JavaScript and TypeScript
• Experience with Node.js, Express, or NestJS for backend development
• Hands-on experience with AWS services: Lambda, API Gateway, DynamoDB, RDS, S3, Cognito, etc.
• Familiarity with CI/CD pipelines (e.g., AWS CodePipeline, GitHub Actions)
• Experience with RESTful APIs and/or GraphQL
• Knowledge of mobile app deployment (App Store, Google Play)
• Strong understanding of cloud architecture and serverless computing
• Strong programming skills in Python

Preferred Additional Skills:
• Experience in building machine learning algorithms and GenAI applications
• AWS certification (e.g., AWS Certified Developer or Solutions Architect)
• Experience with containerization (e.g., Docker, ECS, EKS)
• Familiarity with Infrastructure as Code (e.g., AWS CDK, CloudFormation, Terraform)
• Experience with version control systems like CodeCommit
• Familiarity with serverless architecture and services like AWS Lambda
• Understanding of security best practices and implementation in cloud environments

Posted 2 weeks ago

Apply

3.0 - 6.0 years

4 - 8 Lacs

Pune

On-site

HMH is a learning technology company committed to delivering connected solutions that engage learners, empower educators and improve student outcomes. As a leading provider of K–12 core curriculum, supplemental and intervention solutions, and professional learning services, HMH partners with educators and school districts to uncover solutions that unlock students’ potential and extend teachers’ capabilities. HMH serves more than 50 million students and 4 million educators in 150 countries. HMH Technology India Pvt. Ltd. is our technology and innovation arm in India focused on developing novel products and solutions using cutting-edge technology to better serve our clients globally. HMH aims to help employees grow as people, and not just as professionals. Software Engineering at HMH is focused on building fantastic software to meet the challenges facing teachers and learners, enabling and supporting a wide range of next generation learning experiences. We design and build custom applications and services used by millions. We are creating teams full of innovative, eager software professionals to build the products that will transform our industry. We are staffing small, self-contained development teams with people who love solving problems, building high quality products and services. We use a wide range of technologies and are building up a next generation microservices platform that can make our learning tools and content available to all our customers. If you want to make a difference in the lives of students and teachers and understand what it takes to deliver high quality software, we would love to talk to you about this opportunity. Technology Stack You'll work with technologies such as Java, Spring Boot, Kafka, Aurora, Mesos, Jenkins etc. This will be a hands-on coding role working as part of a cross-functional team alongside other developers, designers and quality engineers, within an agile development environment. We’re working on the development of our next generation learning platform and solutions utilizing the latest in server and web technologies. Responsibilities: Build high-quality, clean, scalable, and reusable code by enforcing best practices around software engineering architecture and processes (Code Reviews, Unit testing, etc.) on the team. Work with the product owners to understand detailed requirements and own your code from design, implementation, test automation and delivery of high-quality product to our users. Identify ways to improve data reliability, efficiency, and quality. Perform development tasks from design specifications. Construct and verify (unit test) software components to meet design specifications. Perform quality assurance functions by collaborating with the cross-team members to identify and resolve software defects. Participate in production support and on-call rotation for the services owned by the team. Adhere to standards, such as security patterns, logging patterns, etc. Collaborate with cross-functional team members/vendors in different geographical locations to ensure successful delivery of the product features Have ownership over the things you build, help shape the product and technical vision, direction, and how we iterate. Work closely with your teammates for improved stability, reliability, and quality. Perform other duties as assigned to ensure the success of the team and the entire organization. Run numerous experiments in a fast-paced, analytical culture so you can quickly learn and adapt your work. 
Build and maintain CI/CD pipelines for services owned by team by following secure development practices. Skills & Experience: 3 to 6 years' experience in a relevant software development role Excellent object-oriented design & programming skills, including the application of design patterns and avoidance of anti-patterns. Strong Cloud platform skills: AWS Lambda, Terraform, SNS, SQS, RDS, Kinesis, DynamoDB etc. Experience building large-scale, enterprise applications with ReactJS/AngularJS. Proficient with front-end technologies, such as HTML, CSS, JavaScript preferred. Experience working in a collaborative team of application developers and source code repositories. Deep knowledge of more than one programming language like Node.js/Java. Demonstrable knowledge of AWS and Data Platform experience: Lambda, Dynamodb, RDS, S3, Kinesis, Snowflake. Demonstrated ability to follow through with all tasks, promises and commitments. Ability to communicate and work effectively within priorities. Ability to work under tight timelines in a fast-paced environment. Understanding software development methodologies and principles. Ability to solve large scale complex problems. Working experience of modern Agile software development methodologies (i.e. Kanban, Scrum, Test Driven Development) HMH Technology Private Limited is an Equal Opportunity Employer and considers applicants for all positions without regard to race, colour, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. We are committed to creating a dynamic work environment that values diversity and inclusion, respect and integrity, customer focus, and innovation.

Posted 2 weeks ago

Apply

2.0 years

3 - 6 Lacs

India

Remote

About the Role: We are seeking a self-motivated and deadline-driven Backend Developer with a strong foundation in Golang or Python, backend architecture, and cloud infrastructure (preferably AWS). You’ll be building and optimizing scalable APIs, microservices, and data pipelines for AI-integrated platforms. This role requires a passion for AI/ML technologies and an ability to collaborate in fast-paced, innovative environments.

Key Responsibilities:
• Design, develop, and maintain high-performance backend services using Golang or Python
• Integrate AI/ML modules via APIs and prompt engineering where applicable
• Develop and optimize RESTful APIs, GraphQL endpoints, and server-side logic
• Deploy and manage services in AWS (Lambda, EC2, S3, RDS, etc.)
• Participate in architecture decisions, code reviews, and CI/CD workflows
• Collaborate with front-end, data, and AI teams to ensure cohesive development
• Troubleshoot, test, and maintain backend code to ensure strong optimization and functionality
• Adhere to project deadlines and milestones with minimal supervision

Required Skills:
• 2+ years of experience in backend development (Golang or Python)
• Strong understanding of microservices architecture, REST APIs, and event-driven systems
• Experience with AWS services (Lambda, S3, EC2, DynamoDB, etc.)
• Basic understanding of prompt engineering and integration with LLMs or AI APIs
• Familiarity with PostgreSQL / MySQL / NoSQL databases
• Proficient with Git, Docker, and CI/CD pipelines
• Passion for emerging AI technologies and automation
• Excellent communication and time management skills

Good to Have:
• Experience with AI frameworks (OpenAI, LangChain, Hugging Face, etc.)
• Exposure to serverless architecture
• Background in data engineering or DevOps

What We Offer:
• Competitive salary and performance bonuses
• Work on cutting-edge AI/automation products
• Remote/hybrid flexibility
• Fast-paced, innovative, and mission-driven environment

Job Type: Full-time
Pay: ₹300,000.00 - ₹600,000.00 per year
Benefits: Paid time off, work from home
Location Type: In-person
Schedule: Monday to Friday, UK shift, weekend availability
Experience: Back-end development: 3 years (Required)
Work Location: In person
Application Deadline: 30/06/2025
Expected Start Date: 07/07/2025

Posted 2 weeks ago

Apply

10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Candidates ready to join immediately can share their details via email for quick processing. 📌 CCTC | ECTC | Notice Period | Location Preference: nitin.patil@ust.com. Act fast for immediate attention! ⏳📩

Roles and Responsibilities:

Architecture & Infrastructure Design
• Architect scalable, resilient, and secure AI/ML infrastructure on AWS using services like EC2, SageMaker, Bedrock, VPC, RDS, DynamoDB, and CloudWatch
• Develop Infrastructure as Code (IaC) using Terraform, and automate deployments with CI/CD pipelines
• Optimize cost and performance of cloud resources used for AI workloads

AI Project Leadership
• Translate business objectives into actionable AI strategies and solutions
• Oversee the entire AI lifecycle, from data ingestion, model training, and evaluation to deployment and monitoring
• Drive roadmap planning, delivery timelines, and project success metrics

Model Development & Deployment
• Lead selection and development of AI/ML models, particularly for NLP, GenAI, and AIOps use cases
• Implement frameworks for bias detection, explainability, and responsible AI
• Enhance model performance through tuning and efficient resource utilization

Security & Compliance
• Ensure data privacy, security best practices, and compliance with IAM policies, encryption standards, and regulatory frameworks
• Perform regular audits and vulnerability assessments to ensure system integrity

Team Leadership & Collaboration
• Lead and mentor a team of cloud engineers, ML practitioners, software developers, and data analysts
• Promote cross-functional collaboration with business and technical stakeholders
• Conduct technical reviews and ensure delivery of production-grade solutions

Monitoring & Maintenance
• Establish robust model monitoring, alerting, and feedback loops to detect drift and maintain model reliability (a brief monitoring sketch follows this listing)
• Ensure ongoing optimization of infrastructure and ML pipelines

Must-Have Skills:
• 10+ years of experience in IT with 4+ years in AI/ML leadership roles
• Strong hands-on experience in AWS services: EC2, SageMaker, Bedrock, RDS, VPC, DynamoDB, CloudWatch
• Expertise in Python for ML development and automation
• Solid understanding of Terraform, Docker, Git, and CI/CD pipelines
• Proven track record in delivering AI/ML projects into production environments
• Deep understanding of MLOps, model versioning, monitoring, and retraining pipelines
• Experience in implementing Responsible AI practices, including fairness, explainability, and bias mitigation
• Knowledge of cloud security best practices and IAM role configuration
• Excellent leadership, communication, and stakeholder management skills

Good-to-Have Skills:
• AWS certifications such as AWS Certified Machine Learning – Specialty or AWS Certified Solutions Architect
• Familiarity with data privacy laws and frameworks (GDPR, HIPAA)
• Experience with AI governance and ethical AI frameworks
• Expertise in cost optimization and performance tuning for AI on the cloud
• Exposure to LangChain, LLMs, Kubeflow, or GCP-based AI services

Skills: Enterprise Architecture, Enterprise Architect, AWS, Python
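As a small illustration of the model-monitoring and drift-detection responsibility above, the sketch below publishes a custom drift score to CloudWatch and sets an alarm on it. The namespace, metric name, model name, and threshold are assumptions for illustration; the drift computation itself is left out.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

NAMESPACE = "MLOps/ModelMonitoring"   # hypothetical namespace
METRIC = "FeatureDriftScore"          # hypothetical metric name
MODEL = "churn-classifier-v3"         # hypothetical model identifier


def publish_drift_score(score: float) -> None:
    """Push one drift measurement; typically scheduled (e.g., hourly) via EventBridge."""
    cloudwatch.put_metric_data(
        Namespace=NAMESPACE,
        MetricData=[{
            "MetricName": METRIC,
            "Dimensions": [{"Name": "ModelName", "Value": MODEL}],
            "Value": score,
            "Unit": "None",
        }],
    )


def ensure_drift_alarm(threshold: float = 0.3) -> None:
    """Alarm when the average drift score over three hourly periods exceeds the threshold."""
    cloudwatch.put_metric_alarm(
        AlarmName=f"{MODEL}-drift-high",
        Namespace=NAMESPACE,
        MetricName=METRIC,
        Dimensions=[{"Name": "ModelName", "Value": MODEL}],
        Statistic="Average",
        Period=3600,
        EvaluationPeriods=3,
        Threshold=threshold,
        ComparisonOperator="GreaterThanThreshold",
        TreatMissingData="breaching",  # missing data likely means the pipeline itself is broken
    )
```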

Posted 2 weeks ago

Apply

10.0 years

0 Lacs

Thiruvananthapuram, Kerala, India

On-site

Candidates ready to join immediately can share their details via email for quick processing. 📌 CCTC | ECTC | Notice Period | Location Preference: nitin.patil@ust.com. Act fast for immediate attention! ⏳📩

Roles and Responsibilities:

Architecture & Infrastructure Design
• Architect scalable, resilient, and secure AI/ML infrastructure on AWS using services like EC2, SageMaker, Bedrock, VPC, RDS, DynamoDB, and CloudWatch
• Develop Infrastructure as Code (IaC) using Terraform, and automate deployments with CI/CD pipelines
• Optimize cost and performance of cloud resources used for AI workloads

AI Project Leadership
• Translate business objectives into actionable AI strategies and solutions
• Oversee the entire AI lifecycle, from data ingestion, model training, and evaluation to deployment and monitoring
• Drive roadmap planning, delivery timelines, and project success metrics

Model Development & Deployment
• Lead selection and development of AI/ML models, particularly for NLP, GenAI, and AIOps use cases
• Implement frameworks for bias detection, explainability, and responsible AI
• Enhance model performance through tuning and efficient resource utilization

Security & Compliance
• Ensure data privacy, security best practices, and compliance with IAM policies, encryption standards, and regulatory frameworks
• Perform regular audits and vulnerability assessments to ensure system integrity

Team Leadership & Collaboration
• Lead and mentor a team of cloud engineers, ML practitioners, software developers, and data analysts
• Promote cross-functional collaboration with business and technical stakeholders
• Conduct technical reviews and ensure delivery of production-grade solutions

Monitoring & Maintenance
• Establish robust model monitoring, alerting, and feedback loops to detect drift and maintain model reliability
• Ensure ongoing optimization of infrastructure and ML pipelines

Must-Have Skills:
• 10+ years of experience in IT with 4+ years in AI/ML leadership roles
• Strong hands-on experience in AWS services: EC2, SageMaker, Bedrock, RDS, VPC, DynamoDB, CloudWatch
• Expertise in Python for ML development and automation
• Solid understanding of Terraform, Docker, Git, and CI/CD pipelines
• Proven track record in delivering AI/ML projects into production environments
• Deep understanding of MLOps, model versioning, monitoring, and retraining pipelines
• Experience in implementing Responsible AI practices, including fairness, explainability, and bias mitigation
• Knowledge of cloud security best practices and IAM role configuration
• Excellent leadership, communication, and stakeholder management skills

Good-to-Have Skills:
• AWS certifications such as AWS Certified Machine Learning – Specialty or AWS Certified Solutions Architect
• Familiarity with data privacy laws and frameworks (GDPR, HIPAA)
• Experience with AI governance and ethical AI frameworks
• Expertise in cost optimization and performance tuning for AI on the cloud
• Exposure to LangChain, LLMs, Kubeflow, or GCP-based AI services

Skills: Enterprise Architecture, Enterprise Architect, AWS, Python

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Kochi, Kerala, India

On-site

Hiring a Senior Full Stack Java Software Developer to join our team at P Square Solutions (part of Neology Inc., www.neology.com).

Number of Open Positions: 1
Experience: 6+ years
Industry: IT Product & Services
Employment Type: Full-time
Work Location: Smart City, Kochi, Kerala
Shift Timing: Based on projects; typically day/evening shift

Role Description
We are hiring a skilled Senior Software Developer with a Java/Spring/Angular background to help us design, create, extend, and maintain the core of our alternate toll payment software platform.

Duties & Responsibilities
• Write well-designed, efficient, testable code
• Develop functionality using an AWS-based architecture
• Develop and adhere to SDLC best practices
• Design, implement, and maintain high-volume, low-latency Java-based applications
• Help architect the overall system
• Help troubleshoot issues at the microservice and/or integration level

Required Qualifications
• 6+ years' experience in software development, specifically developing applications on Java platforms
• Database experience with MS SQL Server, MySQL, and DynamoDB
• Proficiency with Linux-based environments, including shell scripting
• Experience with Angular (latest versions), TypeScript, JavaScript, HTML5, and CSS
• Good knowledge of data-intensive applications
• Expertise with relevant technologies such as Spring Boot
• Experience with performance optimization
• Solid practical experience developing enterprise software applications
• Experience with microservices
• Able to quickly fix bugs and solve problems
• Excellent ownership and communication skills
• Experience working in an agile team
• Candidates must be eligible to work in the USA as a direct hire and must have the appropriate visa/work permit credentials

Posted 2 weeks ago

Apply

0.0 years

0 Lacs

Gurugram, Haryana

On-site

Lead, Platform Engineering
Gurgaon, India | Information Technology | Job ID 309234

Job Description
About The Role: Grade Level (for internal use): 11

Department Overview
PVR DevOps is a global team that provides specialised technical builds across a suite of products. DevOps members work closely with the Development, Testing and Client Services teams to build and develop applications using the latest technologies to ensure the highest availability and resilience of all services. Our work helps ensure that PVR continues to provide high quality service and maintain client satisfaction.

Position Summary
S&P Global is seeking a highly motivated engineer to join our PVR DevOps team in Noida. DevOps is a rapidly growing team at the heart of ensuring the availability and correct operation of our valuations, market and trade data applications. The team prides itself on its flexibility and technical diversity to maintain service availability and contribute improvements through design and development.

Duties & Accountabilities
The role of Principal DevOps Engineer is primarily focused on building functional systems that improve our customer experience. Responsibilities include:
• Creating infrastructure and environments to support our platforms and applications, using Terraform and related technologies to ensure all our environments are controlled and consistent
• Playing a leading role in implementing DevOps technologies and processes, e.g. containerisation, CI/CD, infrastructure as code, metrics, monitoring; automating always
• Supporting, monitoring, maintaining and improving our infrastructure and the live running of our applications
• Maintaining the health of cloud accounts for security, cost and best practices (a scripted hygiene check of this kind is sketched after this listing)
• Acting as a mentor for junior team members
• Providing assistance to other functional areas such as development, test and Client Services

Knowledge, Skills & Experience
Required:
• Strong background in Linux/Unix administration in IaaS / PaaS / SaaS models
• Deployment, maintenance and support of enterprise applications into AWS, including (but not limited to) Route53, ELB, VPC, EC2, S3, ECS, SQS
• Deep understanding of Terraform and similar "Infrastructure as Code" technologies
• Good understanding of cloud security models and best practices
• Good understanding of modern CI/CD methods and approaches
• Strong experience with SQL and NoSQL databases such as MySQL, PostgreSQL, DB2, MongoDB and DynamoDB
• Experience with automation/configuration management using toolsets such as Chef, Puppet or equivalent
• Experience of enterprise systems deployed as microservices through code pipelines utilizing containerisation (Docker)
• Ability to use a wide variety of open source technologies and tools
• Working knowledge of, and the ability to write, scripts in Bash and Python, plus the ability to understand Java, JavaScript and PHP
• Working knowledge of development languages, Java preferred
• Knowledge of best practices and IT operations in an always-up, always-available service
• Experience with systems and IT operations operating within an ISO 27001 environment
• Can manage regular system patching, including critical security upgrades, and will seek strategies to automate these processes
• Can work with and manage key suppliers in the production environments (security suppliers, suppliers providing infrastructure monitoring and others)
• Expertise in system monitoring and alerting strategies, with the experience to drive improvements in system monitoring through automation, third-party tools and frameworks, and by validating external suppliers

Personal Competencies
Personal Impact
• Confident individual, able to represent the team at various levels
• Strong analytical and problem-solving skills
• Demonstrated ability to work independently with minimal supervision
• Ability to prioritise and multi-task, balancing technical, business and other drivers
• Highly organized with very good attention to detail
• Methodical, organized problem-solving skills; analytical nature
• Takes ownership of issues and drives through to resolution
• Flexible and willing to adapt to changing situations in a fast-moving environment

Communication
• Demonstrates a global mindset, respects cultural differences and is open to new ideas and approaches
• Able to build relationships with all teams, identifying and focusing on their needs
• Ability to communicate effectively at business and technical level is essential
• Experience working in a global team

Teamwork
• An effective team player and strong collaborator across technology and all relevant areas of the business
• Enthusiastic with a drive to succeed
• Thrives in a pressurized environment with a "can do" attitude
• Must be able to work under own initiative

About S&P Global Market Intelligence
At S&P Global Market Intelligence, a division of S&P Global, we understand the importance of accurate, deep and insightful information. Our team of experts delivers unrivaled insights and leading data and technology solutions, partnering with customers to expand their perspective, operate with confidence, and make decisions with conviction. For more information, visit www.spglobal.com/marketintelligence.

What's In It For You?
Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability, to analyzing energy transition across the supply chain, to building workflow solutions that make it easy to tap into insight and apply it, we are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you and your career need to thrive at S&P Global. Our benefits include:
• Health & Wellness: Health care coverage designed for the mind and body.
• Flexible Downtime: Generous time off helps keep you energized for your time on.
• Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
• Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
• Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
• Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.
For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Equal Opportunity Employer: S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

IFTECH202.2 - Middle Professional Tier II (EEO Job Group)
Job ID: 309234
Posted On: 2025-06-15
Location: Gurgaon, Haryana, India
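The cloud-account health and Python scripting items in this listing can be made concrete with a short, hedged sketch (not taken from the posting): a boto3 script that reports running EC2 instances missing a cost-allocation tag. The tag key and the running-state filter are assumptions chosen for illustration.

```python
"""Minimal sketch: report running EC2 instances missing a cost tag (assumed tag key)."""
import boto3

REQUIRED_TAG = "CostCentre"  # assumed cost-allocation tag key

ec2 = boto3.client("ec2")
paginator = ec2.get_paginator("describe_instances")

# Only inspect running instances to keep the example short; stopped instances
# still incur storage cost and would matter in a real hygiene report.
pages = paginator.paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)

untagged = []
for page in pages:
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            tag_keys = {tag["Key"] for tag in instance.get("Tags", [])}
            if REQUIRED_TAG not in tag_keys:
                untagged.append(instance["InstanceId"])

if untagged:
    print(f"Instances missing the '{REQUIRED_TAG}' tag:")
    for instance_id in untagged:
        print(f"  {instance_id}")
else:
    print(f"All running instances carry a '{REQUIRED_TAG}' tag.")
```

In practice a check like this would run on a schedule (for example from a small Lambda or a CI job) and feed an alerting channel rather than printing to stdout.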

Posted 2 weeks ago

Apply

10.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Senior Data Engineer – AWS Expert (Lead/Associate Architect Level)
📍 Location: Trivandrum or Kochi (on-site/hybrid)
Experience: 10+ years (5+ years of relevant AWS experience is mandatory)

About the Role
We're hiring a Senior Data Engineer with deep expertise in AWS services, strong hands-on experience in data ingestion, quality, and API development, and the leadership skills to operate at a Lead or Associate Architect level. This role demands a high level of technical ownership, especially in architecting scalable, reliable data pipelines and robust API integrations. You'll collaborate with cross-functional teams across geographies, so a willingness to work night shifts overlapping with US hours (till 10 AM IST) is essential.

Key Responsibilities
• Data Engineering Leadership: Design and implement scalable, end-to-end data ingestion and processing frameworks using AWS.
• AWS Architecture: Hands-on development using AWS Glue, Lambda, EMR, Step Functions, S3, ECS, and other AWS services.
• Data Quality & Validation: Build automated checks, validation layers, and monitoring to ensure data accuracy and integrity (a minimal validation sketch follows this listing).
• API Development: Develop secure, high-performance REST APIs for internal and external data integration.
• Collaboration: Work closely with product, analytics, and DevOps teams across geographies; participate in Agile ceremonies and CI/CD pipelines using tools like GitLab.

What We're Looking For
• Experience: 5+ years in data engineering, with a proven track record of designing scalable AWS-based data systems.
• Technical Mastery: Proficient in Python/PySpark and SQL, and in building big data pipelines.
• AWS Expert: Deep knowledge of core AWS services used for data ingestion and processing.
• API Expertise: Experience designing and managing scalable APIs.
• Leadership Qualities: Ability to work independently, lead discussions, and drive technical decisions.

Preferred Qualifications
• Experience with Kinesis, Firehose, SQS, and data lakehouse architectures.
• Exposure to tools like Apache Iceberg, Aurora, Redshift, and DynamoDB.
• Prior experience in distributed, multi-cluster environments.

Working Hours
US time zone overlap required: must be available to work night shifts overlapping with US hours (up to 10:00 AM IST).

Work Location
Trivandrum or Kochi; on-site or hybrid options available for the right candidate.
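The Data Quality & Validation responsibility above lends itself to a short, hedged PySpark sketch: read a raw S3 drop, apply a couple of basic checks, and write partitioned Parquet only when the batch passes. The bucket paths, the order_id key column, and the thresholds are illustrative assumptions rather than details from the listing.

```python
"""Minimal sketch: ingestion with a basic data-quality gate (assumed paths and columns)."""
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-ingest-validate").getOrCreate()

RAW_PATH = "s3://example-raw-bucket/orders/2025-06-15/"   # hypothetical input drop
CURATED_PATH = "s3://example-curated-bucket/orders/"      # hypothetical curated zone

df = spark.read.json(RAW_PATH)

# Basic quality checks: the batch must be non-empty and business keys non-null.
total_rows = df.count()
null_keys = df.filter(F.col("order_id").isNull()).count()  # assumed key column

if total_rows == 0:
    raise ValueError("Validation failed: empty batch")
if null_keys > 0:
    raise ValueError(f"Validation failed: {null_keys} rows with a null order_id")

# The batch passed the gate: write partitioned Parquet for downstream consumers.
(
    df.withColumn("ingest_date", F.current_date())
      .write.mode("append")
      .partitionBy("ingest_date")
      .parquet(CURATED_PATH)
)
print(f"Wrote {total_rows} validated rows to {CURATED_PATH}")
```

On AWS Glue the same logic would typically run inside a Glue job (optionally via a GlueContext and DynamicFrames), and richer checks could be pushed into a dedicated data-quality framework; this sketch only shows the shape of the validation gate.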

Posted 2 weeks ago

Apply

12.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Title: Senior Technical Architect
Location: Chennai (Hybrid)
Experience: 12 - 20 years
Skills: React.js, Next.js, Node.js, Nest.js, AWS, DevOps, RDBMS, SQL

About the Role
We are looking for a Senior Fullstack Engineer to join a fast-growing product company, driving the development of scalable, high-impact applications. This role is perfect for an engineer with a startup mindset who thrives in a fast-paced environment, takes ownership, and is passionate about building world-class products. As a key member of our engineering team, you will work across the full stack, from backend APIs to dynamic frontends, using Node.js, React.js, Nest.js, Next.js, and AWS to develop highly scalable and resilient applications.

Key Responsibilities
• Design, develop, and maintain end-to-end web applications with a focus on scalability, performance, and security.
• Build and optimize backend services using Node.js and Nest.js, ensuring robust API design and efficient data handling.
• Develop modern, responsive frontend interfaces using React.js and Next.js with a component-driven architecture.
• Deploy and manage cloud infrastructure on AWS, ensuring high availability and fault tolerance.
• Collaborate closely with product managers, designers, and fellow engineers to deliver high-impact features.
• Champion best practices in code quality, testing, CI/CD, and DevOps.
• Continuously improve system architecture to support rapid growth and scalability.

Required Qualifications
• 4+ years of experience as a Technical Architect / Fullstack Engineer, with strong expertise in Node.js, React.js, Nest.js, and Next.js.
• Deep understanding of API development, microservices, and event-driven architectures.
• Experience with AWS services (Lambda, DynamoDB, S3, ECS, etc.).
• Strong product mindset, capable of translating business needs into technical solutions.
• Startup experience or a self-starter attitude; comfortable with ambiguity and fast iteration.
• Strong grasp of performance optimization, database design, and system architecture.
• Excellent communication skills, with the ability to work in a collaborative, cross-functional team.

Nice-to-Have
• FinTech experience is a strong plus.
• Experience with GraphQL, WebSockets, or real-time applications.
• Prior work in a high-growth startup environment.
• Familiarity with Docker, Kubernetes, and Terraform.
• Strong understanding of DevOps and CI/CD best practices.

Posted 2 weeks ago

Apply