
547 Lambda Jobs - Page 7

JobPe aggregates results for easy access, but you apply directly on the original job portal.

5.0 - 10.0 years

50 - 60 Lacs

Bengaluru

Work from Office

Job Title: AI/ML Architect (GenAI, LLMs & Enterprise Automation)
Location: Bangalore
Experience: 8+ years (including 4+ years in AI/ML architecture on cloud platforms)

Role Summary
We are seeking an experienced AI/ML Architect to define and lead the design, development, and scaling of GenAI-driven solutions across our learning and enterprise platforms. This is a senior technical leadership role in which you will work closely with the CTO and product leadership to architect intelligent systems powered by LLMs, RAG pipelines, and multi-agent orchestration. You will own the AI solution architecture end to end, from model selection and training frameworks to infrastructure, automation, and observability. The ideal candidate will have deep expertise in GenAI systems and a strong grasp of production-grade deployment practices across the stack.

Must-Have Skills
- AI/ML solution architecture experience with production-grade systems
- Strong background in LLM fine-tuning (SFT, LoRA, PEFT) and RAG frameworks
- Experience with vector databases (FAISS, Pinecone) and embedding generation
- Proficiency in LangChain, LangGraph, LangFlow, and prompt engineering
- Deep cloud experience (AWS: Bedrock, ECS, Lambda, S3, IAM)
- Infrastructure automation using Terraform; CI/CD via GitHub Actions or CodePipeline
- Backend API architecture using FastAPI or Node.js
- Monitoring and observability using Langfuse, LangWatch, OpenTelemetry
- Python, Bash scripting, and low-code/no-code tools (e.g., n8n)

Bonus Skills
- Hands-on experience with multi-agent orchestration frameworks (CrewAI, AutoGen)
- Experience integrating AI/chatbots into web, mobile, or LMS platforms
- Familiarity with enterprise security, data governance, and compliance frameworks
- Exposure to real-time analytics and event-driven architecture

You'll Be Responsible For
- Defining the AI/ML architecture strategy and roadmap
- Leading design and development of GenAI-powered products and services
- Architecting scalable, modular, and automated AI systems
- Driving experimentation with new models, APIs, and frameworks
- Ensuring robust integration between model, infra, and app layers
- Providing technical guidance and mentorship to engineering teams
- Enabling production-grade performance, monitoring, and governance
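For illustration only, here is a minimal sketch of the retrieval step a RAG pipeline performs before prompting an LLM. The hashed bag-of-words `embed` function and the corpus strings are invented stand-ins, not any of the frameworks named above; a production system would use a trained embedding model with a vector database such as FAISS or Pinecone.

```python
import math

# Toy "embedding": hashed bag-of-words vector (illustrative only).
def embed(text, dim=64):
    vec = [0.0] * dim
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(query, corpus, k=2):
    # Rank documents by cosine similarity to the query embedding.
    q = embed(query)
    scored = sorted(corpus, key=lambda doc: cosine(q, embed(doc)), reverse=True)
    return scored[:k]

corpus = [
    "Terraform automates AWS infrastructure provisioning",
    "LoRA adapters fine-tune large language models cheaply",
    "Step Functions orchestrate serverless workflows",
]
top = retrieve("fine-tune a large language model", corpus, k=1)
print(top[0])
```

The retrieved passages would then be stuffed into the LLM prompt as context, which is the core of the RAG pattern the listing refers to.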

Posted 2 weeks ago

Apply

5.0 - 10.0 years

0 - 0 Lacs

Hyderabad, Bengaluru

Hybrid

Role & responsibilities: AWS Lambda, AWS EC2, AWS S3, RESTful APIs, Java

Posted 2 weeks ago

Apply

6.0 - 11.0 years

20 - 30 Lacs

Bhopal, Hyderabad, Pune

Hybrid

Hello, greetings from NewVision Software!

We are hiring on an immediate basis for the role of Senior / Lead Python Developer + AWS | NewVision Software | Pune, Hyderabad & Bhopal | Full-time. Candidates who can join immediately or within 15 days are preferred. Please find the job details and description below.

Office Locations:
- NewVision Software, Pune HQ Office: 701 & 702, Pentagon Tower, P1, Magarpatta City, Hadapsar, Pune, Maharashtra - 411028, India
- NewVision Software: The Hive Corporate Capital, Financial District, Nanakaramguda, Telangana - 500032
- NewVision Software: IT Plaza, E-8, Bawadiya Kalan Main Rd, near Aura Mall, Gulmohar, Fortune Pride, Shahpura, Bhopal, Madhya Pradesh - 462039

Senior Python and AWS Developer

Role Overview:
We are looking for a skilled senior Python developer with a strong background in AWS cloud services to join our team. The ideal candidate will be responsible for designing, developing, and maintaining robust backend systems, ensuring high performance and responsiveness to requests from the front end.

Responsibilities:
- Develop, test, and maintain scalable web applications using Python and Django.
- Design and manage relational databases with PostgreSQL, including schema design and optimization.
- Build RESTful APIs and integrate with third-party services as needed.
- Work with AWS services including EC2, EKS, ECR, S3, Glue, Step Functions, EventBridge Rules, Lambda, SQS, SNS, and RDS.
- Collaborate with front-end developers to deliver seamless end-to-end solutions.
- Write clean, efficient, and well-documented code following best practices.
- Implement security and data protection measures in applications.
- Optimize application performance and troubleshoot issues as they arise.
- Participate in code reviews, testing, and continuous integration processes.
- Stay current with the latest trends and advancements in Python, Django, and database technologies.
- Mentor junior Python developers.

Requirements:
- 6+ years of professional experience in Python development.
- Strong proficiency with the Django web framework.
- Experience working with PostgreSQL, including complex queries and performance tuning.
- Familiarity with RESTful API design and integration.
- Strong understanding of OOP, SOLID principles, and design patterns.
- Strong knowledge of Python multithreading and multiprocessing.
- Experience with AWS services: S3, Glue, Step Functions, EventBridge Rules, Lambda, SQS, SNS, IAM, Secrets Manager, KMS, and RDS.
- Understanding of version control systems (Git).
- Knowledge of security best practices and application deployment.
- Basic understanding of microservices architecture.
- Strong problem-solving skills and attention to detail.
- Excellent communication and collaboration skills.

Nice to Have:
- Experience with Docker, Kubernetes, or other containerization tools.
- Familiarity with front-end technologies (React).
- Experience with CI/CD pipelines and DevOps practices.
- Experience with infrastructure-as-code tools like Terraform.

Education: Bachelor's degree in computer science, engineering, or a related field (or equivalent experience).

Do share your resume with my email address: imran.basha@newvision-software.com

Please share your experience details:
- Total Experience:
- Relevant Experience (Python: Yrs, AWS: Yrs, PostgreSQL: Yrs, REST API: Yrs, Django: Yrs):
- Current CTC:
- Expected CTC (LPA):
- Notice / Serving (LWD):
- Any offer in hand:
- Current Location:
- Preferred Location:
- Education:

Please share your resume and the above details for the hiring process to: imran.basha@newvision-software.com
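As a small illustration of the multithreading requirement above: threads suit I/O-bound work (database queries, AWS API calls) because the GIL is released while a thread blocks on I/O, whereas CPU-bound work calls for multiprocessing. The `fetch_record` function below is an invented stand-in for such an I/O call.

```python
from concurrent.futures import ThreadPoolExecutor

# Simulated I/O-bound task: in a real service this might be a PostgreSQL
# query or an AWS API call (the name and payload are illustrative only).
def fetch_record(record_id):
    return {"id": record_id, "status": "ok"}

# A thread pool fans the calls out; map() returns results in input order.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(fetch_record, range(5)))

print([r["id"] for r in results])  # [0, 1, 2, 3, 4]
```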

Posted 2 weeks ago

Apply

8.0 - 12.0 years

30 - 35 Lacs

Gurugram

Work from Office

Role Description
- Write and maintain build/deploy scripts.
- Work with the Sr. Systems Administrator to deploy and implement new cloud infrastructure and designs.
- Manage existing AWS deployments and infrastructure.
- Build scalable, secure, and cost-optimized AWS architecture, ensuring best practices are followed and implemented.
- Assist in deployment and operation of security tools and monitoring.
- Automate tasks where appropriate to improve response times to issues and tickets.
- Collaborate with cross-functional teams: work closely with development, operations, and security teams to ensure a cohesive approach to infrastructure and application security, and participate in regular security reviews and planning sessions.
- Incident response and recovery: participate in incident response planning and execution, including post-mortem analysis and implementation of preventive measures.
- Continuous improvement: regularly review and update security practices and procedures to adapt to the evolving threat landscape.
- Analyze and remediate vulnerabilities, and advise developers of vulnerabilities requiring updates to code.
- Create and maintain documentation and diagrams for application, security, and network configurations.
- Ensure systems are monitored using tools such as Datadog, and that issues are logged and reported to the required parties.

Technical Skills
- In-depth experience with system administration, and with provisioning and managing cloud infrastructure and security monitoring.
- Experience with infrastructure/security monitoring and operation of a product or service.
- Experience with containerization and orchestration tools such as Docker and Kubernetes/EKS.
- Hands-on experience creating system architectures and leading architecture discussions at a team or multi-team level.
- Understanding of how to model system infrastructure in the cloud with Amazon Web Services (AWS), AWS CloudFormation, or Terraform.
- Strong knowledge of cloud infrastructure services (AWS preferred) such as Lambda, Cognito, SQS, KMS, S3, Step Functions, Glue/Spark, CloudWatch, Secrets Manager, Simple Email Service, and CloudFront.
- Familiarity with coding, scripting, and testing tools (preferred).
- Strong interpersonal, coordination, and multi-tasking skills.
- Ability to function both independently and collaboratively as part of a team to achieve desired results.
- Aptitude to pick up new concepts and technology rapidly, and the ability to explain them to both business and tech stakeholders.
- Ability to adapt and succeed in a fast-paced, dynamic startup environment.
- Experience with Nessus and other related infosec tooling.

Nice-to-Have Skills
- Ability to work independently and follow through to achieve desired results.
- Quick learner, with the ability to work calmly under pressure and with tight deadlines.

Qualifications
- BA/BS degree in Computer Science, Computer Engineering, or a related field; MS degree in Computer Science or Computer Engineering preferred.

Posted 2 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

Maharashtra

On-site

As a talented software developer at our company, you will have the opportunity to showcase your passion for coding and product building. Your problem-solving mindset and love for taking on new challenges will make you a valuable addition to our rockstar engineering team.

You will be responsible for designing and developing robust, scalable, and secure backend architectures using Django. Your focus will be on backend development to ensure the smooth functioning of web applications and systems. Creating high-quality RESTful APIs to facilitate seamless communication between frontend, backend, and other services will also be a key part of your role.

In addition, you will play a crucial role in designing, implementing, and maintaining database schemas to ensure data integrity, performance, and security. You will work on ensuring the scalability and reliability of our backend infrastructure on AWS, aiming for zero downtime of systems. Writing clean, maintainable, and efficient code while following industry standards and best practices will be essential.

Collaboration is key in our team, and you will conduct code reviews, provide feedback to team members, and work closely with frontend developers, product managers, and designers to plan and optimize features. You will break down high-level business problems into smaller components and build efficient systems to address them. Staying updated with the latest technologies such as LLM frameworks and implementing them as needed will be part of your continuous learning process. Your skills in Python, Django, SQL/PostgreSQL databases, and AWS services will be put to good use as you optimize systems, identify bottlenecks, and resolve them to enhance efficiency.

To qualify for this role, you should have a Bachelor's degree in Computer Science or a related field, along with at least 2 years of experience in full-stack web development with a focus on backend development.
Proficiency in Python and Django, along with experience with Django Rest Framework, is required. Strong problem-solving skills and excellent communication and collaboration abilities are also essential for success in this position.

Posted 3 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

Karnataka

On-site

You should have strong knowledge of SQL and Python. Experience in Snowflake is preferred. Additionally, you should have knowledge of AWS services such as S3, Lambda, IAM, Step Functions, SNS, SQS, ECS, and DynamoDB. Expertise in data movement technologies such as ETL/ELT is important. Good-to-have skills include knowledge of DevOps and Continuous Integration/Continuous Delivery with tools such as Maven, Jenkins, Stash, Control-M, and Docker. Experience in automation and REST APIs would be beneficial for this role.

Posted 3 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

As an AI Developer with 5-8 years of experience, you will be based in Pune with a hybrid working model. You should be able to join immediately or within 15 days.

Your primary responsibility will be to develop and maintain Python applications, focusing on API building, data processing, and transformation. You will utilize LangGraph to design and manage complex language model workflows, and work with machine learning and text processing libraries to deploy agents.

Your must-have skills include proficiency in Python programming with a strong understanding of object-oriented programming concepts, and extensive experience with data manipulation libraries like Pandas and NumPy to ensure clean, efficient, and maintainable code. Additionally, you will develop and maintain real-time data pipelines and microservices to ensure seamless data flow and integration across systems. On the SQL side, you are expected to have a strong understanding of basic query syntax, including joins, WHERE, and GROUP BY clauses.

Good-to-have skills include practical experience in AI development applications, knowledge of parallel processing and multi-threading/multi-processing to optimize data fetching and execution times, familiarity with SQLAlchemy or similar libraries for data fetching, and experience with AWS cloud services such as EC2, EKS, Lambda, and Postgres.

If you are looking to work in a dynamic environment where you can apply your skills in Python, SQL, Pandas, NumPy, agentic AI development, CI/CD pipelines, AWS, and Generative AI, this role might be the perfect fit for you.
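The SQL expectations above (joins, WHERE, GROUP BY) can be illustrated with a self-contained sketch using SQLite from the Python standard library; the schema, table names, and data are invented for the example.

```python
import sqlite3

# Illustrative schema; all names and values are made up for this sketch.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER, name TEXT);
    CREATE TABLE orders (id INTEGER, customer_id INTEGER, amount REAL);
    INSERT INTO customers VALUES (1, 'Asha'), (2, 'Ravi');
    INSERT INTO orders VALUES (1, 1, 100.0), (2, 1, 50.0), (3, 2, 75.0);
""")

# JOIN the two tables, filter with WHERE, then aggregate per customer.
rows = conn.execute("""
    SELECT c.name, SUM(o.amount) AS total
    FROM orders o
    JOIN customers c ON c.id = o.customer_id
    WHERE o.amount > 0
    GROUP BY c.name
    ORDER BY total DESC
""").fetchall()

print(rows)  # [('Asha', 150.0), ('Ravi', 75.0)]
```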

Posted 3 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Full Stack Developer at Codebase, you will be part of a young software services company that boasts a team of tech-savvy developers. Since our inception in the spring of 2018, we have been on a rapid growth trajectory, catering to software product companies worldwide, with a primary focus on enterprise SaaS, eCommerce, cloud solutions, and application development.

Your role will entail a high level of proficiency in TypeScript, with hands-on experience in creating scalable front-end and back-end applications using contemporary frameworks and cloud services. The ideal candidate for this position excels in a dynamic, startup-like setting and is adept at working both autonomously and collaboratively within a team environment.

Your responsibilities will include developing responsive and high-performance front-end applications using Qwik, TailwindCSS, and DaisyUI. Additionally, you will be tasked with writing infrastructure-as-code and backend services utilizing AWS CDK, AppSync (GraphQL), and Lambda. Your input will be valuable in making design, architecture, and implementation decisions, while also participating in code reviews, writing tests, and enhancing the overall developer experience. Collaboration with stakeholders and offering integration support will also be key aspects of your role.
**Technical Skills Required:**
- At least 3 years of relevant experience in TypeScript for building modern web applications
- Proficiency in React-like frameworks, component-based architecture, and client-side rendering
- Frontend expertise in Qwik (or similar frameworks like Astro, SolidJS), TailwindCSS, and DaisyUI
- Backend proficiency in AWS AppSync (GraphQL), Lambda, and DynamoDB
- Familiarity with infrastructure tools such as AWS CDK (TypeScript) and GitHub Actions

**Desired Skills:**
- Experience in GraphQL schema design
- AWS certifications would be a plus

The expected working hours necessitate a substantial overlap with the window between 6 AM and 8 PM MST (5:30 PM to 2 AM IST). If you are passionate about cutting-edge technology, enjoy working in a fast-paced environment, and are ready to contribute to impactful projects, we invite you to join our team at Codebase.

Posted 3 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

You will be responsible for mentoring and guiding a team in the role of Technical Lead. In addition, you will support the Technical Architect in designing and exploring new services and modules. Your expertise should include hands-on experience in Java, Spring Boot, and various Spring modules such as Spring MVC, Spring JPA, and Spring Actuators. Furthermore, you must have practical experience with various AWS services including EC2, S3, Lambda, API Gateway, EKS, RDS, Fargate, and CloudFormation. Proficiency in microservices-based architecture and RESTful web services is essential for this role. The ideal candidate will have a notice period of 15 days or less, with preference given to immediate joiners.

Posted 3 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Senior Application System Development Engineer, you will be responsible for designing, developing, and maintaining applications both independently and collaboratively within a team. Your primary focus will be to adhere to project priorities and schedules, ensuring timely completion of assigned projects while also striving to enhance system quality and efficiency through process improvements.

Your key responsibilities will include designing, developing, and maintaining applications, upholding the quality and security of development tasks, and following best design and development practices. You will work in alignment with project priorities and schedules, while also embracing agile methodologies and project management practices. Additionally, you will assist in technical and project risk management, collaborate with the team on daily development tasks, and provide support and guidance to junior developers.

To excel in this role, you must possess expertise in application development using HTML5, CSS3, and Angular, backed by a minimum of 5 years of relevant experience and a Bachelor's degree. Your experience should include creating application UIs in projects using frameworks like AngularJS or Angular Material, as well as a strong understanding of design patterns, application architecture, and SOLID principles.

Furthermore, you should demonstrate proficiency in writing unit test cases using Jasmine/Jest, E2E test cases with Protractor, creating tool scripts, and implementing interfaces for applications using web sockets. Experience with complete software development life cycles, agile methodologies, technical mentoring, and strong communication skills are essential for this role.

Ideally, you would have hands-on experience with frameworks like NRWL, knowledge of audio domains and related frameworks, and exposure to working in multicultural environments.
Proficiency in project management tools such as Jira, Contour, and Confluence, and configuration management systems like Git or SVN, is preferred. Strong experience with AWS services, Node, TypeScript, JavaScript, and frameworks like NestJS, as well as familiarity with CloudFormation for IaC, is highly valued.

As a Senior Application System Development Engineer at Shure, you will join a global audio equipment manufacturer with a rich history of quality and innovation. If you are passionate about creating a diverse, equitable, and inclusive work environment and possess the skills required for this role, we encourage you to apply and be part of our dynamic team.

Shure is a renowned audio brand with a mission to be the most trusted worldwide, offering a supportive and inclusive culture, flexible work arrangements, and opportunities for growth. Headquartered in the United States, Shure operates globally with regional offices and facilities across the Americas, EMEA, and Asia, and has made a meaningful impact in the audio industry for nearly a century.

Posted 3 weeks ago

Apply

1.0 - 5.0 years

0 Lacs

Kochi, Kerala

On-site

As a Java Backend Developer in our IoT domain team based in Kochi, you will be responsible for designing, developing, and deploying scalable microservices using Spring Boot, SQL databases, and AWS services. Your role will involve leading the backend development team, implementing DevOps best practices, and optimizing cloud infrastructure.

Your key responsibilities will include architecting and implementing high-performance, secure backend services using Java (Spring Boot), developing RESTful APIs and event-driven microservices with a focus on scalability and reliability, designing and optimizing SQL databases (PostgreSQL, MySQL), and deploying applications on AWS using services like ECS, Lambda, RDS, S3, and API Gateway. You will also be responsible for implementing CI/CD pipelines, monitoring and improving backend performance, ensuring security best practices, and authentication using OAuth, JWT, and IAM roles.

The required skills for this role include proficiency in Java (Spring Boot, Spring Cloud, Spring Security), microservices architecture, API development, SQL (PostgreSQL, MySQL), ORM (JPA, Hibernate), DevOps tools (Docker, Kubernetes, Terraform, CI/CD, GitHub Actions, Jenkins), AWS cloud services (EC2, Lambda, ECS, RDS, S3, IAM, API Gateway, CloudWatch), messaging systems (Kafka, RabbitMQ, SQS, MQTT), testing frameworks (JUnit, Mockito, integration testing), and logging and monitoring tools (ELK Stack, Prometheus, Grafana).

Preferred skills that would be beneficial for this role include experience in the IoT domain, work experience in startups, event-driven architecture using Apache Kafka, knowledge of Infrastructure as Code (IaC) with Terraform, and exposure to serverless architectures.

In return, we offer a competitive salary, performance-based incentives, the opportunity to lead and mentor a high-performing tech team, hands-on experience with cutting-edge cloud and microservices technologies, and a collaborative and fast-paced work environment.
If you have experience in the IoT domain and are looking for a full-time role with a day shift schedule in an in-person work environment, we encourage you to apply for this exciting opportunity in Kochi.

Posted 3 weeks ago

Apply

10.0 - 14.0 years

0 Lacs

Karnataka

On-site

As a Workday Technical Specialist at Nasdaq Technology in Bangalore, India, you will be a key player in delivering complex technical systems to both new and existing customers. Your role will involve discovering and implementing innovative technologies within the FinTech industry, contributing to the continuous revolutionizing of markets and adoption of new solutions. You will work as part of the Enterprise Solutions team, driving central initiatives across Nasdaq's corporate technology portfolio of Software Products and Software Services.

In this role, you will collaborate with a global team to deliver critical solutions and services to Nasdaq's finance processes and operations. Your responsibilities will include designing and configuring applications to meet business requirements, maintaining integrations with internal systems and third-party vendors, and documenting technical solutions for future reference. You will also participate in end-to-end testing, build test cases, and follow established processes to ensure the quality of your work.

To excel in this position, you are expected to have at least 10 to 13 years of software development experience, expertise in Workday Integration tools, web services programming, and a strong understanding of the Workday Security model. A Bachelor's or Master's degree in computer science or a related field is required. Additionally, knowledge of Workday Finance modules, experience in multinational organizations, familiarity with middleware systems and ETL tools, as well as exposure to AWS services, would be beneficial.

Nasdaq offers a vibrant and entrepreneurial work environment that encourages employees to take initiative, challenge the status quo, and embrace work-life balance. You will have the opportunity to grow within the Enterprise Solutions team, collaborate with experts globally, and contribute to cutting-edge technology solutions.
If you resonate with Nasdaq's values and are eager to deliver top technology solutions to today's markets, we encourage you to apply in English at your earliest convenience. As part of the selection process, we will review applications and aim to provide feedback within 2-3 weeks.

Join us at Nasdaq for an enriching career experience that includes an annual monetary bonus, opportunities to become a Nasdaq shareholder, health insurance programs, flexible working schedules, internal mentorship initiatives, and a wide range of online learning resources. Embrace the culture of innovation, connectivity, and empowerment at Nasdaq, where diversity and inclusion are celebrated and every individual is valued for their authentic self.

Posted 3 weeks ago

Apply

10.0 - 15.0 years

0 Lacs

Maharashtra

On-site

As a Lead Software Engineer at NEC Software Solutions (India) Private Limited, you will be part of a dynamic team working on innovative applications that utilize AI to enhance efficiency within the Public Safety sector. With 10-15 years of experience, your primary expertise in Python and React will be crucial in developing new functionalities for an AI-enabled product roadmap. Your role will involve collaborating closely with the product owner and Solution Architect to create robust, market-ready software products meeting the highest engineering and user experience standards.

Your responsibilities will include writing reusable, testable, and efficient Python code; working with document and image processing libraries, API Gateway, backend CRUD operations, and cloud infrastructure, preferably AWS. Additionally, your expertise in TypeScript and React for frontend development, designing clean user interfaces, and backend programming for web applications will be instrumental in delivering software features from concept to production.

Your personal attributes such as problem-solving skills, inquisitiveness, autonomy, motivation, integrity, and big-picture awareness will play a vital role in contributing to the team's success. Moreover, you will have the opportunity to develop new skills, lead technical discussions, and actively engage in self-training and external training sessions to enhance your capabilities.

As a Senior Full Stack Engineer, you will actively participate in discussions with the Product Owner and Solution Architect, ensure customer-centric development, oversee the software development lifecycle, and implement secure, scalable, and resilient solutions for NECSWS products. Your role will also involve providing support for customers and production systems to ensure seamless operations.
The ideal candidate for this role should hold a graduate degree, possess outstanding leadership qualities, and have a strong background in IT, preferably with experience in the public sector or emergency services. If you thrive in a challenging environment, enjoy working with cutting-edge technologies, and are passionate about delivering high-quality software solutions, we invite you to join our team at NEC Software Solutions (India) Private Limited.

Posted 3 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Data Platform Engineer Lead at Barclays, your role is crucial in building and maintaining systems that collect, store, process, and analyze data, including data pipelines, data warehouses, and data lakes. Your responsibility includes ensuring the accuracy, accessibility, and security of all data.

To excel in this role, you should have hands-on coding experience in Java or Python and a strong understanding of AWS development, encompassing services such as Lambda, Glue, Step Functions, IAM roles, and more. Proficiency in building efficient data pipelines using Apache Spark and AWS services is essential. You are expected to possess strong technical acumen, troubleshoot complex systems, and apply sound engineering principles to problem-solving. Continuous learning and staying updated with new technologies are key attributes for success in this role.

Design experience across diverse projects in which you have led the technical development is advantageous, especially in the Big Data/Data Warehouse domain within financial services. Additional skills in developing enterprise-level software solutions, knowledge of file formats like JSON, Iceberg, and Avro, and familiarity with streaming services such as Kafka, MSK, and Kinesis are highly valued. Effective communication, collaboration with cross-functional teams, documentation skills, and experience in mentoring team members are also important aspects of this role.

Your accountabilities will include the construction and maintenance of data architectures and pipelines, designing and implementing data warehouses and data lakes, developing processing and analysis algorithms, and collaborating with data scientists to deploy machine learning models. You will also be expected to contribute to strategy, drive requirements for change, manage resources and policies, deliver continuous improvements, and demonstrate leadership behaviors if in a leadership role.
Ultimately, as a Data Platform Engineer Lead at Barclays in Pune, you will play a pivotal role in ensuring data accuracy, accessibility, and security, while leveraging your technical expertise and collaborative skills to drive innovation and excellence in data management.
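As a toy illustration of the pipeline work described above, the sketch below filters and aggregates JSON-lines records in plain Python; the field names and values are invented, and a production pipeline at this scale would use Apache Spark over AWS storage as the listing states.

```python
import io
import json

# Minimal batch "pipeline" stage: read JSON-lines records, filter bad rows,
# and aggregate. io.StringIO stands in for a file or S3 object stream.
raw = io.StringIO(
    '{"symbol": "ABC", "qty": 10, "price": 2.5}\n'
    '{"symbol": "XYZ", "qty": 0, "price": 9.0}\n'
    '{"symbol": "ABC", "qty": 4, "price": 3.0}\n'
)

totals = {}
for line in raw:
    record = json.loads(line)
    if record["qty"] <= 0:  # drop empty fills
        continue
    notional = record["qty"] * record["price"]
    totals[record["symbol"]] = totals.get(record["symbol"], 0.0) + notional

print(totals)  # {'ABC': 37.0}
```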

Posted 3 weeks ago

Apply

4.0 - 9.0 years

10 - 20 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Role & responsibilities
- Strong experience in core Python in application development
- AWS application development experience (backend services development)
- AWS services: API Gateway, Lambda Functions, Step Functions, EKS/ECS/EC2, S3, SQS/SNS/EventBridge, RDS, CloudWatch, ELB
- System design: LLD (low-level design documents) using OOP, SOLID, and other design principles
- Gen AI (nice to have): knowledge of LLMs, RAG, fine-tuning, LangChain, prompt engineering
- Experience working with stakeholders (business and technology) on requirement refinement to create acceptance criteria, HLD, and LLD
- Hands-on in designing (competent with design tools like draw.io, Lucid, Visio, etc.) and developing end-to-end tasks
- Hands-on with at least one RDBMS (PostgreSQL, MySQL, Oracle DB); knowledge of ORM (SQLAlchemy); good to have: MongoDB, DynamoDB
- Excellent communication skills (written and oral)
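One way to picture the SOLID emphasis above is a small dependency-inversion sketch: the service depends on an abstract queue rather than a concrete broker, so a real SQS or SNS client could be swapped in behind the same interface. All class and method names here are illustrative, not from any listed codebase.

```python
from abc import ABC, abstractmethod

# Dependency inversion (the "D" in SOLID): high-level code depends on an
# abstraction, and the concrete implementation is injected.
class MessageQueue(ABC):
    @abstractmethod
    def publish(self, message: str) -> None: ...

class InMemoryQueue(MessageQueue):
    """Test double; production code would wrap a broker client instead."""
    def __init__(self):
        self.messages = []

    def publish(self, message: str) -> None:
        self.messages.append(message)

class OrderService:
    def __init__(self, queue: MessageQueue):
        self.queue = queue  # injected dependency, easy to fake in tests

    def place_order(self, order_id: str) -> None:
        self.queue.publish(f"order-created:{order_id}")

queue = InMemoryQueue()
OrderService(queue).place_order("A42")
print(queue.messages)  # ['order-created:A42']
```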

Posted 3 weeks ago

Apply

7.0 - 11.0 years

0 Lacs

Maharashtra

On-site

As a Solutions Architect with over 7 years of experience, you will have the opportunity to leverage your expertise in cloud data solutions to architect scalable and modern solutions on AWS. In this role at Quantiphi, you will be a key member of our high-impact engineering teams, working closely with clients to solve complex data challenges and design cutting-edge data analytics solutions.

Your responsibilities will include acting as a trusted advisor to clients, leading discovery/design workshops with global customers, and collaborating with AWS subject matter experts to develop compelling proposals and Statements of Work (SOWs). You will also represent Quantiphi in various forums such as tech talks, webinars, and client presentations, providing strategic insights and solutioning support during pre-sales activities.

To excel in this role, you should have a strong background in AWS Data Services including DMS, SCT, Redshift, Glue, Lambda, EMR, and Kinesis. Your experience in data migration and modernization, particularly from Oracle, Teradata, and Netezza to AWS, will be crucial. Hands-on experience with ETL tools such as SSIS, Informatica, and Talend, as well as a solid understanding of OLTP/OLAP, Star and Snowflake schemas, and data modeling methodologies, is essential for success in this position.

Additionally, familiarity with backend development using Python, APIs, and stream processing technologies like Kafka, along with knowledge of distributed computing concepts including Hadoop and MapReduce, will be beneficial. A DevOps mindset with experience in CI/CD practices and Infrastructure as Code is also desired.

Joining Quantiphi as a Solutions Architect is more than just a job: it is an opportunity to shape digital transformation journeys and influence business strategies across various industries. If you are a cloud data enthusiast looking to make a significant impact in the field of data analytics, this role is perfect for you.

Posted 3 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

ahmedabad, gujarat

On-site

As an AWS Data Engineer at Sufalam Technologies, located in Ahmedabad, India, you will be responsible for designing and implementing data engineering solutions on AWS. Your role will involve developing data models, managing ETL processes, and ensuring the efficient operation of data warehousing solutions. Collaboration with the Finance, Data Science, and Product teams is crucial to understand reconciliation needs and ensure timely data delivery. Your expertise will contribute to data analytics activities supporting business decision-making and strategic goals.

Key responsibilities include designing and implementing scalable and secure ETL/ELT pipelines for processing financial data, collaborating closely with various teams to understand reconciliation needs and ensure timely data delivery, implementing monitoring and alerting for pipeline health and data quality, maintaining detailed documentation on data flows, models, and reconciliation logic, and ensuring compliance with financial data handling and audit standards.

To excel in this role, you should have 5-6 years of experience in data engineering with a strong focus on AWS data services. Hands-on experience with AWS Glue, Lambda, S3, Redshift, Athena, Step Functions, Lake Formation, and IAM is essential for secure data governance. A solid understanding of data reconciliation processes in the finance domain, strong SQL skills, experience with data warehousing and data lakes, and proficiency in Python or PySpark for data transformation are required. Knowledge of financial accounting principles or experience working with financial datasets (AR, AP, General Ledger, etc.) would be beneficial.
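The financial-reconciliation work this listing describes reduces to a comparison between two row sets keyed by transaction id. As a hedged illustration only (the function and field names such as `reconcile` and `txn_id` are hypothetical, not taken from the posting, and in practice this logic would live in a Glue or PySpark job rather than plain Python):

```python
from decimal import Decimal

def reconcile(ledger_rows, bank_rows, key="txn_id", amount="amount"):
    """Compare two row sets by transaction id and flag mismatches.

    Returns (matched, mismatched, missing), where `missing` holds ids
    present in the ledger but absent from the bank feed. Amounts are
    compared as Decimal to avoid float rounding in financial data.
    """
    bank = {r[key]: Decimal(str(r[amount])) for r in bank_rows}
    matched, mismatched, missing = [], [], []
    for row in ledger_rows:
        txn = row[key]
        if txn not in bank:
            missing.append(txn)
        elif bank[txn] == Decimal(str(row[amount])):
            matched.append(txn)
        else:
            mismatched.append(txn)
    return matched, mismatched, missing
```

In a production pipeline the mismatch and missing lists would typically feed the monitoring and alerting the posting mentions, rather than being returned to a caller.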

Posted 3 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

chennai, tamil nadu

On-site

You should have a solid understanding of object-oriented programming and software design patterns. Your responsibilities will include designing, developing, and maintaining web/service/desktop applications using .NET, .NET Core, and React.js. Additionally, you will work on React.js/MVC front-end development; AWS services such as ASG, EC2, S3, Lambda, IAM, AMI, and CloudWatch; Jenkins; and RESTful API development and integration. Familiarity with database technologies such as SQL Server is important, as is ensuring the performance, quality, and responsiveness of applications. Collaboration with cross-functional teams to define, design, and ship new features will be crucial. Experience with version control systems such as Git/TFS is required, with a preference for Git. Excellent communication and teamwork skills are essential, along with familiarity with Agile/Scrum development methodologies. This is a full-time position with benefits including cell phone reimbursement. The work location is in person.

Posted 3 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

noida, uttar pradesh

On-site

You will be responsible for building the most personalized and intelligent news experiences for India's next 750 million digital users. As our Principal Data Engineer, your main tasks will include designing and maintaining data infrastructure to power personalization systems and analytics platforms. This involves ensuring seamless data flow from source to consumption, architecting scalable data pipelines to process massive volumes of user interaction and content data, and developing robust ETL processes for large-scale transformations and analytical processing. You will also create and maintain data lakes/warehouses that consolidate data from multiple sources, optimized for ML model consumption and business intelligence. Additionally, you will implement data governance practices and collaborate with the ML team to ensure the right data availability for recommendation systems.

To excel in this role, you should have a Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field, along with 8-12 years of data engineering experience, including at least 3 years in a senior role. You must possess expert-level SQL skills and have strong experience in the Apache Spark ecosystem (Spark SQL, Streaming, SparkML), as well as proficiency in Python/Scala. Experience with the AWS data ecosystem (Redshift, S3, Glue, EMR, Kinesis, Lambda, Athena) and ETL frameworks (Glue, Airflow) is essential. A proven track record of building large-scale data pipelines in production environments, particularly in high-traffic digital media, will be advantageous. Excellent communication skills are also required, as you will need to collaborate effectively across teams in a fast-paced environment that demands engineering agility.

Posted 3 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

haryana

On-site

As a DevOps Engineer, you will play a crucial role in constructing and managing a robust, scalable, and reliable 0-downtime platform. You will be actively involved in a newly initiated greenfield project that utilizes modern infrastructure and automation tools to support our engineering teams. This presents a valuable opportunity to collaborate with an innovative team, fostering a culture of fresh thinking, integrating AI and automation, and contributing to our cloud-native journey. If you are enthusiastic about automation, cloud infrastructure, and delivering high-quality production-grade platforms, this position provides you with the opportunity to create a significant impact.

Your primary responsibilities will include:

- **Hands-On Development**: Design, implement, and optimize AWS infrastructure by engaging in hands-on development using Infrastructure as Code (IaC) tools.
- **Automation & CI/CD**: Develop and maintain Continuous Integration/Continuous Deployment (CI/CD) pipelines to automate rapid, secure, and seamless deployments.
- **Platform Reliability**: Ensure the high availability, scalability, and resilience of our platform by leveraging managed services.
- **Monitoring & Observability**: Implement and oversee proactive observability using tools like DataDog to monitor system health, performance, and security, ensuring prompt issue identification and resolution.
- **Cloud Security & Best Practices**: Apply cloud and security best practices, including configuring networking, encryption, secrets management, and identity/access management.
- **Continuous Improvement**: Contribute innovative ideas and solutions to enhance our DevOps processes.
- **AI & Future Tech**: Explore opportunities to incorporate AI into our DevOps processes and contribute towards AI-driven development.

Your experience should encompass proficiency in the following technologies and concepts:

- **Tech Stack**: Terraform, Terragrunt, Helm, Python, Bash, AWS (EKS, Lambda, EC2, RDS/Aurora), Linux OS, and GitHub Actions.
- **Strong Expertise**: Hands-on experience with Terraform, IaC principles, CI/CD, and the AWS ecosystem.
- **Networking & Cloud Configuration**: Proven experience with Networking (VPC, Subnets, Security Groups, API Gateway, Load Balancing, WAF) and Cloud configuration (Secrets Manager, IAM, KMS).
- **Kubernetes & Deployment Strategies**: Comfortable with Kubernetes, ArgoCD, Istio, and deployment strategies like blue/green and canary.
- **Cloud Security Services**: Familiarity with Cloud Security services such as Security Hub, GuardDuty, Inspector, and vulnerability observability.
- **Observability Mindset**: Strong belief in measuring everything and utilizing tools like DataDog for platform health and security visibility.
- **AI Integration**: Experience with embedding AI into DevOps processes is considered advantageous.

This role presents an exciting opportunity to contribute to cutting-edge projects, collaborate with a forward-thinking team, and drive innovation in the realm of DevOps engineering.

Posted 3 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

noida, uttar pradesh

On-site

As a DevOps Engineer, you will play a key role in building and maintaining a robust, scalable, and reliable 0-downtime platform. You will work hands-on with a recently kick-started greenfield initiative with modern infrastructure and automation tools to support our engineering teams. This is a great opportunity to work with a forward-thinking team and have the freedom to approach problems with fresh thinking, embedding AI and automation to help shape our cloud-native journey. If you are passionate about automation, cloud infrastructure, and delivering high-quality production-grade platforms, this role offers the chance to make a real impact.

Key Responsibilities

- Hands-On Development: Design, implement, and optimize AWS infrastructure through hands-on development using Infrastructure as Code tools.
- Automation & CI/CD: Develop and maintain CI/CD pipelines to automate fast, secure, and seamless deployments.
- Platform Reliability: Ensure high availability, scalability, and resilience of our platform, leveraging managed services.
- Monitoring & Observability: Implement and manage proactive observability using DataDog and other tools to monitor system health, performance, and security, ensuring that we can see and fix issues before they impact users.
- Cloud Security & Best Practices: Apply cloud and security best practices, including patching and secure configuration of networking, encryption (at rest and in transit), secrets, and identity/access management.
- Continuous Improvement: Contribute ideas and solutions to improve our DevOps processes.
- AI & Future Tech: We aim to push the boundaries of AI-driven development. If you have ideas on how to embed AI into our DevOps processes, you will have the space to explore them.

Your Experience

- Tech stack: We use Terraform, Terragrunt, Helm, Python, Bash, AWS (EKS, Lambda, EC2, RDS/Aurora), Linux OS, and GitHub Actions. You are comfortable with all of these and have strong hands-on experience with Terraform and IaC principles, CI/CD, and the AWS ecosystem.
- Proven experience with Networking (VPC, Subnets, Security Groups, API Gateway, Load Balancing, WAF) and Cloud configuration (Secrets Manager, IAM, KMS).
- Comfortable with Kubernetes, ArgoCD, Istio, and deployment strategies (blue/green and canary).
- Familiarity with Cloud Security services such as Security Hub, GuardDuty, Inspector, and vulnerability management/patching.
- Observability Mindset: You believe in measuring everything. You have worked with DataDog (or similar) to ensure teams have visibility into platform health and security.
- Experience with embedding AI into DevOps processes is advantageous.

Posted 3 weeks ago

Apply

2.0 - 4.0 years

4 - 6 Lacs

Gurugram

Work from Office

Company Overview

Incedo is a US-based consulting, data science and technology services firm with over 3,000 people helping clients from our six offices across the US, Mexico, and India. We help our clients achieve competitive advantage through end-to-end digital transformation. Our uniqueness lies in bringing together strong engineering, data science, and design capabilities coupled with deep domain understanding. We combine services and products to maximize business impact for our clients in the telecom, banking, wealth management, product engineering, and life science & healthcare industries. Working at Incedo will provide you an opportunity to work with industry-leading client organizations, deep technology and domain experts, and global teams. Incedo University, our learning platform, provides ample learning opportunities starting with a structured onboarding program and carrying through various stages of your career. A variety of fun activities is also an integral part of our friendly work environment. Our flexible career paths allow you to grow into a program manager, a technical architect, or a domain expert based on your skills and interests. Our mission is to enable our clients to maximize business impact from technology by harnessing the transformational impact of emerging technologies and bridging the gap between business and technology.

Role Description

- Write and maintain build/deploy scripts.
- Work with the Sr. Systems Administrator to deploy and implement new cloud infrastructure and designs.
- Manage existing AWS deployments and infrastructure.
- Build scalable, secure, and cost-optimized AWS architecture.
- Ensure best practices are followed and implemented.
- Assist in deployment and operation of security tools and monitoring.
- Automate tasks where appropriate to enhance response times to issues and tickets.
- Collaborate with cross-functional teams: work closely with development, operations, and security teams to ensure a cohesive approach to infrastructure and application security; participate in regular security reviews and planning sessions.
- Incident response and recovery: participate in incident response planning and execution, including post-mortem analysis and implementation of preventive measures.
- Continuous improvement: regularly review and update security practices and procedures to adapt to the evolving threat landscape.
- Analyze and remediate vulnerabilities, and advise developers of vulnerabilities requiring updates to code.
- Create and maintain documentation and diagrams for application, security, and network configurations.
- Ensure systems are monitored using monitoring tools such as Datadog, and that issues are logged and reported to the required parties.

Technical Skills

- Experience with system administration, provisioning and managing cloud infrastructure, and security monitoring.
- In-depth experience with infrastructure/security monitoring and operation of a product or service.
- Experience with containerization and orchestration such as Docker and Kubernetes/EKS.
- Hands-on experience creating system architectures and leading architecture discussions at a team or multi-team level.
- Understanding of how to model system infrastructure in the cloud with Amazon Web Services (AWS), AWS CloudFormation, or Terraform.
- Strong knowledge of cloud infrastructure services (AWS preferred) such as Lambda, Cognito, SQS, KMS, S3, Step Functions, Glue/Spark, CloudWatch, Secrets Manager, Simple Email Service, and CloudFront.
- Familiarity with coding, scripting, and testing tools (preferred).
- Strong interpersonal, coordination, and multi-tasking skills.
- Ability to function both independently and collaboratively as part of a team to achieve desired results.
- Aptitude to pick up new concepts and technology rapidly; ability to explain them to both business and tech stakeholders.
- Ability to adapt and succeed in a fast-paced, dynamic startup environment.
- Experience with Nessus and other related infosec tooling.

Nice-to-Have Skills

- Ability to work independently and follow through to achieve desired results.
- Quick learner, with the ability to work calmly under pressure and with tight deadlines.

Qualifications

BA/BS degree in Computer Science, Computer Engineering, or a related field; MS degree in Computer Science or Computer Engineering preferred.

Posted 3 weeks ago

Apply

6.0 - 8.0 years

18 - 30 Lacs

Pune

Hybrid

Key Skills: Cloud API development, AWS API Gateway, Lambda, Python, JSON, SQL (Oracle, PostgreSQL, MariaDB), Appian, SAIL, MuleSoft, CI/CD, DevOps, Agile, JIRA, security compliance, technical documentation, stakeholder collaboration.

Roles & Responsibilities:

- Collaborate with architects, developers, and project managers to deliver scalable and compliant solutions aligned with business needs.
- Research, evaluate, and implement new infrastructure technologies in line with standards and governance.
- Provide technical consultancy and mentoring to team members on emerging technologies.
- Ensure all deliverables meet high-quality standards, compliance policies, and best practices.
- Prepare and maintain project-related documentation, ensuring adherence to audit and compliance requirements.
- Offer expert support to developers and business users on secure access and API-related queries.
- Work closely with global and regional stakeholders on mandatory, regulatory, and development projects.
- Guarantee system compliance with infrastructure and security policies.
- Develop APIs that interact with databases and cloud storage while implementing appropriate security controls.
- Design and deliver APIs in the cloud through managed services such as AWS API Gateway.
- Support the design and development of integrations with third-party systems.
- Implement DevOps practices and CI/CD pipelines using modern tools and frameworks.
- Follow Agile Scrum methodology and actively participate in sprints using tools like JIRA.

Experience Requirement:

- 6-10 years of experience in designing, developing, and delivering APIs in cloud environments using AWS API Gateway and Lambda functions.
- Strong hands-on experience with Python, JSON, Oracle SQL, PostgreSQL, MariaDB, and API integration tools such as MuleSoft.
- Experience with SAIL and Appian development is highly preferred.
- Background in designing and implementing CI/CD pipelines and applying DevOps principles.
- Knowledge of Agile Scrum methodology and experience in multi-cultural, global team environments.
- Ability to independently manage priorities under pressure, ensuring quality results on complex projects.
- Strong communication and analytical skills with the ability to convey technical solutions to diverse stakeholders.

Education: B.Tech + M.Tech (Dual), B.Tech, or M.Tech.
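The API Gateway + Lambda pattern this role centers on has a well-known minimal shape in Python: API Gateway's Lambda proxy integration passes the HTTP request as an event dict and expects a response dict with `statusCode`, `headers`, and a string `body`. The handler below is an illustrative sketch, not a production design; the `id` path parameter and response fields are assumptions:

```python
import json

def lambda_handler(event, context):
    """Minimal AWS Lambda handler for an API Gateway proxy integration.

    Reads a path parameter from the event and returns a response dict
    in the shape API Gateway expects from a Lambda proxy integration.
    """
    # pathParameters may be absent entirely, so guard with `or {}`
    user_id = (event.get("pathParameters") or {}).get("id", "unknown")
    body = {"user": user_id, "status": "ok"}
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(body),  # body must be a string, not a dict
    }
```

One advantage of this shape is that the handler can be exercised locally with a hand-built event dict before any API Gateway or IAM wiring exists.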

Posted 3 weeks ago

Apply

3.0 - 8.0 years

15 - 22 Lacs

Gurugram

Remote

The details of the position are:

Position Details:
Job Title: Data Engineer
Client: Yum Brands
Job ID: 1666-1
Location: (Remote)
Project Duration: 06 months (Contract)

Job Description: We are seeking a skilled Data Engineer who is knowledgeable about and loves working with modern data integration frameworks, big data, and cloud technologies. Candidates must also be proficient with data programming languages (e.g., Python and SQL). The Yum! data engineer will build a variety of data pipelines and models to support advanced AI/ML analytics projects, with the intent of elevating the customer experience and driving revenue and profit growth in our restaurants globally. The candidate will work in our office in Gurgaon, India.

Key Responsibilities: As a data engineer, you will:

- Partner with KFC, Pizza Hut, Taco Bell & Habit Burger to build data pipelines to enable best-in-class restaurant technology solutions.
- Play a key role in our Data Operations team, developing data solutions responsible for driving Yum! growth.
- Design and develop data pipelines (streaming and batch) to move data from point-of-sale, back-of-house, operational platforms, and more to our Global Data Hub.
- Contribute to standardizing and developing a framework to extend these pipelines across brands and markets.
- Develop on the Yum! data platform by building applications using a mix of open-source frameworks (PySpark, Kubernetes, Airflow, etc.) and best-of-breed SaaS tools (Informatica Cloud, Snowflake, Domo, etc.).
- Implement and manage production support processes around data lifecycle, data quality, coding utilities, storage, reporting, and other data integration points.

Skills and Qualifications:

- Vast background in all things data-related.
- AWS platform development experience (EKS, S3, API Gateway, Lambda, etc.).
- Experience with modern ETL tools such as Informatica, Matillion, or dbt; Informatica CDI is a plus.
- High level of proficiency with SQL (Snowflake a big plus).
- Proficiency with Python for transforming data and automating tasks.
- Experience with Kafka, Pulsar, or other streaming technologies.
- Experience orchestrating complex task flows across a variety of technologies.
- Bachelor's degree from an accredited institution or relevant experience.
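The streaming-and-batch pipeline work described above often comes down to a micro-batching pattern: accumulate records from a stream (Kafka, Kinesis, etc.) and flush them to a warehouse stage in fixed-size groups. A minimal, illustrative pure-Python sketch of that pattern (the helper below is an assumption for illustration, not Yum!'s actual framework):

```python
def micro_batches(records, batch_size=500):
    """Group an incoming record iterator into fixed-size micro-batches,
    the core pattern behind loading streaming data into a warehouse stage.
    """
    batch = []
    for record in records:
        batch.append(record)
        if len(batch) >= batch_size:
            yield batch
            batch = []
    if batch:  # flush the final partial batch
        yield batch
```

Because it is a generator, it works unchanged over a bounded file of records or an effectively unbounded consumer loop; the downstream loader decides how each batch is committed.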

Posted 3 weeks ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

Who We Are: We are a digitally native company that helps organizations reinvent themselves and unleash their potential. We are the place where innovation, design, and engineering meet scale. Globant is a 20-year-old, NYSE-listed public organization with more than 33,000 employees worldwide working out of 35 countries globally. www.globant.com

Job location: Pune/Hyderabad/Bangalore
Work Mode: Hybrid
Experience: 5 to 10 Years

Must-have skills:
1) AWS (EC2, EMR & EKS)
2) Redshift
3) Lambda Functions
4) Glue
5) Python
6) PySpark
7) SQL
8) CloudWatch
9) NoSQL Database - DynamoDB/MongoDB or any

We are seeking a highly skilled and motivated Data Engineer to join our dynamic team. The ideal candidate will have a strong background in designing, developing, and managing data pipelines, working with cloud technologies, and optimizing data workflows. You will play a key role in supporting our data-driven initiatives and ensuring the seamless integration and analysis of large datasets.

- Design Scalable Data Models: Develop and maintain conceptual, logical, and physical data models for structured and semi-structured data in AWS environments.
- Optimize Data Pipelines: Work closely with data engineers to align data models with AWS-native data pipeline design and ETL best practices.
- AWS Cloud Data Services: Design and implement data solutions leveraging AWS Redshift, Athena, Glue, S3, Lake Formation, and AWS-native ETL workflows.
- Design, develop, and maintain scalable data pipelines and ETL processes using AWS services (Glue, Lambda, Redshift).
- Write efficient, reusable, and maintainable Python and PySpark scripts for data processing and transformation.
- Optimize SQL queries for performance and scalability, with expertise in writing complex SQL queries.
- Monitor, troubleshoot, and improve data pipelines for reliability and performance.
- Focusing on ETL automation using Python and PySpark, design, build, and maintain efficient data pipelines, ensuring data quality and integrity for various applications.
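The data-quality and transformation duties listed above typically start with a cleanup step before load: normalize field names, enforce a primary key, and deduplicate. A hedged pure-Python sketch of such a step (the function, the `id`/`updated_at` fields, and the keep-latest rule are illustrative assumptions; in a real pipeline this would usually be a PySpark transformation):

```python
def clean_records(rows):
    """Illustrative ETL transform step: normalize field names to
    lowercase, drop rows missing the primary key, and deduplicate
    by id, keeping the most recently updated version of each record.
    """
    latest = {}
    for row in rows:
        rec = {k.strip().lower(): v for k, v in row.items()}
        if rec.get("id") is None:
            continue  # data-quality rule: primary key is required
        prev = latest.get(rec["id"])
        # keep the record with the later updated_at timestamp
        if prev is None or rec.get("updated_at", "") >= prev.get("updated_at", ""):
            latest[rec["id"]] = rec
    return list(latest.values())
```

Comparing ISO-8601 timestamp strings lexicographically, as done here, is a common shortcut that only works because that format sorts chronologically.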

Posted 3 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
