
90,217 AWS Jobs - Page 22

Set up a Job Alert
JobPe aggregates results for easy application access, but you apply directly on the job portal.

8.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Role Description

Job Overview: We are looking for a Backend Developer with 6–8 years of experience in backend development and cloud integration. This role focuses on the design and development of RESTful APIs, backend services, and seamless integration with AWS cloud infrastructure. The ideal candidate should be detail-oriented, capable of multitasking, and able to work effectively in a fast-paced, Agile environment.

Key Responsibilities:
• Design, develop, and maintain high-performance RESTful APIs using TypeScript, Node.js, and Python.
• Provide L3 support for complex production issues, including root cause analysis and resolution.
• Optimize performance for both SQL and NoSQL database queries.
• Integrate and manage various AWS services (Lambda, API Gateway, DynamoDB, SNS, SQS, S3, IAM).
• Implement secure API access using OAuth, JWT, and related security protocols.
• Collaborate with front-end teams for end-to-end application development.
• Participate in code reviews, Agile ceremonies, and sprint planning.
• Document incidents and resolutions, and provide technical guidance to peers.
Mandatory Skills:
• Languages/Frameworks: TypeScript, Node.js, Python
• Cloud & DevOps: hands-on with AWS services (Lambda, API Gateway, DynamoDB, SQS, SNS, IAM, S3, CloudWatch); experience with serverless architecture
• API Development: strong experience in RESTful API design and development; knowledge of API security best practices (OAuth, JWT)
• Databases: proficiency with SQL (MySQL, PostgreSQL) and NoSQL (DynamoDB)
• Version Control: Git and Git-based workflows (GitHub, GitLab)
• Problem Solving & Support: proven experience in L3 support, debugging, and issue resolution

Secondary Skills (Good to Have):
• AWS certification (Developer Associate / Solutions Architect)
• Experience with Swagger/OpenAPI and AWS X-Ray
• Knowledge of CI/CD pipelines
• Understanding of OOP, MVC, and web standards
• Familiarity with Agile/Scrum methodologies

Soft Skills:
• Excellent verbal and written communication skills
• Strong analytical and problem-solving abilities
• Ability to work collaboratively in cross-functional teams

Skills: Node.js, RESTful APIs, AWS, Python
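As a rough illustration of the serverless API work this posting describes, here is a minimal sketch of an AWS Lambda handler behind API Gateway (proxy integration). The posting's primary stack is TypeScript/Node.js; this sketch uses Python, which the posting also lists, and the `name` path parameter is a hypothetical example, not part of the role.

```python
import json

def lambda_handler(event, context):
    # Minimal API Gateway (proxy integration) handler: echo a path parameter.
    # `pathParameters` may be absent or None when no parameters are supplied.
    name = (event.get("pathParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

In a real deployment the return shape above is what API Gateway expects from a Lambda proxy integration: `statusCode`, `headers`, and a string `body`.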

Posted 6 hours ago

Apply

3.0 - 5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Summary: We are seeking a skilled and motivated Microsoft Fabric Developer (3–6 years of experience) to join our data engineering team. The ideal candidate will have hands-on experience building and maintaining data pipelines, working with Microsoft Fabric components, and delivering scalable data solutions in cloud environments, with a strong background in data modelling, data transformation, data analytics, and reporting, and expertise in cutting-edge technologies such as Power BI and scripting engines including Power Automate, Power Query, DAX, and TypeScript. The primary objective of the Software Developer will be to design and maintain high-quality software solutions that facilitate data analytics and reporting for our organization.

Key Responsibilities:
• Design, develop, and maintain software applications that support the organization's data analytics and reporting requirements.
• Design, develop, and maintain data pipelines using Microsoft Fabric (OneLake, Lakehouse, Dataflows, Pipelines, Notebooks).
• Implement ETL/ELT processes using Azure Data Factory, Synapse, and Spark (PySpark, Spark SQL).
• Optimize data ingestion, transformation, and loading processes for performance and scalability.
• Collaborate with data analysts, architects, and business stakeholders to understand requirements and deliver solutions.
• Ensure data quality, security, and compliance with governance standards.
• Monitor and troubleshoot data workflows and resolve performance bottlenecks.
• Document technical designs, processes, and best practices.
• Develop comprehensive data models and transformation solutions to facilitate accurate and efficient reporting.
• Develop engaging and interactive dashboards and reports using Power BI.
• Build automation workflows using Power Automate.
• Produce efficient, readable, and scalable code using TypeScript.
• Collaborate closely with cross-functional teams to identify requirements, develop solutions, and ensure on-time delivery of projects.
• Conduct thorough unit testing, as well as timely troubleshooting and issue resolution as required.
• Stay informed on the latest developments in data analytics and reporting technologies, including new Power BI features and Azure services.

Key Requirements:
• 3 to 5 years of experience as a software developer, with a proven track record in data analytics and reporting.
• Expertise in data modelling, data transformation, and data analytics.
• Strong proficiency in technologies such as Power BI and scripting engines including Power Automate and TypeScript.
• Knowledge and experience of Azure services is good to have.
• Excellent problem-solving skills with keen attention to detail.
• Ability to work effectively as part of a collaborative, cross-functional team.
• Strong communication skills with the ability to convey technical concepts to non-technical stakeholders.
• Proven experience with Agile development methodologies.
• Bachelor's or master's degree in computer science or a related field.

Skills Required:
• Working with REST APIs and web services
• Proficient in XSLT, CSS, JavaScript, React JS, Node JS, D3 JS
• Hands-on experience scripting in TypeScript
• Experience with Python or .NET technologies is an added advantage
• Working experience with PL/SQL, SQL, and NoSQL databases

Certifications: Azure or AWS certified associates.

Applicants may be required to appear onsite at a Wolters Kluwer office as part of the recruitment process.
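The pipeline work above boils down to repeatable, testable transformations. Purely as a sketch of the kind of row-level transform step such a pipeline (whether in a Fabric notebook, Power Query, or PySpark) might apply, here is a minimal, library-free version; the field names (`status`, `unit_price`, etc.) are invented for illustration.

```python
def transform(rows):
    """Filter inactive records, rename fields, and derive a total column."""
    out = []
    for r in rows:
        # Drop rows that should not reach the reporting layer.
        if r.get("status") != "active":
            continue
        out.append({
            "customer_id": r["id"],                      # rename id -> customer_id
            "region": r.get("region", "unknown"),        # default missing regions
            "total": r["quantity"] * r["unit_price"],    # derived column
        })
    return out
```

Keeping each step as a pure function like this makes the unit testing called out in the responsibilities straightforward.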

Posted 6 hours ago

Apply

5.0 years

0 Lacs

Andhra Pradesh, India

On-site

We are looking for a Senior ETL Developer for our Enterprise Data Warehouse. In this role you will be part of a team working to develop solutions that enable the business to leverage data as an asset at the bank. The Senior ETL Developer should have extensive knowledge of data warehousing and cloud technologies. If you consider data a strategic asset, evangelize the value of good data and insights, and have a passion for learning and continuous improvement, this role is for you.

Key Responsibilities:
• Translate requirements and data mapping documents into a technical design.
• Develop, enhance, and maintain code following best practices and standards.
• Create and execute unit test plans.
• Support regression and system testing efforts.
• Debug and solve issues found during testing and/or production.
• Communicate status, issues, and blockers to the project team.
• Support continuous improvement by identifying and pursuing opportunities.

Basic Qualifications:
• Bachelor's degree or military experience in a related field (preferably computer science).
• At least 5 years of experience in ETL development within a data warehouse.
• Deep understanding of enterprise data warehousing best practices and standards.
• Strong software engineering experience comprising designing, developing, and operating robust and highly scalable cloud infrastructure services.
• Strong experience with Python/PySpark, DataStage ETL, and SQL development.
• Proven experience in cloud infrastructure projects with hands-on migration expertise on public clouds such as AWS and Azure, preferably Snowflake.
• Knowledge of cybersecurity organization practices, operations, risk management processes, principles, architectural requirements, engineering, and threats and vulnerabilities, including incident response methodologies.
• Understanding of authentication/authorization services and Identity & Access Management.
• Strong communication and interpersonal skills.
• Strong organization skills and the ability to work independently as well as with a team.

Preferred Qualifications:
• AWS Certified Solutions Architect Associate, AWS Certified DevOps Engineer Professional, and/or AWS Certified Solutions Architect Professional.
• Experience defining future-state roadmaps for data warehouse applications.
• Experience leading teams of developers within a project.
• Experience in the financial services (banking) industry.

Mandatory Skills: ETL / data warehouse concepts, Snowflake, CI/CD tools (Jenkins, GitHub), Python, DataStage
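The "create and execute unit test plans" responsibility above often extends to the loaded data itself. A hedged sketch of a post-load validation check of the kind an ETL developer might script in Python; the `id`/`amount` fields are hypothetical, not from the posting.

```python
def validate_load(rows, required=("id", "amount")):
    """Basic post-load checks: required fields present, no duplicate ids."""
    errors = []
    seen = set()
    for i, r in enumerate(rows):
        for field in required:
            if r.get(field) is None:
                errors.append(f"row {i}: missing {field}")
        if r.get("id") in seen:
            errors.append(f"row {i}: duplicate id {r['id']}")
        seen.add(r.get("id"))
    return errors
```

Returning a list of human-readable errors (rather than raising on the first one) makes the check easy to log and to assert against in a test plan.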

Posted 6 hours ago

Apply

0 years

0 Lacs

Andhra Pradesh, India

On-site

• Extensive hands-on experience with .NET Core, backed by a proven track record as a .NET Developer or Application Developer.
• Strong expertise in Web API development, CQRS patterns, design patterns, and SOLID principles.
• Proficient in Azure DevOps, including pipeline creation and management, with practical experience in Docker and Kubernetes for containerization and orchestration.
• Skilled in working with key AWS components such as SNS, SQS, S3, CloudWatch Logs, and Parameter Store. Basic knowledge of Kafka is a plus.
• Proficient in SQL Server, Oracle (including writing complex queries), and PostgreSQL databases.
• Well-versed in microservices architecture and API design best practices.
• Experienced in designing RESTful APIs and working with server-side API technologies and tools.
• Practical knowledge of code quality and security frameworks such as Veracode and SonarQube, ensuring high-quality code through development and peer reviews.
• Actively involved in client interactions, including project management discussions and technical delivery planning.
• Familiarity with release management processes (added advantage).
• Domain knowledge in the insurance industry is considered a strong plus.
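The CQRS pattern named above separates state-changing commands from side-effect-free queries. The posting targets .NET; purely as an illustration of the pattern itself, here is a minimal in-memory sketch in Python (the order domain objects are invented for the example):

```python
from dataclasses import dataclass, field

@dataclass
class CreateOrder:          # command: mutates state
    order_id: str
    amount: float

@dataclass
class GetOrder:             # query: reads state, never mutates it
    order_id: str

@dataclass
class OrderService:
    _orders: dict = field(default_factory=dict)

    def handle_command(self, cmd: CreateOrder) -> None:
        # Commands return nothing; their effect is the state change.
        self._orders[cmd.order_id] = {"amount": cmd.amount}

    def handle_query(self, q: GetOrder):
        # Queries return data and leave state untouched.
        return self._orders.get(q.order_id)
```

In a full CQRS system the command and query sides would typically have separate models (and often separate stores); this sketch only shows the read/write separation at the handler level.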

Posted 6 hours ago

Apply

5.0 years

0 Lacs

Andhra Pradesh, India

On-site

We are looking for a Senior ETL Developer for our Enterprise Data Warehouse. In this role you will be part of a team working to develop solutions that enable the business to leverage data as an asset at the bank. The Senior ETL Developer should have extensive knowledge of data warehousing and cloud technologies. If you consider data a strategic asset, evangelize the value of good data and insights, and have a passion for learning and continuous improvement, this role is for you.

Key Responsibilities:
• Translate requirements and data mapping documents into a technical design.
• Develop, enhance, and maintain code following best practices and standards.
• Create and execute unit test plans.
• Support regression and system testing efforts.
• Debug and solve issues found during testing and/or production.
• Communicate status, issues, and blockers to the project team.
• Support continuous improvement by identifying and pursuing opportunities.

Basic Qualifications:
• Bachelor's degree or military experience in a related field (preferably computer science).
• At least 5 years of experience in ETL development within a data warehouse.
• Deep understanding of enterprise data warehousing best practices and standards.
• Strong software engineering experience comprising designing, developing, and operating robust and highly scalable cloud infrastructure services.
• Strong experience with Python/PySpark, DataStage ETL, and SQL development.
• Proven experience in cloud infrastructure projects with hands-on migration expertise on public clouds such as AWS and Azure, preferably Snowflake.
• Knowledge of cybersecurity organization practices, operations, risk management processes, principles, architectural requirements, engineering, and threats and vulnerabilities, including incident response methodologies.
• Understanding of authentication/authorization services and Identity & Access Management.
• Strong communication and interpersonal skills.
• Strong organization skills and the ability to work independently as well as with a team.
Preferred Qualifications:
• AWS Certified Solutions Architect Associate, AWS Certified DevOps Engineer Professional, and/or AWS Certified Solutions Architect Professional.
• Experience defining future-state roadmaps for data warehouse applications.
• Experience leading teams of developers within a project.
• Experience in the financial services (banking) industry.

Mandatory Skills: ETL / data warehouse concepts, Snowflake, CI/CD tools (Jenkins, GitHub), Python, DataStage

Posted 6 hours ago

Apply

5.0 years

0 Lacs

Andhra Pradesh, India

On-site

Role: Developer (Bedrock Gen AI Engineer)

Must-Have Skills:
• Overall 5+ years of industry experience, with at least the last 4+ years of strong experience in Generative AI; must have experience handling hallucinations and testing and validating LLM outputs thoroughly.
• Experience building and deploying GenAI solutions using Amazon Bedrock.
• Strong understanding of prompt engineering, orchestration, and LLM integration patterns.
• Hands-on experience with AWS services and scalable GenAI architecture.
• Excellent programming skills and proficiency in Python.
• Experience with AWS and Azure cloud.
• Hands-on exposure to integrating at least one of the popular LLMs (OpenAI GPT, PaLM 2, Dolly, Claude 2, Cohere, etc.) using API endpoints.

Good to Have:
• Ensure the software and services delivered are of high quality by collaborating with team members and leveraging unit testing and continuous integration.
• Be an active team member who participates in the estimation of work required, creating a work breakdown structure and identifying tasks to deliver software features.
• Overall 5+ years of industry experience, with at least the last 2+ years of extensive implementation experience in the data analytics space or a senior developer role in a modern technology stack.
• Hands-on exposure to using Azure cloud services for storage, serverless logic, search, transcription, and chat.
• Ability to build API-based scalable solutions and debug and troubleshoot software or design issues.
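A minimal sketch of the retrieval-augmented generation (RAG) idea the posting mentions: ground the prompt in retrieved context so the model has less room to hallucinate. This toy retriever uses naive keyword overlap purely for illustration; a real Bedrock deployment would use embeddings and send the assembled prompt through the bedrock-runtime client, which is omitted here.

```python
def retrieve(query, documents, k=2):
    """Rank documents by naive keyword overlap with the query (toy retriever)."""
    q = set(query.lower().split())
    scored = sorted(documents, key=lambda d: -len(q & set(d.lower().split())))
    return scored[:k]

def build_prompt(query, documents):
    """Ground the model in retrieved context to reduce hallucination."""
    context = "\n".join(retrieve(query, documents))
    return (f"Answer using ONLY the context below. If the answer is not "
            f"in the context, say so.\n\nContext:\n{context}\n\nQuestion: {query}")
```

The explicit "say so" instruction in the prompt is one simple prompt-engineering guard against hallucinated answers; validating the model's output against the retrieved context would be the complementary testing step.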

Posted 6 hours ago

Apply

5.0 years

0 Lacs

Andhra Pradesh, India

On-site

Should have strong 5+ years of hands-on development project experience in C#, ASP.NET Core, Web API, MVC, REST APIs, microservices, and SQL/Oracle. Knowledge of Azure and AWS is good to have, along with familiarity with microservice architecture styles/APIs. Experience working in Agile methodology using Azure DevOps is preferred, as is Group Insurance domain experience. Ability to support QA/UAT efforts by executing and validating API requests/responses (for example with Postman, Swagger, or other API platforms).
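Validating API responses, as done interactively in Postman or Swagger, can also be expressed as a small reusable check. A sketch in Python, with hypothetical `policyId`/`premium` fields standing in for a group-insurance payload (the field names are invented, not from the posting):

```python
def check_response(resp: dict) -> list:
    """Postman-style assertions expressed as a reusable Python check.

    Returns a list of problems; an empty list means the response passed.
    """
    problems = []
    if resp.get("status") != 200:
        problems.append(f"expected status 200, got {resp.get('status')}")
    body = resp.get("body", {})
    for f in ("policyId", "premium"):
        if f not in body:
            problems.append(f"missing field: {f}")
    if not isinstance(body.get("premium"), (int, float)):
        problems.append("premium must be numeric")
    return problems
```

Checks like this can back both UAT sign-off and automated regression runs against the same endpoints.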

Posted 6 hours ago

Apply

5.0 years

0 Lacs

Andhra Pradesh, India

On-site

Data Engineer: Must have 5+ years of experience in the skills below.
Must Have: big data concepts, Python (core Python, able to write code), SQL, shell scripting, AWS S3.
Good to Have: event-driven architecture / AWS SQS, microservices, API development, Kafka, Kubernetes, Argo, Amazon Redshift, Amazon Aurora.
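For the event-driven/SQS item above, a recurring concern is that SQS delivers messages at least once, so consumers should be idempotent. A minimal sketch of de-duplicated batch processing; the message shape and the in-memory `processed_ids` set are invented for illustration (production code would persist seen ids, e.g. in a database or Redis).

```python
def process_messages(messages, processed_ids, handler):
    """Drain a batch of SQS-style messages, skipping already-seen message ids
    so redelivery (at-least-once semantics) stays harmless."""
    results = []
    for msg in messages:
        if msg["id"] in processed_ids:
            continue                      # duplicate delivery: skip
        results.append(handler(msg["body"]))
        processed_ids.add(msg["id"])      # mark done only after handling
    return results
```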

Posted 6 hours ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

Noida, Bengaluru

Work from Office

Description: We’re looking for a seasoned DevOps Engineer who thrives in a fast-paced, highly collaborative environment and can help us scale and optimize our infrastructure. You’ll play a critical role in building robust CI/CD pipelines, automating cloud operations, and ensuring high availability across our services. If you’re AWS certified, hands-on with Chef, and passionate about modern DevOps practices, we want to hear from you.

Requirements:
• 8–12 years of hands-on DevOps/infrastructure engineering experience.
• Proven expertise in Chef for configuration management.
• AWS Certified Solutions Architect – Associate or Professional (required).
• Strong scripting skills in Python, Shell, YAML, or Unix scripting.
• In-depth experience with Terraform for infrastructure as code (IaC).
• Production-grade Docker and Kubernetes implementation experience.
• Deep understanding of CI/CD processes in microservices environments.
• Solid knowledge of monitoring and logging frameworks (e.g., ELK, Prometheus).

Job Responsibilities:
• Design and implement scalable CI/CD pipelines using modern DevOps tools and microservices architecture.
• Automate infrastructure provisioning and configuration using Terraform, Chef, and CloudFormation (if applicable).
• Work closely with development teams to streamline build, test, and deployment processes.
• Manage and monitor infrastructure using tools like the ELK Stack, Prometheus, Grafana, and New Relic.
• Maintain and scale Docker/Kubernetes environments for high-availability applications.
• Support cloud-native architectures with Lambda, Step Functions, and DynamoDB.
• Ensure secure, compliant, and efficient cloud operations within AWS.

What We Offer:
Exciting Projects: We focus on industries like high-tech, communication, media, healthcare, retail, and telecom. Our customer list is full of fantastic global brands and leaders who love what we build for them.
Collaborative Environment: You can expand your skills by collaborating with a diverse team of highly talented people in an open, laid-back environment, or even abroad in one of our global centers or client facilities! Work-Life Balance: GlobalLogic prioritizes work-life balance, which is why we offer flexible work schedules, opportunities to work from home, and paid time off and holidays. Professional Development: Our dedicated Learning & Development team regularly organizes communication skills training (GL Vantage, Toastmasters), stress management programs, professional certifications, and technical and soft skill trainings. Excellent Benefits: We provide our employees with competitive salaries, family medical insurance, Group Term Life Insurance, Group Personal Accident Insurance, NPS (National Pension Scheme), periodic health awareness programs, extended maternity leave, annual performance bonuses, and referral bonuses. Fun Perks: We want you to love where you work, which is why we host sports events and cultural activities, offer food at subsidized rates, and throw corporate parties. Our vibrant offices also include dedicated GL Zones, rooftop decks, and the GL Club, where you can drink coffee or tea with your colleagues over a game, and offer discounts at popular stores and restaurants!
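One small, concrete slice of the CI/CD work this posting describes is gating a deployment step on a health check with exponential backoff. A hedged sketch under invented names (this is not a GlobalLogic tool, just the general pattern):

```python
def backoff_schedule(retries, base=1.0, cap=30.0):
    """Exponential backoff delays in seconds, capped, e.g. for a deploy gate."""
    return [min(cap, base * 2 ** i) for i in range(retries)]

def wait_until_healthy(check, schedule, sleep=lambda s: None):
    """Poll `check()` until it passes, sleeping per the backoff schedule.

    `sleep` is injectable so the logic can be tested without real delays.
    """
    for delay in schedule:
        if check():
            return True
        sleep(delay)
    return check()   # one final attempt after the last wait
```

In a real pipeline `check` would hit a service health endpoint and `sleep` would be `time.sleep`; the cap keeps retry intervals bounded while the exponential growth avoids hammering a service that is still starting.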

Posted 6 hours ago

Apply

4.0 - 9.0 years

15 - 25 Lacs

Bengaluru

Work from Office

Position Title: Senior Data Engineer
Department: Development / Engineering
Reports To: Team Lead / Tech Lead
Work Location: On-site

Position Summary: The Senior Data Engineer is responsible for designing, building, and maintaining robust data pipelines and data warehouse solutions. This role works closely with data analysts, software engineers, and other stakeholders to ensure data is clean, reliable, and accessible for business insights. The ideal candidate will bring strong ETL development experience, proficiency in SQL and cloud platforms, and the ability to work independently on complex data initiatives.

Essential Duties and Responsibilities: Key responsibilities include, but are not limited to:
• Design, develop, and maintain scalable ETL pipelines and data warehousing solutions using tools such as SQL Server, Redshift, and dbt.
• Write advanced SQL queries and scripts for data extraction, transformation, and loading from various structured and unstructured data sources.
• Collaborate with cross-functional teams to understand data needs and translate them into scalable and maintainable solutions.
• Maintain and improve existing data models and data flows; implement changes to optimize performance and reduce redundancy.
• Ensure data integrity, accuracy, and security throughout the data lifecycle.
• Work with cloud-based platforms (preferably AWS) to deploy, manage, and monitor data services and pipelines.
• Develop and maintain technical documentation related to data models, workflows, and systems architecture.
• Perform data profiling, cleansing, and validation to ensure high-quality and reliable analytics.
• Contribute to architectural decisions for enterprise data infrastructure and mentor junior data engineers when needed.
• Stay current with emerging trends and technologies in data engineering and recommend improvements to the existing infrastructure.
Supervisory Responsibilities: None

Qualifications: To perform this role successfully, candidates must meet the following requirements.

Education & Experience:
• Bachelor's degree in Computer Science, Information Technology, Data Science, or a related field (preferred).
• Minimum of 5 years of hands-on experience as a Data Engineer or in a similar role.
• Proven expertise in data warehousing concepts and ETL development tools (any).
• Proficient in SQL Server, AWS Redshift, and dbt for data modeling and transformation.
• Strong SQL scripting capabilities and intermediate Python programming skills.
• Exposure to cloud technologies, particularly AWS (e.g., S3, Lambda, Glue, Redshift).
• Experience working with large-scale datasets and optimizing data pipelines for performance and scalability.
• Strong analytical thinking and problem-solving skills with the ability to work in a fast-paced environment.

Language Skills: Ability to read and comprehend technical documentation and business requirements; ability to draft moderately complex documentation and communicate clearly with team members; strong verbal communication skills for both technical and non-technical audiences.

Mathematical Skills: Ability to apply mathematical concepts such as fractions, percentages, ratios, and proportions to practical problems; comfort with data profiling, statistics, and data quality metrics.

Reasoning Ability: Strong ability to troubleshoot and resolve complex data issues; ability to understand and solve ambiguous problems through structured thinking and analysis; ability to handle multiple priorities and deliver quality outcomes within tight deadlines.

Computer Skills: Proficient in database technologies, SQL coding, and ETL tools. Comfortable using cloud services (AWS preferred), version control systems (Git), and development tools. Familiarity with workflow orchestration tools (e.g., Airflow, Prefect) is a plus.
Certificates and Licenses: None required (relevant AWS certifications are a plus).

Travel Requirements: Minimal travel (up to 5%) may be required for meetings, training, or conferences.

Information Security & Privacy Compliance: Ensure the secure handling and storage of sensitive data. Implement access control protocols and data encryption standards. Maintain compliance with privacy regulations (e.g., GDPR, HIPAA) and internal data governance policies. Participate in regular security audits and proactively address vulnerabilities in data systems.

Position Description Acknowledgment: By signing below, I acknowledge that I have reviewed and understood the duties and expectations associated with the Senior Data Engineer role. I understand that responsibilities may evolve over time and agree to communicate with my supervisor for clarification as needed. I also understand that performance evaluations and compensation may be tied to fulfilling these responsibilities.

Employee's Name (Print): ______________________________________
Employee's Signature: ___________________________ Date: _______________
Manager's Signature: ___________________________ Date: _______________

Version 1, effective June 2025
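The data profiling duty listed in this posting can start as simply as per-column null and distinct counts. A minimal sketch over dict-shaped rows (a real profile would also cover types, ranges, and patterns, typically via SQL or a profiling library):

```python
def profile(rows):
    """Per-column null count and distinct count: a first-pass quality check."""
    cols = {}
    for r in rows:
        for k, v in r.items():
            stats = cols.setdefault(k, {"nulls": 0, "values": set()})
            if v is None:
                stats["nulls"] += 1
            else:
                stats["values"].add(v)
    return {k: {"nulls": s["nulls"], "distinct": len(s["values"])}
            for k, s in cols.items()}
```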

Posted 6 hours ago

Apply

3.0 - 5.0 years

0 Lacs

Gurugram, Haryana, India

Remote

Job Description: Geospatial Analyst
Location: Gurgaon (On-site)
Employment: Full-Time
Experience Level: 3-5 years

About Us: Aaizel Tech Labs is a pioneering tech startup at the intersection of cybersecurity, AI, geospatial solutions, and more. We drive innovation by delivering transformative technology solutions across industries. As a growing startup, we are looking for passionate and versatile professionals eager to work on cutting-edge projects in a dynamic environment.

Job Summary: We are seeking a talented Geospatial Analyst to join our R&D team and contribute to the development of next-generation geospatial products. You will be involved in data acquisition, spatial analysis, and visualization, driving innovative solutions across domains like remote sensing, smart cities, precision agriculture, and environmental monitoring.

Key Responsibilities:
1. Geospatial Data Acquisition and Processing: Collect and process high-resolution satellite imagery, LiDAR data, and drone-acquired datasets. Use remote sensing software (ENVI, ERDAS) to preprocess data, including radiometric corrections, georeferencing, and orthorectification.
2. Spatial Analysis and Modelling: Develop spatial models and algorithms for applications such as land use classification, change detection, and object recognition in geospatial data. Implement advanced GIS techniques, including spatial interpolation, hydrological modelling, and network analysis.
3. Visualisation and Cartography: Create detailed and interactive maps, 3D models, and geospatial visualisations using ArcGIS, QGIS, and Mapbox. Utilise tools like Blender and Unity for 3D environmental modelling and simulation.
4. Data Integration and Database Management: Integrate geospatial data with other data sources (IoT, GPS, weather data) for comprehensive analysis. Design and manage spatial databases using PostgreSQL/PostGIS, ensuring efficient data storage and retrieval.
5. Advanced Geospatial Analytics: Develop custom scripts in Python or R for spatial data analysis, including machine learning applications like predictive modelling and anomaly detection. Apply geostatistical methods (Kriging, Moran's I) for environmental impact assessments and resource management.
6. Collaboration and Reporting: Collaborate with AI/ML engineers, software developers, and project managers to integrate geospatial insights into broader tech solutions. Prepare detailed analytical reports, dashboards, and presentations to communicate findings to stakeholders.
7. Tool Development and Automation: Develop automated geospatial tools using APIs (Google Earth Engine, OpenStreetMap) to streamline data analysis workflows. Implement automated change detection systems for monitoring environmental changes or urban expansion.

Required Skills, Qualifications and Experience:
• Educational Background: Master's in Geoinformatics, Remote Sensing, or a related field; strong preference for candidates from top-tier institutions.
• Experience: 3-5 years of experience with a strong portfolio of completed projects.
• Technical Skills: Proficiency in GIS software (ArcGIS, QGIS) and remote sensing tools (ENVI, ERDAS). Experience with programming languages (Python, R) for geospatial data manipulation and analysis. Familiarity with cloud-based geospatial platforms like AWS S3, GCP Earth Engine, or Azure Maps. Strong understanding of spatial databases (PostgreSQL/PostGIS) and geospatial data standards (GeoJSON, WMS/WFS).
• Problem-Solving: Ability to solve complex spatial problems using data-driven approaches and innovative techniques.
• Communication: Strong presentation skills to convey complex geospatial information clearly to technical and non-technical stakeholders.
• Attention to Detail: High level of precision in geospatial data processing and analysis.
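As an illustration of the spatial interpolation mentioned in the responsibilities, here is a compact inverse-distance-weighting (IDW) sketch in Python; real work would typically use PostGIS, GIS software, or a geostatistics library (e.g., for Kriging) instead of hand-rolled code.

```python
def idw(points, target, power=2.0):
    """Inverse-distance-weighted estimate at `target` from (x, y, value) samples.

    Nearer samples get larger weights: w = 1 / distance**power.
    """
    num = den = 0.0
    for x, y, value in points:
        d2 = (x - target[0]) ** 2 + (y - target[1]) ** 2
        if d2 == 0:
            return value          # target coincides with a sample point
        w = 1.0 / d2 ** (power / 2)
        num += w * value
        den += w
    return num / den
```

With two samples equidistant from the target, the estimate is simply their average, which is a handy sanity check on any interpolation implementation.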
Application Process: To apply, please submit your resume and a cover letter detailing your relevant experience and enthusiasm for the role to hr@aaizeltech.com or Bhavik@aaizeltech.com or anju@aaizeltech.com (Contact No- 8493801093)

Posted 6 hours ago

Apply

12.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

We are hiring for a Director - AI/ML in Andheri, Mumbai. Here are the skills required. You’re a hands-on AI engineer or data scientist with 12-16 years of experience driving impactful AI initiatives in production settings. You’re passionate about solving complex problems, building systems that scale, and keeping abreast of the latest AI developments.

Must-Haves:
• Proven experience framing and solving complex problems with AI models, specifically in computer vision, NLP, or GenAI, and deploying these at scale.
• Hands-on experience with LLMs, with expertise in fine-tuning, retrieval-augmented generation (RAG), and model selection. You should have worked on at least two production use cases in this domain and should know which models/approaches to choose based on the use cases and their solutions.
• An understanding that model building is only a fraction of production AI work, and knowledge of what it takes to maintain high-performance AI systems.
• 6+ years building and maintaining large-scale production AI systems, including optimization and tuning for deployment at scale.
• An insatiable curiosity and passion for state-of-the-art research, combined with the drive to apply innovative approaches to real-world problems.
• Expertise in Python and knowledge of its standard libraries.
• Strong skills in monitoring and debugging production AI systems for sustained reliability and efficiency.
• Experience mentoring and guiding junior engineers, sharing your expertise to uplift team knowledge.
• Capability to interact with a variety of stakeholders, including senior leadership and customers from the most demanding industries.
• Some familiarity with AI Ops and associated tooling to streamline AI processes.
• Cloud experience with AWS/GCP/Azure.

Good-to-Haves:
• Experience with other languages.
• Familiarity with Docker and Kubernetes for efficient service deployment.
• Experience with a variety of platforms, frameworks, and tools in the AI and machine learning ecosystem.
• Publications and patents in the AI domain.

Posted 6 hours ago

Apply

4.0 - 8.0 years

0 - 0 Lacs

Pune

Hybrid

So, what’s the role all about? We are seeking a highly skilled and motivated Senior Software Engineer (back-end; full stack preferred) with 4-8 years of software development experience to join our dynamic X-Sight R&D engineering team in Pune, focused on building scalable compliance solutions for financial markets. You will take ownership of understanding the requirements and of the development, implementation, and deployment of various services. As a senior member of the development team, you will work closely with peer software engineers, tech managers, product managers, and other stakeholders to ensure our software meets the requirements with the highest quality standards.

How will you make an impact?
• Play a critical role in the data migration and client onboarding processes.
• Develop/integrate microservices using cloud-native components as part of the X-Sight platform.
• Design and implement software features according to architecture and product requirements.
• Write automation test scripts (UT, IT) to ensure fit to design/requirements.
• Deploy the service and frontend components you developed to production environments and ensure there is no downtime because of your service.
• Work and collaborate in multi-disciplinary Agile teams, adopting the Agile spirit, methodology, and tools.
• Collaborate with various development and product teams in India, Israel, Slovakia, and the US.

Have you got what it takes?
Key Technical Skills:
• Software design and development in Java (v11+), Spring Boot, and Python; solid grounding in OOP, JavaScript, HTML5, CSS3, and AngularJS/ReactJS.
• Experience building, testing, and deploying microservices.
• Experience with AWS services like S3, EC2, RDS, Iceberg, DynamoDB, Lambda, and EKS.
• Good problem-solving, interpersonal, and communication skills; friendly disposition; works effectively as a team player.

Good to Have:
• Experience with frontend development in Vue.js.
• Experience with CI/CD and Jenkins, Artifactory, Terraform, Docker, and Kubernetes.
• Experience in the financial markets compliance domain.

Qualifications: Bachelor’s degree in computer science or related fields; 4-8 years of software development experience.

What’s in it for you? Join an ever-growing, market-disrupting global company where the teams, comprised of the best of the best, work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NICE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NICEr!

Enjoy NICE-FLEX! At NICE, we work according to the NICE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.

Requisition ID: 8069
Reporting into: Tech Manager
Role Type: Senior Software Engineer

Posted 6 hours ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Company Size: Large-scale / Global
Experience Required: 5 - 8 years
Working Days: 6 days/week
Office Location: Viman Nagar, Pune
Role & Responsibilities
Lead scalable, high-performance application architecture.
Develop and design enterprise-grade applications.
Manage Azure DevOps processes and performance optimization.
Conduct solution design and RCA documentation, and interact with cross-functional teams.
Ideal Candidate
Qualification: Graduation in Computers/Electronics or Post-Graduation in Computer Science.
Experience: 5-8 years in software/application development.
Mandatory Technical Skills
Core Technologies: Python, FastAPI, React/TypeScript, LangChain, LangGraph, AI Agents, Docker, Azure OpenAI, Prompt Engineering
Cloud & Infrastructure: AWS (Secrets Manager, IAM, ECS/EC2), Azure AD, Azure DevOps, GitHub
Database & Performance: MongoDB (Motor, Beanie ODM), Redis, caching strategies
Security: OAuth2/SAML, JWT, Azure AD integration, audit logging
Soft Skills: Strong problem-solving, mentoring, technical communication; independent contributor with a high-ownership mindset
Perks, Benefits and Work Culture
Our people define our passion and our audacious, incredibly rewarding achievements. Bajaj Finance Limited is one of India’s most diversified non-banking financial companies, and among Asia’s top 10 large workplaces. If you have the drive to get ahead, we can help find you an opportunity at any of the 500+ locations where we’re present in India.
Skills: fastapi, application architecture, aws, github, ai agents, prompt engineering, saml, docker, azure devops, redis, python, langgraph, design, devops, mentoring, azure ad, cloud, problem-solving, technical communication, oauth2, jwt, azure, audit logging, typescript, langchain, react, azure openai, mongodb
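The security line in this listing pairs OAuth2/SAML with JWT. As a rough, stdlib-only sketch of the mechanism a JWT involves (helper names and claims below are illustrative; production code would use a vetted library such as PyJWT rather than hand-rolling this), an HS256-style token is just two base64url-encoded JSON segments plus an HMAC signature:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JWTs use unpadded base64url encoding (RFC 7519)
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt_hs256(payload, secret):
    """Build a compact JWT: header.payload.signature, HMAC-SHA256 signed."""
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = f"{b64url(json.dumps(header).encode())}.{b64url(json.dumps(payload).encode())}"
    sig = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{b64url(sig)}"

def verify_jwt_hs256(token, secret):
    """Return the payload if the signature checks out, else None."""
    head, body, sig = token.split(".")
    expected = hmac.new(secret, f"{head}.{body}".encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(b64url(expected), sig):
        return None
    # restore base64 padding before decoding the payload segment
    return json.loads(base64.urlsafe_b64decode(body + "=" * (-len(body) % 4)))
```

Note the use of `hmac.compare_digest` rather than `==`, which avoids timing side channels when comparing signatures.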

Posted 6 hours ago

Apply

10.0 - 15.0 years

30 - 35 Lacs

Hyderabad

Work from Office

What is the Director - Research Scientist AI & Optimization responsible for?
The core mandate of this role is to bring innovative digital investment products and solutions to market, leveraging a patented and innovative digital WealthTech/FinTech product - Goals Optimization Engine (GOE) - built with several years of academic research in mathematical optimization, probability theory, and AI techniques at its core. The mandate also extends to leveraging cutting-edge AI, such as Generative AI, in addition to Reactive AI, to create value within various business functions within Franklin Templeton such as Investment Solutions, Portfolio Management, Sales & Distribution, Marketing, and HR, among others, in a responsible and appropriate manner. The possibilities are limitless here and present a fantastic opportunity for a self-motivated and driven professional to make significant contributions to the organization and to themselves.
What are the ongoing responsibilities of a Director - Research Scientist AI & Optimization?
As a Principal Research Scientist - AI and Optimization, you will play a pivotal role in driving innovation, product research, and proof of concepts for our AI research and Goals Optimization Engine (GOE) product roadmap. You will be responsible for mentoring and guiding a team of highly motivated research scientists, creating intellectual property, and ensuring successful client deployments and product development.
Key Responsibilities:
Innovation, Product Research, Proof of Concepts, Pseudocode & Design (40%):
Lead and contribute to the multi-year Goals Optimization Engine (GOE) product roadmap, conceptualizing fitment against various industry use cases, creating product variants, and designing new features and enhancements across multiple distribution lines and geographies.
Mentor and guide a team of research scientists to achieve common objectives.
Serve as the Subject Matter Expert (SME) for a specific domain within AI and/or Optimization, acting as the go-to person for all internal stakeholders.
Develop pseudocode and working prototypes in a Python environment, collaborating closely with Product Managers and Product Developers.
Create well-articulated design documents and presentations to explain research to internal and external stakeholders, including clients and partners located globally.
Lead industry research and evaluate partnerships with third-party vendors and specialized service providers where appropriate.
Maintain a thorough understanding of boundary conditions, regulatory environments, data challenges, technology integrations, algorithmic dependencies, and operational process nuances to ensure nothing slips through the cracks.
Stay up to date with the latest developments in the Investment Management industry, Financial Mathematics, Portfolio Construction, and Portfolio Management.
IP Creation, Paper Writing, and Thought Leadership (30%):
Conceptualize and produce high-quality intellectual property for publication in top-tier academic and practitioner journals.
Peer-review the work of other research scientists and improve the outcome of their research output.
Create patent-worthy intellectual content, apply for patents, and win them.
Take responsibility for winning industry awards for exceptional product research and innovative work products.
Stay informed about the latest industry research and evaluate it objectively and in an unbiased manner.
Publish research work at conferences.
Client Deployment, Product Development, and Vendor Due Diligence (30%):
Act as the SME in initial client deployment discussions, showcasing the rigor of research, explaining the product or solution concept, and engaging in discussions with similar individuals/teams from the client/partner side.
Contribute to product development by ensuring alignment with research and design.
Provide hands-on support where required to complete time-critical work successfully.
Engage with third-party vendors and potential integration partners to understand their capabilities, methodologies, and algorithms, and perform rigorous due diligence to make clear recommendations on Go/No-Go decisions.
What ideal qualifications, skills & experience would help someone to be successful?
Education:
Bachelor's and master's degree in STEM disciplines; a PhD in a relevant discipline (Optimization, Probability, Quant Finance, AI & ML, Computational Mathematics, Statistics, etc.) would be a plus.
Relevant industry certifications.
Experience - Core Skills:
10+ years of applied R&D experience in research departments of reputed organizations post-Masters or PhD.
Track record of real innovation generating impact is essential.
Demonstrated ability to create intellectual content, publish papers, and obtain patents.
Ability to effectively bridge the gap between academia and practice, ensuring research is practical and implementable.
Structured thinking and exceptional mathematical skills.
Excellent team player with the ability to work with ambiguity and thrive in chaos.
Familiarity with ML/DL/DRL, NLP (especially Large Language Models), dynamic programming, and/or convex optimization.
Solid experience with AWS and/or Azure.
Familiarity with Python, PySpark, SQL, Hadoop, C++.
Experience - Other Soft Skills:
Proven ability to take initiative and work under pressure in a changing, fast-paced environment.
Exceptional decision-making skills, with the ability to prioritize across needs given limited resources.
Thrives in a startup-like environment: loves dealing with a fast pace and changing needs.
Ability to build relationships both inside and outside of the product organization.
Ability to narrate a story for a problem along with the capacity to dive into minute details.
Superlative communication and consensus-building skills.
Work Shift Timings: 2:00 PM - 11:00 PM IST
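The dynamic programming named among the core skills is central to goals-based investing: choosing, period by period, the portfolio that maximizes the probability of reaching a wealth goal. A toy backward-induction sketch of that idea follows; the portfolio parameters are invented for illustration and bear no relation to GOE's actual model:

```python
# Two hypothetical portfolios, each a (prob of up-move, up factor, down factor)
# tuple. These numbers are made up purely to demonstrate the recursion.
PORTFOLIOS = [
    (0.6, 1.20, 0.90),   # "aggressive": bigger swings
    (0.9, 1.05, 0.97),   # "conservative": small, likely gains
]

def success_prob(wealth, goal, periods):
    """Max probability of ending with wealth >= goal, by backward induction.

    At each step we pick the portfolio whose expected continuation
    probability is highest -- the Bellman recursion in its simplest form.
    """
    if periods == 0:
        return 1.0 if wealth >= goal else 0.0
    best = 0.0
    for p_up, up, down in PORTFOLIOS:
        prob = (p_up * success_prob(wealth * up, goal, periods - 1)
                + (1 - p_up) * success_prob(wealth * down, goal, periods - 1))
        best = max(best, prob)
    return best
```

With one period left and a 5% shortfall, the conservative portfolio wins: its 1.05 up-move reaches the goal with probability 0.9, beating the aggressive portfolio's 0.6. Real implementations memoize over a discretized wealth grid instead of recursing naively.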

Posted 6 hours ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

We are looking for a skilled DevOps Engineer to manage and support our infrastructure, applications, and cloud environments, ensuring high availability and optimal performance.
Key Skills:
• Proficient in Linux
• Knowledge of MySQL, NGINX, Apache
• Skills in disk management, LVM, SSH, FTP, NFS
• Experience with monitoring tools (Nagios, Zabbix, Prometheus, Grafana)
• Administration of MySQL and MS SQL Server
• Java debugging support for applications
• Experience with CI/CD tools (Jenkins, GitLab CI, CircleCI, Travis CI, GitHub Actions)
• Containerization (Docker) and orchestration (Kubernetes, Docker Swarm)
• Cloud platforms (AWS, Azure, GCP) and serverless (Lambda, Azure Functions)
• Log management and analysis (ELK Stack, Splunk)
• Microservices architecture knowledge
• Scripting skills (Bash, Python, PowerShell)
Preferred Qualifications:
• Experience supporting high-availability systems in cloud and on-prem environments
• Certifications in cloud platforms or Kubernetes (preferred)
• Bachelor’s degree in Computer Science, IT, or related field (or equivalent experience)
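Two of the listed skills, scripting and log analysis, often combine in small ad-hoc utilities. As a sketch, the snippet below counts HTTP 5xx responses per request path in NGINX-style access logs; the regex assumes the common "combined" log format and would need adjusting to match your configured log_format:

```python
import re
from collections import Counter

# Matches a typical NGINX "combined" access-log line (format is an assumption)
LOG_RE = re.compile(
    r'^(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) [^"]+" (?P<status>\d{3}) (?P<bytes>\d+)'
)

def count_server_errors(lines):
    """Count HTTP 5xx responses per request path; skip unparseable lines."""
    errors = Counter()
    for line in lines:
        m = LOG_RE.match(line)
        if m and m.group("status").startswith("5"):
            errors[m.group("path")] += 1
    return errors
```

In practice the same aggregation would be a one-line query in the ELK Stack or Splunk, but a script like this is handy on a box with nothing but a shell and the raw files.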

Posted 6 hours ago

Apply

4.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Description
We are seeking a skilled and experienced Java, Spring Boot, and Elasticsearch Developer to join our dynamic team. The ideal candidate will be responsible for designing, developing, and maintaining high-performance Java applications with a focus on Elasticsearch integration. The candidate should have a strong background in Java development, along with expertise in implementing and optimizing Elasticsearch.
Java And Spring Boot Development:
Design, develop, and maintain robust and scalable Java applications.
Collaborate with cross-functional teams to define, design, and ship new features.
Ensure the performance, quality, and responsiveness of applications.
Elasticsearch Integration:
Implement Elasticsearch solutions for efficient data indexing, searching, and retrieval.
Develop and optimize Elasticsearch queries to meet performance and scalability requirements.
Troubleshoot and resolve issues related to Elasticsearch.
Code Review And Optimization:
Conduct code reviews to ensure code quality and adherence to best practices.
Identify and address performance bottlenecks and optimize code for maximum efficiency.
Collaboration And Communication:
Work closely with other developers, product managers, and stakeholders to deliver high-quality solutions.
Communicate effectively with team members and provide technical guidance as needed.
Education:
Bachelor's degree in Computer Science, Engineering, or a related field.
Experience:
Proven experience in Java development with a minimum of 4 years of hands-on experience, including 2 years (or 2 recent projects) of strong hands-on knowledge with full implementation of Elasticsearch and Spring Boot.
Strong knowledge of Spring Boot and its ecosystem.
Significant experience in designing and implementing Elasticsearch solutions.
Strong expertise in Elasticsearch, including indexing, querying, and performance optimization.
Experience with microservices architecture and RESTful API design.
Experience with Spring Boot and RabbitMQ.
Strong skills in in-memory applications, database design, and data integration.
Excellent relationship-building and communication skills; ability to interact and work effectively with all levels.
Skills:
Proficiency in the Java programming language.
Proficiency in Spring Boot.
Experience with RESTful APIs and web services.
Familiarity with relevant tools and frameworks.
Strong in any SQL database.
Strong knowledge of Elasticsearch, including indexing, querying, and performance tuning.
Familiarity with Git and version control.
Preferred Skills:
Experience with containerization technologies (e.g., Docker, Kubernetes).
Knowledge of microservices and any API gateway.
Knowledge of cloud platforms (e.g., AWS, Azure, or GCP).
Familiarity with S3 buckets.
Familiarity with message brokers (e.g., RabbitMQ). (ref:hirist.tech)
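To illustrate the query-optimization skill this posting emphasizes: a common Elasticsearch practice is to put exact-match and range clauses in filter context, where they are cacheable and excluded from relevance scoring. The sketch below only builds such a request body as a plain dict; the field names are placeholders, and actually sending it would require a client library (and, in this listing's stack, would be written with the Java client rather than Python):

```python
def build_search_query(text, status=None, since=None):
    """Compose a bool query: full-text match plus optional filters.

    The `match` clause scores documents; `term` and `range` clauses go in
    filter context so Elasticsearch can cache them and skip scoring them.
    """
    query = {"bool": {"must": [{"match": {"title": text}}], "filter": []}}
    if status is not None:
        query["bool"]["filter"].append({"term": {"status": status}})
    if since is not None:
        query["bool"]["filter"].append({"range": {"created_at": {"gte": since}}})
    return {"query": query, "size": 20}
```

The resulting dict is what you would pass as the body of a `_search` request against an index.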

Posted 6 hours ago

Apply

0 years

0 Lacs

Lucknow, Uttar Pradesh, India

On-site

🚀 AI-FULL STACK INTERN → FUTURE TECH LEAD (MERN / PYTHON + DEVOPS)
Location: Lucknow (Onsite) | Duration: 6 Months → Full-Time
“For rebels who fine-tune Llama 3 before breakfast and argue about Kubernetes over chai. If deploying open-source models on Hetzner at 2 AM excites you—this is your battleground.”
💻 Your War Mission
Build AI-powered business weapons that redefine industries:
⚔️ Deploy open-source giants: Llama 3, Mistral, Phi-3 — optimize for consultative salesbots, customer assistants, and predictive engines.
⚔️ Architect at scale: Melt cloud clusters (AWS/Hetzner/Runpod) with real-time RAG systems, then rebuild them cost-efficient.
⚔️ Lead like a hacker-general: Mentor squads, review PRs mid-deployment, and ship production-grade tools in 48-hour sprints.
⚔️ Bridge chaos to clarity: Turn founder visions into Python + React missiles — no red tape, just impact.
⚔️ Your Arsenal
🧑‍💻 Code Weapons: Python (Flask, Django), Node.js / Express, React / Next.js, MongoDB / Postgres
☁️ Cloud & DevOps Gear: AWS (Lambda, ECS), Hetzner Bare Metal Servers, Runpod GPU Clusters, Docker / Kubernetes, CI/CD Pipelines
🧠 AI / ML Firepower: OSS models (Llama 3, Mistral, DeepSeek), LangChain / LangGraph + custom RAG hacks, HuggingFace Transformers, real-time inference tuning
🧠 Who You Are
✅ Code gladiator with 3+ real projects on GitHub (bonus if containers have escaped into prod).
✅ Cloud insurgent fluent in IaC (Infrastructure as Code) – Hetzner and Runpod are your playground.
✅ Model whisperer – you’ve fine-tuned, quantized, and deployed open weights in real battles.
✅ Startup DNA – problems are loot boxes, not blockers. Permission is for the weak.
💥 Why This Beats Corporate Internships
🔧 Tech Stack: MERN + Python + open-source AI/DevOps fusion (rare combo!)
🚀 Real Impact: Your code goes live to clients – no “simulations” or shadow projects.
🧠 Full Autonomy: You’ll get access to GPU clusters + full architectural freedom.
📈 Growth Path: Fast-track to full-time with competitive compensation + equity.
💼 Culture: No red tape. Just shipping, solving, and high-fives.
🎯 The Deal
Phase 1: Intern (0–6 Months)
Fixed stipend (for the bold, not the comfy)
Ship 2+ client-ready AI products (portfolio > pedigree)
Master open-source model deployment at scale
Phase 2: FTE (Post 6 Months)
Competitive comp + meaningful equity
Lead AI pods with cloud budget autonomy
⚡ Apply If You:
Can optimize Llama 3 APIs on Hetzner while debugging K8s
Believe open-source > closed models for real-world impact
Treat “impossible deadlines” as power-ups
Can start yesterday
📮 How to Apply
Drop your GitHub link (show us your best OSS battle scars)
Write a 1-sentence battle cry: “How I’d deploy Mixtral to crush customer support costs”
Email us at: careers@foodnests.com
Subject line: [OSS GLADIATOR] - {Your Name} - {Cloud War Story}
“We don’t count years. We count models deployed at 3 AM.”
(Top 10 GitHub profiles get early interviews)
#HiringNow #AIInternship #FullStackIntern #OpenSourceAI #MERNStack #PythonDeveloper #DevOpsJobs #LangChain #Runpod #Kubernetes #GitHubHackers #StartupJobs #EngineeringGraduates #BTechLife #LifeAtStartup #NowHiring #HackAndLead #ProductMindset #FullStackLife #GPTDev #AIxEngineering #BuilderNotBystander #StartupTech #GrowthHack #NodejsJobs #PythonDev #AWSCloud #EngineeringLeadership #JaipurTech #MakeStuffReal

Posted 6 hours ago

Apply

5.0 years

0 Lacs

Rajkot, Gujarat, India

On-site

Position Overview
We are looking for a skilled and motivated PHP Developer with 5+ years of professional experience to join our growing development team at HalaCampus in Rajkot, Gujarat. In this role, you will be responsible for developing, maintaining, and enhancing web applications using Laravel and CodeIgniter frameworks. You will work closely with designers, project managers, and other developers to deliver high-quality software solutions that align with client needs and business goals.
Responsibilities
Develop, maintain, and enhance web applications using Laravel and CodeIgniter frameworks.
Collaborate with cross-functional teams to understand project requirements and deliver features on time.
Write clean, well-documented, and efficient code following best practices and coding standards.
Participate in code reviews and contribute to continuous improvements in code quality.
Debug and troubleshoot issues across various browsers and platforms.
Stay current with emerging technologies and propose improvements where appropriate.
Qualifications
Bachelor's degree in Computer Science, Engineering, or a related field.
5+ years of hands-on experience in PHP development, particularly with Laravel and CodeIgniter.
Strong understanding of object-oriented PHP and MVC architecture.
Proficiency in front-end technologies like HTML, CSS, JavaScript, and responsive design.
Experience with database systems such as MySQL, PostgreSQL, or MongoDB.
Solid problem-solving skills and a keen attention to detail.
Ability to communicate technical ideas effectively within a team.
Bonus Skills
Experience with modern JavaScript frameworks like Vue.js or React.js.
Familiarity with RESTful API development and third-party integrations.
Exposure to DevOps tools such as Docker, Jenkins, or cloud platforms (AWS, Azure, GCP).
Understanding of version control tools like Git.
About Us
At HalaCampus, we're not just building software—we’re transforming the future of education technology. Our goal is to make learning more engaging, accessible, and fun for students around the world. We’re a passionate team that values creativity, collaboration, and innovation. If you’re looking for a role that challenges you and gives you the chance to make a real impact, HalaCampus is the place to be.

Posted 6 hours ago

Apply

4.0 - 7.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Summary
Location: Noida
Job Type: Full time
Experience: 4-7 Years
Qualification: B.Tech/BE
Skills: .NET Core, C#, SQL, React.js or Angular, cloud exposure
About the job:
Job Position: Fullstack .NET Developer
Notice Period: Immediate to 30 days
Responsibilities
• Build and maintain scalable full-stack applications using .NET, C#, SQL Server, and ASP.NET/MVC/React/Angular
• Design RESTful APIs and integrate them with frontend frameworks
• Debug complex issues across frontend and backend layers
• Use AI tools (e.g., GitHub Copilot, ChatGPT, AI-based testing or generation) to enhance development productivity
• Implement secure, maintainable, and well-documented code
• Work with cloud services (preferably Azure/AWS) for deployment, configuration, and monitoring
• Participate in code reviews, technical discussions, and mentoring of junior team members
Required Skills
• 4 to 8 years of hands-on experience in the .NET stack
• Strong analytical, debugging, and problem-solving skills
• Experience building REST APIs and full-stack applications
• Familiarity with modern development workflows (Git, CI/CD)
• Demonstrated usage of AI tools in real projects (e.g., Copilot, ChatGPT for refactoring, documentation, code generation)

Posted 6 hours ago

Apply

12.0 - 18.0 years

37 - 55 Lacs

Mumbai

Work from Office

Essential Services: Role & Location Fungibility
At ICICI Bank, we believe in serving our customers beyond our role definition, product boundaries, and domain limitations through our philosophy of customer 360-degree. In essence, this captures our belief in serving the entire banking needs of our customers as One Bank, One Team. To achieve this, employees at ICICI Bank are expected to be role- and location-fungible, with the understanding that banking is an essential service. The role descriptions give you an overview of the responsibilities; they are only directional and guiding in nature.
About the role:
As a SOC Analyst - Detection Engineering in the bank's Security Operations Center (SOC), the individual will be responsible for strengthening the creation and optimization of analytical rules and alerts configured in the bank's SIEM platform.
Key Responsibilities:
Business Understanding: Accountable for ensuring all anomalous security activities are detected by the bank's SIEM platform and false positives are kept to a minimum. You will be responsible for building analytical correlation rules in the bank's SIEM platform covering network, systems and endpoints, cloud (SaaS, IaaS, and PaaS), and applications (both COTS and internally developed).
Collaborate: Verify the ingested logs and ensure log parsing to normalize the events. Implement a testing methodology to test the configured alerts and obtain sign-off before releasing them into production. Provide expert guidance and support to the security operations team in threat hunting and incident investigation. Analyze detected incidents to identify lessons learned, improve response processes, and make recommendations for enhancing security posture.
Reporting: Develop and maintain documentation for analytical rules, processes, and procedures. Stay up to date with the latest trends and developments in cybersecurity and SIEM technologies, and recommend improvements to the organization's security posture.
Qualifications & Skills
Educational Qualification: Engineering graduate in CS, IT, EC, InfoSec, or CyberSec, or MCA equivalent, with experience in cloud security on any of the following: Microsoft Azure, Google Cloud. Ability to develop and implement security policies, procedures, and best practices.
Experience: At least 5 years of experience working as a SOC analyst responsible for creating SIEM rules/alerts. Hands-on experience in creating security alerts in any of the commonly used SIEM solutions is a must.
Certifications: SIEM certification from any of the leading SIEM OEMs (Splunk, Palo Alto, Securonix, LogRhythm, etc.); CEH or CISSP; CCNA Security and/or any of the cloud security certifications (AWS, GCP, Azure, OCI).
Compliance: Knowledge of networking components, servers (RHEL, Windows, etc.), endpoints, and cloud infrastructure, along with machine learning models used for detection of security alerts. Knowledge of various log types, event parsing, and ingestion mechanisms across systems, networks, cloud, and commonly used applications in banks.
Communication Skills: Excellent communication and interpersonal skills.
Synergize with the Team: Work with the designated bank personnel to ensure alignment with RBI guidelines on detection of security alerts applicable to banks. Should have a strong understanding of cybersecurity principles, threat detection, and incident response.
About the Business Group
ICICI Bank’s Information Security Group believes in providing services to its customers in the safest and most secure manner, keeping in mind that data protection for its customers is as important as providing quality banking services across the spectrum. The CIA triad of Confidentiality, Integrity, and Availability is built on the vision of creating a comprehensive information security framework.
The Bank also lays emphasis on customer elements like protection from phishing, adaptive authentication, awareness initiatives, and providing easy-to-use protection and risk-configuration abilities in the hands of customers. With this core responsibility, ICICI administers and promotes ongoing campaigns to create awareness among customers about security aspects while banking through digital channels.
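Many of the correlation rules this role would author reduce to a threshold over a sliding time window. A minimal, SIEM-agnostic sketch of one such detection (the event tuple shape and the default parameters are illustrative; in a real SIEM this logic would be expressed in the platform's rule language over normalized events):

```python
from collections import defaultdict, deque
from datetime import datetime, timedelta

def detect_bruteforce(events, threshold=5, window_minutes=10):
    """Flag users with >= threshold failed logins inside a sliding window.

    `events` is an iterable of (timestamp, user, outcome) tuples, assumed
    sorted by time -- a stand-in for parsed, normalized log events.
    """
    window = timedelta(minutes=window_minutes)
    recent = defaultdict(deque)   # per-user timestamps of recent failures
    alerts = set()
    for ts, user, outcome in events:
        if outcome != "failure":
            continue
        q = recent[user]
        q.append(ts)
        # drop failures that have aged out of the window
        while q and ts - q[0] > window:
            q.popleft()
        if len(q) >= threshold:
            alerts.add(user)
    return alerts
```

Tuning `threshold` and `window_minutes` against real traffic is exactly the false-positive-reduction work the responsibilities describe.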

Posted 6 hours ago

Apply

10.0 - 12.0 years

0 Lacs

Navi Mumbai, Maharashtra, India

On-site

Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.
Your Role And Responsibilities
10-12 years of experience in supporting IT infrastructure.
Must have a good understanding of OS (Windows & Linux), cloud, and infra components like backup/storage and network.
Experience in design, build/setup, and management of end-to-end infrastructure components.
Cloud Architect certification is preferred, or TOGAF-certified architect.
Hands-on knowledge of working with application teams and building landing zones as per business/application requirements.
Ability to write technical solutions, high-level architecture designs, and RFP responses.
Should have a good understanding of application functionality and some development experience for automation, e.g., shell scripting, Terraform/Ansible.
At least 7 years of hands-on experience with the Red Hat OpenShift platform (RHOS) and IT operational experience in a global enterprise environment.
Cloud Architect / Lead Cloud Architect for RHOS Responsibilities:
This position will be responsible for consulting with clients and proposing architectural solutions to help move and improve infrastructure from on-premise to cloud, or to help optimize cloud spend from one public cloud to another.
Be the first one to experiment with new-age cloud offerings; help define best practices as a thought leader for cloud, automation, and DevOps; be a solution visionary and technology expert across multiple channels.
Good understanding of cloud design principles, sizing, multi-zone/cluster setup, resiliency, and DR design.
TOGAF or equivalent certifications.
Use your experience in AWS to build hybrid-cloud solutions for customers.
Multicloud experience is a must.
Provide leadership to project teams, and facilitate the definition of project deliverables around core cloud-based technology and methods.
Preferred Education
Master's Degree
Required Technical And Professional Expertise
Technical Skills: Proficiency in Linux, containerization, and Kubernetes. Experience with Red Hat OpenShift and its ecosystem, including Red Hat OpenShift Origin, Red Hat OpenShift Enterprise, and Red Hat OpenShift Online.
Certifications: Red Hat Certified Engineer (RHCE) or Red Hat Certified Specialist in OpenShift Administration (RHOSA) certification is highly desirable.
Cloud Computing: Experience with cloud computing concepts, including scalability, reliability, and security.
Scripting: Strong scripting skills in languages such as Python, Bash, or Ruby.
Troubleshooting: Strong troubleshooting and analytical skills to identify and resolve complex issues.
Communication: Excellent communication and collaboration skills to work effectively with development teams.
Linux Administration: Experience with Linux administration, including configuration, deployment, and troubleshooting.
Containerization: Experience with containerization, including Docker, Kubernetes, and Red Hat OpenShift.
VMware Administration: Experience with VMware administration, including vSphere, vSAN, and NSX.
VMware Certification: VMware VCP or VCDX certification is highly desirable.
Preferred Technical And Professional Experience
Kubernetes, OpenShift, Linux, VMware

Posted 6 hours ago

Apply

5.0 - 8.0 years

20 - 30 Lacs

Pune, Gurugram

Work from Office

Role Overview:
We are seeking a motivated and talented Tech Lead Python Fullstack Developer. You will play a crucial role in building and enhancing our enterprise solutions and be part of a dynamic startup environment.
Responsibilities:
Lead the design and architecture of scalable AI-driven applications using Python fullstack technologies, ensuring robust frontend-backend integration and optimal system performance.
Drive technical decision-making for complex AI solutions, establishing coding standards, architectural patterns, and technology stack choices across the development lifecycle.
Spearhead the fine-tuning and production deployment of Large Language Models (LLMs) and generative AI solutions, ensuring seamless integration with existing systems.
Design and implement end-to-end AI pipelines, from data ingestion and model training to API development and real-time inference serving.
Take complete ownership of the software development lifecycle, from stakeholder requirement gathering and technical analysis to deployment, monitoring, and post-production maintenance.
Develop high-performance, scalable applications using modern Python frameworks (Django/Flask/FastAPI) with responsive frontend technologies, ensuring an optimal user experience across all touchpoints.
Stay up to date with emerging trends and advancements in Generative AI.
Qualifications:
Mandatory: must have a professional-level cloud certification.
5-8 years of experience as a tech lead or in equivalent roles, with expertise in Python.
A degree in computer science or a relevant field is a plus.
Backend development for a cloud-native application.
Demonstrated interest in Generative AI with hands-on experience in implementing AI models is highly preferred.
Strong problem-solving skills and the ability to troubleshoot complex issues independently.
Excellent communication skills and a collaborative mindset to work effectively within a team.
Self-driven, motivated, and proactive attitude with a willingness to learn and adapt in a fast-paced startup environment.
Benefits:
Competitive compensation package (potentially with performance-based bonuses or startup equity).
Opportunity to work on cutting-edge technology in the emerging field of Generative AI.
A collaborative and inclusive company culture that encourages innovation and growth.
Professional development opportunities and mentorship from experienced industry professionals.
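The end-to-end AI pipelines described in roles like this typically include a retrieval step before the LLM call. A deliberately tiny sketch of retrieval-augmented prompting, using keyword overlap as a stand-in for the vector embeddings a real pipeline would use (all function names and the prompt format are illustrative):

```python
def score(query, doc):
    """Toy relevance score: fraction of query terms present in the document.
    Production systems would use embedding similarity instead."""
    q_terms = set(query.lower().split())
    d_terms = set(doc.lower().split())
    return len(q_terms & d_terms) / len(q_terms) if q_terms else 0.0

def retrieve(query, documents, top_k=2):
    """Return the top_k documents most relevant to the query, best first."""
    ranked = sorted(documents, key=lambda d: score(query, d), reverse=True)
    return ranked[:top_k]

def build_prompt(query, documents):
    """Assemble retrieved context plus the question into one LLM prompt."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

Swapping `score` for cosine similarity over embeddings, and the document list for a vector store, turns this skeleton into the usual RAG architecture without changing its shape.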

Posted 6 hours ago

Apply

6.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Description
AWS Infrastructure Services owns the design, planning, delivery, and operation of all AWS global infrastructure. In other words, we’re the people who keep the cloud running. We support all AWS data centers and all of the servers, storage, networking, power, and cooling equipment that ensure our customers have continual access to the innovation they rely on. We work on the most challenging problems, with thousands of variables impacting the supply chain — and we’re looking for talented people who want to help. You’ll join a diverse team of software, hardware, and network engineers, supply chain specialists, security experts, operations managers, and other vital roles. You’ll collaborate with people across AWS to help us deliver the highest standards for safety and security while providing seemingly infinite capacity at the lowest possible cost for our customers. And you’ll experience an inclusive culture that welcomes bold ideas and empowers you to own them to completion.
Key Job Responsibilities
The Data Center Construction Project Engineer will be responsible for:
Program management using various tools such as MSP, Primavera, Procore, and MS Excel.
Working on and supporting the creation of project and other initiative dashboards for reviews.
Updating and circulating checklists for snag lists, safety inspections, and quality observations.
Establishing communication and coordination across a data center region’s general contractors, stakeholders, and internal teams on site, shell, and room build activities.
Supporting setup of the Procore repository (design management, project execution management, financial management, quality and safety management, etc.) and its utilization by partner teams.
Tracking and stewarding build documentation including design changes, submittals, RFIs, change orders, and invoicing on Procore.
Requesting and reviewing MOPs (Method of Procedure) for proper details, necessity, and risk.
Onboarding new vendors for badging and orientation.
Updating project management milestone dates, correspondence, and documents.
Monitoring delivery of owner-furnished material to site.
Overseeing project closeout efforts, including verification of closeout documents (e.g., as-builts) and ensuring timely financial closeout.
Contributing to specific initiatives aimed at improving project management and execution delivery.
A Day In The Life
The person will be part of the Construction Management team for AWS Data Centers in India. This team is part of the overall Data Center Capacity Delivery (DCCD) team for the APJC (Asia Pacific, Japan, China) region, of which India is a part. The team is led by an India Construction Management lead with two construction clusters/zones: Mumbai and Hyderabad. Each region is led by a Regional Construction Manager who has a team of Construction Managers who plan and execute the respective data center projects. This role will report directly to the Construction Head for India.
About The Team
About AWS
Diverse Experiences: AWS values diverse experiences. Even if you do not meet all of the qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying.
Why AWS? Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.
Inclusive Team Culture: AWS values curiosity and connection. Our employee-led and company-sponsored affinity groups promote inclusion and empower our people to take pride in what makes us unique. Our inclusion events foster stronger, more collaborative teams.
Our continual innovation is fueled by the bold ideas, fresh perspectives, and passionate voices our teams bring to everything we do.

Mentorship & Career Growth
We're continuously raising our performance bar as we strive to become Earth's Best Employer. That's why you'll find endless knowledge-sharing, mentorship, and other career-advancing resources here to help you develop into a better-rounded professional.

Work/Life Balance
We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there's nothing we can't achieve in the cloud.

Basic Qualifications
• Bachelor's degree or Diploma in Civil, Mechanical, or Electrical Engineering, Construction Management, or an equivalent engineering science, OR 6+ years of related construction management experience.
• Proficiency with Microsoft Office tools such as Excel, Word, and PowerPoint.
• A minimum of 3 years' experience in construction management of projects involving civil and mechanical, electrical, and plumbing (MEP) works.

Preferred Qualifications
• Experience working with cross-functional teams to deliver complex construction projects.
• Knowledge of governing building codes and regulations.
• Experience in program management and tools such as Microsoft Project and Primavera.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - ADSIPL - Maharashtra
Job ID: A2943626

Posted 6 hours ago

Apply

7.0 - 12.0 years

15 - 20 Lacs

Bengaluru

Work from Office

1. Job Description
As a Threat Hunter, you will be responsible for proactively identifying, analyzing, and mitigating potential threats across our environments. You will lead threat hunts, leverage data from multiple sources, and apply advanced techniques to detect suspicious behavior and uncover threats. Collaborating with cross-functional teams, you'll refine detection strategies and enhance our overall security posture. This is an exciting opportunity to make a significant impact by driving proactive security measures.

2. Responsibilities
• Performing day-to-day operations as a trusted advisor on advanced threat hunting for the team.
• Leading "hunt missions" using threat intelligence, data from multiple sources, and the results of brainstorming sessions to discover evidence of threats, insider misconduct, or anomalous behavior.
• Utilizing advanced threat hunting techniques and tools to detect, analyze, and respond to anomalous activities. This includes identifying threat actor groups, characterizing suspicious behaviors, identifying traits and C2 infrastructure, and developing network- and host-based IOCs and IOAs.
• Finding evidence of threats or suspicious behavior and leveraging data to improve controls and processes; this requires a blend of investigative, analytical, security, and technical skills.
• Evaluating and making recommendations on security tools and technologies needed to analyze potential threats and determine impact, scope, and recovery.
• Ensuring gaps in detections are socialized with Cyber Security stakeholders; this includes identifying dependencies, making recommendations, and collaborating to mitigate threats.
• Understanding of and experience with MITRE ATT&CK Framework-based threat hunting.
• Acting as a subject matter expert in internal and external audit reviews, including producing and presenting artifacts and executive summaries to support the overall mission.
• Participating in Purple Team, Threat Hunt, and tabletop exercises.
• Working closely with key cross-functional stakeholders to develop and utilize proactive and mitigating measures to prevent, detect, and respond to potential threats to Verizon on-prem and cloud environments.
• Mentoring and advising team members by educating them on advanced threat hunting techniques.
• Experience in threat hunting to find the presence of adversaries within organizational infrastructure.
• Promoting an environment of collaboration and individual accountability when it comes to problem-solving, decision-making, and process improvements.

3. Qualifications
Hinduja Global Solutions Limited
• Bachelor's and/or master's degree in IT Security, Engineering, Computer Science, or a related field/experience.
• 5+ years overall technical experience in threat hunting.
• Deep understanding of common network and application stack protocols, including but not limited to TCP/IP, SMTP, DNS, TLS, XML, HTTP, etc.
• Comprehensive knowledge of utilizing system, cloud, application, and network logs.
• Experience working with IOCs, IOAs, and TTPs.
• Proficient knowledge of the cyber threat landscape, including types of adversaries, campaigns, and the motivations that drive them.
• Proficient knowledge of query and scripting languages such as KQL, Python, and PowerShell.
• Experience with analysis techniques, identifying indicators of compromise, threat hunting, and identification of intrusions and potential incidents.
• Fundamental understanding of tactics, techniques, and procedures related to cybercrime, malware, botnets, hacktivism, social engineering, APTs, and insider threats.
• Knowledge of operating system internals and OS security mitigations, and understanding of security challenges on Windows, Linux, Mac, Android, and iOS platforms.
• Strong understanding of cyber-based adversarial frameworks, including MITRE ATT&CK and Lockheed Martin's Cyber Kill Chain.
• Knowledgeable with regular expressions, YARA and SIGMA rules, AQL- and KQL-type queries, and at least one common scripting language (Perl, Python, PowerShell).
• Excellent analytical and problem-solving skills, with a passion for research and puzzle-solving.
• Excellent cross-group and interpersonal skills, with the ability to articulate the business need for detection improvements.

4. Certifications
Certifications such as the following or similar threat-hunting credentials are highly desirable:
o Certified Threat Hunting Professional (eCTHP)
o Certified Incident Responder (eCIR)
o Certified Digital Forensics Professional (eCDFP)
o GIAC Certified Incident Handler (GCIH)
o GIAC Enterprise Incident Response (GEIR)
o Network+, Security+, CISSP, CISM, GCIH, GCFA, GCFE, GREM, and/or cloud-specific certifications (e.g. AWS Certified Security - Specialty, Microsoft Certified: Azure Security Engineer Associate, Google Cloud Certified Professional Cloud Security Engineer)
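To give a flavor of the scripting side of this role, the sketch below shows a minimal IOC sweep over log lines in Python: extract IPs and domains with regular expressions and match them against a watchlist. All indicator values and log lines here are invented for illustration (the IPs use RFC 5737 test ranges); a real hunt would pull indicators from threat intelligence and query a SIEM rather than a Python list.

```python
import re

# Hypothetical IOC watchlist -- illustrative values only, not real indicators.
IOC_IPS = {"203.0.113.45", "198.51.100.7"}       # RFC 5737 TEST-NET addresses
IOC_DOMAINS = {"c2.example-bad.test"}            # made-up C2 domain

IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
DOMAIN_RE = re.compile(r"\b[a-z0-9.-]+\.[a-z]{2,}\b", re.IGNORECASE)

def hunt(log_lines):
    """Return (line_number, indicator) pairs where a watched IOC appears."""
    hits = []
    for lineno, line in enumerate(log_lines, start=1):
        for ip in IP_RE.findall(line):
            if ip in IOC_IPS:
                hits.append((lineno, ip))
        for dom in DOMAIN_RE.findall(line):
            if dom.lower() in IOC_DOMAINS:
                hits.append((lineno, dom.lower()))
    return hits

logs = [
    "2024-05-01T10:00:00Z ALLOW src=10.0.0.5 dst=93.184.216.34",
    "2024-05-01T10:00:02Z ALLOW src=10.0.0.5 dst=203.0.113.45",
    "2024-05-01T10:00:03Z DNS query c2.example-bad.test from 10.0.0.9",
]
print(hunt(logs))  # flags lines 2 and 3
```

In practice the same matching logic would be expressed as a KQL query or a SIGMA rule so it runs inside the SIEM at scale; the Python form is just the quickest way to prototype an indicator sweep against an ad-hoc log extract.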

Posted 6 hours ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies