
20,025 GCP Jobs - Page 18

JobPe aggregates results for easy access, but applications are submitted directly on the original job portal.

5.0 - 8.0 years

0 Lacs

Chennai

Work from Office

Overview: TekWissen is a global workforce management provider operating throughout India and many other countries. The client below is a global company with shared ideals and a deep sense of family. From its earliest days as a pioneer of modern transportation, it has sought to make the world a better place, one that benefits lives, communities and the planet.

Job Title: Software Engineer Senior
Location: Chennai
Work Type: Onsite

Position Description: As a Software Engineer on our team, you will be instrumental in developing and maintaining key features for our applications. You'll be involved in all stages of the software development lifecycle, from design and implementation to testing and deployment.

Responsibilities:
- Develop and Maintain Application Features: Implement new features and maintain existing functionality for both the front-end and back-end of our applications.
- Front-End Development: Build user interfaces using React or Angular, ensuring a seamless and engaging user experience.
- Back-End Development: Design, develop, and maintain robust and scalable back-end services using [Backend Tech - e.g., Node.js, Python/Django, Java/Spring].
- Cloud Deployment: Deploy and manage applications on Google Cloud Platform (GCP), leveraging services like [GCP Tech - e.g., App Engine, Cloud Functions, Kubernetes].
- Performance Optimization: Identify and address performance bottlenecks to ensure optimal speed and scalability of our applications.
- Code Reviews: Participate in code reviews to maintain code quality and share knowledge with team members.
- Unit Testing: Write and maintain unit tests to ensure the reliability and correctness of our code.
- SDLC Participation: Actively participate in all phases of the software development lifecycle, including requirements gathering, design, implementation, testing, and deployment.
- Collaboration: Work closely with product managers, designers, and other engineers to deliver high-quality software that meets user needs.

Skills Required: Python, GCP, Angular, DevOps
Skills Preferred: API, Tekton, Terraform
Experience Required: 5+ years of professional software development experience
Education Required: Bachelor's Degree

TekWissen Group is an equal opportunity employer supporting workforce diversity.
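The unit-testing responsibility above can be illustrated with a minimal sketch. The `slugify` helper and its expected behaviour are hypothetical examples, not part of the posting.

```python
import unittest

def slugify(title):
    """Hypothetical helper: normalise a job title into a URL slug."""
    return "-".join(title.lower().split())

class SlugifyTest(unittest.TestCase):
    def test_lowercases_and_joins_words(self):
        self.assertEqual(slugify("Software Engineer Senior"),
                         "software-engineer-senior")

    def test_single_word(self):
        self.assertEqual(slugify("Chennai"), "chennai")

if __name__ == "__main__":
    unittest.main(exit=False)
```

Each behaviour gets its own small test case, which is the pattern such roles expect in code review.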

Posted 2 days ago

6.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Title/Position: Data QA Engineer
Job Location: Pune
Employment Type: Full Time
Shift Time: UK Shift

Job Overview: We are seeking a detail-oriented and highly motivated Data Quality Assurance Engineer to join our dynamic team. As a QA Engineer, you will be responsible for designing and executing data validation strategies, identifying anomalies, and collaborating with cross-functional teams to uphold data quality standards across the organization and deliver high-quality software solutions.

Responsibilities:
- Develop and execute test plans, test cases, and automation scripts for ETL pipelines and data validation.
- Perform SQL-based data validation to ensure data integrity, consistency, and correctness.
- Work closely with data engineers to test and validate Data Hub implementations.
- Automate data quality tests using Python and integrate them into CI/CD pipelines.
- Debug and troubleshoot data-related issues across different environments.
- Ensure data security, compliance, and governance requirements are met.
- Collaborate with stakeholders to define and improve testing strategies for big data solutions.

Requirements:
- 6+ years of experience in QA, with a focus on data testing, ETL testing, and data validation.
- Strong proficiency in SQL for data validation, transformation testing, and debugging.
- Proficiency in Python is an added advantage.
- Experience with ETL testing and Data solutions.
- Experience with cloud platforms (AWS, Azure, or GCP) is a plus.
- Strong problem-solving skills and attention to detail.
- Excellent communication skills and ability to work across functional teams.

Company Profile: Stratacent is an IT consulting and services firm headquartered in Jersey City, NJ, with two global delivery centres in the New York City and New Delhi areas, plus offices in London, Canada and Pune, India. We are a leading IT services provider focusing on Financial Services, Insurance, Healthcare and Life Sciences. We help our customers in their digital transformation journey and provide services and solutions around Cloud Infrastructure, Data and Analytics, Automation, Application Development and ITSM. We have partnerships with SAS, Automation Anywhere, Snowflake, Azure, AWS and GCP. URL: http://stratacent.com

Employee Benefits:
• Group Medical Insurance
• Cab facility
• Meals/snacks
• Continuous Learning Program

Stratacent India Private Limited is an equal opportunity employer and will not discriminate against any employee or applicant for employment on the basis of race, color, creed, religion, age, sex, national origin, ancestry, handicap, or any other factors.
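The SQL-based data validation described in this role can be sketched as a source-versus-target comparison of row counts and a simple aggregate checksum. This is an illustrative sketch only: an in-memory SQLite database stands in for the real warehouse, and all table and column names are made up.

```python
import sqlite3

# In-memory SQLite as a stand-in for the warehouse under test.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE src_orders (id INTEGER, amount REAL);
    CREATE TABLE tgt_orders (id INTEGER, amount REAL);
    INSERT INTO src_orders VALUES (1, 10.0), (2, 25.5), (3, 7.25);
    INSERT INTO tgt_orders VALUES (1, 10.0), (2, 25.5), (3, 7.25);
""")

def validate(conn, src, tgt):
    """Compare row counts and an aggregate checksum between two tables."""
    checks = {}
    for name, sql in {
        "row_count": "SELECT COUNT(*) FROM {t}",
        "amount_sum": "SELECT ROUND(SUM(amount), 2) FROM {t}",
    }.items():
        s = conn.execute(sql.format(t=src)).fetchone()[0]
        t = conn.execute(sql.format(t=tgt)).fetchone()[0]
        checks[name] = (s == t)
    return checks

result = validate(conn, "src_orders", "tgt_orders")
```

In a real pipeline the same queries would run against the actual source and target systems, and a failing check would be logged as a defect.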

Posted 2 days ago

5.0 years

20 - 27 Lacs

Chennai, Tamil Nadu, India

On-site

Industry: Information Technology | Database & Infrastructure Services

We are a fast-scaling managed services provider helping enterprises in finance, retail, and digital-native sectors keep mission-critical data available, secure, and high-performing. Our on-site engineering team in India safeguards petabytes of transactional data and drives continuous optimisation across hybrid environments built on open-source technologies.

Role & Responsibilities:
- Administer and optimise PostgreSQL clusters across development, staging, and production workloads.
- Design, implement, and automate backup, recovery, and disaster-recovery strategies with point-in-time restore.
- Tune queries, indexes, and configuration parameters to achieve sub-second response times and minimise resource consumption.
- Configure and monitor logical and streaming replication, high availability, and failover architectures.
- Harden databases with role-based security, encryption, and regular patching aligned to compliance standards.
- Collaborate with DevOps to integrate CI/CD, observability, and capacity planning into release pipelines.

Skills & Qualifications
Must-Have:
- 5+ years PostgreSQL administration in production.
- Expertise in query tuning, indexing, and vacuum strategies.
- Proficiency with Linux shell scripting and automation tools.
- Hands-on experience with replication, HA, and disaster recovery.
Preferred:
- Exposure to cloud-hosted PostgreSQL (AWS RDS, GCP Cloud SQL).
- Knowledge of Ansible, Python, or Kubernetes for infrastructure automation.

Benefits & Culture Highlights:
- Engineer-led culture that values technical depth, peer learning, and continuous improvement.
- Access to enterprise-grade lab environments and funded certifications on PostgreSQL and cloud platforms.
- Competitive salary, health insurance, and clear growth paths into architecture and SRE roles.

Work Location: On-site, India.

Skills: postgresql, shell scripting, vacuum strategies, dba, linux shell scripting, python, disaster recovery, automation tools, cloud-hosted postgresql, indexing, query tuning, replication, ansible, high availability, kubernetes, postgresql administration
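The query-tuning and indexing work above rests on one core move: turning a full table scan into an index lookup. A minimal sketch, with in-memory SQLite standing in for PostgreSQL (the principle carries over, though PostgreSQL uses `EXPLAIN` rather than SQLite's `EXPLAIN QUERY PLAN`, and all names here are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE txns (id INTEGER PRIMARY KEY, account TEXT, amount REAL)")
conn.executemany("INSERT INTO txns (account, amount) VALUES (?, ?)",
                 [(f"acct-{i % 100}", i * 1.5) for i in range(1000)])

QUERY = "SELECT SUM(amount) FROM txns WHERE account = ?"

def plan(conn, sql):
    """Return the query plan as one string."""
    rows = conn.execute("EXPLAIN QUERY PLAN " + sql, ("acct-7",)).fetchall()
    return " ".join(r[-1] for r in rows)

before = plan(conn, QUERY)   # full table scan
conn.execute("CREATE INDEX idx_txns_account ON txns (account)")
after = plan(conn, QUERY)    # index lookup
```

Comparing the plan before and after adding the index is the everyday workflow the "query tuning, indexing" bullet refers to.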

Posted 2 days ago

0 years

0 Lacs

Delhi, India

On-site

Flexing It is a freelance consulting marketplace that connects freelancers and independent consultants with organisations seeking independent talent. Flexing It has partnered with our client, a leading consulting firm, looking to engage an Enterprise/Data Architect.

Key Responsibilities:
1. Enterprise Architecture Analysis:
   a. Assess current enterprise architecture across infrastructure, data, applications, and analytics layers.
   b. Apply TOGAF, MACH or equivalent frameworks to evaluate and redesign architecture for scalability, agility, and compliance.
   c. Assess legacy systems and define modernization roadmaps aligned with digital transformation goals.
2. Data Architecture & Governance:
   a. Analyze current data management practices including ingestion, storage, processing, and governance.
   b. Design a robust, scalable, and compliant target data architecture with a clear classification hierarchy and lifecycle governance.
   c. Define metadata management, data lineage, and master data management (MDM) strategies.
3. IT-OT Integration:
   a. Evaluate integration mechanisms between IT and OT systems (e.g., SAP, SDMS, SCADA, DCS).
   b. Recommend frameworks for secure, real-time data exchange and analytics enablement.
   c. Ensure alignment with OT cybersecurity standards.
4. TOGAF-Based Analysis:
   a. Use TOGAF principles to benchmark and align architecture with global best practices.
   b. Identify gaps and improvement opportunities in the current architecture.
5. Design of "To-Be" Architecture:
   a. Develop a future-ready architecture blueprint.
   b. Ensure compatibility with cloud adoption strategies, cybersecurity frameworks, and business objectives.
   c. Incorporate MACH architecture principles (Microservices, API-first, Cloud-native, Headless) for modular, scalable, and composable enterprise systems.
6. Stakeholder Engagement:
   a. Collaborate with divisional IT teams, data center teams, and business units to gather requirements and validate architectural decisions.
   b. Lead and participate in workshops and whiteboarding sessions to co-create solutions.

Skills Required:
1. Architecture & Frameworks:
   a. Strong understanding of TOGAF, Zachman, or FEAF frameworks.
   b. Experience designing MACH-based architectures (Microservices, API-first, Cloud-native, Headless).
   c. Familiarity with event-driven architecture (EDA) and domain-driven design (DDD).
2. Cloud & Infrastructure:
   a. Experience with cloud platforms (AWS, Azure, GCP) and hybrid hosting models.
   b. Knowledge of containerization (Docker, Kubernetes), CI/CD pipelines, and DevSecOps practices.
   c. Infrastructure modernization including edge computing and IoT integration.
3. Data & Integration:
   a. Expertise in data architecture, data lakes, data mesh, and data fabric concepts.
   b. Proficiency in ETL/ELT tools, data virtualization, and real-time streaming platforms (Kafka, MQTT).
   c. Strong understanding of data governance, privacy laws (e.g., DPDP Act, GDPR), and compliance frameworks.
4. Security & Compliance:
   a. Familiarity with cybersecurity standards (ISO/IEC 27001, NIST, IEC 62443).
   b. Experience implementing zero trust architecture and identity & access management (IAM).
5. Tools & Platforms:
   a. Experience with enterprise architecture tools (e.g., ArchiMate, Sparx EA, LeanIX).
   b. Familiarity with integration platforms (MuleSoft, Dell Boomi, Apache Camel).
   c. Exposure to ERP systems (SAP, Oracle), CRM systems (Oracle, Salesforce, etc.), OT platforms, and industrial protocols.
6. Soft Skills:
   a. Ability to translate business needs into technical architecture.
   b. Strong documentation skills.
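The event-driven architecture (EDA) style named in the skills list can be sketched as an in-process publish/subscribe bus. This is a toy illustration: event names and handlers are hypothetical, and a production system would use a broker such as Kafka or MQTT rather than an in-memory dictionary.

```python
from collections import defaultdict

class EventBus:
    """Minimal in-process pub/sub bus illustrating the EDA pattern."""

    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._handlers[event_type].append(handler)

    def publish(self, event_type, payload):
        # Deliver the event to every handler registered for its type.
        for handler in self._handlers[event_type]:
            handler(payload)

bus = EventBus()
audit_log = []
bus.subscribe("sensor.reading", audit_log.append)
bus.publish("sensor.reading", {"tag": "FT-101", "value": 42.0})
```

The decoupling shown here (publishers never reference subscribers) is what makes EDA attractive for the IT-OT data exchange this role describes.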

Posted 2 days ago

5.0 - 7.0 years

14 - 24 Lacs

Bengaluru

Work from Office

Data Visualization Software Developer Engineer (5-8 Years Experience)

Role Overview: We are looking for a skilled Data Visualization Software Developer Engineer with 6-8 years of experience in developing interactive dashboards and data-driven solutions using Looker and LookerML. The ideal candidate will have expertise in Google Cloud Platform (GCP) and BigQuery and a strong understanding of data visualization best practices. Experience in the media domain (OTT, DTH, Web) will be a plus.

Key Responsibilities:
- Work with BigQuery to create efficient data models and queries for visualization.
- Develop LookML models, explores, and derived tables to support business intelligence needs.
- Optimize dashboard performance by implementing best practices in data aggregation and visualization.
- Collaborate with data engineers, analysts, and business teams to understand requirements and translate them into actionable insights.
- Implement security and governance policies within Looker to ensure data integrity and controlled access.
- Leverage Google Cloud Platform (GCP) services to build scalable and reliable data solutions.
- Maintain documentation and provide training to stakeholders on using Looker dashboards effectively.
- Troubleshoot and resolve issues related to dashboard performance, data accuracy, and visualization constraints.
- Maintain and optimize existing Looker dashboards and reports to ensure continuity and alignment with business KPIs.
- Understand, audit, and enhance existing LookerML models to ensure data integrity and performance.
- Build new dashboards and data visualizations based on business requirements and stakeholder input.
- Collaborate with data engineers to define and validate data pipelines required for dashboard development and ensure the timely availability of clean, structured data.
- Document existing and new Looker assets and processes to support knowledge transfer, scalability, and maintenance.
- Support the transition/handover process by acquiring detailed knowledge of legacy implementations and ensuring a smooth takeover.

Required Skills & Experience:
- 6-8 years of experience in data visualization and business intelligence using Looker and LookerML.
- Strong proficiency in writing and optimizing SQL queries, especially for BigQuery.
- Experience in Google Cloud Platform (GCP), particularly with BigQuery and related data services.
- Solid understanding of data modeling, ETL processes, and database structures.
- Familiarity with data governance, security, and access controls in Looker.
- Strong analytical skills and the ability to translate business requirements into technical solutions.
- Excellent communication and collaboration skills.
- Expertise in Looker and LookerML, including Explore creation, Views, and derived tables.
- Strong SQL skills for data exploration, transformation, and validation.
- Experience in BI solution lifecycle management (build, test, deploy, maintain).
- Excellent documentation and stakeholder communication skills for handovers and ongoing alignment.
- Strong data visualization and storytelling abilities, focusing on user-centric design and clarity.

Preferred Qualifications:
- Experience working in the media industry (OTT, DTH, Web) and handling large-scale media datasets.
- Knowledge of other BI tools like Tableau, Power BI, or Data Studio is a plus.
- Experience with Python or scripting languages for automation and data processing.
- Understanding of machine learning or predictive analytics is an advantage.
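The "derived tables" mentioned above boil down to a dashboard metric precomputed by an aggregate query. A small sketch of that pattern, with in-memory SQLite standing in for BigQuery and made-up media-viewing data (a real Looker derived table would hold the same SQL in its LookML definition):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE plays (user_id INTEGER, title TEXT, minutes REAL);
    INSERT INTO plays VALUES (1, 'News', 10), (1, 'Film', 90),
                             (2, 'Film', 45), (3, 'News', 5);
""")

# The derived table: per-title viewing totals feeding a dashboard tile.
DERIVED = """
    SELECT title, COUNT(*) AS views, SUM(minutes) AS total_minutes
    FROM plays
    GROUP BY title
    ORDER BY total_minutes DESC
"""
top = conn.execute(DERIVED).fetchall()
```

Pre-aggregating like this, instead of scanning raw events on every dashboard load, is the main performance lever the posting's "data aggregation" bullet refers to.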

Posted 2 days ago

3.0 - 6.0 years

14 - 30 Lacs

Delhi, India

On-site

Industry & Sector: A fast-growing services provider in the enterprise data analytics and business-intelligence sector, we deliver high-throughput data pipelines, warehouses, and BI insights that power critical decisions for global BFSI, retail, and healthcare clients. Our on-site engineering team in India ensures the reliability, accuracy, and performance of every dataset that reaches production.

Role & Responsibilities:
- Design, execute, and maintain end-to-end functional, regression, and performance test suites for ETL workflows across multiple databases and file systems.
- Validate source-to-target mappings, data transformations, and incremental loads to guarantee 100% data integrity and reconciliation.
- Develop SQL queries, Unix shell scripts, and automated jobs to drive repeatable test execution, logging, and reporting.
- Identify, document, and triage defects using JIRA/HP ALM, partnering with data engineers to resolve root causes quickly.
- Create reusable test data sets and environment configurations that accelerate Continuous Integration/Continuous Deployment (CI/CD) cycles.
- Contribute to test strategy, coverage metrics, and best-practice playbooks while mentoring junior testers on ETL quality standards.

Skills & Qualifications
Must-Have:
- 3-6 years hands-on ETL testing experience in data warehouse or big-data environments.
- Advanced SQL for complex joins, aggregations, and data profiling.
- Exposure to leading ETL tools such as Informatica, DataStage, or Talend.
- Proficiency in Unix/Linux command-line and shell scripting for job orchestration.
- Solid understanding of SDLC, STLC, and Agile ceremonies; experience with JIRA or HP ALM.
Preferred:
- Automation with Python, Selenium, or Apache Airflow for data pipelines.
- Knowledge of cloud data platforms (AWS Redshift, Azure Synapse, or GCP BigQuery).
- Performance testing of large datasets and familiarity with BI tools like Tableau or Power BI.

Benefits & Culture Highlights:
- Merit-based growth path with dedicated ETL automation upskilling programs.
- Collaborative, process-mature environment that values quality engineering over quick fixes.
- Comprehensive health cover, on-site cafeteria, and generous leave policy to support work-life balance.

Workplace Type: On-site | Location: India | Title Used Internally: ETL Test Engineer.

Skills: agile methodologies, aws redshift, jira, hp alm, datastage, apache airflow, test automation, power bi, selenium, advanced sql, data warehouse, unix/linux, azure synapse, stlc, gcp bigquery, shell scripting, sql, performance testing, agile, python, sdlc, tableau, defect tracking, informatica, etl testing, dimension modeling, talend
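Validating a source-to-target mapping, as described above, means checking that every target row equals the source row with the mapping's transformation applied. A minimal sketch in pure Python; the data and the transformation rule (upper-casing a country code) are hypothetical:

```python
def validate_mapping(source_rows, target_rows, transform):
    """Return the sorted keys whose target value mismatches the
    transformed source value — i.e. the defect list."""
    transformed = {key: transform(value) for key, value in source_rows.items()}
    return sorted(
        key for key in transformed
        if target_rows.get(key) != transformed[key]
    )

source = {1: "in", 2: "us", 3: "uk"}
target = {1: "IN", 2: "US", 3: "gb"}   # row 3 was loaded incorrectly
defects = validate_mapping(source, target, str.upper)
```

In practice the same comparison would be expressed as a SQL `EXCEPT`/minus query between staged source and target tables, with mismatching keys raised as defects in JIRA or HP ALM.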

Posted 2 days ago

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Role: Sr. LLM Engineer: Gen AI
Location: Hyderabad/Gurgaon
Employment Type: Full-time

About the Role
Turing is looking for people with LLM experience to join us in solving business problems for our Fortune 500 customers. You will be a key member of the Turing GenAI delivery organization and part of a GenAI project. You will be required to work with a team of other Turing engineers across different skill sets. In the past, the Turing GenAI delivery organization has implemented industry-leading multi-agent LLM systems, RAG systems, and open-source LLM deployments for major enterprises.

Required skills
• 5+ years of professional experience in building Machine Learning models & systems
• 1+ years of hands-on experience with how LLMs work & Generative AI (LLM) techniques, particularly prompt engineering, RAG, and agents
• Experience in driving the engineering team toward a technical roadmap
• Expert proficiency in programming skills in Python, LangChain/LangGraph and SQL is a must
• Understanding of cloud services, including Azure, GCP, or AWS
• Excellent communication skills to effectively collaborate with business SMEs

Roles & Responsibilities
• Develop and optimize LLM-based solutions: Lead the design, training, fine-tuning, and deployment of large language models, leveraging techniques like prompt engineering, retrieval-augmented generation (RAG), and agent-based architectures.
• Codebase ownership: Maintain high-quality, efficient code in Python (using frameworks like LangChain/LangGraph) and SQL, focusing on reusable components, scalability, and performance best practices.
• Cloud integration: Aid in deployment of GenAI applications on cloud platforms (Azure, GCP, or AWS), optimizing resource usage and ensuring robust CI/CD processes.
• Cross-functional collaboration: Work closely with product owners, data scientists, and business SMEs to define project requirements, translate technical details, and deliver impactful AI products.
• Mentoring and guidance: Provide technical leadership and knowledge-sharing to the engineering team, fostering best practices in machine learning and large language model development.
• Continuous innovation: Stay abreast of the latest advancements in LLM research and generative AI, proposing and experimenting with emerging techniques to drive ongoing improvements in model performance.
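The retrieval-augmented generation (RAG) flow named above can be sketched in miniature: score documents against the question, then assemble the best match into a grounded prompt. This is a deliberately simplified illustration with a made-up corpus and keyword-overlap scoring; a production system (e.g. with LangChain) would use embeddings and a vector store instead.

```python
import re

corpus = [
    "Invoices are approved by the finance team within five days.",
    "VPN access requires a hardware token issued by IT security.",
    "Quarterly reports are published on the internal portal.",
]

def tokens(text):
    """Lower-cased word set, ignoring punctuation."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(question, docs, k=1):
    """Rank documents by keyword overlap with the question."""
    return sorted(docs,
                  key=lambda d: len(tokens(question) & tokens(d)),
                  reverse=True)[:k]

def build_prompt(question, docs):
    context = "\n".join(retrieve(question, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("Who approves invoices?", corpus)
```

The prompt, not the model, carries the retrieved knowledge, which is the core idea that distinguishes RAG from fine-tuning.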

Posted 2 days ago

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Title: Sr. Backend Developer
Location: Gurgaon
Experience: 5+ Years
CTC: Up to 20 LPA
Industry: Tech Product

Job Description: We are looking for a talented and motivated Backend Developer to join our dynamic team. The ideal candidate will have a strong foundation in Node.js, cloud-native development, and AI/LLM integration. In this role, you will design scalable backend systems, deploy LLM models and AI agents (Agnos/LangGraph), and collaborate across teams to build secure, high-performance applications in a cloud-native environment.

Key Responsibilities:
- Build scalable backend services and RESTful APIs using Node.js, Express.js, and NestJS
- Architect and manage cloud-native systems on AWS and GCP
- Integrate AI/LLM agents using Agnos, LangGraph, or related tools
- Set up and manage CI/CD pipelines (AWS CodePipeline, Lambda)
- Implement real-time communication using Socket.IO, Kafka, RabbitMQ, SQS
- Ensure API security, access control, and multi-tenant system design
- Collaborate with frontend teams working on React.js
- Guide and mentor team members technically

Required Skills & Qualifications:
- 5+ years of backend development experience with interactive applications
- Strong command over Node.js, Express.js, NestJS
- Experience working with frontend technologies like React.js
- Advanced knowledge of AWS, GCP, and cloud cost/security optimization
- Familiar with GraphQL, AWS AppSync, Elasticsearch
- Experience deploying LLMs and using Agnos/LangGraph
- Practical understanding of CI/CD pipelines and automation
- Expertise in real-time systems like Kafka, RabbitMQ, Socket.IO, SQS
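The multi-tenant access-control requirement above comes down to one invariant: every request carries a tenant identity, and a resource is only visible to its own tenant. The posting's stack is Node.js; the sketch below uses Python purely to illustrate the check, and all resource names and status codes are hypothetical.

```python
# Toy resource store keyed by id; each resource is owned by one tenant.
RESOURCES = {
    "doc-1": {"tenant": "acme", "body": "Q3 plan"},
    "doc-2": {"tenant": "globex", "body": "Pricing"},
}

def fetch(resource_id, requesting_tenant):
    """Return (status, body); enforce tenant isolation on every read."""
    resource = RESOURCES.get(resource_id)
    if resource is None:
        return 404, None
    if resource["tenant"] != requesting_tenant:
        # Deny cross-tenant access; many designs return 404 here
        # instead, to avoid leaking that the resource exists.
        return 403, None
    return 200, resource["body"]

status, body = fetch("doc-1", "acme")
```

Centralising this check in one place (middleware in Express/NestJS) is what keeps a multi-tenant API safe as endpoints multiply.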

Posted 2 days ago

4.0 - 7.0 years

4 - 9 Lacs

Mumbai

Work from Office

Choosing Capgemini means choosing a company where you will be empowered to shape your career in the way you’d like, where you’ll be supported and inspired by a collaborative community of colleagues around the world, and where you’ll be able to reimagine what’s possible. Join us and help the world’s leading organizations unlock the value of technology and build a more sustainable, more inclusive world.

Responsibilities:
- Design, configure, and manage NetApp storage solutions, including ONTAP, AFF, and FAS series.
- Implement and maintain storage replication, backup, and disaster recovery strategies using SnapMirror, SnapVault, and MetroCluster.
- Perform storage provisioning, troubleshooting, and performance tuning.
- Work with NAS (CIFS/NFS) and SAN (FC/iSCSI) technologies for enterprise environments.
- Support NetApp integration with cloud providers (AWS, Azure, Google Cloud).
- Automate storage management tasks using PowerShell, Python, or Ansible.
- Collaborate with IT teams to ensure high availability, security, and efficiency of storage environments.
- Monitor and resolve storage-related incidents and performance issues.

Required Skills & Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Experience in NetApp storage administration.
- Expertise in NetApp ONTAP, SnapMirror, SnapVault, MetroCluster, and Active IQ.
- Hands-on experience with SAN/NAS protocols (NFS, CIFS, iSCSI, FC).
- Knowledge of cloud-based storage solutions (AWS FSx, Azure NetApp Files, Google Cloud).
- Familiarity with automation tools like PowerShell, Ansible, Python.
- Strong troubleshooting and problem-solving skills.
- NetApp certifications (NCDA, NCIE, or equivalent) are a plus.

Grade Specific: NetApp Storage Admin

Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over-55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fuelled by its market-leading capabilities in AI, cloud and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.
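The storage automation mentioned above often starts with retention logic: deciding which snapshots fall outside the policy and should be pruned. A small sketch in Python; the policy (keep the last N daily snapshots) and the dates are hypothetical, and no NetApp API calls are shown — a real script would pass the resulting list to ONTAP's snapshot-delete interface.

```python
from datetime import date, timedelta

def snapshots_to_prune(snapshot_dates, keep_daily=7, today=None):
    """Return snapshot dates older than the daily retention window."""
    today = today or date.today()
    cutoff = today - timedelta(days=keep_daily)
    return sorted(d for d in snapshot_dates if d < cutoff)

# Ten daily snapshots, Jan 1 through Jan 10.
snaps = [date(2024, 1, 1) + timedelta(days=i) for i in range(10)]
stale = snapshots_to_prune(snaps, keep_daily=7, today=date(2024, 1, 10))
```

Keeping the policy as a pure function like this makes it trivially testable before it is wired into Ansible or a scheduled job.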

Posted 2 days ago

2.0 - 4.0 years

4 - 7 Lacs

Bengaluru

Work from Office

Requirements:
- Strong working experience in Python-based Django and Flask frameworks.
- Experience in developing microservices-based design and architecture.
- Strong programming knowledge in JavaScript, HTML5, Python, RESTful APIs, gRPC APIs.
- Programming experience and object-oriented concepts in Python.
- Knowledge of Python libraries like NumPy, Pandas, Open3D, OpenCV, Matplotlib.
- Knowledge of MySQL/Postgres/MSSQL databases.
- Knowledge of 3D geometry.
- Knowledge of SSO/OpenID Connect/OAuth authentication protocols.
- Working experience with version control systems like GitHub/Bitbucket/GitLab.
- Familiarity with continuous integration and continuous deployment (CI/CD) pipelines.
- Basic knowledge of image processing.
- Knowledge of data analysis and data science.
- Strong communication skills.
- Very good analytical and logical thinking from different perspectives.
- Ability to handle challenges and resolve blockers.
- Good team player, proactive in giving new ideas/suggestions/solutions and constructive analysis of team members' ideas.
- Interested in working in a fast-paced, Agile software development team.

Good to Have:
- Knowledge of other programming languages like C, C++.
- Knowledge of the basics of machine learning.
- Exposure to NoSQL databases.
- Knowledge of GCP/AWS/Azure cloud.

Works in the area of Software Engineering, which encompasses the development, maintenance and optimization of software solutions/applications.
1. Applies scientific methods to analyse and solve software engineering problems.
2. He/she is responsible for the development and application of software engineering practice and knowledge, in research, design, development and maintenance.
3. His/her work requires the exercise of original thought and judgement and the ability to supervise the technical and administrative work of other software engineers.
4. The software engineer builds skills and expertise of his/her software engineering discipline to reach standard software engineer skills expectations for the applicable role, as defined in Professional Communities.
5. The software engineer collaborates and acts as a team player with other software engineers and stakeholders.

Grade Specific: Has more than a year of relevant work experience. Solid understanding of programming concepts, software design and software development principles. Consistently works to direction with minimal supervision, producing accurate and reliable results. Individuals are expected to be able to work on a range of tasks and problems, demonstrating their ability to apply their skills and knowledge. Organises own time to deliver against tasks set by others with a mid-term horizon. Works co-operatively with others to achieve team goals, has a direct and positive impact on project performance, and makes decisions based on their understanding of the situation, not just the rules.
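The 3D-geometry knowledge this posting asks for can be illustrated with the most basic operation: rotating a point about an axis. A pure-Python sketch (a real pipeline would apply the same rotation matrix to whole point clouds with NumPy or Open3D):

```python
import math

def rotate_z(point, angle_rad):
    """Rotate a 3D point about the z-axis by the given angle."""
    x, y, z = point
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    # Standard 2D rotation applied to the x-y plane; z is unchanged.
    return (c * x - s * y, s * x + c * y, z)

# Rotating (1, 0, 0) by 90 degrees should land on the y-axis.
rotated = rotate_z((1.0, 0.0, 0.0), math.pi / 2)
```

The same matrix, batched over thousands of points, is the building block of the registration and alignment work libraries like Open3D automate.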

Posted 2 days ago

7.0 - 12.0 years

6 - 10 Lacs

Mumbai

Work from Office

Choosing Capgemini means choosing a company where you will be empowered to shape your career in the way you’d like, where you’ll be supported and inspired by a collaborative community of colleagues around the world, and where you’ll be able to reimagine what’s possible. Join us and help the world’s leading organizations unlock the value of technology and build a more sustainable, more inclusive world.

Your Role
As a SAP HANA on RISE Consultant with 7 to 12 years of experience, you will be responsible for guiding customers through their digital transformation journey by leveraging SAP’s cloud-based RISE platform. You will support the migration, deployment, and optimization of SAP S/4HANA systems on RISE, ensuring high availability, scalability, and performance in a cloud environment.
- Lead or support SAP S/4HANA migrations to RISE with SAP (public/private cloud)
- Collaborate with BASIS, infrastructure, and cloud teams for system setup and operations
- Ensure system performance, security, and compliance in cloud-hosted environments
- Provide technical guidance on upgrades, patches, and integration with other SAP modules

Your Profile
- 2+ years of relevant experience in SAP HANA and cloud-based SAP landscapes
- Hands-on experience with RISE with SAP, including provisioning and migration
- Strong knowledge of SAP S/4HANA architecture and cloud infrastructure (AWS, Azure, GCP)
- Familiarity with SAP BTP, SAP Cloud ALM, and SAP Activate methodology
- Excellent problem-solving and communication skills

What You'll Love About Working Here
We value flexibility and support your work-life balance. Enjoy remote work options tailored to your lifestyle. Benefit from flexible working hours to suit your personal needs. Advance your career with structured growth programs. Access certifications in SAP and leading cloud platforms like AWS and Azure. Stay ahead in your field with continuous learning opportunities.

Posted 2 days ago

7.0 - 11.0 years

10 - 14 Lacs

Bengaluru

Work from Office

Cloud Infrastructure Architects define, design and deliver a comprehensive and coherent transformation implementation across Business, Information, Systems and Technology through strong technical Cloud and Infrastructure expertise. They design the entire Cloud- and Infrastructure-based IT lifecycle to deliver business change, which may be enabled by Cloud.

Grade Specific: Managing Cloud Infrastructure Architect
- Design, deliver and manage complete cloud infrastructure architecture solutions.
- Demonstrate leadership of topics in the architect community and show a passion for technology and business acumen.
- Work as a stream lead at CIO/CTO level for an internal or external client.
- Lead Capgemini operations relating to market development and/or service delivery excellence.
- Are seen as a role model in their (local) community.
- Certification: preferably Capgemini Architects certification level 2 or above, relevant cloud and infrastructure certifications, IAF and/or industry certifications such as TOGAF 9 or equivalent.

Skills (competencies): Agile (Software Development Framework), Analytical Thinking, AWS Architecture, Business Acumen, Capgemini Integrated Architecture Framework (IAF), Change Management, Cloud Architecture, Coaching, Collaboration, Commercial Awareness, DevOps, Google Cloud Platform (GCP), Influencing, Innovation, Knowledge Management, Managing Difficult Conversations, Network Architecture, Risk Assessment, Risk Management, SAFe, Stakeholder Management, Storage Architecture, Storytelling, Strategic Planning, Strategic Thinking, Sustainability Awareness, Technical Governance, Verbal Communication, Written Communication

Posted 2 days ago

7.0 - 10.0 years

10 - 14 Lacs

Bengaluru

Work from Office

Works in the area of Software Engineering, which encompasses the development, maintenance and optimization of software solutions/applications.
1. Applies scientific methods to analyse and solve software engineering problems.
2. He/she is responsible for the development and application of software engineering practice and knowledge, in research, design, development and maintenance.
3. His/her work requires the exercise of original thought and judgement and the ability to supervise the technical and administrative work of other software engineers.
4. The software engineer builds skills and expertise of his/her software engineering discipline to reach standard software engineer skills expectations for the applicable role, as defined in Professional Communities.
5. The software engineer collaborates and acts as a team player with other software engineers and stakeholders.

Grade Specific: Is highly respected, experienced and trusted. Masters all phases of the software development lifecycle and applies innovation and industrialization. Shows a clear dedication and commitment to business objectives and responsibilities and to the group as a whole. Operates with no supervision in highly complex environments and takes responsibility for a substantial aspect of Capgemini’s activity. Is able to manage difficult and complex situations calmly and professionally. Considers ‘the bigger picture’ when making decisions and demonstrates a clear understanding of commercial and negotiating principles in less-easy situations. Focuses on developing long-term partnerships with clients. Demonstrates leadership that balances business, technical and people objectives. Plays a significant part in the recruitment and development of people.

Skills (competencies): Verbal Communication

Posted 2 days ago

Apply

9.0 - 13.0 years

13 - 18 Lacs

Hyderabad

Work from Office


This role involves the development and application of engineering practice and knowledge in defining, configuring and deploying industrial digital technologies, including but not limited to PLM and MES, for managing continuity of information across the engineering enterprise (design, industrialization, manufacturing, supply chain) and for managing manufacturing data. - Grade Specific: Focus on Digital Continuity Manufacturing. Fully competent in own area. Acts as a key contributor in a more complex, critical environment. Proactively acts to understand and anticipate client needs. Manages costs and profitability for a work area. Manages own agenda to meet agreed targets. Develops plans for projects in own area. Looks beyond the immediate problem to the wider implications. Acts as a facilitator and coach, and moves teams forward.

Posted 2 days ago

Apply

7.0 - 10.0 years

10 - 14 Lacs

Bengaluru

Work from Office


The engineer is expected to help set up automation and CI/CD pipelines across some of the new frameworks being set up by the Blueprints & Continuous Assurance squad. Our squad is working on multiple streams to improve the cloud security posture for the bank. Required skills: Strong hands-on experience with, and understanding of, the GCP cloud. Strong experience with automation and familiarity with one or more scripting languages such as Python, Go, etc. Knowledge of and experience with an infrastructure-as-code language such as Terraform (preferred), CloudFormation, etc. Ability to quickly learn the frameworks and tech stack used and contribute towards the goals of the squad. Skills (competencies): Verbal Communication

Posted 2 days ago

Apply

4.0 - 6.0 years

3 - 7 Lacs

Chennai

Work from Office


Choosing Capgemini means choosing a company where you will be empowered to shape your career in the way you'd like, where you'll be supported and inspired by a collaborative community of colleagues around the world, and where you'll be able to reimagine what's possible. Join us and help the world's leading organizations unlock the value of technology and build a more sustainable, more inclusive world. Experience with building cloud-vendor-agnostic SaaS products. Experience in Java and Spring Boot microservices, deployed as containers in a Kubernetes ecosystem. In-depth understanding of microservices architectures; technological familiarity with public/private/hybrid cloud, OpenStack, GCE, Kubernetes, AWS. Deep understanding of building APIs/services that are built on top of MQs (RabbitMQ, Kafka, NATS, etc.), that use caches like Redis or Memcached to improve the performance of the platform, and that scale to millions of users in a cloud environment such as private cloud, GCP, AWS, Azure, etc. Good to have: OAuth, OpenID, and SAML-based authentication experience.

Posted 2 days ago

Apply

6.0 - 11.0 years

30 - 45 Lacs

Bengaluru

Hybrid


Key Skills: Java, Python, Core Java, Kubernetes, Docker, GCP, AWS Cloud, Azure, MS SQL Server. Roles and Responsibilities: Design and build robust, scalable, and secure backend systems using Java, Python, or similar technologies. Contribute to system architecture, code reviews, and software design best practices. Collaborate with cross-functional teams including product, QA, DevOps, and frontend engineers. Work with containerization and orchestration tools like Docker and Kubernetes. Build and deploy cloud-native applications on AWS, GCP, or Azure. Drive CI/CD implementation and DevOps automation. Mentor junior engineers and foster a strong engineering culture. Continuously identify performance improvements and bottlenecks. Skills Required: Must-Have: 4+ years of hands-on experience in software engineering. Strong coding skills in Java, Python, or a similar backend language. Solid understanding of Core Java concepts and software design patterns. Experience with microservices, REST APIs, and distributed system design. Proficiency in Docker, Kubernetes, and cloud platforms like AWS, GCP, or Azure. Familiarity with SQL databases (e.g., MS SQL Server). Strong communication and problem-solving skills. Nice-to-Have: Frontend experience using React/Angular/Vue. Exposure to CI/CD tools and DevOps culture. Familiarity with data engineering or ML pipelines. Experience working in a fast-paced startup environment. Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field, or equivalent practical experience.

Posted 2 days ago

Apply

5.0 years

20 - 27 Lacs

Greater Kolkata Area

On-site


Industry: Information Technology | Database & Infrastructure Services We are a fast-scaling managed services provider helping enterprises in finance, retail, and digital-native sectors keep mission-critical data available, secure, and high-performing. Our on-site engineering team in India safeguards petabytes of transactional data and drives continuous optimisation across hybrid environments built on open-source technologies. Role & Responsibilities Administer and optimise PostgreSQL clusters across development, staging, and production workloads. Design, implement, and automate backup, recovery, and disaster-recovery strategies with point-in-time restore. Tune queries, indexes, and configuration parameters to achieve sub-second response times and minimise resource consumption. Configure and monitor logical and streaming replication, high availability, and failover architectures. Harden databases with role-based security, encryption, and regular patching aligned to compliance standards. Collaborate with DevOps to integrate CI/CD, observability, and capacity planning into release pipelines. Skills & Qualifications Must-Have 5+ years PostgreSQL administration in production. Expertise in query tuning, indexing, and vacuum strategies. Proficiency with Linux shell scripting and automation tools. Hands-on experience with replication, HA, and disaster recovery. Preferred Exposure to cloud-hosted PostgreSQL (AWS RDS, GCP Cloud SQL). Knowledge of Ansible, Python, or Kubernetes for infrastructure automation. Benefits & Culture Highlights Engineer-led culture that values technical depth, peer learning, and continuous improvement. Access to enterprise-grade lab environments and funded certifications on PostgreSQL and cloud platforms. Competitive salary, health insurance, and clear growth paths into architecture and SRE roles. Work Location: On-site, India. 
Skills: PostgreSQL, shell scripting, vacuum strategies, DBA, Linux shell scripting, Python, disaster recovery, automation tools, cloud-hosted PostgreSQL, indexing, query tuning, replication, Ansible, high availability, Kubernetes, PostgreSQL administration
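For the replication and failover monitoring this role covers, standby lag is usually measured by comparing write-ahead-log positions. A minimal pure-Python sketch, assuming LSN strings in PostgreSQL's "high/low" hex form; in practice the inputs would come from `pg_current_wal_lsn()` on the primary and `pg_last_wal_replay_lsn()` on the standby, and the function names here are illustrative:

```python
def lsn_to_int(lsn: str) -> int:
    """Convert a PostgreSQL LSN like '0/16B3748' to an absolute byte position.

    An LSN is two hex numbers separated by a slash: the high 32 bits
    and the low 32 bits of a 64-bit WAL byte offset.
    """
    high, low = lsn.split("/")
    return (int(high, 16) << 32) | int(low, 16)


def replication_lag_bytes(primary_lsn: str, standby_lsn: str) -> int:
    """Bytes of WAL the standby still has to replay to catch up."""
    return lsn_to_int(primary_lsn) - lsn_to_int(standby_lsn)


if __name__ == "__main__":
    # Example values only; real monitoring would query both servers.
    print(replication_lag_bytes("1/2000000", "1/1FFFF00"))  # 256 bytes behind
```

A lag alert is then just a threshold on this number, which is how most PostgreSQL HA dashboards express replica health.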

Posted 2 days ago

Apply

3.0 - 6.0 years

14 - 30 Lacs

Greater Kolkata Area

On-site


Industry & Sector: A fast-growing services provider in the enterprise data analytics and business-intelligence sector, we deliver high-throughput data pipelines, warehouses, and BI insights that power critical decisions for global BFSI, retail, and healthcare clients. Our on-site engineering team in India ensures the reliability, accuracy, and performance of every dataset that reaches production. Role & Responsibilities Design, execute, and maintain end-to-end functional, regression, and performance test suites for ETL workflows across multiple databases and file systems. Validate source-to-target mappings, data transformations, and incremental loads to guarantee 100% data integrity and reconciliation. Develop SQL queries, Unix shell scripts, and automated jobs to drive repeatable test execution, logging, and reporting. Identify, document, and triage defects using JIRA/HP ALM, partnering with data engineers to resolve root causes quickly. Create reusable test data sets and environment configurations that accelerate Continuous Integration/Continuous Deployment (CI/CD) cycles. Contribute to test strategy, coverage metrics, and best-practice playbooks while mentoring junior testers on ETL quality standards. Skills & Qualifications Must-Have: 3-6 years hands-on ETL testing experience in data warehouse or big-data environments. Advanced SQL for complex joins, aggregations, and data profiling. Exposure to leading ETL tools such as Informatica, DataStage, or Talend. Proficiency in Unix/Linux command-line and shell scripting for job orchestration. Solid understanding of SDLC, STLC, and Agile ceremonies; experience with JIRA or HP ALM. Preferred: Automation with Python, Selenium, or Apache Airflow for data pipelines. Knowledge of cloud data platforms (AWS Redshift, Azure Synapse, or GCP BigQuery). Performance testing of large datasets and familiarity with BI tools like Tableau or Power BI. 
Benefits & Culture Highlights Merit-based growth path with dedicated ETL automation upskilling programs. Collaborative, process-mature environment that values quality engineering over quick fixes. Comprehensive health cover, on-site cafeteria, and generous leave policy to support work-life balance. Workplace Type: On-site | Location: India | Title Used Internally: ETL Test Engineer. Skills: agile methodologies, AWS Redshift, JIRA, HP ALM, DataStage, Apache Airflow, test automation, Power BI, Selenium, advanced SQL, data warehouse, Unix/Linux, Azure Synapse, STLC, GCP BigQuery, shell scripting, SQL, performance testing, Agile, Python, SDLC, Tableau, defect tracking, Informatica, ETL testing, dimensional modeling, Talend
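The source-to-target reconciliation this listing describes is commonly automated with row counts plus order-insensitive checksums. A minimal sketch over in-memory rows (the function names and row shape are illustrative; a real suite would push the equivalent aggregation into SQL on both source and target):

```python
import hashlib


def table_fingerprint(rows, key_columns):
    """Row count plus an order-insensitive digest of selected columns.

    XOR of per-row hashes ignores row order; note it also cancels out
    exact duplicate pairs, so real suites compare key counts too.
    """
    digest = 0
    for row in rows:
        canonical = "|".join(str(row[c]) for c in key_columns)
        digest ^= int(hashlib.sha256(canonical.encode()).hexdigest(), 16)
    return len(rows), digest


def reconcile(source_rows, target_rows, key_columns):
    """True when counts and fingerprints match between source and target."""
    return table_fingerprint(source_rows, key_columns) == table_fingerprint(
        target_rows, key_columns
    )


source = [{"id": 1, "amt": 10}, {"id": 2, "amt": 20}]
target = [{"id": 2, "amt": 20}, {"id": 1, "amt": 10}]  # same data, new order
print(reconcile(source, target, ["id", "amt"]))  # True
```

The same comparison generalizes to incremental loads by fingerprinting only the partition or date range that changed.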

Posted 2 days ago

Apply

7.0 - 12.0 years

14 - 18 Lacs

Kolkata

Remote


Senior DevOps Engineer (Infrastructure & Platform Specialist). Department: Product and Engineering. Location: Remote / Kolkata, WB (On-site). Job Summary: A Senior DevOps Engineer is responsible for designing, implementing, and maintaining the operational aspects of cloud infrastructure. Their goal is to ensure high availability, scalability, performance, and security of cloud-based systems. Key Responsibilities: Design and maintain scalable, reliable, and secure cloud infrastructure. Address integration challenges and data consistency. Choose appropriate cloud services (e.g., compute, storage, networking) based on business needs. Define architectural best practices and patterns (e.g., microservices, serverless, containerization). Ensure version control and repeatable deployments of infrastructure. Automate cloud operations tasks (e.g., deployments, patching, backups). Implement CI/CD pipelines using tools like Jenkins, GitHub Actions, GitLab CI, etc. Design and implement cloud monitoring and alerting systems (e.g., CloudWatch, Azure Monitor, Prometheus, Datadog, ManageEngine). Optimize performance, resource utilization, and cost across environments. Capacity planning of resources. Resource planning and deployment (HW, SW, Capex). Financial forecasting; tracking and management of the allotted budget. Cost optimization with proper architecture and open-source technologies. Ensure cloud systems follow security best practices (e.g., encryption, IAM, zero-trust principles, VAPT). Implement compliance controls (e.g., HIPAA, GDPR, ISO 27001). Conduct regular security audits and assessments. Build systems for high availability, failover, disaster recovery, and business continuity. Participate in incident response and post-mortems. Implement and manage Service Level Objectives (SLOs) and Service Level Indicators (SLIs). Work closely with development, security, and IT teams to align cloud operations with business goals.
Define governance standards for cloud usage, billing, and resource tagging. Provide guidance and mentorship to DevOps and engineering teams. Keep infrastructure/deployment documentation up to date. Interact with prospective customers in pre-sales meetings to showcase the architecture and security layer of the product and answer questions. Key Skills & Qualifications: Technical Skills: VM provisioning and infrastructure ops on AWS, GCP, or Azure. Experience with API gateways (Kong, AWS API Gateway, NGINX). Experience managing MySQL and MongoDB on self-hosted infrastructure. Operational expertise with Elasticsearch or Solr. Proficient with Kafka, RabbitMQ, or similar message brokers. Hands-on experience with Airflow, Temporal, or other workflow orchestration tools. Familiarity with Apache Spark, Flink, Confluent/Debezium or similar streaming frameworks. Strong skills in Docker, Kubernetes, and deployment automation. Experience writing IaC with Terraform, Ansible, or CloudFormation. Building and maintaining CI/CD pipelines (GitLab, GitHub Actions, Jenkins). Experience with monitoring/logging stacks like Prometheus, Grafana, ELK, or Datadog. Sound knowledge of networking fundamentals (routing, DNS, VPN, TLS/SSL, firewalls). Experience designing and managing HA/DR/BCP infrastructure. Bonus Skills: Prior involvement in SOC 2 / ISO 27001 audits or documentation. Hands-on with VAPT processes, especially working directly with clients or security partners. Scripting in Go, in addition to Bash/Python. Exposure to service mesh tools like Istio or Linkerd. Experience: Must have 7+ years of experience as a DevOps Engineer.
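One concrete slice of the governance work named above (standards for resource tagging) can be sketched as a compliance check. The required-tag set and inventory shape here are assumptions for illustration, not any cloud provider's API; real pipelines would feed this from a Terraform state or an asset-inventory export:

```python
REQUIRED_TAGS = {"owner", "cost-centre", "environment"}  # assumed policy


def untagged_violations(resources):
    """Return {resource_name: missing_tags} for every resource that fails
    the tagging policy; an empty dict means the inventory is compliant."""
    violations = {}
    for res in resources:
        missing = REQUIRED_TAGS - set(res.get("tags", {}))
        if missing:
            violations[res["name"]] = sorted(missing)
    return violations


inventory = [
    {"name": "vm-api-1",
     "tags": {"owner": "platform", "cost-centre": "42", "environment": "prod"}},
    {"name": "bucket-logs", "tags": {"owner": "platform"}},
]
print(untagged_violations(inventory))
# {'bucket-logs': ['cost-centre', 'environment']}
```

Wired into CI, a non-empty result fails the pipeline, which is how tagging standards typically get enforced rather than merely documented.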

Posted 2 days ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Job Description WHO WE ARE Led by the Chief Information Security Officer (CISO), Technology Risk secures Goldman Sachs against hackers and other cyber threats. We are responsible for detecting and preventing attempted cyber intrusions against the firm, helping the firm develop more secure applications and infrastructure, developing software in support of our efforts, measuring cybersecurity risk, and designing and driving implementation of cybersecurity controls. The team has global presence across the Americas, APAC, India and EMEA. Within Technology Risk, Advisory is the consultative and technology subject matter expertise arm, responsible for assessing new technology initiatives for risk, partnering with engineers to architect and design secure products and services, embedding implementation reviews as part of the SDLC and CI/CD pipeline via code analysis and penetration testing, and guiding technology innovation in terms of security and control across Goldman Sachs. The team plays a critical role in designing and assessing controls for our transition to building native public cloud applications. YOUR IMPACT In this role, you will join the global Secure SDLC (S-SDLC) team within Technology Risk – the team is responsible for the identification of software security flaws, along with providing security assurance advice and guidance to the engineers to help them manage application risks. You will interact with all parts of the firm giving you the opportunity to grow within the Technology Risk team as well as other divisions within the firm. The ideal candidate should have experience of integrating, and tuning, software security controls within continuous deployment SDLC, ability to review, triage and remediate findings by interfacing with the Business Units and help raise developer security awareness. 
How You Will Fulfill Your Potential The Secure-SDLC team is responsible for the identification of software security flaws, along with providing security assurance advice and guidance to the engineers to help them manage application risks. You will become a highly committed trusted Risk Advisor with the discipline and interpersonal skills to work in a global environment communicating the impact of technology risks and the approach to mitigation and acceptance. You will provide Technology Risk Advisory risk assessment and advisory services to engineers as part of the Technology Risk function. Job Responsibilities Lead and/or support static, dynamic and security awareness services. Drive adoption of application security controls within Software Development Life Cycle (SDLC). Review issues identified by S-SDLC tools, ensuring compliance to established review SLAs. Interface with Business Units, provide advice and consultation, to help remediate issues identified by S-SDLC tools. Develop, and customise rules, to improve detection capability of S-SDLC tools. Help engineer tools and solutions that facilitate the adoption of security controls. Develop Proof-of-Concepts (PoC), to be shown as solutions, and handover to Engineering for broader rollout. Work with engineers to develop customized security testing strategy to complement the existing security testing program managed by Technology Risk. Be responsible to communicate program to broader developers’ community for solutions that might impact Developer Experience (DevEx). Be responsible for the awareness, training and guidance on security related issues. Conduct product evaluation of solutions that may benefit the S-SDLC program. Basic Qualifications You will use your strong technical, interpersonal, organizational, written, and verbal communication skills to interact with your internal clients locally and globally. 
Your knowledge of Software Development Lifecycle (SDLC), Application Security and Risk Management techniques and methodologies will enable you to be an active member of the team along with your professional experience in one, or more, of the following disciplines: Ability to explain common secure coding practices and application security vulnerabilities, based on guidance from the industry recognised cybersecurity frameworks and standards e.g. NIST Cyber Security Framework and OWASP. Ability to engage technical client base of engineers and communicate security requirements, potential risks, and influence development practices. Ability to communicate security flaws in a clear and concise manner to a broad range of audience from engineers, SMEs to senior management and provide clear remediation guidance. Experience with software development methodologies e.g. Agile, DevOps etc. Fluent in at least one major programming language (e.g. Java, Python, Go etc.) Working knowledge of CI/CD platforms e.g. Gitlab, AWS Code Commit and Deploy (or similar). Intermediate Knowledge of DevSecOps solutions i.e. ability to review identified findings, conduct analysis (e.g. impact, accuracy etc.), develop and customise detection capability of one or more of the following solutions: Static Application Security Testing (SAST) Dynamic/Interactive Application Security Testing (DAST/IAST) Software Composition Analysis (SCA) Infrastructure as Code (IaC) Container Security Mobile Security Preferred Qualifications Project management skills Knowledge of Cloud (AWS, GCP, Azure) and Cloud Security applications #TechRiskCybersecurity About Goldman Sachs At Goldman Sachs, we commit our people, capital and ideas to help our clients, shareholders and the communities we serve to grow. Founded in 1869, we are a leading global investment banking, securities and investment management firm. Headquartered in New York, we maintain offices around the world. 
We believe who you are makes you better at what you do. We're committed to fostering and advancing diversity and inclusion in our own workplace and beyond by ensuring every individual within our firm has a number of opportunities to grow professionally and personally, from our training and development opportunities and firmwide networks to benefits, wellness and personal finance offerings and mindfulness programs. Learn more about our culture, benefits, and people at GS.com/careers . We’re committed to finding reasonable accommodations for candidates with special needs or disabilities during our recruiting process. Learn more: https://www.goldmansachs.com/careers/footer/disability-statement.html © The Goldman Sachs Group, Inc., 2024. All rights reserved. Goldman Sachs is an equal employment/affirmative action employer Female/Minority/Disability/Veteran/Sexual Orientation/Gender Identity
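The S-SDLC work described in this listing includes developing and customising detection rules for SAST tooling. As a deliberately toy illustration (not any real tool's API; the rule names and patterns are invented, and production SAST relies on parsing and data-flow analysis rather than regexes), a pattern-based scan might look like:

```python
import re

# Toy rules in the spirit of customizable SAST checks.
RULES = {
    "python-eval": re.compile(r"\beval\s*\("),
    "hardcoded-secret": re.compile(r"(?i)(password|secret)\s*=\s*['\"]"),
}


def scan(source: str):
    """Return a list of (line_number, rule_id) findings for the source text."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for rule_id, pattern in RULES.items():
            if pattern.search(line):
                findings.append((lineno, rule_id))
    return findings


sample = 'user = "bob"\npassword = "hunter2"\nresult = eval(expr)\n'
print(scan(sample))  # [(2, 'hardcoded-secret'), (3, 'python-eval')]
```

Triage of such findings (filtering false positives, tracking SLAs per issue) is then the review workflow the role owns.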

Posted 2 days ago

Apply

4.0 - 7.0 years

10 - 15 Lacs

Bengaluru

Work from Office


At Allstate, great things happen when our people work together to protect families and their belongings from life's uncertainties. And for more than 90 years our innovative drive has kept us a step ahead of our customers' evolving needs: from advocating for seat belts, air bags and graduated driving laws, to being an industry leader in pricing sophistication, telematics, and, more recently, device and identity protection. The GenAI Cloud Engineer is a full-stack engineer who builds and operates the cloud application development and hosting platforms for Allstate. This role will have the primary accountability of owning, developing, implementing and operating GenAI cloud platforms. This role will also encompass developing, building, administering, and deploying self-service tools that enable Allstate developers to build, deploy and operate artificial intelligence applications to solve our most complex business challenges. As a GenAI Engineer, they will be part of an engineering team primarily working in a paired-programming setting, collaborating with different team members. They will split time evenly between executing operational tasks to maintain the platform and servicing customer requests, and engineering new solutions to automate the build and operational tasks. They will serve as pair anchors, being advocates of paired programming, test-driven development, infrastructure engineering, and continuous delivery on the team. Key Responsibilities Serves as an anchor to enable the Digital product team on the GenAI analytics platform. This includes delivering product and solution briefings, creating demos, executing proof-of-concept projects, and collaborating directly with product management to prioritize solutions that drive consumer adoption of Azure AI Foundry, AI Services, OpenAI, Agentic AI, AWS SageMaker and AWS Bedrock.
Writes and builds continuous delivery pipelines to manage and automate the lifecycle of the different platform components. Builds, manages and operates the infrastructure-as-a-service layer (hosted and cloud-based platforms) that supports the different platform services. Leads post-mortem activities to identify systemic solutions to improve the overall operations of the platform, and recommends and improves technology-related policies and procedures. Identifies and troubleshoots any availability and performance issues at multiple layers of deployment, from hardware, operating environment and network to application. Evaluates performance trends and expected changes in demand and capacity, and establishes the appropriate scalability plans. Integrates different components and develops new services with a focus on open source to allow minimal-friction developer interaction with the platform and application services. Builds, manages and operates the infrastructure and configuration of the platform infrastructure and application environments with a focus on automation and infrastructure as code. Maintains and enhances the existing Terraform codebase. Education: 4-year Bachelor's degree (preferred). Experience: 3 or more years of experience (preferred). Knowledge of prompt engineering and the agentic and AI open-source ecosystem. Experience and skills with Azure application and AI services. Experience with AWS Bedrock (preferred). Knowledge of consuming large language model (LLM) and foundational model APIs. Experience in Terraform development. Skills: Cloud Platforms: Azure, AWS. Generative AI. Programming Languages: Python. Infrastructure as Code (IaC): Terraform (including Tofu/Env0 tools). Primary Skills. Shift Time: Shift B (India). Recruiter Info: Shriya Kumari, skuow@allstate.com. About Allstate: The Allstate Corporation is one of the largest publicly held insurance providers in the United States. Ranked No.
84 in the 2023 Fortune 500 list of the largest United States corporations by total revenue, The Allstate Corporation owns and operates 18 companies in the United States, Canada, Northern Ireland, and India. Allstate India Private Limited, also known as Allstate India, is a subsidiary of The Allstate Corporation. The India talent center was set up in 2012 and operates under the corporation's Good Hands promise. As it innovates operations and technology, Allstate India has evolved beyond its technology functions to become the critical strategic business services arm of the corporation. With offices in Bengaluru and Pune, the company offers expertise to the parent organization's business areas, including technology and innovation, accounting and imaging services, policy administration, transformation solution design and support services, transformation of property liability service design, global operations and integration, and training and transition. Learn more about Allstate India.

Posted 2 days ago

Apply

8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


About us: Netcore Cloud is a MarTech platform helping businesses design, execute, and optimize campaigns across multiple channels. With a strong focus on leveraging data, machine learning, and AI, we empower our clients to make smarter marketing decisions and deliver exceptional customer experiences. Our team is passionate about innovation and collaboration, and we are looking for a talented Lead Data Scientist to guide and grow our data science team. Role Summary: As the Lead Data Scientist, you will head our existing data science team, driving the development of advanced machine learning models and AI solutions. You will play a pivotal role in shaping our ML/AI strategy, leading projects across NLP, deep learning, predictive analytics, and recommendation systems, while ensuring alignment with business goals. Exposure to agentic AI systems and their evolving applications will be a valuable asset in this role, as we continue to explore autonomous, goal-driven AI workflows. This role combines hands-on technical leadership with strategic decision-making to build impactful solutions for our customers. Key Responsibilities: Leadership and Team Management: Lead and mentor the existing data science engineers, fostering skill development and collaboration. Provide technical guidance and code reviews, and ensure best practices in model development and deployment. Model Development and Innovation: Design and build machine learning models for tasks like NLP, recommendation systems, customer segmentation, and predictive analytics. Research and implement state-of-the-art ML/AI techniques to solve real-world problems. Ensure models are scalable, reliable, and optimized for performance in production environments. We operate in AWS and GCP, so previous experience setting up MLOps workflows on at least one of these cloud providers is mandatory. Business Alignment: Collaborate with cross-functional teams (engineering, product, marketing, etc.)
to identify opportunities where AI/ML can drive value. Translate business problems into data science solutions and communicate findings to stakeholders effectively. Drive data-driven decision-making to improve user engagement, conversions, and campaign outcomes. Technology and Tools: Work with large-scale datasets, ensuring data quality and scalability of solutions. Leverage cloud platforms like AWS and GCP for model training and deployment. Utilize tools and libraries such as Python, TensorFlow, PyTorch, Scikit-learn, and Spark for development. With so much innovation happening around GenAI and LLMs, we prefer candidates who have already explored this space via AWS Bedrock or Google Vertex AI. Qualifications: Education: Master's or PhD in Computer Science, Data Science, Mathematics, or a related field. Experience: More than 8 years of industry experience, with 5+ years in data science and at least 2 years in a leadership role managing a strong technical team. Proven expertise in machine learning, deep learning, NLP, and recommendation systems. Hands-on experience deploying ML models in production at scale. Experience in MarTech or working on customer-facing AI solutions is a plus. Technical Skills: Proficiency in Python, SQL, and ML frameworks like TensorFlow or PyTorch. Strong understanding of statistical methods, predictive modeling, and algorithm design. Familiarity with cloud-based solutions (AWS SageMaker, GCP AI Platform, or similar). Soft Skills: Excellent communication skills to present complex ideas to both technical and non-technical stakeholders. Strong problem-solving mindset and the ability to think strategically. A passion for innovation and staying up to date with the latest trends in AI/ML. Why Join Us: Opportunity to work on cutting-edge AI/ML projects impacting millions of users. Be part of a collaborative, innovation-driven team in a fast-growing MarTech company.
Competitive salary, benefits, and a culture that values learning and growth. Location: Bengaluru
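Recommendation systems come up repeatedly in this role. As a hedged, minimal illustration of the underlying idea only (ranking items by cosine similarity to a preference vector; the item names and vectors are invented, and production systems use learned embeddings at far larger scale):

```python
import math


def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0


def recommend(user_vector, item_vectors, k=2):
    """Return the k item names most similar to the user's preference vector."""
    scored = sorted(
        item_vectors.items(),
        key=lambda kv: cosine(user_vector, kv[1]),
        reverse=True,
    )
    return [name for name, _ in scored[:k]]


items = {
    "push-campaign": [1.0, 0.0, 0.5],
    "email-digest": [0.2, 1.0, 0.0],
    "sms-offer": [0.9, 0.1, 0.4],
}
print(recommend([1.0, 0.0, 0.4], items))  # ['push-campaign', 'sms-offer']
```

The same ranking skeleton applies whether the vectors come from collaborative filtering, content features, or LLM embeddings.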

Posted 2 days ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Welcome to Warner Bros. Discovery… the stuff dreams are made of.

Who We Are…
When we say, “the stuff dreams are made of,” we’re not just referring to the world of wizards, dragons and superheroes, or even to the wonders of Planet Earth. Behind WBD’s vast portfolio of iconic content and beloved brands are the storytellers bringing our characters to life, the creators bringing them to your living rooms and the dreamers creating what’s next… From brilliant creatives to technology trailblazers across the globe, WBD offers career-defining opportunities, thoughtfully curated benefits, and the tools to explore and grow into your best self. Here you are supported, here you are celebrated, here you can thrive.

Job Responsibilities
Design, deploy, and maintain security measures to safeguard our cloud infrastructure across AWS, GCP, and Azure.
Ensure the security of containerized applications through the implementation of Kubernetes and microservices security best practices.
Architect secure container environments, including Kubernetes clusters, Docker setups, and orchestration solutions, emphasizing vulnerability reduction and compliance.
Develop and enforce security policies, standards, and procedures for cloud environments and containerized workloads.
Collaborate with cross-functional teams to integrate security best practices into the software development lifecycle (SDLC) and continuous integration/continuous deployment (CI/CD) pipelines.
Automate security operations and workflows using scripting languages such as Python.
Partner closely with DevOps teams to fortify container orchestration platforms and containerized workloads.
Conduct routine security assessments, vulnerability scans, and penetration tests to identify and address potential security weaknesses.
Stay abreast of industry developments, emerging threats, and best practices in cloud security, container security, and DevSecOps methodologies.

Qualifications & Experiences
Hybrid work environment: must be based in a WBD office a minimum of three days per week.
Bachelor’s degree in Computer Science, Information Security, or a related field.
4+ years of experience as a Cloud Security Engineer or in a similar role.
In-depth knowledge of cloud computing platforms such as AWS, GCP, and Azure.
Proficiency in writing scripts and automation using Python.
Strong understanding of DevSecOps principles and practices.
Experience with containerization technologies such as Docker and Kubernetes, including securing Kubernetes clusters and containerized workloads.
Familiarity with microservices security principles and best practices.
Relevant certifications (AWS, GCP, or Azure) are desired.
Excellent communication and collaboration skills.

If you:
are excited to work in an international, fast-paced, multi-faceted media company.
are comfortable ensuring timely escalation, responsiveness and follow-through to meet deadlines.
are knowledgeable of, and understand, the risk-based business-impact approach to cybersecurity.
are actively questioning and influencing actions needed to attain goals and targets.
are comfortable driving initiatives forward without having direct control of staff.
Then help us create the future with one of the world’s largest media & entertainment companies.

How We Get Things Done…
This last bit is probably the most important! Here at WBD, our guiding principles are the core values by which we operate and are central to how we get things done. You can find them at www.wbd.com/guiding-principles/ along with some insights from the team on what they mean and how they show up day to day. We hope they resonate with you, and we look forward to discussing them during your interview.

Championing Inclusion at WBD
Warner Bros. Discovery embraces the opportunity to build a workforce that reflects a wide array of perspectives, backgrounds and experiences.
Being an equal opportunity employer means that we take seriously our responsibility to consider qualified candidates on the basis of merit, regardless of sex, gender identity, ethnicity, age, sexual orientation, religion or belief, marital status, pregnancy, parenthood, disability or any other category protected by law. If you’re a qualified candidate with a disability and you require adjustments or accommodations during the job application and/or recruitment process, please visit our accessibility page for instructions to submit your request.
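
To give a flavor of the Python security automation this role calls for, here is a minimal, self-contained sketch. It triages findings from a vulnerability scan report; the JSON shape, field names, and severity labels are illustrative assumptions, not the output format of any specific scanner:

```python
import json

# Severity ranking used for triage; labels are an assumption,
# not tied to any particular scanner's output format.
SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def triage_findings(report_json, min_severity="high"):
    """Return findings at or above min_severity, worst first."""
    findings = json.loads(report_json).get("findings", [])
    threshold = SEVERITY_RANK[min_severity]
    flagged = [
        f for f in findings
        if SEVERITY_RANK.get(f.get("severity", "low"), 0) >= threshold
    ]
    return sorted(flagged, key=lambda f: SEVERITY_RANK[f["severity"]], reverse=True)

# Example scan output (hypothetical format).
report = json.dumps({
    "findings": [
        {"id": "CVE-2024-0001", "severity": "critical", "component": "nginx"},
        {"id": "CVE-2024-0002", "severity": "low", "component": "busybox"},
        {"id": "CVE-2024-0003", "severity": "high", "component": "openssl"},
    ]
})

for f in triage_findings(report):
    print(f'{f["severity"]}: {f["id"]} ({f["component"]})')
```

In practice a script like this would sit in a CI/CD pipeline step, consuming real scanner output and failing the build when critical findings surface.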

Posted 2 days ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Introduction
A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe. You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio, including Software and Red Hat. Curiosity and a constant quest for knowledge serve as the foundation of success in IBM Consulting. In your role, you'll be encouraged to challenge the norm, investigate ideas outside of your role, and come up with creative solutions resulting in groundbreaking impact for a wide network of clients. Our culture of evolution and empathy centers on long-term career growth and development opportunities in an environment that embraces your unique skills and experience.

Your Role And Responsibilities
As a Software Developer, you'll participate in many aspects of the software development lifecycle, such as design, code implementation, testing, and support. You will create software that enables your clients' hybrid-cloud and AI journeys. Your primary responsibilities include:
Comprehensive Feature Development and Issue Resolution: Work on end-to-end feature development and solve challenges faced during implementation.
Stakeholder Collaboration and Issue Resolution: Collaborate with key stakeholders, internal and external, to understand problems and issues with the product and its features, and resolve them within the defined SLAs.
Continuous Learning and Technology Integration: Be eager to learn new technologies and apply them in feature development.

Preferred Education
Master's Degree

Required Technical And Professional Expertise
Java 8 and above
Spring Boot, REST APIs, and containerization with Docker
Development experience in Java
Good communication skills
Cloud exposure to any of AWS, GCP, or Azure

Preferred Technical And Professional Experience
Creative problem-solving skills
Good communication skills

Posted 2 days ago

Apply
