Home
Jobs

20025 GCP Jobs - Page 17

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

5.0 - 8.0 years

6 - 10 Lacs

Hyderabad

Work from Office

Source: Naukri

We are looking for a skilled Senior Data Engineer with 5-8 years of experience to join our team at IDESLABS PRIVATE LIMITED. The ideal candidate will have a strong background in data engineering and excellent problem-solving skills.

Roles and Responsibilities:
- Design, develop, and implement large-scale data pipelines and architectures.
- Collaborate with cross-functional teams to identify and prioritize project requirements.
- Develop and maintain complex data systems and databases.
- Ensure data quality, integrity, and security.
- Optimize data processing workflows for improved performance and efficiency.
- Troubleshoot and resolve technical issues related to data engineering.

Job Requirements:
- Strong knowledge of data engineering principles and practices.
- Experience with data modeling, database design, and data warehousing.
- Proficiency in programming languages such as Python, Java, or C++.
- Excellent problem-solving skills and attention to detail.
- Ability to work collaboratively in a team environment.
- Strong communication and interpersonal skills.
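A data-quality gate of the kind this role describes ("ensure data quality, integrity, and security") can be sketched as a simple validation step in a pipeline. The function name and rules below are illustrative assumptions, not taken from the posting:

```python
# Illustrative data-quality gate for a pipeline stage. The function name and
# rules are hypothetical, not part of the posting.

def check_quality(rows, required_fields):
    """Split rows into (valid, rejected) using simple integrity rules."""
    valid, rejected = [], []
    for row in rows:
        # A row passes only if every required field is present and non-null.
        if all(row.get(field) is not None for field in required_fields):
            valid.append(row)
        else:
            rejected.append(row)
    return valid, rejected
```

In practice a pipeline would route the rejected rows to a quarantine table for review rather than dropping them silently.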

Posted 2 days ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Job Description
Job Title: Technical Specialist
Education: MCA/B.E/B.Tech/B.Sc/Any Graduation
Experience: 8+ years
Location: Bangalore/Mumbai/Hyderabad
Key Skills: Open-source and EDB PostgreSQL; PostgreSQL on AWS, Microsoft Azure, and GCP; pgbadger, pgAdmin, pgbouncer, Barman, pgBackRest, repmgr, Patroni, and pg_repack.

Job Description:
- 7+ years of relevant experience.
- Good knowledge of PostgreSQL database architecture.
- Installing and configuring PostgreSQL from source code, RPM, or one-click installers, on-prem and in the cloud.
- Strong experience handling logical and physical database backups.
- Hands-on experience with PostgreSQL restoration techniques such as pg_restore and point-in-time recovery (PITR).
- Expertise in applying database patches from lower to higher versions.
- Major and minor upgrades of PostgreSQL databases on multiple platforms (pg_dump/pg_restore and pg_upgrade).
- Implemented PostgreSQL DR servers and PostgreSQL load balancing using pgpool.
- Expertise in streaming replication, including cascading replication.
- Configuring pgbouncer for connection pooling.
- Configuring pgbadger to generate statistics reports from the PostgreSQL log file.
- Handling database corruption issues.
- Oracle-to-PostgreSQL migrations for multiple clients.
- Strong knowledge of PostgreSQL configuration parameters for tuning database health.
- Managing users and tablespaces on PostgreSQL servers.
- Configuring heterogeneous database connections between Oracle and PostgreSQL.
- Expertise in shell scripts for online backup and maintenance activities.
- Strong experience setting up RDS and Aurora clusters.
- Strong experience in query tuning and performance improvement.
- Tuning parameter groups and configuring Aurora clusters for read replicas.
- Configuring RDS and Aurora PostgreSQL logs to push to an S3 bucket.
- Experience reviewing performance metrics and query tuning on Aurora PostgreSQL.
- Configured Query Store and Analytics workspace on Microsoft Azure.
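For orientation, the backup and PITR tasks this role lists can be sketched by assembling the PostgreSQL commands and settings they rely on. The database names, file paths, and recovery target below are hypothetical, and nothing is executed:

```python
# Sketch of the backup/restore tasks above, expressed as the PostgreSQL
# commands and settings they rely on. Names and paths are hypothetical.

def dump_command(db, outfile):
    # Logical backup in custom format, restorable with pg_restore.
    return f"pg_dump --format=custom --file={outfile} {db}"

def restore_command(db, dumpfile):
    # Restore a custom-format dump into an existing database.
    return f"pg_restore --dbname={db} {dumpfile}"

def pitr_settings(target_time):
    # Point-in-time recovery settings (postgresql.conf on PostgreSQL 12+,
    # together with an empty recovery.signal file in the data directory).
    return {
        "restore_command": "cp /archive/%f %p",
        "recovery_target_time": target_time,
    }
```

Physical backups and PITR additionally require WAL archiving (`archive_mode = on` plus an `archive_command`) to have been configured before the base backup was taken.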
About Us
Datavail is a leading provider of data management, application development, analytics, and cloud services, with more than 1,000 professionals helping clients build and manage applications and data via a world-class, tech-enabled delivery platform and software solutions across all leading technologies. For more than 17 years, Datavail has worked with thousands of companies spanning different industries and sizes, and is an AWS Advanced Tier Consulting Partner, a Microsoft Solutions Partner for Data & AI and Digital & App Innovation (Azure), an Oracle Partner, and a MySQL Partner.

About The Team
Datavail's team of PostgreSQL experts can save you time and money. We have extensive experience with just about everything in PostgreSQL. Our consultants, architects, DBAs, and database development team have extensive hands-on experience with traditional on-prem PostgreSQL and in the cloud. Our experts average 15 years of experience. They have overcome every obstacle in helping clients manage everything from databases, analytics, reporting, migrations, and upgrades to monitoring and overall data management. You can free up your IT resources to focus on growing your business rather than fighting fires. Our PostgreSQL experts can guide you through strategic initiatives or support routine database management.

Datavail's Comprehensive PostgreSQL Database Services
Datavail offers PostgreSQL consulting services that let you take advantage of all the features of the PostgreSQL database. By providing high-availability solutions, building high-end database systems, architectural support, and managed-service support, our team ensures optimal performance for your database on-prem or in the cloud.

PostgreSQL Database Managed Services
Datavail's business focuses on helping you use your data to drive business results through cost-saving services. The success of your business depends on how well you understand and manage your data. Our managed cloud services give you the power to unleash your organization's potential. We provide comprehensive and technically advanced support for PostgreSQL installations to ensure that your databases are safe, secure, and managed with the utmost level of care. Our delivery performance in data management leads the industry. We offer highly trained PostgreSQL database administrators via a 24x7, always-on, always-available global delivery model. The combination of a proven delivery model and top-notch experience ensures that Datavail will remain the on-demand database experts you desire. Datavail's flexible, client-focused services always add value to your organization.

Are you a seasoned PostgreSQL database administrator? Does working in a multi-customer, multi-domain environment on a global scale motivate you? If yes, this role could be a good fit for you. Datavail is seeking a highly skilled, self-motivated, and passionate PostgreSQL DBA to join our PostgreSQL Global Practice. As a PostgreSQL DBA, you will work on the latest open-source technologies in the industry. This position is based out of our global delivery centers in Mumbai, Hyderabad, and Bangalore in India. Datavail is one of the largest data-focused services companies in North America, providing both professional and managed services and expertise in database management, application development and management, cloud and infrastructure management, packaged applications, and BI/analytics.

Why should you work at Datavail?
- Learn from a vast pool of global PostgreSQL DBAs with over 500 combined years of industry experience.
- Work with multiple customers in multi-domain environments.
- Work range: on-prem core PostgreSQL databases, AWS RDS and Aurora, and Azure PostgreSQL.
- Your certifications would be on us.
- Future opportunity for permanent deputation on an H1B visa to work in the US.
- Leverage the DV training program to upgrade your technical skills.

Posted 2 days ago

Apply

3.0 - 5.0 years

9 - 13 Lacs

Indore, Pune, Chennai

Work from Office

Source: Naukri

What will your role look like
- Perform both manual and automated testing of software applications.
- Write and maintain test scripts using Selenium with Java.
- Troubleshoot, debug, and resolve software defects, ensuring high-quality software delivery.
- Participate in test case design, review requirements, and ensure comprehensive test coverage.
- Collaborate with development and product teams to understand product features and ensure quality.
- Continuously improve testing practices and processes.
- Work with cloud platforms such as AWS, Azure, or GCP for testing and deployment tasks.
- Utilize Terraform for automating infrastructure as code.
- Ensure the application meets all defined functional and performance standards before release.
- Stay updated with the latest industry trends, tools, and technologies.

Why you will love this role
Besides a competitive package, you get an open workspace full of smart and pragmatic team members, with ever-growing opportunities for professional and personal growth. Be part of a learning culture where teamwork and collaboration are encouraged, diversity is valued, and excellence, compassion, openness, and ownership are rewarded.

We would like you to bring along
- Experience in both manual and automation testing.
- Hands-on experience with Java.
- Strong proficiency in Selenium with Java.
- Excellent debugging skills.
- Experience working with at least one cloud platform (AWS/Azure/GCP).
- In-depth knowledge of the Software Testing Life Cycle (STLC) processes.
- Practical experience with Terraform.
- Familiarity with the storage domain is a plus.

Location: Chennai, Indore, Pune, Vadodara
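The test-case design work this role mentions is often organized as data-driven cases: a table of inputs and expected results executed by one runner. This is a hedged sketch of the idea; a real suite would drive a browser through Selenium, so here a stub stands in for the application and all names are hypothetical:

```python
# Data-driven test-case sketch (STLC test design). The system under test is
# a stub so the structure stays self-contained; in practice this slot would
# be filled by Selenium page interactions.

def login(user, password):
    # Stub standing in for the application under test.
    return user == "admin" and password == "s3cret"

CASES = [
    # (case name, username, password, expected result)
    ("valid credentials", "admin", "s3cret", True),
    ("wrong password",    "admin", "oops",   False),
    ("empty username",    "",      "s3cret", False),
]

def run_cases():
    """Return {case name: True if the observed result matched expectations}."""
    return {name: login(u, p) == expected for name, u, p, expected in CASES}
```

Keeping the cases as data makes coverage reviews easier: new scenarios are rows, not new code.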

Posted 2 days ago

Apply

5.0 - 10.0 years

80 - 85 Lacs

Noida, Faridabad, Gurugram

Work from Office

Source: Naukri

Job Description
We are looking for a DevOps Architect to manage the software development process and create an automated delivery pipeline that helps build software more efficiently. The responsibilities of a DevOps Architect include assessing, supporting, and implementing high-quality information technology architecture. To be successful as a DevOps Architect, you should demonstrate a leadership mindset, solid operational experience, and the ability to problem-solve. Ultimately, a top-notch DevOps Architect should have exceptional communication skills, be knowledgeable about the latest industry trends, and be highly innovative.

Responsibilities
- Facilitating the development process and operations.
- Identifying setbacks and shortcomings.
- Creating suitable DevOps channels across the organization.
- Establishing continuous-build environments to speed up software development.
- Designing efficient practices.
- Delivering comprehensive best practices.
- Managing and reviewing technical operations.
- Guiding the development team.
- Good knowledge of AWS, Ubuntu, and Linux.

Skills
- Degree in Computer Science, Information Systems, or a related field.
- Previous experience working on a 24x7 cloud or SaaS operations team.
- Experience with infrastructure management and monitoring.
- Strong knowledge of DevOps platform tooling (Chef, Puppet, and Docker).
- Working knowledge of automated service provisioning and middleware configuration.
- Ability to work independently and as part of a team.
- Strong analytical skills.

Contact: Pratibha Tanwar
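The automated delivery pipeline this role centers on can be reduced to a small idea: stages run in order, and the first failure halts the run. This minimal sketch uses illustrative stage names only, not any particular CI tool's API:

```python
# Minimal sketch of an automated delivery pipeline: stages run in order and
# the first failure halts the run. Stage names are illustrative.

def run_pipeline(stages):
    """stages: list of (name, callable) pairs; returns names of stages that passed."""
    completed = []
    for name, step in stages:
        if not step():  # a stage returning False stops the pipeline
            break
        completed.append(name)
    return completed
```

Real tools (Jenkins, GitLab CI, and the like) add parallelism, artifacts, and retries on top, but the fail-fast stage ordering is the same contract.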

Posted 2 days ago

Apply

10.0 years

0 Lacs

India

Remote

Source: LinkedIn

Title: Sr. MDM Developer with Core Java Experience
Location: India, remote
Duration: 6 months

Job Description:

What you'll bring
- Minimum Bachelor's degree in computer science or a related field.
- 10+ years of experience managing data management applications in Java-based technologies, with good exposure to cloud and web development.
- 5+ years of experience in customization and implementation of IBM MDM AE and SE.
- Strong knowledge of Core Java and web and API development.
- Hands-on experience with AWS S3 and Google Cloud Storage.
- Proficiency in SQL (queries, stored procedures).
- Understanding of batch processing concepts.
- Understanding of real-time API services, both synchronous and asynchronous methodologies.
- Familiarity with CI/CD pipelines and collaboration in deployment workflows.
- Experience with workflow automation tools (e.g., UC4, Airflow).
- Retail experience is a plus.

Comments for suppliers: Candidates with IBM MDM AE customizations, data model, batch processing, API services (both SOAP and REST); MDM match/merge, PME, tuning; cloud services experience.

What you'll do
- Support the core of the customer data platform, IBM MDM Advanced Edition (v12.0), with various customizations and extensions on it.
- Support the API and batch interfaces around the MDM hub.
- Design and lead the upcoming migration of the master data hub to a new platform.
- Support various integrations of the MDM hub with data cleansing and standardization platforms like SAP.
- Analyze data quality issues and discrepancies and apply corrections to the data.
- Support various data compliance processes such as data retention, CCPA, DNS, data retrieval, etc.
- Analyze master data alongside other transactional data in the warehouse (Snowflake).
- Monitor and support various system maintenance tasks and automation services.

Technical expertise you need to have

IBM MDM Advanced and Standard Edition (12.0)
- Hands-on expertise with IBM MDM AE, including customizations, data model, batch processing, and API services (both SOAP and REST).
- Hands-on expertise with IBM MDM SE for match rules, weight generation, and tuning.
- Workbench and WebSphere Application Server administration (for deployment and monitoring).
- Experience integrating IBM MDM with source and consuming systems using services, MQ, and the batch framework.
- Experience with scripting on Linux/Unix systems is a must-have.

Database & Warehouse (Oracle & Snowflake)
- Good understanding of the relational data model.
- Ability to write complex queries, primarily to analyze and identify data anomalies.
- Writing queries for data analysis and summaries of transactional data aggregation and reporting.
- Debug and develop stored procedures, functions, packages, tables, and views.

Core Java
- Strong hands-on expertise in Core Java development from scratch.
- Integration with cloud platforms like AWS/S3 and GCP/GCS using Java APIs and shell scripts.
- Various authentication mechanisms between core Java and cloud services.
- Ability to develop and debug authentication with LDAP/AD.

Other MDM and Data Quality Technologies
- Good-to-have exposure to other MDM technologies like Reltio, Ataccama, Semarchy, etc.
- Good-to-have exposure to data quality and standardization technologies like SAP.

System Integration
- Work on integrating third-party APIs and services into existing systems.
- Support seamless data exchange between components and services.
- Understand the concepts of file-based bulk data import and export processes.
- Build and maintain batch jobs for data processing and system integrations.
- Ensure reliable and efficient execution of recurring tasks.

CI/CD Support
- Assist in maintaining and improving CI/CD pipelines for deployment automation.
- Collaborate with DevOps teams to streamline delivery processes.

Other Skills
- Collaboration: Work closely with product managers, QA engineers, and other stakeholders to define requirements and deliver high-quality software solutions.
- Leadership: Provide technical guidance in best practices, code quality, and system design.
- Code Reviews & Best Practices: Conduct peer code reviews, establish coding standards, and ensure best practices for software development, including testing and deployment.
- Performance Optimization: Focus on optimizing performance, including database queries, and addressing scaling issues to handle increasing load and traffic efficiently.
- Agile Methodology: Participate in Agile development processes, including sprint planning and retrospectives, and contribute to continuous improvements in engineering practices.
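The match/merge work named in this listing boils down to two operations: deciding whether two records describe the same party, and merging survivors. IBM MDM SE actually uses weighted probabilistic matching; the deterministic rule and survivorship policy below are deliberate simplifications for illustration, with hypothetical field names:

```python
# Toy match-and-merge sketch of the MDM concepts above. The real engine uses
# weighted probabilistic matching; this deterministic rule is a simplification.

def is_match(a, b):
    # Deterministic rule: identical normalized email implies the same party.
    return a["email"].strip().lower() == b["email"].strip().lower()

def merge(a, b):
    # Survivorship: prefer non-empty values from the newer record (b),
    # falling back to the older record (a).
    return {key: (b.get(key) or a.get(key)) for key in set(a) | set(b)}
```

Tuning in a real hub means adjusting the match weights and thresholds so that true duplicates score above the auto-link threshold while ambiguous pairs route to a data steward.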

Posted 2 days ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Bengaluru

Work from Office

Source: Naukri

Explore an Exciting Career at Accenture
Do you believe in creating an impact? Are you a problem solver who enjoys working on transformative strategies for global clients? Are you passionate about being part of an inclusive, diverse, and collaborative culture? Then this is the right place for you! Welcome to a host of exciting global opportunities in Accenture Technology Strategy & Advisory.

The Practice - A Brief Sketch
The Technology Strategy & Advisory Practice focuses on clients' most strategic priorities. We help clients achieve growth and efficiency through innovative R&D transformation, aimed at redefining business models using agile methodologies. As part of this high-performing team, you will work on scaling Data & Analytics, and the data that fuels it all, to power every single person and every single process. You will be part of our global team of experts who work on scalable solutions and services that help clients achieve their business objectives faster.

- Business Transformation: Assess Data & Analytics potential and develop use cases that can transform the business.
- Transforming Businesses: Envision and design customized, next-generation data and analytics products and services that help clients shift to new business models designed for today's connected landscape of disruptive technologies.
- Formulation of Guiding Principles and Components: Assess the impact on the client's technology landscape/architecture and ensure formulation of relevant guiding principles and platform components.
- Products and Frameworks: Evaluate existing data and analytics products and frameworks and develop options for proposed solutions.

Bring your best skills forward to excel in the role:
- Leverage your knowledge of technology trends across Data & Analytics and how they can be applied to address real-world problems and opportunities.
- Interact with client stakeholders to understand their Data & Analytics problems and priority use cases, define a problem statement, understand the scope of the engagement, and drive projects to deliver value to the client.
- Design and guide development of an enterprise-wide Data & Analytics strategy for our clients, including Data & Analytics architecture, data on cloud, data quality, metadata, and master data strategy.
- Establish a framework for effective data governance across multispeed implementations; define data ownership, standards, policies, and associated processes.
- Define a Data & Analytics operating model to manage data across the organization; establish processes around effective data management, ensuring data quality and governance standards as well as roles for data stewards.
- Benchmark against global research benchmarks and leading industry peers to understand the current state and recommend Data & Analytics solutions.
- Conduct discovery workshops and design sessions to elicit Data & Analytics opportunities and client pain areas.
- Develop and drive Data Capability Maturity Assessments, Data & Analytics operating model, and data governance exercises for clients.
- A fair understanding of data platform strategy for data-on-cloud migrations, big data technologies, and large-scale data lake and DW-on-cloud solutions.
- Utilize strong expertise and certification in any of the Data & Analytics cloud platforms: Google, Azure, or AWS.
- Collaborate with business experts for business understanding, with other consultants and platform engineers for solutions, and with technology teams for prototyping and client implementations.
- Create expert content and use advanced presentation, public speaking, content creation, and communication skills for C-level discussions.
- Demonstrate a strong understanding of a specific industry, client, or technology and function as an expert to advise leadership.
- Manage budgeting and forecasting activities and build financial proposals.

Qualification - Your experience counts!
- MBA from a tier-1 institute.
- 5-7 years of strategy consulting experience at a consulting firm.
- 3+ years of experience on projects showcasing skills across these capabilities: Data Capability Maturity Assessment, Data & Analytics Strategy, Data Operating Model & Governance, Data-on-Cloud Strategy, Data Architecture Strategy.
- At least 2 years of experience architecting or designing solutions for any two of these domains: data quality, master data (MDM), metadata, data lineage, data catalog.
- Experience in one or more technologies in the data governance space: Collibra, Talend, Informatica, SAP MDG, Stibo, Alteryx, Alation, etc.
- 3+ years of experience designing end-to-end enterprise Data & Analytics strategic solutions leveraging cloud and non-cloud platforms like AWS, Azure, GCP, AliCloud, Snowflake, Hadoop, Cloudera, Informatica, and Palantir.
- Deep understanding of the data supply chain and building value-realization frameworks for data transformations.
- 3+ years of experience leading or managing teams effectively, including planning/structuring analytical work, facilitating team workshops, and developing Data & Analytics strategy recommendations as well as POCs.
- Foundational understanding of data privacy is desired.
- Mandatory knowledge of IT and enterprise architecture concepts through practical experience, and knowledge of technology trends, e.g., mobility, cloud, digital, collaboration.
- A strong understanding of any of the following industries is preferred: Financial Services, Retail, Consumer Goods, Telecommunications, Life Sciences, Transportation, Hospitality, Automotive/Industrial, Mining and Resources, or equivalent domains.
- CDMP certification from DAMA preferred.
- Cloud Data & AI practitioner certifications (Azure, AWS, Google) desirable but not essential.

Posted 2 days ago

Apply

5.0 years

0 Lacs

India

Remote

Source: LinkedIn

CloudLabs Inc was founded in 2014 with the mission to provide exceptional IT and business consulting services at a competitive price, to help clients realize the best value from their investments. Within a short span, CloudLabs evolved from pure-play consulting into a transformative partner for business acceleration advisory, transformative application development, and managed services, enabling digital transformations, M&A transitions, automation and process-driven optimizations, and complex integration initiatives for enterprises across the globe. As a strategic planning and implementation partner for global companies, CloudLabs has seen a 200% uptake in winning high-value, high-impact, and high-risk projects that are critical for the business. With offices in the US, Canada, Australia, and India, and a team of 150+ experienced specialists, CloudLabs is now at an inflection point and ready for its next curve of progress.

Please write to or follow us here:
Website: cloudlabsit.com
LinkedIn: CloudLabs Inc
E-mail: info@cloudlabsit.com

What we offer:
- We welcome candidates rejoining the workforce after a career break or parental leave and support their journey to reacclimatize to corporate life.
- Flexible remote work.
- Competitive pay package.
- Attractive policies, medical insurance benefits, and industry-leading trainings.

Job description:
We're seeking an experienced Medidata Rave (EDC) Study Designer to support delivery of strategic initiatives for the Global Clinical IT team. In this role, the individual will support business initiatives within the Medidata Rave suite of applications. This individual should have demonstrated prior experience as a member of an Agile team and be highly motivated, with excellent analysis and execution skills.

Duties and Responsibilities
- Develop and maintain data management documentation and guidelines in accordance with Good Clinical Practices (GCP) and Good Documentation Practices (GDP).
- Provide subject matter expertise to project team members during all phases of the project life cycle.
- Develop, test, and maintain data management systems.
- Provide subject matter expertise before, during, and after internal and external audits and inspections.
- Collaborate with data managers, study teams, vendors, and site staff to formulate data transfer plans for secondary data sources (e.g., lab data, site data).
- Batch-import agreed data sources into the EDC system.
- Develop, program, validate, and maintain Medidata Rave EDC clinical trial databases in accordance with company standards.
- Create EDC design specifications encompassing the data dictionary, event definitions, electronic consent, branching logic, edit checks, advanced query rules, calculated fields, and dynamic form and event rules.
- Work with data managers and study teams to design and construct the EDC database based on global eCRF libraries.
- Configure and optimize multiple patient user interfaces to support varying modes of data collection (eCOA: mobile device or tablet; EDC: laptop/desktop computer).
- Conduct, test, and produce Rave EDC migration activities as required.
- Develop test scripts and coordinate EDC user acceptance testing (UAT) to ensure accuracy of database structure, content, and validation controls aligned with the original specifications.
- Coordinate and manage the deployment of new or modified EDC databases into production.
- Assist in mapping the EDC database to the company enterprise data warehouse.
- As part of continuous improvement efforts, develop and implement EDC design standards to enhance quality and streamline database build processes.
- Provide input into the development and revision of department SOPs.
- Maintain compliance with corporate, core, and study-specific learning requirements.
- Experience with Medidata Custom Functions (C#/SQL) is preferred.

Preferred Qualifications
- Bachelor's degree in a related field.
- 5+ years of experience developing, supporting, and maintaining healthcare systems.
- Prior expert-level experience with an EDC system (Medidata Rave).
- Medidata Rave EDC Certified Study Builder certification is highly preferred.
- Prior experience with MEDS Reporter and SQL.
- Advanced system analysis skills and experience with EDC technologies such as iMedidata, architect modules, and reporting modules.
- Demonstrated experience in systems analysis, SDLC, change management, and requirements gathering.

Experience requirement: Minimum 5 years of relevant experience.
Location: India only.
Job type: Remote.
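The edit checks and branching logic this role designs can be thought of as declarative rules evaluated against a form's field values, each raising a query when violated. Real Rave edit checks are authored in the Architect module in Rave's own syntax; the field names and rules below are hypothetical, sketched only to show the concept:

```python
# Conceptual sketch of EDC-style edit checks: declarative rules evaluated
# against a form's field values. Field names and rules are hypothetical.

EDIT_CHECKS = [
    ("age_range",
     lambda f: 0 <= f.get("age", -1) <= 120,
     "Age must be between 0 and 120"),
    ("pregnancy_requires_female",
     lambda f: not f.get("pregnant") or f.get("sex") == "F",
     "Pregnancy flag requires sex = F"),
]

def run_edit_checks(form):
    """Return the message of every rule the form violates (queries to raise)."""
    return [msg for _, rule, msg in EDIT_CHECKS if not rule(form)]
```

Keeping the rules declarative is what makes UAT tractable: each check can be exercised with passing and failing form data before the database goes to production.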

Posted 2 days ago

Apply

5.0 - 8.0 years

7 - 10 Lacs

Pune

Work from Office

Source: Naukri

Role Purpose
The purpose of this role is to lead a DevOps team to facilitate better coordination among operations, development, and testing functions by automating and streamlining the integration and deployment processes.

Do
- Drive technical solution support to the team to align on continuous integration (CI) and continuous deployment (CD) of technology in applications.
- Design and define the overall DevOps architecture/framework for a project/module delivery as per the client requirement.
- Decide on the DevOps tools and platform that need to be deployed, aligned to the customer's requirement.
- Create a tool deployment model for validating, testing, and monitoring performance, and align or provision resources accordingly.
- Define and manage the IT infrastructure as per the requirements of the supported software code.
- Manage and drive the DevOps pipeline that supports the application life cycle across the DevOps toolchain: from planning, coding, and building, to testing, staging, release, configuration, and monitoring.
- Work with the team to tackle the coding and scripting needed to connect elements of the code that are required to run the software release with operating systems and production infrastructure with minimum disruption.
- Ensure onboarding of application configuration from planning to release stage.
- Integrate security into the entire DevOps lifecycle to ensure no cyber risk and that data privacy is maintained.
- Provide customer support/service on the DevOps tools; provide timely support for internal and external customer escalations on multiple platforms.
- Troubleshoot the various problems that arise in the implementation of DevOps tools across the project/module.
- Perform root cause analysis of major incidents/critical issues that may hamper project timeliness, quality, or cost.
- Develop alternate plans/solutions to be implemented as per the root cause analysis of critical problems.
- Follow the escalation matrix/process as soon as a resolution gets complicated or isn't resolved.
- Provide knowledge transfer, share best practices with the team, and motivate.

Team Management
- Resourcing: Forecast talent requirements as per current and future business needs. Hire adequate and right resources for the team. Train direct reportees to make right recruitment and selection decisions.
- Talent Management: Ensure 100% compliance with Wipro's standards of adequate onboarding and training for team members to enhance capability and effectiveness. Build an internal talent pool of high-potential employees and ensure their career progression within the organization. Promote diversity in leadership positions.
- Performance Management: Set goals for direct reportees, conduct timely performance reviews and appraisals, and give constructive feedback to direct reports. In case of performance issues, take necessary action with zero tolerance for will-based performance issues. Ensure that organizational programs like Performance NXT are well understood and that the team takes the opportunities presented by such programs, for themselves and the levels below them.
- Employee Satisfaction and Engagement: Lead and drive engagement initiatives for the team. Track team satisfaction scores and identify initiatives to build engagement within the team. Proactively challenge the team with larger and enriching projects/initiatives for the organization or team. Exercise employee recognition and appreciation.

Deliver
No. | Performance Parameter | Measure
1 | Continuous integration, deployment & monitoring | 100% error-free onboarding and implementation
2 | CSAT | Manage service tools; troubleshoot queries; customer experience
3 | Capability building & team management | % trained on new-age skills, team attrition %, employee satisfaction score

Mandatory Skills: Ansible Tower. Experience: 5-8 years.

Posted 2 days ago

Apply

5.0 - 8.0 years

7 - 10 Lacs

Hyderabad

Work from Office

Source: Naukri

Role Purpose
The purpose of this role is to lead a DevOps team to facilitate better coordination among operations, development, and testing functions by automating and streamlining the integration and deployment processes.

Do
- Drive technical solution support to the team to align on continuous integration (CI) and continuous deployment (CD) of technology in applications.
- Design and define the overall DevOps architecture/framework for a project/module delivery as per the client requirement.
- Decide on the DevOps tools and platform that need to be deployed, aligned to the customer's requirement.
- Create a tool deployment model for validating, testing, and monitoring performance, and align or provision resources accordingly.
- Define and manage the IT infrastructure as per the requirements of the supported software code.
- Manage and drive the DevOps pipeline that supports the application life cycle across the DevOps toolchain: from planning, coding, and building, to testing, staging, release, configuration, and monitoring.
- Work with the team to tackle the coding and scripting needed to connect elements of the code that are required to run the software release with operating systems and production infrastructure with minimum disruption.
- Ensure onboarding of application configuration from planning to release stage.
- Integrate security into the entire DevOps lifecycle to ensure no cyber risk and that data privacy is maintained.
- Provide customer support/service on the DevOps tools; provide timely support for internal and external customer escalations on multiple platforms.
- Troubleshoot the various problems that arise in the implementation of DevOps tools across the project/module.
- Perform root cause analysis of major incidents/critical issues that may hamper project timeliness, quality, or cost.
- Develop alternate plans/solutions to be implemented as per the root cause analysis of critical problems.
- Follow the escalation matrix/process as soon as a resolution gets complicated or isn't resolved.
- Provide knowledge transfer, share best practices with the team, and motivate.

Team Management
- Resourcing: Forecast talent requirements as per current and future business needs. Hire adequate and right resources for the team. Train direct reportees to make right recruitment and selection decisions.
- Talent Management: Ensure 100% compliance with Wipro's standards of adequate onboarding and training for team members to enhance capability and effectiveness. Build an internal talent pool of high-potential employees and ensure their career progression within the organization. Promote diversity in leadership positions.
- Performance Management: Set goals for direct reportees, conduct timely performance reviews and appraisals, and give constructive feedback to direct reports. In case of performance issues, take necessary action with zero tolerance for will-based performance issues. Ensure that organizational programs like Performance NXT are well understood and that the team takes the opportunities presented by such programs, for themselves and the levels below them.
- Employee Satisfaction and Engagement: Lead and drive engagement initiatives for the team. Track team satisfaction scores and identify initiatives to build engagement within the team. Proactively challenge the team with larger and enriching projects/initiatives for the organization or team. Exercise employee recognition and appreciation.

Deliver
No. | Performance Parameter | Measure
1 | Continuous integration, deployment & monitoring | 100% error-free onboarding and implementation
2 | CSAT | Manage service tools; troubleshoot queries; customer experience
3 | Capability building & team management | % trained on new-age skills, team attrition %, employee satisfaction score

Mandatory Skills: DevOps-Terraform. Experience: 5-8 years.

Posted 2 days ago

Apply

5.0 - 8.0 years

7 - 10 Lacs

Pune

Work from Office


Role Purpose: The purpose of this role is to work with application teams and developers to facilitate better coordination among operations, development, and testing functions by automating and streamlining the integration and deployment processes.

Do:
- Align and focus on continuous integration (CI) and continuous deployment (CD) of technology in applications.
- Plan and execute the DevOps pipeline that supports the application lifecycle across the DevOps toolchain: from planning, coding, and building, through testing, staging, release, configuration, and monitoring.
- Manage the IT infrastructure as per the requirements of the supported software code.
- On-board applications on the DevOps tool and configure them as per the client's needs.
- Create user access workflows and provide user access as per the defined process.
- Build and engineer the DevOps tool as per the customization suggested by the client.
- Collaborate with development staff on the coding and scripting needed to connect elements of the code required to run the software release with operating systems and production infrastructure.
- Leverage tools to automate testing and deployment in a DevOps environment.
- Provide customer support/service on the DevOps tools: timely support for internal and external customers on multiple platforms; resolve tickets raised on these tools within the specified TAT; ensure adequate resolution with customer satisfaction; follow the escalation matrix/process as soon as a resolution gets complicated or isn't resolved; troubleshoot and perform root cause analysis of critical/repeatable issues.

Deliver (Performance Parameter: Measure):
- Continuous Integration, Deployment & Monitoring: 100% error-free onboarding and implementation.
- CSAT: Timely customer resolution as per TAT; zero escalations.

Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention: of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA: as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.

Posted 2 days ago

Apply

5.0 - 8.0 years

7 - 10 Lacs

Bengaluru

Work from Office


Strong automation skills (DevOps + Python) and experience with a CSP platform (AWS, Azure, or GCP).
Responsibilities:
- Design, implement, and manage security measures for cloud-based systems and applications.
- Develop and maintain automated security scripts and tools using Python.
- Utilize Python for defining functions, automating tasks, and integrating with CI/CD pipelines.
- Build automation solutions using Python libraries such as requests, boto3, and pandas.
- Collaborate with development and operations teams to integrate security best practices into the DevOps pipeline.
- Implement and manage Terraform-based infrastructure as code (IaC) and CI/CD deployments.
- Ensure compliance with security policies, standards, and regulatory requirements.
- Stay updated with the latest security trends, threats, and technologies.
- Provide guidance and training to team members on security best practices, automation techniques, and Terraform usage.
- Work on the CNAPP tool and build/customize rulesets as needed, including exclusion handling.
- Work on CNAPP tool integrations and automations.
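The CNAPP-style ruleset and exclusion-handling work described above boils down to evaluating cloud resources against codified policies. A minimal, dependency-free sketch of that idea in pure Python; the field names and rule IDs are hypothetical and not tied to any specific cloud provider or CNAPP product:

```python
# Minimal policy-as-code check: flag security-group-style rules that are
# open to the world on sensitive ports. Illustrative only; field names
# and rule IDs are invented, not a real provider or CNAPP schema.

SENSITIVE_PORTS = {22, 3389}  # SSH, RDP

def find_violations(rules, exclusions=frozenset()):
    """Return IDs of rules open to 0.0.0.0/0 on a sensitive port,
    skipping explicitly excluded rule IDs (exclusion handling)."""
    violations = []
    for rule in rules:
        if rule["id"] in exclusions:
            continue  # documented exception, e.g. a hardened bastion host
        if rule["cidr"] == "0.0.0.0/0" and rule["port"] in SENSITIVE_PORTS:
            violations.append(rule["id"])
    return violations

rules = [
    {"id": "sg-web", "cidr": "0.0.0.0/0", "port": 443},
    {"id": "sg-ssh", "cidr": "0.0.0.0/0", "port": 22},
    {"id": "sg-bastion", "cidr": "0.0.0.0/0", "port": 22},
]
print(find_violations(rules, exclusions={"sg-bastion"}))  # ['sg-ssh']
```

In a real pipeline the `rules` list would come from a cloud API (e.g. via boto3) and the exclusion set from a reviewed configuration file, rather than being hard-coded.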
Band: B3. Location: Bangalore. CBR: 130K.

Do:
- Drive technical solution support to the team to align on continuous integration (CI) and continuous deployment (CD) of technology in applications.
- Design and define the overall DevOps architecture/framework for a project/module delivery as per the client requirement.
- Decide on the DevOps tools and platform to be deployed, aligned to the customer's requirement.
- Create a tool deployment model for validating, testing, and monitoring performance, and align or provision resources accordingly.
- Define and manage the IT infrastructure as per the requirements of the supported software code.
- Manage and drive the DevOps pipeline that supports the application lifecycle across the DevOps toolchain: from planning, coding, and building, to testing, staging, release, configuration, and monitoring.
- Work with the team on the coding and scripting needed to connect elements of the code required to run the software release with operating systems and production infrastructure, with minimum disruption.
- Ensure onboarding of application configuration from planning to release stage.
- Integrate security across the entire DevOps lifecycle to ensure no cyber risk and that data privacy is maintained.
- Provide customer support/service on the DevOps tools: timely support for internal and external customer escalations on multiple platforms; troubleshoot problems that arise in the implementation of DevOps tools across the project/module; perform root cause analysis of major incidents/critical issues which may hamper project timelines, quality, or cost; develop alternate plans/solutions based on root cause analysis of critical problems; follow the escalation matrix/process as soon as a resolution gets complicated or isn't resolved.
- Provide knowledge transfer, share best practices with the team, and motivate.

Team Management:
- Resourcing: Forecast talent requirements as per current and future business needs; hire adequate and right resources for the team; train direct reportees to make right recruitment and selection decisions.
- Talent Management: Ensure 100% compliance with Wipro's standards of adequate onboarding and training for team members to enhance capability and effectiveness; build an internal talent pool of HiPos and ensure their career progression within the organization; promote diversity in leadership positions.
- Performance Management: Set goals for direct reportees, conduct timely performance reviews and appraisals, and give constructive feedback to direct reports. In case of performance issues, take necessary action with zero tolerance for will-based performance issues. Ensure that organizational programs like Performance Nxt are well understood and that the team takes the opportunities presented by such programs, for themselves and the levels below them.
- Employee Satisfaction and Engagement: Lead and drive engagement initiatives for the team; track team satisfaction scores and identify initiatives to build engagement within the team; proactively challenge the team with larger and enriching projects/initiatives for the organization or team; exercise employee recognition and appreciation.

Deliver (Performance Parameter: Measure):
- Continuous Integration, Deployment & Monitoring: 100% error-free onboarding and implementation.
- CSAT: Manage service tools; troubleshoot queries; customer experience.
- Capability Building & Team Management: % trained on new-age skills; team attrition %; employee satisfaction score.

Mandatory Skills: DevOps - Terraform. Experience: 5-8 Years.

Posted 2 days ago

Apply

6.0 years

0 Lacs

India

On-site


Immediate joiners only. Need someone with Python, Google Pub/Sub, CI/CD, Terraform, and Vertex AI experience.
Qualifications: Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent experience).
Experience: 6+ years of experience in cloud architecture, with a focus on GCP.
Technical Expertise:
- Strong knowledge of GCP core services, including compute, storage, networking, and database solutions.
- Proficiency in Infrastructure as Code (IaC) tools like Terraform, Deployment Manager, or Pulumi.
- Experience with containerization and orchestration tools (e.g., Docker, Kubernetes, GKE, or Cloud Run).
- Understanding of DevOps practices, CI/CD pipelines, and automation.
- Strong command of networking concepts such as VPCs, load balancing, and firewall rules.
- Familiarity with scripting languages like Python or Bash.
Preferred Qualifications:
- Google Cloud Certified – Professional Cloud Architect or Professional DevOps Engineer.
- Expertise in engineering and maintaining MLOps and AI applications.
- Experience in hybrid cloud or multi-cloud environments.
- Familiarity with monitoring and logging tools such as Cloud Monitoring, ELK Stack, or Datadog.
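Google Pub/Sub, named above, decouples publishers from subscribers via topics. A toy in-memory sketch of that publish/subscribe pattern; this is not the google-cloud-pubsub client API, and the class and method names here are invented for illustration:

```python
from collections import defaultdict

class ToyPubSub:
    """In-memory stand-in for the topic/subscriber pattern that
    Cloud Pub/Sub provides. Not the google-cloud-pubsub client API."""

    def __init__(self):
        self._subs = defaultdict(list)  # topic name -> list of callbacks

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, message):
        # Fan out: every subscriber on the topic receives the message.
        for callback in self._subs[topic]:
            callback(message)

bus = ToyPubSub()
received = []
bus.subscribe("builds", received.append)
bus.publish("builds", {"status": "ok"})
print(received)  # [{'status': 'ok'}]
```

The real service adds durability, acknowledgements, and at-least-once delivery over the network; the decoupling idea is the same.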

Posted 2 days ago

Apply

6.0 - 9.0 years

9 - 15 Lacs

Bengaluru

Hybrid


Job Description: We are hiring a Java Developer with strong GCP experience, or a GCP Engineer proficient in Java. The candidate should be capable of developing scalable cloud-native applications using Google Cloud services. Key Skills: Java, Spring Boot, RESTful APIs; Google Cloud Platform (GCP); Cloud Functions, Pub/Sub, BigQuery (preferred); CI/CD, Docker, Kubernetes.

Posted 2 days ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Gurugram, Bengaluru

Work from Office


Experience with the full SDLC. Strong technical skills and the desire to be hands-on, at least in the short to medium term. Ability to coach and lead other developers. Strong product mindset. Comfortable with fast-paced, low-certainty environments. Interest in the healthcare industry.
Technical skills: the ideal candidate will have some experience with most of the following:
- Public clouds (e.g., GCP, AWS, Azure) and infrastructure-as-code
- The full technology stack (relational and non-relational databases, back-end, front-end)
- Automated testing and QA
- Standard DevOps practices (CI/CD, monitoring, operations)
Experience: 5-10 Years. Salary: Not Disclosed. Industry: IT Hardware / Technical Support / Telecom Engineering. Qualification: B.E. Key Skills: SDLC, AWS, CI/CD, Software Architect, Automated Testing, Work From Home.

Posted 2 days ago

Apply

5.0 - 8.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Job Title: DevOps Engineer. Experience: 5 to 8 Years. Location: Pune.
Job Description: We are looking for a skilled DevOps Engineer with 5 to 8 years of experience to join our dynamic team. The ideal candidate will have a strong background in DevOps practices, CI/CD pipeline creation, and experience with GCP services. You will play a crucial role in ensuring smooth development, deployment, and integration processes.
Key Responsibilities:
- CI/CD Pipeline Creation: Design, implement, and manage CI/CD pipelines using GitHub, ensuring seamless integration and delivery of software.
- Version Control: Manage and maintain code repositories using GitHub, ensuring best practices for version control and collaboration.
- Infrastructure as Code: Write and maintain infrastructure as code (IaC) using Terraform/YAML, ensuring consistent and reliable deployment processes.
- GCP Services Management: Utilize Google Cloud Platform (GCP) services to build, deploy, and scale applications. Manage and optimize cloud resources to ensure cost-effective operations.
- Automation & Monitoring: Implement automation scripts and monitoring tools to enhance the efficiency, reliability, and performance of our systems.
- Collaboration: Work closely with development, QA, and operations teams to ensure smooth workflows and resolve issues efficiently.
- Security & Compliance: Ensure that all systems and processes comply with security and regulatory standards.
Required Skills:
- DevOps Practices: Strong understanding of DevOps principles, including continuous integration, continuous delivery, and continuous deployment.
- GitHub: Extensive experience with GitHub for version control, collaboration, and pipeline integration.
- CI/CD: Hands-on experience in creating and managing CI/CD pipelines.
- GCP Services: Solid experience with GCP services, including compute, storage, networking, and security.
Preferred Qualifications:
- GCP Certification: Google Cloud Platform certification is highly desirable and will be an added advantage.
- Scripting Languages: Proficiency in scripting languages such as Python, Bash, or similar.
- Monitoring Tools: Experience with monitoring and logging tools like Prometheus, Grafana, or Stackdriver.
Educational Qualifications: Bachelor's degree in Computer Science, Information Technology, or a related field.
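The infrastructure-as-code work described above rests on one core loop: diff the desired state against the actual state and apply only the difference. A hypothetical pure-Python sketch of that "plan" step; the resource names are made up, and this is an illustration of the idea, not Terraform's real engine:

```python
def plan(desired, actual):
    """Diff desired vs. actual state the way an IaC tool's 'plan'
    step does: report creates, updates, and deletes. Illustrative
    sketch only, not Terraform's real planner."""
    creates = sorted(set(desired) - set(actual))
    deletes = sorted(set(actual) - set(desired))
    updates = sorted(
        name for name in desired
        if name in actual and desired[name] != actual[name]
    )
    return {"create": creates, "update": updates, "delete": deletes}

desired = {"vm-a": {"size": "e2-small"}, "bucket-logs": {"location": "EU"}}
actual = {"vm-a": {"size": "e2-micro"}, "vm-old": {"size": "e2-small"}}
print(plan(desired, actual))
# {'create': ['bucket-logs'], 'update': ['vm-a'], 'delete': ['vm-old']}
```

Because the plan is computed before anything is changed, running it twice against a converged state yields an empty diff, which is what makes IaC deployments repeatable.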

Posted 2 days ago

Apply

7.0 - 12.0 years

13 - 17 Lacs

Hyderabad

Work from Office


YOUR IMPACT: We are seeking a highly skilled and experienced Level 3 Site Reliability Engineer (SRE) to join our Cloud Operations team. This role is critical in driving advanced engineering initiatives to ensure infrastructure reliability, scalability, and automation across multi-cloud environments. As an L3 SRE, you will lead complex cloud support operations, troubleshoot infrastructure as code, implement observability frameworks, and guide junior SREs while helping shape future architectural direction. This role demands hands-on expertise in AWS, Azure, or GCP, advanced scripting, and deep observability integration, contributing directly to uptime, automation maturity, and strategic improvements to cloud infrastructure.
WHAT THE ROLE OFFERS:
Cloud Infrastructure & Architecture: Architect and maintain scalable, resilient systems across AWS, Azure, and GCP. Lead cloud adoption and migration strategies while ensuring minimal disruption and high reliability. Implement security and governance controls including VPC, Security Groups, Route53, ACM, and Security Hub. Perform deep infrastructure troubleshooting and root cause analysis, especially with IaC-based deployments.
Infrastructure as Code (IaC) & Configuration Management: Design and manage infrastructure using Terraform, Terragrunt, and CloudFormation. Oversee configuration management using tools like AWS SSM, SaltStack, and Packer. Review and remediate issues within Git-based CI/CD workflows for IaC and service deployment.
Observability & Monitoring: Build and maintain monitoring/alerting pipelines using CloudWatch, EventBridge, SNS, and Hund.io. Develop custom observability tooling for end-to-end visibility and proactive issue detection. Lead incident response and contribute to post-incident reviews and reliability reports.
Automation, Scripting & CI/CD: Develop and maintain automation tools using Bash, Python, Ruby, or PHP. Integrate deployment pipelines into secure, scalable CI/CD processes. Automate vulnerability assessments and compliance scans in line with ISO 27001 standards.
Containerization & Microservices Support: Lead container platform deployments using EKS, ECS, ECR, and Fargate. Guide engineering teams in Kubernetes resource optimization and troubleshooting.
Database & Storage Management: Provide advanced operational support for RDS, PostgreSQL, and Elasticsearch. Monitor database performance and ensure availability across distributed systems.
Mentorship & Strategy: Mentor L1 and L2 SREs on technical tasks and troubleshooting best practices. Contribute to cloud architecture planning, operational readiness, and process improvements. Help define and track Key Performance Indicators (KPIs) related to system uptime, MTTR, and automation coverage.
WHAT YOU NEED TO SUCCEED: 7-12 years of experience in Site Reliability Engineering or DevOps roles. Advanced expertise in multi-cloud environments (AWS, Azure, GCP). Strong Linux and Windows administration background (Fedora, Debian, Microsoft). Proficiency in Terraform, Terragrunt, CloudFormation, and config management tools. Hands-on experience with monitoring tools like CloudWatch, SNS, EventBridge, and third-party integrations. Advanced scripting skills in Python, Bash, Ruby, or PHP. Knowledge of container platforms including EKS, ECS, and Fargate. Familiarity with Vulnerability Management, ISO 27001, and audit-readiness practices.
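One of the KPIs named above, MTTR (mean time to recovery), is simply the average repair duration over resolved incidents. A small sketch of the calculation; the incident timestamps are invented for illustration:

```python
from datetime import datetime

def mttr_minutes(incidents):
    """Mean time to recovery in minutes over resolved incidents.
    Each incident is a (opened, resolved) pair of ISO-8601 timestamps."""
    durations = [
        (datetime.fromisoformat(done) - datetime.fromisoformat(start)).total_seconds() / 60
        for start, done in incidents
    ]
    return sum(durations) / len(durations)

incidents = [
    ("2024-05-01T10:00:00", "2024-05-01T10:30:00"),  # 30 minutes
    ("2024-05-02T08:00:00", "2024-05-02T09:30:00"),  # 90 minutes
]
print(mttr_minutes(incidents))  # 60.0
```

In practice the incident list would be pulled from a ticketing or paging system, and the metric tracked per service over a rolling window rather than computed once.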

Posted 2 days ago

Apply

7.0 - 12.0 years

15 - 25 Lacs

Bengaluru

Work from Office


Senior .NET/C# & C/C++ Developer
Roles and Responsibilities:
- Work independently, take responsibility, and drive things to closure.
- Work within the Scrum team designing and implementing new features and services.
- Maintain a high level of quality in your output and test your own work before passing it to QA for verification.
- Manage a team that is geographically spread.
- Support production environments.
- Maintain up-to-date knowledge of existing and emerging technologies relevant to the role.
- Communicate and escalate issues in a clear and timely manner.
Required experience and skills:
- BS in Computer Science, Software Engineering, or equivalent; graduation in Engineering is preferred.
- Primary skills: .NET/C# and C/C++ development; SQL.
- Secondary skills: experience working in both AWS and Azure cloud environments; DevOps skills for deployment (Kubernetes, Docker, and Helm).
- Tertiary skills: experience with MongoDB and SQL databases and writing queries; experience in software architecture, design, and development; hands-on experience with requirements analysis, technical design, software architecture, design principles and considerations, and best practices.
- Nice to have: knowledge of configuring and developing DevOps CI/CD pipelines with Kubernetes and Docker, and with tools like Jenkins, GitLab, Artifactory/JFrog, etc.
- Good understanding of application lifecycle management (ALM).
- Experience working in an Agile development environment.
- Exposure to oil and gas wells functionality/domain is an added advantage.
- Able to collaborate in a multi-location team environment.
- Excellent analytical, communication, and problem-solving skills.

Posted 2 days ago

Apply

8.0 - 10.0 years

10 - 12 Lacs

Bengaluru

Work from Office


Senior Data Engineer (Databricks, PySpark, SQL, Cloud Data Platforms, Data Pipelines)
Job Summary: Synechron is seeking a highly skilled and experienced Data Engineer to join our innovative analytics team in Bangalore. The primary purpose of this role is to design, develop, and maintain scalable data pipelines and architectures that empower data-driven decision making and advanced analytics initiatives. As a critical contributor within our data ecosystem, you will enable the organization to harness large, complex datasets efficiently, supporting strategic business objectives and ensuring high standards of data quality, security, and performance. Your expertise will directly contribute to building robust, efficient, and secure data solutions that drive business value across multiple domains.
Required Software & Tools:
- Databricks platform (hands-on experience with Databricks notebooks, clusters, and workflows)
- PySpark (proficiency in developing and optimizing Spark jobs)
- SQL (advanced proficiency in writing and optimizing complex queries)
- Data orchestration tools such as Apache Airflow or similar (experience scheduling and managing data workflows)
- Cloud data platforms (experience with AWS, Azure, or Google Cloud)
- Data warehousing solutions (Snowflake highly preferred)
Preferred Software & Tools:
- Kafka or other streaming frameworks (e.g., Confluent, MQTT)
- CI/CD tools for data pipelines (e.g., Jenkins, GitLab CI)
- DevOps practices for data workflows
- Programming languages: Python (expert level); familiarity with Java or Scala is advantageous
Overall Responsibilities:
- Architect, develop, and maintain scalable, resilient data pipelines and architectures supporting business analytics, reporting, and data science use cases.
- Collaborate closely with data scientists, analysts, and cross-functional teams to gather requirements and deliver optimized data solutions aligned with organizational goals.
- Ensure data quality, consistency, and security across all data workflows, adhering to best practices and compliance standards.
- Optimize data processes for enhanced performance, reliability, and cost efficiency.
- Integrate data from multiple sources, including cloud data services and streaming platforms, ensuring seamless data flow and transformation.
- Lead efforts in performance tuning and troubleshooting data pipelines to resolve bottlenecks and improve throughput.
- Stay up to date with emerging data engineering technologies and contribute to continuous improvement initiatives within the team.
Technical Skills (by category):
- Programming languages: Essential: Python, SQL. Preferred: Scala, Java.
- Databases/data management: Essential: data modeling, ETL/ELT processes, data warehousing (Snowflake experience highly preferred). Preferred: NoSQL databases, Hadoop ecosystem.
- Cloud technologies: Essential: experience with cloud data services (AWS, Azure, GCP) and deployment of data pipelines in cloud environments. Preferred: cloud-native data tools and architecture design.
- Frameworks and libraries: Essential: PySpark, Spark SQL, Kafka, Airflow. Preferred: streaming frameworks, TensorFlow (for data preparation).
- Development tools and methodologies: Essential: version control (Git), CI/CD pipelines, Agile methodologies. Preferred: DevOps practices in data engineering, containerization (Docker, Kubernetes).
- Security protocols: familiarity with data security, encryption standards, and compliance best practices.
Experience:
- Minimum of 8 years of professional experience in data engineering or related roles
- Proven track record of designing and deploying large-scale data pipelines using Databricks, PySpark, and SQL
- Practical experience in data modeling, data warehousing, and ETL/ELT workflows
- Experience working with cloud data platforms and streaming data frameworks such as Kafka or equivalent
- Demonstrated ability to work with cross-functional teams, translating business needs into technical solutions
- Experience with data orchestration and automation tools is highly valued
- Prior experience implementing CI/CD pipelines or DevOps practices for data workflows (preferred)
Day-to-Day Activities:
- Design, develop, and troubleshoot data pipelines for ingestion, transformation, and storage of large datasets
- Collaborate with data scientists and analysts to understand data requirements and optimize existing pipelines
- Automate data workflows and improve pipeline efficiency through performance tuning and best practices
- Conduct data quality audits and ensure data security protocols are followed
- Manage and monitor data workflows, troubleshoot failures, and implement fixes proactively
- Contribute to documentation, code reviews, and knowledge sharing within the team
- Stay informed of evolving data engineering tools, techniques, and industry best practices, incorporating them into daily work
Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field
- Relevant certifications such as Databricks Certified Data Engineer, AWS Certified Data Analytics, or equivalent (preferred)
- Continuous learning through courses, workshops, or industry conferences on data engineering and cloud technologies
Professional Competencies:
- Strong analytical and problem-solving skills with a focus on scalable solutions
- Excellent communication skills to collaborate effectively with technical and non-technical stakeholders
- Ability to prioritize tasks, manage time effectively, and deliver within tight deadlines
- Demonstrated leadership in guiding team members and driving project success
- Adaptability to evolving technological landscapes and innovative thinking
- Commitment to data privacy, security, and ethical handling of information
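The ETL/ELT workflow this role centers on (ingest raw records, transform with SQL, load a clean reporting table) can be miniaturized with the standard library's sqlite3 standing in for a warehouse like Snowflake or BigQuery; the table and column names below are invented for illustration:

```python
import sqlite3

# Tiny ELT round trip: load raw rows, then transform with SQL.
# sqlite3 stands in here for a real warehouse engine.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_orders (id INTEGER, amount REAL, status TEXT)")
conn.executemany(
    "INSERT INTO raw_orders VALUES (?, ?, ?)",
    [(1, 120.0, "paid"), (2, 80.0, "cancelled"), (3, 200.0, "paid")],
)

# Transform step: aggregate only valid orders into a reporting table.
conn.execute("""
    CREATE TABLE order_summary AS
    SELECT status, COUNT(*) AS n, SUM(amount) AS total
    FROM raw_orders
    WHERE status = 'paid'
    GROUP BY status
""")
print(conn.execute("SELECT * FROM order_summary").fetchall())  # [('paid', 2, 320.0)]
```

A production pipeline adds the pieces the posting lists around this core: orchestration (Airflow schedules the steps), distributed execution (PySpark instead of a single connection), and data-quality checks before the load.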

Posted 2 days ago

Apply

2.0 years

1 Lacs

Coimbatore, Tamil Nadu, India

On-site


We are an early-stage startup using AI to revolutionize the recruitment landscape. We are transforming the recruitment process by adopting an AI-driven approach and candidate-centricity. Our AI platform empowers candidates to refine their interview skills and improve success rates with intelligent feedback from AI-powered mock interviews. It enables recruiters to conduct interviews more efficiently and at a lower cost with an AI interview assistant that facilitates smarter interviews, offering deeper insights for better decision-making.
Job Description: Woyage.AI is seeking a Software Engineer QA (stipend only initially) to quality-test our AI-powered recruitment platform with automation. The ideal candidate will work with senior members of the engineering team to test manually and implement automated tests for the web, API, and mobile platforms in a very agile, fun, and exciting environment. This position reports directly to the Chief Technology Officer.
Roles & Responsibilities:
- Write and maintain test plans, test strategies, test cycles, and test cases for functional, regression, performance, and integration testing.
- Design, develop, and execute automated test scripts for web and API.
- Partner with product, UI/UX, and engineering teams to drive QA initiatives from planning through product release.
- Effectively use tools like spreadsheets for test case management, Jira for bug tracking, and Confluence for documentation.
Job Requirements:
- Bachelor's degree in Computer Science or equivalent coursework.
- 2+ years of experience in automation QA engineering, testing API, web, and mobile applications.
- Knowledge/experience in automation test frameworks: PyTest, Cypress, SuperTest, or similar.
- Knowledge/experience in Python and JavaScript.
- Knowledge/experience with test tools (Postman, etc.).
- Knowledge/experience in testing methodologies for all types of testing.
- Knowledge/experience in Scrum/Agile.
- Knowledge/experience with collaboration and development tools (Git, Slack, Confluence, Jira).
- Knowledge/experience with cloud (AWS, GCP) and AI services is a plus.
Type: Full time. 6-month stipend, after which the role will transition into a full-time position based on both organizational performance and individual contributions; the timeline for this decision will depend on revenue growth or the successful completion of the next funding round. In person, 5 days a week, at the Coimbatore facility.
Compensation: Stipend of Rs 10,000/month for the initial 6 months. Equity (stock) will be assigned after 6 months based on individual performance. Full-time compensation will be provided after generating revenue or securing funding through pre-seed or seed rounds, expected between 6 and 9 months.
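The automation-framework work listed above typically means pytest-style test functions wrapped around application logic. A self-contained sketch; the validate_signup function is a hypothetical unit under test, not part of any real product:

```python
# Pytest-style tests around a hypothetical unit under test.
# Under pytest, the test_* functions below would be collected
# automatically; here they are plain functions we also call directly.

def validate_signup(email, password):
    """Hypothetical function under test: minimal signup validation."""
    return "@" in email and len(password) >= 8

def test_valid_signup():
    assert validate_signup("a@example.com", "s3cretpass")

def test_rejects_short_password():
    assert not validate_signup("a@example.com", "short")

def test_rejects_malformed_email():
    assert not validate_signup("not-an-email", "s3cretpass")

test_valid_signup()
test_rejects_short_password()
test_rejects_malformed_email()
print("all checks passed")
```

The same shape scales to API testing: replace the unit under test with an HTTP call (e.g. via requests against a staging endpoint) and keep the assertions.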

Posted 2 days ago

Apply

0.0 - 1.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Company Description: Nxcar is transforming used car transactions to be as transparent and delightful as buying a new car. We offer free listings for individuals and dealers, used car loans, extended warranty, RC Transfer, and other used car-specific services. We inspect and digitize vehicle inventory at Nxcar premium dealerships and provide comprehensive CarScope so that you can buy a pre-owned vehicle with confidence. Nxcar is building a next-generation vehicle listing platform with integrated AI models to provide real-time insights and recommendations to customers for each vehicle as they browse. Job Overview: We are seeking an enthusiastic AI Application Engineer to join our innovative team in the newly created department - AI Studio. This role is crucial for building and integrating advanced AI/ML solutions into our existing systems. We are looking for a self-motivated individual with hands-on experience in developing AI/ML models that deliver actionable outputs. You should be comfortable experimenting with new technologies and methodologies in a fast-paced environment. The role comes with high growth opportunities, offering the potential for career advancement as you contribute to building cutting-edge AI solutions. Key Responsibilities: Integrate new technologies and AI-driven solutions to enhance our products and services. Work closely with cross-functional teams to implement AI models that are scalable, reliable, and efficient. Continuously learn and apply the latest AI/ML trends and tools, experimenting with new ideas and technologies. Write clean, efficient, and maintainable code for machine learning models and algorithms. Test and evaluate model performance, fine-tuning as necessary for optimal outcomes. Contribute to the creation of the AI Studio department, bringing in fresh ideas and collaborating on team initiatives. Provide insights and guidance based on data analysis to improve the model-building process. 
Develop and deploy AI/ML models that address real-world problems and deliver measurable outcomes. Build LLM-based projects from scratch.
Skills & Qualifications: 0-1 years of hands-on experience in AI/ML model development and deployment. Graduation from a Tier 1 engineering college. Proficiency in Python, R, or other relevant programming languages. Experience with popular AI/ML frameworks like TensorFlow, PyTorch, Scikit-learn, or similar. Strong problem-solving skills with the ability to understand business requirements and translate them into technical solutions. Basics of LLMs, LangChain, LangGraph, and similar agentic AI tools, and familiarity with Flask/FastAPI. Ability to work in a fast-paced environment and willingness to learn and adapt to new technologies. Strong coding skills and experience in software engineering best practices. Familiarity with cloud platforms (AWS, GCP, Azure) for deploying AI models is a plus. A proactive mindset with a passion for innovation and creating impactful solutions.
Location: Gurgaon, Sector 42 (in office)
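The LangChain-style work mentioned above amounts to chaining steps so that each step's output feeds the next (prompt template, model call, post-processing). A dependency-free sketch of that chaining idea; fake_llm is a stand-in for a real model call, and all names here are invented:

```python
def fake_llm(prompt):
    """Stand-in for a real LLM call; just echoes a tagged summary."""
    return f"SUMMARY({prompt})"

def make_chain(*steps):
    """Compose steps so each one's output is the next one's input,
    the core idea behind LangChain-style pipelines."""
    def run(value):
        for step in steps:
            value = step(value)
        return value
    return run

chain = make_chain(
    lambda text: f"Summarize this listing: {text}",  # prompt template
    fake_llm,                                        # model call
    str.upper,                                       # post-processing
)
print(chain("2014 Swift, single owner"))
```

Frameworks like LangChain add retries, streaming, and tool calls on top, and LangGraph generalizes the linear chain to a graph with branching and state; the composition idea stays the same.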

Posted 2 days ago

Apply

6.0 years

0 Lacs

Greater Kolkata Area

On-site


Description / Job functions:
- Serve as the Salesforce security and compliance expert for customers and prospects.
- Understand our business and the problems we are trying to solve, deeply, when it comes to our core security services.
- Support the sales and pre-sales teams in responding to customer risk and security questionnaires and queries.
- Build customer trust by managing and hosting in-person customer/prospect security meetings.
- Be the Salesforce field expert for the Salesforce trust story covering security, architecture, reliability, performance, privacy, and compliance.
- Interface with Product Management and Security teams to ensure all the latest security features and capabilities are properly represented in customer responses.
- Collaborate with the Salesforce Legal, Privacy, and other teams on customer-specific contract requirements.
- Interface with Salesforce security engineering and product management teams; ensure teams are aware of gaps in our security/compliance capabilities that are impacting customers and prospects.
- Ensure field sales, services, and partner teams are consistently enabled with the latest and best positioning around Salesforce security and compliance.
- Gather customer security/compliance requests, and liaise with Salesforce product managers to maintain a security product roadmap.
- Provide input and assist in developing compliance-related documentation: white papers, standard questionnaires, security best practices, etc.
- Develop SME capabilities for selected Salesforce services and work with the product teams and global SMEs within the team to stay updated on the latest developments.
- Support drafting white papers and security collateral.
Desired Qualifications: Bachelor's degree with 6+ years of experience in information security, governance, and compliance. Experience with cloud platforms like AWS, GCP, and Azure, understanding their architectural and security nuances.
Excellent cross-functional collaboration and communication skills across product, security, Marketing, Field Sales, and more. Excellent communication and presentation skills Desired Skills And Experience Familiarity with one or more security and regulatory frameworks: NIST 800-53, NIST Cybersecurity Framework, PCI-DSS, ISO 27001, ISO 27017, ISO 27018, CSA, Monetary Authority of Singapore (MAS) Outsourcing Guidelines and TRM, Personal Data Protection laws in Singapore, Malaysia, Thailand, Indonesia, Vietnam etc, BNM Outsourcing guidelines and Risk Management in IT (RMiT) etc. Managed one or more compliance certifications/audits, either as an auditor or responder ( PCI-DSS, ISO27001, SOC-1/2, IRAP/ISMS, MTCS, etc.) Experience with completing customer security/compliance questionnaires Familiarity with Data Protection Laws in Australia Experience interpreting the intent of specific customer questions, and mapping them to industry standard controls Familiarity with public cloud architectures, security practices and compliance documentation Experience working in the Financial Services, Insurance, Banking, Superannuation, Telecommunication services industry Strong team player About Salesforce Salesforce, the Customer Success Platform and world's #1 CRM, empowers companies to connect with their customers in a whole new way. We are the fastest growing of the top 10 enterprise software companies, the World's Most Innovative Company according to Forbes, and one of Fortune's 100 Best Companies to Work for six years running. The growth, innovation, and Aloha spirit of Salesforce are driven by our incredible employees who thrive on delivering success for our customers while also finding time to give back through our 1/1/1 model, which leverages 1% of our time, equity, and product to improve communities around the world. Salesforce is a team sport, and we play to win. Join us!

Posted 2 days ago

Apply

10.0 years

0 Lacs

India

Remote


Position: Senior Java Developer
Experience: 10+ Years
Location: Remote

Description:
We are looking for an experienced Java Developer with a strong background in designing and building scalable backend systems and APIs. The ideal candidate should have deep expertise in core Java, Spring Boot, microservices, and cloud platforms.

Key Skills:
- Expertise in Core Java (Java 11+)
- Strong experience with Spring Boot, REST APIs, and microservices
- Proficiency in SQL/NoSQL databases and JPA/Hibernate
- Experience with CI/CD, version control (Git), and testing frameworks
- Familiarity with Docker, Kubernetes, and cloud platforms (AWS/GCP/Azure)
- Good understanding of design patterns and scalable architecture

Responsibilities:
- Design and develop robust, scalable Java applications
- Collaborate with cross-functional teams on architecture and design
- Write clean, testable, maintainable code
- Optimize performance and troubleshoot issues
- Participate in agile development and code reviews

Posted 2 days ago

Apply

6.0 - 11.0 years

18 - 22 Lacs

Chennai, Trivandrum

Work from Office


Job Description:

Key Responsibilities:
- Design and develop machine learning and deep learning systems using frameworks such as TensorFlow, PyTorch, and Scikit-learn.
- Implement real-time systems for forecasting, optimization, and personalized recommendations.
- Research and apply advanced AI/ML techniques, including Generative AI models (e.g., GPT, LLaMA) and NLP architectures.
- Lead end-to-end machine learning projects, from problem framing to deployment and monitoring.
- Manage model lifecycle activities, including training, testing, retraining, and fine-tuning for optimal performance.
- Collaborate with analytics teams, practice leads, subject matter experts, and customer teams to understand business objectives and technical requirements.
- Conduct statistical analysis to address business challenges and drive innovation in AI/ML applications.
- Deploy machine learning applications on cloud platforms (e.g., GCP, Azure, AWS) and ensure seamless integration with existing systems.
- Monitor system performance, address data drift, and ensure scalability and adaptability to evolving requirements.

Qualifications:
- Minimum 7+ years of experience for Principal Data Scientist roles (3+ years for Data Scientist roles), with proven expertise in AI/ML, particularly in Generative AI, NLP, and computer vision.
- Proficiency in Python, Java, and R.
- Strong expertise in TensorFlow, PyTorch, Scikit-learn, Django, Flask, NumPy, and Pandas.
- Comprehensive knowledge of data structures, SQL, multi-threading, and APIs (RESTful, OData).
- Hands-on experience with Tableau and Power BI for data visualization.
- Strong understanding of statistics, probability, algorithms, and optimization techniques.
- Experience deploying ML applications on GCP, Azure, or AWS.
- Familiarity with ERP/SAP integration, post-production support, and lifecycle management of ML systems.
- Industry experience in retail, telecom, or supply chain optimization is a plus.
- Exceptional problem-solving skills, strong leadership capabilities, and excellent communication skills.

Posted 2 days ago

Apply

5.0 - 8.0 years

7 - 10 Lacs

Hyderabad, Pune, Bengaluru

Work from Office


We are looking for a highly skilled Java Engineer to join our dynamic team supporting AMD onsite in Hyderabad. This role is ideal for someone with strong backend development experience and a deep understanding of Java technologies, microservices, cloud, and CI/CD pipelines.

Responsibilities:
- Lead multidimensional projects involving cross-functional teams.
- Provide architectural input and lead code reviews.
- Resolve complex engineering issues and collaborate across teams.
- Design and develop software using Agile methodology.
- Contribute to detailed design documents and best practices.
- Mentor junior engineers and uplift technical standards.
- Apply secure coding practices and integrate cloud-based solutions.

Primary Skills:
- Java (v11+)
- Spring Boot, Spring MVC, Spring Data, Spring Security
- Microservices & REST APIs
- Docker & Kubernetes
- CI/CD (Jenkins, GitHub Actions, GitLab CI/CD)
- SQL & RDBMS (PostgreSQL, MySQL, Oracle)
- Cloud platforms: AWS, GCP, or Azure
- Messaging systems: Kafka or RabbitMQ (plus)
- Financial systems integration: Anaplan, Oracle Financials (plus)

Qualifications:
- 5+ years of backend development experience
- Strong problem-solving and communication skills
- Bachelor's/Master's in Computer Science or equivalent

Posted 2 days ago

Apply

12.0 - 13.0 years

14 - 19 Lacs

Bengaluru

Work from Office


Required Skills:
- Proven experience as a Node.js Developer or in a similar role.
- Strong proficiency in JavaScript/TypeScript and experience with modern JS frameworks (React, Angular, etc.).
- Solid understanding of web application architecture and RESTful APIs.
- Experience with databases such as MongoDB, MySQL, and PostgreSQL.
- Familiarity with cloud platforms such as AWS, Azure, or Google Cloud.
- Excellent communication and leadership skills.
- Experience with containerization and orchestration tools (Docker, Kubernetes).
- Knowledge of CI/CD pipelines and automated testing.
- Bachelor's degree in Computer Science, Engineering, or a related field (Master's preferred).

Posted 2 days ago

Apply