
36116 Azure Jobs - Page 7

JobPe aggregates these listings for easy access; applications are submitted directly on the original job portal.

5.0 years

0 Lacs

Goa, India

On-site

Source: LinkedIn

Job Title: Senior System Administrator (DevOps) Location: Goa Experience: 5+ Years Job Overview We are looking for a highly skilled Senior System Administrator with DevOps expertise to manage and optimize our cloud infrastructure, ensure high system availability, and enhance deployment automation. The ideal candidate should have hands-on experience with AWS, Linux administration, server setup, SSL installation, networking, VPC, VPN, and automation tools. Key Responsibilities Cloud Infrastructure Management: Design, implement, and manage AWS services such as EC2, S3, RDS, Route 53, IAM, and VPC. Server Administration: Configure, maintain, and monitor Linux servers for high availability and performance. SSL/TLS Management: Install, renew, and troubleshoot SSL certificates for secure communications. Networking & Security: Set up and manage VPCs, VPNs, firewalls, security groups, and IAM policies to secure the infrastructure. Monitoring & Logging: Set up CloudWatch or other monitoring/logging tools for performance tracking and alerts. Backup & Disaster Recovery: Ensure regular backups, high availability, and disaster recovery strategies. Troubleshooting & Optimization: Identify and resolve system performance issues, security vulnerabilities, and other infrastructure challenges. Required Skills & Qualifications 5+ years of experience as a System Administrator, DevOps Engineer, or related role. Strong expertise in AWS (EC2, RDS, VPC, S3, Route 53, IAM, Lambda, etc.). Proficiency in Linux system administration (Ubuntu, CentOS, RHEL, etc.). Hands-on experience with SSL installation, renewal, and troubleshooting. Expertise in networking, firewalls, DNS, VPN, and security best practices. Familiarity with Docker, Kubernetes, and container orchestration. Proficiency in scripting languages like Bash, Python, or PowerShell for automation. Strong understanding of CI/CD pipelines and DevOps best practices. Experience with monitoring tools such as Datadog. Good knowledge of Git, version control systems, and deployment strategies. Strong problem-solving, troubleshooting, and analytical skills. Preferred Qualifications (Nice To Have) AWS Certification (AWS Solutions Architect, AWS SysOps, or AWS DevOps Engineer). Experience with hybrid cloud environments (AWS, Azure, GCP). Knowledge of database administration (MySQL, PostgreSQL, MongoDB).
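
This listing asks for hands-on SSL certificate management plus Bash/Python automation. Below is a minimal, hedged sketch (standard library only, host list is a placeholder) of the kind of renewal-monitoring script such a role might maintain.

```python
#!/usr/bin/env python3
"""Minimal sketch (not from the listing): report how many days remain on a
host's TLS certificate, the sort of renewal check this role would automate."""
import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(host: str, port: int = 443) -> int:
    """Open a TLS connection and return days left on the server certificate."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # 'notAfter' looks like 'Jun 30 12:00:00 2025 GMT'
    expires = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    return (expires.replace(tzinfo=timezone.utc) - datetime.now(timezone.utc)).days

if __name__ == "__main__":
    for host in ["example.com"]:          # placeholder host list
        remaining = days_until_expiry(host)
        status = "RENEW SOON" if remaining < 30 else "OK"
        print(f"{host}: {remaining} days left [{status}]")
```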

Posted 8 hours ago

Apply

10.0 - 12.0 years

0 Lacs

Greater Bengaluru Area

On-site

Source: LinkedIn

Area(s) of responsibility Job Title – Azure Architect. Desired Profile: Develop and maintain scalable architecture, database design, and data pipelines, and build out new data source integrations to support continuing increases in data volume and complexity. Design, develop, and maintain data ingestion and integration frameworks using Azure cloud services. Assist in designing end-to-end data and analytics solution architecture and perform POCs within Azure. Drive the design, sizing, POC setup, etc. of Azure environments and related services for the use cases and solutions. Experience Needed: 10–12 years of industry experience, including at least 3 years in an architect role and at least 3 to 4 years designing and building analytics solutions in Azure. Experience architecting data ingestion/integration frameworks capable of processing structured, semi-structured, and unstructured data sets in batch and real time. Hands-on experience in the design of reporting schemas and data marts, and in the development of reporting solutions. Develop batch processing, streaming, and integration solutions that process structured and non-structured data. Experience in design, development, and deployment using Azure services (Azure Synapse, Data Factory, Azure Data Lake Storage, Python).
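
The role centres on orchestrating ingestion with Azure Data Factory. As a hedged sketch (subscription, resource group, factory, and pipeline names are placeholders; assumes the azure-identity and azure-mgmt-datafactory packages), this is roughly how a pipeline run is triggered from Python:

```python
# Hedged sketch: start an Azure Data Factory pipeline run and check its status once.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

subscription_id = "<subscription-id>"          # placeholder
resource_group = "rg-analytics"                # placeholder
factory_name = "adf-analytics"                 # placeholder

adf = DataFactoryManagementClient(DefaultAzureCredential(), subscription_id)

# Trigger the ingestion pipeline with a runtime parameter.
run = adf.pipelines.create_run(
    resource_group,
    factory_name,
    "pl_ingest_sales",                         # placeholder pipeline name
    parameters={"load_date": "2025-01-01"},
)
print("Started pipeline run:", run.run_id)

# Poll the run status once (a real orchestrator would poll in a loop).
status = adf.pipeline_runs.get(resource_group, factory_name, run.run_id)
print("Current status:", status.status)
```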

Posted 8 hours ago

Apply

0 years

0 Lacs

Greater Bengaluru Area

On-site

Source: LinkedIn

Area(s) of responsibility We are looking for an experienced Software Engineer with expertise in ASP.NET Core, C#, and clean architecture. The ideal candidate should have hands-on experience with Azure Kubernetes Service (AKS), CI/CD processes, event-driven programming, and microservices monitoring tools. Strong communication, analytical thinking, and collaboration skills are essential. Responsibilities: Develop and maintain software using ASP.NET Core and C#, following domain-driven design principles. Design scalable and efficient solutions with clean architecture. Work with SQL Server, ensuring optimized database performance. Implement and manage CI/CD pipelines using Argo CD and Git Build. Deploy and manage applications on Azure Kubernetes Service (AKS). Utilize Infrastructure as Code (IaC) for efficient cloud infrastructure provisioning. Implement event-driven architecture using an event bus over Dapr. Work with Docker Desktop for containerization and deployment. Monitor microservices using tools such as Jaeger tracing, OpenTelemetry, Prometheus, and Grafana. Apply OOP concepts for effective software development.

Posted 8 hours ago

Apply

12.0 - 14.0 years

0 Lacs

Greater Bengaluru Area

On-site

Source: LinkedIn

Area(s) of responsibility Job Description – Azure Tech Project Manager. Experience Required – 12–14 years. The Project Manager will be responsible for driving project management activities across Azure cloud services (ADF, Azure Databricks, PySpark, ADLS Gen2). Strong understanding of Azure services and process execution, from acquiring data from source systems through to visualization. Experience in Azure DevOps. Experience with data warehouses, data lakes, and visualizations. Project management skills including time and risk management, resource prioritization, and project structuring. Responsible for end-to-end project execution and delivery across multiple clients. Understands ITIL processes related to incident management, problem management, application life-cycle management, and operational health management. Strong in Agile and the Jira tool. Strong customer service, problem-solving, organizational, and conflict-management skills. Should be able to prepare weekly/monthly reports for both internal and client management. Should be able to help team members with technical issues. Should be a good learner, open to learning new functionality.

Posted 8 hours ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

We are seeking a skilled DevOps Engineer to join our dynamic team. The ideal candidate will have expertise in VMware, Kubernetes, Azure, CI/CD, Docker, Nginx, basic programming, networking, and DevSecOps . This role involves developing, deploying, and maintaining infrastructure, ensuring scalability, and optimizing system performance. Responsibilities Design, implement, and manage CI/CD pipelines for seamless deployment. Deploy, manage, and scale applications using Kubernetes. Configure and manage cloud resources on Microsoft Azure. Automate infrastructure provisioning and management. Ensure security and compliance best practices in cloud and on-premises environments. Manage and configure Nginx as a web server and reverse proxy. Utilize Docker for containerization and microservices deployment. Monitor system performance, troubleshoot issues, and optimize for efficiency. Collaborate with development teams to ensure smooth integration of DevOps processes. Implement and maintain basic networking configurations. Integrate DevSecOps principles to enhance security in the development lifecycle. Requirements Proficiency in VMware for virtualization and resource management. Strong knowledge of Kubernetes for container orchestration. Hands-on experience with Azure cloud services. Experience in building and managing CI/CD pipelines using tools like Jenkins, GitHub Actions, or Azure DevOps. Docker expertise for containerization and microservices. Familiarity with Nginx for load balancing and proxy configuration. Understanding of networking concepts such as DNS, TCP/IP, and firewalls. Basic programming skills in Python, Bash, or any scripting language. Strong problem-solving skills and ability to work in a collaborative team environment. Experience with DevSecOps tools and methodologies for security automation. Preferred Qualifications Certifications in Azure, Kubernetes, VMware, or DevSecOps. Experience with infrastructure-as-code (IaC) tools like Terraform or Ansible. Knowledge of logging and monitoring tools such as Prometheus, Grafana, or ELK Stack. Experience with security best practices in DevOps environments. If you are passionate about automation, cloud technologies, DevOps methodologies, and security , we would love to hear from you! Apply now and be part of a forward-thinking team.

Posted 8 hours ago

Apply

9.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

Job Description: Senior DevOps Engineer (9-12 Years Experience) - Banking Sector Location: Muscat, Oman Department: Technology/Engineering About Us NotionMindz Technology LLP is a global technology partner dedicated to empowering businesses through diverse service offerings. We collaborate with clients worldwide to support their technology initiatives, including: Technology Resource Staffing: Expert teams tailored to project needs. Offshore Development Center (ODC) Solutions: End-to-end setup, management, and optimization. Testing Services: Establishing and managing Testing Centers of Excellence (CoE) and Testing-as-a-Service (TaaS). Custom Development: Scalable software solutions aligned with business goals. Our mission is to be a trusted partner, enabling innovation and efficiency across industries. Job Summary As a Senior DevOps Engineer , you will design, implement, and manage cloud-native infrastructure, CI/CD pipelines, and automation frameworks to support mission-critical banking applications. Your expertise in DevOps practices, security compliance, and collaboration will drive operational excellence while adhering to stringent regulatory standards (e.g., PCI-DSS, GDPR, SOX). Key Responsibilities Infrastructure & Cloud Management Architect, deploy, and manage secure, scalable cloud infrastructure (AWS/Azure/GCP) for high-availability banking systems. Implement Infrastructure-as-Code (IaC) using Terraform, CloudFormation, or Ansible. Optimize cloud costs while ensuring performance and reliability. CI/CD Pipeline Development Design and maintain robust CI/CD pipelines (Jenkins, GitLab CI, Azure DevOps) for rapid, secure deployments. Integrate automated testing, code quality checks, and security scanning (SAST/DAST). Security & Compliance Ensure infrastructure and pipelines comply with banking regulations (e.g., PCI-DSS, GDPR) and internal audits. Implement secrets management (Hashicorp Vault, AWS Secrets Manager) and role-based access control (RBAC). Monitoring & Incident Response Deploy monitoring/alerting tools (Prometheus, Grafana, ELK, Datadog) for real-time system health insights. Lead incident response, root cause analysis, and post-mortems for critical outages. Collaboration & Leadership Partner with development, QA, and security teams to streamline SDLC in an Agile environment. Mentor junior engineers and evangelize DevOps best practices (e.g., shift-left security, GitOps). Disaster Recovery & Business Continuity Design DR strategies and automate failover processes for zero-downtime deployments. Qualifications Technical Skills 9-12 years of DevOps/SRE experience, including 3+ years in banking/financial services. Expertise in: Cloud Platforms: AWS (preferred), Azure, or GCP with focus on serverless, Kubernetes (EKS/AKS/GKE), and microservices. Automation Tools: Terraform, Ansible, Jenkins, ArgoCD, Helm. Scripting: Python, Bash, or Go. Security: Vulnerability management, penetration testing, and compliance frameworks. Databases: SQL, NoSQL, and in-memory systems (e.g., Oracle, PostgreSQL, Redis). Soft Skills Strong communication skills to collaborate with cross-functional teams. Problem-solving mindset with a focus on delivering business value. Leadership experience in guiding DevOps transformations. Certifications (Preferred) AWS/Azure DevOps Engineer, CKA/CKAD, Hashicorp Terraform Associate, or CISSP. Education Bachelor’s/Master’s in Computer Science, Engineering, or related field. Why Join Us? Impact: Shape the future of banking technology in a secure, innovative environment. 
Growth: Access to cutting-edge tools, certifications, and global projects. Culture: Collaborative, inclusive workplace with a focus on work-life balance.
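
The listing above calls for secrets management with HashiCorp Vault or AWS Secrets Manager. As a hedged sketch (secret name and region are placeholders; assumes boto3 and AWS credentials in the environment), this is the basic pattern of pulling a database credential at startup instead of hard-coding it:

```python
# Hedged sketch: fetch a JSON-formatted secret from AWS Secrets Manager.
import json
import boto3

def get_db_credentials(secret_id: str, region: str = "ap-south-1") -> dict:
    """Return the secret payload (stored as JSON) as a Python dict."""
    client = boto3.client("secretsmanager", region_name=region)
    response = client.get_secret_value(SecretId=secret_id)
    return json.loads(response["SecretString"])

if __name__ == "__main__":
    creds = get_db_credentials("prod/core-banking/db")   # placeholder secret name
    # Never print the password itself; just confirm the expected keys are present.
    print("Fetched keys:", sorted(creds.keys()))
```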

Posted 8 hours ago

Apply

6.0 years

0 Lacs

Bengaluru East, Karnataka, India

On-site

Source: LinkedIn

Organization: At CommBank, we never lose sight of the role we play in other people’s financial wellbeing. Our focus is to help people and businesses move forward to progress. To make the right financial decisions and achieve their dreams, targets, and aspirations. Regardless of where you work within our organisation, your initiative, talent, ideas, and energy all contribute to the impact that we can make with our work. Together we can achieve great things. Job Title: Data Scientist Location: Bangalore Business & Team: BB Advanced Analytics and Artificial Intelligence COE Impact & contribution: As a Senior Data Scientist, you will be instrumental in pioneering Gen AI and multi-agentic systems at scale within CommBank. You will architect, build, and operationalize advanced generative AI solutions, leveraging large language models (LLMs), collaborative agentic frameworks, and state-of-the-art toolchains. You will drive innovation, helping set the organizational strategy for advanced AI, multi-agent collaboration, and responsible next-gen model deployment. Roles & Responsibilities: Gen AI Solution Development: Lead end-to-end development, fine-tuning, and evaluation of state-of-the-art LLMs and multi-modal generative models (e.g., transformers, GANs, VAEs, Diffusion Models) tailored for financial domains. Multi-Agentic System Engineering: Architect, implement, and optimize multi-agent systems, enabling swarms of AI agents (utilizing frameworks like LangChain, LangGraph, and MCP) to dynamically collaborate, chain, reason, critique, and autonomously execute tasks. LLM-Backed Application Design: Develop robust, scalable GenAI-powered APIs and agent workflows using FastAPI, Semantic Kernel, and orchestration tools. Integrate observability and evaluation using Langfuse for tracing, analytics, and prompt/response feedback loops. Guardrails & Responsible AI: Employ frameworks like Guardrails AI to enforce robust safety, compliance, and reliability in LLM deployments. Establish programmatic checks for prompt injections, hallucinations, and output boundaries. Enterprise-Grade Deployment: Productionize and manage at-scale Gen AI and agent systems with cloud infrastructure (GCP/AWS/Azure), utilizing model optimization (quantization, pruning, knowledge distillation) for latency/throughput trade-offs. Toolchain Innovation: Leverage and contribute to open source projects in the Gen AI ecosystem (e.g., LangChain, LangGraph, Semantic Kernel, Langfuse, Hugging Face, FastAPI). Continuously experiment with emerging frameworks and research. Stakeholder Collaboration: Partner with product, engineering, and business teams to define high-impact use cases for Gen AI and agentic automation; communicate actionable technical strategies and drive proof-of-value experiments into production. Mentorship & Thought Leadership: Guide junior team members in best practices for Gen AI, prompt engineering, agentic orchestration, responsible deployment, and continuous learning. Represent CommBank in the broader AI community through papers, patents, talks, and open-source. Essential Skills: 6+ years of hands-on experience in Machine Learning, Deep Learning, or Generative AI domains, including practical expertise with LLMs, multi-agent frameworks, and prompt engineering. Proficient in building and scaling multi-agent AI systems using LangChain, LangGraph, Semantic Kernel, MCP, or similar agentic orchestration tools.
Advanced experience developing and deploying Gen AI APIs using FastAPI; operational familiarity with Langfuse for LLM evaluation, tracing, and error analytics. Demonstrated ability to apply Guardrails to enforce model safety, explainability, and compliance in production environments. Experience with transformer architectures (BERT/GPT, etc.), fine-tuning LLMs, and model optimization (distillation/quantization/pruning). Strong software engineering background (Python), with experience in enterprise-grade codebases and cloud-native AI deployments. Experience integrating open and commercial LLM APIs and building retrieval-augmented generation (RAG) pipelines. Exposure to agent-based reinforcement learning, agent simulation, and swarm-based collaborative AI. Familiarity with robust experimentation using tools like LangSmith, GitHub Copilot, and experiment tracking systems. Proven track record of driving Gen AI innovation and adoption in cross-functional teams. Papers, patents, or open-source contributions to the Gen AI/LLM/Agentic AI ecosystem. Experience with financial services or regulated industries for secure and responsible deployment of AI. Education Qualifications: Bachelor’s or Master’s degree in Computer Science, Engineering, Information Technology. If you're already part of the Commonwealth Bank Group (including Bankwest, x15ventures), you'll need to apply through Sidekick to submit a valid application. We’re keen to support you with the next step in your career. We're aware of some accessibility issues on this site, particularly for screen reader users. We want to make finding your dream job as easy as possible, so if you require additional support please contact HR Direct on 1800 989 696. Advertising End Date: 01/07/2025
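
The role pairs FastAPI-served LLM endpoints with guardrail checks for prompt injection and output boundaries. Below is a simplified illustration (not CommBank's implementation): the `call_llm` stub and the keyword filter stand in for a real model client and a framework such as Guardrails AI.

```python
# Minimal FastAPI sketch of an LLM-backed endpoint with naive guardrail-style checks.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

BLOCKED_PATTERNS = ["ignore previous instructions", "system prompt"]  # toy input filter

class Query(BaseModel):
    question: str

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (OpenAI, Azure OpenAI, a local model, ...)."""
    return f"Stubbed answer to: {prompt}"

@app.post("/ask")
def ask(query: Query) -> dict:
    lowered = query.question.lower()
    # Input guardrail: reject obvious prompt-injection attempts.
    if any(p in lowered for p in BLOCKED_PATTERNS):
        raise HTTPException(status_code=400, detail="Query rejected by input guardrail")
    answer = call_llm(query.question)
    # Output guardrail: enforce a simple length boundary before returning.
    return {"answer": answer[:2000]}
```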

Posted 8 hours ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

Augnito is the next-gen Voice AI, powering the healthcare industry. We empower medical professionals and streamline clinical workflows with cloud-based, AI speech recognition that offers ergonomic data entry with 99%+ accuracy, without the need for voice profile training, from any device, anywhere. Augnito helps streamline clinical workflows, makes healthcare intelligence securely accessible, and ensures that physicians have more time to concentrate on their primary concern: patient care. Their solutions are currently in use at more than 500 hospitals, across more than 25 countries. We don't adhere to the traditional 9-to-5 work style; instead, we are a closely-knit group of dreamers, builders, and innovators firmly committed to pushing the boundaries of what's possible in healthcare. What You’ll Do Manage Cloud & Containerized Environments – Administer and optimize multi-cloud infrastructure, leveraging Docker and Kubernetes to ensure scalability, security, and high availability. Automate & Implement Infrastructure as Code (IaC) – Streamline on-premises setups through automation and adopt IaC (Terraform, CFT) for efficient provisioning and configuration management. Oversee & Troubleshoot On-Prem Infrastructure – Deploy, configure, and resolve issues across Windows & Linux environments, ensuring system stability and minimal downtime. Enhance Monitoring & Incident Response – Set up robust monitoring and alerting systems (Prometheus, Grafana) to improve observability and respond proactively to incidents. Drive Continuous Learning & Innovation – Expand expertise in cloud operations, automation, and DevOps while exploring new tools and best practices under mentorship. What You Bring Educational Background – Bachelor's/Master’s degree in Software Engineering, Computer Science, IT, or a related field. On-Prem & Cloud Expertise – 2+ years of experience managing Windows & Linux systems along with cloud infrastructure (AWS/GCP/Azure hands-on required). Containerization & Orchestration – Strong knowledge of Docker, Kubernetes, Terraform/CFT, Kops, and monitoring tools like Prometheus & Grafana. Automation & Scripting – Proficiency in Bash & Python, with experience in CI/CD pipelines (Jenkins, GitOps, etc.). Cloud & Infrastructure Services – Solid understanding of serverless architectures, cloud networking, storage, and automation across major cloud platforms Augnito India Pvt. Ltd. is an equal opportunities employer .We are committed to providing equal opportunities throughout employment including in the recruitment, training and development of employees (including promotion, transfers, assignments and beliefs). Augnito will not tolerate any act of discrimination in the workplace including but not limited to Gender, Gender identity, National or ethnic origins, Marital or Domestic Partnership status, Pregnancy Status, Carer’s responsibilities, Sexual orientation, Race, Color, Religious belief, Disability, Age, Any other grounds of discrimination. In order to provide equal employment and advancement opportunities to all individuals, employment decisions at Augnito will be based on merit, qualifications, and abilities. Our objective is to attract job applications and applications for development from the best possible candidates and to retain the best people

Posted 8 hours ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Agile Project Manager / SAFe / Scrum Master Experience: 5-10 years Qualifications: Mandatory Agile Certifications: Preferably SAFe Agile or Scrum.org (PSM I + PSM II) or Scrum Alliance (Certified Scrum Master). Mandatory Project Management Certification: PMP Education: Bachelor’s or Master’s degree in Software Engineering, Information Technology, MBA, or MCA or any other related Masters. Experience: 5-10 years of relevant Agile experience (Scrum Master, Agile Coach/Mentor) with at least 3-7 years of hands-on project management experience delivering technology solutions. Exceptional communication and stakeholder management skills. Job Description: We are seeking a skilled Agile Project Manager / SAFe / Scrum Master to join our team. The ideal candidate will have a strong background in managing IT and technology projects, focusing on delivering end-to-end applications. This role requires a certified Agile professional with extensive experience in Scrum and SAFe Agile frameworks, as well as project management certifications. Key Responsibilities: Agile Project Management: Manage multiple critical projects, requiring matrix management of activities across all functional areas. Plan and supervise activities for small and large-scale projects. Drive project planning activities, including Statement of Work, Stakeholder Identification, Risk & Issues Management, Communication Management, and regular status reporting. Set and manage program expectations, ensuring all functional areas are engaged. Create and maintain project schedules, identifying resource estimates, timelines, milestones, task dependencies, and critical paths. Track project performance in terms of Time, Cost, and Quality, evaluating progress, conducting status meetings, reporting to management, resolving issues, and maintaining documentation. Scrum Master Responsibilities: Partner with the Product Owner to prioritize work through the backlog and manage Scrum Artefacts. Ensure the Product and Sprint Backlogs are up-to-date and reflect the latest work status. Enable teams to achieve their objectives and deliver on KPIs. Define process metrics within the Scrum team to ensure seamless communication with stakeholders. Measure team progress using metrics like burn-down charts. Track dependencies with other Scrum Teams for seamless delivery. Encourage team members to self-organize by resolving potential blockers. Identify continuous improvement opportunities and best practices. Engage with team members to explore areas of improvement in Agile practices. Partner with Agile Coaches and Process heads to foster training requirements. Promote Agile practices and behaviours to attain process maturity. Provide thought leadership and constructive feedback to drive Agile maturity. Facilitate Agile ceremonies, including Sprint Planning, Daily Stand-ups, Sprint Reviews, Retrospectives, and Backlog Refinement. Work with senior leadership to embed Agile principles in day-to-day scenarios. Curate a culture of continuous improvement, transparency, and empowerment. Assist team members and stakeholders in adopting an Agile mindset. Collaborate with other leaders to drive organizational change. 
Skills Desired: Substantial experience working as part of Agile teams. Ability to embed and foster Agile ways of working at the team level. Proactively upskill the team in Agile practices. Identify opportunities for continuous improvement and share best practices. Communicate, influence, and negotiate with Product Owners and stakeholders. Navigate the organization to remove impediments impacting team progress. Analyse and refine existing processes. Coach and mentor team members to drive continuous improvement. Deep understanding of agile software delivery and operational aspects. Knowledge of Agile frameworks (DevOps, etc.). Experience with JIRA/Azure DevOps or similar software. Understanding of technology-enabled business transformation and delivering enterprise-level IT projects. Exceptional communication and stakeholder management skills. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 8 hours ago

Apply

3.5 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Source: LinkedIn

We're Hiring! Machine Learning Engineer for a B2B SaaS based startup in Noida. 🔹 Position: Machine Learning Engineer 🔹 Experience: 3.5 Years to 5 Years 🔹 Location: Noida, Sector 90 🔹 Work Mode: 5 Days | Work From Office 🔹 Notice Period: Immediate to 30 Days Key Responsibilities Design, develop, and optimize machine learning models for various business applications. Build and maintain scalable AI feature pipelines for efficient data processing and model training. Develop robust data ingestion, transformation, and storage solutions for big data. Implement and optimize ML workflows, ensuring scalability and efficiency. Monitor and maintain deployed models, ensuring performance, reliability, and retraining when necessary. Qualifications and Experience Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field. 3.5+ years of experience in machine learning, deep learning, or data science roles. Proficiency in Python and ML frameworks/tools such as PyTorch, Langchain Experience with data processing frameworks like Spark, Dask, Airflow, and Dagster Hands-on experience with cloud platforms (AWS, GCP, Azure) and ML services. Experience with MLOps tools like MLflow, Kubeflow Familiarity with containerisation and orchestration tools like Docker and Kubernetes. Excellent problem-solving skills and ability to work in a fast-paced environment. Strong communication and collaboration skills. If you’re ready to take on a leadership role and thrive in a dynamic startup environment, share your profile at gautam@mounttalent.com.
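
The posting lists MLOps tools such as MLflow for monitoring and retraining deployed models. As a hedged sketch (toy dataset, assumes mlflow and scikit-learn are installed), this is the basic experiment-tracking loop it implies:

```python
# Hedged sketch: log parameters, a metric, and the trained model to MLflow.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

with mlflow.start_run(run_name="baseline-logreg"):
    params = {"C": 1.0, "max_iter": 200}
    mlflow.log_params(params)

    model = LogisticRegression(**params).fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))

    mlflow.log_metric("accuracy", acc)
    mlflow.sklearn.log_model(model, "model")   # stored under the run's artifacts
    print(f"logged run with accuracy={acc:.3f}")
```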

Posted 8 hours ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Key Responsibilities • Build and optimize ETL/ELT pipelines using Databricks and ADF , ingesting data from diverse sources including APIs, flat files, and operational databases. • Develop and maintain scalable PySpark jobs for batch and incremental data processing across Bronze, Silver, and Gold layers. • Write clean, production-ready Python code for data processing, orchestration, and integration tasks. • Contribute to the medallion architecture design and help implement data governance patterns across data layers. • Collaborate with analytics, data science, and business teams to design pipelines that meet performance and data quality expectations. • Monitor, troubleshoot, and continuously improve pipeline performance and reliability. • Support CI/CD for data workflows using Git , Databricks Repos , and optionally Terraform for infrastructure-as-code. • Document pipeline logic, data sources, schema transformations, and operational playbooks. ⸻ Required Qualifications • 3–5 years of experience in data engineering roles with increasing scope and complexity. • Strong hands-on experience with Databricks , including Spark, Delta Lake, and SQL-based transformations. • Proficiency in PySpark and Python for large-scale data manipulation and pipeline development. • Hands-on experience with Azure Data Factory for orchestrating data workflows and integrating with Azure services. • Solid understanding of data modeling concepts and modern warehousing principles (e.g., star schema, slowly changing dimensions). • Comfortable with Git-based development workflows and collaborative coding practices. ⸻ Preferred / Bonus Qualifications • Experience with Terraform to manage infrastructure such as Databricks workspaces, ADF pipelines, or storage resources. • Familiarity with Unity Catalog , Databricks Asset Bundles (DAB) , or Delta Live Tables (DLT) . • Experience with Azure DevOps or GitHub Actions for CI/CD in a data environment. • Knowledge of data governance , role-based access control , or data quality frameworks . • Exposure to real-time ingestion using tools like Event Hubs , Azure Functions , or Autoloader .
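
The responsibilities describe Bronze/Silver/Gold (medallion) processing in Databricks with PySpark and Delta Lake. Below is a minimal sketch of one Bronze-to-Silver step; the paths, key column, and schema are assumed placeholders, not the employer's pipeline.

```python
# Hedged sketch: read raw Delta, deduplicate, standardize types, write the curated layer.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("bronze-to-silver").getOrCreate()

bronze = spark.read.format("delta").load("/mnt/lake/bronze/orders")   # placeholder path

silver = (
    bronze
    .dropDuplicates(["order_id"])                                     # placeholder key
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .filter(F.col("order_id").isNotNull())
)

(
    silver.write
    .format("delta")
    .mode("overwrite")
    .option("overwriteSchema", "true")
    .save("/mnt/lake/silver/orders")                                  # placeholder path
)
```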

Posted 8 hours ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Equifax is seeking creative, high-energy and driven software engineers with hands-on development skills to work on a variety of meaningful projects. Our software engineering positions provide you the opportunity to join a team of talented engineers working with leading-edge technology. You are ideal for this position if you are a forward-thinking, committed, and enthusiastic software engineer who is passionate about technology. What You’ll Do Design, develop, and operate high scale applications across the full engineering stack Design, develop, test, deploy, maintain, and improve software. Apply modern software development practices (serverless computing, microservices architecture, CI/CD, infrastructure-as-code, etc.) Work across teams to integrate our systems with existing internal systems, Data Fabric, CSA Toolset. Participate in technology roadmap and architecture discussions to turn business requirements and vision into reality. Participate in a tight-knit, globally distributed engineering team. Triage product or system issues and debug/track/resolve by analyzing the sources of issues and the impact on network, or service operations and quality. Manage sole project priorities, deadlines, and deliverables. Research, create, and develop software applications to extend and improve on Equifax Solutions Collaborate on scalability issues involving access to data and information. Actively participate in Sprint planning, Sprint Retrospectives, and other team activity What Experience You Need Bachelor's degree or equivalent experience 5+ years of software engineering experience 5+ years experience writing, debugging, and troubleshooting code in mainstream Java, SpringBoot, TypeScript/JavaScript, HTML, CSS 5+ years experience with Cloud technology: GCP, AWS, or Azure 5+ years experience designing and developing cloud-native solutions 5+ years experience designing and developing microservices using Java, SpringBoot, GCP SDKs, GKE/Kubernetes 5+ years experience deploying and releasing software using Jenkins CI/CD pipelines, understand infrastructure-as-code concepts, Helm Charts, and Terraform constructs What could set you apart Self-starter that identifies/responds to priority shifts with minimal supervision. Experience designing and developing big data processing solutions using Dataflow/Apache Beam, Bigtable, BigQuery, PubSub, GCS, Composer/Airflow, and others UI development (e.g. HTML, JavaScript, Angular and Bootstrap) Experience with backend technologies such as JAVA/J2EE, SpringBoot, SOA and Microservices Source code control management systems (e.g. SVN/Git, Github) and build tools like Maven & Gradle. Agile environments (e.g. Scrum, XP) Relational databases (e.g. SQL Server, MySQL) Atlassian tooling (e.g. JIRA, Confluence, and Github) Developing with modern JDK (v1.7+) Automated Testing: JUnit, Selenium, LoadRunner, SoapUI

Posted 8 hours ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

About The Role We are seeking a skilled DevOps Engineer to join our team and drive automation, scalability, and efficiency in our development and deployment processes. The ideal candidate will have strong experience in cloud infrastructure, CI/CD pipelines, and monitoring solutions to ensure seamless operations across our technology stack. Key Responsibilities Design, implement, and maintain CI/CD pipelines to streamline development workflows. Automate infrastructure provisioning, configuration, and deployment using tools like Terraform, Ansible, or CloudFormation. Monitor system performance, troubleshoot issues, and optimize reliability and scalability. Manage cloud services (AWS, Azure, GCP) and ensure best practices for security, cost efficiency, and availability. Implement and maintain containerization and orchestration solutions (Docker, Kubernetes). Collaborate with development, security, and operations teams to enhance system resilience and security. Set up and manage logging, monitoring, and alerting solutions using Prometheus, Grafana, ELK, or similar tools. Improve incident management, disaster recovery, and fault tolerance strategies. Stay up to date with emerging DevOps trends and recommend process improvements. Requirements 2+ years of experience in DevOps, Site Reliability Engineering (SRE), or Infrastructure Automation. Proficiency in cloud platforms (AWS, Azure, GCP) and infrastructure-as-code (IaC) tools. Strong knowledge of containerization (Docker) and orchestration (Kubernetes). Hands-on experience with CI/CD tools such as Jenkins, GitLab CI, GitHub Actions, or CircleCI. Experience with scripting languages like Bash, Python, or Go for automation. Understanding of networking, security best practices, and system administration. Familiarity with monitoring and logging tools (Prometheus, Grafana, Splunk, ELK). Strong problem-solving skills and ability to work in fast-paced environments. Skills: cd,grafana,kubernetes,docker,jenkins,prometheus,containerization,automation,security,devops,cloud infrastructure,gitlab ci,aws,cloud,ci/cd pipelines,bash,system administration,python,networking,github actions,elk,security best practices,terraform,cloudformation,ansible,go,infrastructure,azure,gcp,monitoring solutions,ci,circleci

Posted 8 hours ago

Apply

1.0 - 4.0 years

0 Lacs

Dehradun, Uttarakhand, India

Remote

Source: LinkedIn

Position: Data Engineer Desired Experience: 1-4 years Job location: Dehradun You will play a key role in collaborating with partners and senior client stakeholders to design and implement advanced big data and analytics solutions. Strong communication, organizational skills, and a problem-solving mindset are essential for this position. What is in it for you: · Work alongside a world-class team of business consultants and engineers to solve complex business challenges using data and analytics. · Accelerate your career in a fast-paced, entrepreneurial work environment. · Receive an industry- leading remuneration package. Desirable Skills: · Proficiency in any of the emerging Big Data technologies: Python, Spark, Hadoop, Clojure, Git, SQL, Databricks, and visualization tools like Tableau and Power BI. · Familiarity with cloud platforms, containerization, and microservice architectures will be a plus. · Hands-on experience in data modeling, query optimization, and complexity analysis will be preferred. · Understanding of agile methodologies (e.g., Scrum) and prior experience in agile work environments will be a plus. · Ability to collaborate with development teams and product owners to gather and interpret requirements. · Certifications in any of the above-mentioned will be a plus. Your duties will include: · Develop data solutions in Big Data environments, particularly on Azure or other cloud platforms. · Work with diverse datasets to support Data Science and Analytics teams. · Design and build data architectures using tools such as Azure Data Factory, Databricks, Data Lake, and Synapse. · Collaborate with the CTO, Product Owners, and Operations teams to develop engineering roadmaps, including upgrades, technical refreshes, and new implementations. · Perform data mapping activities to define source data, target data, and the necessary transformations. · Support the Data Analytics team in creating KPIs and reports using tools like Power BI and Tableau. · Execute data integration, transformation, and modeling tasks. · Maintain comprehensive documentation and knowledge bases. · Research and recommend new database products, services, and protocols. Essential Personal Traits: · You should be able to work independently and communicate effectively with remote teams. · Timely communication/escalation of issues/dependencies to higher management. · Curiosity to learn and apply emerging technologies to solve business problems · A strong willingness and eagerness to learn, adapt, and continuously improve by exploring new tools, technologies, and methodologies in the ever-evolving data engineering landscape.

Posted 8 hours ago

Apply

8.0 - 10.0 years

0 Lacs

Greater Hyderabad Area

On-site

Source: LinkedIn

Area(s) of responsibility About Birlasoft: Birlasoft, a global leader at the forefront of Cloud, AI, and Digital technologies, seamlessly blends domain expertise with enterprise solutions. The company’s consultative and design-thinking approach empowers societies worldwide, enhancing the efficiency and productivity of businesses. As part of the multibillion-dollar diversified CKA Birla Group, Birlasoft, with its 12,000+ professionals, is committed to continuing the Group’s 170-year heritage of building sustainable communities. About the job: You will be responsible for designing, developing, and maintaining integration solutions using Microsoft Azure Functions, API Management, Azure Logic Apps, and Service Bus. Title – Azure Integration Senior Developer. Notice Period – 0-30 days. Experience Required – 8-10 years. Location – Noida. Responsibilities of an Azure Integration Senior Developer: 5+ years of experience in software development and integration. Strong knowledge of Microsoft Azure, including Azure Integration Services, Azure Logic Apps, Azure Functions, and Azure Service Bus. Experience with cloud computing concepts and technologies. Experience with enterprise integration patterns and best practices. Excellent analytical and problem-solving skills. Strong communication and interpersonal skills. Mandatory skills: Microsoft Azure, Azure Integration Services, Azure Logic Apps, Azure Functions, Service Bus.
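
Azure Service Bus is the messaging backbone this integration role works with. As a hedged sketch (connection-string environment variable and queue name are placeholders; assumes the azure-servicebus package), this is the basic publish step a Logic App or Azure Function would then consume downstream:

```python
# Hedged sketch: publish a JSON event to an Azure Service Bus queue.
import json
import os
from azure.servicebus import ServiceBusClient, ServiceBusMessage

conn_str = os.environ["SERVICEBUS_CONNECTION_STRING"]   # placeholder env var
payload = {"order_id": "12345", "status": "CREATED"}

with ServiceBusClient.from_connection_string(conn_str) as client:
    sender = client.get_queue_sender(queue_name="orders")   # placeholder queue
    with sender:
        sender.send_messages(ServiceBusMessage(json.dumps(payload)))
        print("Message published to 'orders' queue")
```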

Posted 8 hours ago

Apply

15.0 - 25.0 years

0 Lacs

Greater Hyderabad Area

On-site

Source: LinkedIn

Area(s) of responsibility About Us Birlasoft, a global leader at the forefront of Cloud, AI, and Digital technologies, seamlessly blends domain expertise with enterprise solutions. The company’s consultative and design-thinking approach empowers societies worldwide, enhancing the efficiency and productivity of businesses. As part of the multibillion-dollar diversified CKA Birla Group, Birlasoft with its 12,000+ professionals, is committed to continuing the Group’s 170-year heritage of building sustainable communities. Job Summary The Windows and VMware Architect is responsible for the design, implementation, administration, and support of enterprise-grade Microsoft Windows Server and VMware environments. This role plays a critical part in ensuring infrastructure stability, performance, and scalability, with a strong focus on migration projects, virtualization, and automation. ________________________________________ Job Description Windows Server Architecture & Design Architect and oversee the deployment, configuration, and lifecycle management of Windows Server environments (2012–2022). Design and lead in-place and parallel upgrade strategies to minimize downtime and risk. Define standards for Active Directory, DNS, DHCP, Group Policy, and system hardening. Architect and implement Windows Server Clustering for high availability of application and database workloads. Establish performance baselines and ensure system reliability through proactive monitoring and tuning. Define patching, backup, and security policies aligned with enterprise standards. VMware Infrastructure Strategy Architect and manage enterprise-grade VMware environments including vSphere, ESXi, vCenter, NSX, and SRM. Design and optimize HA, DRS, vMotion, and Storage vMotion configurations for performance and availability. Lead VMware infrastructure upgrades, patching cycles, and capacity planning. Provide L4-L5-level support and root cause analysis for complex virtualization issues. Infrastructure Modernization & Migration Lead end-to-end planning and execution of legacy system migrations, hardware refreshes, and data center builds. Design and execute P2V and V2V migrations using tools like VMware Converter and PlateSpin. Collaborate on cloud migration strategies (Azure, AWS, hybrid models) and integration with on-prem infrastructure. Business Continuity, Security & Automation Define and implement backup and disaster recovery architectures. Ensure compliance with regulatory and security frameworks (PCI-DSS, ISO, DISA STIGs). Collaborate with InfoSec teams to apply baselines, perform vulnerability remediation, and enforce access controls. Develop and maintain automation scripts using PowerShell and PowerCLI to streamline operations. Documentation, Governance & Collaboration Produce and maintain high-level and low-level design documents, runbooks, and operational procedures. Participate in architectural reviews, change advisory boards, and incident response planning. Act as a technical liaison between infrastructure, application, network, and database teams. ________________________________________ Qualifications Bachelor’s degree in computer science, Information Technology, or a related field. 15–25 years of experience in enterprise Windows Server and VMware environments. Proven track record in infrastructure architecture, modernization, and migration projects. Strong scripting and automation skills (PowerShell, PowerCLI). Preferred Certifications: VMware VCP-DCV / VCAP-DCV, Microsoft MCSE / Azure Architect, ITIL Foundation

Posted 8 hours ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Area(s) of responsibility Job Title: Sr Architect 6B, Java AWS Project Overview Identity and Access Management is a cybersecurity discipline focused on managing user identities and access permissions on the network. Contractor’s Role Responsible for major and minor enhancements; evaluates and makes recommendations on techniques, practices, or technologies that would address business needs, and also works to ensure continuous process improvement. The individual needs to have a good understanding of application topography and the ability to collaborate between all parties involved to maintain a stable application environment. Education: Bachelor's in engineering or equivalent. Experience Level: 10+ years. Qualifications Hands-on experience in developing Java Spring Boot microservices Strong experience in designing and implementing web service architecture patterns. Strong experience in designing and implementing microservice patterns Strong experience in front-end development and design using React/JS Strong data modeling skills for web service payloads Experience in logging and workflow processing in web services (nice to have using Spring technologies) Working experience of cloud services for BI and data-driven solutions like NoSQL and related next-generation data modeling approaches Experience in cloud data storage retrieval, integration, and distribution for accessing on/off-premises, hybrid, and cloud-based web solutions Experience in designing, building, and supporting non-functional service standards and guidelines such as security, performance tuning, configuration management, data quality, and code quality Experience with Azure and AWS cloud platforms is a plus Experience in Azure DevOps Experience in streaming technologies such as Kafka Knowledge of SSDLC, CI/CD pipelines, and cybersecurity engineering 5+/10 – on SQL/Stored Procedures skillset Nice to have: Experience in Dev Test Labs from Azure, Python programming Must Have Skills Minimum 8 years of experience Java Web Services REST Web Services, JSON, SQL, Stored Procedures, JavaScript Nice to Haves: NoSQL, Power BI, Kafka, SOAP UI, React JS, Azure Tasks & Responsibilities Advanced knowledge in designing and developing web services (microservices) compliant with non-functional enterprise requirements, such as security and performance guidelines Collaborate with business, interfacing application development teams, and front-end app developers to ensure data strategies, data flows, and database model development and utilization match service needs Knowledge of fine-tuning caching and access-path optimization for data retrieval and performance needs. Development and maintenance of Identity and Access Management technology systems/products is a plus

Posted 8 hours ago

Apply

0.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

This role is for one of Weekday's clients Min Experience: 0 years Location: Bengaluru JobType: full-time Requirements About the Role: We are seeking a motivated and passionate Machine Learning Engineer to join our growing AI/ML team. This is an excellent opportunity for recent graduates or early-career professionals to work on real-world applications of Machine Learning (ML) , Artificial Intelligence (AI) , and Natural Language Processing (NLP) technologies. You'll be part of a team that builds intelligent systems to solve complex problems and deliver value to our customers across domains. If you're a self-starter who's eager to apply theoretical knowledge into practice, experiment with state-of-the-art tools like TensorFlow , and work collaboratively on challenging problems in NLP and AI, this role is for you. Key Responsibilities: Assist in the development, training, and evaluation of machine learning models using Python and libraries such as TensorFlow and Scikit-learn. Support data collection, preprocessing, and transformation tasks to build robust ML pipelines. Collaborate with data scientists, software engineers, and product teams to understand problem requirements and deliver ML-based solutions. Work on natural language processing tasks such as text classification, sentiment analysis, named entity recognition, and language modeling. Implement ML models into production environments and monitor model performance. Conduct research and stay updated with the latest developments in AI/ML and NLP. Optimize and tune models for accuracy, speed, and scalability. Prepare documentation, reports, and presentations to communicate results and findings to stakeholders. Required Skills: Basic understanding of machine learning concepts such as supervised/unsupervised learning, classification, regression, and clustering. Familiarity with Python programming and common ML libraries like TensorFlow, Keras, Scikit-learn, and Pandas. Exposure to natural language processing (NLP) tasks and basic algorithms. Understanding of data preprocessing, feature engineering, and model evaluation techniques. Enthusiasm to learn and work with deep learning models and frameworks. Strong analytical thinking and problem-solving skills. Good communication skills and the ability to work in a collaborative environment. Nice to Have: Internship or project experience in ML, AI, or NLP. Knowledge of neural networks, RNNs, CNNs, or transformer models like BERT or GPT. Experience working with large datasets or on cloud platforms such as AWS, GCP, or Azure. Understanding of version control (Git) and deployment tools (Docker, MLflow, etc.). Participation in ML competitions (e.g., Kaggle) or open-source contributions. Educational Qualification: Bachelor's or Master's degree in Computer Science, Data Science, Artificial Intelligence, Statistics, or a related field.
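
The responsibilities include NLP tasks such as text classification with scikit-learn. A small illustrative sketch (toy labeled data, not from the posting) of that kind of pipeline:

```python
# Hedged sketch: TF-IDF features plus logistic regression for text classification.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

train_texts = [
    "great product, works perfectly",
    "terrible support, waste of money",
    "really happy with the purchase",
    "broke after two days, very disappointed",
]
train_labels = ["positive", "negative", "positive", "negative"]

clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),   # unigrams + bigrams
    ("model", LogisticRegression(max_iter=1000)),
])
clf.fit(train_texts, train_labels)

print(clf.predict(["support was terrible", "works great"]))
# With real training data, the predictions would be 'negative' and 'positive'.
```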

Posted 8 hours ago

Apply

7.0 - 10.0 years

8 - 12 Lacs

Gurugram

Work from Office

Source: Naukri

Hiring a Senior GenAI Engineer with 7–12 years of experience in Python, Machine Learning, and Large Language Models (LLMs) for a 6-month engagement based in Gurugram. This hands-on role involves building intelligent systems using Langchain and RAG, developing agent workflows, and defining technical roadmaps. The ideal candidate will be proficient in LLM architecture, prompt engineering, vector databases, and cloud platforms (AWS, Azure, GCP). The position demands strong collaboration skills, a system design mindset, and a focus on production-grade AI/ML solutions.
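
The role centres on RAG with vector databases. Below is a framework-free sketch of just the retrieval step (an assumption, not the client's stack): TF-IDF similarity stands in for a real embedding model and vector database, and `call_llm` is a stub.

```python
# Hedged sketch: retrieve the most relevant documents and stuff them into the prompt.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Refunds are processed within 5 business days of approval.",
    "Premium accounts include 24/7 phone support.",
    "The mobile app supports fingerprint and face login.",
]

vectorizer = TfidfVectorizer().fit(documents)
doc_vectors = vectorizer.transform(documents)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the question."""
    scores = cosine_similarity(vectorizer.transform([question]), doc_vectors)[0]
    ranked = sorted(range(len(documents)), key=lambda i: scores[i], reverse=True)
    return [documents[i] for i in ranked[:k]]

def call_llm(prompt: str) -> str:
    return f"[stubbed LLM answer for prompt of {len(prompt)} chars]"

question = "How long do refunds take?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(call_llm(prompt))
```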

Posted 8 hours ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Area(s) of responsibility Coding and Development: Write clean, efficient, and maintainable code for GenAI applications using Python, open-source frameworks, and agent frameworks such as AutoGen and crew.ai. Fine-Tuning Models: Fine-tune LLMs and SLMs using techniques like PEFT, LoRA, and QLoRA for specific use cases. Open-Source Frameworks: Work with frameworks like Hugging Face, LangChain, LlamaIndex, and others to build GenAI solutions. Cloud Tools Integration: Use cloud platforms (Azure, GCP, AWS) to deploy and manage GenAI models and applications. Prototyping: Quickly prototype and demonstrate GenAI applications to showcase capabilities and gather feedback. Data Preprocessing: Build and maintain data preprocessing pipelines for training and fine-tuning models. API Integration: Integrate REST, SOAP, and other APIs for data ingestion, processing, and output delivery. Model Evaluation: Evaluate model performance using metrics and benchmarks, and iterate to improve results.
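
The fine-tuning responsibility names PEFT/LoRA with Hugging Face. As a hedged sketch (the base model and hyperparameters are illustrative, not from the listing; assumes the transformers and peft packages), this is how a LoRA adapter is attached to a small causal LM:

```python
# Hedged sketch: wrap a base model with a LoRA adapter so only adapter weights train.
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

base_name = "gpt2"   # small placeholder model; a real project would pick a larger LLM/SLM
tokenizer = AutoTokenizer.from_pretrained(base_name)
model = AutoModelForCausalLM.from_pretrained(base_name)

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                        # low-rank adapter dimension
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["c_attn"],  # GPT-2's fused attention projection
)

peft_model = get_peft_model(model, lora_config)
# Only the small adapter matrices are trainable; the base weights stay frozen.
peft_model.print_trainable_parameters()
# From here, peft_model plugs into a normal transformers Trainer / training loop.
```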

Posted 8 hours ago

Apply

0 years

0 Lacs

Greater Hyderabad Area

On-site

Source: LinkedIn

Area(s) of responsibility In-depth knowledge of SQL Server. Experience in designing and tuning database tables, views, stored procedures, user-defined functions, and triggers using SQL Server. Expertise in monitoring and addressing database server performance issues, including running SQL Profiler, identifying long-running SQL queries, and advising development teams on performance improvements. Proficient in creating and maintaining SQL Server jobs. Experience in effectively building data transformations with SSIS, including importing data from files as well as moving data between database platforms. Experience in developing client/server-based applications using C#. Experience working with .NET Framework 4.5, 4.0, 3.5, 3.0, and 2.0. Good knowledge of Web API and SOA services. Good knowledge of Azure (Azure Functions, Azure Service Bus). Good to have: Angular, React JS, or Vue JS.

Posted 8 hours ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

About the Role We are looking for a highly experienced Sr. Linux Engineer to join our Managed Services team at SHI | LOCUZ. The ideal candidate will have deep expertise in Linux systems (RedHat, Ubuntu, Debian), with strong troubleshooting skills and a passion for building and maintaining highly available and secure enterprise infrastructure environments. Responsibilities Handle L3 escalations and client-facing incidents Design, manage, and support Linux-based infrastructure environments Provide consultative and technical leadership in managing complex IT issues Automate operational tasks using scripting (Shell, Python, etc.) Implement system security, compliance, and governance best practices Monitor system performance, conduct root cause analysis (RCA), and ensure 24x7 availability Triage technical issues and recommend effective solutions Ensure change management processes are followed with minimal disruption Represent the team in major incident reviews and vendor escalations Provide mentorship to junior engineers and foster team development Work closely with application vendors and cross-functional IT teams Evaluate and onboard new technologies and tools for automation and efficiency Interface with ITSM platforms (e.g., ServiceNow, Autotask) for issue tracking Qualifications 8+ years of experience in Linux system engineering (RedHat/Ubuntu/Debian) Hands-on experience in Managed Services environments (L3 support level) Strong exposure to cloud platforms like AWS and/or Azure Experience in automation and scripting (Shell, Bash, Python, Ansible, etc.) Excellent troubleshooting and analytical skills Familiarity with monitoring tools and ticketing systems Strong understanding of system security and compliance frameworks Preferred Skills RHCE / RHCSA AWS SysOps Administrator / Azure Administrator ITIL Foundation
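
The role asks for automating operational tasks with Shell or Python and proactive monitoring. A minimal hedged sketch (thresholds and mount list are placeholders, not SHI | LOCUZ policy) of that kind of small operational check:

```python
#!/usr/bin/env python3
"""Hedged sketch: flag filesystems that are close to full before they cause an incident."""
import shutil

MOUNTS = ["/", "/var", "/home"]     # placeholder mount points
WARN_PCT = 80
CRIT_PCT = 90

def usage_pct(path: str) -> float:
    total, used, _free = shutil.disk_usage(path)
    return used / total * 100

if __name__ == "__main__":
    for mount in MOUNTS:
        try:
            pct = usage_pct(mount)
        except FileNotFoundError:
            print(f"{mount}: not mounted, skipping")
            continue
        level = "CRIT" if pct >= CRIT_PCT else "WARN" if pct >= WARN_PCT else "OK"
        print(f"{mount}: {pct:.1f}% used [{level}]")
```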

Posted 8 hours ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

About The Role Grade Level (for internal use): 11 The Team Service Management is a global team that provides specialized technical support across the suite of trade processing and workflow solutions that support all participants in the Data & Research group. The Service Management team works collaboratively, both internally and across our customer base, operating in a sharing and learning culture with a view to build continuous improvement in our processes. Impact We are seeking an experienced Service Management professional with a minimum of 10 years' work experience to join the team in India. The role encompasses 2nd line technical application support & Cloud Infrastructure Management for our Issuer Solutions Platforms within the Data & Research group of Market Intelligence. This person will report directly to the Global Manager responsible for application support and will work closely with the global team contributing to the quality of our support. Key Management Responsibilities Partner with functional areas within Technology such as Architecture and Engineering, Business Systems and Service Delivery (1st and 2nd line) to ensure Global Technology provides efficient and effective IT services and support to our clients. Building a culture of collaboration, repeatable quality processes with cost efficiency, and dedication to improving quality of services delivered through strong working relationships with various stakeholders. Drive Major Incidents from fault logging to resolution and follow up Root Cause Analysis. Accountability for service reviews with business and other technology partners looking for area where services can be improved. Responsible for all aspects of the team's training, management, appraisals and all aspects of recruitment. Implement and enhance robust observability frameworks to monitor system health, performance metrics, and logging across multiple platforms, ensuring high availability and proactive issue detection. Manage disaster recovery strategies and incident response plans, conducting regular drills to ensure team readiness and system resilience. Provide mentorship and technical leadership to junior SREs and other engineering teams, sharing knowledge and promoting SRE best practices across the organization. Duties & Accountabilities The candidate should handle all support requests; incident, problem and change management, and business continuity activities, to ensure flawless and quality delivery of services to end users. This is a critical role requiring a highly dedicated individual who can take ownership and provide procedural and technical support to various teams and internal/external stakeholders. Provide second line client-facing technical support for issues escalated by first line support teams. Apply strong technical skills and good business knowledge together with investigative techniques and problem-solving skills to identify and resolve issues efficiently and in a timely manner. Work collaboratively with development team required for third line escalation. Coordinate with product and delivery teams to ensure the Service Management team is ready for new releases and engaged in early design of new enhancements. Work on initiatives and continuous improvement process around proactive application health monitoring, reporting, and technical support. Key Areas Of The Teams Responsibilities Are Proactive monitoring and management of business critical 24x7 real-time. Where required to rectify issues in a timely fashion to restore application functionality. 
Ensure incidents are correctly processed, assessing business and technical impact and severity.
Take ownership of application incidents and ensure they are resolved; this includes retaining ownership of incidents that require third-line or IT change activity to resolve, and ensuring that communication to the business community remains active.
Application responsibilities will cover application infrastructure, data fixes, user queries, user education and incident investigation.
Monitor application event alerts, job schedules, capacity monitors and performance KPIs.
Create and own change requests raised to address any of the above issues.
Work with the Functional and Technical teams to understand future application deliverables.
Proactively share knowledge with the team and update the knowledge base with support documentation (Confluence).
Work to provide services to agreed Service Level Targets and Operating Level Agreements.
Education And Hands-On Experience Required
University graduate with a Computer Science or Engineering degree.
8-13 years of direct experience in Site Reliability Engineering or DevOps roles, including experience implementing disaster recovery, high availability, and incident response in AWS, Azure or GCP.
Minimum of 5 years of direct managerial experience, preferably of global teams across multiple time zones.
Proficiency with cloud computing environments (AWS/GCP/Azure).
Good understanding of application support processes.
Ideally familiar with monitoring tools such as Splunk, CloudWatch, Dotcom and Monolith.
Expertise in SQL Server/PostgreSQL: proficiency in advanced SQL techniques, query optimization, and experience with complex database systems.
Experience with advanced observability tools (e.g., Prometheus, Grafana, Splunk, DataDog) for monitoring, logging, and tracing; a minimal monitoring sketch appears at the end of this listing.
Experience in leading post-mortem analyses and implementing preventative measures to avoid recurrence of incidents.
Excellent problem-solving skills and the capacity to lead effectively under pressure during incident response and outage management.
Must understand operating systems, especially Windows and Linux.
Good scripting experience (preferably including Python) is an advantage.
Must be knowledgeable in programming languages and the SDLC, with experience raising development bugs, including priority assessment, high-quality analysis, and detailed investigation.
Understanding of agile methodology is an advantage.
Ideally has experience working in the finance industry and/or experience with S&P Global products.
About S&P Global Market Intelligence
At S&P Global Market Intelligence, a division of S&P Global, we understand the importance of accurate, deep and insightful information. Our team of experts delivers unrivaled insights and leading data and technology solutions, partnering with customers to expand their perspective, operate with confidence, and make decisions with conviction. For more information, visit www.spglobal.com/marketintelligence.
What’s In It For You?
Our Purpose
Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology – the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow.
At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.
Our People
We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all: from finding new ways to measure sustainability, to analyzing energy transition across the supply chain, to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in.
We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.
Our Values
Integrity, Discovery, Partnership
At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.
Benefits
We take care of you, so you can take care of business. We care about our people. That's why we provide everything you, and your career, need to thrive at S&P Global.
Our Benefits Include
Health & Wellness: Health care coverage designed for the mind and body.
Flexible Downtime: Generous time off helps keep you energized for your time on.
Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.
For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries
Global Hiring And Opportunity At S&P Global
At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.
Recruitment Fraud Alert
If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here.
Equal Opportunity Employer
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment.
If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.
US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf
103 - Middle Management (EEO Job Group) (inactive), 10 - Officials or Managers (EEO-2 Job Categories-United States of America), IFTECH103.2 - Middle Management Tier II (EEO Job Group)
Job ID: 316135
Posted On: 2025-06-30
Location: Hyderabad, Telangana, India
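
The observability and proactive application health-monitoring duties described in this listing can be illustrated with a small example. The following is a minimal sketch only, assuming a Prometheus-style metrics stack via the prometheus_client Python library; the application names, URLs, port, and probe interval are hypothetical placeholders rather than details from the posting.

# Hypothetical health-check poller: application names, URLs, and thresholds are illustrative only.
import logging
import time

import requests
from prometheus_client import Counter, Gauge, start_http_server

# Metrics exposed for scraping by a Prometheus-style observability stack.
UP = Gauge("app_up", "1 if the application health endpoint responded OK, else 0", ["app"])
LATENCY = Gauge("app_health_latency_seconds", "Latency of the last health probe", ["app"])
FAILURES = Counter("app_health_failures_total", "Total failed health probes", ["app"])

# Endpoints to probe; in practice these would come from configuration, not hard-coded values.
ENDPOINTS = {
    "example-web": "https://example.internal/health",
    "example-api": "https://example.internal/api/health",
}

def probe(name: str, url: str) -> None:
    """Probe one endpoint, record metrics, and log a warning on failure."""
    start = time.monotonic()
    try:
        resp = requests.get(url, timeout=5)
        LATENCY.labels(app=name).set(time.monotonic() - start)
        ok = resp.status_code == 200
    except requests.RequestException:
        ok = False
    UP.labels(app=name).set(1 if ok else 0)
    if not ok:
        FAILURES.labels(app=name).inc()
        logging.warning("health probe failed for %s (%s)", name, url)

if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    start_http_server(9102)          # expose /metrics for the monitoring stack
    while True:
        for app_name, health_url in ENDPOINTS.items():
            probe(app_name, health_url)
        time.sleep(30)               # probe every 30 seconds

In a setup like this, alert rules in Grafana or Prometheus would typically fire on app_up staying at 0, turning the probe into the kind of proactive issue detection the role calls for.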

Posted 8 hours ago

Apply

10.0 years

0 Lacs

Greater Hyderabad Area

On-site

Linkedin logo

About Birlasoft
Birlasoft, a global leader at the forefront of Cloud, AI, and Digital technologies, seamlessly blends domain expertise with enterprise solutions. The company’s consultative and design-thinking approach empowers societies worldwide, enhancing the efficiency and productivity of businesses. As part of the multibillion-dollar diversified CKA Birla Group, Birlasoft, with its 12,000+ professionals, is committed to continuing the Group’s 170-year heritage of building sustainable communities.
About the Job
We are looking for an SAP BASIS Senior Lead Consultant.
Educational Background: Any Graduate.
Experience: 10+ years.
Location: Noida
Job Description
SAP BASIS technical experience of 10-12 years.
Good hands-on installation experience in OS/DB migration (Oracle to HANA), SAP ECC, SAP S/4HANA, SAP Solution Manager, SAP GRC, SAP BW, SAP BW/4HANA, SAP SLT, etc.
Good enterprise architectural exposure across SAP and non-SAP technologies.
Good hands-on experience in managing operating systems such as Windows Server, SUSE Linux and Red Hat Linux.
Good hands-on experience in handling databases such as MS SQL, MaxDB, Oracle, Sybase and HANA.
Understanding of how to reduce effort through optimized system sizing for on-premises and cloud applications.
Knowledge of SAP application/user licensing for S/4HANA is mandatory.
Experience with SAP S/4HANA brownfield implementations and S/4HANA on RISE.
Ability to guide delivery projects, with experience supporting RFP responses for system sizing/BASIS/HANA for Practice/Presales teams.
Project management/delivery management experience: AMS, roll-outs, upgrades, new implementations.
Cloud exposure and hands-on experience with Azure/GCP/AWS for SAP workload migration to hyperscalers as well as on SAP RISE.
Develop technical architecture ensuring that the solutions show high levels of performance, security and scalability (HA/DR) in line with SAP and hyperscaler guidelines.
Build solutions around S/4HANA brownfield conversion, Bluefield migration, greenfield implementation and cloud migration.

Posted 8 hours ago

Apply

6.0 - 10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Area(s) of responsibility
About Us
Birlasoft, a global leader at the forefront of Cloud, AI, and Digital technologies, seamlessly blends domain expertise with enterprise solutions. The company’s consultative and design-thinking approach empowers societies worldwide, enhancing the efficiency and productivity of businesses. As part of the multibillion-dollar diversified CKA Birla Group, Birlasoft, with its 12,000+ professionals, is committed to continuing the Group’s 170-year heritage of building sustainable communities.
Job Description
Years of experience: 6 to 10 years.
Experience in design, development and deployment using Azure services (Data Factory, Databricks, PySpark, SQL).
Develop and maintain scalable data pipelines and build new data source integrations to support increasing data volume and complexity.
Experience in creating technical specification designs and application interface designs.
Develop modern data warehouse solutions using the Azure stack (Azure Data Lake, Azure Databricks) and PySpark.
Develop batch processing and integration solutions and process structured and non-structured data (a minimal PySpark sketch of such a batch job follows this list).
Demonstrated in-depth skills with Azure Databricks, PySpark and SQL.
Collaborate and engage with the BI & analytics and business teams.
Minimum 2 years of project experience in Azure Databricks.
Minimum 2 years of experience in ADF.
Minimum 2 years of experience in PySpark.
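
As a brief illustration of the batch processing and pipeline work described above, here is a minimal PySpark sketch, assuming a Databricks-style runtime with Delta Lake available; the storage account, container, paths, and column names are hypothetical placeholders, not details taken from the posting.

# Minimal PySpark batch-processing sketch; storage paths and column names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_batch_ingest").getOrCreate()

# Read semi-structured JSON landed in Azure Data Lake Storage (ADLS Gen2).
raw = (
    spark.read
    .option("multiLine", "true")
    .json("abfss://raw@examplelake.dfs.core.windows.net/orders/2025/06/")
)

# Basic cleansing and typing before loading the curated zone.
curated = (
    raw.dropDuplicates(["order_id"])
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .filter(F.col("order_id").isNotNull())
)

# Write to a Delta table partitioned by date; Delta Lake is available by default on Databricks,
# and an Azure Data Factory pipeline would typically orchestrate this job on a schedule.
(
    curated.withColumn("order_date", F.to_date("order_ts"))
    .write.format("delta")
    .mode("append")
    .partitionBy("order_date")
    .save("abfss://curated@examplelake.dfs.core.windows.net/orders/")
)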

Posted 8 hours ago

Apply