5.0 years
4 - 6 Lacs
Hyderābād
On-site
Position Overview:
ShyftLabs is seeking a skilled Databricks Engineer to support the design, development, and optimization of big data solutions using the Databricks Unified Analytics Platform. This role requires strong expertise in Apache Spark, SQL, Python, and cloud platforms (AWS/Azure/GCP). The ideal candidate will collaborate with cross-functional teams to drive data-driven insights and ensure scalable, high-performance data architectures.

ShyftLabs is a growing data product company founded in early 2020 that works primarily with Fortune 500 companies. We deliver digital solutions built to help accelerate the growth of businesses in various industries by focusing on creating value through innovation.

Job Responsibilities:
- Design, implement, and optimize big data pipelines in Databricks.
- Develop scalable ETL workflows to process large datasets.
- Leverage Apache Spark for distributed data processing and real-time analytics.
- Implement data governance, security policies, and compliance standards.
- Optimize data lakehouse architectures for performance and cost-efficiency.
- Collaborate with data scientists, analysts, and engineers to enable advanced AI/ML workflows.
- Monitor and troubleshoot Databricks clusters, jobs, and performance bottlenecks.
- Automate workflows using CI/CD pipelines and infrastructure-as-code practices.
- Ensure data integrity, quality, and reliability in all pipelines.

Basic Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field.
- 5+ years of hands-on experience with Databricks and Apache Spark.
- Proficiency in SQL, Python, or Scala for data processing and analysis.
- Experience with cloud platforms (AWS, Azure, or GCP) for data engineering.
- Strong knowledge of ETL frameworks, data lakes, and Delta Lake architecture.
- Experience with CI/CD tools and DevOps best practices.
- Familiarity with data security, compliance, and governance best practices.
- Strong problem-solving and analytical skills with an ability to work in a fast-paced environment.

Preferred Qualifications:
- Databricks certifications (e.g., Databricks Certified Data Engineer, Spark Developer).
- Hands-on experience with MLflow, Feature Store, or Databricks SQL.
- Exposure to Kubernetes, Docker, and Terraform.
- Experience with streaming data architectures (Kafka, Kinesis, etc.).
- Strong understanding of business intelligence and reporting tools (Power BI, Tableau, Looker).
- Prior experience working with retail, e-commerce, or ad-tech data platforms.

We are proud to offer a competitive salary alongside a strong insurance package. We pride ourselves on the growth of our employees, offering extensive learning and development resources.
Posted 1 week ago
15.0 years
0 Lacs
Hyderābād
On-site
Project Role: Application Designer
Project Role Description: Assist in defining requirements and designing applications to meet business process and application requirements.
Must-Have Skills: Microsoft Azure Databricks
Good-to-Have Skills: Microsoft Azure Architecture
Minimum Experience: 3 year(s)
Educational Qualification: 15 years of full-time education

Summary: As an Application Designer, you will assist in defining requirements and designing applications to meet business process and application requirements. Your typical day will involve collaborating with various stakeholders to gather and analyze requirements, creating application designs that align with business objectives, and ensuring that the applications are user-friendly and efficient. You will also participate in team meetings to discuss project progress and contribute innovative ideas to enhance application functionality and performance.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation/contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Engage in continuous learning to stay updated with industry trends and technologies.
- Collaborate with cross-functional teams to ensure alignment on project goals and deliverables.

Professional & Technical Skills:
- Must-Have Skills: Proficiency in Microsoft Azure Databricks.
- Good-to-Have Skills: Experience with Microsoft Azure Architecture.
- Strong understanding of application design principles and methodologies.
- Experience in developing and deploying applications on cloud platforms.
- Familiarity with programming languages relevant to application development.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Microsoft Azure Databricks.
- This position is based at our Hyderabad office.
- 15 years of full-time education is required.
Posted 1 week ago
10.0 - 12.0 years
0 Lacs
Hyderābād
On-site
Overview:
PepsiCo Data BI & Integration Platforms is seeking an experienced Cloud Platform technology leader, responsible for overseeing the design, deployment, and maintenance of the Enterprise Data Foundation cloud infrastructure initiative on Azure/AWS. The ideal candidate will have hands-on experience with AWS/GCP services: Infrastructure as Code (IaC), platform provisioning & administration, cloud network design, cloud security principles, and automation.

Responsibilities:
- Provide guidance and support for application migration, modernization, and transformation projects, leveraging cloud-native technologies and methodologies.
- Implement cloud infrastructure policies, standards, and best practices, ensuring the cloud environment adheres to security and regulatory requirements.
- Design, deploy, and optimize cloud-based infrastructure using AWS/GCP services that meets the performance, availability, scalability, and reliability needs of our applications and services.
- Drive troubleshooting of cloud infrastructure issues, ensuring timely resolution and root cause analysis by partnering with the global cloud center of excellence, enterprise application teams, and PepsiCo premium cloud partners (AWS, GCP).
- Establish and maintain effective communication and collaboration with internal and external stakeholders, including business leaders, developers, customers, and vendors.
- Develop Infrastructure as Code (IaC) to automate provisioning and management of cloud resources.
- Write and maintain scripts for automation and deployment using PowerShell, Python, or the GCP/AWS CLI.
- Work with stakeholders to document architectures, configurations, and best practices.
- Apply knowledge of cloud security principles around data protection, identity and access management (IAM), compliance and regulatory requirements, threat detection and prevention, and disaster recovery and business continuity.
- Performance tuning: monitor performance, identify bottlenecks, and implement optimizations.
- Capacity planning: plan and manage cloud resources to ensure scalability and availability.
- Database design and development: design, develop, and implement databases in Azure/AWS.
- Manage cloud platform operations with a focus on FinOps support, optimizing resource utilization, cost visibility, and governance across multi-cloud environments.

Qualifications:
- Bachelor’s degree in computer science.
- At least 10 to 12 years of experience in IT cloud infrastructure, architecture, and operations, including security, with at least 8 years in a technical leadership role.
- Strong knowledge of cloud architecture, design, and deployment principles and practices, including microservices, serverless, containers, and DevOps.
- Deep expertise in AWS/GCP big data & analytics technologies, including Databricks, real-time data ingestion, data warehouses, serverless ETL, NoSQL databases, DevOps, Kubernetes, virtual machines, web/function apps, and monitoring and security tools.
- Strong understanding of cloud cost management, with hands-on experience in usage analytics, budgeting, and cost optimization strategies across multi-cloud platforms.
- Proficiency and hands-on experience with Google Cloud integration tools, the GCP platform, Workspace administration, Apigee integration management, security SaaS tools, BigQuery, and other GA-related tools.
- Deep expertise in AWS/GCP networking and security fundamentals, including network endpoints & network security groups, firewalls, external/internal DNS, load balancers, virtual networks, and subnets.
- Proficient in scripting and automation tools, such as PowerShell, Python, Terraform, and Ansible.
- Excellent problem-solving, analytical, and communication skills, with the ability to explain complex technical concepts to non-technical audiences.
- Certifications in AWS/GCP platform administration, networking, and security are preferred.
- Strong self-organization, time management, and prioritization skills.
- A high level of attention to detail, excellent follow-through, and reliability.
- Strong collaboration, teamwork, and relationship-building skills across multiple levels and functions in the organization.
- Ability to listen and establish rapport and credibility as a strategic partner vertically within the business unit or function, as well as with leadership and functional teams.
- Strategic thinker focused on business value results that utilize technical solutions.
- Strong communication skills in writing, speaking, and presenting.
- Able to work effectively in a multi-tasking environment.
- Fluent in English.
Posted 1 week ago
4.0 years
5 - 7 Lacs
Gurgaon
On-site
About Gartner Digital Markets:
Gartner Digital Markets is a business unit within Gartner. Our mission is to empower organizations to accelerate growth by helping them embrace the right technology and services. Gartner Digital Markets is the world’s largest platform for finding software and services. With more than 100 million annual visitors across four buyer destinations—Capterra, GetApp, Software Advice, and UpCity—and 70 localized sites, Gartner Digital Markets helps software and service providers build their brand, capture demand, and understand their market. As the only destination for software and services driven by independent, objective research and verified customer reviews, we help connect providers with in-market buyers to fuel growth across the full funnel.

For candidates interested in taking their next career step, Gartner Digital Markets offers the best of two worlds—the stability and resources of a large, established organization combined with the fast pace and excitement of working for a dynamic growth business. Our team is on the front lines of innovation in an industry that is always transforming, providing an incredible opportunity for you to grow and learn throughout your career.

About the Role:
We are seeking a Senior/Lead Data Platform Engineer to join our Data Platform team, who will play a key role in enabling and empowering data practitioners such as data engineers, analytics engineers, and analysts by providing robust, scalable, and self-service platform capabilities. You will focus on building and maintaining the foundational infrastructure, tools, and frameworks that support data ingestion, transformation, and analytics. Your work will abstract away complexity, enforce standards, and reduce friction for teams consuming or producing data.

What you’ll do:
- Design, develop, and maintain a scalable and secure data platform that supports ingestion, transformation, orchestration, cataloging, and governance.
- Build tools, libraries, and services that allow other teams to own and manage their own pipelines and workflows independently.
- Provide self-service infrastructure (e.g., templates, SDKs, CI/CD patterns) to support repeatable and consistent data engineering practices.
- Implement and manage data platform components: orchestration frameworks, data catalog, access control layers, and metadata systems.
- Collaborate with stakeholders to define SLAs, monitoring, and observability across the data stack.
- Champion infrastructure as code, automation, and standardization across the platform.
- Ensure data security, compliance, and cost efficiency across environments.

What you’ll need:
- 4 to 6 years of hands-on experience in data infrastructure, data platform engineering, or related roles, with a bachelor’s degree.
- Proficiency in Python and experience building backend services or CLI tools.
- Proficiency in cloud data platforms such as Snowflake or Databricks.
- Understanding of core cloud services, preferably AWS (S3, EC2, Glue, IAM, etc.).
- Hands-on experience with orchestration tools (Airflow, Prefect, etc.).
- Hands-on experience with CI/CD and infrastructure as code (Terraform).
- Familiarity with Kubernetes, Docker, and container-based deployment models.

Don’t meet every single requirement? We encourage you to apply anyway. You might just be the right candidate for this, or other roles. #LI-VG1

Who are we?
At Gartner, Inc. (NYSE:IT), we guide the leaders who shape the world. Our mission relies on expert analysis and bold ideas to deliver actionable, objective insight, helping enterprise leaders and their teams succeed with their mission-critical priorities. Since our founding in 1979, we’ve grown to more than 21,000 associates globally who support ~14,000 client enterprises in ~90 countries and territories. We do important, interesting and substantive work that matters. That’s why we hire associates with the intellectual curiosity, energy and drive to want to make a difference.
The bar is unapologetically high. So is the impact you can have here.

What makes Gartner a great place to work?
Our sustained success creates limitless opportunities for you to grow professionally and flourish personally. We have a vast, virtually untapped market potential ahead of us, providing you with an exciting trajectory long into the future. How far you go is driven by your passion and performance. We hire remarkable people who collaborate and win as a team. Together, our singular, unifying goal is to deliver results for our clients. Our teams are inclusive and composed of individuals from different geographies, cultures, religions, ethnicities, races, genders, sexual orientations, abilities and generations. We invest in great leaders who bring out the best in you and the company, enabling us to multiply our impact and results. This is why, year after year, we are recognized worldwide as a great place to work.

What do we offer?
Gartner offers world-class benefits, highly competitive compensation and disproportionate rewards for top performers. In our hybrid work environment, we provide the flexibility and support for you to thrive — working virtually when it's productive to do so and getting together with colleagues in a vibrant community that is purposeful, engaging and inspiring.

Ready to grow your career with Gartner? Join us.

The policy of Gartner is to provide equal employment opportunities to all applicants and employees without regard to race, color, creed, religion, sex, sexual orientation, gender identity, marital status, citizenship status, age, national origin, ancestry, disability, veteran status, or any other legally protected status and to seek to advance the principles of equal employment opportunity. Gartner is committed to being an Equal Opportunity Employer and offers opportunities to all job seekers, including job seekers with disabilities.

If you are a qualified individual with a disability or a disabled veteran, you may request a reasonable accommodation if you are unable or limited in your ability to use or access the Company’s career webpage as a result of your disability. You may request reasonable accommodations by calling Human Resources at +1 (203) 964-0096 or by sending an email to ApplicantAccommodations@gartner.com.

Job Requisition ID: 95310

By submitting your information and application, you confirm that you have read and agree to the country or regional recruitment notice linked below applicable to your place of residence. Gartner Applicant Privacy Link: https://jobs.gartner.com/applicant-privacy-policy

For efficient navigation through the application, please only use the back button within the application, not the back arrow within your browser.
Posted 1 week ago
5.0 years
5 - 9 Lacs
Gurgaon
On-site
Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.

Title and Summary
MLOps Engineering Director

Overview:
The Horizontal Data Science Enablement Team within SSO Data Science is looking for an MLOps Engineering Director who can help solve MLOps problems, manage the Databricks platform for the entire organization, build CI/CD and automation pipelines, and lead best practices.

Role and responsibilities:
- Oversee the administration, configuration, and maintenance of Databricks clusters and workspaces.
- Continuously monitor Databricks clusters for high workloads or excessive usage costs, and promptly alert relevant stakeholders to address issues impacting overall cluster health.
- Implement and manage security protocols, including access controls and data encryption, to safeguard sensitive information in adherence with Mastercard standards.
- Facilitate the integration of various data sources into Databricks, ensuring seamless data flow and consistency.
- Identify and resolve issues related to Databricks infrastructure, providing timely support to users and stakeholders.
- Work closely with data engineers, data scientists, and other stakeholders to support their data processing and analytics needs.
- Maintain comprehensive documentation of Databricks configurations, processes, and best practices, and lead participation in security and architecture reviews of the infrastructure.
- Bring MLOps expertise to the table, namely within the scope of, but not limited to: model monitoring; feature catalog/store; model lineage maintenance; and CI/CD pipelines to gatekeep the model lifecycle from development to production.
- Own and maintain MLOps solutions, either by leveraging open-source solutions or with a third-party vendor.
- Build LLMOps pipelines using open-source solutions; recommend alternatives and onboard products to the solution.
- Maintain services once they are live by measuring and monitoring availability, latency, and overall system health.
- Manage a small team of MLOps engineers.

All about you:
- Master’s degree in computer science, software engineering, or a similar field.
- Strong experience with Databricks and its management of roles and resources.
- Experience in cloud technologies and operations.
- Experience supporting APIs and cloud technologies.
- Experience with MLOps solutions like MLflow.
- Experience with performing data analysis, data observability, data ingestion, and data integration.
- 7+ years of DevOps, SRE, or general systems engineering experience.
- 5+ years of hands-on experience with industry-standard CI/CD tools like Git/Bitbucket, Jenkins, Maven, Artifactory, and Chef.
- Experience architecting and implementing data governance processes and tooling (such as data catalogs, lineage tools, role-based access control, and PII handling).
- Strong coding ability in Python or other languages like Java and C++, plus a solid grasp of SQL fundamentals.
- Systematic problem-solving approach, coupled with strong communication skills and a sense of ownership and drive.

What could set you apart:
- SQL tuning experience.
- Strong automation experience.
- Strong data observability experience.
- Operations experience supporting highly scalable systems.
- Ability to operate in a 24x7 environment encompassing global time zones.
- Self-motivated; creatively solves software problems and effectively keeps the lights on for modeling systems.

Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization and, therefore, it is expected that every person working for, or on behalf of, Mastercard is responsible for information security and must:
- Abide by Mastercard’s security policies and practices;
- Ensure the confidentiality and integrity of the information being accessed;
- Report any suspected information security violation or breach; and
- Complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines.
Posted 1 week ago
6.0 years
5 - 7 Lacs
Gurgaon
On-site
About Gartner IT:
Join a world-class team of skilled engineers who build creative digital solutions to support our colleagues and clients. We make a broad organizational impact by delivering cutting-edge technology solutions that power Gartner. Gartner IT values its culture of nonstop innovation, an outcome-driven approach to success, and the notion that great ideas can come from anyone on the team.

About the role:
The Lead Analytics Engineer will provide technical expertise in designing and building a modern data warehouse in the Azure cloud to meet the data needs of various business units in Gartner. You will be part of the Ingestion Team that brings data from multiple sources into the data warehouse, and you will collaborate with the dashboard, analytics, and business teams to build end-to-end scalable data pipelines.

What you will do:
- Review and analyze business requirements and design technical mapping documents.
- Build new ETL pipelines using Azure Data Factory and Synapse.
- Design, build, and automate data pipelines and applications to support data scientists and business users with their reporting and analytics needs.
- Collaborate on data warehouse architecture and technical design discussions.
- Perform and participate in code reviews, peer inspections, and technical design and specifications, as well as document and review detailed designs.
- Provide status reports to higher management.
- Help define and build best practices & processes.
- Maintain service levels and department goals for problem resolution.
- Design and build tabular data models in Azure Analysis Services for seamless integration with Power BI.
- Write efficient SQL queries and DAX (Data Analysis Expressions) to support robust data models, reports, and dashboards.
- Tune and optimize data models and queries for maximum performance and efficient data retrieval.
What you will need:
- 6-8 years of experience in data warehouse design & development.
- Experience in ETL using Azure Data Factory (ADF).
- Experience writing complex T-SQL procedures in Synapse / SQL Data Warehouse.
- Experience analyzing complex code and performance-tuning pipelines.
- Good knowledge of Azure cloud technology and exposure to Azure cloud components.
- Good understanding of business processes and analyzing underlying data.
- Understanding of dimensional and relational modeling.

Nice to have:
- Experience with version control systems (e.g., Git, Subversion).
- Power BI and AAS experience for tabular model design.
- Experience with data intelligence platforms like Databricks.

Who you are:
- Effective time management skills and ability to meet deadlines.
- Excellent communication skills interacting with technical and business audiences.
- Excellent organization, multitasking, and prioritization skills.
- Willingness and aptitude to embrace new technologies/ideas and master concepts rapidly.
- Intellectual curiosity, passion for technology, and keeping up with new trends.
- Delivering project work on time and within budget with high quality.

Don’t meet every single requirement? We encourage you to apply anyway. You might just be the right candidate for this, or other roles. #LI-PM3

Job Requisition ID: 101545
Posted 1 week ago
2.0 years
8 - 10 Lacs
Bengaluru
On-site
Do you want to work on complex and pressing challenges—the kind that bring together curious, ambitious, and determined leaders who strive to become better every day? If this sounds like you, you’ve come to the right place.

Your Impact
As a Data Engineer at QuantumBlack, you will collaborate with stakeholders, data scientists, and internal teams to develop and implement data products and solutions. Key responsibilities include building and maintaining technical platforms for advanced analytics, designing scalable and reproducible data pipelines for machine learning, and ensuring information security in data environments. You will assess clients’ data quality, map data fields to hypotheses, and prepare data for analytics models. Additionally, you will contribute to R&D projects, internal asset development, and cross-functional problem-solving sessions with various stakeholders, including C-level executives, to create impactful analytics solutions.

You will be based in Gurugram or Bengaluru as part of a global data engineering community, and you will work in cross-functional and Agile project teams alongside project managers, data scientists, machine learning engineers, other data engineers, and industry experts. You will work hand in hand with our clients, from data owners and users to C-level executives. You will be aligned to one of our practices, Pharma & Medical Products (PMP) or Global Energy & Materials (GEM), to work with similar industry clients.

Our PMP practice focuses on helping clients bring life-saving medicines and medical treatments to patients. You will work with the Advanced Analytics team across Research & Development (R&D), Operations, and Commercial to build and scale digital and analytical approaches. This practice is one of the fastest growing practices and is comprised of a tight-knit community of consultants, research, solution, data, and practice operations colleagues across the firm.
PMP is also one of the most globally connected sector practices, offering ample global exposure. Our GEM practice supports clients in a wide range of industries, including chemicals, steel, mining, pulp & paper, electric power, and oil & gas, among others, on their way to operational excellence. The energy and materials industries are big and important to the world’s economy, and as a key player you will face significant challenges such as meeting growing demand, operating productively, and managing gigantic capital investments. McKinsey has an unparalleled reputation in these industries across the world, and today serves most of the top players globally.

GEMx and PMPx are the practices’ assetization arms, focused on creating reusable digital and analytics assets to support our client work. They work directly with clients to build and scale digital and analytical approaches to addressing their most persistent priorities; e.g., PMPx builds and operates tools that support senior executives in pharma and device manufacturers, for whom evidence-based decision-making and competitive intelligence are paramount.

As part of our group, you will join a global practice solving problems for large organizations in our GEM practice and building their capabilities for sustained impact. You will work on the frameworks and libraries that our teams of Data Scientists and Data Engineers use to progress from data to impact. You will guide global companies through data science solutions to transform their businesses and enhance performance across industries including healthcare, automotive, energy, and elite sport.

Real-World Impact: We provide unique learning and development opportunities internationally.

Fusing Tech & Leadership: We work with the latest technologies and methodologies and offer first-class learning programs at all levels.
Multidisciplinary Teamwork: Our teams include data scientists, engineers, project managers, and UX and visual designers who work collaboratively to enhance performance.

Innovative Work Culture: Creativity, insight and passion come from being balanced. We cultivate a modern work environment through an emphasis on wellness, insightful talks and training sessions.

Striving for Diversity: With colleagues from over 40 nationalities, we recognize the benefits of working with people from all walks of life.

While we advocate for using the right tech for the right task, we often leverage the following technologies: Python, PySpark, the PyData stack, SQL, Airflow, Databricks, our own open-source data pipelining framework called Kedro, Dask/RAPIDS, container technologies such as Docker and Kubernetes, cloud solutions such as AWS, GCP, and Azure, and more!

As a Data Engineer, you will:
- Collaborate with business stakeholders, data scientists, and internal teams to build and implement extraordinary domain-focused data products (re-usable assets) and solutions, delivering them right to the client.
- Develop deep domain understanding.
- Use new and creative techniques to deliver impact for our clients as well as R&D projects.
- Help build and maintain the technical platform for advanced analytics engagements, spanning data science and data engineering work.
- Design and build data pipelines for machine learning that are robust, modular, scalable, deployable, reproducible, and versioned.
- Create and manage data environments and ensure information security standards are maintained at all times.
- Understand clients’ data landscape and assess data quality.
- Map data fields to hypotheses and curate, wrangle, and prepare data for use in advanced analytics models.
- Have the opportunity to contribute to R&D projects and internal asset development.
- Contribute to cross-functional problem-solving sessions with your team and our clients, from data owners and users to C-level executives, to address their needs and build impactful analytics solutions.

Your Growth
Driving lasting impact and building long-term capabilities with our clients is not easy work. You are the kind of person who thrives in a high-performance/high-reward culture - doing hard things, picking yourself up when you stumble, and having the resilience to try another way forward. In return for your drive, determination, and curiosity, we'll provide the resources, mentorship, and opportunities you need to become a stronger leader faster than you ever thought possible. Your colleagues—at all levels—will invest deeply in your development, just as much as they invest in delivering exceptional results for clients. Every day, you'll receive apprenticeship, coaching, and exposure that will accelerate your growth in ways you won’t find anywhere else.

When you join us, you will have:

Continuous learning: Our learning and apprenticeship culture, backed by structured programs, is all about helping you grow while creating an environment where feedback is clear, actionable, and focused on your development. The real magic happens when you take the input from others to heart and embrace the fast-paced learning experience, owning your journey.

A voice that matters: From day one, we value your ideas and contributions. You’ll make a tangible impact by offering innovative ideas and practical solutions. We not only encourage diverse perspectives, but they are critical in driving us toward the best possible outcomes.

Global community: With colleagues across 65+ countries and over 100 different nationalities, our firm’s diversity fuels creativity and helps us come up with the best solutions for our clients. Plus, you’ll have the opportunity to learn from exceptional colleagues with diverse backgrounds and experiences.

World-class benefits: On top of a competitive salary (based on your location, experience, and skills), we provide a comprehensive benefits package to enable holistic well-being for you and your family.
Your qualifications and skills Bachelor's degree in computer science or a related field; master's degree is a plus 2+ years of relevant work experience Experience with at least one of the following technologies: Python, Scala, Java Strong, proven experience with distributed processing frameworks (Spark, Hadoop, EMR) and SQL is expected; commercial client-facing project experience is helpful, including working in close-knit teams Ability to work across structured, semi-structured, and unstructured data, extracting information and identifying linkages across disparate data sets Proven ability to clearly communicate complex solutions; strong attention to detail Understanding of information security principles to ensure compliant handling and management of client data Experience with and interest in cloud platforms such as AWS, Azure, Google Cloud Platform, or Databricks Good to have: experience in CI/CD using GitHub Actions, CircleCI, or any other CI/CD tech stack, and experience in end-to-end pipeline development, including application deployment
Posted 1 week ago
10.0 years
1 - 8 Lacs
Chennai
Remote
Title: Senior Data Architect Years of Experience: 10+ years Job Description The Senior Data Architect will design, govern, and optimize the entire data ecosystem for advanced analytics and AI workloads. This role ensures data is collected, stored, processed, and made accessible in a secure, performant, and scalable manner. The candidate will drive architecture design for structured/unstructured data, build data governance frameworks, and support the evolution of modern data platforms across cloud environments. Key Responsibilities · Architect enterprise data platforms using Azure/AWS/GCP and modern data lake/data mesh patterns · Design logical and physical data models, semantic layers, and metadata frameworks · Establish data quality, lineage, governance, and security policies · Guide the development of ETL/ELT pipelines using modern tools and streaming frameworks · Integrate AI and analytics solutions with operational data platforms · Enable self-service BI and ML pipelines through Databricks, Synapse, or Snowflake · Lead architecture reviews, design sessions, and CoE reference architecture development Technical Skills · Cloud Platforms: Azure Synapse, Databricks, Azure Data Lake, AWS Redshift · Data Modeling: ERwin, dbt, PowerDesigner · Storage & Processing: Delta Lake, Cosmos DB, PostgreSQL, Hadoop, Spark · Integration: Azure Data Factory, Kafka, Event Grid, SSIS · Metadata/Lineage: Purview, Collibra, Informatica · BI Platforms: Power BI, Tableau, Looker · Security & Compliance: RBAC, encryption at rest/in transit, NIST/FISMA Qualifications · Bachelor's or Master's in Computer Science, Information Systems, or Data Engineering · Microsoft Certified: Azure Data Engineer / Azure Solutions Architect · Strong experience building cloud-native data architectures · Demonstrated ability to create data blueprints aligned with business strategy and compliance.
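The star/snowflake-style modeling this role calls for can be illustrated with a minimal sketch. All tables, columns, and figures below are hypothetical, using Python's built-in sqlite3 purely to show the fact/dimension shape and the kind of semantic-layer query built on top of it:

```python
import sqlite3

# One dimension table plus one fact table keyed to it -- the smallest
# possible star schema. Names and values are illustrative only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_product (
        product_id INTEGER PRIMARY KEY,
        category   TEXT NOT NULL
    );
    CREATE TABLE fact_sales (
        sale_id    INTEGER PRIMARY KEY,
        product_id INTEGER REFERENCES dim_product(product_id),
        amount     REAL NOT NULL
    );
""")
conn.executemany("INSERT INTO dim_product VALUES (?, ?)",
                 [(1, "Beverages"), (2, "Snacks")])
conn.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)",
                 [(10, 1, 5.0), (11, 1, 7.5), (12, 2, 3.0)])

# A semantic layer typically exposes measures (SUM of a fact column)
# grouped by dimension attributes, as in this join.
rows = conn.execute("""
    SELECT d.category, SUM(f.amount)
    FROM fact_sales f JOIN dim_product d USING (product_id)
    GROUP BY d.category ORDER BY d.category
""").fetchall()
# rows == [("Beverages", 12.5), ("Snacks", 3.0)]
```

In a snowflake variant, dim_product would itself be normalized into further dimension tables; the fact-to-dimension join pattern stays the same.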
Job Types: Full-time, Permanent Work Location: Hybrid remote in Chennai, Tamil Nadu Expected Start Date: 12/07/2025
Posted 1 week ago
4.0 years
0 Lacs
India
On-site
Job Information Date Opened 07/11/2025 City Saidapet Country India Job Role Data Engineering State/Province Tamil Nadu Industry IT Services Job Type Full time Zip/Postal Code 600096 Job Description Introduction to the Role: Are you passionate about unlocking the power of data to drive innovation and transform business outcomes? Join our cutting-edge Data Engineering team and be a key player in delivering scalable, secure, and high-performing data solutions across the enterprise. As a Data Engineer, you will play a central role in designing and developing modern data pipelines and platforms that support data-driven decision-making and AI-powered products. With a focus on Python, SQL, AWS, PySpark, and Databricks, you'll enable the transformation of raw data into valuable insights by applying engineering best practices in a cloud-first environment. We are looking for a highly motivated professional who can work across teams to build and manage robust, efficient, and secure data ecosystems that support both analytical and operational workloads. Accountabilities: Design, build, and optimize scalable data pipelines using PySpark, Databricks, and SQL on AWS cloud platforms. Collaborate with data analysts, data scientists, and business users to understand data requirements and ensure reliable, high-quality data delivery. Implement batch and streaming data ingestion frameworks from a variety of sources (structured, semi-structured, and unstructured data). Develop reusable, parameterized ETL/ELT components and data ingestion frameworks. Perform data transformation, cleansing, validation, and enrichment using Python and PySpark. Build and maintain data models, data marts, and logical/physical data structures that support BI, analytics, and AI initiatives. Apply best practices in software engineering, version control (Git), code reviews, and agile development processes.
Ensure data pipelines are well-tested, monitored, and robust with proper logging and alerting mechanisms. Optimize performance of distributed data processing workflows and large datasets. Leverage AWS services (such as S3, Glue, Lambda, EMR, Redshift, Athena) for data orchestration and lakehouse architecture design. Participate in data governance practices and ensure compliance with data privacy, security, and quality standards. Contribute to documentation of processes, workflows, metadata, and lineage using tools such as Data Catalogs or Collibra (if applicable). Drive continuous improvement in engineering practices, tools, and automation to increase productivity and delivery quality. Essential Skills / Experience: 4 to 6 years of professional experience in Data Engineering or a related field. Strong programming experience with Python and experience using Python for data wrangling, pipeline automation, and scripting. Deep expertise in writing complex and optimized SQL queries on large-scale datasets. Solid hands-on experience with PySpark and distributed data processing frameworks. Expertise working with Databricks for developing and orchestrating data pipelines. Experience with AWS cloud services such as S3, Glue, EMR, Athena, Redshift, and Lambda. Practical understanding of ETL/ELT development patterns and data modeling principles (star/snowflake schemas). Experience with job orchestration tools like Airflow, Databricks Jobs, or AWS Step Functions. Understanding of data lake, lakehouse, and data warehouse architectures. Familiarity with DevOps and CI/CD tools for code deployment (e.g., Git, Jenkins, GitHub Actions). Strong troubleshooting and performance optimization skills in large-scale data processing environments. Excellent communication and collaboration skills, with the ability to work in cross-functional agile teams.
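As one hedged illustration of the "reusable, parameterized ETL/ELT components" and "transformation, cleansing, validation" responsibilities described above, here is a minimal pure-Python sketch. The field names and rules are invented, and a production version would typically be expressed as PySpark DataFrame operations rather than a Python loop:

```python
# A parameterized cleanse-and-validate step: callers supply the required
# fields and the type casts, so the same component can be reused across feeds.
def cleanse(records, required, casts):
    """Drop records missing required fields; cast the rest; return (good, bad)."""
    good, bad = [], []
    for rec in records:
        if any(rec.get(f) in (None, "") for f in required):
            bad.append(rec)          # route to a quarantine/alerting path
            continue
        try:
            good.append({k: casts.get(k, str)(v) for k, v in rec.items()})
        except (TypeError, ValueError):
            bad.append(rec)          # cast failure also quarantines the record
    return good, bad

raw = [{"id": "1", "amount": "9.99"},
       {"id": "2", "amount": ""},        # fails the required-field rule
       {"id": "3", "amount": "oops"}]    # fails the cast rule
good, bad = cleanse(raw, required=["id", "amount"],
                    casts={"id": int, "amount": float})
# good == [{"id": 1, "amount": 9.99}]; len(bad) == 2
```

The same shape maps onto PySpark as a filter on null/empty columns plus cast expressions, with the rejected rows written to a quarantine table for the alerting mechanisms mentioned above.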
Desirable Skills / Experience: AWS or Databricks certifications (e.g., AWS Certified Data Analytics, Databricks Data Engineer Associate/Professional). Exposure to data observability, monitoring, and alerting frameworks (e.g., Monte Carlo, Datadog, CloudWatch). Experience working in healthcare, life sciences, finance, or another regulated industry. Familiarity with data governance and compliance standards (GDPR, HIPAA, etc.). Knowledge of modern data architectures (Data Mesh, Data Fabric). Exposure to streaming data tools like Kafka, Kinesis, or Spark Structured Streaming. Experience with data visualization tools such as Power BI, Tableau, or QuickSight. Work Environment & Collaboration: We value a hybrid, collaborative environment that encourages shared learning and innovation. You will work closely with product owners, architects, analysts, and data scientists across geographies to solve real-world business problems using cutting-edge technologies and methodologies. We encourage flexibility while maintaining a strong in-office presence for better team synergy and innovation. About Agilisium - Agilisium is an AWS technology Advanced Consulting Partner that enables companies to accelerate their "Data-to-Insights-Leap". With $25+ million in annual revenue and over 40% year-over-year growth, Agilisium is one of the fastest-growing IT solution providers in Southern California. Our most important asset? People. Talent management plays a vital role in our business strategy. We're looking for "drivers": big thinkers with a growth and strategic mindset who are committed to customer obsession and aren't afraid to experiment with new ideas. And we are all about finding and nurturing individuals who are ready to do great work. At Agilisium, you'll collaborate with great minds while being challenged to meet and exceed your potential
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
Chennai
On-site
Description The opportunity: To carry out software development using Power Apps, Power Automate, and SharePoint as per the job description. How you'll make an impact: Develop complex applications with Microsoft Power Apps and Power Automate, using SharePoint, Dataverse, or SQL as the backend. Propose and guide the team in establishing the app's data storage and retrieval in the enterprise data platform (using the data lake and Databricks). Connect with the business to gather requirements and set priorities for development. Connect with subject matter experts to understand the business processes. Organize change requests in a structured manner with excellent traceability. Convert business requirements into process flow charts. Work independently in developing Power Apps applications. Conduct periodic design review meetings to ensure development is progressing as per the agreed timeline. Follow up with the business to ensure required inputs are received on time. Support business users during user acceptance testing. Undertake change requests. Responsible for ensuring compliance with applicable external and internal regulations, procedures, and guidelines. Living Hitachi Energy's core values of safety and integrity, which means taking responsibility for your own actions while caring for your colleagues and the business. Your background: B.Tech / MCA 4-8 years of experience Should have executed at least 5 projects using the Power Apps and Power Automate platform in a lead role. Should have good technical and working knowledge of SQL Server. Should have expertise in canvas apps and model-driven apps Expertise in creating complex Power Automate flows. Exposure to enterprise data platform, data lake, and Databricks concepts. Expertise in interfacing with software platforms such as SAP, Salesforce, etc.
Knowledge of artificial intelligence / machine learning concepts and implementation methods Qualified individuals with a disability may request a reasonable accommodation if you are unable or limited in your ability to use or access the Hitachi Energy career site as a result of your disability. You may request reasonable accommodations by completing a general inquiry form on our website. Please include your contact information and specific details about your required accommodation to support you during the job application process. This is solely for job seekers with disabilities requiring accessibility assistance or an accommodation in the job application process. Messages left for other purposes will not receive a response.
Posted 1 week ago
4.0 - 8.0 years
7 - 10 Lacs
Chennai
On-site
Position: Senior Data Analyst Location: Nagpur/Pune/Chennai/Bangalore Type of Employment: Full-time Purpose of Position: This role requires a candidate who is enthusiastic about data and driven to help solve the organisation's analytics challenges. As a member of the team, you will support our clients on their data and analytics journey by analyzing data, identifying trends, and generating insights to drive informed decision-making. You will also work on improving data consistency and ensuring that the necessary infrastructure and pipelines are in place to support analytics platforms. Key Result Areas and Activities: Data Analysis and Interpretation: Aggregate, clean, and analyze large datasets to identify trends and insights that address key business challenges. Process Improvement: Identify opportunities for process optimization through data analysis, recommending improvements based on findings. Reporting and Visualization: Create comprehensive reports and dashboards that effectively communicate findings to stakeholders at all levels. Present data-driven insights through compelling visualizations and narratives to support decision-making. Stakeholder Engagement: Work closely with internal and external clients to understand their analytics needs and deliver tailored solutions. Communicate complex analytical concepts in a clear and concise manner to non-technical stakeholders.
Work and Technical Expertise: Essential Skills: Proficiency with SQL for data extraction, transformation, and analysis Hands-on experience with Databricks or similar cloud-based data platforms Excellent verbal and written communication skills to present findings effectively Ability to translate complex datasets into meaningful business insights A problem-solving mindset with attention to detail Desirable Skills: Familiarity with Python or other scripting languages for data manipulation Strong experience in the retail industry is preferred Qualifications: Bachelor's degree in computer science, engineering, or a related field (master's degree is a plus) Demonstrated continued learning through one or more technical certifications or related methods Qualities: Self-motivated and focused on delivering outcomes for a fast-growing team and firm Able to communicate persuasively through speaking, writing, and client presentations Able to consult, write, and present persuasively Able to work in a self-organized and cross-functional team Location: India Years of Experience: 4 to 8 years
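The "aggregate, clean, and analyze large datasets to identify trends" responsibility described above often reduces to an aggregate-and-order SQL query. A minimal sketch using Python's built-in sqlite3, where the table, columns, and figures are illustrative only:

```python
import sqlite3

# Hypothetical monthly revenue data; in practice this would come from a
# Databricks table or warehouse rather than an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_month TEXT, revenue REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", [
    ("2025-01", 100.0), ("2025-01", 50.0),
    ("2025-02", 120.0), ("2025-03", 90.0),
])

# Group by the time dimension and order chronologically to expose the trend.
trend = conn.execute("""
    SELECT order_month, SUM(revenue)
    FROM orders
    GROUP BY order_month
    ORDER BY order_month
""").fetchall()
# trend == [("2025-01", 150.0), ("2025-02", 120.0), ("2025-03", 90.0)]
```

The resulting month-by-month series is what a dashboard or narrative visualization would then present to stakeholders.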
Posted 1 week ago
4.0 years
4 - 6 Lacs
Chennai
On-site
Job Title: Data Engineer – C11/Officer (India) The Role The Data Engineer is accountable for developing high-quality data products to support the Bank's regulatory requirements and data-driven decision-making. A Data Engineer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence, they will contribute to business outcomes on an agile team. Responsibilities Developing and supporting scalable, extensible, and highly available data solutions Deliver on critical business priorities while ensuring alignment with the wider architectural vision Identify and help address potential risks in the data supply chain Follow and contribute to technical standards Design and develop analytical data models Required Qualifications & Work Experience First Class Degree in Engineering/Technology (4-year graduate course) 5 to 8 years' experience implementing data-intensive solutions using agile methodologies Experience of relational databases and using SQL for data querying, transformation and manipulation Experience of modelling data for analytical consumers Ability to automate and streamline the build, test and deployment of data pipelines Experience in cloud-native technologies and patterns A passion for learning new technologies, and a desire for personal growth, through self-study, formal classes, or on-the-job training Excellent communication and problem-solving skills Technical Skills (Must Have) ETL: Hands-on experience of building data pipelines.
Proficiency in two or more data integration platforms such as Ab Initio, Apache Spark, Talend and Informatica Big Data: Experience of 'big data' platforms such as Hadoop, Hive or Snowflake for data storage and processing Data Warehousing & Database Management: Understanding of Data Warehousing concepts, Relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design Data Modeling & Design: Good exposure to data modeling techniques; design, optimization and maintenance of data models and data structures Languages: Proficient in one or more programming languages commonly used in data engineering such as Python, Java or Scala DevOps: Exposure to concepts and enablers - CI/CD platforms, version control, automated quality control management Technical Skills (Valuable) Ab Initio: Experience developing Co>Op graphs; ability to tune for performance. Demonstrable knowledge across the full suite of Ab Initio toolsets, e.g., GDE, Express>IT, Data Profiler, Conduct>IT, Control>Center, Continuous>Flows Cloud: Good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc. Demonstrable understanding of underlying architectures and trade-offs Data Quality & Controls: Exposure to data validation, cleansing, enrichment and data controls Containerization: Fair understanding of containerization platforms like Docker, Kubernetes File Formats: Exposure to working with event/file/table formats such as Avro, Parquet, Protobuf, Iceberg, Delta Others: Basics of job schedulers like Autosys. Basics of entitlement management Certification in any of the above topics would be an advantage. - Job Family Group: Technology - Job Family: Digital Software Engineering - Time Type: Full time - Most Relevant Skills Please see the requirements listed above. - Other Relevant Skills For complementary skills, please see above and/or contact the recruiter.
- Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
Posted 1 week ago
7.0 years
4 - 9 Lacs
Noida
On-site
Posted On: 11 Jul 2025 Location: Noida, UP, India Company: Iris Software Why Join Us? Are you inspired to grow your career at one of India's Top 25 Best Workplaces in the IT industry? Do you want to do the best work of your life at one of the fastest-growing IT services companies? Do you aspire to thrive in an award-winning work culture that values your talent and career aspirations? It's happening right here at Iris Software. About Iris Software At Iris Software, our vision is to be our client's most trusted technology partner, and the first choice for the industry's top professionals to realize their full potential. With over 4,300 associates across India, the U.S.A., and Canada, we help our enterprise clients thrive with technology-enabled transformation across financial services, healthcare, transportation & logistics, and professional services. Our work covers complex, mission-critical applications with the latest technologies, such as high-value complex Application & Product Engineering, Data & Analytics, Cloud, DevOps, Data & MLOps, Quality Engineering, and Business Automation. Working at Iris Be valued, be inspired, be your best. At Iris Software, we invest in and create a culture where colleagues feel valued, can explore their potential, and have opportunities to grow. Our employee value proposition (EVP) is about "Being Your Best" – as a professional and person. It is about being challenged by work that inspires us, being empowered to excel and grow in your career, and being part of a culture where talent is valued. We're a place where everyone can discover and be their best version. Job Description Technical Expertise: Must Have: Experience in the Emma orchestration engine is a must Proficient in Python programming, with experience in agentic platforms from pro-code (e.g., Autogen, Semantic Kernel, LangGraph) to low-code (e.g., Crew.ai, EMA.ai). Hands-on experience with Azure OpenAI and related tools and services.
Fluent in GenAI packages like LlamaIndex and LangChain. Soft Skills: Excellent communication and collaboration skills, with the ability to work effectively with stakeholders across business and technical teams. Strong problem-solving and analytical skills. Attention to detail. Ability to work with teams in a dynamic, fast-paced environment. Experience: 7 to 10 years of experience in software development, with 3+ years in AI/ML or Generative AI projects. Demonstrated experience in deploying and managing AI applications in production environments. Key Responsibilities: Design, develop, and implement complex Generative AI solutions with high accuracy and for complex use cases. Utilize agentic platforms from pro-code (e.g., Autogen, Semantic Kernel, LangGraph) to low-code (e.g., Crew.ai, EMA.ai). Leverage Azure OpenAI ecosystems and tooling, including training models, advanced prompting, the Assistant API, and agent curation. Write efficient, clean, and maintainable Python code for AI applications. Develop and deploy RESTful APIs using frameworks like Flask or Django for model integration and consumption. Fine-tune and optimize AI models for business use cases. Mandatory Competencies Programming Language - Python - Django Programming Language - Python - Flask Data Science and Machine Learning - Data Science and Machine Learning - Gen AI Data Science and Machine Learning - Data Science and Machine Learning - AI/ML Cloud - Azure - Azure Data Factory (ADF), Azure Databricks, Azure Data Lake Storage, Event Hubs, HDInsight Beh - Communication and collaboration Perks and Benefits for Irisians At Iris Software, we offer world-class benefits designed to support the financial, health and well-being needs of our associates to help achieve harmony between their professional and personal growth.
From comprehensive health insurance and competitive salaries to flexible work arrangements and ongoing learning opportunities, we're committed to providing a supportive and rewarding work environment. Join us and experience the difference of working at a company that values its employees' success and happiness.
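For context on the agentic platforms named in the role above (Autogen, Semantic Kernel, LangGraph), the core loop they formalize can be sketched in a few lines. Everything here is a toy stand-in: fake_llm replaces a real model endpoint such as Azure OpenAI, and the tool registry and message formats are hypothetical:

```python
def fake_llm(prompt):
    # A real call would hit a model endpoint; here the "policy" is hard-coded:
    # request the add tool first, then finish once an observation is present.
    if "observation" in prompt:
        return "FINAL:the sum is 5"
    return "TOOL:add 2 3"

# The agent's available tools -- real frameworks register these with schemas.
TOOLS = {"add": lambda a, b: a + b}

def run_agent(task, max_steps=3):
    """Plan -> act -> observe loop: call the model, run requested tools,
    feed observations back, and stop on a FINAL answer."""
    history = [task]
    for _ in range(max_steps):
        reply = fake_llm(" ".join(history))
        if reply.startswith("FINAL:"):
            return reply[len("FINAL:"):], history
        _, call = reply.split("TOOL:")
        name, *args = call.split()
        result = TOOLS[name](*map(int, args))   # act, then observe
        history.append(f"observation: {result}")
    return None, history

answer, history = run_agent("calculate the sum")
# answer == "the sum is 5"; history records the tool observation
```

Pro-code frameworks add typed tool schemas, multi-agent routing, and persistence around this same loop; low-code platforms expose it through configuration instead.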
Posted 1 week ago
5.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Quantanite (www.quantanite.com) is searching for an exceptional Data Engineer to support the design and development of interfaces with multiple data sources to extract, transform, and store data into a single uniform data store. In the future, this role will be key to building an enterprise-wide data lake for the organization. About Quantanite Quantanite is a customer experience (CX) solutions company that helps fast-growing companies and leading global brands to transform and grow. We do this through a collaborative and consultative approach, rethinking business processes and ensuring our clients employ the optimal mix of automation and human intelligence. We are an ambitious team of professionals spread across four continents looking to disrupt our industry by delivering seamless customer experiences for our clients, backed up by exceptional results. We have big dreams, and are constantly looking for new colleagues to join us who share our values, passion and appreciation for diversity. About The Role We are seeking an AI Engineer to join our team and play a critical role in the design and development of a cognitive data solution. The broader vision is to develop an AI-based platform that will crawl through unstructured data sources and extract meaningful information. The ideal candidate will possess full-stack development skills along with a strong understanding of database structures, SQL queries, ETL tools and Azure data technologies. Key Responsibilities Implement architecture and design from the definition phase to the go-live phase. Work with business analysts and SMEs to understand the current landscape and priorities. Define conceptual and low-level models of using AI technology. Review designs to make sure they are aligned with the architecture. Hands-on development of AI-led solutions Implement the entire data pipeline of data crawling, ETL, creating fact tables, data quality management, etc.
Integrate with multiple systems using APIs, web services, or data exchange mechanisms Build interfaces that gather data from various data sources such as flat files, data extracts, and incoming feeds, as well as directly interfacing with enterprise applications Ensure that the solution is scalable and maintainable, and meets best practices for security, performance and data management Own research assignments and development Lead, develop and assist developers and other team members Collaborate, validate, and provide frequent updates to internal stakeholders throughout the project Define and deliver against the solution benefits statement Positively and constructively engage with clients and operations teams where required. About The Candidate You will possess: A Bachelor's degree in Computer Science, Software Engineering, or a related field Minimum 5 years of IT experience, including 3+ years of experience as a full-stack developer, preferably using Python skills 2+ years of hands-on experience in Azure Data Factory, Azure Databricks / Spark (familiarity with Fabric), Azure Data Lake Storage (Gen1/Gen2), Azure Synapse/SQL DW. Expertise in designing/deploying data pipelines, from data crawling and ETL to data warehousing and data applications on Azure. Experienced in AI technology, including machine learning algorithms, natural language processing, deep learning, image recognition, speech recognition, etc. Proficient in programming languages like Python (full-stack exposure) Proficient in dealing with all the layers in a solution: multi-channel presentation, business logic in middleware, data access layer, RDBMS | NoSQL; e.g., MySQL, MongoDB, Cassandra, SQL Server DBs Familiar with vector DBs such as FAISS, ChromaDB, Pinecone, Weaviate, and feature stores Experience in implementing and deploying applications on Azure. Proficient in creating technical documents like architecture views, technology architecture blueprints and design specifications.
Experienced in using tools like the Rational suite, Enterprise Architect, Eclipse, and source code versioning systems like Git Experience with different development methodologies (RUP | Scrum | XP) Strong analytical and problem-solving skills Excellent communication and interpersonal skills Ability to work effectively in a fast-paced environment Ability to lead and motivate cross-functional teams High personal resilience High achievement orientation Work experience in IT Outsourcing and/or Business Process Outsourcing companies is a big plus Benefits At Quantanite, we ask a lot of our associates, which is why we give so much in return. In addition to your compensation, our perks include: Dress: Formal attire from Monday to Thursday. Smart business casuals on Friday. Employee Engagement: Experience our family community and embrace our culture where we bring people together to laugh and celebrate our achievements. Professional development: We love giving back and ensure you have opportunities to grow with us and even travel on occasion. Events: Regular team and organization-wide get-togethers and events. Value orientation: Everything we do at Quantanite is informed by our Purpose and Values. We Build Better. Together. (ref:hirist.tech)
Posted 1 week ago
6.0 - 8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Overview PepsiCo Data BI & Integration Platforms is seeking a mid-level Cloud Platform technology leader, responsible for overseeing the deployment and maintenance of big data and analytics cloud infrastructure projects on Azure/AWS for its North America PepsiCo Foods/Beverages business. The ideal candidate will have hands-on experience with Azure/AWS services - Infrastructure as Code (IaC), platform provisioning & administration, cloud network design, cloud security principles and automation. Responsibilities Cloud Infrastructure & Automation Implement and support application migration, modernization, and transformation projects, leveraging cloud-native technologies and methodologies. Implement cloud infrastructure policies, standards, and best practices, ensuring cloud environment adherence to security and regulatory requirements. Design, deploy and optimize cloud-based infrastructure using Azure/AWS services that meet the performance, availability, scalability, and reliability needs of our applications and services. Drive troubleshooting of cloud infrastructure issues, ensuring timely resolution and root cause analysis by partnering with the global cloud center of excellence & enterprise application teams, and PepsiCo premium cloud partners (Microsoft, AWS). Establish and maintain effective communication and collaboration with internal and external stakeholders, including business leaders, developers, customers, and vendors. Develop Infrastructure as Code (IaC) to automate provisioning and management of cloud resources. Write and maintain scripts for automation and deployment using PowerShell, Python, or the Azure/AWS CLI. Work with stakeholders to document architectures, configurations, and best practices. Knowledge of cloud security principles around data protection, Identity and Access Management (IAM), compliance and regulatory requirements, threat detection and prevention, disaster recovery and business continuity. Qualifications Bachelor's degree in Computer Science.
At least 6 to 8 years of experience in IT cloud infrastructure, architecture and operations, including security, with at least 4 years in a technical leadership role Strong knowledge of cloud architecture, design, and deployment principles and practices, including microservices, serverless, containers, and DevOps. Deep expertise in Azure/AWS big data & analytics technologies, including Databricks, real-time data ingestion, data warehouses, serverless ETL, NoSQL databases, DevOps, Kubernetes, virtual machines, web/function apps, monitoring and security tools. Deep expertise in Azure/AWS networking and security fundamentals, including network endpoints & network security groups, firewalls, external/internal DNS, load balancers, virtual networks and subnets. Proficient in scripting and automation tools, such as PowerShell, Python, Terraform, and Ansible. Excellent problem-solving, analytical, and communication skills, with the ability to explain complex technical concepts to non-technical audiences. Certifications in Azure/AWS platform administration, networking and security are preferred. Strong self-organization, time management and prioritization skills A high level of attention to detail, excellent follow-through, and reliability Strong collaboration, teamwork and relationship-building skills across multiple levels and functions in the organization Ability to listen, establish rapport, and build credibility as a strategic partner vertically within the business unit or function, as well as with leadership and functional teams Strategic thinker focused on business-value results that utilize technical solutions Strong communication skills in writing, speaking, and presenting Able to work effectively in a multi-tasking environment. Fluent in English.
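One hedged example of the "scripts for automation" responsibility above: a pure-Python check that cloud resources carry required governance tags before deployment. The resource records and tag names are invented, and a real script would list resources via the Azure/AWS CLI or SDK rather than a literal list:

```python
# Hypothetical tagging policy: every resource must carry these three tags.
REQUIRED_TAGS = {"owner", "cost-center", "environment"}

def non_compliant(resources):
    """Return (name, missing-tags) pairs for resources missing required tags."""
    report = []
    for res in resources:
        missing = REQUIRED_TAGS - set(res.get("tags", {}))
        if missing:
            report.append((res["name"], sorted(missing)))
    return report

# Stand-in inventory; in practice this comes from the cloud provider's API.
resources = [
    {"name": "vm-app-01", "tags": {"owner": "data-team",
                                   "cost-center": "cc-42",
                                   "environment": "prod"}},
    {"name": "stg-raw-01", "tags": {"owner": "data-team"}},
]
report = non_compliant(resources)
# report == [("stg-raw-01", ["cost-center", "environment"])]
```

A check like this is typically wired into a CI/CD gate or an IaC policy step so that untagged resources fail the pipeline rather than reaching production.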
Posted 1 week ago
6.0 - 8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Overview
PepsiCo Data BI & Integration Platforms is seeking a Mid-level Cloud Platform technology leader responsible for overseeing the deployment and maintenance of big data and analytics cloud infrastructure projects on Azure/AWS for its Global corporate functions (Finance, Integrated Business Planning). The ideal candidate will have hands-on experience with Azure/AWS services: Infrastructure as Code (IaC), platform provisioning and administration, cloud network design, cloud security principles, and automation.

Responsibilities (Cloud Infrastructure & Automation)
- Implement and support application migration, modernization, and transformation projects, leveraging cloud-native technologies and methodologies.
- Implement cloud infrastructure policies, standards, and best practices, ensuring the cloud environment adheres to security and regulatory requirements.
- Design, deploy, and optimize cloud-based infrastructure using Azure/AWS services that meets the performance, availability, scalability, and reliability needs of our applications and services.
- Drive troubleshooting of cloud infrastructure issues, ensuring timely resolution and root cause analysis by partnering with the global cloud center of excellence, enterprise application teams, and PepsiCo premium cloud partners (Microsoft, AWS).
- Establish and maintain effective communication and collaboration with internal and external stakeholders, including business leaders, developers, customers, and vendors.
- Develop Infrastructure as Code (IaC) to automate provisioning and management of cloud resources.
- Write and maintain scripts for automation and deployment using PowerShell, Python, or the Azure/AWS CLI.
- Work with stakeholders to document architectures, configurations, and best practices.
- Apply knowledge of cloud security principles around data protection, Identity and Access Management (IAM), compliance and regulation, threat detection and prevention, and disaster recovery and business continuity.

Qualifications
- Bachelor’s degree in Computer Science.
- At least 6 to 8 years of experience in IT cloud infrastructure, architecture, and operations, including security, with at least 4 years in a technical leadership role.
- Strong knowledge of cloud architecture, design, and deployment principles and practices, including microservices, serverless, containers, and DevOps.
- Deep expertise in Azure/AWS big data and analytics technologies, including Databricks, real-time data ingestion, data warehouses, serverless ETL, NoSQL databases, DevOps, Kubernetes, virtual machines, web/function apps, and monitoring and security tools.
- Deep expertise in Azure/AWS networking and security fundamentals, including network endpoints and network security groups, firewalls, external/internal DNS, load balancers, and virtual networks and subnets.
- Proficient in scripting and automation tools such as PowerShell, Python, Terraform, and Ansible.
- Excellent problem-solving, analytical, and communication skills, with the ability to explain complex technical concepts to non-technical audiences.
- Certifications in Azure/AWS platform administration, networking, and security are preferred.
- Strong self-organization, time management, and prioritization skills.
- A high level of attention to detail, excellent follow-through, and reliability.
- Strong collaboration, teamwork, and relationship-building skills across multiple levels and functions of the organization.
- Ability to listen and establish rapport and credibility as a strategic partner within the business unit or function, as well as with leadership and functional teams.
- Strategic thinker focused on business-value results that utilize technical solutions.
- Strong communication skills in writing, speaking, and presenting.
- Able to work effectively in a multi-tasking environment.
- Fluent in English.
Posted 1 week ago
4.0 - 6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Overview
PepsiCo Data BI & Integration Platforms is seeking a Mid-level Cloud Platform technology leader responsible for overseeing the deployment and maintenance of big data and analytics cloud infrastructure projects on Azure/AWS for its AMESA and APAC PepsiCo Foods/Beverages business. The ideal candidate will have hands-on experience with Azure/AWS services: Infrastructure as Code (IaC), platform provisioning and administration, cloud network design, cloud security principles, and automation.

Responsibilities (Cloud Infrastructure & Automation)
- Implement and support application migration, modernization, and transformation projects, leveraging cloud-native technologies and methodologies.
- Implement cloud infrastructure policies, standards, and best practices, ensuring the cloud environment adheres to security and regulatory requirements.
- Deploy and optimize cloud-based infrastructure using Azure/AWS services that meets the performance, availability, scalability, and reliability needs of our applications and services.
- Drive troubleshooting of cloud infrastructure issues, ensuring timely resolution and root cause analysis by partnering with the global cloud center of excellence, enterprise application teams, and PepsiCo premium cloud partners (Microsoft, AWS).
- Develop Infrastructure as Code (IaC) to automate provisioning and management of cloud resources.
- Write and maintain scripts for automation and deployment using PowerShell, Python, or the Azure/AWS CLI.
- Work with stakeholders to document architectures, configurations, and best practices.
- Apply knowledge of cloud security principles around data protection, Identity and Access Management (IAM), compliance and regulation, threat detection and prevention, and disaster recovery and business continuity.

Qualifications
- Bachelor’s degree in Computer Science.
- At least 4 to 6 years of experience in IT cloud infrastructure, architecture, and operations, including security, with at least 2 years in a technical leadership role.
- Strong knowledge of cloud architecture, design, and deployment principles and practices, including microservices, serverless, containers, and DevOps.
- Deep expertise in Azure/AWS big data and analytics technologies, including Databricks, real-time data ingestion, data warehouses, serverless ETL, NoSQL databases, DevOps, Kubernetes, virtual machines, web/function apps, and monitoring and security tools.
- Deep expertise in Azure/AWS networking and security fundamentals, including network endpoints and network security groups, firewalls, external/internal DNS, load balancers, and virtual networks and subnets.
- Proficient in scripting and automation tools such as PowerShell, Python, Terraform, and Ansible.
- Excellent problem-solving, analytical, and communication skills, with the ability to explain complex technical concepts to non-technical audiences.
- Certifications in Azure/AWS platform administration, networking, and security are preferred.
- Strong self-organization, time management, and prioritization skills.
- A high level of attention to detail, excellent follow-through, and reliability.
- Strong collaboration, teamwork, and relationship-building skills across multiple levels and functions of the organization.
- Ability to listen and establish rapport and credibility as a strategic partner within the business unit or function, as well as with leadership and functional teams.
- Strategic thinker focused on business-value results that utilize technical solutions.
- Strong communication skills in writing, speaking, and presenting.
- Able to work effectively in a multi-tasking environment.
- Fluent in English.
Posted 1 week ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Role Overview
We are seeking a dynamic and strategic Director of Data to lead our data function in Hyderabad. This role is pivotal in shaping our data strategy, building a high-performing team, and fostering a strong data community within the organization. The Director of Data will oversee data engineering, data analytics, and data science professionals, driving cohesion in ways of working and instilling a shared sense of purpose. The role goes beyond traditional line management and incorporates two key aspects: leading and line-managing the Flutter Functions Data Platform team, and acting as the Data capability lead for the Flutter Hyderabad office, with a focus on leadership, influence, and strategic direction: creating career pathways, professional growth opportunities, and an inclusive and innovative culture. The Director of Data will also play a key role in expanding our Global Capability Center (GCC) in Hyderabad and establishing new teams for other businesses within the group as required. As part of the Hyderabad Leadership Team, the role holder will contribute to broader site leadership, culture, and operational excellence.

Key Responsibilities

Leadership & Strategy
- Define and drive the data strategy for Flutter Functions, ensuring alignment with product, architecture, and the organization’s business objectives.
- Establish and grow the Global Capability Center (GCC) in Hyderabad, ensuring it becomes a centre of excellence for data.
- Lead a community of data professionals (engineering, analytics, and data science), creating a culture of collaboration, learning, and innovation.
- Serve as a key member of the Hyderabad Leadership Team, contributing to broader site leadership initiatives.
- Champion best practices in all aspects of data engineering, from data governance and data management through to ethical AI/ML adoption.
- Partner with global and regional leaders to scale data capabilities across different businesses in the group as needed.

Team Building & Development
- Foster an environment that attracts, develops, and retains top data talent.
- Build career pathways and professional development opportunities for data professionals.
- Drive cross-functional collaboration between data teams, engineering, and business units.
- Advocate for a diverse, inclusive, and high-performance culture.

Operational Excellence & Ways of Working
- Enhance cohesion and standardization in data practices, tools, and methodologies across teams.
- Lead initiatives that improve efficiency, collaboration, and knowledge sharing across data teams.
- Ensure alignment with cloud-first, scalable technologies, leveraging Databricks, AWS, and other modern data platforms.
- Establish mechanisms to measure and demonstrate the business value of data-driven initiatives.

Skills & Experience

Essential
- Proven experience in a senior data leadership role, with a track record of influencing and shaping data strategy.
- Strong leadership skills with a people-first approach; able to inspire, mentor, and build a thriving data community.
- Experience working in global, matrixed organizations, driving collaboration across multiple teams.
- Deep understanding of data engineering, analytics, and data science disciplines (without requiring hands-on technical execution).
- Experience with cloud-based data technologies, particularly AWS and Databricks.
- Experience with streaming platforms such as Kafka and Apache Pulsar.
- Experience with a combination of Python, Scala, Spark, and Java.
- Ability to scale teams and establish new functions, especially in a GCC or offshore model.
- Strong stakeholder management, capable of influencing senior executives and business leaders.

Desirable
- Experience building or scaling data teams in a Global Capability Center (GCC).
- Familiarity with data governance, security, and compliance best practices.
- Previous experience working in a hybrid or global delivery model.
Posted 1 week ago
8.0 - 20.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Dear Aspirant,

Greetings from TCS! TCS presents an excellent opportunity for a Data Science & AI/ML Architect (Traditional AI & Generative AI).

Experience: 8 to 20 years
Job Location: Chennai / Bangalore / Hyderabad / Mumbai / Pune / Kolkata / Delhi / Noida / Gurgaon

● Develop scalable AI/ML solutions that integrate seamlessly with existing systems and align with business objectives.
● Experience in defining and designing robust AI/ML architectures on cloud platforms such as Azure, AWS, or Google Cloud.
● Hands-on experience implementing solutions using RAG, agentic AI, LangChain, and MLOps.
● Experience in implementing ethical AI practices and ensuring responsible AI usage in solutions.
● Proficient in using tools like TensorFlow, PyTorch, Hugging Face Transformers, OpenAI GPT, Stable Diffusion, DALL-E, AWS SageMaker, Azure ML, and Azure Databricks to develop and deploy generative AI models across cloud environments.
● Experience with industry-renowned tools for AI/ML workload implementation such as Dataiku, DataRobot, RapidMiner, etc.
● Exposure to complex AI/ML solutions involving computer vision, NLP, etc.
● Collaborate with Infrastructure and Security Architects to ensure alignment with enterprise standards and designs.
● Strong oral and written communication skills; good presentation skills.
● Analytical skills; business orientation and acumen (exposure).
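Retrieval-augmented generation (RAG), mentioned above, grounds an LLM prompt in document chunks ranked by embedding similarity. A toy, framework-free sketch of just the retrieval step, with hand-made 3-dimensional vectors standing in for a real embedding model's output (texts and vectors are invented for illustration):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, chunks, top_k=2):
    """Rank stored chunks by similarity to the query embedding --
    the retrieval step that supplies grounding context to a RAG prompt."""
    scored = sorted(chunks, key=lambda c: cosine(query_vec, c["vec"]), reverse=True)
    return [c["text"] for c in scored[:top_k]]

# Hand-made "embeddings"; a production system would call an embedding model
# and typically store vectors in a vector database rather than a list.
chunks = [
    {"text": "refund policy", "vec": [0.9, 0.1, 0.0]},
    {"text": "shipping times", "vec": [0.1, 0.9, 0.0]},
    {"text": "return window", "vec": [0.8, 0.2, 0.1]},
]
context = retrieve([1.0, 0.0, 0.0], chunks)
# The two refund-related chunks rank above the shipping one.
```

The retrieved `context` would then be concatenated into the generation prompt; frameworks like LangChain wrap exactly this pattern behind their retriever abstractions.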
Posted 1 week ago
12.0 years
0 Lacs
Gandhinagar, Gujarat, India
Remote
Job Title: Product Manager – Content Development & Management
Location: Bangalore (Hybrid/Remote options available)
Experience Required: 12+ years (preferably in EdTech, Higher Education, or Technical Training)
Job Type: Full-Time

About the Role:
We are looking for a seasoned Product Manager to lead the development and management of technical learning content across our AI, Data, and Software certification programs. You will be responsible for building high-quality curriculum and managing a team of Subject Matter Experts (SMEs), instructional designers, and content developers. This role requires strong technical depth, instructional design sensibility, and leadership skills to deliver content that meets both academic and industry standards.

Key Responsibilities:
- End-to-End Content Management: Own the full lifecycle of content products, from concept to delivery, across AI, Data Science, Software Engineering, and emerging tech areas.
- Curriculum Design: Develop and structure modular, scalable course content aligned with certification standards and market demand.
- Project Leadership: Manage timelines, quality assurance, and team output for multiple concurrent content projects.
- Team Management: Lead and mentor SMEs, trainers, editors, and technical writers to maintain consistency and excellence in output.
- Hands-On Learning Development: Guide creation of hands-on labs, real-time projects, assessments, and case studies.
- Content Review & QA: Conduct quality checks to ensure accuracy, relevance, and pedagogical effectiveness of content.
- Collaboration: Work with Product, Marketing, Tech, and Academic teams to align content with platform features and learner outcomes.
- Technology Integration: Oversee LMS deployments and content integration with tools like Azure Synapse, Databricks, Spark, Kafka, and Power BI.

Required Qualifications:
- Minimum 12 years of experience in EdTech, technical training, or curriculum development roles.
- Strong domain expertise in Data Science, Machine Learning, and Deep Learning.
- Programming: Python, Java, C/C++.
- Azure data engineering tools: Synapse, Databricks, Snowflake, Kafka, Spark.
- Experience leading technical teams or SME groups.
- Proven track record of designing and delivering academic/industry-focused content and training programs.
- Excellent communication and stakeholder management skills.

Preferred Qualifications:
- Ph.D./M.Tech in Computer Science, IT, or related fields (PhD submission/ongoing is acceptable).
- Experience working with academic institutions and EdTech platforms.
- Knowledge of instructional design principles and outcome-based learning.
- Familiarity with tools like Power BI, Tableau, and LMS platforms.
- Published research papers in AI/ML or EdTech fields (optional but valued).

What We Offer:
- An opportunity to shape the learning experiences of thousands globally.
- Freedom to innovate and create impactful educational content.
- A collaborative environment with a passionate team.
- Competitive salary and performance-based bonuses.
- Flexible work arrangements and growth opportunities.
Posted 1 week ago
8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About Client:
Wipro Limited is a leading technology services and consulting company focused on building innovative solutions that address clients’ most complex digital transformation needs. Leveraging our holistic portfolio of capabilities in consulting, design, engineering, and operations, we help clients realize their boldest ambitions and build future-ready, sustainable businesses. With over 230,000 employees and business partners across 65 countries, we deliver on the promise of helping our clients, colleagues, and communities thrive in an ever-changing world.

Job Title: SAP ABAP HR
Job Location: Pune
Experience: 8+ years
Education Qualification: Any Graduation
Work Mode: Hybrid
Employment Type: Contract
Notice Period: Immediate to 15 days

Job Description:
Technical consultant professionals will provide technical expertise to plan, analyse, define, and support the delivery of technical capabilities for clients' transformation, enhancement, and support projects, applying advanced problem-solving skills. They will use a mix of consultative skills and technical expertise to effectively integrate packaged technology into our clients' business environment and achieve business results. Candidates in this role will also contribute to pre-sales support and practice development activities for their respective technical area of expertise. This role requires experienced candidates at senior levels.

Must:
• ABAP Workbench development
• SAP Customization (Schema and Rules)
• OOP / ABAP / Software Engineering background
• Data Dictionaries, Reports, and Forms

Should:
• SAP HCM modules (Personnel Administration and Payroll Accounting)
• SAP Customization (Wage Types)
• SQL databases

Could:
• SAP Fiori
• NoSQL databases
• MS Azure Databricks / Azure Data Factory
• Scaled Agile
• GitLab
• TDD
• DevOps and CI/CD Pipelines
Posted 1 week ago
12.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description

Role: Lead Cloud Infrastructure SRE
Location: Pune, Hyderabad
Experience: 5-12 years

Required Skills
The successful candidate will be a student of the Google Site Reliability Engineering (SRE) philosophy as applied to managing large-scale cloud infrastructure, will possess skills and experience within one or more of the following areas, and will demonstrate a willingness to learn additional skills via certification and/or on-the-job learning where required.

Programming / Software / Network Principles
- Experience with SRE and Azure DevOps.
- Ability to script (Bash/PowerShell, Azure CLI), code (Python, C, Java), and query (SQL, Kusto Query Language), coupled with experience with software version control systems (e.g. GitHub) and CI/CD systems.
- Programming experience in PowerShell, Terraform, Python, the Windows command prompt, and object-oriented programming languages.
- Demonstrable experience of Linux administration and scripting, preferably Red Hat.
- Understanding of hardware and software principles and storage technologies (SSD, HDD, NVMe), CPU architectures, and memory.
- Operating system principles, especially network stack fundamentals.
- Understanding of network protocols and network design.

Cloud Infrastructure / Platform Engineering (Azure preferred)
- Ability to build, operate, maintain, and support cloud infrastructure and data services at scale.
- Experience engineering and deploying a range of services in Azure.
- Experience dealing with multiple support groups that contribute to a service.
- Experience working with highly available, high-load web infrastructure (e.g. web proxies, reverse proxies).
- Data quality, data management, data controls, and data governance.
- Security and compliance (e.g. IAM and cloud compliance/auditing/monitoring tools).
- Troubleshooting/service support experience.
- A track record of constantly looking for ways to do things better, and an excellent understanding of the mechanisms necessary to successfully implement change.
- Demonstrated experience troubleshooting complex problems, especially those resulting from interactions across a cloud services and application stack.
- Strong documentation, change management, and agile development ethos.

Technology Stack
- Technical knowledge and breadth of Azure technology services: identity, networking, compute, storage, web, containers, and databases.
- Cloud big data technologies such as Azure Cloud, Azure IAM, Azure Active Directory (Azure AD), Azure Data Factory, Azure Databricks, Azure Functions, Azure Kubernetes Service, Azure Logic Apps, Azure Monitor, Azure Log Analytics, Azure Compute, Azure Storage, Azure Data Lake Store, S3, Synapse Analytics, and/or Power BI.
- Experience with server operating system and infrastructure technologies such as Nginx/Apache, Cosmos DB, Linux, Bash, PowerShell, Prometheus, Grafana, and Elasticsearch.
- Experience with Infrastructure-as-Code and automation tools such as Terraform, Chef, Ansible, and CloudFormation/Azure Resource Manager (ARM).
- Streaming platforms such as Azure Event Hubs or Kafka, and stream processing services such as Spark Streaming.
- Experience with Security Information and Event Management (SIEM) and Security Orchestration, Automation, and Response (SOAR) technologies, especially cloud-based, is a significant asset.

Mandatory Skills: Ansible, AWS CodeDeploy, AWS CodePipeline, AWS CloudFormation, AWS DevOps Services, AWS Automation Services
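The SRE philosophy referenced above is usually made concrete through service-level objectives (SLOs) and error budgets: the SLO fixes how much unreliability is acceptable, and the budget tracks how much has been spent. A minimal illustrative calculation (the 99.9% target and downtime figures are made-up examples):

```python
def error_budget_minutes(slo: float, window_minutes: int) -> float:
    """Allowed downtime for a given availability SLO over a window."""
    return (1.0 - slo) * window_minutes

def budget_remaining(slo: float, window_minutes: int, downtime_minutes: float) -> float:
    """Fraction of the error budget still unspent (negative means blown)."""
    budget = error_budget_minutes(slo, window_minutes)
    return (budget - downtime_minutes) / budget

# Example: 99.9% availability over a 30-day window.
window = 30 * 24 * 60                             # 43,200 minutes
budget = error_budget_minutes(0.999, window)      # ~43.2 minutes of allowed downtime
remaining = budget_remaining(0.999, window, downtime_minutes=10.8)
```

Teams typically gate risky changes on `remaining`: plenty of budget left means ship faster, a near-exhausted budget means prioritize reliability work.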
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
About Company:
Our client is a prominent Indian multinational corporation specializing in information technology (IT), consulting, and business process services. Headquartered in Bengaluru, it has gross revenue of ₹222.1 billion, a global workforce of 234,054, and is listed on the NASDAQ; it operates in over 60 countries and serves clients across various industries, including financial services, healthcare, manufacturing, retail, and telecommunications. The company consolidated its cloud, data, analytics, AI, and related businesses under the tech services business line. It has major delivery centers in India, in cities including Chennai, Pune, Hyderabad, Bengaluru, Kochi, Kolkata, and Noida.

· Job Title: SAP ABAP HR Consultant
· Location: Pune/Bangalore (Hybrid)
· Experience: 7-10 years
· Job Type: Contract to hire
· Notice Period: Immediate joiners

Mandatory Skills:
Technical consultant professionals will provide technical expertise to plan, analyse, define, and support the delivery of technical capabilities for clients' transformation, enhancement, and support projects, applying advanced problem-solving skills. They will use a mix of consultative skills and technical expertise to effectively integrate packaged technology into our clients' business environment and achieve business results. Candidates in this role will also contribute to pre-sales support and practice development activities for their respective technical area of expertise. This role requires experienced candidates at senior levels.

Must:
• ABAP Workbench development
• SAP Customization (Schema and Rules)
• OOP / ABAP / Software Engineering background
• Data Dictionaries, Reports, and Forms

Should:
• SAP HCM modules (Personnel Administration and Payroll Accounting)
• SAP Customization (Wage Types)
• SQL databases

Could:
• SAP Fiori
• NoSQL databases
• MS Azure Databricks / Azure Data Factory
• Scaled Agile
• GitLab
• TDD
• DevOps and CI/CD Pipelines
Posted 1 week ago
7.0 years
0 Lacs
Pune, Maharashtra, India
On-site
SAP ABAP - HR

About Company:
Our client is a prominent Indian multinational corporation specializing in information technology (IT), consulting, and business process services. Headquartered in Bengaluru, it has gross revenue of ₹222.1 billion, a global workforce of 234,054, and is listed on the NASDAQ; it operates in over 60 countries and serves clients across various industries, including financial services, healthcare, manufacturing, retail, and telecommunications. The company consolidated its cloud, data, analytics, AI, and related businesses under the tech services business line. It has major delivery centers in India, in cities including Chennai, Pune, Hyderabad, Bengaluru, Kochi, Kolkata, and Noida.

Job Title: SAP ABAP - HR
Location: Pune, Bengaluru (Hybrid)
Experience: 7+ years
Job Type: Full Time
Notice Period: Immediate joiners
Client: MNC client

Mandatory Skills [Panel Feedback]:
• ABAP Workbench development
• SAP Customization (Schema and Rules)
• OOP / ABAP / Software Engineering background
• Data Dictionaries, Reports, and Forms

Job Description:
Technical consultant professionals will provide technical expertise to plan, analyse, define, and support the delivery of technical capabilities for clients' transformation, enhancement, and support projects, applying advanced problem-solving skills. They will use a mix of consultative skills and technical expertise to effectively integrate packaged technology into our clients' business environment and achieve business results. Candidates in this role will also contribute to pre-sales support and practice development activities for their respective technical area of expertise. This role requires experienced candidates at senior levels.

Must:
• ABAP Workbench development
• SAP Customization (Schema and Rules)
• OOP / ABAP / Software Engineering background
• Data Dictionaries, Reports, and Forms

Should:
• SAP HCM modules (Personnel Administration and Payroll Accounting)
• SAP Customization (Wage Types)
• SQL databases

Could:
• SAP Fiori
• NoSQL databases
• MS Azure Databricks / Azure Data Factory
• Scaled Agile
• GitLab
• TDD
• DevOps and CI/CD Pipelines
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
chennai, tamil nadu
On-site
As an Azure Data Engineering Director, you will play a pivotal role in leading the data strategy and operations for our EDP Cloud Fabric. Your expertise will be essential in establishing resilience through a multi-cloud model and enabling key capabilities such as Power BI and OpenAI from Microsoft. Collaborating with leads across GTIS, CSO, and CTO, you will accelerate the introduction and adoption of new designs on Azure.

Your key responsibilities will include defining and executing a comprehensive data strategy aligned with business objectives, leveraging Azure services for innovation in data processing, analytics, and insights delivery. You will architect and manage large-scale data platforms using Azure tools like Azure Data Factory, Azure Synapse Analytics, Databricks, and Cosmos DB, optimizing data engineering pipelines for performance, scalability, and cost-efficiency.

Furthermore, you will establish robust data governance frameworks to ensure compliance with industry regulations, oversee data quality, security, and consistency across all platforms, and build, mentor, and retain a high-performing data engineering team. Collaboration with cross-functional stakeholders to bridge technical and business objectives will be a key aspect of your role. You will also ensure data readiness for AI/ML initiatives, drive the adoption of real-time insights through event-driven architectures, streamline ETL/ELT processes for faster data processing and reduced downtime, and identify and implement cutting-edge Azure technologies to create new revenue streams through data-driven innovation.

In this role, you will be accountable for building and maintaining data architecture pipelines, designing and implementing data warehouses and data lakes, developing processing and analysis algorithms, and collaborating with data scientists to build and deploy machine learning models.

You will manage a business function, provide input to strategic initiatives, and lead a large team or sub-function, embedding a performance culture aligned with the organization's values. Additionally, you will provide expert advice to senior management, manage resourcing and budgeting, and foster compliance within the function. As a Senior Leader, you are expected to demonstrate a clear set of leadership behaviors, including listening and authenticity, energizing and inspiring others, aligning across the enterprise, and developing colleagues. Upholding the Barclays Values of Respect, Integrity, Service, Excellence, and Stewardship, along with the Barclays Mindset of Empower, Challenge, and Drive, will be essential in creating an environment for colleagues to thrive and deliver to an excellent standard.
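The ETL/ELT streamlining mentioned above reduces, at its core, to an extract, transform, load pipeline. A deliberately tiny pure-Python sketch of that shape, with in-memory lists standing in for a source system and a warehouse table (field names and sample rows are invented; real pipelines would use Spark/Databricks or Azure Data Factory):

```python
def extract(rows):
    """Extract: yield raw records (a list stands in for a real source)."""
    yield from rows

def transform(records):
    """Transform: normalize field names, coerce types, drop incomplete rows."""
    for r in records:
        if r.get("amount") is None:
            continue  # data-quality rule: skip records missing an amount
        yield {"customer": r["name"].strip().title(),
               "amount": round(float(r["amount"]), 2)}

def load(records):
    """Load: collect into the 'warehouse' (a list stands in for a table)."""
    return list(records)

raw = [{"name": " ada lovelace ", "amount": "12.50"},
       {"name": "grace hopper", "amount": None}]
warehouse = load(transform(extract(raw)))
```

Because each stage is a generator, records stream through one at a time rather than being materialized between stages; that same streaming shape is what distributed engines scale out across a cluster.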
Posted 1 week ago