9.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Area(s) of responsibility About Us: Birlasoft, a global leader at the forefront of Cloud, AI, and Digital technologies, seamlessly blends domain expertise with enterprise solutions. The company’s consultative and design-thinking approach empowers societies worldwide, enhancing the efficiency and productivity of businesses. As part of the multibillion-dollar diversified CK Birla Group, Birlasoft, with its 12,000+ professionals, is committed to continuing the Group’s 170-year heritage of building sustainable communities. Mandatory Skillset: Snowflake, DBT, and Data Architecture design experience in Data Warehousing. Good to have: Informatica or other ETL knowledge or hands-on experience; an understanding of Databricks. 9-11 years of IT experience, with 3+ years of Data Architecture experience in Data Warehousing and 4+ years in Snowflake. Responsibilities Design, implement, and manage cloud-based solutions on AWS and Snowflake. Work with stakeholders to gather requirements and design solutions that meet their needs. Develop and execute test plans for new solutions. Oversee and design the information architecture for the data warehouse, including all information structures such as the staging area, data warehouse, data marts, and operational data stores. Optimize Snowflake configurations and data pipelines to improve performance, scalability, and overall efficiency. Deep understanding of Data Warehousing, Enterprise Architectures, Dimensional Modeling, Star & Snowflake schema design, Reference DW Architectures, ETL architecture, ETL (Extract, Transform, Load), Data Analysis, Data Conversion, Transformation, Database Design, Data Warehouse Optimization, Data Mart Development, and Enterprise Data Warehouse Maintenance and Support.
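For illustration, the Snowflake performance tuning this role describes often reduces to choices like clustering keys on large fact tables. Below is a minimal sketch using the snowflake-connector-python library; the account, credentials, and table names are hypothetical placeholders, not details from the posting.

```python
# A minimal sketch of Snowflake warehouse tuning of the kind this role
# describes, via the snowflake-connector-python library. The account,
# credentials, and table names are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",          # hypothetical account identifier
    user="etl_user",
    password="...",                # use a secrets manager in practice
    warehouse="TRANSFORM_WH",
    database="EDW",
    schema="MARTS",
)
cur = conn.cursor()

# Add a clustering key so range scans on a large fact table prune partitions.
cur.execute("ALTER TABLE FACT_SALES CLUSTER BY (SALE_DATE, REGION_ID)")

# Inspect how well the table is now clustered on those columns.
cur.execute(
    "SELECT SYSTEM$CLUSTERING_INFORMATION('FACT_SALES', '(SALE_DATE, REGION_ID)')"
)
print(cur.fetchone()[0])

cur.close()
conn.close()
```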
Posted 2 weeks ago
3.0 years
0 Lacs
Greater Hyderabad Area
On-site
Area(s) of responsibility About Birlasoft Birlasoft is a powerhouse where domain expertise, enterprise solutions, and digital technologies converge to redefine business processes. We take pride in our consultative and design thinking approach, driving societal progress by enabling our customers to run businesses with unmatched efficiency and innovation. As part of the CK Birla Group, a multibillion-dollar enterprise, we boast a 12,500+ professional team committed to upholding the Group's 162-year legacy. Our core values prioritize Diversity, Equity, and Inclusion (DEI) initiatives, along with Corporate Sustainable Responsibility (CSR) activities, demonstrating our dedication to building inclusive and sustainable communities. Join us in shaping a future where technology seamlessly aligns with purpose. About the Job – The Azure Data Developer is responsible for developing and managing scalable and secure data solutions on the Microsoft Azure cloud platform. This role requires a good understanding of data transformation, data cleansing, data profiling, data architecture principles, cloud technologies, and data engineering practices, helping build a data ecosystem optimized for performance and scalability. Job Title – Azure Data Developer Location: Noida Educational Background: Bachelor’s degree in Computer Science, Information Technology, or a related field. Mode of Work – Hybrid Experience Required – 3+ years Key Responsibilities Solution Development: Understand business requirements and translate them into technical solutions. Azure Platform Expertise: Leverage Azure services like Azure Data Factory, Azure Synapse Analytics, Azure Data Lake, Azure Databricks, Azure Cosmos DB, and Azure SQL Database; knowledge of optimization and cost management for Azure data solutions. Data Integration and ETL/ELT Pipelines: Develop and implement data pipelines for real-time and batch processing; strong SQL skills to write complex queries; must have knowledge of establishing one-way or two-way communication channels when integrating various systems. Data Governance and Security: Implement data quality assurance; should have knowledge of the different authentication methods used in cloud solutions. Performance Optimization: Monitor, troubleshoot, and improve data solution performance; implement best practices for data solutions. Mandatory Skills Required Hands-on experience in Azure services like Azure Data Factory, Azure Synapse Analytics, Azure Data Lake Storage Gen2, Azure Key Vault, Azure SQL Database, and Azure Databricks. Hands-on experience in data migration and transformation, data cleansing, and data profiling. Experience in Logic Apps. Soft Skills Communicates effectively. Problem-solving and analytical skills. Adapts to evolving technologies.
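As a sketch of the data cleansing and profiling work this role describes, the snippet below uses PySpark, as it might run on Azure Databricks; the storage paths and column names are illustrative assumptions.

```python
# A hypothetical sketch of data profiling and cleansing in PySpark, of the
# kind an Azure Data Developer might run on Databricks. Paths and column
# names are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("cleanse-profile").getOrCreate()

raw = (spark.read.option("header", True)
       .csv("abfss://landing@account.dfs.core.windows.net/orders/"))

# Profiling: row count and per-column null counts.
print("rows:", raw.count())
raw.select(
    [F.count(F.when(F.col(c).isNull(), c)).alias(c) for c in raw.columns]
).show()

# Cleansing: drop exact duplicates, trim strings, standardise the date column,
# and drop rows missing the business key.
clean = (
    raw.dropDuplicates()
       .withColumn("customer_name", F.trim(F.col("customer_name")))
       .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
       .na.drop(subset=["order_id"])
)
clean.write.mode("overwrite").parquet(
    "abfss://curated@account.dfs.core.windows.net/orders/"
)
```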
Posted 2 weeks ago
0 years
0 Lacs
Greater Hyderabad Area
On-site
Area(s) of responsibility Here Are The Key Qualifications We Are Seeking Proven expertise in managing Azure DevOps pipelines for CI/CD processes. Demonstrated proficiency in Infrastructure as Code (IaC) with Terraform. Solid experience in platform security management and Azure service best practices. Practical knowledge in automating deployments across multiple environments. Familiarity with GitHub repository management is desirable. Key Responsibilities DevOps Pipeline Management Design and implement DevOps pipelines for CI/CD workflows. Automate deployment processes across various environments. Ensure efficient and reliable pipeline operations. CI/CD Workflow Integration Integrate and maintain CI/CD workflows for continuous delivery. Ensure smooth deployment processes in development, testing, and production environments. Maintain high availability of services through effective workflow management. Platform Security Management Collaborate with development and operations teams to manage platform security. Handle access control for data analytics platforms including Azure Data Factory, Databricks, Synapse, and SQL. Implement and enforce security best practices across the platform. Infrastructure as Code (IaC) Implement and manage infrastructure as code using Terraform. Ensure consistent and repeatable deployments across different environments. Maintain infrastructure integrity through automated IaC processes. Integration Runtime Optimization Monitor integration runtimes to ensure efficient data flow. Optimize runtime configurations for improved process execution. Troubleshoot and resolve issues related to integration runtimes.
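As a rough sketch of the deployment automation this role covers, the script below drives Terraform across several environments from Python; the directory layout, environment names, and tfvars convention are assumptions for illustration, not a prescribed setup.

```python
# A rough sketch of automating Terraform deployments across environments,
# driven from Python. The directory layout, environment names, and tfvars
# convention are assumptions, not a prescribed structure.
import subprocess

ENVIRONMENTS = ["dev", "test", "prod"]  # hypothetical environment names


def deploy(env: str) -> None:
    """Plan and apply one environment from its own variable file."""
    subprocess.run(["terraform", "init", "-input=false"], check=True)
    subprocess.run(
        [
            "terraform", "plan",
            f"-var-file=envs/{env}.tfvars",
            f"-out={env}.tfplan",
        ],
        check=True,
    )
    # Applying the saved plan keeps the apply identical to what was reviewed.
    subprocess.run(
        ["terraform", "apply", "-input=false", f"{env}.tfplan"], check=True
    )


if __name__ == "__main__":
    for env in ENVIRONMENTS:
        deploy(env)
```

In practice each environment would also get isolated state (Terraform workspaces or separate backends); that wiring is omitted here for brevity.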
Posted 2 weeks ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Area(s) of responsibility About Us Birlasoft, a global leader at the forefront of Cloud, AI, and Digital technologies, seamlessly blends domain expertise with enterprise solutions. The company’s consultative and design-thinking approach empowers societies worldwide, enhancing the efficiency and productivity of businesses. As part of the multibillion-dollar diversified CK Birla Group, Birlasoft, with its 12,000+ professionals, is committed to continuing the Group’s 170-year heritage of building sustainable communities. Enterprise Architect Provide guidance and support to teams and analysts. Expert on Databricks and AWS, with a minimum of 10+ years of experience; Databricks certification is a must. Collaborate on data strategy with business and IT partners. Identify and address data issues. Lead architecture, design, and implementation of scalable and efficient data pipelines and analytics solutions on Databricks. Work closely with data engineering, platform, and analytics teams to integrate Databricks with AWS S3 Data Lake, Redshift, Talend, SAP BW/HANA, Oracle, and other tools. Architect solutions supporting batch and real-time data processing using Delta Lake, Spark, and MLflow. Collaborate with business stakeholders to understand data requirements, assess the current state, and provide strategic direction for future-state architecture. Ensure best practices for data quality, governance, and security are implemented. Own design, development, and delivery of pipelines and models as per the project plan. Ensure best practices are followed in performance, security, and governance. Provide project documentation and conduct KT sessions. Ensure the following acceptance criteria are met by working with current development and pool partners: successful ingestion and transformation of in-scope objects; validated and reconciled transformed data within defined thresholds; a fully operational Snowflake warehouse with business-validated datasets; documented operational playbooks and completed KT; and no critical issues post go-live during the hyper-care window.
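For context, two Delta Lake capabilities named here, schema evolution and time travel, look roughly like the PySpark sketch below; the bucket paths and table layout are hypothetical.

```python
# A minimal sketch of Delta Lake patterns mentioned in the role above:
# schema evolution on append, and time travel on read. Paths are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-sketch").getOrCreate()

# Batch append that lets Delta evolve the table schema when new columns arrive.
batch = spark.read.parquet("s3://raw-bucket/events/2024-06-01/")
(batch.write.format("delta")
      .mode("append")
      .option("mergeSchema", "true")   # schema evolution
      .save("s3://lake-bucket/events/"))

# Time travel: reread the table exactly as it was at an earlier version.
v0 = (spark.read.format("delta")
           .option("versionAsOf", 0)
           .load("s3://lake-bucket/events/"))
print(v0.count(), "rows in the first committed version")
```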
Posted 2 weeks ago
0 years
0 Lacs
Kochi, Kerala, India
Remote
At EY, we’re all in to shape your future with confidence. We’ll help you succeed in a globally connected powerhouse of diverse teams and take your career wherever you want it to go. Join EY and help to build a better working world. Security Consultant EY Technology: Technology has always been at the heart of what we do and deliver at EY. We need technology to keep an organization the size of ours working efficiently. We have 250,000 people in more than 140 countries, all of whom rely on secure technology to be able to do their job every single day. Everything from the laptops we use, to the ability to work remotely on our mobile devices and connecting our people and our clients, to enabling hundreds of internal tools and external solutions delivered to our clients. Technology solutions are integrated into the client services we deliver and are key to us being more innovative as an organization. EY Technology supports our technology needs through three business units: Client Technology (CT) - focuses on developing new technology services for our clients. It enables EY to identify new technology-based opportunities faster and pursue those opportunities more rapidly. Enterprise Workplace Technology (EWT) – EWT supports our Core Business Services functions and will deliver fit-for-purpose technology infrastructure at the lowest possible cost for quality services. EWT will also support our internal technology needs by focusing on a better user experience. Information Security (Info Sec) - Info Sec prevents, detects, responds to, and mitigates cyber-risk, protecting EY and client data, and our information management systems. The opportunity As a Security Consultant within EY’s internal Global Information Security team, the individual will be a trusted security advisor to the Client Technology Platforms Delivery organization within IT Services. The Client Technology Platforms delivery organization is responsible for end-to-end delivery of technology programs and projects supporting EY’s Client Technology service lines, including delivery of a global managed services platform, big data and analytics solutions, as well as individual line-of-business solutions and services. This role will directly engage in delivery on programs and projects, defining security architectures, providing security guidance, identifying and prioritizing security-related requirements, promoting secure-by-default designs, and facilitating delivery of information security services throughout the system development life cycle (SDLC). The role will also direct consultants in developing appropriate risk treatment and mitigation options to address security vulnerabilities, and translate these vulnerabilities into business risk terminology for communication to business stakeholders. Your Key Responsibilities Define security architectures and provide pragmatic security guidance that balances business benefit and risks. Design and develop cloud platform-specific security policies, standards, and procedures for management group and account/subscription management and configuration (e.g. Azure Policy, Azure Security Center, AWS Config), identity management and access control, firewall management, auditing and monitoring, security incident and event management, data protection, user and administrator account management, SSO, conditional access controls, and password/secrets management.
Engage IT project teams throughout the SDLC to identify and prioritize applicable security controls and provide guidance on how to implement these controls. Perform risk assessments of information systems and infrastructure. Maintain and enhance the Information Security risk assessment methodology. Define security configuration standards for platforms and technologies. Develop appropriate risk treatment and mitigation options to address security risks identified during security review or audit. Translate technical vulnerabilities into business risk terminology for business units and recommend corrective actions to customers and project stakeholders. Provide knowledge sharing and technical assistance to other team members. Act as Subject Matter Expert (SME) in responsible technologies and have deep technical understanding of responsible portfolios. Skills And Attributes For Success Experience with Cloud Identity and Access Management solutions (AAD, Federation services, SAML, Ping) in implementation and operations. Experience with Big Data, advanced analytics, and AI/ML services (such as Azure SQL/Google Cloud SQL, Azure HDInsight, key management solutions, storage and backup, load balancing, security management, databases, EC2 or VM machine hosting, Databricks, Data Factory, Data Lake Storage/BigQuery, Azure Analysis Services, Synapse Analytics, Machine Learning, etc.). Experience in working with different Cloud platforms (Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS)) and environments (Public, Private, Hybrid) in a security role. Hands-on technical experience implementing security solutions for leading Cloud service providers, e.g., Amazon AWS, Microsoft Azure, Google Cloud. Ability to appropriately balance firm security needs with business impact & benefit. Ability to facilitate compromise to incrementally advance security strategy and objectives. Ability to team well with others to facilitate and enhance the understanding of & compliance with security policies. Experience facilitating meetings with multiple customers and technical staff, including building consensus and mediating compromise. Five or more years of working experience with the architecture, design, and engineering of web-based multi-tier information systems or network infrastructures. Experience conducting risk assessments, vulnerability assessments, and vendor and third-party risk assessments, and recommending risk remediation strategies. Experience working with common information security standards, such as ISO 27001/27002, NIST, PCI DSS, ITIL, COBIT. To qualify for the role, you must have Five or more years of experience in the management of a significant Information Security risk management function. 5 or more years of experience in an Information Security or Information Technology discipline. Experience in managing the communication of security findings and recommendations to IT project teams and management. Ideally, you’ll also have Exceptional judgment, tact, and decision-making ability. Flexibility to adjust to multiple demands, shifting priorities, ambiguity, and rapid change. Outstanding management, interpersonal, communication, organizational, and decision-making skills. Strong English language skills are required. Candidates are preferred to hold or be actively pursuing related professional certifications within the GIAC family of certifications, or CISSP, CISM, or Azure certifications (AZ500, AZ303, AZ304, AZ900). What Working At EY Offers We offer a competitive remuneration package where you’ll
be rewarded for your individual and team performance. Our comprehensive Total Rewards package includes support for flexible working and career development, and with FlexEY you can select benefits that suit your needs, covering holidays, health and well-being, insurance, savings and a wide range of discounts, offers and promotions. Plus, we offer: Support, coaching and feedback from some of the most engaging colleagues around Opportunities to develop new skills and progress your career The freedom and flexibility to handle your role in a way that’s right for you EY is committed to being an inclusive employer and we are happy to consider flexible working arrangements. We strive to achieve the right balance for our people, enabling us to deliver excellent client service whilst allowing you to build your career without sacrificing your personal priorities. While our client-facing professionals can be required to travel regularly, and at times be based at client sites, our flexible working arrangements can help you to achieve a lifestyle balance. EY | Building a better working world EY is building a better working world by creating new value for clients, people, society and the planet, while building trust in capital markets. Enabled by data, AI and advanced technology, EY teams help clients shape the future with confidence and develop answers for the most pressing issues of today and tomorrow. EY teams work across a full spectrum of services in assurance, consulting, tax, strategy and transactions. Fueled by sector insights, a globally connected, multi-disciplinary network and diverse ecosystem partners, EY teams can provide services in more than 150 countries and territories.
Posted 2 weeks ago
0 years
0 Lacs
Trivandrum, Kerala, India
Remote
At EY, we’re all in to shape your future with confidence. We’ll help you succeed in a globally connected powerhouse of diverse teams and take your career wherever you want it to go. Join EY and help to build a better working world. Security Consultant EY Technology: Technology has always been at the heart of what we do and deliver at EY. We need technology to keep an organization the size of ours working efficiently. We have 250,000 people in more than 140 countries, all of whom rely on secure technology to be able to do their job every single day. Everything from the laptops we use, to the ability to work remotely on our mobile devices and connecting our people and our clients, to enabling hundreds of internal tools and external solutions delivered to our clients. Technology solutions are integrated into the client services we deliver and are key to us being more innovative as an organization. EY Technology supports our technology needs through three business units: Client Technology (CT) - focuses on developing new technology services for our clients. It enables EY to identify new technology-based opportunities faster and pursue those opportunities more rapidly. Enterprise Workplace Technology (EWT) – EWT supports our Core Business Services functions and will deliver fit-for-purpose technology infrastructure at the lowest possible cost for quality services. EWT will also support our internal technology needs by focusing on a better user experience. Information Security (Info Sec) - Info Sec prevents, detects, responds to, and mitigates cyber-risk, protecting EY and client data, and our information management systems. The opportunity As a Security Consultant within EY’s internal Global Information Security team, the individual will be a trusted security advisor to the Client Technology Platforms Delivery organization within IT Services. The Client Technology Platforms delivery organization is responsible for end-to-end delivery of technology programs and projects supporting EY’s Client Technology service lines, including delivery of a global managed services platform, big data and analytics solutions, as well as individual line-of-business solutions and services. This role will directly engage in delivery on programs and projects, defining security architectures, providing security guidance, identifying and prioritizing security-related requirements, promoting secure-by-default designs, and facilitating delivery of information security services throughout the system development life cycle (SDLC). The role will also direct consultants in developing appropriate risk treatment and mitigation options to address security vulnerabilities, and translate these vulnerabilities into business risk terminology for communication to business stakeholders. Your Key Responsibilities Define security architectures and provide pragmatic security guidance that balances business benefit and risks. Design and develop cloud platform-specific security policies, standards, and procedures for management group and account/subscription management and configuration (e.g. Azure Policy, Azure Security Center, AWS Config), identity management and access control, firewall management, auditing and monitoring, security incident and event management, data protection, user and administrator account management, SSO, conditional access controls, and password/secrets management.
Engage IT project teams throughout the SDLC to identify and prioritize applicable security controls and provide guidance on how to implement these controls. Perform risk assessments of information systems and infrastructure. Maintain and enhance the Information Security risk assessment methodology. Define security configuration standards for platforms and technologies. Develop appropriate risk treatment and mitigation options to address security risks identified during security review or audit. Translate technical vulnerabilities into business risk terminology for business units and recommend corrective actions to customers and project stakeholders. Provide knowledge sharing and technical assistance to other team members. Act as Subject Matter Expert (SME) in responsible technologies and have deep technical understanding of responsible portfolios. Skills And Attributes For Success Experience with Cloud Identity and Access Management solutions (AAD, Federation services, SAML, Ping) in implementation and operations. Experience with Big Data, advanced analytics, and AI/ML services (such as Azure SQL/Google Cloud SQL, Azure HDInsight, key management solutions, storage and backup, load balancing, security management, databases, EC2 or VM machine hosting, Databricks, Data Factory, Data Lake Storage/BigQuery, Azure Analysis Services, Synapse Analytics, Machine Learning, etc.). Experience in working with different Cloud platforms (Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS)) and environments (Public, Private, Hybrid) in a security role. Hands-on technical experience implementing security solutions for leading Cloud service providers, e.g., Amazon AWS, Microsoft Azure, Google Cloud. Ability to appropriately balance firm security needs with business impact & benefit. Ability to facilitate compromise to incrementally advance security strategy and objectives. Ability to team well with others to facilitate and enhance the understanding of & compliance with security policies. Experience facilitating meetings with multiple customers and technical staff, including building consensus and mediating compromise. Five or more years of working experience with the architecture, design, and engineering of web-based multi-tier information systems or network infrastructures. Experience conducting risk assessments, vulnerability assessments, and vendor and third-party risk assessments, and recommending risk remediation strategies. Experience working with common information security standards, such as ISO 27001/27002, NIST, PCI DSS, ITIL, COBIT. To qualify for the role, you must have Five or more years of experience in the management of a significant Information Security risk management function. 5 or more years of experience in an Information Security or Information Technology discipline. Experience in managing the communication of security findings and recommendations to IT project teams and management. Ideally, you’ll also have Exceptional judgment, tact, and decision-making ability. Flexibility to adjust to multiple demands, shifting priorities, ambiguity, and rapid change. Outstanding management, interpersonal, communication, organizational, and decision-making skills. Strong English language skills are required. Candidates are preferred to hold or be actively pursuing related professional certifications within the GIAC family of certifications, or CISSP, CISM, or Azure certifications (AZ500, AZ303, AZ304, AZ900). What Working At EY Offers We offer a competitive remuneration package where you’ll
be rewarded for your individual and team performance. Our comprehensive Total Rewards package includes support for flexible working and career development, and with FlexEY you can select benefits that suit your needs, covering holidays, health and well-being, insurance, savings and a wide range of discounts, offers and promotions. Plus, we offer: Support, coaching and feedback from some of the most engaging colleagues around Opportunities to develop new skills and progress your career The freedom and flexibility to handle your role in a way that’s right for you EY is committed to being an inclusive employer and we are happy to consider flexible working arrangements. We strive to achieve the right balance for our people, enabling us to deliver excellent client service whilst allowing you to build your career without sacrificing your personal priorities. While our client-facing professionals can be required to travel regularly, and at times be based at client sites, our flexible working arrangements can help you to achieve a lifestyle balance. EY | Building a better working world EY is building a better working world by creating new value for clients, people, society and the planet, while building trust in capital markets. Enabled by data, AI and advanced technology, EY teams help clients shape the future with confidence and develop answers for the most pressing issues of today and tomorrow. EY teams work across a full spectrum of services in assurance, consulting, tax, strategy and transactions. Fueled by sector insights, a globally connected, multi-disciplinary network and diverse ecosystem partners, EY teams can provide services in more than 150 countries and territories.
Posted 2 weeks ago
4.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
About Gartner Digital Markets: Gartner Digital Markets is a business unit within Gartner. Our mission is to empower organizations to accelerate growth by helping them embrace the right technology and services. Gartner Digital Markets is the world’s largest platform for finding software and services. With more than 100 million annual visitors across four buyer destinations—Capterra, GetApp, Software Advice, and UpCity—and 70 localized sites, Gartner Digital Markets helps software and service providers build their brand, capture demand, and understand their market. As the only destination for software and services driven by independent, objective research and verified customer reviews, we help connect providers with in-market buyers to fuel growth across the full funnel. For candidates interested in taking their next career step, Gartner Digital Markets offers the best of two worlds—the stability and resources of a large, established organization combined with the fast pace and excitement of working for a dynamic growth business. Our team is on the front lines of innovation in an industry that is always transforming, providing an incredible opportunity for you to grow and learn throughout your career. About the Role: We are seeking a Senior/Lead Data Platform Engineer to join our Data Platform team, who will play a key role in enabling and empowering data practitioners such as data engineers, analytics engineers, and analysts by providing robust, scalable, and self-service platform capabilities. You will focus on building and maintaining the foundational infrastructure, tools, and frameworks that support data ingestion, transformation, and analytics. Your work will abstract away complexity, enforce standards, and reduce friction for teams consuming or producing data. What you’ll do: Design, develop, and maintain a scalable and secure data platform that supports ingestion, transformation, orchestration, cataloging, and governance. Build tools, libraries, and services that allow other teams to own and manage their own pipelines and workflows independently. Provide self-service infrastructure (e.g., templates, SDKs, CI/CD patterns) to support repeatable and consistent data engineering practices. Implement and manage data platform components: orchestration frameworks, data catalog, access control layers, and metadata systems. Collaborate with stakeholders to define SLAs, monitoring, and observability across the data stack. Champion infrastructure as code, automation, and standardization across the platform. Ensure data security, compliance, and cost efficiency across environments. What you’ll need: 4 to 6 years of hands-on experience working in data infrastructure, data platform engineering, or related roles, with a bachelor’s degree. Proficiency in Python and experience building backend services or CLI tools. Proficiency in cloud data platforms such as Snowflake or Databricks. Understanding of core cloud services, preferably AWS (S3, EC2, Glue, IAM, etc.). Hands-on experience with orchestration tools (Airflow, Prefect, etc.). Hands-on experience with CI/CD and infrastructure as code (Terraform). Familiarity with Kubernetes, Docker, and container-based deployment models. Don’t meet every single requirement? We encourage you to apply anyway. You might just be the right candidate for this, or other roles. Who are we? At Gartner, Inc. (NYSE:IT), we guide the leaders who shape the world.
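As an illustration of the self-service templates this role mentions, a platform team might publish an Airflow DAG factory like the sketch below; the DAG ids, schedules, and task body are assumptions for illustration.

```python
# A hedged sketch of a self-service pipeline template a platform team might
# publish, using Apache Airflow 2.x. DAG ids, schedules, and the task body
# are illustrative assumptions, not details from the posting.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def make_ingestion_dag(source_name: str, schedule: str) -> DAG:
    """Factory that stamps out a standardised ingestion DAG for one source."""

    def _ingest():
        # Real logic would pull from the source and land data in the lake.
        print(f"ingesting {source_name}")

    with DAG(
        dag_id=f"ingest_{source_name}",
        start_date=datetime(2024, 1, 1),
        schedule_interval=schedule,
        catchup=False,
        tags=["platform-template"],
    ) as dag:
        PythonOperator(task_id="ingest", python_callable=_ingest)
    return dag


# Product teams self-serve by adding one line per source; platform
# conventions (naming, tags, catchup policy) come for free from the factory.
for src, sched in [("billing", "@daily"), ("crm", "@hourly")]:
    globals()[f"ingest_{src}"] = make_ingestion_dag(src, sched)
```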
Our mission relies on expert analysis and bold ideas to deliver actionable, objective insight, helping enterprise leaders and their teams succeed with their mission-critical priorities. Since our founding in 1979, we’ve grown to more than 21,000 associates globally who support ~14,000 client enterprises in ~90 countries and territories. We do important, interesting and substantive work that matters. That’s why we hire associates with the intellectual curiosity, energy and drive to want to make a difference. The bar is unapologetically high. So is the impact you can have here. What makes Gartner a great place to work? Our sustained success creates limitless opportunities for you to grow professionally and flourish personally. We have a vast, virtually untapped market potential ahead of us, providing you with an exciting trajectory long into the future. How far you go is driven by your passion and performance. We hire remarkable people who collaborate and win as a team. Together, our singular, unifying goal is to deliver results for our clients. Our teams are inclusive and composed of individuals from different geographies, cultures, religions, ethnicities, races, genders, sexual orientations, abilities and generations. We invest in great leaders who bring out the best in you and the company, enabling us to multiply our impact and results. This is why, year after year, we are recognized worldwide as a great place to work. What do we offer? Gartner offers world-class benefits, highly competitive compensation and disproportionate rewards for top performers. In our hybrid work environment, we provide the flexibility and support for you to thrive — working virtually when it's productive to do so and getting together with colleagues in a vibrant community that is purposeful, engaging and inspiring. Ready to grow your career with Gartner? Join us. The policy of Gartner is to provide equal employment opportunities to all applicants and employees without regard to race, color, creed, religion, sex, sexual orientation, gender identity, marital status, citizenship status, age, national origin, ancestry, disability, veteran status, or any other legally protected status and to seek to advance the principles of equal employment opportunity. Gartner is committed to being an Equal Opportunity Employer and offers opportunities to all job seekers, including job seekers with disabilities. If you are a qualified individual with a disability or a disabled veteran, you may request a reasonable accommodation if you are unable or limited in your ability to use or access the Company’s career webpage as a result of your disability. You may request reasonable accommodations by calling Human Resources at +1 (203) 964-0096 or by sending an email to ApplicantAccommodations@gartner.com. Job Requisition ID:95308 By submitting your information and application, you confirm that you have read and agree to the country or regional recruitment notice linked below applicable to your place of residence. Gartner Applicant Privacy Link: https://jobs.gartner.com/applicant-privacy-policy For efficient navigation through the application, please only use the back button within the application, not the back arrow within your browser.
Posted 2 weeks ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
About Delhivery: Delhivery is India’s leading fulfillment platform for digital commerce. With a vast logistics network spanning 18,000+ pin codes and over 2,500 cities, Delhivery provides a comprehensive suite of services including express parcel transportation, freight solutions, reverse logistics, cross-border commerce, warehousing, and cutting-edge technology services. Since 2011, we’ve fulfilled over 550 million transactions and empowered 10,000+ businesses, from startups to large enterprises. Vision: To become the operating system for commerce in India by combining world-class infrastructure, robust logistics operations, and technology excellence. About the Role: Senior Data Engineer We're looking for a Senior Data Engineer who can design, optimize, and own our high-throughput data infrastructure. You’ll work across batch and real-time pipelines, scale distributed processing on petabyte-scale data, and bring AI-assisted tooling into your workflow for debugging, testing, and documentation. This is a hands-on role where you'll work with a wide range of big data technologies (Spark, Kafka, Hive, Hudi/Iceberg, Databricks, EMR), data modeling best practices, and real-time systems to power analytics, data products, and machine learning. As a senior engineer, you'll review complex pipelines, manage SLAs, and mentor junior team members, while leveraging GenAI tools to scale your impact. What You’ll Do: Build and optimize scalable batch and streaming data pipelines using Apache Spark, Kafka, Flink, Hive, and Airflow. Design and implement efficient data lake architectures with Hudi, Iceberg, or Delta for versioning, compaction, schema evolution, and time travel. Architect and maintain cloud-native data systems (AWS EMR, S3, Glue, Lambda, Athena), focusing on cost, performance, and availability. Model complex analytical and operational data workflows for warehouse and data lake environments. Own pipeline observability: define and monitor SLAs, alerts, and lineage across batch and real-time systems. Debug performance bottlenecks across Spark, Hive, Kafka, and S3, optimizing jobs with broadcast joins, file formats, resource configs, and partitioning strategies. Leverage AI tools (e.g., Cursor AI, Copilot, Gemini, Windsurf) for: code generation and refactoring of DAGs or Spark jobs; debugging logs, stack traces, and SQL errors; generating tests for data pipelines; documenting complex pipeline dependencies and architecture. Collaborate with product, analytics, data science, and platform teams to deliver end-to-end data products. Mentor junior engineers and establish AI-native development workflows, including prompt libraries and automation best practices.
What We’re Looking For: Experience in building and maintaining large-scale data systems. Strong hands-on experience with Apache Spark, Kafka, Hive, and Airflow in production. Deep knowledge of the Hadoop ecosystem (HDFS, YARN, MapReduce tuning, NameNode HA). Expert in SQL (windowing, recursive queries, tuning) and experience with NoSQL stores (e.g., DynamoDB, HBase). Experience with Trino/Presto. Experience with cloud-native data platforms, especially AWS Glue, S3 lifecycle policies, EMR, and Athena. Working knowledge of file formats and internals like Parquet and Avro, and best practices for efficient storage. Familiarity with modern Lakehouse formats (Hudi, Iceberg, Delta Lake) and their compaction, versioning, and schema evolution. Hands-on experience managing Databricks or EMR. Solid grounding in data modeling, DWH design, and slowly changing dimensions (SCD). Strong programming in Python/Scala/Java, and ability to write clean, modular, testable code. Proficiency with CI/CD practices, Git, and Jenkins/GitHub Actions for data engineering workflows. Bonus: Experience with distributed systems, consensus protocols, and real-time data guarantees. Passion for AI-native engineering, using and evolving prompt-based workflows for greater efficiency and quality.
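Two of the Spark optimizations named above, broadcasting a small dimension into a join and partitioning output files, look roughly like the PySpark sketch below; the paths and column names are hypothetical.

```python
# A minimal sketch of two Spark optimizations named in the role above:
# broadcasting a small dimension into a join, and partitioning output files.
# Table paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("spark-tuning").getOrCreate()

events = spark.read.parquet("s3://lake/events/")         # large fact data
pincodes = spark.read.parquet("s3://lake/dim_pincode/")  # small dimension

# Broadcasting the small side avoids a full shuffle of the large table.
enriched = events.join(broadcast(pincodes), on="pincode", how="left")

# Partitioning by date keeps downstream reads pruned to the days they need.
(enriched.write.mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://lake/events_enriched/"))
```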
Posted 2 weeks ago
3.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Job Location MUMBAI GENERAL OFFICE Job Description – Data Analyst Overview Of The Job Data Analyst – P&G India This role reports to the Director, India Data Platforms, P&G. About the India Data Solutions Team: We take pride in managing the company’s most valuable asset in the digital world: data. Our vision is to deliver data as a competitive advantage for the India business by building unified data platforms, delivering customized BI tools for managers, and empowering insightful business decisions through AI in data. As a data solutions specialist, you'll be working closely with business stakeholders, collaborating to understand their needs and develop solutions to solve problems in the areas of supply chain, sales & distribution, consumer insights, and market performance. In this role, you'll be constantly learning, staying up to date with industry trends and emerging technologies in data solutions. You'll have the chance to work with a variety of tools and technologies, including big data platforms, machine learning frameworks, and data visualization tools, to build innovative and effective solutions. So, if you're excited about the possibilities of data, and eager to make a real impact in the world of business, a career in data solutions might be just what you're looking for. Join us and become a part of the future of digital transformation. About P&G IT: Digital is at the core of P&G’s accelerated growth strategy. With this vision, IT in P&G is deeply embedded into every critical process across business organizations comprising 11+ category units globally, creating impactful value through Transformation, Simplification & Innovation. IT in P&G is sub-divided into teams that engage strongly for revolutionizing the business processes to deliver exceptional value & growth - Digital GTM, Digital Manufacturing, Marketing Technologist, Ecommerce, Data Sciences & Analytics, Data Solutions & Engineering, Product Supply. Responsibilities: Develop and maintain Power BI reports and dashboards to meet business requirements. Analytical mindset for understanding business requirements. Design and implement data models, transformations, and calculations in Power BI. Create visually appealing and interactive data visualizations using Power BI & SQL. Perform data analysis and provide actionable insights to support decision-making. Conduct thorough testing and quality assurance of Power BI dashboards. Develop and execute test plans, test cases, and test scenarios for Power BI reports and visualizations. Identify and report issues, bugs, or discrepancies found during testing and work with the development team to address them. Validate data sources and connections to ensure data accuracy and consistency in Power BI reports. Collaborate with stakeholders to understand their testing requirements and provide insights and recommendations for dashboard improvements. Requirements: A minimum of 3 years of experience working with Power BI, including report/dashboard development and data modelling. Proficiency in SQL and data manipulation for data extraction and transformation. Strong understanding of data visualization principles and best practices. Basic proficiency in Azure Data Factory, Azure Databricks, ADLS, and Azure SQL Database is a plus. Ability to translate business requirements into technical solutions using Power BI.
Proficiency in DAX and the latest features of Power BI (calculated measures, calculation groups, Pro/Premium licensing details, etc.) to develop best-in-class products. Excellent analytical and problem-solving skills to derive insights from data. Strong communication and collaboration skills to work effectively with stakeholders. Experience in testing methodologies and best practices for Power BI. Detail-oriented mindset with a focus on data accuracy and quality. Familiarity with data warehousing concepts and dimensional data modelling. Relevant certifications in Power BI (like Microsoft PBI Associate) or data analytics are advantageous. About Us We produce globally recognized brands, and we grow the best business leaders in the industry. With a portfolio of trusted brands as diverse as ours, it is paramount our leaders are able to lead with courage the vast array of brands, categories and functions. We serve consumers around the world with one of the strongest portfolios of trusted, quality, leadership brands, including Always®, Ariel®, Gillette®, Head & Shoulders®, Herbal Essences®, Oral-B®, Pampers®, Pantene®, Tampax® and more. Our community includes operations in approximately 70 countries worldwide. Visit http://www.pg.com to know more. We are an equal opportunity employer and value diversity at our company. We do not discriminate against individuals on the basis of race, color, gender, age, national origin, religion, sexual orientation, gender identity or expression, marital status, citizenship, disability, HIV/AIDS status, or any other legally protected factor. Job Schedule Full time Job Number R000128262 Job Segmentation Experienced Professionals
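As a sketch of the dashboard QA described above, a tester might reconcile a report total against the source table in Azure SQL; the connection details, table names, and tolerance below are hypothetical, and pyodbc is just one common way to reach Azure SQL Database from Python.

```python
# A hedged sketch of reconciling a Power BI report total against its source
# table in Azure SQL. Connection details, table names, and the tolerance are
# hypothetical assumptions.
import pandas as pd
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=sales;UID=qa_user;PWD=..."
)
source_total = pd.read_sql(
    "SELECT SUM(net_sales) AS total FROM fact_sales WHERE fiscal_year = 2024",
    conn,
)["total"].iloc[0]

report_total = 1_234_567.89  # value exported from the dashboard under test

# Flag discrepancies beyond a 1% tolerance as a data-accuracy defect.
if abs(source_total - report_total) > 0.01 * report_total:
    raise AssertionError(
        f"report total {report_total} deviates from source {source_total}"
    )
print("reconciliation passed")
```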
Posted 2 weeks ago
8.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Position Overview: We’re looking for a hands-on Data Lead to architect and deliver end-to-end data features that power our booking and content management systems. You’ll lead a team, own the data pipeline lifecycle, and play a critical role in shaping and scaling our data infrastructure, spanning batch, real-time, and NoSQL environments. ShyftLabs is a growing data product company founded in early 2020 that works primarily with Fortune 500 companies. We deliver digital solutions built to help accelerate the growth of businesses in various industries, by focusing on creating value through innovation. Job Responsibilities: Lead the development and implementation of booking and CMS data features according to the roadmap. Build, optimize, and manage robust data pipelines and ETL/ELT processes using tools like Airflow, DBT, and Databricks. Oversee infrastructure and storage layers across distributed systems (e.g., Cassandra, MongoDB, Postgres), ensuring scalability, availability, and performance. Support and partner with Data PMs to deliver clean, actionable data for reporting, analytics, and experimentation. Handle client data queries, investigate anomalies, and proactively improve data quality and debugging workflows. Manage a team of data engineers and analysts: provide architectural direction, review code, and support career growth. Collaborate with DevOps/Platform teams on deployment, monitoring, and performance optimization of data services. Champion best practices in data governance, version control, security, and documentation. Basic Qualifications: 8+ years of experience in data engineering or analytics engineering with a strong focus on both data modeling and infrastructure. 2+ years of experience managing and guiding a data team to deliver complex data features. Proficient in SQL, Python, and working with modern data stack tools (e.g., DBT, Databricks, Airflow). Experience managing distributed and NoSQL databases (e.g., Cassandra, MongoDB), and cloud data warehouses (e.g., Snowflake, BigQuery). Strong understanding of scalable data architecture and real-time streaming pipelines is a plus. Experience in leading teams, setting code standards, and mentoring junior developers. Ability to translate business requirements into scalable, maintainable data systems. Familiarity with booking platforms, CMS architectures, or event-based tracking systems is a plus! We are proud to offer a competitive salary alongside a strong insurance package. We pride ourselves on the growth of our employees, offering extensive learning and development resources.
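For context, the proactive data-quality work this role mentions often starts with simple checks like the one sketched below, comparing today's row count for a pipeline output against its trailing history; the counts and threshold are illustrative assumptions.

```python
# A minimal sketch of a proactive data-quality check: compare today's row
# count for a pipeline output against its trailing history. The counts and
# threshold are illustrative assumptions.
import statistics


def check_volume(history: list[int], today: int, tolerance: float = 0.3) -> None:
    """Raise if today's volume deviates more than `tolerance` from the mean."""
    mean = statistics.mean(history)
    if abs(today - mean) > tolerance * mean:
        raise RuntimeError(
            f"volume anomaly: today={today}, trailing mean={mean:.0f}"
        )


# e.g. daily counts pulled from pipeline run metadata for the last week
history = [98_400, 101_250, 99_800, 97_600, 102_300, 100_100, 99_500]
try:
    check_volume(history, today=54_000)
except RuntimeError as alert:
    print(alert)  # roughly half the usual volume: flag for investigation
```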
Posted 2 weeks ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Area(s) of responsibility Job Requirements Bachelor’s degree in Computer Science, Engineering, or related field. Proven experience as a Databricks administrator or similar role. Strong expertise in Databricks platform and its components, including workspaces, clusters, and jobs. Experience in configuring and optimizing Databricks clusters for big data processing. Familiarity with security measures and access controls within the Databricks platform. Understanding of data governance principles and experience with data cataloging tools like Unity Catalog. Experience with Infrastructure as Code (IaC) and automation tools, preferably Terraform. Knowledge of data catalog tools (e.g., Microsoft Purview). Excellent problem-solving skills and ability to work in a collaborative environment. Relevant certifications in Databricks or related technologies are a plus.
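As an illustration of routine administration on the platform, the sketch below polls the Databricks Clusters REST API (the 2.0 endpoint) for running clusters that have no auto-termination configured; the workspace URL and token are hypothetical, and a real script would pull the token from a secret store.

```python
# A hedged sketch of routine Databricks administration via the workspace
# REST API (the 2.0 Clusters endpoint). The workspace URL and token are
# hypothetical placeholders.
import requests

HOST = "https://adb-1234567890123456.7.azuredatabricks.net"  # hypothetical
HEADERS = {"Authorization": "Bearer <personal-access-token>"}

# List clusters and flag long-running ones with no auto-termination,
# a common source of avoidable cost.
resp = requests.get(f"{HOST}/api/2.0/clusters/list", headers=HEADERS, timeout=30)
resp.raise_for_status()
for cluster in resp.json().get("clusters", []):
    if (cluster.get("state") == "RUNNING"
            and cluster.get("autotermination_minutes", 0) == 0):
        print(f"{cluster['cluster_name']}: running with no auto-termination")
```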
Posted 2 weeks ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Area(s) of responsibility About Birlasoft Birlasoft is a powerhouse where domain expertise, enterprise solutions, and digital technologies converge to redefine business processes. We take pride in our consultative and design thinking approach, driving societal progress by enabling our customers to run businesses with unmatched efficiency and innovation. As part of the CK Birla Group, a multibillion-dollar enterprise, we boast a 12,500+ professional team committed to upholding the Group's 162-year legacy. Our core values prioritize Diversity, Equity, and Inclusion (DEI) initiatives, along with Corporate Sustainable Responsibility (CSR) activities, demonstrating our dedication to building inclusive and sustainable communities. Join us in shaping a future where technology seamlessly aligns with purpose. About the Job – Ability to relate the product functionality to business processes, and thus offer implementation advice to customers on how to meet their various business scenarios using Oracle SCM. Job Title – Data Engineer Location: Bangalore/Pune Educational Background – BE/B.Tech Key Responsibilities – Must Have Skills: Minimum of 3 years of Data Engineering experience. Must be an expert in SQL. Must be an expert in data modeling. Incorta experience is a plus. Excellent oral and written communication skills. Self-starter with analytical, organizational, and problem-solving skills. Must be highly flexible and adaptable to change. Power BI. Strong programming skills in languages such as SQL and Python. Experience with data modelling, data warehousing, and dimensional modelling concepts. Familiarity with data governance and data security. Excellent problem-solving and analytical skills. Strong communication and collaboration abilities. Nice to Haves: Knowledge of big data technologies such as Apache Hadoop, Spark, or Kafka. Azure Databricks & Azure Cosmos DB. Relevant certifications such as Microsoft Certified: Azure Data Engineer Associate is a plus. Tasks & responsibilities Collaborate with team members and stakeholders to understand data and metric requirements. Pull data from a variety of sources into a central data platform. Structure datasets and calculate metrics based on requirements. Develop automated scripts to automate otherwise manual processes. Communicate concerns and data findings as they arise to the appropriate team members or stakeholders. Present data findings and solutions in a way that is easy for non-technical team members and stakeholders to understand.
Posted 2 weeks ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Summary We are looking for experienced Data Modelers to support large-scale data engineering and analytics initiatives. The role involves developing logical and physical data models, working closely with business and engineering teams to define data requirements, and ensuring alignment with enterprise standards. • Independently complete conceptual, logical, and physical data models for any supported platform, including SQL Data Warehouse, Spark, Databricks Delta Lakehouse, or other cloud data warehousing technologies. • Govern data design/modelling – documentation of metadata (business definitions of entities and attributes) and construction of database objects, for baseline and investment-funded projects, as assigned. • Develop a deep understanding of the business domains like Customer, Sales, Finance, Supplier, and the enterprise technology inventory to craft a solution roadmap that achieves business objectives and maximizes reuse. Drive collaborative reviews of data model design, code, data, and security features to drive data product development. Show expertise for data at all levels: low-latency, relational, and unstructured data stores; analytical and data lakes; SAP data models. Develop reusable data models based on cloud-centric, code-first approaches to data management and data mapping. Partner with the data stewards team for data discovery and action by business customers and stakeholders. Provide and/or support data analysis, requirements gathering, solution development, and design reviews for enhancements to, or new, applications/reporting. Assist with data planning, sourcing, collection, profiling, and transformation. Support data lineage and mapping of source system data to canonical data stores. Create Source to Target Mappings (STTM) for ETL and BI developers. Skills needed: Expertise in data modelling tools (ER/Studio, Erwin), IDM/ARDM models, and CPG/Manufacturing/Sales/Finance/Supplier/Customer domains. Experience with at least one MPP database technology such as Databricks Lakehouse, Redshift, Synapse, Teradata, or Snowflake. Experience with version control systems like GitHub and deployment & CI tools. Experience with metadata management, data lineage, and data glossaries is a plus. Working knowledge of agile development, including DevOps and DataOps concepts. Working knowledge of SAP data models, particularly in the context of HANA and S/4HANA, and retail data like IRI and Nielsen. C5i is proud to be an equal opportunity employer. We are committed to equal employment opportunity regardless of race, color, religion, sex, sexual orientation, age, marital status, disability, gender identity, etc. If you have a disability or special need that requires accommodation, please keep us informed about the same at the hiring stages for us to factor in necessary accommodations.
Posted 2 weeks ago
7.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
About the Role: We are seeking an experienced Telecom Senior Data Modeller to join our team. In this role, you will be responsible for the design and standardization of enterprise-wide data models across multiple domains such as Customer, Product, Billing, and Network. The ideal candidate will work closely with cross-functional teams to translate business needs into scalable and governed data structures. You will work closely with customers and technology partners to deliver data solutions that address complex telecommunications business requirements, including customer experience management, network optimization, revenue assurance, and digital transformation initiatives. Responsibilities: Design logical and physical data models aligned with enterprise and industry standards. Develop comprehensive data models aligned with TM Forum guidelines for telecommunications domains such as Customer, Product, Service, Resource, and Partner management. Create and maintain data models for the Customer, Product, Usage, and Service domains. Align models with TM Forum SID, telecom standards, and data mesh principles. Translate business requirements into normalized and analytical schemas (Star/Snowflake). Define and maintain entity relationships, hierarchy levels (Customer - Account - MSISDN), and attribute lineage. Standardize attribute definitions across systems and simplify legacy structures. Collaborate with engineering teams to implement models in cloud data platforms (e.g., Databricks). Collaborate with domain stewards to simplify and standardize legacy data structures. Work with governance teams to tag attributes for privacy, compliance, and data quality. Document metadata and lineage, and maintain version control of data models. Support analytics, reporting, and machine learning teams by enabling standardized data access. Design solutions leveraging Microsoft Azure and Databricks for telecom data processing and analytics. Qualifications: Bachelor's or Master's degree in Computer Science, Telecommunications Engineering, Data Science, or a related technical field. 7+ years of experience in data modelling roles, with at least 3-4 years in the telecommunications industry. Hands-on experience building data models and platforms aligned with TM Forum standards and telecommunications business processes. Excellent understanding of TM Forum SID / eTOM / ODA. Strong experience with data modeling tools (Azure Analysis Services, SSAS, dbt, Informatica). Hands-on experience with modern cloud data platforms (Databricks, Azure Synapse, Snowflake). Deep understanding of data warehousing concepts and normalized/denormalized models. Proven experience in telecom data modeling (CRM, billing, network usage, campaigns). Expertise in SQL, data profiling, schema design, and metadata documentation. Familiarity with domain-driven design, data mesh, and modular architecture. Experience in large-scale transformation or modernization programs. Knowledge of regulatory frameworks such as GDPR or data privacy-by-design. Background in telecom, networking, or other data-rich industries.
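To make the Customer - Account - MSISDN hierarchy concrete, here is a minimal Python sketch; the entity names and fields are simplified assumptions for illustration, not TM Forum SID definitions.

```python
# A simplified sketch of the Customer -> Account -> MSISDN hierarchy named
# above, as plain Python dataclasses. Real models would follow TM Forum SID
# entity definitions; the names and fields here are assumptions.
from dataclasses import dataclass, field


@dataclass
class Msisdn:
    number: str        # the subscriber's mobile number
    service_type: str  # e.g. "prepaid" or "postpaid"


@dataclass
class Account:
    account_id: str
    billing_cycle_day: int
    msisdns: list[Msisdn] = field(default_factory=list)


@dataclass
class Customer:
    customer_id: str
    segment: str       # e.g. "consumer", "enterprise"
    accounts: list[Account] = field(default_factory=list)


# One customer can hold several accounts, each with several subscriptions.
cust = Customer("C001", "consumer", [
    Account("A100", 5, [Msisdn("919800000001", "postpaid")]),
])
print(len(cust.accounts[0].msisdns))  # -> 1
```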
Posted 2 weeks ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Position Details: Job Title: GenAI Engineer Experience: 2+ years / 4+ years / 6+ years Location Options: Pune / Gurgaon / Bangalore [Manyata Tech Park] Skills: Python, LangChain, LangGraph, AutoGen Overview: We are actively seeking highly skilled GenAI Engineers. The ideal candidates will bring hands-on experience in developing and deploying Generative AI (GenAI) solutions from MVP to production. Key Responsibilities: Build and orchestrate AI agents, including multi-agent systems, using frameworks like LangChain, LangGraph, and AutoGen. Collaborate with cross-functional teams to integrate agents with external tools (e.g., GitHub, web UIs). Work in an agile environment to rapidly prototype and transition MVPs to production. Support and optimize RAG (Retrieval-Augmented Generation) pipelines. Contribute to prompt engineering strategies and scalable GenAI infrastructure. Leverage Databricks for model development, data preparation, and orchestration. Proficiency in Databricks for ML/AI workflows.
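For context, the RAG pattern this role supports boils down to: embed documents, retrieve the closest ones to a query, and build an augmented prompt. The sketch below is deliberately framework-agnostic; the toy embed() function stands in for a real embedding model (such as one called via LangChain), and all data is illustrative.

```python
# A framework-agnostic sketch of the RAG pattern: embed, retrieve, augment.
# The embed() stub stands in for a real embedding model; everything here is
# illustrative, not a production pipeline.
import math


def embed(text: str) -> list[float]:
    """Toy embedding: normalized character-frequency vector. Replace with a real model."""
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]


def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are already unit-normalized, so the dot product is the cosine.
    return sum(x * y for x, y in zip(a, b))


docs = [
    "Databricks clusters can autoscale between min and max workers.",
    "LangGraph coordinates multiple agents over a shared state graph.",
]
index = [(d, embed(d)) for d in docs]


def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents closest to the query in embedding space."""
    q = embed(query)
    return [d for d, v in sorted(index, key=lambda dv: -cosine(q, dv[1]))[:k]]


query = "How do agents coordinate in LangGraph?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)  # this augmented prompt would then be sent to the LLM
```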
Posted 2 weeks ago
10.0 years
20 - 45 Lacs
Delhi, India
Remote
📍 Location: Remote (Pan India) | 🕑 Shift: 2:00 PM – 11:00 PM IST 💼 Job Type: Full-Time | Permanent Role 🔍 About the Role We are looking for a Senior Data Engineer who is passionate about building scalable data solutions using modern tools and technologies. This role is ideal for someone who enjoys working with Big Data, creating efficient pipelines, and collaborating with cross-functional teams. Note: Proficiency in Scala is mandatory for this role. 🎯 Key Responsibilities Design, build, and maintain robust data pipelines (ETL/ELT). Optimize data workflows and automate data-related processes. Collaborate with engineering, product, and analytics teams to support data infrastructure. Work with a variety of data formats (Delta, Parquet, JSON, CSV). Improve scalability, performance, and reliability of the data platform. ✅ Must-Have Skills 10+ years of experience in data engineering or related roles. 5+ years of hands-on experience with Apache Spark. Strong coding skills in Scala (mandatory), along with Python or Java. Advanced SQL skills and experience with PostgreSQL/MySQL. Experience working with Databricks and cloud environments. Comfortable with Linux shell scripting and command line. 💬 Nice-to-Have Familiarity with machine learning data pipelines. Agile development experience. Excellent communication and teamwork skills. 🌟 What We Offer 🏠 100% Remote Work 💰 Competitive Salary + Incentives 🩺 Medical Benefits & Paid Leaves 📚 Training Programs for Upskilling 🌐 Diverse, Inclusive & Supportive Culture 🏆 Growth Opportunities + Work-Life Balance 🚀 Join Us If You Are A self-starter who can work independently. Excited about solving data challenges at scale. Looking for a remote-first culture with real impact. 👉 Apply now and be part of an innovative, tech-driven environment! Skills: Delta, ELT, PostgreSQL, Linux shell scripting, Big Data, JSON, Databricks, Apache Spark, Linux shell, MySQL, SQL, ETL, Java, Parquet, Python, CSV, data engineering, Scala, cloud environments
Posted 2 weeks ago
2.0 years
0 Lacs
Bhubaneswar, Odisha, India
On-site
Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must Have Skills: Databricks Unified Data Analytics Platform, Microsoft Azure Databricks, PySpark
Good To Have Skills: NA
Minimum 2 year(s) of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with team members to understand project specifications, developing application features, and ensuring that the applications are optimized for performance and usability. You will also engage in testing and debugging to deliver high-quality solutions that align with business objectives.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Actively participate in and contribute to team discussions.
- Contribute to providing solutions to work-related problems.
- Assist in the documentation of application processes and workflows.
- Engage in continuous learning to stay updated with industry trends and technologies.

Professional & Technical Skills:
- Must-Have Skills: Proficiency in Databricks Unified Data Analytics Platform, Microsoft Azure Databricks, and PySpark.
- Strong understanding of data integration techniques and ETL processes.
- Experience with cloud-based data storage solutions and data management.
- Familiarity with programming languages such as Python or Scala.
- Ability to troubleshoot and optimize application performance.

Additional Information:
- The candidate should have a minimum of 2 years of experience with the Databricks Unified Data Analytics Platform.
- This position is based at our Bhubaneswar office.
- 15 years of full-time education is required.
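A minimal PySpark sketch of the kind of ETL transformation this role describes, assuming a local Spark session; the tables and columns are invented for illustration, not taken from any specific project.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Minimal ETL sketch: join reference data, then aggregate.
spark = SparkSession.builder.appName("etl-aggregate").getOrCreate()

orders = spark.createDataFrame(
    [(1, "C001", 120.0), (2, "C001", 80.0), (3, "C002", 50.0)],
    ["order_id", "customer_id", "amount"],
)
customers = spark.createDataFrame(
    [("C001", "Retail"), ("C002", "Enterprise")],
    ["customer_id", "segment"],
)

# Transform: enrich orders with the customer segment, then roll up
# revenue and order counts per segment.
revenue_by_segment = (
    orders.join(customers, "customer_id", "left")
          .groupBy("segment")
          .agg(F.sum("amount").alias("total_revenue"),
               F.count("order_id").alias("order_count"))
)
revenue_by_segment.show()
spark.stop()
```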
Posted 2 weeks ago
10.0 years
20 - 45 Lacs
Noida, Uttar Pradesh, India
Remote
📍 Location: Remote (Pan India) | 🕑 Shift: 2:00 PM – 11:00 PM IST
💼 Job Type: Full-Time | Permanent Role

🔍 About the Role
We are looking for a Senior Data Engineer who is passionate about building scalable data solutions using modern tools and technologies. This role is ideal for someone who enjoys working with Big Data, creating efficient pipelines, and collaborating with cross-functional teams.
Note: Proficiency in Scala is mandatory for this role.

🎯 Key Responsibilities
Design, build, and maintain robust data pipelines (ETL/ELT).
Optimize data workflows and automate data-related processes.
Collaborate with engineering, product, and analytics teams to support data infrastructure.
Work with a variety of data formats (Delta, Parquet, JSON, CSV).
Improve scalability, performance, and reliability of the data platform.

✅ Must-Have Skills
10+ years of experience in data engineering or related roles.
5+ years of hands-on experience with Apache Spark.
Strong coding skills in Scala (mandatory), along with Python or Java.
Advanced SQL skills and experience with PostgreSQL/MySQL.
Experience working with Databricks and cloud environments.
Comfortable with Linux shell scripting and the command line.

💬 Nice-to-Have
Familiarity with machine learning data pipelines.
Agile development experience.
Excellent communication and teamwork skills.

🌟 What We Offer
🏠 100% Remote Work
💰 Competitive Salary + Incentives
🩺 Medical Benefits & Paid Leaves
📚 Training Programs for Upskilling
🌐 Diverse, Inclusive & Supportive Culture
🏆 Growth Opportunities + Work-Life Balance

🚀 Join Us If You Are
A self-starter who can work independently.
Excited about solving data challenges at scale.
Looking for a remote-first culture with real impact.

👉 Apply now and be part of an innovative, tech-driven environment!

Skills: delta,elt,postgresql,linux shell scripting,big data,json,databricks,apache spark,linux shell,mysql,sql,etl,java,parquet,python,csv,data engineer,scala,data engineering,cloud environments
Posted 2 weeks ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, load) processes to migrate and deploy data across systems.
Must Have Skills: Databricks Unified Data Analytics Platform
Good To Have Skills: NA
Minimum 3 year(s) of experience is required.
Educational Qualification: a standard 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs. Your role also includes monitoring data workflows and troubleshooting any issues that arise, ensuring that data is accessible and reliable for stakeholders.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Actively participate in and contribute to team discussions.
- Contribute to providing solutions to work-related problems.
- Develop and optimize data pipelines to enhance data processing efficiency.
- Collaborate with data scientists and analysts to understand data needs and provide necessary support.

Professional & Technical Skills:
- Must-Have Skills: Proficiency in Databricks Unified Data Analytics Platform.
- Strong understanding of data modeling and database design principles.
- Experience with ETL tools and data integration techniques.
- Familiarity with cloud platforms and services related to data storage and processing.
- Knowledge of programming languages such as Python or Scala for data manipulation.

Additional Information:
- The candidate should have a minimum of 3 years of experience with the Databricks Unified Data Analytics Platform.
- This position is based at our Pune office.
- A standard 15 years of full-time education is required.
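To illustrate the "ensure data quality" responsibility above, here is a minimal PySpark completeness gate; the column name, sample data, and 5% threshold are illustrative assumptions, not project specifics.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Minimal sketch of a pipeline data-quality gate.
spark = SparkSession.builder.appName("dq-gate").getOrCreate()

df = spark.createDataFrame(
    [("A1", "2024-01-05"), ("A2", None), ("A3", "2024-01-07")],
    ["account_id", "activation_date"],
)

total = df.count()
nulls = df.filter(F.col("activation_date").isNull()).count()
null_ratio = nulls / total if total else 1.0

# Report completeness against a threshold; a real pipeline would fail
# the run (or quarantine rows) instead of just printing.
status = "FAIL" if null_ratio > 0.05 else "PASS"
print(f"completeness check: {status} ({null_ratio:.1%} null activation_date)")

spark.stop()
```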
Posted 2 weeks ago
10.0 years
20 - 45 Lacs
Ahmedabad, Gujarat, India
Remote
📍 Location: Remote (Pan India) | 🕑 Shift: 2:00 PM – 11:00 PM IST
💼 Job Type: Full-Time | Permanent Role

🔍 About the Role
We are looking for a Senior Data Engineer who is passionate about building scalable data solutions using modern tools and technologies. This role is ideal for someone who enjoys working with Big Data, creating efficient pipelines, and collaborating with cross-functional teams.
Note: Proficiency in Scala is mandatory for this role.

🎯 Key Responsibilities
Design, build, and maintain robust data pipelines (ETL/ELT).
Optimize data workflows and automate data-related processes.
Collaborate with engineering, product, and analytics teams to support data infrastructure.
Work with a variety of data formats (Delta, Parquet, JSON, CSV).
Improve scalability, performance, and reliability of the data platform.

✅ Must-Have Skills
10+ years of experience in data engineering or related roles.
5+ years of hands-on experience with Apache Spark.
Strong coding skills in Scala (mandatory), along with Python or Java.
Advanced SQL skills and experience with PostgreSQL/MySQL.
Experience working with Databricks and cloud environments.
Comfortable with Linux shell scripting and the command line.

💬 Nice-to-Have
Familiarity with machine learning data pipelines.
Agile development experience.
Excellent communication and teamwork skills.

🌟 What We Offer
🏠 100% Remote Work
💰 Competitive Salary + Incentives
🩺 Medical Benefits & Paid Leaves
📚 Training Programs for Upskilling
🌐 Diverse, Inclusive & Supportive Culture
🏆 Growth Opportunities + Work-Life Balance

🚀 Join Us If You Are
A self-starter who can work independently.
Excited about solving data challenges at scale.
Looking for a remote-first culture with real impact.

👉 Apply now and be part of an innovative, tech-driven environment!

Skills: delta,elt,postgresql,linux shell scripting,big data,json,databricks,apache spark,linux shell,mysql,sql,etl,java,parquet,python,csv,data engineer,scala,data engineering,cloud environments
Posted 2 weeks ago
10.0 years
20 - 45 Lacs
Jaipur, Rajasthan, India
Remote
📍 Location: Remote (Pan India) | 🕑 Shift: 2:00 PM – 11:00 PM IST
💼 Job Type: Full-Time | Permanent Role

🔍 About the Role
We are looking for a Senior Data Engineer who is passionate about building scalable data solutions using modern tools and technologies. This role is ideal for someone who enjoys working with Big Data, creating efficient pipelines, and collaborating with cross-functional teams.
Note: Proficiency in Scala is mandatory for this role.

🎯 Key Responsibilities
Design, build, and maintain robust data pipelines (ETL/ELT).
Optimize data workflows and automate data-related processes.
Collaborate with engineering, product, and analytics teams to support data infrastructure.
Work with a variety of data formats (Delta, Parquet, JSON, CSV).
Improve scalability, performance, and reliability of the data platform.

✅ Must-Have Skills
10+ years of experience in data engineering or related roles.
5+ years of hands-on experience with Apache Spark.
Strong coding skills in Scala (mandatory), along with Python or Java.
Advanced SQL skills and experience with PostgreSQL/MySQL.
Experience working with Databricks and cloud environments.
Comfortable with Linux shell scripting and the command line.

💬 Nice-to-Have
Familiarity with machine learning data pipelines.
Agile development experience.
Excellent communication and teamwork skills.

🌟 What We Offer
🏠 100% Remote Work
💰 Competitive Salary + Incentives
🩺 Medical Benefits & Paid Leaves
📚 Training Programs for Upskilling
🌐 Diverse, Inclusive & Supportive Culture
🏆 Growth Opportunities + Work-Life Balance

🚀 Join Us If You Are
A self-starter who can work independently.
Excited about solving data challenges at scale.
Looking for a remote-first culture with real impact.

👉 Apply now and be part of an innovative, tech-driven environment!

Skills: delta,elt,postgresql,linux shell scripting,big data,json,databricks,apache spark,linux shell,mysql,sql,etl,java,parquet,python,csv,data engineer,scala,data engineering,cloud environments
Posted 2 weeks ago
5.0 years
0 Lacs
Jaipur, Rajasthan, India
On-site
Job Title: Data Architect - AI & Azure - Lead & Coach Teams

We are looking for a person with:
- Technical Expertise: In-depth knowledge of the Microsoft Azure data platform (Azure Synapse Analytics, Azure Data Factory, Azure SQL, Azure Data Lake Storage).
- Modern Data Platforms: Hands-on experience with Databricks and/or Snowflake.
- AI Acumen: Strong understanding of AI workflows and data requirements. Must have a solid grasp of Gen AI applications and concepts.
- Leadership: Experience in mentoring, coaching, or leading technical teams or project initiation phases.
- Solutioning: Proven ability to create high-quality technical proposals, respond to RFPs, and design end-to-end data solutions.
- Communication: Exceptional English communication and presentation skills are essential for this client-facing role.

Experience:
- Total Experience: 5+ years in data architecture and implementation.
- Pre-Sales Experience: Minimum 1 year in a client-facing pre-sales or technical solutioning role is mandatory.

Skills: Data Architecture, Pre-sales Solutioning, Team Leadership, Microsoft Azure, Databricks, Azure Synapse, Snowflake, Generative AI, AI Enablement, Solution Architecture, Client Facing, Proposal Writing, Data Modeling

About Programmers.io
Programmers.io is a US-based, ISO 27001 and pioneering ISO 42001 certified software development company. We are at the forefront of the AI revolution, helping leading enterprises harness the power of their data. As a certified Great Place to Work®, we pride ourselves on a culture of innovation, a "Happiness Guarantee" for our clients, and fostering growth for our 1000+ experts. We are seeking a highly ambitious Data Architect to be a key player in our growth. You will not only design transformative data solutions but also guide the teams that bring your vision to life. This role requires close collaboration with our US teams and
Posted 2 weeks ago
4.0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
Job Title: Databricks Engineer
Work Type: Remote
Experience: 4+ years

Job Description:
We are looking for a Databricks engineer who will be on our direct payroll and may occasionally need to visit the client's Bangalore office. Internally, a strong agreement can be put in place, so the developer does not need to be fully released. We need data engineers with expertise in Python and Databricks (PySpark, DLT, Unity Catalog, Structured Streaming, performance optimization) who bring a strong mindset and good working behaviours.

Key Responsibilities:
• Design, develop, and maintain scalable data pipelines using Azure Data Factory and Azure Databricks.
• Implement data transformation workflows using PySpark and Structured Streaming with Delta Live Tables (DLT).
• Manage and govern data assets using Unity Catalog.
• Write efficient, optimized SQL queries for data extraction, transformation, and analysis.
• Collaborate with cross-functional teams to understand data requirements and deliver high-quality solutions.
• Demonstrate strong ownership and accountability in delivering end-to-end data solutions.
• Communicate effectively with stakeholders to gather requirements, provide updates, and manage expectations.

Required Skills & Qualifications:
• Proven hands-on experience with, and mastery of, Azure Data Factory, Databricks, PySpark, DLT, Unity Catalog, performance tuning, and cost optimization.
• Strong command of Python, SQL, and data modelling concepts.
• Excellent communication and interpersonal skills.
• Ability to manage stakeholders and work collaboratively in a team environment.
• Self-motivated, proactive, and capable of working independently with minimal supervision.
• Strong problem-solving skills and a mindset focused on continuous improvement.

Preferred Qualifications:
• Azure certifications (e.g., Azure Data Engineer, Databricks Professional) are a plus.
• Experience with CI/CD pipelines and DevOps practices in data engineering.
• Familiarity with data governance and security best practices in Azure.

Interested candidates can apply through https://thexakal.com/share-job?jobId=68623e9da2a5eaf0088b2e65
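A minimal sketch of a Delta Live Tables pipeline of the kind this role describes. It assumes it runs inside a Databricks DLT pipeline (the dlt module and the implicit spark session are only available there); the Unity Catalog volume path and the expectation rule are illustrative assumptions.

```python
import dlt
from pyspark.sql import functions as F

# `spark` is provided implicitly in a Databricks DLT pipeline notebook.

@dlt.table(comment="Raw events ingested incrementally with Auto Loader.")
def raw_events():
    return (
        spark.readStream.format("cloudFiles")           # Auto Loader
        .option("cloudFiles.format", "json")
        .load("/Volumes/main/raw/events/")              # hypothetical UC volume
    )

@dlt.table(comment="Cleaned events with a basic quality expectation.")
@dlt.expect_or_drop("valid_ts", "event_ts IS NOT NULL")  # drop rows failing the rule
def clean_events():
    return dlt.read_stream("raw_events").withColumn(
        "event_ts", F.to_timestamp("event_ts")
    )
```

The expectation decorator is what gives DLT its declarative quality gates: rows violating the rule are dropped and counted in the pipeline's event log rather than silently propagating downstream.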
Posted 2 weeks ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description
Job Name: Senior Data Engineer - DBT & Snowflake
Years of Experience: 5

We are looking for a skilled and experienced DBT-Snowflake developer to join our team! As part of the team, you will be involved in the implementation of ongoing and new initiatives for our company. If you love learning, thinking strategically, innovating, and helping others, this job is for you!

Primary Skills: DBT, Snowflake
Secondary Skills: ADF, Databricks, Python, Airflow, Fivetran, Glue

Role Description: This data engineering role requires creating and managing the technological infrastructure of a data platform: architecting, building, and managing data flows/pipelines; constructing data stores (NoSQL, SQL); and working with big data tooling (Hadoop, Kafka) and integration tools that connect sources and other databases.

Role Responsibilities:
Translate functional specifications and change requests into technical specifications.
Translate business requirement documents, functional specifications, and technical specifications into related coding.
Develop efficient code with unit testing and code documentation.
Ensure accuracy and integrity of data and applications through analysis, coding, documenting, testing, and problem solving.
Set up the development environment and configure the development tools.
Communicate with all project stakeholders on project status.
Manage, monitor, and ensure the security and privacy of data to satisfy business needs.
Contribute to the automation of modules wherever required.
Be proficient in written, verbal, and presentation communication (English).
Coordinate with the UAT team.

Role Requirements:
Proficient in basic and advanced SQL programming concepts (procedures, analytical functions, etc.).
Good knowledge and understanding of data warehouse concepts (dimensional modeling, change data capture, slowly changing dimensions, etc.).
Knowledgeable in Shell/PowerShell scripting.
Knowledgeable in relational databases, non-relational databases, data streams, and file stores.
Knowledgeable in performance tuning and optimization.
Experience in data profiling and data validation.
Experience in requirements gathering and documentation processes, and in performing unit testing.
Understanding and implementing QA and various testing processes in the project.
Knowledge of any BI tool is an added advantage.
Sound aptitude, outstanding logical reasoning, and analytical skills.
Willingness to learn and take initiative.
Ability to adapt to a fast-paced Agile environment.

Additional Requirements:
• Design, develop, and maintain scalable data models and transformations using DBT in conjunction with Snowflake; ensure data is effectively transformed and loaded from diverse sources into the data warehouse or data lake.
• Implement and manage data models in DBT, guaranteeing accurate data transformation and alignment with business needs.
• Utilize DBT to convert raw, unstructured data into structured datasets, enabling efficient analysis and reporting.
• Write and optimize SQL queries within DBT to enhance data transformation processes and improve overall performance.
• Establish DBT best practices to improve performance, scalability, and reliability.
• Expertise in SQL and a strong understanding of data warehouse concepts and modern data architectures.
• Familiarity with cloud-based platforms (e.g., AWS, Azure, GCP).
• Migrate legacy transformation code into modular DBT data models.
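As a sketch of migrating transformation logic into modular DBT models: dbt also supports Python models, which on Snowflake execute via Snowpark. The example below assumes a dbt-snowflake project with Python models enabled (dbt-core 1.3+); the referenced staging model and its columns are hypothetical.

```python
# models/customer_lifetime_value.py
# Hypothetical dbt Python model; `dbt.ref` returns a Snowpark DataFrame
# when the target warehouse is Snowflake.
import snowflake.snowpark.functions as F

def model(dbt, session):
    dbt.config(materialized="table")

    orders = dbt.ref("stg_orders")  # hypothetical staging model

    # Roll up lifetime order value and order count per customer;
    # dbt materializes the returned DataFrame as a table.
    return (
        orders.group_by("customer_id")
              .agg(F.sum("amount").alias("lifetime_value"),
                   F.count("order_id").alias("order_count"))
    )
```

In practice most dbt models here would stay in SQL; Python models are the escape hatch for logic that is awkward to express in SQL, and keeping each model small and ref-based is what makes the legacy migration modular.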
Posted 2 weeks ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Role: UX Lead
Job Location: Noida/Gurgaon/Hyderabad/Bangalore/Pune
Experience: 5+ Years

Job Roles & Responsibilities:
Lead end-to-end UX: research, wireframes, prototypes, and final UI across web and mobile.
Collaborate with product, engineering, and data teams to integrate UX with AWS/Azure and Databricks insights.
Conduct usability testing, iterate on designs, and ensure accessibility and feature performance.
Mentor junior designers and champion UX best practices within agile squads.
Enforce secure, compliant, and data-governed design standards.

Job Skills & Requirements:
UX Craft: Strong in Figma/Sketch, prototyping, user research, and responsive design.
Cloud Integration: Familiar with AWS/Azure UX patterns and embedding Databricks analytics into workflows.
Testing & Metrics: Skilled in usability studies and leveraging UX analytics.
Collaboration: Excellent stakeholder communication and facilitation.
Process Fit: Agile-savvy, delivery-focused, with mentorship experience.
Visual Design: Solid in IA, hierarchy, and interface consistency.
Posted 2 weeks ago