Get alerts for new jobs matching your selected skills, preferred locations, and experience range. Manage Job Alerts
15.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title: Senior Data Engineering Lead (Databricks)

Company Overview:
At eClerx, we are a leading IT firm specializing in innovative technologies and solutions that drive business transformation. Leveraging expertise in business process management, advanced analytics, and smart automation, we empower our clients to achieve operational excellence and competitive advantage in fast-evolving markets.

Role Overview:
We are seeking a highly experienced Senior Data Engineering Lead with a strong focus on Databricks and cloud-based data engineering to lead our data engineering team. This leadership role requires a visionary who can design, develop, and manage scalable data infrastructure and pipelines, while mentoring and inspiring a team of data engineers. You will work closely with cross-functional teams including data scientists, analysts, and software engineers to enable robust data-driven decision-making and support business goals.

Key Responsibilities:
- Lead and manage a team of data engineers, providing mentorship, technical guidance, and fostering a culture of collaboration and innovation.
- Architect, design, and oversee implementation of large-scale data pipelines, data lakes, and cloud-based data warehouses using Databricks, Apache Spark, and Snowflake.
- Develop and optimize ETL/ELT workflows ensuring performance, reliability, and scalability of data infrastructure.
- Collaborate with business stakeholders, data scientists, and software teams to understand requirements and translate them into scalable, efficient data solutions.
- Implement best practices for data quality, governance, security, and compliance.
- Drive continuous improvement of data engineering processes, standards, and tools across the organization.
- Support presales activities by contributing to RFPs, technical proposals, and client engagements.
- Stay abreast of emerging data technologies and trends, recommending innovative solutions to enhance analytics capabilities.
- Manage resource planning, project prioritization, and delivery timelines ensuring alignment with business objectives.
- Lead performance reviews, identify skill gaps, and champion professional development within the data engineering team.
- Facilitate cross-team communication to streamline data workflows and improve overall delivery.

Qualifications & Skills:
- Bachelor’s or Master’s degree in Computer Science, Data Science, Engineering, or a related discipline.
- Minimum 15 years of professional experience in data engineering with at least 9 years in leadership or senior technical roles.
- Deep hands-on expertise with Databricks and Apache Spark for large-scale data processing.
- Strong programming skills in Python, Scala, or Java.
- Extensive experience with cloud data platforms such as AWS, Azure, or GCP, including services like S3, Redshift, BigQuery, and Snowflake.
- Solid understanding of data modeling, data warehousing, ETL/ELT design, and data lakes.
- Experience with big data technologies like Hadoop, Kafka, and the Databricks ecosystem.
- Knowledge of CI/CD pipelines, data orchestration tools (e.g., Apache Airflow), and data governance best practices.
- Proven experience managing high-performing teams and delivering complex data engineering projects on time.
- Familiarity with analytics solutions and the ability to translate business needs into technical requirements.
- Strong communication skills, capable of engaging with both technical teams and senior leadership.
- Experience supporting presales efforts and client technical discussions is a plus.
Bonus: Exposure to machine learning lifecycle and model deployment on Databricks.
Posted 1 week ago
4.0 - 10.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
We are seeking a Senior/Lead DevOps Engineer – Databricks with strong experience in Azure Databricks to design, implement, and optimize Databricks infrastructure, CI/CD pipelines, and ML model deployment. The ideal candidate will be responsible for Databricks environment setup, networking, cluster management, access control, CI/CD automation, model deployment, asset bundle management, and monitoring. This role requires hands-on experience with DevOps best practices, infrastructure automation, and cloud-native architectures.

Required Skills & Experience
• 4 to 10 years of experience in DevOps with a strong focus on Azure Databricks.
• Hands-on experience with Azure networking, VNET integration, and firewall rules.
• Strong knowledge of Databricks cluster management, job scheduling, and optimization.
• Expertise in CI/CD pipeline development for Databricks and ML models using Azure DevOps, Terraform, or GitHub Actions.
• Experience with Databricks Asset Bundles (DAB) for packaging and deployment.
• Proficiency in RBAC, Unity Catalog, and workspace access control.
• Experience with Infrastructure as Code (IaC) tools like Terraform, ARM Templates, or Bicep.
• Strong scripting skills in Python, Bash, or PowerShell.
• Familiarity with monitoring tools (Azure Monitor, Prometheus, or Datadog).

Preferred Qualifications
• Databricks Certified Associate/Professional Administrator or equivalent certification.
• Experience with AWS or GCP Databricks in addition to Azure.
• Knowledge of Delta Live Tables (DLT), Databricks SQL, and MLflow.
• Exposure to Kubernetes (AKS, EKS, or GKE) for ML model deployment.

Roles & Responsibilities

1. Databricks Infrastructure Setup & Management
• Configure and manage Azure Databricks workspaces, networking, and security.
• Set up networking components like VNET integration, private endpoints, and firewall configurations.
• Implement scalability strategies for efficient resource utilization.
• Ensure high availability, resilience, and security of Databricks infrastructure.

2. Cluster & Capacity Management
• Manage Databricks clusters, including autoscaling, instance selection, and performance tuning.
• Optimize compute resources to minimize costs while maintaining performance.
• Implement cluster policies and governance controls.

3. User & Access Management
• Implement RBAC (Role-Based Access Control) and IAM (Identity and Access Management) for users and services.
• Manage Databricks Unity Catalog and enforce workspace-level access controls.
• Define and enforce security policies across Databricks workspaces.

4. CI/CD Automation for Databricks & ML Models
• Develop and manage CI/CD pipelines for Databricks Notebooks, Jobs, and ML models using Azure DevOps, GitHub Actions, or Jenkins.
• Automate Databricks infrastructure deployment using Terraform, ARM Templates, or Bicep.
• Implement automated testing, version control, and rollback strategies for Databricks workloads.
• Integrate Databricks Asset Bundles (DAB) for standardized and repeatable Databricks deployments.

5. Databricks Asset Bundle Management
• Implement Databricks Asset Bundles (DAB) to package, version, and deploy Databricks workflows efficiently.
• Automate workspace configuration, job definitions, and dependencies using DAB.
• Ensure traceability, rollback, and version control of deployed assets.
• Integrate DAB with CI/CD pipelines for seamless deployment.

6. ML Model Deployment & Monitoring
• Deploy ML models using Databricks MLflow, Azure Machine Learning, or Kubernetes (AKS).
• Optimize model performance and enable real-time inference.
• Implement model monitoring, drift detection, and automated retraining pipelines.

7. Monitoring, Troubleshooting & Performance Optimization
• Set up Databricks monitoring and logging using Azure Monitor, Datadog, or Prometheus.
• Analyze cluster performance metrics, audit logs, and cost insights to optimize workloads.
• Troubleshoot Databricks infrastructure, pipelines, and deployment issues.
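The responsibilities above include deploying ML models with Databricks MLflow and wiring them into CI/CD. As a rough illustration of the kind of automation involved, here is a minimal, hedged Python sketch that trains a toy model and registers it with MLflow; the experiment path, model name, and metric are placeholders, not details from the posting.

```python
# Minimal sketch: log and register a model with MLflow so a downstream CI/CD
# job could promote it. Names ("/Shared/demo-exp", "churn-classifier") are
# illustrative placeholders, not values from the job posting.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("/Shared/demo-exp")  # assumed workspace experiment path
with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_metric("accuracy", acc)
    # Registering the model makes it visible to deployment pipelines.
    mlflow.sklearn.log_model(
        model,
        artifact_path="model",
        registered_model_name="churn-classifier",
    )
```

In a real setup the registered model version would then be promoted (e.g. to a staging alias) by the CI/CD pipeline rather than by hand.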
Posted 1 week ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Overview
We are PepsiCo. PepsiCo is one of the world's leading food and beverage companies with more than $79 billion in net revenue and a global portfolio of diverse and beloved brands. We have a complementary food and beverage portfolio that includes 22 brands that each generate more than $1 billion in annual retail sales. PepsiCo's products are sold in more than 200 countries and territories around the world. PepsiCo's strength is its people. We are over 250,000 game changers, mountain movers and history makers, located around the world, and united by a shared set of values and goals. We believe that acting ethically and responsibly is not only the right thing to do, but also the right thing to do for our business. At PepsiCo, we aim to deliver top-tier financial performance over the long term by integrating sustainability into our business strategy, leaving a positive imprint on society and the environment. We call this Winning with Purpose. For more information on PepsiCo and the opportunities it holds, visit www.pepsico.com.

The Data Science Team works on developing Machine Learning (ML) and Artificial Intelligence (AI) projects. The specific scope of this role is to develop ML solutions in support of ML/AI projects using big analytics toolsets in a CI/CD environment. Analytics toolsets may include DS tools, Spark, Databricks, and other technologies offered by Microsoft Azure or open-source toolsets. This role will also help automate the end-to-end cycle with Azure Machine Learning Services and Pipelines.

PepsiCo Data Analytics & AI Overview:
With data deeply embedded in our DNA, PepsiCo Data, Analytics and AI (DA&AI) transforms data into consumer delight. We build and organize business-ready data that allows PepsiCo’s leaders to solve their problems with the highest degree of confidence. Our platform of data products and services ensures data is activated at scale. This enables new revenue streams, deeper partner relationships, new consumer experiences, and innovation across the enterprise.

The Data Science Pillar in DA&AI will be the organization where Data Scientists and ML Engineers report to in the broader D+A Organization. DS will also lead, facilitate, and collaborate with the larger DS community in PepsiCo. DS will provide the talent for the development and support of DS components and their life cycle within DA&AI Products, and will support “pre-engagement” activities as requested and validated by the prioritization framework of DA&AI.

Data Scientist: Hyderabad and Gurugram
You will be part of a collaborative interdisciplinary team around data, where you will be responsible for our continuous delivery of statistical/ML models. You will work closely with process owners, product owners and final business users. This will provide you the correct visibility and understanding of the criticality of your developments.

Responsibilities
- Delivery of key Advanced Analytics/Data Science projects within time and budget, particularly around DevOps/MLOps and Machine Learning models in scope.
- Active contributor to code and development in projects and services.
- Partner with data engineers to ensure data access for discovery and that proper data is prepared for model consumption.
- Partner with ML engineers working on industrialization.
- Communicate with business stakeholders in the process of service design, training and knowledge transfer.
- Support large-scale experimentation and build data-driven models.
- Refine requirements into modelling problems.
- Influence product teams through data-based recommendations.
- Research state-of-the-art methodologies.
- Create documentation for learnings and knowledge transfer.
- Create reusable packages or libraries.
- Ensure on-time and on-budget delivery that satisfies project requirements, while adhering to enterprise architecture standards.
- Leverage big data technologies to help process data and build scaled data pipelines (batch to real time).
- Implement the end-to-end ML lifecycle with Azure Machine Learning and Azure Pipelines.
- Automate ML model deployments.

Qualifications
- BE/B.Tech in Computer Science, Maths, or related technical fields.
- Overall 5+ years of experience working as a Data Scientist.
- 4+ years’ experience building solutions in the commercial or supply chain space.
- 4+ years working in a team to deliver production-level analytic solutions.
- Fluent in git (version control); understanding of Jenkins and Docker is a plus.
- Fluent in SQL syntax.
- 4+ years’ experience in statistical/ML techniques to solve supervised (regression, classification) and unsupervised problems.
- 4+ years’ experience in developing business-problem-related statistical/ML modeling with industry tools, with a primary focus on Python or PySpark development.

Skills, Abilities, Knowledge:
- Data Science – Hands-on experience and strong knowledge of building supervised and unsupervised machine learning models; knowledge of time series/demand forecast models is a plus.
- Programming Skills – Hands-on experience in statistical programming languages like Python and PySpark, and database query languages like SQL.
- Statistics – Good applied statistical skills, including knowledge of statistical tests, distributions, regression, and maximum likelihood estimators.
- Cloud (Azure) – Experience in Databricks and ADF is desirable; familiarity with Spark, Hive, and Pig is an added advantage.
- Business storytelling and communicating data insights in a business-consumable format; fluent in one visualization tool.
- Strong communication and organizational skills, with the ability to deal with ambiguity while juggling multiple priorities.
- Experience with Agile methodology for teamwork and analytics ‘product’ creation.
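The posting above centres on supervised and unsupervised modelling (with demand forecasting called out as a plus). As a hedged illustration only, the following Python sketch shows a baseline supervised regression workflow on synthetic data; the column names, model choice, and metric are assumptions, not anything specified by PepsiCo.

```python
# Minimal sketch of a supervised regression baseline (demand-forecast style).
# Synthetic data stands in for real features; names are illustrative only.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
df = pd.DataFrame({
    "price": rng.uniform(1.0, 5.0, 2_000),
    "promo_flag": rng.integers(0, 2, 2_000),
    "week_of_year": rng.integers(1, 53, 2_000),
})
# Synthetic target: demand falls with price and rises with promotions.
df["units_sold"] = (
    500 - 60 * df["price"] + 80 * df["promo_flag"] + rng.normal(0, 20, 2_000)
)

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="units_sold"), df["units_sold"], random_state=42
)

model = GradientBoostingRegressor(random_state=42)
model.fit(X_train, y_train)
mae = mean_absolute_error(y_test, model.predict(X_test))
print(f"Baseline MAE: {mae:.1f} units")
```

In practice a model like this would be wrapped in an Azure ML or Azure Pipelines job so training and deployment are automated end to end, as the responsibilities describe.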
Posted 1 week ago
0.0 - 2.0 years
5 - 12 Lacs
Pune, Maharashtra
On-site
Company Name: PibyThree Consulting Pvt Ltd.
Job Title: Team Lead - Data Migration and Snowflake
Skills: Azure Data Factory, Databricks, PySpark, Snowflake & Data Migration
Location: Pune, Maharashtra
Website: PibyThree
Start Date:

About Us:
Πby3 is a cloud transformation company enabling enterprises for the future. We are a nimble, highly dynamic, focused team with a passion to serve our clients with the utmost trust and ownership. Our expertise in technology, built on vast experience over the years, helps clients get solutions with optimized cost and reduced risk.

Job Description:
We are looking for an experienced Team Lead – Data Warehouse Migration, Data Engineering & BI to lead enterprise-level data transformation initiatives. The ideal candidate will have deep expertise in migration, Snowflake, Power BI and end-to-end data engineering using tools like Azure Data Factory, Databricks, and PySpark.

Key Responsibilities:
- Lead and manage data warehouse migration projects, including extraction, transformation, and loading (ETL/ELT) across legacy and modern platforms.
- Architect and implement scalable Snowflake data warehousing solutions for analytics and reporting.
- Develop and schedule robust data pipelines using Azure Data Factory and Databricks.
- Write efficient and maintainable PySpark code for batch and real-time data processing.
- Design and develop dashboards and reports using Power BI to support business insights.
- Ensure data accuracy, security, and consistency throughout the project lifecycle.
- Collaborate with stakeholders to understand data and reporting requirements.
- Mentor and lead a team of data engineers and BI developers.
- Manage project timelines, deliverables, and team performance effectively.

Must-Have Skills:
- Data Migration: Hands-on experience with large-scale data migration, reconciliation, and transformation.
- Snowflake: Data modeling, performance tuning, ELT/ETL development, role-based access control.
- Azure Data Factory: Pipeline development, integration services, linked services.
- Databricks: Spark SQL, notebooks, cluster management, orchestration.
- PySpark: Advanced transformations, error handling, and optimization techniques.
- Power BI: Data visualization, DAX, Power Query, dashboard/report publishing and maintenance.

Preferred Skills:
- Familiarity with Agile methodologies and sprint-based development.
- Experience working with CI/CD for data workflows.
- Ability to lead client discussions and manage stakeholder expectations.
- Strong analytical and problem-solving abilities.

Job Type: Full-time
Pay: ₹500,000.00 - ₹1,200,000.00 per year
Schedule: Day shift
Ability to commute/relocate: Pune, Maharashtra: Reliably commute or plan to relocate before starting work (Preferred)
Education: Bachelor's (Preferred)
Experience: total work: 4 years (Preferred); PySpark: 2 years (Required); Azure Data Factory: 2 years (Required); Databricks: 2 years (Required)
Work Location: In person
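Since the must-have skills above call out migration reconciliation in PySpark, here is a minimal, hedged sketch of a post-migration reconciliation check; the table and column names are placeholders invented for illustration, not anything from the posting.

```python
# Minimal sketch of a post-migration reconciliation check in PySpark:
# compare row counts and a column aggregate between a legacy source table and
# its migrated copy. Table and column names are illustrative placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("migration-reconciliation").getOrCreate()

source = spark.table("legacy_dw.sales")        # assumed source table
target = spark.table("snowflake_stage.sales")  # assumed migrated table

checks = {
    "row_count": (source.count(), target.count()),
    "revenue_sum": (
        source.agg(F.sum("revenue")).first()[0],
        target.agg(F.sum("revenue")).first()[0],
    ),
}

for name, (src_val, tgt_val) in checks.items():
    status = "OK" if src_val == tgt_val else "MISMATCH"
    print(f"{name}: source={src_val} target={tgt_val} -> {status}")
```

A real reconciliation suite would add per-partition counts and hash comparisons, but the shape of the check is the same.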
Posted 1 week ago
10.0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
🚀 We’re Hiring: Senior Data Engineer (Remote – India | Full-time)
We are helping our client hire a Senior Data Engineer with over 10 years of experience in modern data platforms. This is a remote role open across India, and available on both a full-time and contract basis.
💼 Position: Senior Data Engineer
🌍 Location: Remote (India)
📅 Type: Full-Time / Contract
📊 Experience: 10+ Years
🔧 Must-Have Skills: Data Engineering, Data Warehousing, ETL; Azure Databricks & Azure Data Factory (ADF); PySpark, SparkSQL; Python, SQL
👀 What We’re Looking For: a strong background in building and managing data pipelines; hands-on experience in cloud platforms, especially Azure; ability to work independently and collaborate in distributed teams.
📩 How to Apply: Please send your resume to connect@infosprucetech.com with the subject line "Senior Data Engineer – Remote India".
⚠️ Along with your resume, kindly include the following details: Full Name, Mobile Number, Total Experience, Relevant Experience, Current CTC, Expected CTC, Notice Period, Current Location, and whether you have a PF account (Yes/No).
#DataEngineer #AzureDatabricks #ADF #PySpark #SQL #RemoteJobsIndia #HiringNow #Strive4X #FullTimeJobs #IndiaJobs
Posted 1 week ago
3.0 years
0 Lacs
Navi Mumbai, Maharashtra, India
On-site
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum 3 years of experience is required.
Educational Qualification: 15 years of full-time education

Summary:
As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with team members to understand project needs, developing innovative solutions, and ensuring that applications are optimized for performance and usability. You will engage in problem-solving discussions and contribute to the overall success of the projects by leveraging your expertise in application development.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute to providing solutions to work-related problems.
- Assist in the documentation of application specifications and requirements.
- Collaborate with cross-functional teams to ensure seamless integration of applications.

Professional & Technical Skills:
- Must-have: Proficiency in Databricks Unified Data Analytics Platform.
- Strong understanding of data processing and analytics workflows.
- Experience with cloud-based data solutions and architectures.
- Familiarity with programming languages relevant to application development.
- Ability to troubleshoot and optimize application performance.

Additional Information:
- The candidate should have a minimum of 3 years of experience with the Databricks Unified Data Analytics Platform.
- This position is based in Mumbai.
- 15 years of full-time education is required.
Posted 1 week ago
18.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
We are seeking an experienced and dynamic Delivery Lead to oversee the successful delivery of software services by multiple teams operating across locations for our global clients. The ideal candidate will possess exceptional leadership, organizational, and communication skills, with a strong background in managing large-scale, distributed software development teams working for Healthcare Payer customers. This role is crucial to ensuring the high-quality delivery of our software services and maintaining strong relationships with our global clients.

Education and Experience:
• Bachelor’s degree in Computer Science, Information Technology, Business Administration, or a related field; advanced degree preferred.
• 18+ years of relevant IT experience, with at least 8+ years working in the healthcare industry.
• 6+ years of experience in software delivery management, with a proven track record of managing large-scale, distributed teams.
• Experience working with global clients in a software services environment.

Location: Mumbai, Pune, Hyderabad, Bangalore

Skills and Competencies:
• Strong leadership and people management skills, with the ability to inspire and motivate teams.
• Strong healthcare knowledge with specialization in Payer workflows such as Enrollment, Billing, and Claims.
• Excellent project management skills, with a deep understanding of Agile/Scrum methodologies.
• Superior communication and interpersonal skills, with the ability to build strong relationships with clients and stakeholders.
• Prior delivery management experience in digital transformation, cloud modernization, and data platform modernization.
• Prior development experience in .NET or Java, REST services, SQL, Databricks/Snowflake, Medallion Architecture, AWS/GCP/Azure.
• Hands-on experience in delivery management and business development activities.
• Strategic thinking and problem-solving skills, with a focus on delivering high-quality results.
• Ability to manage multiple projects and priorities in a fast-paced environment.
• Knowledge of industry best practices and regulatory requirements.

Key Responsibilities:

Leadership and Team Management:
• Provide leadership and guidance to multiple teams across different locations, ensuring alignment with the company's strategic goals and client requirements.
• Foster a collaborative and inclusive team environment that encourages innovation, continuous improvement, and high performance.
• Establish an innovation culture and enable continuous improvement and ideation.
• Conduct regular performance reviews, provide constructive feedback, and support the professional development of team members.

Delivery Management:
• Oversee the end-to-end delivery of software services, ensuring projects are completed on time, within budget, and to the highest quality standards.
• Implement and maintain best practices in project management, including Agile/Scrum methodologies.
• Coordinate and prioritize tasks across teams, manage dependencies, and resolve any issues or conflicts that arise.
• Develop and manage project revenue, forecast future cash flow and margins, take interventions to improve margins/cost-of-delivery, and ensure projects are delivered with healthy margins.

Client Relationship:
• Serve as the primary point of contact for global clients, ensuring their needs and expectations are met or exceeded.
• Build and maintain strong, long-term client relationships through regular communication, status updates, and feedback sessions.
• Address any client concerns or escalations promptly and effectively.

Strategic Planning and Execution:
• Develop and execute strategic plans for the delivery of software services, aligning with the company's overall business objectives.
• Identify opportunities for improving delivery processes, increasing efficiency, and enhancing service quality.
• Collaborate with senior management to set goals, monitor progress, and report on key performance indicators (KPIs).
• Collaborate with the sales and marketing teams to identify new business opportunities and support pre-sales activities.

Risk Management:
• Identify and mitigate risks associated with project delivery, including resource constraints, technical challenges, and client dependencies.
• Develop contingency plans to address potential issues and ensure business continuity.

Resource Management:
• Manage resource allocation across teams to ensure optimal utilization and productivity.
• Work with recruitment and HR teams to attract, hire, and retain top talent.
• Ensure teams have the necessary tools, training, and support to perform their roles effectively.
Posted 1 week ago
6.0 - 10.0 years
17 - 32 Lacs
Ghaziabad, Hyderabad, Delhi / NCR
Hybrid
Job Role: Azure Data Engineer
Location: Hyderabad
Experience: 5 to 10 years
Skills Required: Azure products and services (Azure Data Lake Storage, Azure Data Factory, Azure Functions, Event Hub, Azure Stream Analytics, Azure Databricks, etc.), PySpark, SQL, Python.

Job Responsibilities:
- Work closely with source data application teams and product owners to design, implement and support analytics solutions that provide insights for better decisions.
- Implement data migration and data engineering solutions using Azure products and services (Azure Data Lake Storage, Azure Data Factory, Azure Functions, Event Hub, Azure Stream Analytics, Azure Databricks, etc.) and traditional data warehouse tools.
- Perform multiple aspects of the development lifecycle: design, cloud engineering (infrastructure, network, security, and administration), ingestion, preparation, data modeling, testing, CI/CD pipelines, performance tuning, deployments, consumption, BI, alerting, and production support.
- Provide technical leadership and collaborate within a team environment as well as work independently.
- Be part of a DevOps team that completely owns and supports its product.
- Implement batch and streaming data pipelines using cloud technologies.
- Lead development of coding standards, best practices, and privacy and security guidelines.
- Mentor others on technical and domain skills to create multi-functional teams.

Minimum Qualifications:
1. Bachelor's degree in Computer Science, Computer Engineering, Technology, Information Systems (CIS/MIS), Engineering or a related technical discipline, or equivalent experience/training.
2. 3 years of data engineering experience using SQL.
3. 2 years of cloud development experience (preferably Microsoft Azure), including Azure Event Hub, Azure Data Factory, Azure Databricks, Azure DevOps, Azure Blob Storage, Azure Power Apps, and Power BI.
4. Combination of development, administration, and support experience in several of the following tools/platforms:
   a. Scripting: Python, PySpark, Unix, SQL
   b. Data platforms: Teradata, SQL Server
   c. Azure Data Explorer (administration skills are a plus)
   d. Azure cloud technologies
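The responsibilities above mention implementing batch and streaming pipelines on Databricks. As a hedged, simplified sketch of the streaming half, the PySpark snippet below reads JSON files landing in a storage path and appends them to a Delta table; the schema, paths, and table names are invented for illustration, not details from the posting.

```python
# Minimal sketch of a streaming ingestion pipeline with Spark Structured
# Streaming on Databricks: read landing JSON files and append to a Delta table.
# Paths and table names are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.types import (DoubleType, StringType, StructField, StructType,
                               TimestampType)

spark = SparkSession.builder.appName("streaming-ingest").getOrCreate()

schema = StructType([
    StructField("device_id", StringType()),
    StructField("reading", DoubleType()),
    StructField("event_time", TimestampType()),
])

events = (
    spark.readStream
    .schema(schema)
    .json("abfss://landing@mystorageacct.dfs.core.windows.net/telemetry/")  # assumed path
)

# Starts the streaming query; the checkpoint makes the pipeline restartable.
query = (
    events.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/telemetry")  # assumed location
    .outputMode("append")
    .toTable("bronze.telemetry_events")  # assumed target table
)
```

A production pipeline would typically front this with Event Hub or Auto Loader and add schema-drift handling, but the read-transform-write shape stays the same.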
Posted 1 week ago
4.0 - 6.0 years
12 - 18 Lacs
Chennai, Bengaluru
Work from Office
Key Skills: Python, SQL, PySpark, Databricks, AWS, Data Pipeline, Data Integration, Airflow, Delta Lake, Redshift, S3, Data Security, Cloud Platforms, Life Sciences.

Roles & Responsibilities:
- Develop and maintain robust, scalable data pipelines for ingesting, transforming, and optimizing large datasets from diverse sources.
- Integrate multi-source data into performant, query-optimized formats such as Delta Lake, Redshift, and S3.
- Tune data processing jobs and storage layers to ensure cost efficiency and high throughput.
- Automate data workflows using orchestration tools like Airflow and Databricks APIs for ingestion, transformation, and reporting.
- Implement data validation and quality checks to ensure reliable and accurate data.
- Manage and optimize AWS and Databricks infrastructure to support scalable data operations.
- Lead cloud platform migrations and upgrades, transitioning legacy systems to modern, cloud-native solutions.
- Enforce security best practices and ensure compliance with regulatory standards (e.g., IAM policies, data encryption).
- Collaborate with cross-functional teams, including data scientists, analysts, and business stakeholders, to deliver data solutions.

Experience Requirement:
- 4-6 years of hands-on experience in data engineering with expertise in Python, SQL, PySpark, Databricks, and AWS.
- Strong background in designing and building data pipelines, and optimizing data storage and processing.
- Proficiency in using cloud services such as AWS (S3, Redshift, Lambda) for building scalable data solutions.
- Hands-on experience with containerized environments and orchestration tools like Airflow for automating data workflows.
- Expertise in data migration strategies and transitioning legacy data systems to modern cloud platforms.
- Experience with performance tuning, cost optimization, and lifecycle management of cloud data solutions.
- Familiarity with regulatory compliance (GDPR, HIPAA) and security practices (IAM, encryption).
- Experience in the Life Sciences or Pharma domain is highly preferred, with an understanding of industry-specific data requirements.
- Strong problem-solving abilities with a focus on delivering high-quality data solutions that meet business needs.

Education: Any graduation.
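Because the role above leans on Airflow for automating ingestion-to-reporting workflows, here is a minimal, hedged Airflow DAG sketch in Python. The DAG id, schedule, and task bodies are placeholders; in a real deployment the stubs would trigger Databricks jobs or PySpark scripts.

```python
# Minimal sketch of an Airflow DAG orchestrating a daily ingest-then-report
# workflow. Task bodies are stubs; names and schedule are illustrative only.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest_to_delta(**context):
    # Placeholder: trigger a Databricks job or run a PySpark ingest here.
    print("ingesting raw files into Delta Lake")


def publish_report(**context):
    # Placeholder: refresh curated tables / downstream reporting extracts.
    print("publishing curated tables for reporting")


with DAG(
    dag_id="daily_ingest_and_report",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="ingest_to_delta", python_callable=ingest_to_delta)
    report = PythonOperator(task_id="publish_report", python_callable=publish_report)
    ingest >> report
```

The `ingest >> report` dependency is what gives the orchestrator its value: the reporting step only runs once ingestion succeeds.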
Posted 1 week ago
5.0 years
0 Lacs
Coimbatore South, Tamil Nadu, India
On-site
Job Title: Senior Data Engineer
Location: Coimbatore
Experience: 5+ Years
Job Type: Full-Time

Key Responsibilities
- Design, develop, and maintain robust data pipelines using Airflow and AWS services.
- Implement and manage data warehousing using Databricks and PostgreSQL.
- Automate recurring tasks using Git and Jenkins.
- Build and optimize ETL processes leveraging AWS tools like S3, Lambda, AppFlow, and DMS.
- Create interactive dashboards and reports using Looker.
- Collaborate with various teams to ensure seamless integration of data infrastructure.
- Ensure the performance, reliability, and scalability of data systems.
- Use Jenkins for CI/CD and task automation.

Required Skills & Expertise
- Experience as a senior individual contributor on data-heavy projects.
- Strong command of building data pipelines using Python and PySpark.
- Expertise in relational database modeling, ideally with time-series data.
- Proficiency in AWS services such as S3, Lambda, and Airflow.
- Hands-on experience with SQL and database scripting.
- Familiarity with Databricks and ThoughtSpot.
- Experience using Jenkins for automation.

Nice to Have
- Proficiency in data analytics/BI tools such as Power BI, Tableau, Looker, or ThoughtSpot.
- Experience with AWS Glue, AppFlow, and data transfer services.
- Exposure to Terraform for infrastructure-as-code.
- Experience in data quality testing.
- Previous interaction with U.S.-based stakeholders.
- Strong ability to work independently and lead tasks effectively.

Qualifications
- Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field.
- 5+ years of relevant experience.

Tech Stack
- Databricks
- PostgreSQL
- Python & PySpark
- AWS stack (S3, Lambda, Airflow, DMS, etc.)
- Power BI / Tableau / Looker / ThoughtSpot
- Git / Jenkins / CI-CD tools
Posted 1 week ago
6.0 - 9.0 years
18 - 25 Lacs
Chennai
Work from Office
Key Skills: Python, SQL, PySpark, Databricks, AWS, Data Pipeline, Data Governance, Data Security, Leadership, Cloud Platforms, Life Sciences, Migration, Airflow.

Roles & Responsibilities:
- Lead a team of data engineers and developers, defining technical strategy, best practices, and architecture for data platforms.
- Architect, develop, and manage scalable, secure, and high-performing data solutions on AWS and Databricks.
- Oversee the design and development of robust data pipelines for ingestion, transformation, and storage of large-scale datasets.
- Enforce data validation, lineage, and quality checks across the data lifecycle, defining standards for metadata, cataloging, and governance.
- Design automated workflows using Airflow, Databricks Jobs/APIs, and other orchestration tools for end-to-end data operations.
- Implement performance tuning strategies, cost optimization best practices, and efficient cluster configurations on AWS/Databricks.
- Define and enforce data security standards, IAM policies, and ensure compliance with industry-specific regulatory frameworks.
- Work closely with business users, analysts, and data scientists to translate requirements into scalable technical solutions.
- Drive strategic data migrations from on-prem/legacy systems to cloud-native platforms with minimal risk and downtime.
- Mentor junior engineers, contribute to talent development, and ensure continuous learning within the team.

Experience Requirement:
- 6-9 years of hands-on experience in data engineering with expertise in Python, SQL, PySpark, Databricks, and AWS.
- Strong leadership experience in data engineering or data architecture roles, with a proven track record of leading teams and delivering large-scale data solutions.
- Expertise in designing and developing data pipelines, optimizing performance, and ensuring data quality.
- Solid experience with cloud platforms (AWS, Databricks), data governance, and security best practices.
- Experience in data migration strategies and leading transitions from on-premises to cloud-based environments.
- Experience in the Life Sciences or Pharma domain is highly preferred, with a deep understanding of industry-specific data requirements.
- Strong communication and interpersonal skills with the ability to collaborate across teams and engage stakeholders.

Education: Any graduation.
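The responsibilities above put weight on data validation and quality checks across the lifecycle. Below is a minimal, hedged PySpark sketch of such a quality gate; the table name and key column are hypothetical, chosen only to make the pattern concrete.

```python
# Minimal sketch of a data-quality gate: validate a curated dataset for null
# and duplicate keys before publishing it. Table/column names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.table("curated.patient_claims")  # assumed curated table

total_rows = df.count()
null_keys = df.filter(F.col("claim_id").isNull()).count()
duplicate_keys = (
    df.groupBy("claim_id").count().filter(F.col("count") > 1).count()
)

failures = []
if null_keys > 0:
    failures.append(f"{null_keys} rows with null claim_id")
if duplicate_keys > 0:
    failures.append(f"{duplicate_keys} duplicated claim_id values")

if failures:
    # Failing loudly lets the orchestrator (e.g. Airflow) stop downstream tasks.
    raise ValueError(f"Data quality checks failed on {total_rows} rows: {failures}")
print(f"Data quality checks passed for {total_rows} rows")
```

Raising on failure is deliberate: it converts a silent data problem into a visible pipeline failure that governance processes can act on.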
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
hyderabad, telangana
On-site
We are looking for an experienced and highly skilled developer with 5-8 years of total experience and 4-5 years of relevant experience in Azure cloud migration. Required expertise includes Azure, Databricks, Core Java, REST web services, Oracle/SQL Server, GitHub, MQ, Kafka, Python, the Spring framework, and Spring Boot, along with a strong background in writing complex SQL queries, data analysis, and data modeling. Experience working in an agile delivery environment and skills in CI/CD, test automation, and deployment are expected. Familiarity with Hadoop Distributed File System (HDFS) is essential; knowledge of .NET and C++ would be a plus.

Primary responsibilities include working with Azure Databricks, writing complex SQL queries, data analysis, and data modeling, collaborating in an agile delivery environment, and participating in CI/CD, test automation, and deployment processes.

The vendor billing range for this position is Rs 8,000-8,500/day. The work location is Bangalore/Hyderabad in a hybrid setup. A client interview is required, and a pre-onboarding BGV (background verification) outcome is necessary.
Posted 1 week ago
0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Overview
We are seeking an experienced Data Modeller with expertise in designing and implementing data models for modern data platforms. This role requires deep knowledge of data modeling techniques, healthcare data structures, and experience with Databricks Lakehouse architecture. The ideal candidate will have a proven track record of translating complex business requirements into efficient, scalable data models that support analytics and reporting needs.

About The Role
As a Data Modeller, you will be responsible for designing and implementing data models for our Databricks-based Modern Data Platform. You will work closely with business stakeholders, data architects, and data engineers to create logical and physical data models that support the migration from legacy systems to the Databricks Lakehouse architecture, ensuring data integrity, performance, and compliance with healthcare industry standards.

Key Responsibilities
- Design and implement logical and physical data models for Databricks Lakehouse implementations
- Translate business requirements into efficient, scalable data models
- Create and maintain data dictionaries, entity relationship diagrams, and model documentation
- Develop dimensional models, data vault models, and other modeling approaches as appropriate
- Support the migration of data models from legacy systems to the Databricks platform
- Collaborate with data architects to ensure alignment with the overall data architecture
- Work with data engineers to implement and optimize data models
- Ensure data models comply with healthcare industry regulations and standards
- Implement data modeling best practices and standards
- Provide guidance on data modeling approaches and techniques
- Participate in data governance initiatives and data quality assessments
- Stay current with evolving data modeling techniques and industry trends

Qualifications
- Extensive experience in data modeling for analytics and reporting systems
- Strong knowledge of dimensional modeling, data vault, and other modeling methodologies
- Experience with the Databricks platform and Delta Lake architecture
- Expertise in healthcare data modeling and industry standards
- Experience migrating data models from legacy systems to modern platforms
- Strong SQL skills and experience with data definition languages
- Understanding of data governance principles and practices
- Experience with data modeling tools and technologies
- Knowledge of performance optimization techniques for data models
- Bachelor's degree in Computer Science, Information Systems, or a related field; advanced degree preferred
- Professional certifications in data modeling or related areas

Technical Skills
- Data modeling methodologies (dimensional, data vault, etc.)
- Databricks platform and Delta Lake
- SQL and data definition languages
- Data modeling tools (erwin, ER/Studio, etc.)
- Data warehousing concepts and principles
- ETL/ELT processes and data integration
- Performance tuning for data models
- Metadata management and data cataloging
- Cloud platforms (AWS, Azure, GCP)
- Big data technologies and distributed computing

Healthcare Industry Knowledge
- Healthcare data structures and relationships
- Healthcare terminology and coding systems (ICD, CPT, SNOMED, etc.)
- Healthcare data standards (HL7, FHIR, etc.)
- Healthcare analytics use cases and requirements
- Optionally, healthcare regulatory requirements (HIPAA, HITECH, etc.)
- Clinical and operational data modeling challenges
- Population health and value-based care data needs

Personal Attributes
- Strong analytical and problem-solving skills
- Excellent attention to detail and data quality focus
- Ability to translate complex business requirements into technical solutions
- Effective communication skills with both technical and non-technical stakeholders
- Collaborative approach to working with cross-functional teams
- Self-motivated with the ability to work independently
- Continuous learner who stays current with industry trends

What We Offer
- Opportunity to design data models for cutting-edge healthcare analytics
- Collaborative and innovative work environment
- Competitive compensation package
- Professional development opportunities
- Work with leading technologies in the data space

This position requires a unique combination of data modeling expertise, technical knowledge, and healthcare industry understanding. The ideal candidate will have demonstrated success in designing efficient, scalable data models and a passion for creating data structures that enable powerful analytics and insights.
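To make the dimensional-modeling-on-Delta-Lake idea in the posting above concrete, here is a small, hedged sketch of a star-schema layout built with Spark SQL from Python. The schema names, tables, and columns are invented for illustration and are not taken from the posting.

```python
# Minimal sketch of a star schema on Delta Lake: one dimension and one fact
# table. Names and columns are illustrative placeholders only.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("dimensional-model").getOrCreate()

spark.sql("""
    CREATE TABLE IF NOT EXISTS analytics.dim_patient (
        patient_key    BIGINT,
        patient_id     STRING,
        birth_date     DATE,
        gender         STRING,
        effective_from DATE,
        effective_to   DATE
    ) USING DELTA
""")

spark.sql("""
    CREATE TABLE IF NOT EXISTS analytics.fact_encounter (
        encounter_key  BIGINT,
        patient_key    BIGINT,          -- foreign key to dim_patient
        encounter_date DATE,
        diagnosis_code STRING,          -- e.g. an ICD-10 code
        allowed_amount DECIMAL(12,2)
    ) USING DELTA
    PARTITIONED BY (encounter_date)
""")
```

Keeping descriptive attributes in the dimension and measures plus keys in the fact table is what lets reporting queries join once and aggregate cheaply.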
Posted 1 week ago
7.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Director – Supply Chain Process Analysis & Optimization (SCPAO), GCL-F

Introduction to role
The Supply Chain Process Analysis and Optimization (SCPAO) Director will lead a dynamic team dedicated to developing and implementing digital solutions that enhance AstraZeneca's Global Supply Chain operations. This role is critical to driving digital transformation projects aimed at optimizing processes and improving efficiency across various supply chain functions. The SCPAO Director will collaborate closely with multiple stakeholders, both internally and externally, to ensure that strategic digital initiatives are successfully executed and aligned with the overall Digital Strategy.

Key Responsibilities

Project Management:
- Partner with various Supply Chain functions to understand, define, and scope digital project demands, ensuring clarity in objectives and deliverables.
- Develop comprehensive business cases that articulate the return on investment, business value, and strategic alignment of proposed initiatives.
- Secure funding for larger initiatives by championing their value to senior leadership and aligning them with strategic priorities.
- Oversee the delivery of custom solutions, coordinating with IT and other stakeholders to ensure alignment with the Digital Strategy roadmap.
- Implement robust business benefit tracking mechanisms to evaluate the success and impact of solutions within the team.

Team Leadership:
- Provide strategic direction and leadership to the SCPAO team, ensuring alignment with the overarching Digital Strategy and business objectives.
- Develop the capabilities of team members to meet evolving project needs, focusing on skill enhancement and resource availability.
- Foster a collaborative environment where team members are encouraged to engage with peers and stakeholders to identify innovative solutions to complex problems.
- Implement performance metrics and reporting systems to monitor team progress and impact.

Consultancy:
- Act as a technical advisor to the Supply Chain Digital and Analytics (SCDNA) team, bringing supply chain expertise and an understanding of process mining and process automation to stakeholder discussions and informing strategic decision-making.
- Facilitate discussions on the advantages and disadvantages of off-the-shelf solutions versus custom builds, helping stakeholders make informed decisions.

Innovation:
- Continuously scan the technological horizon for new process mining, process optimization, and automation solutions and innovations that can be integrated into the Digital Strategy.
- Develop and showcase innovative use cases that demonstrate the potential of new technologies to the broader Global Supply Chain and Strategy (GSC&S) organization.

Stakeholder Management:
- Manage day-to-day relationships with stakeholders, maintaining alignment on project progress and managing escalations and exceptions.
- When necessary, guide reprioritization decisions within the team, seeking input from leadership to drive decision-making and ensure resource alignment.

Scope
This role has a global remit and will operate internationally to support the digital strategy across AstraZeneca's Global Supply Chain and Strategy, engaging with diverse teams and projects worldwide.

Essential
- A Bachelor's degree in a quantitative field such as Supply Chain, engineering, economics, business administration, Data Science, Informatics, computer science, statistics, or a related discipline.
- A minimum of 7 years of experience leading supply chain projects in sophisticated and complex environments, focusing on demand planning, forecasting, and logistics.
- At least 2 years of experience in process optimization and automation projects in a supply chain setting (procurement, planning, manufacturing, logistics), including successful implementation of Robotic Process Automation (RPA) solutions to digitize operational workflows and reduce manual processes.
- Experience managing teams within data-driven, digital, or analytics-heavy environments, with a focus on process optimization and efficiency improvements.
- Expertise in statistical analysis, machine learning, and predictive modeling.
- Proficiency in data visualization tools (e.g., Tableau, Power BI).
- Hands-on experience with platforms such as MS Azure, Databricks, and cloud services, and the ability to stay updated with the latest D&A technology.
- Proficient knowledge of key programming fundamentals in Python, Spark, SQL, or similar languages.
- Excellent communication and interpersonal skills, with the ability to influence and collaborate across different levels of the organization.
- Ability to lead and manage negotiations across complex groups to a target outcome.

Desirable
- Experience in a consulting role, whether internal or external, providing insights and strategies to drive business improvements.
- Experience in process mining with tools such as Celonis, Axon, or equivalent, enabling deep analysis and enhancement of business process efficiency.
- Prior experience working within a global team setting and/or within the pharmaceutical industry.
- Experience in life sciences and healthcare and working across the drug research and development lifecycle.
- Experience in Agentic AI for workflow automation.

When we put unexpected teams in the same room, we unleash bold thinking with the power to inspire life-changing medicines. In-person working gives us the platform we need to connect, work at pace and challenge perceptions. That's why we work, on average, a minimum of three days per week from the office. But that doesn't mean we're not flexible. We balance the expectation of being in the office while respecting individual flexibility. Join us in our unique and ambitious world.

AstraZeneca is where technology meets an inclusive mindset to cross international boundaries and develop a leading ecosystem. We work in cross-functional teams at scale, bringing together the best minds from across the globe to uncover new solutions. Here we think holistically about how to apply technology, building partnerships inside and out. We drive simplicity and efficiencies to make a real difference. Ready to make an impact? Apply now!

Date Posted: 04-Jul-2025
Closing Date:

AstraZeneca embraces diversity and equality of opportunity. We are committed to building an inclusive and diverse team representing all backgrounds, with as wide a range of perspectives as possible, and harnessing industry-leading skills. We believe that the more inclusive we are, the better our work will be. We welcome and consider applications to join our team from all qualified candidates, regardless of their characteristics. We comply with all applicable laws and regulations on non-discrimination in employment (and recruitment), as well as work authorization and employment eligibility verification requirements.
Posted 1 week ago
10.0 - 15.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Purpose
GMR seeks a visionary, technically skilled leader to transform our SSC into a Global Capability Centre (GCC). This role will align data and AI strategies across GMR’s businesses—airports, energy, infrastructure—to enable enterprise-wide intelligence and automation.

Key Accountabilities
- Strategy & Vision: Create GMR’s data and AI roadmap. Develop models and governance frameworks to expand from SSC to GCC, focusing on data and AI services.
- Data & AI Architecture: Lead cloud-native data platform development with IT, scaling and maintaining AI/ML applications. Design a unified platform for data engineers and analysts to process and analyse large-scale data. (Experience with Kafka, Snowflake, DBT, Google Pub/Sub or similar is beneficial.)
- Agentic AI & Automation: Deploy Agentic AI systems for autonomous workflows and integrate RPA tools like UiPath and Automation Anywhere for intelligent automation.
- AI/ML Ops: Establish MLOps frameworks for lifecycle governance and real-time AI deployment globally.
- Governance & Security: Ensure data governance, compliance (GDPR, ISO 27001), lineage, access controls, and security with the group CISO.
- ROI: Achieve measurable business impact through insights, APIs, and cost-saving analytics.
- Talent & Ecosystem: Build a high-performing team and create innovation partnerships with vendors, academia, and AI communities.

External Interactions: Consulting and management services providers; IT service providers / analyst firms; vendors.
Internal Interactions: GCFO and Finance Council, Procurement Council, IT Council, HR Council (GHROC); GCMO/BCMO.
Financial Dimensions: None. Other Dimensions: None.

Education Qualifications
Masters in engineering/computer science; preferred certifications include Databricks, PostgreSQL, or cloud platforms.

Relevant Experience
- 10-15 years' experience in data architecture, cloud, AI/ML, and enterprise automation.
- Expert in GenAI, LLM orchestration (LangChain, AutoGPT, Haystack, LlamaIndex), MLOps, and RPA platforms.
- Proven success in scaling AI with clear ROI impact in complex environments.
- Strong business acumen to influence senior stakeholders.
- Core technical and ML tools knowledge (languages, frameworks, platforms, etc.): Python, SQL, Hugging Face Transformers, TensorFlow, PyTorch, XGBoost; platforms like Google AutoML, Amazon SageMaker, Azure ML, and Google Vertex.
- Knowledge of iPaaS platforms, ERP and CRM integrations, data engineering and storage, Databricks, Snowflake, BigQuery, and similar.
- Bonus capabilities: LLMOps, Edge AI, simulation and synthetic data, privacy-aware AI with encryption, compliance frameworks.
- Effective team builder and leader for AI and ML engineers; aim to grow the AI division to match global standards within 3 years.

Competencies: Team Leadership, Strategic Leadership, Entrepreneurship, Breakthrough Thinking, Developing Self & Others, Empowering Others.
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
karnataka
On-site
We are looking for an experienced professional who possesses the required mathematical and statistical expertise, along with the natural curiosity and creative mind needed to uncover hidden opportunities in data and realize its full potential. You will be responsible for developing modern data warehouse solutions using Databricks and the AWS/Azure stack. This includes providing forward-thinking solutions in the data engineering and analytics space, collaborating with DW/BI leads on ETL pipeline development requirements, triaging issues in existing pipelines, working with the business to understand reporting needs, and developing data models to fulfill those needs. You will also assist team members in resolving technical challenges, drive technical discussions with client architects, and orchestrate data pipelines via Airflow.

As for qualifications, you should have a Bachelor's and/or Master's degree in computer science or equivalent experience, with at least 3+ years of experience in Data & Analytics. Communication and presentation skills are essential. You must have a minimum of 2 years' experience in Databricks implementations and large-scale data warehouse end-to-end implementations. Being a Databricks certified architect is a must. Proficiency in SQL and experience with scripting languages like Python, Spark, and PySpark for data manipulation and automation are required. Additionally, you should have a solid understanding of cloud platforms (AWS, Azure, GCP) and their integration with Databricks. Familiarity with data governance and data management practices, and exposure to tools like Data Sharing, Unity Catalog, DBT, replication tools, and performance tuning will be advantageous.

Tredence is a company that focuses on delivering powerful insights into profitable actions by combining strengths in business analytics, data science, and software engineering. Headquartered in the San Francisco Bay Area, Tredence serves clients in the US, Canada, Europe, and Southeast Asia. We are an equal opportunity employer that values diversity and is dedicated to creating an inclusive environment for all employees. For more information, please visit our website: https://www.tredence.com/
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
hyderabad, telangana
On-site
As an Ingestion Engineer at Saxon Global, you will be responsible for designing, developing, and optimizing data ingestion pipelines that integrate multiple sources into Databricks. Your expertise in CI/CD and Kubernetes will be crucial in implementing and maintaining efficient data workflows. Collaboration with Data Engineers and stakeholders is essential to streamline data ingestion strategies and ensure data integrity, security, and compliance throughout the process.

Key Responsibilities:
- Design, develop, and optimize data ingestion pipelines for integrating multiple sources into Databricks.
- Implement and maintain CI/CD pipelines for data workflows.
- Deploy and manage containerized applications using Kubernetes.
- Collaborate with Data Engineers and stakeholders to streamline data ingestion strategies.
- Troubleshoot and optimize ingestion pipelines for performance and scalability.

Required Skills & Qualifications:
- Proven experience in data ingestion and pipeline development.
- Hands-on experience with CI/CD tools such as GitHub Actions, Jenkins, Azure DevOps, etc.
- Strong knowledge of Kubernetes and container orchestration.
- Experience with Databricks, Spark, and data lake architectures.
- Proficiency in Python, Scala, or SQL for data processing.
- Familiarity with cloud platforms like AWS, Azure, or GCP.
- Strong problem-solving and analytical skills.

Preferred Qualifications:
- Experience with Infrastructure as Code tools like Terraform, Helm, etc.
- Background in streaming data ingestion technologies such as Kafka, Kinesis, etc.
- Knowledge of data governance and security best practices.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
pune, maharashtra
On-site
As a Data Scientist at Amdocs in Pune, you will be responsible for the design, development, modification, debugging, and maintenance of software systems. Your role will involve hands-on work on GenAI use cases and developing Databricks jobs for data ingestion for learning. You will create partnerships with project stakeholders to provide technical assistance for important decisions and work on the development and implementation of GenAI use cases in live production as per business/user requirements.

Your technical skills should include mandatory expertise in deep learning engineering (mostly MLOps), strong NLP/LLM experience, and processing text using LLMs. You should be proficient in PySpark/Databricks and Python programming, building backend applications using Python and deep learning frameworks, and deploying models while building APIs (FastAPI, Flask). Experience with GPUs, vector databases such as Milvus, Azure Cognitive Search, and Qdrant, as well as transformers and Hugging Face models such as Llama and Mixtral and embedding models, is essential. It would be good to have knowledge and experience in Kubernetes, Docker, cloud experience working with VMs and Azure storage, and sound data engineering experience.

In this role, you will be challenged to design and develop new software applications, providing you with opportunities for personal growth in a growing organization. The job involves minimal travel and is located in Pune. Join Amdocs and help build the future to make it amazing by unlocking innovative potential for next-generation communication and media experiences for end users and enterprise customers.
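Since the posting above pairs LLM/embedding work with serving models behind FastAPI, the following hedged Python sketch shows the shape of such an endpoint. The embedding function is a stub standing in for a real encoder; the route, class names, and outputs are illustrative assumptions, not Amdocs specifics.

```python
# Minimal sketch of a FastAPI endpoint wrapping a text-embedding step.
# fake_embed() is a placeholder; a real service would load a Hugging Face
# encoder or call a vector database instead.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="text-inference-api")


class Query(BaseModel):
    text: str


def fake_embed(text: str) -> list[float]:
    # Placeholder embedding: replace with a real model in practice.
    return [float(len(text)), float(text.count(" "))]


@app.post("/embed")
def embed(query: Query) -> dict:
    vector = fake_embed(query.text)
    return {"dimensions": len(vector), "embedding": vector}

# Run locally with:  uvicorn app:app --reload   (assuming this file is app.py)
```

Wrapping the model behind a typed request/response schema like this is what makes it deployable on Kubernetes or Databricks model serving with predictable contracts.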
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
hyderabad, telangana
On-site
Optum is a global organization dedicated to delivering care using technology to improve the lives of millions of people. Your work with our team will directly enhance health outcomes by providing individuals with access to care, pharmacy benefits, data, and resources necessary for their well-being. Our culture is defined by diversity and inclusion, alongside talented colleagues, comprehensive benefits, and opportunities for career development. Join us in making a positive impact on the communities we serve while contributing to the advancement of global health equity through caring, connecting, and growing together. In this role, your primary responsibilities will include analyzing client requirements and complex business scenarios, designing innovative and fully automated products and solutions, serving as a BI Developer for key projects, ensuring high-quality execution of products, providing consulting to teammates, leaders, and clients, and offering extensive solutions in ETL strategies. You should possess an undergraduate degree or equivalent experience, along with expertise in ETL processes and data integration using Azure Data Factory. Proficiency in Power BI semantic model creation, report development, and data visualization is required, with Snowflake and Azure Data Warehouse as primary data sources. Additionally, you should have a strong understanding of data modeling concepts, relational database systems, Snowflake, and Azure Data Warehouse. Familiarity with Databricks for data engineering, advanced analytics, and machine learning tasks is preferred, as well as proficiency in Azure Cloud services such as Azure Data Factory, Azure SQL Data Warehouse, Azure Data Lake Storage, and Azure Analytics. Solid programming skills in Python, SQL, and other scripting languages are essential, along with proven problem-solving abilities, effective communication and collaboration skills, and the capacity to manage multiple tasks simultaneously. Microsoft certifications in Power BI, Azure Cloud, Snowflake, or related fields are a plus. The role is based in Hyderabad, Telangana, IN.,
Posted 1 week ago
10.0 years
0 Lacs
India
Remote
Ready to embark on a journey where your growth is intertwined with our commitment to making a positive impact? Join the Delphi family - where Growth Meets Values. At Delphi Consulting Pvt. Ltd., we foster a thriving environment with a hybrid work model that lets you prioritize what matters most. Interviews and onboarding are conducted virtually, reflecting our digital-first mindset. We specialize in Data, Advanced Analytics, AI, Infrastructure, Cloud Security, and Application Modernization, delivering impactful solutions that drive smarter, more efficient futures for our clients.
About the Role: As a Presales Lead for Data and Digital Solutions at Delphi, you will champion client engagements, crafting tailored solutions that leverage Microsoft Azure and Databricks platforms. With a knack for aligning business needs with technical capabilities, you'll create compelling proposals and presentations while fostering trusted relationships with enterprise clients. This role combines leadership, technical expertise, and strategic vision to drive impactful client engagements and business growth.
What You'll Do:
Presales Leadership: Lead presales efforts for Data and Digital solutions, focusing on Microsoft Azure and Databricks offerings.
Proposal Development: Develop compelling proposals and presentations tailored to client needs.
Solution Collaboration: Collaborate with delivery teams to ensure solutions are practical and feasible.
Client Problem-Solving: Proactively identify client challenges and propose innovative solutions.
Client Relationship Management: Build and maintain relationships with enterprise clients, ensuring trust and long-term partnerships.
Delivery Collaboration: Work closely with delivery teams to ensure smooth handover and execution of solutions, maintaining quality and timelines.
What You'll Bring:
Proven Expertise: 10+ years of experience in presales, with at least 6+ years dedicated to leading presales efforts for Data and Digital solutions.
Solution Selling: Proven ability to sell both Data and Digital solutions.
Technical Knowledge: Hands-on experience with Microsoft Azure, Databricks, and other enterprise-grade platforms.
Presales & Delivery Leadership: Strong understanding of presales processes, including solution design, proposal development, and alignment with delivery capabilities.
Delivery Experience: Proven experience in managing or overseeing solution delivery, ensuring alignment with client expectations and business outcomes.
Communication Skills: Exceptional ability to present and communicate complex ideas to stakeholders across technical and business domains.
Metrics Management: Experience in managing business metrics and achieving sales targets.
A sense of humor 😊
What We Offer: At Delphi, we are dedicated to creating an environment where you can thrive, both professionally and personally. Our competitive compensation package, performance-based incentives, and health benefits are designed to ensure you're well-supported. We believe in your continuous growth and offer company-sponsored certifications, training programs, and skill-building opportunities to help you succeed. We foster a culture of inclusivity and support, with remote work options and a fully supported work-from-home setup to ensure your comfort and productivity. Our positive and inclusive culture includes team activities and wellness and mental health programs to ensure you feel supported.
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
We are hiring for an IT product-based company.
Designation: Product Manager
Location: Pune
Skills: Power BI/Analytics, Databricks, use case/user story management (Jira or a similar tool), Agile environment
Domain Experience: Healthcare/RCM
Job Description:
Consolidates input from customers, competitors, marketing, sales, customer support, and development teams into market requirements for an assigned product space.
Translates market requirements into a product roadmap.
Develops and maintains a prioritized list of product features.
Works with Product Analyst(s) to translate the product roadmap into epics for releases.
Understands product P&L and the key drivers for increasing revenue and profitability.
Analyzes market competition by comparing the company's product to competitors' products.
Ability to take complicated or complex information and present it in a logical and concise manner.
Comfortable presenting at Director and VP level.
Maintains an "Inner Circle" of five customer contacts at the user or manager level for industry and real-world insight.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
You should have experience in understanding and translating data and analytics requirements and functional needs into technical requirements while collaborating with global customers. Your responsibilities will include designing cloud-native data architectures to support scalable, real-time, and batch processing. You will build and maintain data pipelines for large-scale data management in alignment with the data strategy and processing standards, and define strategies for data modeling, data integration, and metadata management. Strong experience in database, data warehouse, and data lake design and architecture is required, and you should be proficient in leveraging cloud platforms such as AWS, Azure, or GCP for data storage, compute, and analytics services. Experience in database programming using various SQL flavors is essential. You will also implement data governance frameworks encompassing data quality, lineage, and cataloging. Collaboration with cross-functional teams, including business analysts, data engineers, and DevOps teams, will be a key aspect of this role. Familiarity with the Big Data ecosystem, whether on-premises (Hortonworks/MapR) or in the cloud, is required, and you should be able to evaluate emerging cloud technologies and suggest enhancements to the data architecture. Proficiency in an orchestration tool such as Airflow or Oozie for scheduling pipelines is preferred, along with hands-on experience with tools such as Spark Streaming, Kafka, Databricks, and Snowflake. You should be adept at working in an Agile/Scrum development process and at optimizing data systems for cost efficiency, performance, and scalability.
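For the Spark Streaming and Kafka skills listed above, a minimal PySpark Structured Streaming sketch follows; the broker address, topic name, schema, and Delta paths are illustrative assumptions, and the job assumes the spark-sql-kafka connector and Delta Lake are available (as on Databricks).

```python
# Minimal sketch: stream events from a Kafka topic into a Delta table.
# Broker, topic, schema, and paths are assumptions for illustration only.
from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_type", StringType()),
    StructField("event_time", TimestampType()),
])

raw = (spark.readStream
       .format("kafka")                                      # requires spark-sql-kafka on the classpath
       .option("kafka.bootstrap.servers", "broker:9092")     # assumed broker
       .option("subscribe", "events")                        # assumed topic
       .load())

parsed = (raw
          .select(from_json(col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

query = (parsed.writeStream
         .format("delta")                                    # Delta Lake sink
         .option("checkpointLocation", "/tmp/chk/events")    # assumed checkpoint path
         .outputMode("append")
         .start("/tmp/delta/events"))                        # assumed target path
query.awaitTermination()
```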
Posted 1 week ago
8.0 - 13.0 years
0 Lacs
Hyderabad, Telangana
On-site
At Techwave, we are committed to fostering a culture of growth and inclusivity. We ensure that every individual associated with our brand is challenged at every step and provided with the necessary opportunities to excel in their professional and personal lives. People are at the core of everything we do. Techwave is a leading global IT and engineering services and solutions company dedicated to revolutionizing digital transformations. Our mission is to enable clients to maximize their potential and achieve a greater market share through a wide array of technology services, including Enterprise Resource Planning, Application Development, Analytics, Digital solutions, and the Internet of Things (IoT). Founded in 2004 and headquartered in Houston, TX, USA, Techwave leverages its expertise in Digital Transformation, Enterprise Applications, and Engineering Services to help businesses accelerate their growth. We are a team of dreamers and doers who constantly push the boundaries of what's possible, and we want YOU to be a part of it.
Job Title: Data Lead
Experience: 10+ Years
Mode of Hire: Full-time
Key Skills: As a senior-level ETL developer with 10-13 years of experience, you will be responsible for building relational and data warehousing applications. Your primary role will involve supporting the existing EDW, designing and developing the various layers of our data, and testing, documenting, and optimizing the ETL process. You will collaborate within a team environment to design and develop frameworks and services according to specifications. Your responsibilities will also include preparing detailed system documentation, performing unit and system tests, coordinating with Operations staff on application deployment, and ensuring that all activities are performed to quality and compliance standards. Additionally, you will design and implement ETL batches that meet SLAs; develop data collection, staging, movement, quality, and archiving strategies; and design automation processes to control data access and movement. To excel in this role, you must have 8-10 years of ETL/ELT experience, strong SQL skills, and proficiency in stored procedures and database development. Experience in Azure Data Lake, Synapse, Azure Data Factory, and Databricks, as well as Snowflake, is essential. You should have a good understanding of data warehouse ETL and ELT design best practices, be able to work independently, and have strong database experience with DB2, SQL Server, and Azure. Furthermore, you should be adept at designing relational and dimensional data models, have a good grasp of enterprise reporting (particularly Power BI), and understand Agile practices and methodologies. Your role will also involve assisting in analyzing and extracting relevant information from historical business data to support Business Intelligence initiatives, conducting proofs of concept for new technology selection, and proposing data warehouse architecture enhancements. If you are a self-starter with the required skills and experience, we invite you to join our dynamic team at Techwave and be a part of our journey towards innovation and excellence.
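To illustrate the kind of ELT step this role describes on Databricks, here is a minimal sketch, assuming Delta Lake and placeholder staging and dimension table names, of a merge-based upsert; none of the object names come from the posting.

```python
# Minimal sketch: upsert a staged extract into a dimension table with Delta Lake.
# Table names and the join key are illustrative assumptions.
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()

staged = spark.read.table("stg.customer")                     # assumed staging table
dim = DeltaTable.forName(spark, "edw.dim_customer")           # assumed target table

(dim.alias("t")
 .merge(staged.alias("s"), "t.customer_id = s.customer_id")   # assumed business key
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())
```

A step like this would typically be scheduled from Azure Data Factory or a Databricks job so that the batch meets its SLA window.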
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
You will be joining Coders Brain Technology Pvt. Ltd., a global leader in services, digital, and business solutions. At Coders Brain, we partner with our clients to simplify, strengthen, and transform their businesses. We are committed to providing the highest levels of certainty and satisfaction through our comprehensive industry expertise and a global network of innovation and delivery centers. As a Data Engineer with a minimum of 5 years of experience, you will work remotely. Your role will involve collaborating with other developers to define and refine solutions and working closely with the business to deliver data and analytics projects. Your responsibilities will include data integration with tools such as Apache Spark, EMR, Glue, Kafka, Kinesis, and Lambda in the AWS Cloud environment. You should have strong real-world experience in Python development, especially in PySpark within the AWS Cloud. Designing, developing, testing, deploying, maintaining, and improving data integration pipelines will be a key part of your role. Additionally, you should have experience with Python and its common libraries, Perl, Unix scripts, and strong analytical skills working with databases. Proficiency in source control systems such as Git and Bitbucket and continuous integration tools such as Jenkins is required. Experience with continuous deployment (CI/CD), Databricks, Airflow, and Apache Spark will be beneficial. Knowledge of databases such as Oracle, SQL Server, PostgreSQL, Redshift, MySQL, or similar is essential, and exposure to ETL tools, including Informatica, is preferred. A degree in Computer Science, Computer Engineering, or Electrical Engineering is desired. If you are interested in this opportunity, click the apply button. Alternatively, you can send your resume to prerna.jain@codersbrain.com or pooja.gupta@codersbrain.com.
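A minimal PySpark sketch of the batch data-integration work described above follows, assuming illustrative S3 paths and column names; it reads raw JSON, applies light cleansing, and writes partitioned Parquet, the kind of step that would run on EMR or Glue.

```python
# Minimal sketch: batch integration step from raw S3 JSON to curated Parquet.
# Bucket names, paths, and columns are placeholder assumptions.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, to_date

spark = SparkSession.builder.appName("s3-batch-integration").getOrCreate()

raw = spark.read.json("s3a://example-raw-bucket/orders/")     # assumed source path (needs hadoop-aws)
clean = (raw
         .dropDuplicates(["order_id"])                        # assumed key
         .withColumn("order_date", to_date(col("order_ts")))  # assumed timestamp column
         .filter(col("order_amount") > 0))

(clean.write
 .mode("overwrite")
 .partitionBy("order_date")
 .parquet("s3a://example-curated-bucket/orders/"))            # assumed target path
```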
Posted 1 week ago
0.0 - 8.0 years
0 Lacs
Chennai, Tamil Nadu
Remote
Location: Chennai, Tamil Nadu, India
Job ID: R0097953
Date Posted: 2025-07-10
Company Name: HITACHI ENERGY TECHNOLOGY SERVICES PRIVATE LIMITED
Profession (Job Category): IT, Telecom & Internet
Job Schedule: Full time
Remote: No
Job Description:
The opportunity: Software development using Power Apps, Power Automate, and SharePoint as per the job description.
How you'll make an impact:
Develop complex applications with Microsoft Power Apps and Power Automate using SharePoint, Dataverse, or SQL as the backend.
Propose and guide the team in establishing the app's data storage and retrieval in the enterprise data platform (using the data lake and Databricks).
Connect with the business to gather requirements and set priorities for development.
Connect with subject matter experts to understand the business processes.
Organize change requests in a structured manner with excellent traceability.
Convert business requirements into process flow charts.
Work independently in developing Power Apps applications.
Conduct periodic design review meetings to ensure development is progressing per the agreed timeline.
Follow up with the business to ensure required inputs are received on time.
Support business users during user acceptance testing.
Undertake change requests.
Ensure compliance with applicable external and internal regulations, procedures, and guidelines.
Living Hitachi Energy's core values of safety and integrity, which means taking responsibility for your own actions while caring for your colleagues and the business.
Your background:
B.Tech / MCA with 4-8 years of experience.
Should have executed at least five projects using the Power Apps and Power Automate platform in a lead role.
Should have good technical and working knowledge of SQL Server.
Should have expertise in canvas apps and model-driven apps.
Expertise in creating complex Power Automate flows.
Exposure to enterprise data platform, data lake, and Databricks concepts.
Expertise in interfacing with software platforms such as SAP, Salesforce, etc.
Knowledge of Artificial Intelligence / Machine Learning concepts and implementation methods.
Qualified individuals with a disability may request a reasonable accommodation if you are unable or limited in your ability to use or access the Hitachi Energy career site as a result of your disability. You may request reasonable accommodations by completing a general inquiry form on our website. Please include your contact information and specific details about your required accommodation to support you during the job application process. This is solely for job seekers with disabilities requiring accessibility assistance or an accommodation in the job application process. Messages left for other purposes will not receive a response.
Posted 1 week ago