5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Required Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Information Systems, Data Science, or a related field.
- 5+ years of experience building Tableau dashboards and visualizations.
- 1+ years of hands-on experience integrating Tableau with Databricks (including SQL, Delta Lake, and Spark environments).
- Strong understanding of data modeling, ETL processes, and analytics workflows.
- Proficiency in writing optimized SQL queries.
- Experience with Tableau Server or Tableau Cloud deployment and administration.
- Ability to work with large datasets and troubleshoot performance issues.
Preferred Qualifications:
- Experience with scripting or automation tools (e.g., Python, DBT).
- Familiarity with other BI tools and cloud platforms (e.g., Power BI, AWS, Azure).
- Tableau certification (Desktop Specialist/Professional or Server).
- Understanding of data privacy and compliance standards (e.g., GDPR, HIPAA).
Soft Skills:
- Strong analytical and problem-solving skills.
- Excellent communication and stakeholder management abilities.
- Detail-oriented with a strong focus on data accuracy and user experience.
- Comfortable working independently and collaboratively in a fast-paced environment.
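As a rough illustration of the Tableau-Databricks integration this role centers on, the sketch below pulls an aggregated extract from a Delta Lake table through the Databricks SQL connector for Python; the hostname, HTTP path, token, and table name are placeholder assumptions, not details from this posting.

```python
# Minimal sketch: pull a pre-aggregated extract from Databricks for a Tableau data source.
# Requires the `databricks-sql-connector` package; all connection values are placeholders.
from databricks import sql

with sql.connect(
    server_hostname="adb-1234567890123456.7.azuredatabricks.net",  # hypothetical workspace
    http_path="/sql/1.0/warehouses/abc123",                        # hypothetical SQL warehouse
    access_token="dapi-...",                                       # personal access token
) as conn:
    with conn.cursor() as cursor:
        # Pre-aggregating in Databricks keeps the Tableau extract small and fast.
        cursor.execute("""
            SELECT region, order_month, SUM(revenue) AS total_revenue
            FROM sales.delta_orders            -- hypothetical Delta table
            GROUP BY region, order_month
        """)
        for region, month, revenue in cursor.fetchall():
            print(region, month, revenue)
```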
Posted 1 week ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Overview: We are looking for a skilled and motivated Data Engineer with strong experience in Python programming and Google Cloud Platform (GCP) to join our data engineering team. The ideal candidate will be responsible for designing, developing, and maintaining robust and scalable ETL (Extract, Transform, Load) data pipelines. The role involves working with various GCP services, implementing data ingestion and transformation logic, and ensuring data quality and consistency across systems.
Key Responsibilities:
- Design, develop, test, and maintain scalable ETL data pipelines using Python.
- Work extensively on Google Cloud Platform (GCP) services such as:
  - Dataflow for real-time and batch data processing
  - Cloud Functions for lightweight serverless compute
  - BigQuery for data warehousing and analytics
  - Cloud Composer for orchestration of data workflows (based on Apache Airflow); a minimal DAG sketch follows below
  - Google Cloud Storage (GCS) for managing data at scale
  - IAM for access control and security
  - Cloud Run for containerized applications
- Perform data ingestion from various sources and apply transformation and cleansing logic to ensure high-quality data delivery.
- Implement and enforce data quality checks, validation rules, and monitoring.
- Collaborate with data scientists, analysts, and other engineering teams to understand data needs and deliver efficient data solutions.
- Manage version control using GitHub and participate in CI/CD pipeline deployments for data projects.
- Write complex SQL queries for data extraction and validation from relational databases such as SQL Server, Oracle, or PostgreSQL.
- Document pipeline designs, data flow diagrams, and operational support procedures.
Required Skills:
- 4–6 years of hands-on experience in Python for backend or data engineering projects.
- Strong understanding and working experience with GCP cloud services (especially Dataflow, BigQuery, Cloud Functions, Cloud Composer, etc.).
- Solid understanding of data pipeline architecture, data integration, and transformation techniques.
- Experience with version control systems like GitHub and knowledge of CI/CD practices.
- Strong experience in SQL with at least one enterprise database (SQL Server, Oracle, PostgreSQL, etc.).
Good to Have (Optional Skills):
- Experience working with the Snowflake cloud data platform.
- Hands-on knowledge of Databricks for big data processing and analytics.
- Familiarity with Azure Data Factory (ADF) and other Azure data engineering tools.
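To make the Cloud Composer responsibility concrete, here is a minimal Airflow DAG sketch that loads CSV files from GCS into BigQuery; the bucket, dataset, and table names are hypothetical, and it assumes the apache-airflow-providers-google package that Composer environments ship with.

```python
# Minimal Cloud Composer (Airflow) DAG sketch: GCS -> BigQuery load.
# Bucket, dataset, and table names below are illustrative placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import (
    GCSToBigQueryOperator,
)

with DAG(
    dag_id="gcs_to_bigquery_daily",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    load_orders = GCSToBigQueryOperator(
        task_id="load_orders",
        bucket="example-raw-data",                  # hypothetical GCS bucket
        source_objects=["orders/{{ ds }}/*.csv"],   # partitioned by execution date
        destination_project_dataset_table="analytics.orders_raw",
        source_format="CSV",
        skip_leading_rows=1,
        write_disposition="WRITE_TRUNCATE",
        autodetect=True,
    )
```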
Posted 1 week ago
6.0 years
0 Lacs
India
On-site
We are seeking a talented and experienced Data Engineer with a strong background in Microsoft Fabric to design, develop, and maintain robust, scalable, and secure data solutions. You'll play a crucial role in building and optimizing data pipelines, data warehouses, and data lakehouses within the Microsoft Fabric ecosystem to enable advanced analytics and business intelligence.
Key Responsibilities
- Design and Development: Architect, design, develop, and implement end-to-end data solutions within the Microsoft Fabric ecosystem, including Lakehouse, Data Warehouse, and Real-Time Analytics components.
- Data Pipeline Construction: Build, test, and maintain robust and scalable data pipelines for data ingestion, transformation, and curation from diverse sources using Microsoft Fabric Data Factory (Pipelines and Dataflows Gen2) and Azure Databricks (PySpark/Scala).
- Data Modeling & Optimization: Develop and optimize data models within Microsoft Fabric, adhering to best practices for performance, scalability, and data integrity (e.g., dimensional modeling).
- ETL/ELT Processes: Implement efficient ETL/ELT processes to extract data from various sources, transform it into suitable formats, and load it into the data lakehouse or analytical systems.
- Performance Tuning: Continuously monitor and fine-tune data pipelines and processing workflows to enhance overall performance and efficiency, especially for large-scale datasets.
- Data Quality & Governance: Design and implement data quality, validation, and reconciliation processes to ensure data accuracy and reliability. Ensure data security and compliance with data privacy regulations.
- Collaboration: Work closely with data architects, data scientists, business intelligence developers, and business stakeholders to understand data requirements and translate them into technical solutions.
- Automation & CI/CD: Implement CI/CD pipelines for data solutions within Azure DevOps or similar tools, ensuring automated deployment and version control.
- Troubleshooting: Troubleshoot and resolve complex data-related issues and performance bottlenecks.
- Documentation: Maintain comprehensive documentation for data architectures, pipelines, data models, and processes.
- Stay Updated: Keep abreast of the latest advancements in Microsoft Fabric, Azure data services, and data engineering best practices.
Required Skills & Qualifications
- Bachelor's degree in Computer Science, Information Technology, or a related quantitative field, or equivalent practical experience.
- 6+ years of hands-on experience as a Data Engineer or Data Architect.
- Mandatory hands-on experience with Microsoft Fabric, including its core components such as Lakehouse, Data Warehouse, Data Factory (Pipelines, Dataflows Gen2), and Spark notebooks.
- Strong expertise in Microsoft Azure data services, including:
  - Azure Databricks (PySpark/Scala for complex data processing and transformations)
  - Azure Data Lake Storage Gen2 (for scalable data storage)
  - Azure Data Factory (for ETL/ELT orchestration)
- Proficiency in SQL for data manipulation and querying.
- Experience with Python or Scala for data engineering tasks.
- Solid understanding of data warehousing concepts, data modeling (dimensional, relational), and data lakehouse architectures.
- Experience with version control systems (e.g., Git, Azure Repos).
- Strong analytical and problem-solving skills with a keen eye for detail.
- Excellent communication (written and verbal) and interpersonal skills to collaborate effectively with cross-functional teams.
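As a rough sketch of the kind of lakehouse pipeline step described above, the PySpark snippet below ingests raw CSV data, applies a light transformation, and writes a Delta table; it assumes a Spark notebook environment (Fabric or Databricks) with Delta Lake available, and the paths and table names are hypothetical.

```python
# Minimal PySpark sketch for a lakehouse ingestion step (Fabric or Databricks notebook).
# Paths and table names are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()  # already provided in notebook environments

raw = (
    spark.read.option("header", True)
    .csv("Files/raw/orders/")            # hypothetical landing-zone path
)

curated = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("amount", F.col("amount").cast("double"))
    .dropDuplicates(["order_id"])        # basic data-quality step
)

# Write to the lakehouse as a managed Delta table.
curated.write.format("delta").mode("overwrite").saveAsTable("orders_curated")
```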
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
🚀 We're Hiring! | Data Engineer - Pune (Hybrid)
📍 Location: Pune
🖥️ Mode: Work From Office (Hybrid)
🔍 Position: Data Engineer
Are you passionate about building smart, automated data solutions? Join our growing team!
Job Description
· Bachelor's or master's degree in computer science, IT, or equivalent, and a minimum of 4 to 8 years of experience building and deploying complex data pipelines and data solutions.
· Bachelor's or master's degree in computer science, IT, or equivalent (for junior profiles).
· Experience deploying data pipelines using technologies like Databricks, with hands-on Databricks experience.
· Hands-on experience with Java.
· Experience with visualization software, preferably Splunk (or else Grafana, Prometheus, Power BI, Tableau, or similar).
· Strong experience with SQL and Java, with hands-on experience in data modeling.
· Experience with PySpark or Spark for working with distributed data.
· Good to have: knowledge of Splunk (SPL).
· Experience with data schemas (e.g., JSON/XML/Avro).
· Experience deploying services as containers (e.g., Docker, Kubernetes).
· Experience working with cloud services (preferably Azure).
· Experience with streaming and/or batch storage (e.g., Kafka, streaming platforms) is a plus.
· Experience in data quality management and monitoring is a plus.
· Strong communication skills in English.
📩 Interested? Let's connect! Send your updated CV to: nivetha.s@eminds.ai
Join us and be part of something exciting!
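Since the role calls for experience with data schemas such as JSON and Avro, here is a small, hedged illustration of enforcing an explicit schema when reading JSON with PySpark; the field names and input path are invented for the example.

```python
# Sketch: reading JSON with an explicit schema instead of relying on inference.
# Field names and input path are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.types import (
    DoubleType, StringType, StructField, StructType, TimestampType,
)

spark = SparkSession.builder.appName("schema-demo").getOrCreate()

event_schema = StructType([
    StructField("event_id", StringType(), nullable=False),
    StructField("event_ts", TimestampType(), nullable=True),
    StructField("user_id", StringType(), nullable=True),
    StructField("amount", DoubleType(), nullable=True),
])

# An explicit schema catches malformed records early and avoids a costly inference pass.
events = (
    spark.read.schema(event_schema)
    .option("mode", "PERMISSIVE")
    .json("/data/events/")              # hypothetical path
)
events.printSchema()
```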
Posted 1 week ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
At EXL, we go beyond capabilities to focus on collaboration and character, tailoring solutions to your unique needs, culture, goals, and technology environments. We specialize in transformation, data science, and change management to enhance efficiency, improve customer relationships, and drive revenue growth. Our expertise in analytics, digital interventions, and operations management helps you outperform the competition with sustainable models at scale. As your business evolution partner, we optimize data leverage for better business decisions and intelligence-driven operations. For more information, visit www.exlservice.com.
Job Title: Data Engineer - PySpark, Python, SQL, Git, AWS Services (Glue, Lambda, Step Functions, S3, Athena)
Role Description
We are seeking a talented Data Engineer with expertise in PySpark, Python, SQL, Git, and AWS to join our dynamic team. The ideal candidate will have a strong background in data engineering, data processing, and cloud technologies. You will play a crucial role in designing, developing, and maintaining our data infrastructure to support our analytics.
Responsibilities:
1. Develop and maintain ETL pipelines using PySpark and AWS Glue to process and transform large volumes of data efficiently.
2. Collaborate with analysts to understand data requirements and ensure data availability and quality.
3. Write and optimize SQL queries for data extraction, transformation, and loading.
4. Utilize Git for version control, ensuring proper documentation and tracking of code changes.
5. Design, implement, and manage scalable data lakes on AWS, using S3 or other relevant services for efficient data storage and retrieval.
6. Develop and optimize high-performance, scalable databases using Amazon DynamoDB.
7. Create interactive dashboards and data visualizations in Amazon QuickSight.
8. Automate workflows using AWS services such as EventBridge and Step Functions.
9. Monitor and optimize data processing workflows for performance and scalability.
10. Troubleshoot data-related issues and provide timely resolution.
11. Stay up to date with industry best practices and emerging technologies in data engineering.
Qualifications:
1. Bachelor's degree in Computer Science, Data Science, or a related field; a Master's degree is a plus.
2. Strong proficiency in PySpark and Python for data processing and analysis.
3. Proficiency in SQL for data manipulation and querying.
4. Experience with version control systems, preferably Git.
5. Familiarity with AWS services, including S3, Redshift, Glue, Step Functions, EventBridge, CloudWatch, Lambda, QuickSight, DynamoDB, Athena, CodeCommit, etc.
6. Familiarity with Databricks and its concepts.
7. Excellent problem-solving skills and attention to detail.
8. Strong communication and collaboration skills to work effectively within a team.
9. Ability to manage multiple tasks and prioritize effectively in a fast-paced environment.
Preferred Skills:
1. Knowledge of data warehousing concepts and data modeling.
2. Familiarity with big data technologies like Hadoop and Spark.
3. AWS certifications related to data engineering.
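For flavor, the snippet below is a minimal AWS Glue PySpark job skeleton of the kind described in responsibility 1; it only runs inside the Glue runtime (which supplies the awsglue library), and the catalog database, table, and S3 output path are placeholder assumptions.

```python
# Minimal AWS Glue job sketch (runs in the Glue runtime, which supplies `awsglue`).
# Catalog database/table names and the S3 output path are illustrative placeholders.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog, transform with Spark, write back to S3 as Parquet.
orders = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="orders"
).toDF()

cleaned = orders.dropDuplicates(["order_id"]).filter("amount > 0")

cleaned.write.mode("overwrite").parquet("s3://example-bucket/curated/orders/")
job.commit()
```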
Posted 1 week ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Hi, we have a job opening for Team Lead - Data Migration and Snowflake.
Company Name: PibyThree Consulting Pvt Ltd.
Job Title: Team Lead - Data Migration and Snowflake
Skills: Azure Data Factory, Databricks, PySpark, Snowflake & Data Migration
Location: Pune, Maharashtra
About Us: PibyThree (Πby3) is a cloud transformation company enabling enterprises for the future. We are a nimble, highly dynamic, and focused team with a passion for serving our clients with utmost trust and ownership. Our technology expertise and years of experience help clients get solutions with optimized cost and reduced risk.
Job Description: We are looking for an experienced Team Lead - Data Warehouse Migration, Data Engineering & BI to lead enterprise-level data transformation initiatives. The ideal candidate will have deep expertise in migration, Snowflake, Power BI, and end-to-end data engineering using tools like Azure Data Factory, Databricks, and PySpark.
Key Responsibilities:
- Lead and manage data warehouse migration projects, including extraction, transformation, and loading (ETL/ELT) across legacy and modern platforms.
- Architect and implement scalable Snowflake data warehousing solutions for analytics and reporting.
- Develop and schedule robust data pipelines using Azure Data Factory and Databricks.
- Write efficient and maintainable PySpark code for batch and real-time data processing.
- Design and develop dashboards and reports using Power BI to support business insights.
- Ensure data accuracy, security, and consistency throughout the project lifecycle.
- Collaborate with stakeholders to understand data and reporting requirements.
- Mentor and lead a team of data engineers and BI developers.
- Manage project timelines, deliverables, and team performance effectively.
Must-Have Skills:
- Data Migration: Hands-on experience with large-scale data migration, reconciliation, and transformation.
- Snowflake: Data modeling, performance tuning, ELT/ETL development, role-based access control.
- Azure Data Factory: Pipeline development, integration services, linked services.
- Databricks: Spark SQL, notebooks, cluster management, orchestration.
- PySpark: Advanced transformations, error handling, and optimization techniques.
- Power BI: Data visualization, DAX, Power Query, dashboard/report publishing and maintenance.
Preferred Skills:
- Familiarity with Agile methodologies and sprint-based development.
- Experience working with CI/CD for data workflows.
- Ability to lead client discussions and manage stakeholder expectations.
- Strong analytical and problem-solving abilities.
Regards,
Arshee Khan
Talent Acquisition Specialist
Email: Arshee.khan@Piythree.com
https://www.linkedin.com/in/arshee-khan-90311b2b5
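As a hedged sketch of the Databricks-to-Snowflake hand-off such a migration might use, the snippet below writes a PySpark DataFrame through the Snowflake Spark connector; it assumes the connector library is attached to the cluster, and every connection option shown is a placeholder, not a detail from this posting.

```python
# Sketch: writing a transformed DataFrame to Snowflake via the Snowflake Spark connector.
# Assumes the connector is installed on the cluster; all options are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

df = spark.read.parquet("/mnt/curated/customers/")  # hypothetical curated zone

sf_options = {
    "sfURL": "example_account.snowflakecomputing.com",  # hypothetical account URL
    "sfUser": "etl_user",
    "sfPassword": "********",                           # use a secret scope in practice
    "sfDatabase": "ANALYTICS",
    "sfSchema": "PUBLIC",
    "sfWarehouse": "ETL_WH",
}

(
    df.write.format("net.snowflake.spark.snowflake")
    .options(**sf_options)
    .option("dbtable", "CUSTOMERS")
    .mode("overwrite")
    .save()
)
```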
Posted 1 week ago
7.0 - 8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Overview
We are seeking an ETL Developer with expertise in Advanced SQL, Python, and Shell Scripting. This full-time position reports to the Data Engineering Manager and is available in a hybrid work model. This is a replacement position within the SRAI - EYC Implementation team.
Key Responsibilities
- Design and develop ETL processes for data extraction, transformation, and loading.
- Utilize Advanced SQL for data processing and analysis.
- Implement data processing solutions using Python and Shell Scripting.
- Collaborate with cross-functional teams to understand data requirements.
- Maintain and optimize data pipelines for performance and reliability.
- Provide insights and analysis to support business decisions.
- Ensure data quality and integrity throughout the ETL process.
- Stay updated on industry trends and best practices in data engineering.
Must-Have Skills and Qualifications
- 7-8 years of experience as an ETL Developer.
- Expertise in Advanced SQL for data manipulation and analysis.
- Proficiency in Python and Shell Scripting.
- Foundational understanding of Databricks and Power BI.
- Strong logical problem-solving skills.
- Experience in data processing and transformation.
- Understanding of the retail domain is a plus.
Good-to-Have Skills and Qualifications
- Familiarity with cloud data platforms (AWS, Azure).
- Knowledge of data warehousing concepts.
- Experience with data visualization tools.
- Understanding of Agile methodologies.
What We Offer
- Competitive salary and comprehensive benefits package.
- Opportunities for professional growth and advancement.
- Collaborative and innovative work environment.
- Flexible work arrangements.
- Impactful work that drives industry change.
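To illustrate the kind of advanced SQL this role leans on, here is a self-contained Python example that uses a ROW_NUMBER() window function to keep only the latest record per key, a common ETL deduplication pattern; it runs against an in-memory SQLite database (3.25+) with made-up sample rows.

```python
# Self-contained demo of a window-function dedup, a staple of advanced ETL SQL.
# Uses an in-memory SQLite database with invented sample data.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE staging_orders (order_id TEXT, loaded_at TEXT, amount REAL);
    INSERT INTO staging_orders VALUES
        ('A1', '2025-01-01', 10.0),
        ('A1', '2025-01-03', 12.5),   -- later version of the same order
        ('B7', '2025-01-02', 99.0);
""")

# Keep only the most recently loaded row per order_id.
dedup_sql = """
    SELECT order_id, loaded_at, amount
    FROM (
        SELECT *,
               ROW_NUMBER() OVER (
                   PARTITION BY order_id ORDER BY loaded_at DESC
               ) AS rn
        FROM staging_orders
    ) AS latest
    WHERE rn = 1
"""
for row in conn.execute(dedup_sql):
    print(row)   # ('A1', '2025-01-03', 12.5) and ('B7', '2025-01-02', 99.0)
```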
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description
Job Name: Senior Data Engineer - Azure
Years of Experience: 5
Job Description: We are looking for a skilled and experienced Senior Azure Developer to join our team! As part of the team, you will be involved in the implementation of ongoing and new initiatives for our company. If you love learning, thinking strategically, innovating, and helping others, this job is for you!
Primary Skills: ADF, Databricks
Secondary Skills: DBT, Python, Databricks, Airflow, Fivetran, Glue, Snowflake
Role Description: The data engineering role requires creating and managing the technological infrastructure of a data platform; being in charge of / involved in architecting, building, and managing data flows/pipelines; and constructing data storage (NoSQL, SQL), tools to work with big data (Hadoop, Kafka), and integration tools to connect sources or other databases.
Role Responsibility:
- Translate functional specifications and change requests into technical specifications.
- Translate business requirement documents, functional specifications, and technical specifications into related coding.
- Develop efficient code with unit testing and code documentation.
- Ensure accuracy and integrity of data and applications through analysis, coding, documenting, testing, and problem solving.
- Set up the development environment and configure the development tools.
- Communicate with all project stakeholders on the project status.
- Manage, monitor, and ensure the security and privacy of data to satisfy business needs.
- Contribute to the automation of modules wherever required.
- Be proficient in written, verbal, and presentation communication (English).
- Coordinate with the UAT team.
Role Requirement:
- Proficient in basic and advanced SQL programming concepts (procedures, analytical functions, etc.).
- Good knowledge and understanding of data warehouse concepts (dimensional modeling, change data capture, slowly changing dimensions, etc.); see the sketch below.
- Knowledgeable in Shell/PowerShell scripting.
- Knowledgeable in relational databases, non-relational databases, data streams, and file stores.
- Knowledgeable in performance tuning and optimization.
- Experience in data profiling and data validation.
- Experience in requirements gathering and documentation processes and performing unit testing.
- Understanding and implementing QA and various testing processes in the project.
- Knowledge of any BI tool is an added advantage.
- Sound aptitude, outstanding logical reasoning, and analytical skills.
- Willingness to learn and take initiative.
- Ability to adapt to a fast-paced Agile environment.
Additional Requirement:
- Demonstrated expertise as a Data Engineer, specializing in Azure cloud services.
- Highly skilled in Azure Data Factory, Azure Data Lake, Azure Databricks, and Azure Synapse Analytics.
- Create and execute efficient, scalable, and dependable data pipelines utilizing Azure Data Factory.
- Utilize Azure Databricks for data transformation and processing.
- Effectively oversee and enhance data storage solutions, emphasizing Azure Data Lake and other Azure storage services.
- Construct and uphold workflows for data orchestration and scheduling using Azure Data Factory or equivalent tools.
- Proficient in programming languages like Python and SQL, and conversant with pertinent scripting languages.
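Since the requirements call out change data capture and slowly changing dimensions, here is a minimal, hedged Delta Lake MERGE sketch for applying a batch of changes in Databricks; the table names and columns are invented, and it assumes the delta-spark package is available.

```python
# Sketch: applying change data capture with a Delta Lake MERGE (Databricks-style).
# Assumes the `delta-spark` library; table and column names are placeholders.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

changes = spark.read.parquet("/mnt/landing/customer_changes/")  # hypothetical CDC feed

target = DeltaTable.forName(spark, "dim_customer")

(
    target.alias("t")
    .merge(changes.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdate(set={
        "email": "s.email",
        "updated_at": "s.updated_at",
    })
    .whenNotMatchedInsertAll()
    .execute()
)
```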
Posted 1 week ago
4.0 years
0 Lacs
Greater Kolkata Area
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate
Job Description & Summary
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.
Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.
At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
Responsibilities:
· Analyse current business practices, processes, and procedures, and identify future business opportunities for leveraging Microsoft Azure Data & Analytics Services.
· Provide technical leadership and thought leadership as a senior member of the Analytics Practice in areas such as data access & ingestion, data processing, data integration, data modeling, database design & implementation, data visualization, and advanced analytics.
· Engage and collaborate with customers to understand business requirements/use cases and translate them into detailed technical specifications.
· Develop best practices, including reusable code, libraries, patterns, and consumable frameworks, for cloud-based data warehousing and ETL.
· Maintain best-practice standards for the development of cloud-based data warehouse solutions, including naming standards.
· Design and implement highly performant data pipelines from multiple sources using Apache Spark and/or Azure Databricks.
· Integrate the end-to-end data pipeline to take data from source systems to target data repositories, ensuring the quality and consistency of data is always maintained.
· Work with other members of the project team to support delivery of additional project components (API interfaces).
· Evaluate the performance and applicability of multiple tools against customer requirements.
· Work within an Agile delivery / DevOps methodology to deliver proof of concept and production implementation in iterative sprints.
· Integrate Databricks with other technologies (ingestion tools, visualization tools).
Requirements:
· Proven experience working as a data engineer.
· Highly proficient in using the Spark framework (Python and/or Scala).
· Extensive knowledge of data warehousing concepts, strategies, and methodologies.
· Direct experience of building data pipelines using Azure Data Factory and Apache Spark (preferably in Databricks).
· Hands-on experience designing and delivering solutions using Azure, including Azure Storage, Azure SQL Data Warehouse, Azure Data Lake, Azure Cosmos DB, and Azure Stream Analytics.
· Experience in designing and hands-on development of cloud-based analytics solutions.
· Expert-level understanding of Azure Data Factory, Azure Synapse, Azure SQL, Azure Data Lake, and Azure App Service is required.
· Designing and building of data pipelines using API ingestion and streaming ingestion methods (see the sketch below).
· Knowledge of DevOps processes (including CI/CD) and Infrastructure as Code is essential.
· Thorough understanding of Azure cloud infrastructure offerings.
· Strong experience in common data warehouse modeling principles, including Kimball.
· Working knowledge of Python is desirable.
· Experience developing security models.
· Databricks & Azure Big Data Architecture Certification would be a plus.
Mandatory skill sets: ADE, ADB, ADF
Preferred skill sets: ADE, ADB, ADF
Years of experience required: 4-8 years
Education qualification: BE, B.Tech, MCA, M.Tech
Degrees/Field of Study required: Master of Engineering, Bachelor of Engineering
Required Skills: Microsoft Azure
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline (+27 more)
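As a hedged illustration of the API-ingestion pattern named in the requirements, the snippet below pulls paginated JSON from a hypothetical REST endpoint with requests and lands it as a Spark DataFrame; the URL, credential, field names, and pagination scheme are all assumptions.

```python
# Sketch: API ingestion into Spark - fetch paginated JSON, then parallelize it.
# The endpoint URL, auth header, and pagination scheme are invented for illustration.
import requests
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

records, page = [], 1
while True:
    resp = requests.get(
        "https://api.example.com/v1/orders",          # hypothetical endpoint
        params={"page": page, "page_size": 500},
        headers={"Authorization": "Bearer <token>"},  # placeholder credential
        timeout=30,
    )
    resp.raise_for_status()
    batch = resp.json().get("results", [])
    if not batch:
        break
    records.extend(batch)
    page += 1

# Land the raw payload; schema enforcement and cleansing happen downstream.
df = spark.createDataFrame(records)
df.write.mode("append").parquet("/mnt/landing/orders_api/")
```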
Posted 1 week ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Role: Business Analyst
Experience: 7-14 years
Location: Hyderabad
Required Skills: Business Analyst - BRD/FRD, stakeholder management, UAT testing, data warehouse concepts, SQL joins and subqueries, data visualization tools (Power BI/MSTR), and the investment domain (capital markets, asset management, wealth management).
Please share your resumes with jyothsna.g@technogenindia.com.
Experience:
- 10+ years of experience as a BSA or similar role in data analytics or technology projects.
- 5+ years of domain experience in asset management, investment management, insurance, or financial services.
- Familiarity with Investment Operations concepts such as Critical Data Elements (CDEs), data traps, and reconciliation workflows.
- Working knowledge of data engineering principles: ETL/ELT, data lakes, and data warehousing.
- Proficiency in BI and analytics tools such as Power BI, Tableau, MicroStrategy, and SQL.
- Excellent communication, analytical thinking, and stakeholder engagement skills.
- Experience working in Agile/Scrum environments with cross-functional delivery teams.
Technical Skills:
- Proven track record of analytical and problem-solving skills.
- In-depth knowledge of investment data platforms, including Golden Source, NeoXam, RIMES, JPM Fusion, etc.
- Expertise in cloud data technologies such as Snowflake, Databricks, and AWS/GCP/Azure data services.
- Strong understanding of data governance frameworks, metadata management, and data lineage.
- Familiarity with regulatory requirements and compliance standards in the investment management industry.
- Hands-on experience with IBORs such as BlackRock Aladdin, CRD, Eagle STAR (ABOR), Eagle Pace, and Eagle DataMart.
- Familiarity with investment data platforms such as Golden Source, FINBOURNE, NeoXam, RIMES, and JPM Fusion.
- Experience with cloud data platforms like Snowflake and Databricks.
- Background in data governance, metadata management, and data lineage frameworks.
Posted 1 week ago
7.0 years
0 Lacs
Pune, Maharashtra, India
On-site
🧭 Job Summary:
We are seeking a results-driven Data Project Manager (PM) to lead data initiatives leveraging Databricks and Confluent Kafka in a regulated banking environment. The ideal candidate will have a strong background in data platforms, project governance, and financial services, and will be responsible for ensuring successful end-to-end delivery of complex data transformation initiatives aligned with business and regulatory requirements.
Key Responsibilities:
🔹 Project Planning & Execution
- Lead planning, execution, and delivery of enterprise data projects using Databricks and Confluent.
- Develop detailed project plans, delivery roadmaps, and work breakdown structures.
- Ensure resource allocation, budgeting, and adherence to timelines and quality standards.
🔹 Stakeholder & Team Management
- Collaborate with data engineers, architects, business analysts, and platform teams to align on project goals.
- Act as the primary liaison between business units, technology teams, and vendors.
- Facilitate regular updates, steering committee meetings, and issue/risk escalations.
🔹 Technical Oversight
- Oversee solution delivery on Databricks (for data processing, ML pipelines, analytics).
- Manage real-time data streaming pipelines via Confluent Kafka (see the streaming sketch below).
- Ensure alignment with data governance, security, and regulatory frameworks (e.g., GDPR, CBUAE, BCBS 239).
🔹 Risk & Compliance
- Ensure all regulatory reporting data flows are compliant with local and international financial standards.
- Manage controls and audit requirements in collaboration with Compliance and Risk teams.
💼 Required Skills & Experience:
✅ Must-Have:
- 7+ years of experience in project management within the banking or financial services sector.
- Proven experience leading data platform projects (especially Databricks and Confluent Kafka).
- Strong understanding of data architecture, data pipelines, and streaming technologies.
- Experience managing cross-functional teams (onshore/offshore).
- Strong command of Agile/Scrum and Waterfall methodologies.
✅ Technical Exposure:
- Databricks (Delta Lake, MLflow, Spark)
- Confluent Kafka (Kafka Connect, ksqlDB, Schema Registry)
- Azure or AWS cloud platforms (preferably Azure)
- Integration tools (Informatica, Data Factory), CI/CD pipelines
- Oracle ERP implementation experience
✅ Preferred:
- PMP / PRINCE2 / Scrum Master certification
- Familiarity with regulatory frameworks: BCBS 239, GDPR, CBUAE regulations
- Strong understanding of data governance principles (e.g., DAMA-DMBOK)
🎓 Education:
Bachelor's or Master's in Computer Science, Information Systems, Engineering, or a related field.
📈 KPIs:
- On-time, on-budget delivery of data initiatives
- Uptime and SLAs of data pipelines
- User satisfaction and stakeholder feedback
- Compliance with regulatory milestones
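For a concrete feel of the Kafka-to-Databricks streaming flow this role oversees, here is a minimal Spark Structured Streaming sketch that reads from a Confluent topic and writes to a Delta path; the broker address, topic, and storage paths are placeholder assumptions, and a production Confluent cluster would additionally need SASL/TLS options.

```python
# Sketch: Spark Structured Streaming from a Confluent Kafka topic into Delta.
# Broker, topic, and checkpoint/output paths are illustrative placeholders;
# real Confluent clusters also require SASL/TLS configuration.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "pkc-xxxxx.confluent.cloud:9092")  # placeholder
    .option("subscribe", "payments")                                      # hypothetical topic
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers key/value as binary; decode the value for downstream parsing.
decoded = stream.select(
    F.col("key").cast("string"),
    F.col("value").cast("string").alias("payload"),
    "timestamp",
)

query = (
    decoded.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/payments/")
    .outputMode("append")
    .start("/mnt/bronze/payments/")
)
# query.awaitTermination()  # block here when run as a standalone job
```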
Posted 1 week ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
The Data Quality Stable Team is part of the Data Governance & Data Management function. They are responsible to:
- Ensure the Volvo Group gets the capabilities and support to enable business-appointed stakeholders to monitor and improve the quality of their data.
- Establish a continuous data quality improvement process.
- Provide tools, trainings, and standards in the area of Data Quality.
What is expected from you?
- Drive workshops with the business analysts to understand their business context and to collect their needs around data quality pain points.
- Collaborate closely with business analysts and business stakeholders to profile data, build data quality measurements, and implement data quality dashboards.
- Access, manipulate, query, and analyze data using different software and tools (IDMC, Databricks, SQL, etc.) and techniques.
- Design, develop, and implement Data Quality pipelines leveraging Data Quality modules (Data Profiling, Data Quality, and Data Integration) in IDMC.
- Conduct quality assurance activities to validate the effectiveness of Data Quality pipelines and ensure compliance with established data quality standards.
- Perform thorough testing and validation of Data Quality processes, including unit testing, integration testing, performance testing, and user acceptance testing.
- Document Data Quality processes, including design specifications, testing procedures, and operational guidelines, to ensure clarity and maintainability.
- Collaborate with domain data stewards and business stakeholders to define data quality requirements and establish metrics for measuring data quality.
- Monitor and troubleshoot Data Quality pipelines, identifying and resolving issues to maintain optimal performance and reliability.
- Implement best practices for data quality management, including data cleansing, enrichment, and transformation techniques.
- Participate in continuous improvement initiatives to enhance data quality processes and tools.
- Provide training and support to team members and stakeholders on Data Quality tools and methodologies.
Do you dream big? We do too, and we are excited to grow together. In this role, you will bring:
- University degree, with a passion for data.
- Hands-on experience with Informatica IDMC, specifically in the Data Profiling, Data Quality, and Data Integration modules.
- Minimum 4 years' experience with Informatica DQ modules.
- Strong understanding of Data Quality concepts, including data profiling, cleansing, and validation; minimum 4-5 years on a DQ project.
- Experience with ETL solutions.
- Experience with quality assurance methodologies and testing frameworks.
- Basic understanding of MS Azure.
- Familiarity with Data Governance & Management principles.
- Experience in agile setups, using DevOps or similar.
- Strong analytical, problem-solving, and troubleshooting skills.
- Good ability to link business needs and developments.
- Excellent communication and collaboration skills.
- Proficiency in English for international projects.
Nice to have:
- Experience developing with Power BI
Work from office - all 5 days.
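IDMC itself is configured rather than coded, but the same profiling ideas can be sketched in PySpark on Databricks, which this role also touches; the checks below (null rates, duplicate keys) use an invented table and columns as a rough illustration of a data quality measurement.

```python
# Sketch: simple data-quality measurements in PySpark (null rates, duplicate keys).
# Table and column names are invented; IDMC would express these as DQ rules instead.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.table("customer_master")          # hypothetical table under measurement

total = df.count()

# Null-rate profile per critical column.
null_rates = df.select([
    (F.sum(F.col(c).isNull().cast("int")) / total).alias(f"{c}_null_rate")
    for c in ["customer_id", "email", "country"]
])
null_rates.show()

# Duplicate-key check: customer_id should be unique.
dupes = df.groupBy("customer_id").count().filter("count > 1")
print(f"duplicate keys: {dupes.count()} of {total} rows")
```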
Posted 1 week ago
0.0 - 10.0 years
0 Lacs
Mumbai, Maharashtra
On-site
Location: Mumbai, Maharashtra, India
Category: Digital Technology
Job ID: R147951
Posted: Jul 14th 2025
Job Available In 5 Locations
Staff Technical Product Manager
Are you excited about the opportunity to lead a team within an industry leader in Energy Technology? Are you passionate about improving capabilities, efficiency and performance? Join our Digital Technology Team! As a Staff Technical Product Manager, this position will operate in lock-step with product management to create a clear strategic direction for build needs and convey that vision to the service's scrum team. You will direct the team with a clear and descriptive set of requirements captured as stories, and partner with the team to determine what can be delivered through balancing the need for new features, defects, and technical debt.
Partner with the best
As a Staff Technical Product Manager, we are seeking a strong background in business analysis, team leadership, and data architecture, along with hands-on development skills. The ideal candidate will excel in creating roadmaps, planning with prioritization, resource allocation, key item delivery, and seamless integration of perspectives from various stakeholders, including Product Managers, Technical Anchors, Service Owners, and Developers. A results-oriented leader, capable of building and executing an aligned strategy, leading the data team and cross-functional teams to meet deliverable timelines.
As a Staff Technical Product Manager, you will be responsible for:
- Demonstrating wide and deep knowledge in data engineering, data architecture, and data science; guiding, leading, and working with the team to drive to the right solution.
- Engaging frequently (80%) with the development team; facilitating discussions, providing clarification, story acceptance and refinement, testing, and validation; contributing to design activities and decisions; familiarity with the waterfall and Agile scrum frameworks.
- Owning and managing the backlog; continuously ordering and prioritizing to ensure that 1-2 sprints/iterations of backlog are always ready.
- Collaborating with UX in design decisions, demonstrating deep understanding of the technology stack and its impact on the final product.
- Conducting customer and stakeholder interviews and elaborating on personas.
- Demonstrating expert-level skill in problem decomposition and the ability to navigate through ambiguity.
- Partnering with the Service Owner to ensure a healthy development process and clear tracking metrics to form a standard and trustworthy way of providing customer support.
- Designing and implementing scalable and robust data pipelines to collect, process, and store data from various sources.
- Developing and maintaining data warehouse and ETL (Extract, Transform, Load) processes for data integration and transformation.
- Optimizing and tuning the performance of data systems to ensure efficient data processing and analysis.
- Collaborating with product managers and analysts to understand data requirements and implement solutions for data modeling and analysis.
- Identifying and resolving data quality issues, ensuring data accuracy, consistency, and completeness.
- Implementing and maintaining data governance and security measures to protect sensitive data.
- Monitoring and troubleshooting data infrastructure, performing root cause analysis, and implementing necessary fixes.
Fuel your passion
To be successful in this role you will:
- Have a Bachelor's or higher degree in Computer Science, Information Systems, or a related field.
- Have a minimum of 6-10 years of proven experience as a Data Engineer or similar role, working with large-scale data processing and storage systems.
- Have proficiency in SQL and database management systems (e.g., MySQL, PostgreSQL, or Oracle).
- Have extensive knowledge of working with SAP systems, T-codes, data pipelines in SAP, and Databricks-related technologies.
- Have experience building complex jobs for SCD-type mappings using ETL tools like PySpark, Talend, Informatica, etc. (see the SCD sketch below).
- Have experience with data visualization and reporting tools (e.g., Tableau, Power BI).
- Have strong problem-solving and analytical skills, with the ability to handle complex data challenges.
- Have excellent communication and collaboration skills to work effectively in a team environment.
- Have experience in data modeling, data warehousing, and ETL principles.
- Have familiarity with cloud platforms like AWS, Azure, or GCP, and their data services (e.g., S3, Redshift, BigQuery).
- Have advanced knowledge of distributed computing and parallel processing.
- Experience with real-time data processing and streaming technologies (e.g., Apache Kafka, Apache Flink).
- Knowledge of containerization and orchestration technologies (e.g., Docker, Kubernetes).
- Certification in relevant technologies or data engineering disciplines.
- Working knowledge of Databricks, Dremio, and SAP is highly preferred.
Work in a way that works for you
We recognize that everyone is different and that the way in which people want to work and deliver at their best is different for everyone too. In this role, we can offer the following flexible working patterns (where applicable):
- Working flexible hours - flexing the times when you work in the day to help you fit everything in and work when you are the most productive.
Working with us
Our people are at the heart of what we do at Baker Hughes. We know we are better when all of our people are developed, engaged, and able to bring their whole authentic selves to work. We invest in the health and well-being of our workforce, train and reward talent, and develop leaders at all levels to bring out the best in each other.
Working for you
Our inventions have revolutionized energy for over a century. But to keep going forward tomorrow, we know we must push the boundaries today. We prioritize rewarding those who embrace challenge with a package that reflects how much we value their input. Join us, and you can expect:
- Contemporary work-life balance policies and wellbeing activities.
About Us
With operations in over 120 countries, we provide better solutions for our customers and richer opportunities for our people. As a leading partner to the energy industry, we're committed to achieving net-zero carbon emissions by 2050 and we're always looking for the right people to help us get there. People who are as passionate as we are about making energy safer, cleaner, and more efficient.
We are an energy technology company that provides solutions to energy and industrial customers worldwide. Built on a century of experience and conducting business in over 120 countries, our innovative technologies and services are taking energy forward, making it safer, cleaner and more efficient for people and the planet.
Join Us
Are you seeking an opportunity to make a real difference in a company that values innovation and progress? Join us and become part of a team of people who will challenge and inspire you! Let's come together and take energy forward.
Baker Hughes Company is an Equal Opportunity Employer. Employment decisions are made without regard to race, color, religion, national or ethnic origin, sex, sexual orientation, gender identity or expression, age, disability, protected veteran status or other characteristics protected by law.
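As a hedged sketch of the SCD-type mapping named in the qualifications, the PySpark snippet below implements a Slowly Changing Dimension Type 2 step: expiring changed rows and preparing new current versions; the tables, keys, and tracked attribute are invented, and a real job would finish with a Delta MERGE or append.

```python
# Sketch: a Slowly Changing Dimension Type 2 update in PySpark.
# Dimension/staging tables and columns are invented; real jobs add hashing and audit columns.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

dim = spark.table("dim_material")          # existing dimension with SCD2 columns
incoming = spark.table("staging_material") # today's snapshot from a source extract

# Identify changed records by comparing a tracked attribute.
changed = (
    incoming.alias("s")
    .join(dim.filter("is_current = true").alias("d"), "material_id")
    .filter("s.plant != d.plant")
    .select("s.*")
)

# Expire the current versions of changed keys...
expired = (
    dim.join(changed.select("material_id"), "material_id", "left_semi")
    .filter("is_current = true")
    .withColumn("is_current", F.lit(False))
    .withColumn("valid_to", F.current_date())
)

# ...and prepare the new versions as current rows.
new_rows = (
    changed.withColumn("is_current", F.lit(True))
    .withColumn("valid_from", F.current_date())
    .withColumn("valid_to", F.lit(None).cast("date"))
)
# A Delta MERGE (or overwrite of expired keys plus append of new_rows) would apply both sets.
```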
Posted 1 week ago
1.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
Senior Data & Applied Scientist
Noida, Uttar Pradesh, India
Date posted: Jul 14, 2025
Job number: 1844835
Work site: Microsoft on-site only
Travel: None
Role type: Individual Contributor
Profession: Research, Applied, & Data Sciences
Discipline: Data Science
Employment type: Full-Time
Overview
Do you want to be on the leading edge of using big data and help drive engineering and product decisions for the biggest productivity software on the planet? Office Product Group (OPG) has embarked on a mission to delight our customers by using data-informed engineering to develop compelling products and services. OPG is looking for an experienced professional with a passion for delivering business value with data insights and analytics to join our team as a Data & Applied Scientist. We are looking for a strong Senior Data Scientist with a proven track record of solving large, complex data analysis problems in a real-world software product development setting. Ideal candidates should be able to take a business or engineering problem from a Product Manager or Engineering leader and translate it to a data problem. This includes all the steps to identify and deeply understand potential data sources, conduct the appropriate analysis to reveal actionable insights, and then operationalize the metrics or solution into Power BI dashboards. You will be delivering results through innovation and persistence where similar candidates have given up.
Microsoft's mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.
Qualifications
Required Qualifications:
- Doctorate in Data Science, Mathematics, Statistics, Econometrics, Economics, Operations Research, Computer Science, or a related field AND 1+ year(s) of data-science experience (e.g., managing structured and unstructured data, applying statistical techniques, and reporting results), OR
- Master's degree in one of the above fields AND 3+ years of data-science experience, OR
- Bachelor's degree in one of the above fields AND 5+ years of data-science experience, OR
- Equivalent experience.
- 2+ years of customer-facing, project-delivery experience, professional services, and/or consulting experience.
Preferred Qualifications:
- 7+ years of experience programming with languages such as Python/R, and hands-on experience using technologies such as SQL, Kusto, Databricks, Spark, etc.
- 7+ years of experience working with data exploration and data visualization tools like Power BI or similar.
- Ability to communicate complex ideas and concepts to leadership and deliver results.
- Comfort in manipulating and analyzing complex, high-dimensional data from varying sources to solve difficult problems.
- Bachelor's or higher degree in Computer Science, Statistics, Mathematics, Physics, Engineering, or related disciplines.
Responsibilities
Dashboard Development and Maintenance: Design, build, and maintain interactive dashboards and reports in PowerBI to visualize key business metrics and insights. Work closely with stakeholders to understand their data visualization needs and translate business requirements into technical specifications.
Data Extraction and Analysis: Perform ad-hoc data extraction and analysis from various data sources, including SQL databases, cloud-based data storage solutions, and external APIs. Ensure data accuracy and integrity in reporting and analysis. Deliver high-impact analysis to diagnose and drive business-critical insights that guide product and business development.
Metric Development and Tracking: Be the SME who understands the landscape of what data (telemetry) is and should be captured. Advise feature teams on telemetry best practices to ensure business needs for data are met. Collaborate with product owners and other stakeholders to define and track key performance indicators (KPIs) and other relevant metrics for business performance. Identify trends and insights in the data to support decision-making processes.
User Journey and Funnel Analysis: Assist product owners in mapping out user journeys and funnels to understand user behavior and identify opportunities for feature improvement. Develop and implement ML models to analyze user journeys and funnels. Utilize a variety of techniques to uncover patterns in user behavior that can help improve the product.
Forecasting and Growth Analysis: Support the forecasting of key results (KRs) and growth metrics through data analysis and predictive modeling. Provide insights and recommendations to help drive strategic planning and execution.
Benefits/perks listed below may vary depending on the nature of your employment with Microsoft and the country where you work.
Industry leading healthcare
Educational resources
Discounts on products and services
Savings and investments
Maternity and paternity leave
Generous time away
Giving programs
Opportunities to network and connect
Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations, and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
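The funnel analysis this posting describes usually reduces to counting how many distinct users reach each step and computing step-over-step conversion. Below is a minimal, purely illustrative pandas sketch; the events table, step names, and columns are hypothetical and not Microsoft's actual telemetry schema.

```python
# Illustrative only: a toy funnel computed from a hypothetical events table.
import pandas as pd

events = pd.DataFrame({
    "user_id": [1, 1, 1, 2, 2, 3],
    "step":    ["open", "edit", "save", "open", "edit", "open"],
})

FUNNEL = ["open", "edit", "save"]  # hypothetical ordered funnel steps

# Count distinct users who reached each step, then step-over-step conversion.
reached = {step: events.loc[events["step"] == step, "user_id"].nunique()
           for step in FUNNEL}
for prev, curr in zip(FUNNEL, FUNNEL[1:]):
    rate = reached[curr] / reached[prev] if reached[prev] else 0.0
    print(f"{prev} -> {curr}: {rate:.0%} ({reached[curr]}/{reached[prev]})")
```

In practice the same counts would be computed in SQL or Kusto against telemetry and surfaced in a PowerBI dashboard; the pandas version only shows the shape of the calculation.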
Posted 1 week ago
0.0 - 3.0 years
0 Lacs
Chennai, Tamil Nadu
Remote
Location: Chennai, Tamil Nadu, India
Job ID: R0099945
Date Posted: 2025-07-14
Company Name: HITACHI ENERGY TECHNOLOGY SERVICES PRIVATE LIMITED
Profession (Job Category): Engineering & Science
Job Schedule: Full time
Remote: No
Job Description:
The opportunity: As a Data Engineer, you will be part of Operation Center, India (INOPC-PG), which aims to develop a global value chain where key business activities, resources, and expertise are shared across geographic boundaries to optimize value for Hitachi Energy customers across markets. As part of the Transformers BU, we provide high-quality engineering and technology to the Hitachi Energy world. This is an important step in Hitachi Energy's Global Footprint strategy.
How you’ll make an impact:
Display technical expertise in data analytics while working in a team of diversified technical competencies.
Build and maintain accurate and scalable data pipelines and infrastructure, such as SQL warehouses and data lakes, using cloud platforms (e.g., MS Azure, Databricks).
Proactively work with business stakeholders to understand data lineage, definitions, and methods of data extraction.
Write production-grade SQL and PySpark code to create data architecture.
Consolidate SQL databases from multiple sources and perform data cleaning and manipulation in preparation for analytics and machine learning.
Use data visualization tools such as Power BI to create professional-quality dashboards and reports.
Write good-quality documentation of data processing for different projects to ensure reproducibility.
Ensure compliance with applicable external and internal regulations, procedures, and guidelines.
Live Hitachi Energy’s core values of safety and integrity, which means taking responsibility for your own actions while caring for your colleagues and the business.
Your background:
BE / B.Tech in Computer Science, Data Science, or a related discipline and at least 5 years of related working experience.
5 years of data engineering experience, with an understanding of lakehouse architecture, data integration frameworks, ETL/ELT pipelines, orchestration/monitoring, and star-schema data modeling.
5 years of experience with Python/PySpark and SQL (proficient in PySpark, Python, and Spark SQL).
2-3 years of hands-on data engineering experience using Databricks as the main tool (i.e., spending more than 60% of the time in Databricks rather than using it only occasionally).
2-3 years of hands-on experience with different Databricks components (DLT, Workflows, Unity Catalog, SQL Warehouse, CI/CD) in addition to using notebooks.
Experience with Microsoft Power BI.
Proficiency in both spoken and written English is required.
Qualified individuals with a disability may request a reasonable accommodation if you are unable or limited in your ability to use or access the Hitachi Energy career site as a result of your disability. You may request reasonable accommodations by completing a general inquiry form on our website. Please include your contact information and specific details about your required accommodation to support you during the job application process. This is solely for job seekers with disabilities requiring accessibility assistance or an accommodation in the job application process. Messages left for other purposes will not receive a response.
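To make the "consolidate, clean, and prepare" PySpark work described above concrete, here is a minimal sketch of that kind of step. The source extracts, column names, and cleaning rules are hypothetical, chosen only to illustrate the pattern; this is not Hitachi Energy's actual pipeline.

```python
# A minimal sketch, assuming two toy extracts standing in for separate SQL sources.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("consolidate-demo").getOrCreate()

orders_a = spark.createDataFrame(
    [("O1", 100.0, " emea"), ("O2", None, "amer")],
    ["order_id", "amount", "region"],
)
orders_b = spark.createDataFrame(
    [("O2", None, "amer"), ("O3", 75.0, "Apac ")],
    ["order_id", "amount", "region"],
)

cleaned = (
    orders_a.unionByName(orders_b)
    .dropDuplicates(["order_id"])                     # de-duplicate across sources
    .filter(F.col("amount").isNotNull())              # basic data-quality check
    .withColumn("region", F.upper(F.trim("region")))  # normalize categorical values
)
cleaned.show()  # a consolidated, cleaned table ready for analytics or ML
```

In a real lakehouse setting the inputs would come from registered tables and the output would be written back (e.g., as a Delta table), but the union/de-duplicate/validate/normalize sequence is the core of the step.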
Posted 1 week ago
4.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
At PwC, our people in audit and assurance focus on providing independent and objective assessments of financial statements, internal controls, and other assurable information, enhancing the credibility and reliability of this information with a variety of stakeholders. They evaluate compliance with regulations, including assessing governance and risk management processes and related controls. Those in data, analytics and technology solutions at PwC will assist clients in developing solutions that help build trust, drive improvement, and detect, monitor, and predict risk. Your work will involve using advanced analytics, data wrangling technology, and automation tools to leverage data, and will focus on establishing the right processes and structures to enable our clients to make efficient and effective decisions based on accurate information that is complete and trustworthy.
Focused on relationships, you are building meaningful client connections, and learning how to manage and inspire others. Navigating increasingly complex situations, you are growing your personal brand, deepening technical expertise and awareness of your strengths. You are expected to anticipate the needs of your teams and clients, and to deliver quality. Embracing increased ambiguity, you are comfortable when the path forward isn’t clear, you ask questions, and you use these moments as opportunities to grow.
Skills
Examples of the skills, knowledge, and experiences you need to lead and deliver value at this level include but are not limited to:
Respond effectively to the diverse perspectives, needs, and feelings of others.
Use a broad range of tools, methodologies, and techniques to generate new ideas and solve problems.
Use critical thinking to break down complex concepts.
Understand the broader objectives of your project or role and how your work fits into the overall strategy.
Develop a deeper understanding of the business context and how it is changing.
Use reflection to develop self-awareness, enhance strengths, and address development areas.
Interpret data to inform insights and recommendations.
Uphold and reinforce professional and technical standards (e.g., refer to specific PwC tax and audit guidance), the Firm's code of conduct, and independence requirements.
About The Job
PricewaterhouseCoopers Acceleration Centre (Kolkata) Private Limited is a joint venture in India among members of the PricewaterhouseCoopers Network that leverages the scale and capabilities of its network. It is a member firm of PricewaterhouseCoopers International Limited and has its registered office in Kolkata, India. The Delivery Center provides professionals with an opportunity to work in a dynamic environment and to develop process- and quality-based skills. To really stand out and make us fit for the future in a constantly changing world, each one of us at PwC needs to be a purpose-led and values-driven leader at every level. To help us achieve this we have the PwC Professional, our global leadership development framework. It gives us a single set of expectations across our lines, geographies, and career paths, and provides transparency on the skills we need as individuals to be successful and progress in our careers, now and in the future. We are seeking a skilled Revenue Automation Senior Associate to perform revenue system implementation and data conversion for our revenue automation consulting area.
The candidate will play a critical role in supporting our clients by ensuring compliance with accounting standards, implementing revenue recognition systems and data conversion, optimizing revenue recognition processes, and driving cross-functional collaboration to achieve business objectives. As a Senior Associate, you will work as part of a team of problem solvers and help clients solve their complex business issues from strategy to execution. The candidate will report to an AC Manager. The AC team works as an extension of our overseas Engagement Teams and works closely with those teams as well as with clients directly.
Requirements
Knowledge/Skills:
In-depth knowledge of revenue recognition principles and accounting standards, including ASC 606 / IFRS 15.
Strong understanding of business processes, systems, and controls related to revenue recognition.
Experience with revenue management systems (e.g., Zuora Revenue, Oracle RMCS); Alteryx, SQL, and Microsoft Visio preferred.
Excellent analytical skills, with the ability to assess complex issues, identify solutions, and make recommendations.
Effective communication skills, with the ability to communicate complex concepts to non-technical stakeholders.
Good interpersonal skills, with the ability to build relationships and collaborate effectively with clients’ stakeholders at all levels of the organization.
Ability to perform basic review activities and coach junior team members in completing their tasks.
Functional Skills
Hands-on experience with data management per business requirements for analytics.
Experience dealing with financial data and data analytics for business processes.
Experience performing data transformation, data quality checks, and data blending.
Good knowledge and understanding of performing on project teams and providing deliverables, including multiphase data analysis related to the evaluation of compliance, finance, and risk issues.
Technical Tools
Must have:
Hands-on experience with MS-SQL / ACL or another structured query language.
Good knowledge and/or a proven record of success leveraging data manipulation and analysis technologies.
Proficiency in Microsoft Excel and PowerPoint, with demonstrated knowledge of Excel and its functionality.
Good To Have
Experience in a similar role in their current profile.
Experience working on cross-functional projects or initiatives, with a proven track record of successful implementations.
Strong accounting knowledge and experience dealing with financial data are a plus.
Knowledge of Azure Databricks / Alteryx / Python / SAS / Knime.
Thorough knowledge and/or a proven record of success leveraging data visualization tools such as Power BI and Tableau.
Education/Qualification
Bachelor's degree in Accounting and Information Systems or a related field.
Level Of Experience
4+ years of experience in relevant roles, with a focus on revenue recognition, preferably in a public accounting firm or a large corporation.
Preferred: CPA or equivalent certification.
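The data transformation and quality checks this role calls for often start with a handful of simple assertions over an extract. Here is a minimal illustrative sketch in pandas; the revenue table and the specific checks are hypothetical, chosen purely for demonstration.

```python
# Illustrative only: basic data-quality checks on a hypothetical revenue extract.
import pandas as pd

revenue = pd.DataFrame({
    "contract_id": ["C1", "C2", "C2", "C3"],
    "amount":      [1000.0, 250.0, 250.0, None],
    "currency":    ["USD", "USD", "USD", "EUR"],
})

checks = {
    "duplicate_rows":   int(revenue.duplicated().sum()),
    "missing_amounts":  int(revenue["amount"].isna().sum()),
    "negative_amounts": int((revenue["amount"] < 0).sum()),
}
print(checks)  # {'duplicate_rows': 1, 'missing_amounts': 1, 'negative_amounts': 0}
```

The same assertions are commonly expressed in MS-SQL or built as Alteryx workflow steps; the point is that each check is a countable rule that can be reported before data is loaded into a revenue system.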
Posted 1 week ago
15.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. Those in artificial intelligence and machine learning at PwC will focus on developing and implementing advanced AI and ML solutions to drive innovation and enhance business processes. Your work will involve designing and optimising algorithms, models, and systems to enable intelligent decision-making and automation.
Years of Experience: Candidates with 15+ years of hands-on experience.
Preferred Skills
Any cloud data and reporting migration experience.
Use an analytical and data-driven approach to develop a deep understanding of fast-changing business.
Familiarity with data technologies such as Snowflake, Databricks, Redshift, and Synapse.
Experience leading large-scale data modernization and governance initiatives emphasizing the strategy, design, and development of Platform-as-a-Service and Infrastructure-as-a-Service offerings that extend to private and public cloud deployment models.
Experience in designing, architecting, implementing, and managing data lakes/warehouses.
Experience with complex environments delivering application migration to cloud platforms.
Understanding of Agile, SCRUM, and Continuous Delivery methodologies.
Hands-on experience with Docker and Kubernetes or other container orchestration platforms.
Strong experience in data management with an understanding of analytics and reporting.
Understanding of emerging technologies and the latest data engineering providers.
Experience implementing enterprise data concepts such as Master Data Management and Enterprise Data Warehouse, with experience in MDM standards.
Educational Background
BE / B.Tech / MCA / M.Sc / M.E / M.Tech / MBA
Posted 1 week ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems.
PwC US - Acceleration Center is seeking candidates with a strong analytical background to work in our Analytics Consulting practice. Associates will work as an integral part of business analytics teams in India alongside clients and consultants in the U.S., leading teams for high-end analytics consulting engagements and providing business recommendations to project teams.
Years of Experience: Candidates with 2+ years of hands-on experience.
Must Have
Experience building ML models in cloud environments (at least one of Azure ML, AWS SageMaker, or Databricks).
Knowledge of predictive/prescriptive analytics, especially the use of Log-Log, Log-Linear, and Bayesian regression techniques, including machine learning algorithms (supervised and unsupervised), deep learning algorithms, and artificial neural networks.
Good knowledge of statistics (e.g., statistical tests and distributions).
Experience in data analysis (e.g., data cleansing, standardization, and data preparation for machine learning use cases).
Experience with machine learning frameworks and tools (e.g., scikit-learn, mlr, caret, H2O, TensorFlow, PyTorch, MLlib).
Advanced-level programming in SQL or Python/PySpark.
Expertise with visualization tools (e.g., Tableau, Power BI, AWS QuickSight).
Nice To Have
Working knowledge of containerization (e.g., AWS EKS, Kubernetes), Docker, and data pipeline orchestration (e.g., Airflow).
Good communication and presentation skills.
Roles And Responsibilities
Develop and execute project and analysis plans under the guidance of the Project Manager.
Interact with and advise consultants/clients in the US as a subject matter expert to formalize the data sources to be used, the datasets to be acquired, and the data and use-case clarifications needed to get a strong hold on the data and the business problem to be solved.
Drive and conduct analysis using advanced analytics tools and coach junior team members.
Implement the necessary quality control measures to ensure deliverable integrity.
Validate analysis outcomes and recommendations with all stakeholders, including the client team.
Build storylines and make presentations to the client team and/or the PwC project leadership team.
Contribute to knowledge- and firm-building activities.
Professional And Educational Background
Any graduate / BE / B.Tech / MCA / M.Sc / M.E / M.Tech / Master’s Degree / MBA
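Of the regression techniques this posting names, log-log regression is perhaps the simplest to illustrate: regressing log(y) on log(x) makes the fitted slope interpretable as an elasticity (a 1% change in x is associated with roughly a beta% change in y). A minimal sketch on synthetic data, for illustration only:

```python
# Illustrative only: a log-log regression where the slope is an elasticity.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
x = rng.uniform(1, 100, size=200)                # e.g. price or marketing spend
y = 50 * x ** 0.8 * rng.lognormal(0, 0.1, 200)   # true elasticity ~ 0.8

# Fit log(y) = alpha + beta * log(x); beta estimates the elasticity.
model = LinearRegression().fit(np.log(x).reshape(-1, 1), np.log(y))
print(f"estimated elasticity: {model.coef_[0]:.2f}")  # ~0.8
```

A log-linear model differs only in leaving x untransformed, so the coefficient becomes a semi-elasticity; Bayesian variants replace the point estimate with a posterior distribution over the slope.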
Posted 1 week ago
15.0 years
0 Lacs
Andhra Pradesh, India
On-site
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. Those in artificial intelligence and machine learning at PwC will focus on developing and implementing advanced AI and ML solutions to drive innovation and enhance business processes. Your work will involve designing and optimising algorithms, models, and systems to enable intelligent decision-making and automation.
Years of Experience: Candidates with 15+ years of hands-on experience.
Preferred Skills
Any cloud data and reporting migration experience.
Use an analytical and data-driven approach to develop a deep understanding of fast-changing business.
Familiarity with data technologies such as Snowflake, Databricks, Redshift, and Synapse.
Experience leading large-scale data modernization and governance initiatives emphasizing the strategy, design, and development of Platform-as-a-Service and Infrastructure-as-a-Service offerings that extend to private and public cloud deployment models.
Experience in designing, architecting, implementing, and managing data lakes/warehouses.
Experience with complex environments delivering application migration to cloud platforms.
Understanding of Agile, SCRUM, and Continuous Delivery methodologies.
Hands-on experience with Docker and Kubernetes or other container orchestration platforms.
Strong experience in data management with an understanding of analytics and reporting.
Understanding of emerging technologies and the latest data engineering providers.
Experience implementing enterprise data concepts such as Master Data Management and Enterprise Data Warehouse, with experience in MDM standards.
Educational Background
BE / B.Tech / MCA / M.Sc / M.E / M.Tech / MBA
Posted 1 week ago
15.0 years
0 Lacs
Andhra Pradesh, India
On-site
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. Those in artificial intelligence and machine learning at PwC will focus on developing and implementing advanced AI and ML solutions to drive innovation and enhance business processes. Your work will involve designing and optimising algorithms, models, and systems to enable intelligent decision-making and automation.
Years of Experience: Candidates with 15+ years of hands-on experience.
Preferred Skills
Experience in GCP and one more cloud platform (AWS/Azure), specifically data migration experience.
Use an analytical and data-driven approach to develop a deep understanding of fast-changing business.
Familiarity with data technologies such as Snowflake, Databricks, Redshift, and Synapse.
Experience leading large-scale data modernization and governance initiatives emphasizing the strategy, design, and development of Platform-as-a-Service and Infrastructure-as-a-Service offerings that extend to private and public cloud deployment models.
Experience in designing, architecting, implementing, and managing data lakes/warehouses.
Experience with complex environments delivering application migration to cloud platforms.
Understanding of Agile, SCRUM, and Continuous Delivery methodologies.
Hands-on experience with Docker and Kubernetes or other container orchestration platforms.
Strong experience in data management with an understanding of analytics and reporting.
Understanding of emerging technologies and the latest data engineering providers.
Experience implementing enterprise data concepts such as Master Data Management and Enterprise Data Warehouse, with experience in MDM standards.
Educational Background
BE / B.Tech / MCA / M.Sc / M.E / M.Tech / MBA
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Are you ready to take the lead in building cutting-edge AI solutions that make a real impact? At Seco, we’re on an exciting journey to expand our AI & Automation Center for Enablement—and we’re looking for a Senior AI Engineer to help us shape the future. This is a unique opportunity to join a growing team from the ground up, influence how we work with AI across the company, and drive meaningful change through innovation.
About The Job
In this position, you're responsible for designing and delivering AI/ML solutions that address real business challenges, unlock new opportunities, and drive business value. You collaborate closely with both business and technical stakeholders to identify high-impact use cases and translate them into scalable, production-ready applications. You also play a key role in shaping our AI strategy, mentoring colleagues, and ensuring ethical and effective AI adoption across the organization.
Your responsibilities include:
Driving the evolution of our AI platform, tools, and reusable assets.
Applying software engineering best practices such as version control, testing, and modular design.
Collaborating with DevOps/MLOps engineers to build reliable AI pipelines.
Ensuring compliance with AI governance, data privacy, IT security, and ethical standards.
Mentoring junior engineers and contributing to internal capability building.
Demonstrating a keen interest in solution architecture, with a drive to design scalable, robust, and future-ready AI systems.
Staying up to date with the latest AI innovations and bringing new ideas into practice.
The location for this position is Stockholm, and we apply a hybrid work solution.
Your profile
We're looking for someone with extensive hands-on experience designing and delivering AI/ML solutions in production environments. Your knowledge is backed by a degree in Computer Science, Data Science, Engineering, or a related field. Skills in Python and familiarity with major ML frameworks like TensorFlow, PyTorch, and scikit-learn are a plus, as is an understanding of modern software development practices, including CI/CD and version control. Experience with lakehouse architectures such as Databricks and knowledge of NLP, computer vision, deep learning, and GenAI are also required. Acting in a global environment calls for proficiency in English, verbally and in writing. You’re a strong communicator who can adapt your style to both technical and non-technical audiences. You understand stakeholder needs and confidently translate them into technical solutions. You thrive in collaborative environments, enjoy mentoring others, and bring a team-oriented mindset to everything you do.
Our Seco culture
Seco employees across the globe share our family spirit, along with a passion for our customers and a personal commitment to ensuring success in everything we do. For us, it’s also clear that our diversity forms an amazing foundation for achieving great results. Curious about our workplace and benefits? Read more on our website. You’re also welcome to visit our LinkedIn or Facebook to get to know us and our products further.
Contact information
For further information about this position, please contact Joel Strandh, hiring manager, joel.strandh@secotools.com
We’ve already decided on which advertising channels and marketing campaigns we wish to use and respectfully decline any additional contacts in that matter.
Union contacts – Sweden
David Romlin, Unionen, +46 (0)70-608 46 90
Suncana Bandalo, Akademikerföreningen, +46 (0)70-300 10 73
Benny Christiansen, Ledarna, +46 (0)70-523 50 60
Recruitment Specialist: Gustaf Sjögren
At Seco, we value a healthy work-life balance and will be away on summer vacation. It may therefore be difficult to reach us, and the recruitment process might take longer than usual.
How To Apply
Send us your application no later than August 10, 2025. Click apply and include your resume and cover letter in English. Please note that we don’t accept applications by e-mail. Job ID: R0079931.
As we aim for a fair recruitment process, we utilize assessment tools to safeguard objectivity. When you apply for this job, you will therefore receive an invitation via email to a personality and logic-ability test. Feedback comes immediately after the test has been completed, and the selection process begins after the application deadline. For more information about our recruitment process, please contact HR Services at hrservices.sweden@sandvik.com.
At Seco, we develop and offer advanced products and solutions that make metal cutting easier. We work together with our customers to identify and implement the best solutions for their needs. The corporate culture empowers employees through shared values: Passion for our customers, Family Spirit, and Personal commitment. Seco Tools has a presence in more than 75 countries and employs about 4,000 people.
Posted 1 week ago
4.0 years
0 Lacs
Gurugram, Haryana, India
On-site
At PwC, our people in audit and assurance focus on providing independent and objective assessments of financial statements, internal controls, and other assurable information, enhancing the credibility and reliability of this information with a variety of stakeholders. They evaluate compliance with regulations, including assessing governance and risk management processes and related controls. Those in data, analytics and technology solutions at PwC will assist clients in developing solutions that help build trust, drive improvement, and detect, monitor, and predict risk. Your work will involve using advanced analytics, data wrangling technology, and automation tools to leverage data, and will focus on establishing the right processes and structures to enable our clients to make efficient and effective decisions based on accurate information that is complete and trustworthy.
Focused on relationships, you are building meaningful client connections, and learning how to manage and inspire others. Navigating increasingly complex situations, you are growing your personal brand, deepening technical expertise and awareness of your strengths. You are expected to anticipate the needs of your teams and clients, and to deliver quality. Embracing increased ambiguity, you are comfortable when the path forward isn’t clear, you ask questions, and you use these moments as opportunities to grow.
Skills
Examples of the skills, knowledge, and experiences you need to lead and deliver value at this level include but are not limited to:
Respond effectively to the diverse perspectives, needs, and feelings of others.
Use a broad range of tools, methodologies, and techniques to generate new ideas and solve problems.
Use critical thinking to break down complex concepts.
Understand the broader objectives of your project or role and how your work fits into the overall strategy.
Develop a deeper understanding of the business context and how it is changing.
Use reflection to develop self-awareness, enhance strengths, and address development areas.
Interpret data to inform insights and recommendations.
Uphold and reinforce professional and technical standards (e.g., refer to specific PwC tax and audit guidance), the Firm's code of conduct, and independence requirements.
About The Job
PricewaterhouseCoopers Acceleration Centre (Kolkata) Private Limited is a joint venture in India among members of the PricewaterhouseCoopers Network that leverages the scale and capabilities of its network. It is a member firm of PricewaterhouseCoopers International Limited and has its registered office in Kolkata, India. The Delivery Center provides professionals with an opportunity to work in a dynamic environment and to develop process- and quality-based skills. To really stand out and make us fit for the future in a constantly changing world, each one of us at PwC needs to be a purpose-led and values-driven leader at every level. To help us achieve this we have the PwC Professional, our global leadership development framework. It gives us a single set of expectations across our lines, geographies, and career paths, and provides transparency on the skills we need as individuals to be successful and progress in our careers, now and in the future. We are seeking a skilled Revenue Automation Senior Associate to perform revenue system implementation and data conversion for our revenue automation consulting area.
The candidate will play a critical role in supporting our clients by ensuring compliance with accounting standards, implementing revenue recognition systems and data conversion, optimizing revenue recognition processes, and driving cross-functional collaboration to achieve business objectives. As a Senior Associate, you will work as part of a team of problem solvers and help clients solve their complex business issues from strategy to execution. The candidate will report to an AC Manager. The AC team works as an extension of our overseas Engagement Teams and works closely with those teams as well as with clients directly.
Requirements
Knowledge/Skills:
In-depth knowledge of revenue recognition principles and accounting standards, including ASC 606 / IFRS 15.
Strong understanding of business processes, systems, and controls related to revenue recognition.
Experience with revenue management systems (e.g., Zuora Revenue, Oracle RMCS); Alteryx, SQL, and Microsoft Visio preferred.
Excellent analytical skills, with the ability to assess complex issues, identify solutions, and make recommendations.
Effective communication skills, with the ability to communicate complex concepts to non-technical stakeholders.
Good interpersonal skills, with the ability to build relationships and collaborate effectively with clients’ stakeholders at all levels of the organization.
Ability to perform basic review activities and coach junior team members in completing their tasks.
Functional Skills
Hands-on experience with data management per business requirements for analytics.
Experience dealing with financial data and data analytics for business processes.
Experience performing data transformation, data quality checks, and data blending.
Good knowledge and understanding of performing on project teams and providing deliverables, including multiphase data analysis related to the evaluation of compliance, finance, and risk issues.
Technical Tools
Must have:
Hands-on experience with MS-SQL / ACL or another structured query language.
Good knowledge and/or a proven record of success leveraging data manipulation and analysis technologies.
Proficiency in Microsoft Excel and PowerPoint, with demonstrated knowledge of Excel and its functionality.
Good To Have
Experience in a similar role in their current profile.
Experience working on cross-functional projects or initiatives, with a proven track record of successful implementations.
Strong accounting knowledge and experience dealing with financial data are a plus.
Knowledge of Azure Databricks / Alteryx / Python / SAS / Knime.
Thorough knowledge and/or a proven record of success leveraging data visualization tools such as Power BI and Tableau.
Education/Qualification
Bachelor's degree in Accounting and Information Systems or a related field.
Level Of Experience
4+ years of experience in relevant roles, with a focus on revenue recognition, preferably in a public accounting firm or a large corporation.
Preferred: CPA or equivalent certification.
Posted 1 week ago
15.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. Those in artificial intelligence and machine learning at PwC will focus on developing and implementing advanced AI and ML solutions to drive innovation and enhance business processes. Your work will involve designing and optimising algorithms, models, and systems to enable intelligent decision-making and automation.
Years of Experience: Candidates with 15+ years of hands-on experience.
Preferred Skills
Experience in GCP and one more cloud platform (AWS/Azure), specifically data migration experience.
Use an analytical and data-driven approach to develop a deep understanding of fast-changing business.
Familiarity with data technologies such as Snowflake, Databricks, Redshift, and Synapse.
Experience leading large-scale data modernization and governance initiatives emphasizing the strategy, design, and development of Platform-as-a-Service and Infrastructure-as-a-Service offerings that extend to private and public cloud deployment models.
Experience in designing, architecting, implementing, and managing data lakes/warehouses.
Experience with complex environments delivering application migration to cloud platforms.
Understanding of Agile, SCRUM, and Continuous Delivery methodologies.
Hands-on experience with Docker and Kubernetes or other container orchestration platforms.
Strong experience in data management with an understanding of analytics and reporting.
Understanding of emerging technologies and the latest data engineering providers.
Experience implementing enterprise data concepts such as Master Data Management and Enterprise Data Warehouse, with experience in MDM standards.
Educational Background
BE / B.Tech / MCA / M.Sc / M.E / M.Tech / MBA
Posted 1 week ago