0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Role description: Experience: 5-8 years. Location: Hyderabad. JD: PySpark developer to work on a range of data-driven projects using PySpark, SQL, Python, and Apache Airflow for job scheduling and orchestration on Google Cloud Platform (GCP). In this role you will be responsible for implementing data pipelines, processing large datasets, writing SQL queries, and ensuring smooth orchestration and automation of jobs using Airflow. Required Skills & Qualifications: Experience with PySpark for large-scale data processing. Proficiency in SQL for writing complex queries and optimizing database operations. Strong knowledge of Python and experience with Python libraries such as Pandas and NumPy. Hands-on experience with Apache Airflow for job scheduling, DAG creation, and workflow management. Experience working with Google Cloud Platform (GCP), including Google Cloud Storage (GCS), BigQuery, Dataflow, and Dataproc. Strong understanding of ETL processes and data pipeline development. Familiarity with version control systems like Git. Skills Mandatory Skills: GCP Storage, GCP BigQuery, GCP DataProc, GCP Cloud Composer, GCP DMS, Apache Airflow, Java, Python, Scala, GCP Datastream, Google Analytics Hub, GCP Workflows, GCP Dataform, GCP Datafusion, GCP Pub/Sub, ANSI-SQL, GCP Dataflow, GCP Cloud Pub/Sub, Big Data Hadoop Ecosystem
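For illustration only, a minimal PySpark sketch of the kind of pipeline this role describes: read raw CSV files from GCS, apply a simple cleansing step, and write the result to BigQuery. The bucket, dataset, and table names are placeholders, and it assumes a Dataproc cluster with the spark-bigquery connector available; it is not the employer's actual pipeline.

```python
# Minimal sketch (assumptions: Dataproc cluster with the spark-bigquery connector,
# placeholder bucket/dataset names).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_load").getOrCreate()

# Read raw CSV files landed in a GCS bucket (hypothetical path)
orders = (spark.read
          .option("header", True)
          .csv("gs://example-raw-bucket/orders/2024-01-01/*.csv"))

# Basic cleansing and enrichment
cleaned = (orders
           .dropDuplicates(["order_id"])
           .withColumn("order_ts", F.to_timestamp("order_ts"))
           .withColumn("amount", F.col("amount").cast("double"))
           .filter(F.col("amount") > 0))

# Write to BigQuery via the spark-bigquery connector
(cleaned.write
 .format("bigquery")
 .option("table", "example_project.analytics.orders_clean")
 .option("temporaryGcsBucket", "example-temp-bucket")
 .mode("append")
 .save())
```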
Posted 2 weeks ago
0 years
0 Lacs
Bagalur, Karnataka, India
Remote
When you join Verizon, you want more out of a career. A place to share your ideas freely even if they're daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work, and play by connecting them to what brings them joy. We do what we love, driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together, lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the #VTeamLife. What You'll Be Doing... As an Engineer II - Data Engineering in the Artificial Intelligence and Data Organization (AI&D), you will drive various activities including data engineering, data operations automation, data frameworks, and platforms to improve the efficiency, customer experience, and profitability of the company. At Verizon, we are on a journey to industrialize our data science and AI capabilities. Very simply, this means that AI will fuel all decisions and business processes across the company. With our leadership in bringing the 5G network nationwide, the opportunity for AI will only grow exponentially in going from enabling billions of predictions to possibly trillions of predictions that are automated and real-time. Building high-quality Data Engineering applications. Design and implement data pipelines using Apache Airflow via Composer, Dataflow, and Dataproc for batch and streaming workloads. Develop and optimize SQL queries and data models in BigQuery to support downstream analytics and reporting. Automate data ingestion, transformation, and export processes across various GCP components using Cloud Functions and Cloud Run. Monitor and troubleshoot data workflows using Cloud Monitoring and Cloud Logging to ensure system reliability and performance. Collaborate with data analysts, scientists, and business stakeholders to gather requirements and deliver data-driven solutions. Ensure adherence to data security, quality, and governance best practices throughout the pipeline lifecycle. Support the deployment of production-ready data solutions and assist in performance tuning and scalability efforts. Debugging production failures and identifying solutions. Working on ETL/ELT development. What we're looking for... We are looking for a highly motivated and skilled Engineer II Data Engineer with strong experience in Google Cloud Platform (GCP) to join our growing data engineering team. The ideal candidate will work on building and maintaining scalable data pipelines and cloud-native workflows using a wide range of GCP services such as Airflow (Composer), BigQuery, Dataflow, Dataproc, Cloud Functions, Cloud Run, Cloud Monitoring, and Cloud Logging. You'll Need To Have Bachelor's degree or one or more years of work experience. Two or more years of relevant work experience. Two or more years of relevant work experience in GCP. Hands-on experience with Google Cloud Platform (GCP) and services such as: Airflow (Composer) for workflow orchestration BigQuery for data warehousing and analytics Dataflow for scalable data processing Dataproc for Spark/Hadoop-based jobs Cloud Functions and Cloud Run for event-driven and container-based computing Cloud Monitoring and Logging for observability and alerting Proficiency in Python for scripting and pipeline development. Good understanding of SQL, data modelling, and data transformation best practices.
Ability to troubleshoot complex data issues and optimize performance. Ability to effectively communicate through presentation, interpersonal, verbal, and written skills. Strong communication, collaboration, problem-solving, analytical, and critical-thinking skills. Even better if you have one or more of the following: Master's degree in Computer Science, Information Systems, and/or related technical discipline. Hands-on experience with AI/ML Models and Agentic AI building, tuning, and deploying for Data Engineering applications. Big Data Analytics Certification in Google Cloud. Hands-on experience with Hadoop-based environments (HDFS, Hive, Spark, Dataproc). Knowledge of cost optimization techniques for cloud workloads. Knowledge of telecom architecture. If Verizon and this role sound like a fit for you, we encourage you to apply even if you don't meet every "even better" qualification listed above. Where you'll be working In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager. Scheduled Weekly Hours 40 Equal Employment Opportunity Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability, or any other legally protected characteristics. Locations Hyderabad, India Bangalore, India Chennai, India
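As a purely illustrative sketch of the Composer/Airflow-plus-BigQuery work this posting describes, the DAG below schedules a daily BigQuery transformation; the project, dataset, and query are hypothetical placeholders, not Verizon's actual workflow, and it assumes the Google provider package available in Composer environments.

```python
# Illustrative Airflow DAG (placeholder project/dataset names).
from datetime import datetime
from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

with DAG(
    dag_id="daily_sales_rollup",
    start_date=datetime(2024, 1, 1),
    schedule_interval="0 3 * * *",  # run daily at 03:00
    catchup=False,
) as dag:
    rollup = BigQueryInsertJobOperator(
        task_id="build_daily_rollup",
        configuration={
            "query": {
                "query": """
                    CREATE OR REPLACE TABLE `example_project.reporting.daily_sales` AS
                    SELECT DATE(order_ts) AS order_date, SUM(amount) AS revenue
                    FROM `example_project.analytics.orders_clean`
                    GROUP BY order_date
                """,
                "useLegacySql": False,
            }
        },
    )
```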
Posted 2 weeks ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Data Engineering - GCP BigQuery. Pune, India. Job description: Mandatory skills: ANSI-SQL, Apache Airflow, GCP BigQuery, GCP Cloud Composer, GCP Dataflow, GCP Dataform, GCP Datafusion, GCP DataProc, GCP Datastream, GCP DMS, GCP Pub/Sub, GCP Storage, GCP Workflows, Google Analytics Hub, Java, Python, Scala. 5 years of experience in GCP BigQuery and Oracle PL/SQL. Good knowledge of GCP tools such as GCS, Dataflow, Cloud Composer, and Cloud Pub/Sub. Proficient in BigQuery DBMS and BQL. Able to design end-to-end batch processes in GCP. Competent in Linux and Python scripting. Terraform scripting for creating the GCP infrastructure. Good communication skills. Proficient with CI/CD tools like GitHub, Jenkins, and Nexus.
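As a rough sketch of the kind of end-to-end batch step this role calls for, the snippet below loads CSV files from GCS into a BigQuery table with the Python client library; the bucket, dataset, and schema handling are illustrative assumptions, not the client's actual process.

```python
# Sketch only: batch-load GCS files into BigQuery (placeholder names).
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,  # let BigQuery infer the schema for this example
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
)

load_job = client.load_table_from_uri(
    "gs://example-landing-bucket/billing/2024-01-01/*.csv",
    "example-project.finance.billing_raw",
    job_config=job_config,
)
load_job.result()  # block until the load job completes
print(f"Loaded {load_job.output_rows} rows.")
```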
Posted 2 weeks ago
4.0 years
10 - 30 Lacs
Bengaluru, Karnataka, India
On-site
Industry & Sector Operating at the crossroads of financial services and advanced analytics, our client delivers cloud-native data platforms that unlock enterprise-scale insights and regulatory reporting. The organization champions Google Cloud innovation to modernize legacy warehouses and fuel AI-driven products. Role & Responsibilities Design, build and optimize petabyte-scale data marts on BigQuery for analytics and ML workloads. Develop ELT pipelines with Dataflow, Apache Beam and Cloud Composer, ensuring end-to-end observability. Implement partitioning, clustering and columnar compression strategies to reduce query cost and latency. Orchestrate batch and streaming workflows integrating Pub/Sub, Cloud Storage and external databases. Enforce data governance, lineage and security via IAM, DLP and encryption best practices. Partner with product, BI and ML teams to translate business questions into performant SQL and repeatable templates. Skills & Qualifications Must-Have 4+ years data engineering on GCP. Expert SQL and schema design in BigQuery. Proficient Python or Java with Beam SDK. Hands-on building Composer/Airflow DAGs. ETL performance tuning and cost optimization. Git, CI/CD and Terraform proficiency. Preferred Experience with Looker or Data Studio. Familiarity with Kafka or Pub/Sub streaming patterns. Data Quality tooling like Great Expectations. Spark on Dataproc or Vertex AI exposure. Professional Data Engineer certification. BFSI analytics domain knowledge. Benefits & Culture Highlights Modern Bengaluru campus with on-site labs and wellness facilities. Annual GCP certification sponsorship and dedicated learning budget. Performance-linked bonus and accelerated career paths. Skills: vertex ai,data studio,airflow,terraform,ci/cd,spark,beam sdk,bigquery,etl performance tuning,kafka,git,dataproc,data quality tooling,gcp,sql,schema design,java,cost optimization,etl,python,composer,looker,pub/sub
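To illustrate the partitioning and clustering strategies mentioned above, a hedged example: the DDL below, run through the BigQuery Python client, creates a date-partitioned, clustered data-mart table. The project, dataset, and column names are assumptions for the sketch; queries filtered on the partition column then scan only the relevant partitions, which is where most of the cost and latency reduction comes from.

```python
# Illustrative only: create a partitioned + clustered data-mart table.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

ddl = """
CREATE TABLE IF NOT EXISTS `example-project.marts.transactions`
(
  txn_id STRING,
  customer_id STRING,
  txn_date DATE,
  amount NUMERIC
)
PARTITION BY txn_date      -- prune scans (and cost) by date
CLUSTER BY customer_id     -- co-locate rows that are queried together
OPTIONS (partition_expiration_days = 730)
"""
client.query(ddl).result()
```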
Posted 2 weeks ago
4.0 years
10 - 30 Lacs
Hyderabad, Telangana, India
On-site
Industry & Sector Operating at the crossroads of financial services and advanced analytics, our client delivers cloud-native data platforms that unlock enterprise-scale insights and regulatory reporting. The organization champions Google Cloud innovation to modernize legacy warehouses and fuel AI-driven products. Role & Responsibilities Design, build and optimize petabyte-scale data marts on BigQuery for analytics and ML workloads. Develop ELT pipelines with Dataflow, Apache Beam and Cloud Composer, ensuring end-to-end observability. Implement partitioning, clustering and columnar compression strategies to reduce query cost and latency. Orchestrate batch and streaming workflows integrating Pub/Sub, Cloud Storage and external databases. Enforce data governance, lineage and security via IAM, DLP and encryption best practices. Partner with product, BI and ML teams to translate business questions into performant SQL and repeatable templates. Skills & Qualifications Must-Have 4+ years data engineering on GCP. Expert SQL and schema design in BigQuery. Proficient Python or Java with Beam SDK. Hands-on building Composer/Airflow DAGs. ETL performance tuning and cost optimization. Git, CI/CD and Terraform proficiency. Preferred Experience with Looker or Data Studio. Familiarity with Kafka or Pub/Sub streaming patterns. Data Quality tooling like Great Expectations. Spark on Dataproc or Vertex AI exposure. Professional Data Engineer certification. BFSI analytics domain knowledge. Benefits & Culture Highlights Modern Bengaluru campus with on-site labs and wellness facilities. Annual GCP certification sponsorship and dedicated learning budget. Performance-linked bonus and accelerated career paths. Skills: vertex ai,data studio,airflow,terraform,ci/cd,spark,beam sdk,bigquery,etl performance tuning,kafka,git,dataproc,data quality tooling,gcp,sql,schema design,java,cost optimization,etl,python,composer,looker,pub/sub
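As a sketch of the streaming side of this role (Pub/Sub into BigQuery via Dataflow), the Apache Beam pipeline below reads JSON events from a subscription and streams them into a table. The topic, subscription, and schema are placeholder assumptions, not the client's real setup; the pipeline would be submitted with the DataflowRunner in practice.

```python
# Hypothetical streaming pipeline: Pub/Sub -> parse JSON -> BigQuery.
import json
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadEvents" >> beam.io.ReadFromPubSub(
            subscription="projects/example-project/subscriptions/orders-sub")
        | "ParseJson" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
        | "WriteToBQ" >> beam.io.WriteToBigQuery(
            "example-project:analytics.order_events",
            schema="order_id:STRING,amount:FLOAT,event_ts:TIMESTAMP",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
        )
    )
```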
Posted 2 weeks ago
4.0 years
10 - 30 Lacs
Pune, Maharashtra, India
On-site
Industry & Sector Operating at the crossroads of financial services and advanced analytics, our client delivers cloud-native data platforms that unlock enterprise-scale insights and regulatory reporting. The organization champions Google Cloud innovation to modernize legacy warehouses and fuel AI-driven products. Role & Responsibilities Design, build and optimize petabyte-scale data marts on BigQuery for analytics and ML workloads. Develop ELT pipelines with Dataflow, Apache Beam and Cloud Composer, ensuring end-to-end observability. Implement partitioning, clustering and columnar compression strategies to reduce query cost and latency. Orchestrate batch and streaming workflows integrating Pub/Sub, Cloud Storage and external databases. Enforce data governance, lineage and security via IAM, DLP and encryption best practices. Partner with product, BI and ML teams to translate business questions into performant SQL and repeatable templates. Skills & Qualifications Must-Have 4+ years data engineering on GCP. Expert SQL and schema design in BigQuery. Proficient Python or Java with Beam SDK. Hands-on building Composer/Airflow DAGs. ETL performance tuning and cost optimization. Git, CI/CD and Terraform proficiency. Preferred Experience with Looker or Data Studio. Familiarity with Kafka or Pub/Sub streaming patterns. Data Quality tooling like Great Expectations. Spark on Dataproc or Vertex AI exposure. Professional Data Engineer certification. BFSI analytics domain knowledge. Benefits & Culture Highlights Modern Bengaluru campus with on-site labs and wellness facilities. Annual GCP certification sponsorship and dedicated learning budget. Performance-linked bonus and accelerated career paths. Skills: vertex ai,data studio,airflow,terraform,ci/cd,spark,beam sdk,bigquery,etl performance tuning,kafka,git,dataproc,data quality tooling,gcp,sql,schema design,java,cost optimization,etl,python,composer,looker,pub/sub
Posted 2 weeks ago
4.0 years
10 - 30 Lacs
Delhi, India
On-site
Industry & Sector Operating at the crossroads of financial services and advanced analytics, our client delivers cloud-native data platforms that unlock enterprise-scale insights and regulatory reporting. The organization champions Google Cloud innovation to modernize legacy warehouses and fuel AI-driven products. Role & Responsibilities Design, build and optimize petabyte-scale data marts on BigQuery for analytics and ML workloads. Develop ELT pipelines with Dataflow, Apache Beam and Cloud Composer, ensuring end-to-end observability. Implement partitioning, clustering and columnar compression strategies to reduce query cost and latency. Orchestrate batch and streaming workflows integrating Pub/Sub, Cloud Storage and external databases. Enforce data governance, lineage and security via IAM, DLP and encryption best practices. Partner with product, BI and ML teams to translate business questions into performant SQL and repeatable templates. Skills & Qualifications Must-Have 4+ years data engineering on GCP. Expert SQL and schema design in BigQuery. Proficient Python or Java with Beam SDK. Hands-on building Composer/Airflow DAGs. ETL performance tuning and cost optimization. Git, CI/CD and Terraform proficiency. Preferred Experience with Looker or Data Studio. Familiarity with Kafka or Pub/Sub streaming patterns. Data Quality tooling like Great Expectations. Spark on Dataproc or Vertex AI exposure. Professional Data Engineer certification. BFSI analytics domain knowledge. Benefits & Culture Highlights Modern Bengaluru campus with on-site labs and wellness facilities. Annual GCP certification sponsorship and dedicated learning budget. Performance-linked bonus and accelerated career paths. Skills: vertex ai,data studio,airflow,terraform,ci/cd,spark,beam sdk,bigquery,etl performance tuning,kafka,git,dataproc,data quality tooling,gcp,sql,schema design,java,cost optimization,etl,python,composer,looker,pub/sub
Posted 2 weeks ago
5.0 years
8 - 16 Lacs
Mumbai Metropolitan Region
On-site
Key Responsibilities Design, develop, and maintain scalable web applications using .NET Core, .NET Framework, C#, and related technologies. Participate in all phases of the SDLC, including requirements gathering, architecture design, coding, testing, deployment, and support. Build and integrate RESTful APIs, and work with SQL Server, Entity Framework, and modern front-end technologies such as Angular, React, and JavaScript. Conduct thorough code reviews, write unit tests, and ensure adherence to coding standards and best practices. Lead or support .NET Framework to .NET Core migration initiatives, ensuring minimal disruption and optimal performance. Implement and manage CI/CD pipelines using tools like Azure DevOps, Jenkins, or GitLab CI/CD. Containerize applications using Docker and deploy/manage them on orchestration platforms like Kubernetes or GKE. Lead and execute database migration projects, particularly transitioning from SQL Server to PostgreSQL. Manage and optimize Cloud SQL for PostgreSQL, including configuration, tuning, and ongoing maintenance. Leverage Google Cloud Platform (GCP) services such as GKE, Cloud SQL, Cloud Run, and Dataflow to build and maintain cloud-native solutions. Handle schema conversion and data transformation tasks as part of migration and modernization efforts. Required Skills & Experience 5+ years of hands-on experience with C#, .NET Core, and .NET Framework. Proven experience in application modernization and cloud-native development. Strong knowledge of containerization (Docker) and orchestration tools like Kubernetes/GKE. Expertise in implementing and managing CI/CD pipelines. Solid understanding of relational databases and experience in SQL Server to PostgreSQL migrations. Familiarity with cloud infrastructure, especially GCP services relevant to application hosting and data processing. Excellent problem-solving and communication skills. Skills: C#, .NET, .NET Compact Framework, SQL, Microsoft Windows Azure, CI/CD, Google Cloud Platform (GCP), React.js and Data-flow analysis
Posted 2 weeks ago
0 years
4 - 6 Lacs
Gurgaon
On-site
Job Description: We are looking for a highly skilled Engineer with solid experience building Big Data, GCP Cloud-based real-time data pipelines and REST APIs with Java frameworks. The Engineer will play a crucial role in designing, implementing, and optimizing data solutions to support our organization's data-driven initiatives. This role requires expertise in data engineering, strong problem-solving abilities, and a collaborative mindset to work effectively with various stakeholders. This role will be focused on the delivery of innovative solutions to satisfy the needs of our business. As an agile team we work closely with our business partners to understand what they require, and we strive to continuously improve as a team. Technical Skills 1. Core Data Engineering Skills Proficiency in using GCP's big data tools like BigQuery: for data warehousing and SQL analytics. Dataproc: for running Spark and Hadoop clusters. GCP Dataflow: for stream and batch data processing (high-level idea). GCP Pub/Sub: for real-time messaging and event ingestion (high-level idea). Expertise in building automated, scalable, and reliable pipelines using custom Python/Scala solutions or Cloud Data Functions. 2. Programming and Scripting Strong coding skills in SQL and Java. Familiarity with APIs and SDKs for GCP services to build custom data solutions. 3. Cloud Infrastructure Understanding of GCP services such as Cloud Storage, Compute Engine, and Cloud Functions. Familiarity with Kubernetes (GKE) and containerization for deploying data pipelines (optional but good to have). 4. DevOps and CI/CD Experience setting up CI/CD pipelines using Cloud Build, GitHub Actions, or other tools. Monitoring and logging tools like Cloud Monitoring and Cloud Logging for production workflows. 5. Backend Development (Spring Boot & Java) Design and develop RESTful APIs and microservices using Spring Boot. Implement business logic, security, authentication (JWT/OAuth), and database operations. Work with relational databases (MySQL, PostgreSQL, MongoDB, Cloud SQL). Optimize backend performance, scalability, and maintainability. Implement unit testing and integration testing. Big Data ETL - Datawarehousing GCP Java RESTAPI CI/CD Kubernetes About Virtusa Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth, one that seeks to provide you with exciting projects, opportunities and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
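As a small, hedged illustration of the "real-time messaging and event ingestion" piece of this role, the snippet below publishes JSON events to a Pub/Sub topic with the Python client; the project, topic, and event shape are made up for the example.

```python
# Sketch: publish JSON events to a Pub/Sub topic (placeholder project/topic).
import json
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("example-project", "order-events")

event = {"order_id": "A-1001", "amount": 249.99, "status": "CREATED"}

# Pub/Sub payloads are bytes; attributes can carry routing metadata.
future = publisher.publish(
    topic_path,
    data=json.dumps(event).encode("utf-8"),
    source="checkout-service",
)
print(f"Published message id: {future.result()}")
```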
Posted 2 weeks ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Summary We are seeking a highly skilled and motivated GCP Data Engineering Manager to join our dynamic team. As a Data Engineering Manager specializing in Google Cloud Platform (GCP), you will play a crucial role in designing, implementing, and maintaining scalable data pipelines and systems. You will leverage your expertise in Google Big Query, SQL, Python, and analytical skills to drive data-driven decision-making processes and support various business functions. About The Role Key Responsibilities: Data Pipeline Development: Design, develop, and maintain robust data pipelines using GCP services like Dataflow and Dataproc, ensuring high performance and scalability. Google Big Query Expertise: Utilize your hands-on experience with Google Big Query to manage and optimize data storage, retrieval, and processing. SQL Proficiency: Write and optimize complex SQL queries to transform and analyze large datasets, ensuring data accuracy and integrity. Python Programming: Develop and maintain Python scripts for data processing, automation, and integration with other systems and tools. Data Integration: Collaborate with data analysts and other stakeholders to integrate data from various sources, ensuring seamless data flow and consistency. Data Quality and Governance: Implement data quality checks, validation processes, and governance frameworks to maintain high data standards. Performance Tuning: Monitor and optimize the performance of data pipelines, queries, and storage solutions to ensure efficient data processing. Documentation: Create comprehensive documentation for data pipelines, processes, and best practices to facilitate knowledge sharing and team collaboration. Minimum Qualifications Proven experience (minimum 6 – 8 yrs) as a Data Engineer, with significant hands-on experience in Google Cloud Platform (GCP) and Google Big Query. Proficiency in SQL for data transformation, analysis and performance optimization. Strong programming skills in Python, with experience in developing data processing scripts and automation. Proven analytical skills with the ability to interpret complex data and provide actionable insights. Excellent problem-solving abilities and attention to detail. Strong communication and collaboration skills, with the ability to work effectively in a team environment. Desired Skills Experience with Google Analytics data and understanding of digital marketing data. Familiarity with other GCP services such as Cloud Storage, Dataflow, Pub/Sub, and Dataproc. Knowledge of data visualization tools such as Looker, Tableau, or Data Studio. Experience with machine learning frameworks and libraries. Why Novartis: Helping people with disease and their families takes more than innovative science. It takes a community of smart, passionate people like you. Collaborating, supporting and inspiring each other. Combining to achieve breakthroughs that change patients' lives. Ready to create a brighter future together? https://www.novartis.com/about/strategy/people-and-culture Join our Novartis Network: Not the right Novartis role for you? Sign up to our talent community to stay connected and learn about suitable career opportunities as soon as they come up: https://talentnetwork.novartis.com/network Benefits and Rewards: Read our handbook to learn about all the ways we'll help you thrive personally and professionally: https://www.novartis.com/careers/benefits-rewards
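As a hedged sketch of the "data quality checks" responsibility above, the snippet runs a simple null/duplicate audit on a BigQuery table and fails loudly when thresholds are breached. The table and column names are placeholders invented for the example.

```python
# Illustrative data-quality gate (placeholder table/column names).
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

audit_sql = """
SELECT
  COUNTIF(customer_id IS NULL) AS null_customer_ids,
  COUNT(*) - COUNT(DISTINCT event_id) AS duplicate_events
FROM `example-project.analytics.web_events`
WHERE event_date = DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY)
"""

row = list(client.query(audit_sql).result())[0]

if row.null_customer_ids > 0 or row.duplicate_events > 0:
    raise ValueError(
        f"Data quality check failed: {row.null_customer_ids} null customer_ids, "
        f"{row.duplicate_events} duplicate events"
    )
print("Data quality check passed.")
```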
Posted 2 weeks ago
7.0 - 9.0 years
0 Lacs
Pune, Maharashtra, India
On-site
The purpose of this role is to understand, model and facilitate change in a significant area of the business and technology portfolio either by line of business, geography or specific architecture domain whilst building the overall Architecture capability and knowledge base of the company. Job Description: Role Overview : We are seeking a highly skilled and motivated Cloud Data Engineering Manager to join our team. The role is critical to the development of a cutting-edge reporting platform designed to measure and optimize online marketing campaigns. The GCP Data Engineering Manager will design, implement, and maintain scalable, reliable, and efficient data solutions on Google Cloud Platform (GCP). The role focuses on enabling data-driven decision-making by developing ETL/ELT pipelines, managing large-scale datasets, and optimizing data workflows. The ideal candidate is a proactive problem-solver with strong technical expertise in GCP, a passion for data engineering, and a commitment to delivering high-quality solutions aligned with business needs. Key Responsibilities : Data Engineering & Development : Design, build, and maintain scalable ETL/ELT pipelines for ingesting, processing, and transforming structured and unstructured data. Implement enterprise-level data solutions using GCP services such as BigQuery, Dataform, Cloud Storage, Dataflow, Cloud Functions, Cloud Pub/Sub, and Cloud Composer. Develop and optimize data architectures that support real-time and batch data processing. Build, optimize, and maintain CI/CD pipelines using tools like Jenkins, GitLab, or Google Cloud Build. Automate testing, integration, and deployment processes to ensure fast and reliable software delivery. Cloud Infrastructure Management : Manage and deploy GCP infrastructure components to enable seamless data workflows. Ensure data solutions are robust, scalable, and cost-effective, leveraging GCP best practices. Infrastructure Automation and Management: Design, deploy, and maintain scalable and secure infrastructure on GCP. Implement Infrastructure as Code (IaC) using tools like Terraform. Manage Kubernetes clusters (GKE) for containerized workloads. Collaboration and Stakeholder Engagement : Work closely with cross-functional teams, including data analysts, data scientists, DevOps, and business stakeholders, to deliver data projects aligned with business goals. Translate business requirements into scalable, technical solutions while collaborating with team members to ensure successful implementation. Quality Assurance & Optimization : Implement best practices for data governance, security, and privacy, ensuring compliance with organizational policies and regulations. Conduct thorough quality assurance, including testing and validation, to ensure the accuracy and reliability of data pipelines. Monitor and optimize pipeline performance to meet SLAs and minimize operational costs. Qualifications and Certifications : Education: Bachelor’s or master’s degree in computer science, Information Technology, Engineering, or a related field. Experience: Minimum of 7 to 9 years of experience in data engineering, with at least 4 years working on GCP cloud platforms. Proven experience designing and implementing data workflows using GCP services like BigQuery, Dataform Cloud Dataflow, Cloud Pub/Sub, and Cloud Composer. Certifications: Google Cloud Professional Data Engineer certification preferred. Key Skills : Mandatory Skills: Advanced proficiency in Python for data pipelines and automation. 
Strong SQL skills for querying, transforming, and analyzing large datasets. Strong hands-on experience with GCP services, including Cloud Storage, Dataflow, Cloud Pub/Sub, Cloud SQL, BigQuery, Dataform, Compute Engine and Kubernetes Engine (GKE). Hands-on experience with CI/CD tools such as Jenkins, GitHub or Bitbucket. Proficiency in Docker, Kubernetes, Terraform or Ansible for containerization, orchestration, and infrastructure as code (IaC) Familiarity with workflow orchestration tools like Apache Airflow or Cloud Composer Strong understanding of Agile/Scrum methodologies Nice-to-Have Skills: Experience with other cloud platforms like AWS or Azure. Knowledge of data visualization tools (e.g., Power BI, Looker, Tableau). Understanding of machine learning workflows and their integration with data pipelines. Soft Skills : Strong problem-solving and critical-thinking abilities. Excellent communication skills to collaborate with technical and non-technical stakeholders. Proactive attitude towards innovation and learning. Ability to work independently and as part of a collaborative team. Location: Bengaluru Brand: Merkle Time Type: Full time Contract Type: Permanent
Posted 2 weeks ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Principal Data Engineer - Hyderabad (Onsite). Job Title: Principal Data Engineer. Work Location: Hyderabad (Onsite). Experience: 10+ Years. Job Description: 10+ years of experience in data engineering, with at least 3 years in a technical leadership role. Strong expertise in SQL, Python or Scala, and modern ETL/ELT frameworks. Deep knowledge of data warehousing solutions (e.g., Snowflake, Redshift, BigQuery) and distributed systems (e.g., Hadoop, Spark). Proven experience with cloud platforms (AWS, Azure, or GCP) and associated data services (e.g., S3, Glue, Dataflow, Databricks). Hands-on experience with streaming platforms such as Kafka, Flink, or Kinesis. Solid understanding of data modeling, data lakes, data governance, and security. Excellent communication, leadership, and stakeholder management skills. Preferred Qualifications: Exposure to tools like Airflow, dbt, Terraform, or Kubernetes. Familiarity with data cataloging and lineage tools (e.g., Alation, Collibra). Domain experience in [e.g., Banking, Healthcare, Finance, E-commerce] is a plus. Experience in designing data platforms for AI/ML workloads.
Posted 2 weeks ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Syensqo is all about chemistry. We’re not just referring to chemical reactions here, but also to the magic that occurs when the brightest minds get to work together. This is where our true strength lies. In you. In your future colleagues and in all your differences. And of course, in your ideas to improve lives while preserving our planet’s beauty for the generations to come. Join us at Syensqo, where our IT team is gearing up to enhance its capabilities. We play a crucial role in the group's transformation—accelerating growth, reshaping progress, and creating sustainable shared value. IT team is making operational adjustments to supercharge value across the entire organization. Here at Syensqo, we're one strong team! Our commitment to accountability drives us as we work hard to deliver value for our customers and stakeholders. In our dynamic and collaborative work environment, we add a touch of enjoyment while staying true to our motto: reinvent progress. Come be part of our transformation journey and contribute to the change as a future team member. We are looking for: As a Data/ML Engineer, you will play a central role in defining, implementing, and maintaining cloud governance frameworks across the organization. You will collaborate with cross-functional teams to ensure secure, compliant, and efficient use of cloud resources for data and machine learning workloads. Your expertise in full-stack automation, DevOps practices, and Infrastructure as Code (IaC) will drive the standardization and scalability of our cloud-based data and ML platforms. Key requirements are: Ensuring cloud data governance Define and maintain central cloud governance policies, standards, and best practices for data, AI and ML workloads Ensure compliance with security, privacy, and regulatory requirements across all cloud environments Monitor and optimize cloud resource usage, cost, and performance for data, AI and ML workloads Design and Implement Data Pipelines Co-develop, co-construct, test, and maintain highly scalable and reliable data architectures, including ETL processes, data warehouses, and data lakes with the Data Platform Team Build and Deploy ML Systems Co-design, co-develop, and deploy machine learning models and associated services into production environments, ensuring performance, reliability, and scalability Infrastructure Management Manage and optimize cloud-based infrastructure (e.g., AWS, Azure, GCP) for data storage, processing, and ML model serving Collaboration Work collaboratively with data scientists, ML engineers, security and business stakeholders to align cloud governance with organizational needs Provide guidance and support to teams on cloud architecture, data management, and ML operations. Work collaboratively with other teams to transition prototypes and experimental models into robust, production-ready solutions Data Governance and Quality: Implement best practices for data governance, data quality, and data security to ensure the integrity and reliability of our data assets. Performance and Optimisation: Identify and implement performance improvements for data pipelines and ML models, optimizing for speed, cost-efficiency, and resource utilization. 
Monitoring and Alerting Establish and maintain monitoring, logging, and alerting systems for data pipelines and ML models to proactively identify and resolve issues Tooling and Automation Design and implement full-stack automation for data pipelines, ML workflows, and cloud infrastructure Build and manage cloud infrastructure using IaC tools (e.g., Terraform, CloudFormation) Develop and maintain CI/CD pipelines for data and ML projects Promote DevOps culture and best practices within the organization Develop and maintain tools and automation scripts to streamline data operations, model training, and deployment processes Stay Current on new ML / AI trends: Keep abreast of the latest advancements in data engineering, machine learning, and cloud technologies, evaluating and recommending new tools and approach Document processes, architectures, and standards for knowledge sharing and onboarding Education and experience Education: Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related quantitative field. (Relevant work experience may be considered in lieu of a degree). Programming: Strong proficiency in Python (essential) and experience with other relevant languages like Java, Scala, or Go. Data Warehousing/Databases: Solid understanding and experience with relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, Cassandra). Experience with data warehousing solutions (e.g., Snowflake, Redshift, BigQuery) is highly desirable. Big Data Technologies: Hands-on experience with big data processing frameworks (e.g., Spark, Flink, Hadoop). Cloud Platforms: Experience with at least one major cloud provider (AWS, Azure, or GCP) and their relevant data and ML services (e.g., S3, EC2, Lambda, EMR, SageMaker, Dataflow, BigQuery, Azure Data Factory, Azure ML). ML Concepts: Fundamental understanding of machine learning concepts, algorithms, and workflows. MLOps Principles: Familiarity with MLOps principles and practices for deploying, monitoring, and managing ML models in production. Version Control: Proficiency with Git and collaborative development workflows. Problem-Solving: Excellent analytical and problem-solving skills with a strong attention to detail. Communication: Strong communication skills, able to articulate complex technical concepts to both technical and non-technical stakeholders. Bonus Points (Highly Desirable Skills & Experience): Experience with containerisation technologies (Docker, Kubernetes). Familiarity with CI/CD pipelines for data and ML deployments. Experience with stream processing technologies (e.g., Kafka, Kinesis). Knowledge of data visualization tools (e.g., Tableau, Power BI, Looker). Contributions to open-source projects or a strong portfolio of personal projects. Experience with [specific domain knowledge relevant to your company, e.g., financial data, healthcare data, e-commerce data]. Language skills Fluent English What’s in it for the candidate Be part of a highly motivated team of explorers Help make a difference and thrive in Cloud and AI technology Chart your own course and build a fantastic career Have fun and enjoy life with an industry leading remuneration pack About Us Syensqo is a science company developing groundbreaking solutions that enhance the way we live, work, travel and play. 
Inspired by the scientific councils which Ernest Solvay initiated in 1911, we bring great minds together to push the limits of science and innovation for the benefit of our customers, with a diverse, global team of more than 13,000 associates. Our solutions contribute to safer, cleaner, and more sustainable products found in homes, food and consumer goods, planes, cars, batteries, smart devices and health care applications. Our innovation power enables us to deliver on the ambition of a circular economy and explore breakthrough technologies that advance humanity. At Syensqo, we seek to promote unity and not uniformity. We value the diversity that individuals bring and we invite you to consider a future with us, regardless of background, age, gender, national origin, ethnicity, religion, sexual orientation, ability or identity. We encourage individuals who may require any assistance or accommodations to let us know to ensure a seamless application experience. We are here to support you throughout the application journey and want to ensure all candidates are treated equally. If you are unsure whether you meet all the criteria or qualifications listed in the job description, we still encourage you to apply.
Posted 2 weeks ago
0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Job Description We are looking for a highly skilled Engineer with solid experience building Big Data, GCP Cloud-based real-time data pipelines and REST APIs with Java frameworks. The Engineer will play a crucial role in designing, implementing, and optimizing data solutions to support our organization's data-driven initiatives. This role requires expertise in data engineering, strong problem-solving abilities, and a collaborative mindset to work effectively with various stakeholders. This role will be focused on the delivery of innovative solutions to satisfy the needs of our business. As an agile team we work closely with our business partners to understand what they require, and we strive to continuously improve as a team. Technical Skills 1. Core Data Engineering Skills Proficiency in using GCP's big data tools like BigQuery: for data warehousing and SQL analytics. Dataproc: for running Spark and Hadoop clusters. GCP Dataflow: for stream and batch data processing (high-level idea). GCP Pub/Sub: for real-time messaging and event ingestion (high-level idea). Expertise in building automated, scalable, and reliable pipelines using custom Python/Scala solutions or Cloud Data Functions. 2. Programming and Scripting Strong coding skills in SQL and Java. Familiarity with APIs and SDKs for GCP services to build custom data solutions. 3. Cloud Infrastructure Understanding of GCP services such as Cloud Storage, Compute Engine, and Cloud Functions. Familiarity with Kubernetes (GKE) and containerization for deploying data pipelines (optional but good to have). 4. DevOps and CI/CD Experience setting up CI/CD pipelines using Cloud Build, GitHub Actions, or other tools. Monitoring and logging tools like Cloud Monitoring and Cloud Logging for production workflows. 5. Backend Development (Spring Boot & Java) Design and develop RESTful APIs and microservices using Spring Boot. Implement business logic, security, authentication (JWT/OAuth), and database operations. Work with relational databases (MySQL, PostgreSQL, MongoDB, Cloud SQL). Optimize backend performance, scalability, and maintainability. Implement unit testing and integration testing. Big Data ETL - Datawarehousing GCP Java RESTAPI CI/CD Kubernetes
Posted 2 weeks ago
8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Position Overview Job Title: Business Functional Analyst Corporate Title: Associate Location: Pune, India Role Description Business Functional Analysis is responsible for business solution design in complex project environments (e.g. transformational programmes). Work includes: Identifying the full range of business requirements and translating requirements into specific functional specifications for solution development and implementation Analysing business requirements and the associated impacts of the changes Designing and assisting businesses in developing optimal target state business processes Creating and executing against roadmaps that focus on solution development and implementation Answering questions of methodological approach with varying levels of complexity Aligning with other key stakeholder groups (such as Project Management & Software Engineering) to support the link between the business divisions and the solution providers for all aspects of identifying, implementing and maintaining solutions What We'll Offer You As part of our flexible scheme, here are just some of the benefits that you'll enjoy Best in class leave policy. Gender neutral parental leaves 100% reimbursement under childcare assistance benefit (gender neutral) Sponsorship for Industry relevant certifications and education Employee Assistance Program for you and your family members Comprehensive Hospitalization Insurance for you and your dependents Accident and Term life Insurance Complimentary Health screening for 35 yrs. and above Your Key Responsibilities Write clear and well-structured business requirements/documents. Convert roadmap features into smaller user stories. Analyse process issues and bottlenecks and make improvements. Communicate and validate requirements with relevant stakeholders. Perform data discovery, analysis, and modelling. Assist with project management for selected projects. Understand and translate business needs into data models supporting long-term solutions. Understand existing SQL/Python code and convert it into business requirements. Write advanced SQL and Python scripts. Your Skills And Experience A minimum of 8+ years of experience in business analysis or a related field. Exceptional analytical and conceptual thinking skills. Proficient in SQL. Proficient in Python for data engineering. Experience in automating ETL testing using Python and SQL. Exposure to GCP services covering cloud storage, data lake, database, and data warehouse, such as BigQuery, GCS, Dataflow, Cloud Composer, gsutil, shell scripting, etc. Previous experience in Procurement and Real Estate would be a plus. Competency in JIRA, Confluence, draw.io and Microsoft applications including Word, Excel, PowerPoint and Outlook. Previous Banking Domain experience is a plus. Good problem-solving skills How We'll Support You Training and development to help you excel in your career Coaching and support from experts in your team A culture of continuous learning to aid progression. A range of flexible benefits that you can tailor to suit your needs About Us And Our Teams Please visit our company website for further information: https://www.db.com/company/company.htm We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group.
We welcome applications from all people and promote a positive, fair and inclusive work environment.
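As an illustrative, non-employer-specific sketch of automating ETL testing with Python and SQL (one of the skills listed above), the pytest-style checks below reconcile row counts between a staging table and its reporting target in BigQuery; the project and table names are assumptions.

```python
# Hypothetical ETL reconciliation tests (placeholder project/table names).
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

def scalar(sql: str) -> int:
    """Run a query and return the single scalar value it produces."""
    return list(client.query(sql).result())[0][0]

def test_row_counts_match():
    source_count = scalar(
        "SELECT COUNT(*) FROM `example-project.staging.purchase_orders`")
    target_count = scalar(
        "SELECT COUNT(*) FROM `example-project.reporting.purchase_orders`")
    assert source_count == target_count, (
        f"Row count mismatch: staging={source_count}, reporting={target_count}")

def test_no_future_dated_orders():
    bad_rows = scalar("""
        SELECT COUNT(*) FROM `example-project.reporting.purchase_orders`
        WHERE order_date > CURRENT_DATE()
    """)
    assert bad_rows == 0, f"{bad_rows} orders have a future order_date"
```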
Posted 2 weeks ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems. Years of Experience: Candidates with 4+ years of hands-on experience Position: Senior Associate Required Skills: Successful candidates will have demonstrated the following skills and characteristics: Must Have Proven expertise in supply chain analytics across domains such as demand forecasting, inventory optimization, logistics, segmentation, and network design Well-versed in and hands-on experience with optimization methods such as linear programming, mixed integer programming, and scheduling optimization. An understanding of third-party optimization solvers like Gurobi will be an added advantage Proficiency in forecasting techniques (e.g., Holt-Winters, ARIMA, ARIMAX, SARIMA, SARIMAX, FBProphet, NBeats) and machine learning techniques (supervised and unsupervised) Strong command of statistical modeling, testing, and inference Proficient in using GCP tools: BigQuery, Vertex AI, Dataflow, Looker Building data pipelines and models for forecasting, optimization, and scenario planning Strong SQL and Python programming skills; experience deploying models in a GCP environment Knowledge of orchestration tools like Cloud Composer (Airflow) Nice To Have Familiarity with MLOps, containerization (Docker, Kubernetes), and orchestration tools (e.g., Cloud Composer) Strong communication and stakeholder engagement skills at the executive level Roles And Responsibilities Assist analytics projects within the supply chain domain, driving design, development, and delivery of data science solutions Develop and execute on project & analysis plans under the guidance of the Project Manager Interact with and advise consultants/clients in the US as a subject matter expert to formalize the data sources to be used, datasets to be acquired, and the data and use case clarifications needed to get a strong hold on the data and the business problem to be solved Drive and conduct analysis using advanced analytics tools and coach junior team members Implement necessary quality control measures to ensure deliverable integrity, such as data quality, model robustness, and explainability for deployments Validate analysis outcomes and recommendations with all stakeholders including the client team Build storylines and make presentations to the client team and/or PwC project leadership team Contribute to knowledge and firm building activities Professional And Educational Background BE / B.Tech / MCA / M.Sc / M.E / M.Tech / Master's Degree / MBA from a reputed institute
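For illustration of the forecasting techniques listed above (Holt-Winters and related methods), a minimal, hedged sketch: fit an additive Holt-Winters model to a monthly demand series with statsmodels and forecast six months ahead. The series here is synthetic, not client data.

```python
# Illustrative demand-forecasting sketch using Holt-Winters (synthetic data).
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Synthetic 4 years of monthly demand with trend and yearly seasonality
rng = np.random.default_rng(42)
months = pd.date_range("2020-01-01", periods=48, freq="MS")
demand = (
    500
    + 5 * np.arange(48)                              # upward trend
    + 80 * np.sin(2 * np.pi * np.arange(48) / 12)    # yearly seasonality
    + rng.normal(0, 20, 48)                          # noise
)
series = pd.Series(demand, index=months)

# Additive trend + additive yearly seasonality (Holt-Winters)
model = ExponentialSmoothing(
    series, trend="add", seasonal="add", seasonal_periods=12
).fit()

forecast = model.forecast(6)  # next six months
print(forecast.round(1))
```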
Posted 2 weeks ago
6.0 - 10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
At PwC, our people in business application consulting specialise in consulting services for a variety of business applications, helping clients optimise operational efficiency. These individuals analyse client needs, implement software solutions, and provide training and support for seamless integration and utilisation of business applications, enabling clients to achieve their strategic objectives. Those in Oracle technology at PwC will focus on utilising and managing Oracle suite of software and technologies for various purposes within an organisation. You will be responsible for tasks such as installation, configuration, administration, development, and support of Oracle products and solutions. Focused on relationships, you are building meaningful client connections, and learning how to manage and inspire others. Navigating increasingly complex situations, you are growing your personal brand, deepening technical expertise and awareness of your strengths. You are expected to anticipate the needs of your teams and clients, and to deliver quality. Embracing increased ambiguity, you are comfortable when the path forward isn’t clear, you ask questions, and you use these moments as opportunities to grow. Skills Examples of the skills, knowledge, and experiences you need to lead and deliver value at this level include but are not limited to: Respond effectively to the diverse perspectives, needs, and feelings of others. Use a broad range of tools, methodologies and techniques to generate new ideas and solve problems. Use critical thinking to break down complex concepts. Understand the broader objectives of your project or role and how your work fits into the overall strategy. Develop a deeper understanding of the business context and how it is changing. Use reflection to develop self awareness, enhance strengths and address development areas. Interpret data to inform insights and recommendations. Uphold and reinforce professional and technical standards (e.g. refer to specific PwC tax and audit guidance), the Firm's code of conduct, and independence requirements. 
Role / Job Title Senior Associate Tower Oracle Experience 6 - 10 years Key Skills Oracle Fusion PPM - Project Billing / Project Costing, Fixed Assets, Integration with Finance Modules, BPM Workflow and OTBI Reports Educational Qualification BE / B Tech / ME / M Tech / MBA / B.SC / B.Com / BBA Work Location India Job Description 5 to 9 years of experience with Oracle Fusion Applications, specifically PPM Costing, Billing and Project Resource Management Cloud / Project Management and Oracle Cloud Fixed Assets Should have completed a minimum of two end-to-end implementations in Fusion PPM modules, upgradation, lift and shift and support projects experience and integration of Fixed Assets Experience in Oracle Cloud / Fusion PPM Functional modules and Fixed Assets along with integration with all finance and SCM modules Should be able to understand and articulate business requirements and propose solutions after performing appropriate due diligence Good knowledge of BPM Approval Workflow Solid understanding of Enterprise Structures, Hierarchies, FlexFields, Extensions setup in Fusion Project Foundations and Subledger Accounting Experience in working with Oracle Support for various issue resolutions Exposure to performing Unit Testing and UAT of issues and collaborating with the business users to obtain UAT sign-off Quarterly Release Testing, preparation of release notes and presenting the new features Worked on Transition Management Experience in working with various financial data upload / migration techniques like FBDI / ADFDI and related issue resolutions Experience in supporting period end closure activities independently Experience in reconciliation of financial data between GL and subledger modules High level knowledge of end-to-end integration of Financial Modules with other modules like Projects, Procurement / Order Management and HCM Fair knowledge of other Fusion modules like SCM or PPM functionality is a plus Generate adhoc reports to measure and to communicate the health of the applications Focus on reducing recurrence issues caused by the Oracle Fusion application Prepare process flows, dataflow diagrams, requirement documents, user training and onboarding documents to support upcoming projects and enhancements Deliver and track the delivery of issue resolutions to meet the SLAs and KPIs Should have good communication, presentation, analytical and problem-solving skills Coordinate with the team to close client requests on time and within SLA Should be able to independently conduct CRP, UAT and SIT sessions with the clients / stakeholders Should be able to manage the Oracle Fusion PPM Track independently, interact with clients, conduct business requirement meetings and user training sessions Managed Services - Application Evolution Services At PwC we relentlessly focus on working with our clients to bring the power of technology and humans together and create simple, yet powerful solutions. We imagine a day when our clients can simply focus on their business knowing that they have a trusted partner for their IT needs. Every day we are motivated and passionate about making our clients better. Within our Managed Services platform, PwC delivers integrated services and solutions that are grounded in deep industry experience and powered by the talent that you would expect from the PwC brand. The PwC Managed Services platform delivers scalable solutions that add greater value to our clients' enterprise through technology and human-enabled experiences.
Our team of highly skilled and trained global professionals, combined with the use of the latest advancements in technology and process, allows us to provide effective and efficient outcomes. With PwC's Managed Services our clients are able to focus on accelerating their priorities, including optimizing operations and accelerating outcomes. PwC brings a consultative-first approach to operations, leveraging our deep industry insights combined with world-class talent and assets to enable transformational journeys that drive sustained client outcomes. Our clients need flexible access to world-class business and technology capabilities that keep pace with today's dynamic business environment. Within our global Managed Services platform, we provide Application Evolution Services (formerly Application Managed Services), where we focus more so on the evolution of our clients' applications and cloud portfolio. Our focus is to empower our clients to navigate and capture the value of their application portfolio while cost-effectively operating and protecting their solutions. We do this so that our clients can focus on what matters most to their business: accelerating growth that is dynamic, efficient and cost-effective. As a member of our Application Evolution Services (AES) team, we are looking for candidates who thrive working in a high-paced work environment, capable of working on a mix of critical Application Evolution Service offerings and engagements, including help desk support, enhancement and optimization work, as well as strategic roadmap and advisory level work. It will also be key to lend experience and effort in helping win and support customer engagements from not only a technical perspective, but also a relationship perspective.
Posted 2 weeks ago
6.0 years
0 Lacs
Kochi, Kerala, India
On-site
Role Description UST is hiring Senior Java Developers to build end-to-end business solutions for a leading financial services client in the UK. The ideal candidate will have strong full-stack development experience, hands-on cloud exposure (preferably GCP), and excellent communication skills to collaborate effectively across technical and business teams. Key Responsibilities Collaborate with Product Owners to understand business needs and translate them into technical solutions. Lead feature development through sprints, from design through delivery. Conduct technical and code reviews ensuring performance, scalability, and maintainability. Provide technical guidance and mentorship to junior developers. Work across UI and service layers, addressing both frontend and backend concerns. Demonstrate developed features to client stakeholders. Support QA teams in test planning, defect triaging, and resolution. Mandatory Skills Java/JEE (6+ years) – Core Java, enterprise-level development, multi-tier architecture Spring Framework – Spring Boot, Spring MVC Web Services – REST, SOAP, JSON API Development – Swagger/OpenAPI, integration best practices Apache Beam Unit Testing – JUnit or equivalent frameworks SQL & Relational Databases Responsive UI Development – Cross-browser support, accessibility standards CI/CD & DevOps Tools – Jenkins, SonarQube Version Control & Collaboration Tools – Bitbucket, Jira, Confluence Agile & Scaled Agile Methodologies Strong written and verbal communication skills Good To Have Skills Google Cloud Platform (GCP) (2+ years preferred) Cloud Composer Dataflow Dataproc Cloud Pub/Sub DAG creation Python Scripting Application Design & Architecture UML, Design Patterns Frontend Technologies – AJAX, HTML5/CSS3, JavaScript (optional frameworks) Experience supporting QA teams – test case planning, root cause analysis Experience with microservices and scalable cloud-native architectures Skills: Java, Spring, Spring Boot, Microservices
Posted 2 weeks ago
0 years
0 Lacs
Pune/Pimpri-Chinchwad Area
On-site
Req ID: 332236 NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Business Consulting-Technical analyst with ETL,GCP using Pyspark to join our team in Pune, Mahārāshtra (IN-MH), India (IN). Key Responsibilities: Data Pipeline Development: Designing, implementing, and optimizing data pipelines on GCP using PySpark for efficient and scalable data processing. ETL Workflow Development: Building and maintaining ETL workflows for extracting, transforming, and loading data into various GCP services. GCP Service Utilization: Leveraging GCP services like BigQuery, Cloud Storage, Dataflow, and Dataproc for data storage, processing, and analysis. Data Transformation: Utilizing PySpark for data manipulation, cleansing, enrichment, and validation. Performance Optimization: Ensuring the performance and scalability of data processing jobs on GCP. Collaboration: Working with data scientists, analysts, and other stakeholders to understand data requirements and translate them into technical solutions. Data Quality and Governance: Implementing and maintaining data quality standards, security measures, and compliance with data governance policies on GCP. Troubleshooting and Support: Diagnosing and resolving issues related to data pipelines and infrastructure. Staying Updated: Keeping abreast of the latest GCP services, PySpark features, and best practices in data engineering. Required Skills: GCP Expertise: Strong understanding of GCP services like BigQuery, Cloud Storage, Dataflow, and Dataproc. PySpark Proficiency: Demonstrated experience in using PySpark for data processing, transformation, and analysis. Python Programming: Solid Python programming skills for data manipulation and scripting. Data Modeling and ETL: Experience with data modeling, ETL processes, and data warehousing concepts. SQL: Proficiency in SQL for querying and manipulating data in relational databases. Big Data Concepts: Understanding of big data principles and distributed computing concepts. Communication and Collaboration: Ability to effectively communicate technical solutions and collaborate with cross-functional teams About NTT DATA NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us . 
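As an illustrative sketch of the PySpark cleansing and validation work described in this posting (placeholder schema and GCS paths, not NTT DATA's actual pipeline), the snippet below splits records into valid and quarantined sets before loading downstream:

```python
# Sketch: validate records in PySpark and route failures to a quarantine path.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("customer_validation").getOrCreate()

customers = spark.read.parquet("gs://example-raw-bucket/customers/")

# Validation rules (illustrative): non-null id, plausible email, non-negative balance
is_valid = (
    F.col("customer_id").isNotNull()
    & F.col("email").rlike(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")
    & (F.col("balance") >= 0)
)

valid = customers.filter(is_valid)
quarantine = customers.filter(~is_valid).withColumn(
    "rejected_at", F.current_timestamp()
)

valid.write.mode("overwrite").parquet("gs://example-curated-bucket/customers/")
quarantine.write.mode("append").parquet("gs://example-quarantine-bucket/customers/")
```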
This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here . If you'd like more information on your EEO rights under the law, please click here . For Pay Transparency information, please click here .
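As an illustration of the PySpark-on-GCP pipeline work described in the posting above, here is a minimal ETL sketch: read raw CSV files from Cloud Storage, clean them, and load the result into BigQuery. The bucket, project, dataset, and column names are assumptions for the example, and the BigQuery write assumes the spark-bigquery connector that ships with recent Dataproc images.

```python
# Minimal PySpark ETL sketch: GCS -> transform -> BigQuery (illustrative names).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("gcs-to-bigquery-etl").getOrCreate()

# Read raw CSV files landed in Cloud Storage (hypothetical path).
raw = (
    spark.read.option("header", True)
    .csv("gs://example-bucket/landing/orders/*.csv")
)

# Basic cleansing and validation: dedupe, type-cast, filter bad rows.
cleaned = (
    raw.dropDuplicates(["order_id"])
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("amount") > 0)
)

# Write to a BigQuery table via the spark-bigquery connector (hypothetical target).
(
    cleaned.write.format("bigquery")
    .option("table", "example-project.analytics.orders")
    .option("temporaryGcsBucket", "example-bucket-tmp")
    .mode("overwrite")
    .save()
)
```

On Dataproc, a job like this is typically scheduled and orchestrated from Airflow (Cloud Composer) rather than run by hand.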
Posted 2 weeks ago
0 years
0 Lacs
Pune/Pimpri-Chinchwad Area
On-site
Req ID: 332238
NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Business Consulting - Technical Analyst with ETL and GCP using PySpark to join our team in Pune, Mahārāshtra (IN-MH), India (IN).
Key Responsibilities:
Data Pipeline Development: Designing, implementing, and optimizing data pipelines on GCP using PySpark for efficient and scalable data processing.
ETL Workflow Development: Building and maintaining ETL workflows for extracting, transforming, and loading data into various GCP services.
GCP Service Utilization: Leveraging GCP services like BigQuery, Cloud Storage, Dataflow, and Dataproc for data storage, processing, and analysis.
Data Transformation: Utilizing PySpark for data manipulation, cleansing, enrichment, and validation.
Performance Optimization: Ensuring the performance and scalability of data processing jobs on GCP.
Collaboration: Working with data scientists, analysts, and other stakeholders to understand data requirements and translate them into technical solutions.
Data Quality and Governance: Implementing and maintaining data quality standards, security measures, and compliance with data governance policies on GCP.
Troubleshooting and Support: Diagnosing and resolving issues related to data pipelines and infrastructure.
Staying Updated: Keeping abreast of the latest GCP services, PySpark features, and best practices in data engineering.
Required Skills:
GCP Expertise: Strong understanding of GCP services like BigQuery, Cloud Storage, Dataflow, and Dataproc.
PySpark Proficiency: Demonstrated experience in using PySpark for data processing, transformation, and analysis.
Python Programming: Solid Python programming skills for data manipulation and scripting.
Data Modeling and ETL: Experience with data modeling, ETL processes, and data warehousing concepts.
SQL: Proficiency in SQL for querying and manipulating data in relational databases.
Big Data Concepts: Understanding of big data principles and distributed computing concepts.
Communication and Collaboration: Ability to effectively communicate technical solutions and collaborate with cross-functional teams.
About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com
NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us.
This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here . If you'd like more information on your EEO rights under the law, please click here . For Pay Transparency information, please click here .
Posted 2 weeks ago
4.0 years
0 Lacs
Mulshi, Maharashtra, India
On-site
Area(s) of responsibility
About Birlasoft
Birlasoft, a global leader at the forefront of Cloud, AI, and Digital technologies, seamlessly blends domain expertise with enterprise solutions. The company’s consultative and design-thinking approach empowers societies worldwide, enhancing the efficiency and productivity of businesses. As part of the multibillion-dollar diversified CKA Birla Group, Birlasoft, with its 12,000+ professionals, is committed to continuing the Group’s 170-year heritage of building sustainable communities.
About the Job – Ability to relate the product functionality to business processes, and thus offer implementation advice to customers on how to meet their various business scenarios.
Job Title – GCP BigQuery Engineer
Location: Pune/Bangalore/Mumbai/Hyderabad/Noida
Educational Background – BE/BTech
Key Responsibilities – Must-Have Skills
Should have 4-8 years of experience.
Design, develop, and implement data warehousing and analytics solutions using Google BigQuery as the primary data storage and processing platform.
Work closely with business stakeholders, data architects, and data engineers to gather requirements and design scalable and efficient data models and schemas in BigQuery.
Implement data ingestion pipelines to extract, transform, and load (ETL) data from various source systems into BigQuery using GCP services such as Cloud Dataflow, Cloud Storage, and Data Transfer Service.
Optimize BigQuery performance and cost-effectiveness by designing partitioned tables, clustering tables, and optimizing SQL queries.
Develop and maintain data pipelines and workflows using GCP tools and technologies to automate data processing and analytics tasks.
Implement data security and access controls in BigQuery to ensure compliance with regulatory requirements and protect sensitive data.
Collaborate with cross-functional teams to integrate BigQuery with other GCP services and third-party tools to support advanced analytics, machine learning, and business intelligence initiatives.
Provide technical guidance and mentorship to junior members of the team and contribute to knowledge sharing and best practices development.
Qualifications
Bachelor's degree in Computer Science, Information Technology, or a related field.
Strong proficiency in designing and implementing data warehousing and analytics solutions using BigQuery.
Experience with data modeling, schema design, and optimization techniques in BigQuery.
Hands-on experience with GCP services such as Cloud Dataflow, Cloud Storage, Data Transfer Service, and Data Studio.
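To make the partitioning and clustering points above concrete, here is a hedged sketch that creates a date-partitioned, clustered BigQuery table and batch-loads it from Cloud Storage using the google-cloud-bigquery client. The project, dataset, bucket, and field names are illustrative assumptions, not details from the posting.

```python
# Illustrative BigQuery sketch: partitioned + clustered table, loaded from GCS.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")   # hypothetical project
table_id = "example-project.warehouse.events"          # hypothetical table

schema = [
    bigquery.SchemaField("event_date", "DATE"),
    bigquery.SchemaField("customer_id", "STRING"),
    bigquery.SchemaField("event_type", "STRING"),
    bigquery.SchemaField("amount", "NUMERIC"),
]

table = bigquery.Table(table_id, schema=schema)
table.time_partitioning = bigquery.TimePartitioning(field="event_date")  # daily partitions
table.clustering_fields = ["customer_id", "event_type"]                  # co-locate related rows
client.create_table(table, exists_ok=True)

# Batch-load CSV exports from Cloud Storage into the table.
load_job = client.load_table_from_uri(
    "gs://example-bucket/exports/events_*.csv",
    table_id,
    job_config=bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,
    ),
)
load_job.result()  # wait for the load to complete
```

Partitioning on event_date means queries that filter on date scan only the relevant partitions, and clustering on customer_id and event_type further reduces the bytes scanned, which is where most BigQuery cost savings come from.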
Posted 2 weeks ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Role Description
Roles and Responsibilities:
Lead and manage end-to-end delivery of data-centric projects including data warehousing, data integration, and business intelligence initiatives.
Drive project planning, execution, monitoring, and closure using industry-standard project management methodologies (Agile/Scrum/Waterfall).
Collaborate with cross-functional teams, data architects, developers, and business stakeholders to define project scope and requirements.
Ensure timely delivery of project milestones within the approved scope, budget, and timelines.
Proactively manage project risks, dependencies, and issues with clear mitigation strategies.
Establish effective communication plans and engage stakeholders at all levels to ensure project alignment and transparency.
Maintain and track detailed project documentation including timelines, resource plans, status reports, and governance logs.
Lead one or more full-lifecycle ETL/data integration implementations from initiation to go-live and support transition.
Ensure alignment of data architecture and modeling practices with organizational standards and best practices.
Must-Have Skills
Minimum 5+ years of experience in project management, with at least 3 years managing data-centric projects (e.g., data warehousing, business intelligence, data integration).
Strong understanding of data architecture principles, data modeling, and database design.
Proven experience managing full-lifecycle ETL/data integration projects.
Hands-on exposure to project planning, budgeting, resource management, stakeholder communication, and risk management.
Ability to drive cross-functional teams and communicate effectively with both technical and non-technical stakeholders.
Good-to-Have Skills
Working knowledge or hands-on experience with ETL tools such as Informatica, Talend, IBM DataStage, SSIS, AWS Glue, Azure Data Factory, or GCP Dataflow.
Familiarity with Agile/Scrum methodologies and tools like JIRA, MS Project, or Confluence.
PMP, PMI-ACP, or Scrum Master certification.
Prior experience working with cloud-based data solutions.
Skills: Healthcare, ETL, Data Warehousing, Project Management
Posted 2 weeks ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description
Product Strategy & Vision (AI/ML & Scale):
Define and evangelize the product vision, strategy, and roadmap for our AI/ML platform, data pipelines, and scalable application infrastructure, aligning with overall company objectives.
Identify market opportunities, customer needs, and technical trends to drive innovation and competitive advantage in the AI/ML and large-scale data domain.
Translate complex technical challenges and opportunities into clear, actionable product requirements and specifications.
Product Development & Execution Leadership:
Oversee the entire product lifecycle from ideation and discovery through development, launch, and post-launch iteration for critical AI/ML and data products.
Work closely with engineering, data science, and operations teams to ensure seamless execution and delivery of high-quality, performant, and scalable solutions.
Champion best practices for productionizing applications at scale and ensuring our systems can handle huge volumes of data efficiently and reliably.
Define KPIs and metrics for product success, monitoring performance and iterating based on data-driven insights.
People Management & Team Leadership:
Lead, mentor, coach, and grow a team of Technical Product Managers, fostering a culture of innovation, accountability, and continuous improvement.
Provide clear direction, set performance goals, conduct regular reviews, and support the professional development and career growth of your team members.
Act as a leader and role model, promoting collaboration, open communication, and a positive team environment.
Technical Expertise & Hands-On Contribution:
Possess a deep understanding of the end-to-end ML lifecycle (MLOps), from data ingestion and model training to deployment, monitoring, and continuous improvement.
Demonstrate strong proficiency in Google Cloud Platform (GCP) services, including but not limited to compute, storage, networking, data processing (e.g., BigQuery, Dataflow, Dataproc), and AI/ML services (e.g., Vertex AI, Cloud AI Platform).
Maintain strong hands-on expertise in Python programming, capable of contributing to prototypes, proofs of concept, data analysis, or technical investigations as needed.
Extensive practical experience with leading AI frameworks and libraries, including Hugging Face for natural language processing and transformer models.
Proven experience with LangGraph (or similar sophisticated agentic frameworks like LangChain, LlamaIndex), understanding their architecture and application in building intelligent, multi-step AI systems.
Solid understanding of agentic frameworks, their design patterns, and how to productionize complex AI agents.
Excellent exposure to GitHub and modern coding practices, including version control, pull requests, code reviews, CI/CD pipelines, and writing clean, maintainable code.
Cross-functional Collaboration & Stakeholder Management:
Collaborate effectively with diverse stakeholders across engineering, data science, design, sales, marketing, and executive leadership to gather requirements, communicate progress, and align strategies.
Act as a bridge between technical teams and business stakeholders, translating complex technical concepts into understandable business implications and vice versa.
Responsibilities
Technical Skills:
Deep expertise in Google Cloud Platform (GCP) services for data, AI/ML, and scalable infrastructure.
Expert-level hands-on Python programming skills (e.g., for data manipulation, scripting, API interaction, ML prototyping, productionizing).
Strong working knowledge of Hugging Face libraries and ecosystem.
Direct experience with LangGraph and/or other advanced agentic frameworks (e.g., LangChain, LlamaIndex) for building intelligent systems.
Solid understanding of the software development lifecycle, GitHub, Git workflows, and modern coding practices (CI/CD, testing, code quality).
Qualifications
Education: Bachelor’s or Master’s degree in Computer Science, Engineering, or a related technical field.
Experience:
8+ years of progressive experience in technical product management roles, with a significant portion focused on AI/ML, data platforms, or highly scalable systems.
3+ years of direct people management experience, leading and mentoring a team of product managers or technical leads.
Demonstrable track record of successfully bringing complex technical products from concept to production at scale.
Proven ability to manage products that handle massive volumes of data and require high throughput.
Extensive practical experience with AI/ML model deployment and MLOps best practices in a production environment.
Leadership & Soft Skills:
Exceptional leadership, communication, and interpersonal skills, with the ability to inspire and motivate a team.
Strong analytical and problem-solving abilities, with a data-driven approach to decision-making.
Ability to thrive in a fast-paced, ambiguous, and rapidly evolving technical environment.
Excellent ability to articulate complex technical concepts to both technical and non-technical audiences.
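For a flavour of the hands-on prototyping this role expects, here is a small, hedged Hugging Face sketch: running an off-the-shelf sentiment model locally before deciding whether and how to productionise it (for example on Vertex AI). The model choice and the example inputs are illustrative only.

```python
# Quick local prototype with the Hugging Face transformers pipeline API.
from transformers import pipeline

# Downloads a small pretrained sentiment-analysis model on first use.
classifier = pipeline("sentiment-analysis")

feedback = [
    "The new onboarding flow is much faster.",
    "Checkout keeps timing out on mobile.",
]

# classifier() returns one {'label': ..., 'score': ...} dict per input text.
for text, result in zip(feedback, classifier(feedback)):
    print(f"{result['label']:>8}  {result['score']:.2f}  {text}")
```

A prototype like this is typically the first step before wrapping the model behind an API, adding evaluation, and handing it to MLOps for deployment and monitoring.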
Posted 2 weeks ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Impetus is hiring good GCP Data Engineers. If you are strong in Big Data, Spark, PySpark and GCP (Pub/Sub, Dataproc, BigQuery, etc.), you are an immediate joiner, and you can join us in 0-30 days, please share your resume at rashmeet.g.tuteja@impetus.com.
Responsibilities
Able to effectively use GCP managed services, e.g. Dataproc, Dataflow, Pub/Sub, Cloud Functions, BigQuery, GCS - at least 4 of these services.
Should have strong experience in Big Data, Spark, PySpark and Python.
Strong experience in Big Data technologies - Hadoop, Sqoop, Hive and Spark.
Good hands-on expertise in either Python or Java programming.
Good understanding of GCP core services like Google Cloud Storage, Google Compute Engine, Cloud SQL, Cloud IAM.
Good to have knowledge of GCP services like App Engine, GKE, Cloud Run, Cloud Build.
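As a hedged illustration of two of the GCP services named above, the sketch below publishes a message to a Pub/Sub topic and consumes it with a streaming pull subscriber. The project, topic, and subscription IDs are placeholders and are assumed to already exist.

```python
# Minimal Pub/Sub publish + streaming pull sketch (placeholder resource names).
from concurrent.futures import TimeoutError
from google.cloud import pubsub_v1

PROJECT_ID = "example-project"   # hypothetical project

# Publish a single JSON-encoded event.
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(PROJECT_ID, "clickstream-events")
publisher.publish(topic_path, b'{"user_id": 42, "action": "page_view"}').result()

# Pull messages from an existing subscription and acknowledge them.
subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path(PROJECT_ID, "clickstream-events-sub")

def handle(message):
    print("received:", message.data.decode())
    message.ack()

streaming_pull = subscriber.subscribe(subscription_path, callback=handle)
try:
    streaming_pull.result(timeout=10)   # listen briefly for the demo
except TimeoutError:
    streaming_pull.cancel()
```

In a real pipeline the subscriber side is usually a Dataflow or Cloud Functions consumer rather than a script, but the topic/subscription model is the same.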
Posted 2 weeks ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Impetus is hiring good GCP Data Engineers. If you are strong in Big Data, Spark, PySpark and GCP (Pub/Sub, Dataproc, BigQuery, etc.), you are an immediate joiner, and you can join us in 0-30 days, please share your resume at vaishali.tyagi@impetus.com.
Responsibilities
Able to effectively use GCP managed services, e.g. Dataproc, Dataflow, Pub/Sub, Cloud Functions, BigQuery, GCS - at least 4 of these services.
Should have strong experience in Big Data, Spark, PySpark and Python.
Strong experience in Big Data technologies - Hadoop, Sqoop, Hive and Spark.
Good hands-on expertise in either Python or Java programming.
Good understanding of GCP core services like Google Cloud Storage, Google Compute Engine, Cloud SQL, Cloud IAM.
Good to have knowledge of GCP services like App Engine, GKE, Cloud Run, Cloud Build.
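For the Dataproc side of the same stack, here is a hedged sketch that submits a PySpark job to an existing Dataproc cluster using the google-cloud-dataproc client. The project, region, cluster name, and script URI are assumptions for illustration; the same submission can also be done with the gcloud CLI or an Airflow Dataproc operator.

```python
# Submit a PySpark job to an existing Dataproc cluster (placeholder names).
from google.cloud import dataproc_v1

PROJECT_ID = "example-project"
REGION = "us-central1"
CLUSTER = "example-cluster"

# A regional endpoint is required when submitting jobs to a regional cluster.
job_client = dataproc_v1.JobControllerClient(
    client_options={"api_endpoint": f"{REGION}-dataproc.googleapis.com:443"}
)

job = {
    "placement": {"cluster_name": CLUSTER},
    "pyspark_job": {
        "main_python_file_uri": "gs://example-bucket/jobs/etl_job.py",  # hypothetical script
        "args": ["--run-date=2024-01-01"],
    },
}

operation = job_client.submit_job_as_operation(
    request={"project_id": PROJECT_ID, "region": REGION, "job": job}
)
response = operation.result()   # blocks until the Spark job finishes
print("Driver output:", response.driver_output_resource_uri)
```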
Posted 2 weeks ago
6091 Jobs | Paris,France