3.0 - 7.0 years
8 - 13 Lacs
Pune
Work from Office
Job Title: Senior Engineer PD
Location: Pune, India

Role Description
Our team is part of the Technology, Data, and Innovation (TDI) Private Bank area. Within TDI, Partner data is the central client reference data system in Germany. As a core banking system, many banking processes and applications are integrated with it and communicate via more than 2,000 interfaces. From a technical perspective, we focus on the mainframe but also build solutions on-premise and in the cloud, including RESTful services and an Angular frontend. Alongside maintenance and the implementation of new CTB requirements, the content focus also lies on the regulatory and tax topics surrounding a partner/client. We are looking for a highly motivated candidate for the Cloud Data Engineer area.

What we'll offer you
- 100% reimbursement under the childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Accident and term life insurance

Your key responsibilities
- Implement the new project on GCP (Spark, Dataproc, Dataflow, BigQuery, Terraform, etc.) across the whole SDLC chain
- Support the migration of current functionalities to Google Cloud
- Ensure the stability of the application landscape and support software releases
- Support L3 topics and application governance
- Code as part of an agile team in the CTM area (Java, Scala, Spring Boot)

Your skills and experience
- Experience with databases (BigQuery, Cloud SQL, NoSQL, Hive, etc.) and development, preferably for Big Data and GCP technologies
- Strong understanding of the Data Mesh approach and integration patterns
- Understanding of party data and integration with product data
- Architectural skills for big data solutions, especially interface architecture, that allow a fast start
- Experience in at least Spark, Java, Scala and Python, Maven, Artifactory, the Hadoop ecosystem, GitHub Actions, GitHub, and Terraform scripting
- Knowledge of customer reference data, customer opening processes and, preferably, regulatory topics around know-your-customer processes
- Ability to work well in teams but also independently, in a constructive and target-oriented way
- Good English skills, able to communicate both professionally and informally in small talk with the team

How we'll support you
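The responsibilities above centre on Spark jobs running on Dataproc that read from and write to BigQuery. Below is a minimal, illustrative PySpark sketch of that pattern; the project, dataset, table, and bucket names are placeholders, and it assumes the spark-bigquery connector is available on the Dataproc cluster.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical names -- replace with real project/dataset/table/bucket values.
SOURCE_TABLE = "my-project.partner_data.clients_raw"
TARGET_TABLE = "my-project.partner_data.clients_curated"
TEMP_BUCKET = "my-project-dataproc-temp"

spark = SparkSession.builder.appName("partner-data-curation").getOrCreate()

# Read the raw partner/client data from BigQuery via the spark-bigquery connector.
raw = spark.read.format("bigquery").option("table", SOURCE_TABLE).load()

# Example transformation: keep active clients and normalise the country code.
curated = (
    raw.filter(F.col("status") == "ACTIVE")
       .withColumn("country_code", F.upper(F.col("country_code")))
)

# Write the curated view back to BigQuery; the connector stages data in GCS.
(curated.write.format("bigquery")
    .option("table", TARGET_TABLE)
    .option("temporaryGcsBucket", TEMP_BUCKET)
    .mode("overwrite")
    .save())
```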
Posted 1 month ago
8.0 - 13.0 years
35 - 50 Lacs
Hyderabad
Hybrid
Location: Hyderabad | Experience: 8+ Years | Immediate Joiners Preferred

We at Datametica Solutions Private Limited are looking for a GCP Data Architect who has a passion for cloud, with knowledge and working experience of the GCP platform. This role involves understanding business requirements, analyzing technical options, and providing end-to-end cloud-based ETL solutions.

Required Past Experience:
- 10+ years of overall experience in architecting, developing, testing, and implementing Big Data projects using GCP components (e.g. BigQuery, Composer, Dataflow, Dataproc, DLP, BigTable, Pub/Sub, Cloud Functions, etc.)
- Experience and understanding of ETL with Ab Initio
- Minimum 4+ years of experience with data management strategy formulation, architectural blueprinting, and effort estimation
- Cloud capacity planning and cost-based analysis
- Worked with large datasets and solved difficult analytical problems
- Regulatory and compliance work in data management
- Tackled design and architectural challenges such as performance, scalability, and reusability
- Advocated engineering and design best practices, including design patterns, code reviews, and automation (e.g., CI/CD, test automation)
- End-to-end data engineering and lifecycle management (including non-functional requirements and operations)
- Worked with client teams to design and implement modern, scalable data solutions using a range of new and emerging technologies from the Google Cloud Platform
- Fundamentals of Kafka and Pub/Sub to handle real-time data feeds
- Good understanding of data pipeline design and data governance concepts
- Experience in code deployment from lower environments to production
- Good communication skills to understand business requirements

Required Skills and Abilities:
- Mandatory Skills: BigQuery, Composer, Python, GCP fundamentals
- Secondary Skills: Dataproc, Kubernetes, DLP, Pub/Sub, Dataflow, shell scripting, SQL, security (platform and data) concepts
- Expertise in data modeling
- Detailed knowledge of Data Lake and Enterprise Data Warehouse principles
- Expertise in ETL migration from on-premises to GCP cloud
- Familiarity with Hadoop ecosystems, HBase, Hive, Spark, or emerging data mesh patterns
- Ability to communicate with customers, developers, and other stakeholders
- Good to have: certifications in any of the following: GCP Professional Cloud Architect, GCP Professional Data Engineer
- Mentor and guide team members
- Good presentation skills
- Strong team player
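Since the mandatory stack above pairs Composer with BigQuery, here is a minimal, illustrative Airflow DAG of the kind a Cloud Composer environment would run; the project, dataset, and SQL are hypothetical placeholders and assume the Google provider package is installed.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

# Hypothetical project/dataset names for illustration only.
LOAD_SQL = """
CREATE OR REPLACE TABLE `my-project.analytics.daily_orders` AS
SELECT order_id, customer_id, order_ts, amount
FROM `my-project.staging.orders`
WHERE DATE(order_ts) = CURRENT_DATE()
"""

with DAG(
    dag_id="daily_orders_to_analytics",
    start_date=datetime(2024, 1, 1),
    schedule_interval="0 2 * * *",  # run daily at 02:00
    catchup=False,
) as dag:
    # Runs the SQL above as a BigQuery job from the Composer environment.
    build_daily_orders = BigQueryInsertJobOperator(
        task_id="build_daily_orders",
        configuration={"query": {"query": LOAD_SQL, "useLegacySql": False}},
    )
```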
Posted 1 month ago
7.0 - 12.0 years
20 - 35 Lacs
Bengaluru
Work from Office
Job Description: GCP Lead (Google Cloud Platform)
Location: Brookefield, Bangalore, India
Department: Software Development
Legal Entity: FGSI

Why Join Fossil Group?
At Fossil Group, we are part of an international team that dares to dream, disrupt, and deliver innovative watches, jewelry, and leather goods to the world. We're committed to long-term value creation, driven by technology and our core values: Authenticity, Grit, Curiosity, Humor, and Impact. If you are a forward-thinker who thrives in a diverse, global setting, we want to hear from you.

Job Summary
We are seeking a passionate and technically strong GCP Lead to join our Global Technology team at Fossil Group. This role is responsible for leading cloud-based architecture design, implementation, automation, and optimization efforts on Google Cloud Platform (GCP). You will serve as a technical mentor and thought leader, enabling the business with modern, scalable cloud infrastructure.

Responsibilities
- Lead the design, development, and deployment of end-to-end GCP-based solutions.
- Architect cloud-native data pipelines, applications, and infrastructure using GCP tools such as BigQuery, Dataflow, Dataproc, Airflow, and Cloud Composer.
- Collaborate with stakeholders across engineering, data, security, and operations to align cloud solutions with business goals.
- Provide technical guidance and mentorship to team members on GCP best practices and automation.
- Implement monitoring, logging, and alerting systems using Stackdriver, Cloud Logging, Cloud Monitoring, or similar tools.
- Oversee performance tuning, cost optimization, and security hardening of GCP workloads.
- Drive continuous integration and deployment pipelines using Terraform, Jenkins, GitOps, etc.
- Ensure compliance with cloud security standards, access control policies, and audit requirements.
- Participate in architectural reviews, sprint planning, and roadmap discussions.
- Manage escalations, troubleshoot cloud issues, and ensure uptime and service reliability.

Requirements
- Bachelor's degree in Computer Science, Information Technology, or equivalent experience.
- 7-12 years of total experience, with 3+ years in GCP-focused roles and prior leadership/architect experience.
- Hands-on experience with GCP services: BigQuery, Dataproc, Dataflow, Airflow, Cloud Composer.
- Strong development background in Python or Java, with experience building data pipelines and APIs.
- Expertise in infrastructure as code (IaC) using Terraform, Deployment Manager, or similar tools.
- Working knowledge of containerization and orchestration (Docker, Kubernetes, GKE).
- Proficiency in CI/CD pipelines, version control (Git), and Agile delivery methodologies.
- Familiarity with data security, encryption, identity management, and compliance frameworks.
- Excellent problem-solving skills, system-level thinking, and performance tuning expertise.
- Strong communication and stakeholder management skills.

EEO Statement
At Fossil, we believe our differences not only make us stronger as a team, but also help us create better products and a richer community. We are an Equal Employment Opportunity Employer dedicated to a policy of nondiscrimination in all employment practices without regard to age, disability, gender identity or expression, marital status, pregnancy, race, religion, sexual orientation, or any other protected characteristic.
Posted 1 month ago
2.0 - 5.0 years
18 - 21 Lacs
Hyderabad
Work from Office
Overview
Annalect is currently seeking a data engineer to join our technology team. In this role you will build Annalect products which sit atop cloud-based data infrastructure. We are looking for people who have a shared passion for technology, design and development, and data, and for fusing these disciplines together to build cool things. In this role, you will work on one or more software and data products in the Annalect Engineering Team. You will participate in technical architecture, design, and development of software products as well as research and evaluation of new technical solutions.

Responsibilities
- Design, build, test, and deploy scalable and reusable systems that handle large amounts of data.
- Collaborate with product owners and data scientists to build new data products.
- Ensure data quality and reliability.

Qualifications
- Experience designing and managing data flows.
- Experience designing systems and APIs to integrate data into applications.
- 4+ years of Linux, Bash, Python, and SQL experience.
- 2+ years using Spark and other frameworks to process large volumes of data.
- 2+ years using Parquet, ORC, or other columnar file formats.
- 2+ years using AWS cloud services, especially services used for data processing, e.g. Glue, Dataflow, Data Factory, EMR, Dataproc, HDInsight, Athena, Redshift, BigQuery, etc.
- Passion for technology: excitement for new technology, bleeding-edge applications, and a positive attitude towards solving real-world challenges.
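The qualifications above pair Spark with columnar formats such as Parquet and ORC. The snippet below is an illustrative sketch of that combination, writing a partitioned Parquet dataset; the paths and column names are placeholders, not Annalect's actual pipelines.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-to-parquet").getOrCreate()

# Hypothetical input/output locations (could be s3a:// or gs:// URIs in practice).
INPUT_PATH = "s3a://example-bucket/raw/events/*.json"
OUTPUT_PATH = "s3a://example-bucket/curated/events_parquet"

# Read semi-structured JSON events.
events = spark.read.json(INPUT_PATH)

# Derive a date partition column so downstream engines can prune partitions.
events = events.withColumn("event_date", F.to_date(F.col("event_ts")))

# Write as snappy-compressed Parquet, partitioned by date -- a typical
# columnar layout for engines such as Athena, Redshift Spectrum, or BigQuery external tables.
(events.write
    .mode("overwrite")
    .partitionBy("event_date")
    .option("compression", "snappy")
    .parquet(OUTPUT_PATH))
```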
Posted 1 month ago
3.0 - 5.0 years
5 - 7 Lacs
Bengaluru
Hybrid
Shift: (GMT+05:30) Asia/Kolkata (IST)

What do you need for this opportunity?
Must-have skills required: Data Engineering, Big Data Technologies, Hadoop, Spark, Hive, Presto, Airflow, Data Modeling, ETL Development, Data Lake Architecture, Python, Scala, GCP (BigQuery, Dataproc, Dataflow, Cloud Composer), AWS, Big Data Stack, Azure

Wayfair is looking for:

About the job
The Data Engineering team within the SMART org supports development of large-scale data pipelines for machine learning and analytical solutions related to unstructured and structured data. You'll have the opportunity to gain hands-on experience on all kinds of systems in the data platform ecosystem. Your work will have a direct impact on all applications that our millions of customers interact with every day: search results, homepage content, emails, auto-complete searches, browse pages, and product carousels. You will also build and scale data platforms that measure the effectiveness of Wayfair's ad costs and media attribution, which helps decide on day-to-day and major marketing spends.

About the Role:
As a Data Engineer, you will be part of the Data Engineering team, with this role being inherently multi-functional; the ideal candidate will work with Data Scientists, Analysts, and application teams across the company, as well as all other Data Engineering squads at Wayfair. We are looking for someone with a love for data, the ability to understand requirements clearly, and the ability to iterate quickly. Successful candidates will have strong engineering and communication skills and a belief that data-driven processes lead to phenomenal products.

What you'll do:
- Build and launch data pipelines and data products focused on the SMART org.
- Help teams push the boundaries of insights, creating new product features using data, and powering machine learning models.
- Build cross-functional relationships to understand data needs, build key metrics, and standardize their usage across the organization.
- Utilize current and leading-edge technologies in software engineering, big data, streaming, and cloud infrastructure.

What You'll Need:
- Bachelor's/Master's degree in Computer Science or a related technical subject area, or an equivalent combination of education and experience.
- 3+ years of relevant work experience in the Data Engineering field with web-scale data sets.
- Demonstrated strength in data modeling, ETL development, and data lake architecture.
- Data warehousing experience with Big Data technologies (Hadoop, Spark, Hive, Presto, Airflow, etc.).
- Coding proficiency in at least one modern programming language (Python, Scala, etc.).
- Experience building/operating highly available, distributed systems for data extraction, ingestion, and processing, with query performance tuning skills on large data sets.
- Industry experience as a Big Data Engineer working alongside cross-functional teams such as Software Engineering, Analytics, and Data Science, with a track record of manipulating, processing, and extracting value from large datasets.
- Strong business acumen.
- Experience leading large-scale data warehousing and analytics projects, including using GCP technologies (BigQuery, Dataproc, GCS, Cloud Composer, Dataflow) or related big data technologies in other cloud platforms like AWS, Azure, etc.
- Be a team player and introduce/follow best practices in the data engineering space.
- Ability to effectively communicate (both written and verbally) technical information and the results of engineering design at all levels of the organization.
Good to have:
- Understanding of NoSQL databases and Pub/Sub architecture setup.
- Familiarity with BI tools like Looker, Tableau, AtScale, PowerBI, or similar.
Posted 1 month ago
15.0 - 20.0 years
9 - 14 Lacs
Pune
Work from Office
Project Role: AI / ML Engineer
Project Role Description: Develops applications and systems that utilize AI to improve performance and efficiency, including but not limited to deep learning, neural networks, chatbots, and natural language processing.
Must-have skills: Google Cloud Machine Learning Services
Good-to-have skills: GCP Dataflow, Google Dataproc, Google Pub/Sub
Minimum 2 year(s) of experience is required
Educational Qualification: 15 years of full-time education

Summary:
As an AI / ML Engineer, you will engage in the development of applications and systems that leverage artificial intelligence to enhance performance and efficiency. Your typical day will involve collaborating with cross-functional teams to design and implement innovative solutions, utilizing advanced technologies such as deep learning and natural language processing. You will also be responsible for analyzing data and refining algorithms to ensure optimal functionality and user experience, while continuously exploring new methodologies to drive improvements in AI applications.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute to providing solutions to work-related problems.
- Assist in the design and development of AI-driven applications to meet project requirements.
- Collaborate with team members to troubleshoot and resolve technical challenges.

Professional & Technical Skills:
- Must-have skills: Proficiency in Google Cloud Machine Learning Services.
- Good-to-have skills: Experience with GCP Dataflow, Google Pub/Sub, Google Dataproc.
- Strong understanding of machine learning frameworks and libraries.
- Experience in deploying machine learning models in cloud environments.
- Familiarity with data preprocessing and feature engineering techniques.

Additional Information:
- The candidate should have a minimum of 2 years of experience in Google Cloud Machine Learning Services.
- This position is based at our Pune office.
- 15 years of full-time education is required.

Qualification: 15 years of full-time education
Posted 1 month ago
6.0 - 11.0 years
10 - 15 Lacs
Bengaluru
Work from Office
Shift: (GMT+05:30) Asia/Kolkata (IST)

What do you need for this opportunity?
Must-have skills required: Machine Learning, ML architectures and lifecycle, Airflow, Kubeflow, MLflow, Spark, Kubernetes, Docker, Python, SQL, machine learning platforms, BigQuery, GCS, Dataproc, AI Platform, Search Ranking, Deep Learning, Deep Learning Frameworks, PyTorch, TensorFlow

About the job
Candidates for this position are preferred to be based in Bangalore, India and will be expected to comply with their team's hybrid work schedule requirements.

Who We Are
Wayfair's Advertising business is rapidly expanding, adding hundreds of millions of dollars in profits to Wayfair. We are building Sponsored Products, Display, and Video Ad offerings that cater to a variety of advertiser goals while showing highly relevant and engaging ads to millions of customers. We are evolving our Ads Platform to empower advertisers across all sophistication levels to grow their business on Wayfair at a strong, positive ROI, and we are leveraging state-of-the-art machine learning techniques.

What you'll do
- Provide technical leadership in the development of an automated and intelligent advertising system by advancing the state of the art in machine learning techniques to support recommendations for Ads campaigns and other optimizations.
- Design, build, deploy, and refine extensible, reusable, large-scale, and real-world platforms that optimize our ads experience.
- Work cross-functionally with commercial stakeholders to understand business problems or opportunities and develop appropriately scoped machine learning solutions.
- Collaborate closely with various engineering, infrastructure, and machine learning platform teams to ensure adoption of best practices in how we build and deploy scalable machine learning services.
- Identify new opportunities and insights from the data (where can the models be improved? What is the projected ROI of a proposed modification?).
- Research new developments in advertising, sort, and recommendations research and open-source packages, and incorporate them into our internal packages and systems.
- Be obsessed with the customer and maintain a customer-centric lens in how we frame, approach, and ultimately solve every problem we work on.

We Are a Match Because You Have:
- Bachelor's or Master's degree in Computer Science, Mathematics, Statistics, or a related field.
- 6-9 years of industry experience in advanced machine learning and statistical modeling, including hands-on designing and building of production models at scale.
- Strong theoretical understanding of statistical models such as regression and clustering, and machine learning algorithms such as decision trees, neural networks, etc.
- Familiarity with machine learning model development frameworks, machine learning orchestration, and pipelines, with experience in Airflow, Kubeflow, or MLflow as well as Spark, Kubernetes, Docker, Python, and SQL.
- Proficiency in Python or one other high-level programming language.
- Solid hands-on expertise deploying machine learning solutions into production.
- Strong written and verbal communication skills, ability to synthesize conclusions for non-experts, and an overall bias towards simplicity.

Nice to have
- Familiarity with machine learning platforms offered by Google Cloud and how to implement them on a large scale (e.g. BigQuery, GCS, Dataproc, AI Notebooks).
- Experience in computational advertising, bidding algorithms, or search ranking.
- Experience with deep learning frameworks like PyTorch, TensorFlow, etc.
Posted 1 month ago
4.0 - 8.0 years
10 - 18 Lacs
Hyderabad
Hybrid
About the Role:
We are looking for a skilled and motivated Data Engineer with strong experience in Python programming and Google Cloud Platform (GCP) to join our data engineering team. The ideal candidate will be responsible for designing, developing, and maintaining robust and scalable ETL (Extract, Transform, Load) data pipelines. The role involves working with various GCP services, implementing data ingestion and transformation logic, and ensuring data quality and consistency across systems.

Key Responsibilities:
- Design, develop, test, and maintain scalable ETL data pipelines using Python.
- Work extensively on Google Cloud Platform (GCP) services such as:
  - Dataflow for real-time and batch data processing
  - Cloud Functions for lightweight serverless compute
  - BigQuery for data warehousing and analytics
  - Cloud Composer for orchestration of data workflows (based on Apache Airflow)
  - Google Cloud Storage (GCS) for managing data at scale
  - IAM for access control and security
  - Cloud Run for containerized applications
- Perform data ingestion from various sources and apply transformation and cleansing logic to ensure high-quality data delivery.
- Implement and enforce data quality checks, validation rules, and monitoring.
- Collaborate with data scientists, analysts, and other engineering teams to understand data needs and deliver efficient data solutions.
- Manage version control using GitHub and participate in CI/CD pipeline deployments for data projects.
- Write complex SQL queries for data extraction and validation from relational databases such as SQL Server, Oracle, or PostgreSQL.
- Document pipeline designs, data flow diagrams, and operational support procedures.

Required Skills:
- 4-6 years of hands-on experience in Python for backend or data engineering projects.
- Strong understanding and working experience with GCP cloud services (especially Dataflow, BigQuery, Cloud Functions, Cloud Composer, etc.).
- Solid understanding of data pipeline architecture, data integration, and transformation techniques.
- Experience working with version control systems like GitHub and knowledge of CI/CD practices.
- Strong experience in SQL with at least one enterprise database (SQL Server, Oracle, PostgreSQL, etc.).

Good to Have (Optional Skills):
- Experience working with the Snowflake cloud data platform.
- Hands-on knowledge of Databricks for big data processing and analytics.
- Familiarity with Azure Data Factory (ADF) and other Azure data engineering tools.
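As a rough illustration of the Dataflow-to-BigQuery work described above, here is a minimal Apache Beam pipeline in Python; the bucket, table, and schema are hypothetical, and in a real project the pipeline would be launched with the DataflowRunner and project-specific options.

```python
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Hypothetical locations -- replace with real bucket/table names.
INPUT_PATH = "gs://example-bucket/incoming/orders-*.json"
OUTPUT_TABLE = "my-project:sales.orders"

def parse_and_clean(line):
    """Parse a JSON line and apply simple cleansing/transformation logic."""
    record = json.loads(line)
    return {
        "order_id": record["order_id"],
        "customer_id": record["customer_id"],
        "amount": round(float(record.get("amount", 0.0)), 2),
    }

options = PipelineOptions()  # add --runner=DataflowRunner, --project, etc. for Dataflow

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadFromGCS" >> beam.io.ReadFromText(INPUT_PATH)
        | "ParseAndClean" >> beam.Map(parse_and_clean)
        | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
            OUTPUT_TABLE,
            schema="order_id:STRING,customer_id:STRING,amount:FLOAT",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
        )
    )
```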
Posted 1 month ago
4.0 - 9.0 years
14 - 20 Lacs
Hyderabad
Work from Office
Interested candidates, please share your updated CV to dikshith.nalapatla@motivitylabs.com

Job Title: GCP Data Engineer

Overview:
We are looking for a skilled GCP Data Engineer with 4 to 5 years of real hands-on experience in data ingestion, data engineering, data quality, data governance, and cloud data warehouse implementations using GCP data services. The ideal candidate will be responsible for designing and developing data pipelines, participating in architectural discussions, and implementing data solutions in a cloud environment.

Key Responsibilities:
- Collaborate with stakeholders to gather requirements and create high-level and detailed technical designs.
- Develop and maintain data ingestion frameworks and pipelines from various data sources using GCP services.
- Participate in architectural discussions, conduct system analysis, and suggest optimal solutions that are scalable, future-proof, and aligned with business requirements.
- Design data models suitable for both transactional and big data environments, supporting Machine Learning workflows.
- Build and optimize ETL/ELT infrastructure using a variety of data sources and GCP services.
- Develop and implement data and semantic interoperability specifications.
- Work closely with business teams to define and scope requirements.
- Analyze existing systems to identify appropriate data sources and drive continuous improvement.
- Implement and continuously enhance automation processes for data ingestion and data transformation.
- Support DevOps automation efforts to ensure smooth integration and deployment of data pipelines.
- Provide design expertise in Master Data Management (MDM), Data Quality, and Metadata Management.

Skills and Qualifications:
- Overall 4-5 years of hands-on experience as a Data Engineer, with at least 3 years of direct GCP Data Engineering experience.
- Strong SQL and Python development skills are mandatory.
- Solid experience in data engineering, working with distributed architectures, ETL/ELT, and big data technologies.
- Demonstrated knowledge of and experience with Google Cloud BigQuery is a must.
- Experience with Dataproc and Dataflow is highly preferred.
- Strong understanding of serverless data warehousing on GCP and familiarity with DWBI modeling frameworks.
- Extensive experience in SQL across various database platforms.
- Experience in data mapping and data modeling.
- Familiarity with data analytics tools and best practices.
- Hands-on experience with one or more programming/scripting languages such as Python, JavaScript, Java, R, or UNIX Shell.
- Practical experience with Google Cloud services including but not limited to: BigQuery, BigTable, Cloud Dataflow, Cloud Dataproc, Cloud Storage, Pub/Sub, Cloud Functions, Cloud Composer, Cloud Spanner, Cloud SQL.
- Knowledge of modern data mining, cloud computing, and data management tools (such as Hadoop, HDFS, and Spark).
- Familiarity with GCP tools like Looker, Airflow DAGs, Data Studio, App Maker, etc.
- Hands-on experience implementing enterprise-wide cloud data lake and data warehouse solutions on GCP.
- GCP Data Engineer Certification is highly preferred.
Interested candidates, please share your updated CV to dikshith.nalapatla@motivitylabs.com with the following details:
- Total Experience:
- Relevant Experience:
- Current Role / Skillset:
- Current CTC:
  - Fixed:
  - Variables (if any):
  - Bonus (if any):
- Payroll Company (Name):
- Client Company (Name):
- Expected CTC:
- Official Notice Period:
- Serving Notice (Yes / No):
- CTC of offer in hand:
- Last Working Day (in current organization):
- Location of the offer in hand:
- Willing to work from office:

************* 5 DAYS WORK FROM OFFICE MANDATORY ****************
Posted 1 month ago
7.0 - 12.0 years
15 - 30 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
Warm greetings from SP Staffing Services Private Limited!!

We have an urgent opening with our CMMI Level 5 client for the below position. Please send your updated profile if you are interested.

Relevant Experience: 5 - 15 Yrs
Location: Pan India

Job Description:
- Minimum 2 years of hands-on experience in GCP Development (Data Engineering)
- Position: Developer / Tech Lead / Architect

Interested candidates can share their resume to sankarspstaffings@gmail.com with the below inline details:
- Overall Exp:
- Relevant Exp:
- Current CTC:
- Expected CTC:
- Notice Period:
Posted 1 month ago
2.0 - 6.0 years
5 - 9 Lacs
Noida
Work from Office
About Us:
At TELUS Digital, we enable customer experience innovation through spirited teamwork, agile thinking, and a caring culture that puts customers first. TELUS Digital is the global arm of TELUS Corporation, one of the largest telecommunications service providers in Canada. We deliver contact center and business process outsourcing (BPO) solutions to some of the world's largest corporations in the consumer electronics, finance, telecommunications, and utilities sectors. With global call center delivery capabilities, our multi-shore, multi-language programs offer safe, secure infrastructure, value-based pricing, skills-based resources, and exceptional customer service, all backed by TELUS, our multi-billion dollar telecommunications parent.

Required Skills:
- Minimum 6 years of experience in the architecture, design, and building of data extraction, loading, and transformation pipelines and data products across on-prem and cloud platforms.
- Perform application impact assessments, requirements reviews, and develop work estimates.
- Develop test strategies and site reliability engineering measures for data products and solutions.
- Lead agile development "scrums" and solution reviews.
- Mentor junior Data Engineering Specialists.
- Lead the resolution of critical operations issues, including post-implementation reviews.
- Perform technical data stewardship tasks, including metadata management, security, and privacy by design.
- Demonstrate expertise in SQL and database proficiency in various data engineering tasks.
- Automate complex data workflows by setting up DAGs in tools like Control-M, Apache Airflow, and Prefect.
- Develop and manage Unix scripts for data engineering tasks.
- Intermediate proficiency in infrastructure-as-code tools like Terraform, Puppet, and Ansible to automate infrastructure deployment.
- Proficiency in data modeling to support analytics and business intelligence.
- Working knowledge of MLOps to integrate machine learning workflows with data pipelines.
- Extensive expertise in GCP technologies, including BigQuery, Cloud SQL, Dataflow, Data Catalog, Cloud Composer, Google Cloud Storage, IAM, Compute Engine, Cloud Data Fusion, Dataproc (good to have), and BigTable.
- Advanced proficiency in programming languages (Python).

Qualifications:
- Bachelor's degree in Software Engineering, Computer Science, Business, Mathematics, or a related field.
- Analytics certification in BI or AI/ML.
- 6+ years of data engineering experience.
- 4 years of data platform solution architecture and design experience.
- GCP Certified Data Engineer (preferred).
Posted 1 month ago
2.0 - 6.0 years
5 - 9 Lacs
Noida
Work from Office
About Us:
At TELUS Digital, we enable customer experience innovation through spirited teamwork, agile thinking, and a caring culture that puts customers first. TELUS Digital is the global arm of TELUS Corporation, one of the largest telecommunications service providers in Canada. We deliver contact center and business process outsourcing (BPO) solutions to some of the world's largest corporations in the consumer electronics, finance, telecommunications, and utilities sectors. With global call center delivery capabilities, our multi-shore, multi-language programs offer safe, secure infrastructure, value-based pricing, skills-based resources, and exceptional customer service, all backed by TELUS, our multi-billion dollar telecommunications parent.

Required Skills:
- Design, develop, and support data pipelines and related data products and platforms.
- Design and build data extraction, loading, and transformation pipelines and data products across on-prem and cloud platforms.
- Perform application impact assessments, requirements reviews, and develop work estimates.
- Develop test strategies and site reliability engineering measures for data products and solutions.
- Participate in agile development "scrums" and solution reviews.
- Mentor junior Data Engineers.
- Lead the resolution of critical operations issues, including post-implementation reviews.
- Perform technical data stewardship tasks, including metadata management, security, and privacy by design.
- Design and build data extraction, loading, and transformation pipelines using Python and other GCP data technologies.
- Demonstrate SQL and database proficiency in various data engineering tasks.
- Automate data workflows by setting up DAGs in tools like Control-M, Apache Airflow, and Prefect.
- Develop Unix scripts to support various data operations.
- Model data to support business intelligence and analytics initiatives.
- Utilize infrastructure-as-code tools such as Terraform, Puppet, and Ansible for deployment automation.
- Expertise in GCP data warehousing technologies, including BigQuery, Cloud SQL, Dataflow, Data Catalog, Cloud Composer, Google Cloud Storage, IAM, Compute Engine, Cloud Data Fusion, and Dataproc (good to have).

Qualifications:
- Bachelor's degree in Software Engineering, Computer Science, Business, Mathematics, or a related field.
- 4+ years of data engineering experience.
- 2 years of data solution architecture and design experience.
- GCP Certified Data Engineer (preferred).
Posted 1 month ago
5.0 - 8.0 years
10 - 15 Lacs
Hyderabad, Pune
Work from Office
Role & Responsibilities
We are looking for highly skilled Data Engineers with 5 to 9 years of experience in data engineering, specializing in PySpark, Python, GCP (IAM, Cloud Storage, Dataproc, BigQuery), SQL, and Airflow, and in building data pipelines that handle terabyte-scale data processing. The ideal candidate will have a strong background in designing, developing, and maintaining scalable data pipelines and architectures.

Key Responsibilities
- Design, develop, and maintain scalable data pipelines using PySpark, Python, GCP, and Airflow.
- Implement data processing workflows and ETL processes to extract, transform, and load data from various sources into data lakes and data warehouses.
- Manage and optimize data storage solutions using GCP services.
- Terabyte-scale data processing: develop and optimize PySpark code to handle terabytes of data efficiently, applying performance tuning techniques to reduce processing time and improve resource utilization.
- Data lake implementation: build a scalable data lake on GCP Cloud Storage to store and manage structured and unstructured data.
- Data quality framework: develop a data quality framework using PySpark and GCP to perform automated data validation and anomaly detection, improving data accuracy and reliability for downstream analytics.
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver high-quality data solutions.
- Perform data quality checks and validation to ensure data accuracy and consistency.
- Monitor and troubleshoot data pipelines to ensure smooth and efficient data processing.
- Stay updated with the latest industry trends and technologies in data engineering.

Preferred candidate profile
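The posting highlights a PySpark-based data quality framework for automated validation and anomaly detection. The snippet below is a minimal sketch of such checks (null rates, duplicate keys, volume anomalies) under assumed column names; it is illustrative only, not the employer's actual framework.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()

# Hypothetical curated table in a GCS-backed data lake.
df = spark.read.parquet("gs://example-lake/curated/transactions/")

total = df.count()

# Check 1: null rate on a business-critical column (assumed name: customer_id).
null_count = df.filter(F.col("customer_id").isNull()).count()
null_rate = null_count / total if total else 0.0

# Check 2: duplicate primary keys (assumed key: transaction_id).
dup_count = (
    df.groupBy("transaction_id").count().filter(F.col("count") > 1).count()
)

# Check 3: simple anomaly flag -- latest daily volume far below the average.
daily = df.groupBy(F.to_date("event_ts").alias("day")).count()
avg_volume = daily.agg(F.avg("count")).first()[0]
latest_volume = daily.orderBy(F.col("day").desc()).first()["count"]
volume_anomaly = latest_volume < 0.5 * avg_volume

print(f"null_rate={null_rate:.4f} duplicates={dup_count} volume_anomaly={volume_anomaly}")

# A real framework would persist these metrics and alert when thresholds are breached.
```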
Posted 1 month ago
5.0 - 10.0 years
9 - 19 Lacs
Pune, Chennai, Bengaluru
Hybrid
Project Role: Cloud Platform Architect
Project Role Description: Oversee application architecture and deployment in cloud platform environments, including public cloud, private cloud, and hybrid cloud. This can include cloud adoption plans, cloud application design, and cloud management and monitoring.
Must-have skills: Google Cloud Platform Architecture

Summary:
As a Cloud Platform Architect, you will be responsible for overseeing application architecture and deployment in cloud platform environments, including public cloud, private cloud, and hybrid cloud. Your typical day will involve designing cloud adoption plans, managing and monitoring cloud applications, and ensuring cloud application design meets business requirements.

Roles & Responsibilities:
- Design and implement cloud adoption plans, including public cloud, private cloud, and hybrid cloud environments.
- Oversee cloud application design, ensuring it meets business requirements and aligns with industry best practices.
- Manage and monitor cloud applications, ensuring they are secure, scalable, and highly available.
- Collaborate with cross-functional teams to ensure cloud applications are integrated with other systems and services.
- Stay up-to-date with the latest advancements in cloud technology, integrating innovative approaches for sustained competitive advantage.

Professional & Technical Skills:
- Must-have skills: Strong experience in Google Cloud Platform Architecture.
- Good-to-have skills: Experience with other cloud platforms such as AWS or Azure.
- Experience in designing and implementing cloud adoption plans.
- Strong understanding of cloud application design and architecture.
- Experience in managing and monitoring cloud applications.
- Solid grasp of cloud security, scalability, and availability best practices.
Posted 1 month ago
3.0 - 7.0 years
11 - 15 Lacs
Bengaluru
Work from Office
A Data Platform Engineer specialises in the design, build, and maintenance of cloud-based data infrastructure and platforms for data-intensive applications and services. They develop Infrastructure as Code and manage the foundational systems and tools for efficient data storage, processing, and management. This role involves architecting robust and scalable cloud data infrastructure, including selecting and implementing suitable storage solutions, data processing frameworks, and data orchestration tools. Additionally, a Data Platform Engineer ensures the continuous evolution of the data platform to meet changing data needs and leverage technological advancements, while maintaining high levels of data security, availability, and performance. They are also tasked with creating and managing processes and tools that enhance operational efficiency, including optimising data flow and ensuring seamless data integration, all of which are essential for enabling developers to build, deploy, and operate data-centric applications efficiently.

Job Description - Grade Specific
A strong grasp of the principles and practices associated with data platform engineering, particularly within cloud environments, and demonstrated proficiency in specific technical areas related to cloud-based data infrastructure, automation, and scalability.

Key responsibilities encompass:
- Community Engagement: Actively participating in the professional data platform engineering community, sharing insights, and staying up-to-date with the latest trends and best practices.
- Project Contributions: Making substantial contributions to client delivery, particularly in the design, construction, and maintenance of cloud-based data platforms and infrastructure.
- Technical Expertise: Demonstrating a sound understanding of data platform engineering principles and knowledge in areas such as cloud data storage solutions (e.g., AWS S3, Azure Data Lake), data processing frameworks (e.g., Apache Spark), and data orchestration tools.
- Independent Work and Initiative: Taking ownership of independent tasks, displaying initiative and problem-solving skills when confronted with intricate data platform engineering challenges.
- Emerging Leadership: Commencing leadership roles, which may encompass mentoring junior engineers, leading smaller project teams, or taking the lead on specific aspects of data platform projects.
Posted 1 month ago
5.0 - 10.0 years
20 - 35 Lacs
Hyderabad
Remote
Canterr is looking for talented and passionate professionals for exciting opportunities with a US-based MNC product company! You will be employed permanently with Canterr and deployed to a top-tier global tech client.

Key Responsibilities:
- Design and develop data pipelines and ETL processes to ingest, process, and store large volumes of data.
- Implement and manage big data technologies such as Kafka, Dataflow, BigQuery, CloudSQL, and Pub/Sub.
- Collaborate with stakeholders to understand data requirements and deliver high-quality data solutions.
- Monitor and troubleshoot data pipeline issues and implement solutions to prevent future occurrences.

Required Skills and Experience:
Generally, we use Google Cloud Platform (GCP) for all software deployed at Wayfair.
- Data storage and processing: BigQuery, CloudSQL, PostgreSQL, Dataproc, Pub/Sub
- Data modeling: breaking business requirements (KPIs) down into data points; building a scalable data model
- ETL tools: dbt, SQL
- Data orchestration and ETL: Dataflow, Cloud Composer
- Infrastructure and deployment: Kubernetes, Helm
- Data access and management: Looker, Terraform

Ideal Business Domain Experience:
Supply chain or warehousing experience: the project is focused on building a normalized data layer which ingests information from multiple Warehouse Management Systems (WMS) and projects it for back-office analysis.
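For the Pub/Sub and BigQuery pieces listed above, the sketch below shows one minimal way to stream messages from a subscription into a BigQuery table using the Google Cloud client libraries; the subscription, table, and message fields are hypothetical placeholders, and a production pipeline would more likely use Dataflow with flow control and dead-lettering.

```python
import json

from google.cloud import bigquery, pubsub_v1

# Hypothetical resource names for illustration.
PROJECT_ID = "my-project"
SUBSCRIPTION = "wms-events-sub"
TABLE_ID = "my-project.warehouse.wms_events"

bq_client = bigquery.Client(project=PROJECT_ID)
subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path(PROJECT_ID, SUBSCRIPTION)

def callback(message):
    """Parse a WMS event and stream it into BigQuery."""
    event = json.loads(message.data.decode("utf-8"))
    row = {
        "event_id": event.get("event_id"),
        "warehouse_id": event.get("warehouse_id"),
        "event_type": event.get("event_type"),
    }
    errors = bq_client.insert_rows_json(TABLE_ID, [row])
    if errors:
        message.nack()  # let Pub/Sub redeliver on failure
    else:
        message.ack()

# Pull messages until interrupted.
streaming_pull = subscriber.subscribe(subscription_path, callback=callback)
try:
    streaming_pull.result()
except KeyboardInterrupt:
    streaming_pull.cancel()
```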
Posted 1 month ago
4.0 - 9.0 years
0 - 2 Lacs
Chennai
Hybrid
Job Description:
We are seeking a skilled and proactive GCP Data Engineer with strong experience in Python and SQL to build and manage scalable data pipelines on Google Cloud Platform (GCP). The ideal candidate will work closely with data analysts, architects, and business teams to enable data-driven decision-making.

Key Responsibilities:
- Design and develop robust data pipelines and ETL/ELT processes using GCP services.
- Write efficient Python scripts for data processing, transformation, and automation.
- Develop complex SQL queries for data extraction, aggregation, and analysis.
- Work with tools like BigQuery, Cloud Storage, Cloud Functions, and Pub/Sub.
- Ensure high data quality, integrity, and governance across datasets.
- Optimize data workflows for performance and scalability.
- Collaborate with cross-functional teams to define and deliver data solutions.
- Monitor, troubleshoot, and resolve issues in data workflows and pipelines.

Required Skills:
- Hands-on experience with Google Cloud Platform (GCP).
- Strong programming skills in Python for data engineering tasks.
- Advanced proficiency in SQL for working with large datasets.
- Experience with BigQuery, Cloud Storage, and Cloud Functions.
- Familiarity with streaming and batch processing (e.g., Pub/Sub, Dataflow, or Dataproc).
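As a small illustration of the Python-plus-SQL work described above, the sketch below runs a parameterised aggregation query against BigQuery with the official client library; the dataset, table, and column names are placeholders.

```python
from google.cloud import bigquery

client = bigquery.Client()  # uses the project from the active credentials

# Hypothetical table/columns for illustration only.
QUERY = """
SELECT customer_id, SUM(amount) AS total_spend
FROM `my-project.sales.orders`
WHERE order_date >= @start_date
GROUP BY customer_id
ORDER BY total_spend DESC
LIMIT 10
"""

job_config = bigquery.QueryJobConfig(
    query_parameters=[
        bigquery.ScalarQueryParameter("start_date", "DATE", "2024-01-01"),
    ]
)

# Run the query and iterate over the result rows.
for row in client.query(QUERY, job_config=job_config).result():
    print(f"{row.customer_id}: {row.total_spend:.2f}")
```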
Posted 1 month ago
2.0 - 4.0 years
7 - 9 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
POSITION: Senior Data Engineer / Data Engineer
LOCATION: Bangalore/Mumbai/Kolkata/Gurugram/Hyderabad/Pune/Chennai
EXPERIENCE: 2+ Years

JOB TITLE: Senior Data Engineer / Data Engineer

OVERVIEW OF THE ROLE:
As a Data Engineer or Senior Data Engineer, you will be hands-on in architecting, building, and optimizing robust, efficient, and secure data pipelines and platforms that power business-critical analytics and applications. You will play a central role in the implementation and automation of scalable batch and streaming data workflows using modern big data and cloud technologies. Working within cross-functional teams, you will deliver well-engineered, high-quality code and data models, and drive best practices for data reliability, lineage, quality, and security.

Mandatory Skills:
- Hands-on software coding or scripting for a minimum of 3 years
- Experience in product management for at least 2 years
- Stakeholder management experience for at least 3 years
- Experience in one of the GCP, AWS, or Azure cloud platforms

Key Responsibilities:
- Design, build, and optimize scalable data pipelines and ETL/ELT workflows using Spark (Scala/Python), SQL, and orchestration tools (e.g., Apache Airflow, Prefect, Luigi).
- Implement efficient solutions for high-volume, batch, real-time streaming, and event-driven data processing, leveraging best-in-class patterns and frameworks.
- Build and maintain data warehouse and lakehouse architectures (e.g., Snowflake, Databricks, Delta Lake, BigQuery, Redshift) to support analytics, data science, and BI workloads.
- Develop, automate, and monitor Airflow DAGs/jobs on cloud or Kubernetes, following robust deployment and operational practices (CI/CD, containerization, infrastructure as code).
- Write performant, production-grade SQL for complex data aggregation, transformation, and analytics tasks.
- Ensure data quality, consistency, and governance across the stack, implementing processes for validation, cleansing, anomaly detection, and reconciliation.
- Collaborate with Data Scientists, Analysts, and DevOps engineers to ingest, structure, and expose structured, semi-structured, and unstructured data for diverse use cases.
- Contribute to data modeling, schema design, and data partitioning strategies, and ensure adherence to best practices for performance and cost optimization.
- Implement, document, and extend data lineage, cataloging, and observability through tools such as AWS Glue, Azure Purview, Amundsen, or open-source technologies.
- Apply and enforce data security, privacy, and compliance requirements (e.g., access control, data masking, retention policies, GDPR/CCPA).
- Take ownership of the end-to-end data pipeline lifecycle: design, development, code reviews, testing, deployment, operational monitoring, and maintenance/troubleshooting.
- Contribute to frameworks, reusable modules, and automation to improve development efficiency and maintainability of the codebase.
- Stay abreast of industry trends and emerging technologies, participating in code reviews, technical discussions, and peer mentoring as needed.

Skills & Experience:
- Proficiency with Spark (Python or Scala), SQL, and data pipeline orchestration (Airflow, Prefect, Luigi, or similar).
- Experience with cloud data ecosystems (AWS, GCP, Azure) and cloud-native services for data processing (Glue, Dataflow, Dataproc, EMR, HDInsight, Synapse, etc.).
- Hands-on development skills in at least one programming language (Python, Scala, or Java preferred); solid knowledge of software engineering best practices (version control, testing, modularity).
- Deep understanding of batch and streaming architectures (Kafka, Kinesis, Pub/Sub, Flink, Structured Streaming, Spark Streaming).
- Expertise in data warehouse/lakehouse solutions (Snowflake, Databricks, Delta Lake, BigQuery, Redshift, Synapse) and storage formats (Parquet, ORC, Delta, Iceberg, Avro).
- Strong SQL development skills for ETL, analytics, and performance optimization.
- Familiarity with Kubernetes (K8s), containerization (Docker), and deploying data pipelines in distributed/cloud-native environments.
- Experience with data quality frameworks (Great Expectations, Deequ, or custom validation), monitoring/observability tools, and automated testing.
- Working knowledge of data modeling (star/snowflake, normalized, denormalized) and metadata/catalog management.
- Understanding of data security, privacy, and regulatory compliance (access management, PII masking, auditing, GDPR/CCPA/HIPAA).
- Familiarity with BI or visualization tools (PowerBI, Tableau, Looker, etc.) is an advantage but not core.
- Previous experience with data migrations, modernization, or refactoring legacy ETL processes to modern cloud architectures is a strong plus.
- Bonus: exposure to open-source data tools (dbt, Delta Lake, Apache Iceberg, Amundsen, Great Expectations, etc.) and knowledge of DevOps/MLOps processes.

Professional Attributes:
- Strong analytical and problem-solving skills; attention to detail and commitment to code quality and documentation.
- Ability to communicate technical designs and issues effectively with team members and stakeholders.
- Proven self-starter, fast learner, and collaborative team player who thrives in dynamic, fast-paced environments.
- Passion for mentoring, sharing knowledge, and raising the technical bar for data engineering practices.

Desirable Experience:
- Contributions to open-source data engineering/tools communities.
- Implementing data cataloging, stewardship, and data democratization initiatives.
- Hands-on work with DataOps/DevOps pipelines for code and data.
- Knowledge of ML pipeline integration (feature stores, model serving, lineage/monitoring integration) is beneficial.

EDUCATIONAL QUALIFICATIONS:
- Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field (or equivalent experience).
- Certifications in cloud platforms (AWS, GCP, Azure) and/or data engineering (AWS Data Analytics, GCP Data Engineer, Databricks).
- Experience working in an Agile environment with exposure to CI/CD, Git, Jira, Confluence, and code review processes.
- Prior work in highly regulated or large-scale enterprise data environments (finance, healthcare, or similar) is a plus.
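The skills list above calls out streaming architectures (Kafka, Structured Streaming). Below is a minimal, illustrative Spark Structured Streaming sketch that reads a Kafka topic and appends micro-batches to a Parquet path; the topic, brokers, schema, and paths are assumptions, and the Kafka connector must be available on the Spark classpath.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("kafka-events-stream").getOrCreate()

# Assumed message schema for the hypothetical "orders" topic.
schema = StructType([
    StructField("order_id", StringType()),
    StructField("status", StringType()),
    StructField("updated_at", TimestampType()),
])

# Read the Kafka topic as an unbounded stream.
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")
    .option("subscribe", "orders")
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers bytes; decode the value column and apply the schema.
orders = (
    raw.select(F.from_json(F.col("value").cast("string"), schema).alias("o"))
       .select("o.*")
)

# Append each micro-batch to a lake path with checkpointing for exactly-once sinks.
query = (
    orders.writeStream.format("parquet")
    .option("path", "gs://example-lake/streams/orders/")
    .option("checkpointLocation", "gs://example-lake/checkpoints/orders/")
    .outputMode("append")
    .start()
)

query.awaitTermination()
```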
Posted 1 month ago
4.0 - 8.0 years
25 - 30 Lacs
Pune
Hybrid
So, what's the role all about?
As a Data Engineer, you will be responsible for designing, building, and maintaining large-scale data systems, as well as working with cross-functional teams to ensure efficient data processing and integration. You will leverage your knowledge of Apache Spark to create robust ETL processes, optimize data workflows, and manage high volumes of structured and unstructured data.

How will you make an impact?
- Design, implement, and maintain data pipelines using Apache Spark for processing large datasets.
- Work with data engineering teams to optimize data workflows for performance and scalability.
- Integrate data from various sources, ensuring clean, reliable, and high-quality data for analysis.
- Develop and maintain data models, databases, and data lakes.
- Build and manage scalable ETL solutions to support business intelligence and data science initiatives.
- Monitor and troubleshoot data processing jobs, ensuring they run efficiently and effectively.
- Collaborate with data scientists, analysts, and other stakeholders to understand business needs and deliver data solutions.
- Implement data security best practices to protect sensitive information.
- Maintain a high level of data quality and ensure timely delivery of data to end users.
- Continuously evaluate new technologies and frameworks to improve data engineering processes.

Have you got what it takes?
- 8-11 years of experience as a Data Engineer, with a strong focus on Apache Spark and big data technologies.
- Expertise in Spark SQL, DataFrames, and RDDs for data processing and analysis.
- Proficiency in programming languages such as Python, Scala, or Java for data engineering tasks.
- Hands-on experience with cloud platforms like AWS, specifically with data processing and storage services (e.g., S3, BigQuery, Redshift, Databricks).
- Experience with ETL frameworks and tools such as Apache Kafka, Airflow, or NiFi.
- Strong knowledge of data warehousing concepts and technologies (e.g., Redshift, Snowflake, BigQuery).
- Familiarity with containerization technologies like Docker and Kubernetes.
- Knowledge of SQL and relational databases, with the ability to design and query databases effectively.
- Solid understanding of distributed computing, data modeling, and data architecture principles.
- Strong problem-solving skills and the ability to work with large and complex datasets.
- Excellent communication and collaboration skills to work effectively with cross-functional teams.

You will have an advantage if you also have:
- Knowledge of SQL and relational databases, with the ability to design and query databases effectively.
- Solid understanding of distributed computing, data modeling, and data architecture principles.
- Strong problem-solving skills and the ability to work with large and complex datasets.

What's in it for you?
Join an ever-growing, market-disrupting, global company where the teams, comprised of the best of the best, work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NiCE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NiCEr!

Enjoy NiCE-FLEX!
At NiCE, we work according to the NiCE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week.
Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere. Requisition ID: 7235 Reporting into: Tech Manager Role Type: Individual Contributor
Posted 1 month ago
3.0 - 5.0 years
5 - 7 Lacs
Mumbai
Work from Office
The Google Cloud Infrastructure Support Engineer will be responsible for ensuring the reliability, performance, and security of our Google Cloud Platform (GCP) infrastructure, working closely with cross-functional teams to troubleshoot issues, optimize infrastructure, and implement best practices for cloud architecture.
- Experience with Terraform for deploying and managing infrastructure templates.
- Administer BigQuery environments, including managing datasets and access controls and optimizing query performance.
- Familiarity with Vertex AI for monitoring and managing machine learning model deployments.
- Knowledge of GCP's Kubernetes Engine and its integration with the cloud ecosystem.
- Understanding of cloud security best practices and experience with implementing security measures.
- Knowledge of setting up and managing data clean rooms within BigQuery.
- Understanding of the Analytics Hub platform and how it integrates with data clean rooms to facilitate sensitive data-sharing use cases.
- Knowledge of Dataplex and how it integrates with other Google Cloud services such as BigQuery, Dataproc Metastore, and Data Catalog.

Key Responsibilities:
- Provide technical support for our Google Cloud Platform infrastructure, including compute, storage, networking, and security services.
- Monitor system performance and proactively identify and resolve issues to ensure maximum uptime and reliability.
- Collaborate with cross-functional teams to design, implement, and optimize cloud infrastructure solutions.
- Automate repetitive tasks and develop scripts to streamline operations and improve efficiency.
- Document infrastructure configurations, processes, and procedures.

Qualifications:
Required:
- Strong understanding of GCP services, including Compute Engine, Kubernetes Engine, Cloud Storage, VPC networking, and IAM.
- Experience with BigQuery and Vertex AI.
- Proficiency in scripting languages such as Python, Bash, or PowerShell.
- Experience with infrastructure-as-code tools such as Terraform or Google Deployment Manager.
- Strong communication and collaboration skills.
- Bachelor's degree in Computer Science or a related discipline, or the equivalent in education and work experience.

Preferred:
- Google Cloud certification (e.g., Google Cloud Certified - Professional Cloud Architect, Google Cloud Certified - Professional Cloud DevOps Engineer).
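Given the emphasis above on scripting and on administering BigQuery datasets and access controls, here is a small illustrative Python script that lists each dataset and its access entries using the BigQuery client library; the project ID is a placeholder, and real administration would typically also be captured in Terraform rather than ad-hoc scripts.

```python
from google.cloud import bigquery

# Hypothetical project; in practice this comes from configuration or credentials.
client = bigquery.Client(project="my-project")

# Walk every dataset in the project and print who has access to it.
for dataset_item in client.list_datasets():
    dataset = client.get_dataset(dataset_item.reference)
    print(f"Dataset: {dataset.dataset_id}")
    for entry in dataset.access_entries:
        # entry.role is e.g. READER/WRITER/OWNER; entity_id is a user, group, or special group.
        print(f"  {entry.role or 'n/a'} -> {entry.entity_type}:{entry.entity_id}")
```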
Posted 1 month ago
3.0 - 7.0 years
5 - 10 Lacs
Pune
Work from Office
This role is for an Engineer responsible for the design, development, and unit testing of software applications. The candidate is expected to ensure that good-quality, maintainable, scalable, and high-performing software applications are delivered to users in an Agile development environment. The candidate should come from a strong technological background and have good working experience in Python and Spark technology. They should be hands-on and able to work independently, requiring minimal technical/tool guidance, and should be able to technically guide and mentor junior resources in the team. As a developer, you will bring extensive design and development skills to reinforce the group of developers within the team. The candidate will make extensive use of Continuous Integration tools and practices in the context of Deutsche Bank's digitalization journey.

Your key responsibilities
- Design and discuss your own solutions for addressing user stories and tasks.
- Develop, unit-test, integrate, deploy, maintain, and improve software.
- Perform peer code review.
- Actively participate in sprint activities and ceremonies, e.g., daily stand-up/scrum meeting, sprint planning, retrospectives, etc.
- Apply continuous integration best practices in general (SCM, build automation, unit testing, dependency management).
- Collaborate with other team members to achieve the sprint objectives.
- Report progress and update Agile team management tools (JIRA/Confluence).
- Manage individual task priorities and deliverables.
- Be responsible for the quality of the solutions you provide.
- Contribute to planning and continuous improvement activities, and support the PO, ITAO, Developers, and Scrum Master.

Your skills and experience
- Engineer with good development experience on a Big Data platform for at least 5 years.
- Hands-on experience in Spark (Hive, Impala).
- Hands-on experience in the Python programming language.
- Preferably, experience in BigQuery, Dataproc, Composer, Terraform, GKE, Cloud SQL, and Cloud Functions.
- Experience in the set-up, maintenance, and ongoing development of continuous build/integration infrastructure as part of DevOps; create and maintain fully automated CI build processes and write build and deployment scripts.
- Experience with development platforms: OpenShift/Kubernetes/Docker configuration and deployment with DevOps tools, e.g., GIT, TeamCity, Maven, SONAR.
- Good knowledge of the core SDLC processes and tools such as HP ALM, Jira, Service Now.
- Strong analytical skills.
- Proficient communication skills; fluent in English (written/verbal).
- Ability to work in virtual teams and in matrixed organizations.
- Excellent team player, open-minded and willing to learn business and technology.
- Keeps pace with technical innovation and understands the relevant business area.
- Ability to share information and transfer knowledge and expertise to team members.
Posted 1 month ago
4.0 - 8.0 years
10 - 19 Lacs
Chennai
Hybrid
Greetings from Getronics! We have permanent opportunities for GCP Data Engineers in Chennai.
Hope you are doing well! This is Abirami from the Getronics Talent Acquisition team. We have multiple opportunities for GCP Data Engineers for our automotive client at the Chennai Sholinganallur location. Please find the company profile and job description below. If interested, please share your updated resume, a recent professional photograph, and Aadhaar proof at the earliest to abirami.rsk@getronics.com.
Company: Getronics (permanent role)
Client: Automobile industry
Experience required: 4+ years in IT and a minimum of 3+ years in GCP data engineering
Location: Chennai (Elcot - Sholinganallur)
Work mode: Hybrid
Position description: We are currently seeking a seasoned GCP Cloud Data Engineer with 3 to 5 years of experience in leading/implementing GCP data projects, preferably implementing a complete data-centric model. This position is to design and deploy a data-centric architecture in GCP for a Materials Management platform that would exchange data with multiple applications, modern and legacy, across Product Development, Manufacturing, Finance, Purchasing, N-tier Supply Chain, and Supplier Collaboration.
Design and implement data-centric solutions on Google Cloud Platform (GCP) using GCP tools such as Storage Transfer Service, Cloud Data Fusion, Pub/Sub, Dataflow, Cloud compression, Cloud Scheduler, gsutil, FTP/SFTP, Dataproc, BigTable, etc.
• Build ETL pipelines to ingest data from heterogeneous sources into our system
• Develop data processing pipelines using programming languages like Java and Python to extract, transform, and load (ETL) data
• Create and maintain data models, ensuring efficient storage, retrieval, and analysis of large datasets
• Deploy and manage databases, both SQL and NoSQL, such as Bigtable, Firestore, or Cloud SQL, based on project requirements
Skills required:
- GCP Data Engineer, Hadoop, Spark/PySpark, Google Cloud Platform services: BigQuery, Dataflow, Pub/Sub, BigTable, Data Fusion, Dataproc, Cloud Composer, Cloud SQL, Compute Engine, Cloud Functions, and App Engine.
- 4+ years of professional experience in data engineering, data product development, and software product launches.
- 3+ years of cloud data/software engineering experience building scalable, reliable, and cost-effective production batch and streaming data pipelines using: data warehouses like Google BigQuery; workflow orchestration tools like Airflow; relational database management systems like MySQL, PostgreSQL, and SQL Server; real-time data streaming platforms like Apache Kafka and GCP Pub/Sub.
Education required: Any Bachelor's degree. Candidates should be willing to take a GCP assessment (1-hour online video test).
LOOKING FOR IMMEDIATE TO 30 DAYS NOTICE CANDIDATES ONLY.
Regards,
Abirami
Getronics Recruitment team
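Purely illustrative and not part of the posting: a minimal Apache Beam (Python) sketch of the kind of ingestion pipeline described above, reading JSON messages from Pub/Sub and writing them to BigQuery; the subscription, table, and schema are hypothetical.

```python
# Minimal Apache Beam sketch: stream JSON events from Pub/Sub into BigQuery (hypothetical resources).
import json
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(streaming=True)  # add --runner=DataflowRunner and project flags to run on Dataflow

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadFromPubSub" >> beam.io.ReadFromPubSub(
            subscription="projects/example-project/subscriptions/materials-events"  # hypothetical
        )
        | "ParseJson" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
        | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
            table="example-project:materials.events",  # hypothetical table
            schema="event_id:STRING,part_no:STRING,qty:INTEGER,event_ts:TIMESTAMP",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )
```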
Posted 1 month ago
5.0 - 10.0 years
16 - 31 Lacs
Pune, Bengaluru
Work from Office
GCP Data Lead
Experience - 4-8 years
Location - Pune / Bengaluru
Required past experience:
5+ years of overall experience in architecting, developing, testing, and implementing data platform projects using GCP components (e.g., PySpark, SQL, and the GCP ecosystem: BigQuery, Cloud Composer, Dataproc).
Good understanding of data structures.
Worked with large datasets and solved difficult analytical problems.
Experience working with Git for source code management.
Worked with structured and unstructured data.
E2E data engineering and lifecycle (including non-functional requirements and operations) management.
Worked with client teams to design and implement modern, scalable data solutions using a range of new and emerging technologies from the Google Cloud Platform.
Automating manual processes to speed up delivery.
Good understanding of data pipelines (batch and streaming) and data governance.
Experience in code deployment from lower environments to production.
Good communication skills to understand business requirements.
Required skills and abilities:
Mandatory skills - BigQuery, Composer, Python, GCP fundamentals.
Secondary skills - PySpark, SQL, GCP ecosystem (BigQuery, Cloud Composer, Dataproc).
Knowledge of ETL migration from on-premises to GCP Cloud.
SQL performance tuning.
Batch/streaming data processing.
Fundamentals of Kafka and Pub/Sub to handle real-time data feeds.
Good to have - certifications in any of the following: GCP Professional Cloud Architect, GCP Professional Data Engineer.
Ability to communicate with customers, developers, and other stakeholders.
Mentor and guide team members.
Good presentation skills.
Strong team player.
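As a purely illustrative sketch (not part of the posting) of the Composer/BigQuery batch orchestration mentioned above, a minimal Airflow DAG that runs a scheduled BigQuery transformation; the DAG ID, tables, and SQL are hypothetical.

```python
# Minimal Cloud Composer (Airflow) sketch: schedule a daily BigQuery rollup (hypothetical names).
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

with DAG(
    dag_id="daily_sales_rollup",        # hypothetical DAG ID
    schedule_interval="@daily",
    start_date=datetime(2024, 1, 1),
    catchup=False,
) as dag:
    rollup = BigQueryInsertJobOperator(
        task_id="rollup_sales",
        configuration={
            "query": {
                "query": (
                    "CREATE OR REPLACE TABLE reporting.daily_sales AS "
                    "SELECT order_date, SUM(amount) AS total "
                    "FROM staging.orders GROUP BY order_date"
                ),
                "useLegacySql": False,
            }
        },
        location="US",                  # hypothetical dataset location
    )
```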
Posted 1 month ago
5.0 - 6.0 years
55 - 60 Lacs
Pune
Work from Office
At Capgemini Invent, we believe difference drives change. As inventive transformation consultants, we blend our strategic, creative, and scientific capabilities, collaborating closely with clients to deliver cutting-edge solutions. Join us to drive transformation tailored to our clients' challenges of today and tomorrow. Informed and validated by science and data. Superpowered by creativity and design. All underpinned by technology created with purpose.
Data engineers are responsible for building reliable and scalable data infrastructure that enables organizations to derive meaningful insights, make data-driven decisions, and unlock the value of their data assets. - Grade Specific: The role supports the team in building and maintaining data infrastructure and systems within an organization.
Skills (competencies): Ab Initio, Agile (Software Development Framework), Apache Hadoop, AWS Airflow, AWS Athena, AWS CodePipeline, AWS EFS, AWS EMR, AWS Redshift, AWS S3, Azure ADLS Gen2, Azure Data Factory, Azure Data Lake Storage, Azure Databricks, Azure Event Hub, Azure Stream Analytics, Azure Synapse, Bitbucket, Change Management, Client Centricity, Collaboration, Continuous Integration and Continuous Delivery (CI/CD), Data Architecture Patterns, Data Format Analysis, Data Governance, Data Modeling, Data Validation, Data Vault Modeling, Database Schema Design, Decision-Making, DevOps, Dimensional Modeling, GCP Big Table, GCP BigQuery, GCP Cloud Storage, GCP DataFlow, GCP DataProc, Git, Google Big Table, Google Data Proc, Greenplum, HQL, IBM DataStage, IBM DB2, Industry Standard Data Modeling (FSLDM), Industry Standard Data Modeling (IBM FSDM), Influencing, Informatica IICS, Inmon Methodology, JavaScript, Jenkins, Kimball, Linux - Red Hat, Negotiation, Netezza, NewSQL, Oracle Exadata, Performance Tuning, Perl, Platform Update Management, Project Management, PySpark, Python, R, RDD Optimization, CentOS, SAS, Scala, Spark, Shell Script, Snowflake, Spark Code Optimization, SQL, Stakeholder Management, Sun Solaris, Synapse, Talend, Teradata, Time Management, Ubuntu, Vendor Management.
Capgemini is a global business and technology transformation partner, helping organizations accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fuelled by its market-leading capabilities in AI, cloud and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.
Posted 1 month ago