
992 Dataflow Jobs - Page 22

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Kyndryl – Software Engineering, IT, Data Science – Bengaluru, Karnataka, India / Chennai, Tamil Nadu, India – Posted on Jun 12, 2025

Who We Are At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.

The Role Are you ready to join a team that is passionate about solving complex business problems with cutting-edge technology? Kyndryl is seeking a talented Data Architect who will take charge of all things data and information, transforming them into remarkable solutions that drive our customers' success. Get ready to unleash your creative prowess and shape the future of data management. As a Data Architect with Kyndryl, you will play a vital role in managing all aspects of data and information, from understanding business requirements to translating them into data models, schema designs, and data architectures. Your expertise will be critical in guiding governance across the full IT data lifecycle, including acquisition, transformation, classification, storage, presentation, distribution, security, privacy, and archiving, to ensure data is accurate, complete and secure. You will work closely with our customers to design solutions and data architectures that address their unique business problems, considering their needs and constraints while applying your industry knowledge and expertise. You will have the opportunity to work with diverse technologies, like databases (relational, hierarchical and object-oriented), file systems and storage management, document imaging, knowledge and content management, taxonomies and business intelligence – all relevant for designing business-driven IT solutions that meet data requirements and incorporate cloud solutions for different types of storage needs. As a Data Architect, you will have the chance to develop and design centralized or distributed systems that both address user requirements and perform efficiently and effectively. You will ensure the viability of proposed solutions by conducting solution assurance assessments and work closely with the project team and key stakeholders to ensure that the final solution meets all customer requirements and expectations. At Kyndryl, you'll be part of a dynamic, forward-thinking team where creativity knows no bounds. Together, we'll shape the future of data architecture and revolutionize the way businesses thrive. Apply now and take the first step towards an exciting and rewarding career with Kyndryl.

Your Future at Kyndryl Every position at Kyndryl offers a way forward to grow your career, from a Junior Architect to Principal Architect – we have opportunities that you won't find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms.

Who You Are You're good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset; keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you're open and borderless – naturally inclusive in how you work with others.

Required Technical And Professional Expertise 10+ years of experience in data modeling, database design, and data management best practices.
Proven expertise in architecting data and AI platforms using GCP. Hands-on experience with BigQuery, Dataflow, Vertex AI, and Gemini. Proficiency in Python and SQL for data processing, transformation, and automation. Deep understanding of Generative AI, Agentic AI, and related architectures. Experience implementing GenAI-based applications using tools such as LangChain and LangGraph. Solid background in building scalable and secure data pipelines and storage systems in cloud environments.

Preferred Technical And Professional Experience Experience with cloud-based data platforms, integration, and governance frameworks across Azure, AWS, or GCP. Knowledge of machine learning lifecycle management, model monitoring, and MLOps best practices. Bachelor's or Master's degree in Computer Science, Information Systems, Data Engineering, or a related field.

Being You Diversity is a whole lot more than what we look like or where we come from; it's how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we're not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That's the Kyndryl Way.

What You Can Expect With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed.

Get Referred! If you know someone who works at Kyndryl, when asked 'How Did You Hear About Us' during the application process, select 'Employee Referral' and enter your contact's Kyndryl email address.
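Since the role above centers on GCP data pipelines with Dataflow and BigQuery, here is a minimal, purely illustrative sketch (not Kyndryl's code) of the kind of streaming Beam pipeline such work typically involves; the project, topic, table, and schema names are hypothetical placeholders.

```python
# Illustrative sketch: stream JSON events from Pub/Sub into BigQuery via Dataflow.
# All resource names below are assumptions for the example.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def run():
    options = PipelineOptions(
        streaming=True,
        project="example-project",            # assumed project id
        runner="DataflowRunner",              # switch to "DirectRunner" for local tests
        region="asia-south1",
        temp_location="gs://example-bucket/tmp",
    )
    with beam.Pipeline(options=options) as p:
        (
            p
            | "ReadEvents" >> beam.io.ReadFromPubSub(
                topic="projects/example-project/topics/events")
            | "ParseJson" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
            | "WriteToBQ" >> beam.io.WriteToBigQuery(
                "example-project:analytics.events",
                schema="event_id:STRING,user_id:STRING,ts:TIMESTAMP",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
                create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
            )
        )


if __name__ == "__main__":
    run()
```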

Posted 1 month ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About the Job The Director Data Engineering will lead the development and implementation of a comprehensive data strategy that aligns with the organization's business goals and enables data-driven decision making.

Roles and Responsibilities Build and manage a team of talented data managers and engineers with the ability to not only keep up with, but also pioneer in, this space Collaborate with and influence leadership to directly impact company strategy and direction Develop new techniques and data pipelines that will enable various insights for internal and external customers Develop deep partnerships with client implementation teams, engineering and product teams to deliver on major cross-functional measurements and testing Communicate effectively to all levels of the organization, including executives Provide success in partnering teams with dramatically varying backgrounds, from the highly technical to the highly creative Design a data engineering roadmap and execute the vision behind it Hire, lead, and mentor a world-class data team Partner with other business areas to co-author and co-drive strategies on our shared roadmap Oversee the movement of large amounts of data into our data lake Establish a customer-centric approach and synthesize customer needs Own end-to-end pipelines and destinations for the transfer and storage of all data Manage 3rd-party resources and critical data integration vendors Promote a culture that drives autonomy, responsibility, perfection and mastery. Maintain and optimize software and cloud expenses to meet financial goals of the company Provide technical leadership to the team in design and architecture of data products and drive change across process, practices, and technology within the organization Work with engineering managers and functional leads to set direction and ambitious goals for the Engineering department Ensure data quality, security, and accessibility across the organization

Skills You Will Need 10+ years of experience in data engineering 5+ years of experience leading data teams of 30+ resources, including selecting talent and planning/allocating resources across multiple geographies and functions. 5+ years of experience with GCP tools and technologies, specifically Google BigQuery, Google Cloud Composer, Dataflow, Dataform, etc. Experience creating large-scale data engineering pipelines, data-based decision-making and quantitative analysis tools and software Hands-on experience with code version control systems (Git) Experience with CI/CD, data architectures, pipelines, quality, and code management Experience with complex, high-volume, multi-dimensional data, based on unstructured, structured, and streaming datasets Experience with SQL and NoSQL databases Experience creating, testing, and supporting production software and systems Proven track record of identifying and resolving performance bottlenecks for production systems Experience designing and developing data lake, data warehouse, ETL and task orchestrating systems Strong leadership, communication, time management and interpersonal skills Proven architectural skills in data engineering Experience leading teams developing production-grade data pipelines on large datasets Experience designing a large data lake and lakehouse, managing data flows that integrate information from various sources into a common pool, and implementing data pipelines based on the ETL model Experience with common data languages (e.g. Python, Scala) and data warehouses (e.g. Redshift, BigQuery, Snowflake, Databricks) Extensive experience with cloud tools and technologies - GCP preferred Experience managing real-time data pipelines Successful track record and demonstrated thought leadership, cross-functional influence and partnership within an agile/waterfall development environment. Experience in regulated industries or with compliance frameworks (e.g., SOC 2, ISO 27001).

Nice to have: HR services industry experience Experience in data science, including predictive modeling Experience leading teams across multiple geographies

Posted 1 month ago

Apply

5.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

What You'll Do Design, develop, and operate high scale applications across the full engineering stack. Design, develop, test, deploy, maintain, and improve software. Apply modern software development practices (serverless computing, microservices architecture, CI/CD, infrastructure-as-code, etc.) Work across teams to integrate our systems with existing internal systems, Data Fabric, CSA Toolset. Participate in technology roadmap and architecture discussions to turn business requirements and vision into reality. Participate in a tight-knit, globally distributed engineering team. Triage product or system issues and debug/track/resolve by analyzing the sources of issues and the impact on network or service operations and quality. Research, create, and develop software applications to extend and improve on Equifax Solutions. Manage sole project priorities, deadlines, and deliverables. Collaborate on scalability issues involving access to data and information. Actively participate in Sprint planning, Sprint Retrospectives, and other team activities.

What Experience You Need Bachelor's degree or equivalent experience 5+ years of software engineering experience 5+ years experience writing, debugging, and troubleshooting code in mainstream Java, SpringBoot, TypeScript/JavaScript, HTML, CSS 5+ years experience with Cloud technology: GCP, AWS, or Azure 5+ years experience designing and developing cloud-native solutions 5+ years experience designing and developing microservices using Java, Spring Framework, GCP SDKs, GKE/Kubernetes 5+ years experience deploying and releasing software using Jenkins CI/CD pipelines, understanding infrastructure-as-code concepts, Helm Charts, and Terraform constructs Big Data Technologies: Spark/Scala/Hadoop

What could set you apart Experience designing and developing big data processing solutions using DataProc, Dataflow/Apache Beam, Bigtable, BigQuery, PubSub, GCS, Composer/Airflow, and others Cloud certification, especially in GCP Self-starter who identifies/responds to priority shifts with minimal supervision. You have excellent leadership and motivational skills You have an inquisitive and innovative mindset with a proven ability to recognize opportunities to create distinctive value You can successfully evaluate workload to drive efficiency

Posted 1 month ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Reporting to the A/NZ DSE Chapter Manager India PEC within Decision Sciences & Engineering, this role will own and be responsible for the data & analytic engineering chapter in India PEC. The Data Engineer is an essential part of the business that enables the team to support the ongoing acquisition and internal purposing of data, through to the fulfilment of products, insights and systems. As a Data Engineer, you will be responsible for working with our internal customers to ensure that data and systems are being designed and built to move and manipulate data in a scalable, reusable and efficient manner to suit the environment, project, security and requirements.

What You'll Do Design, architect, and implement scalable and secure data pipelines on GCP, utilizing services like Dataflow, Pub/Sub, and Cloud Storage. Develop and maintain data models, ensuring data quality, consistency, and accessibility for various internal stakeholders. Automate data processes and workflows using scripting languages like Python, leveraging technologies like Spark and Airflow. Monitor and troubleshoot data pipelines, identifying and resolving performance issues proactively. Stay up-to-date with the latest trends and advancements in GCP and related technologies, actively proposing and evaluating new solutions. Implement data governance best practices, including data security, access control, and lineage tracking. Lead security initiatives, design and implement security architecture. Lead data quality initiatives, design and implement monitoring dashboards. Mentor and guide junior data engineers, sharing knowledge and best practices to foster a high-performing team. The role requires a solid educational foundation and the ability to develop a strategic vision and roadmap for D&A's transition to the cloud while balancing delivery of near-term results that are aligned with execution.

What Experience You Need BS degree in a STEM major or equivalent discipline; Master's Degree strongly preferred 8+ years of experience as a data engineer or related role, with experience demonstrating leadership capabilities Cloud certification strongly preferred Expert-level skills using programming languages such as Python or SQL (BigQuery) and advanced-level experience with scripting languages. Demonstrated proficiency in all Google Cloud Services Experience building and maintaining complex data pipelines, troubleshooting complex issues, and transforming and loading data into a data pipeline so that the content can be digested and used for future projects; proficiency in Airflow strongly desired Experience designing and implementing advanced to complex data models and experience enabling advanced optimization to improve performance Experience leading a team, with Git expertise strongly preferred Hands-on experience with Agile methodologies Working knowledge of CI/CD

What could set you apart: Self-starter who identifies/responds to priority shifts with minimal supervision. Strong communication and presentation skills Strong leadership qualities A well-balanced view of resource management, thinking creatively and effectively to deploy the team whilst building skills for the future Skilled in internal networking, negotiating and proactively developing individuals and teams to be the best they can be Strong communicator & presenter, bringing everyone on the journey.
Knowledge of Big Data technology and tools with the ability to share ideas among a collaborative team and drive the team based on technical expertise and learning, sharing best practices Excellent communication skills to engage with senior management, internal customers and product management Sound understanding of regulations and security requirements governing access to data in Big Data systems Sound understanding of insight delivery systems for batch and online Should be able to run Agile Scrum-based projects Demonstrated problem-solving skills and the ability to resolve conflicts Experience creating and maintaining product and software roadmaps Experience overseeing yearly as well as product/project budgets Working in a highly regulated environment
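As an illustration of the Airflow orchestration and BigQuery skills this posting asks for, here is a minimal sketch of a daily DAG that runs a BigQuery transformation. It assumes the Airflow Google provider package is installed; the project, dataset, and table names are invented for the example.

```python
# Illustrative sketch: a daily Airflow DAG that rolls up event counts in BigQuery.
# Resource names are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

with DAG(
    dag_id="daily_events_rollup",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    rollup = BigQueryInsertJobOperator(
        task_id="rollup_events",
        configuration={
            "query": {
                "query": """
                    SELECT DATE(ts) AS day, COUNT(*) AS events
                    FROM `example-project.analytics.events`
                    WHERE DATE(ts) = '{{ ds }}'   -- Airflow-templated run date
                    GROUP BY day
                """,
                "destinationTable": {
                    "projectId": "example-project",
                    "datasetId": "analytics",
                    "tableId": "daily_events",
                },
                "writeDisposition": "WRITE_APPEND",
                "useLegacySql": False,
            }
        },
    )
```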

Posted 1 month ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Position Overview Job Title: Associate - Production Support Engineer Location: Bangalore, India

Role Description You will be operating within Corporate Bank Production as an Associate, Production Support Engineer in the Corporate Banking subdivisions. You will be accountable for driving a culture of proactive continual improvement in the Production environment through application and user-request support, troubleshooting and resolving errors in production, automation of manual work, monitoring improvements and platform hygiene, and supporting the resolution of issues and conflicts and preparing reports and meetings. The candidate should have experience with all relevant tools used in the Service Operations environment and specialist expertise in one or more technical domains, and should ensure that all associated Service Operations stakeholders are provided with an optimum level of service in line with Service Level Agreements (SLAs) / Operating Level Agreements (OLAs). Ensure all the BAU support queries from business are handled on priority and within agreed SLA, and also ensure all application stability issues are well taken care of. Support the resolution of incidents and problems within the team. Assist with the resolution of complex incidents. Ensure that the right problem-solving techniques and processes are applied. Embrace a Continuous Service Improvement approach to resolve IT failings, drive efficiencies and remove repetition to streamline support activities, reduce risk, and improve system availability. Be responsible for your own engineering delivery plus, using data and analytics, drive a reduction in technical debt across the production environment with development and infrastructure teams. Act as a Production Engineering role model to enhance the technical capability of the Production Support teams to create a future operating model embedded with engineering culture. Deutsche Bank's Corporate Bank division is a leading provider of cash management, trade finance and securities finance. We complete green-field projects that deliver the best Corporate Bank - Securities Services products in the world. Our team is diverse, international, and driven by shared focus on clean code and valued delivery. At every level, agile minds are rewarded with competitive pay, support, and opportunities to excel. You will work as part of a cross-functional agile delivery team. You will bring an innovative approach to software development, focusing on using the latest technologies and practices, as part of a relentless focus on business value. You will be someone who sees engineering as a team activity, with a predisposition to open code, open discussion and creating a supportive, collaborative environment. You will be ready to contribute to all stages of software delivery, from initial analysis right through to production support.

What We'll Offer You As part of our flexible scheme, here are just some of the benefits that you'll enjoy Best in class leave policy Gender neutral parental leaves 100% reimbursement under childcare assistance benefit (gender neutral) Sponsorship for industry-relevant certifications and education Employee Assistance Program for you and your family members Comprehensive Hospitalization Insurance for you and your dependents Accident and Term Life Insurance Complimentary health screening for 35 yrs. and above

Your Key Responsibilities Lead by example to drive a culture of proactive continual improvement into the Production environment through automation of manual work, monitoring improvements and platform hygiene. Carry out technical analysis of the Production platform to identify and remediate performance and resiliency issues. Engage in the Software Development Lifecycle (SDLC) to enhance Production Standards and controls. Update the RUN Book and KEDB as and when required Participate in all BCP and component failure tests based on the run books Understand the flow of data through the application infrastructure; it is critical to understand the dataflow to best provide operational support Event monitoring and management via a 24x7 workbench that is both monitoring and regularly probing the service environment and acting on instruction of the run book. Drive knowledge management across the supported applications and ensure full compliance Work with team members to identify areas of focus, where training may improve team performance, and improve incident resolution.

Your Skills And Experience Recent experience of applying technical solutions to improve the stability of production environments Working experience of some of the following technology skills:
Technologies/Frameworks: Unix, Shell Scripting and/or Python; SQL Stack; Oracle 12c/19c - for PL/SQL, familiarity with OEM tooling to review AWR reports and parameters; ITIL v3 Certified (must); Control-M, CRON scheduling; MQ - DBUS, IBM; JAVA 8/OpenJDK 11 (at least) - for debugging; familiarity with Spring Boot framework; Data Streaming – Kafka (experience with the Confluent flavor a plus) and ZooKeeper; Hadoop framework
Configuration Mgmt Tooling: Ansible
Operating System/Platform: RHEL 7.x (preferred), RHEL 6.x; OpenShift (as we move towards Cloud computing and the fact that Fabric is dependent on OpenShift)
CI/CD: Jenkins (preferred)
APM Tooling: one of Splunk, AppDynamics, Geneos, NewRelic
Other platforms: Scheduling – Ctrl-M is a plus, Autosys, etc.; Search – Elastic Search and/or Solr+ is a plus
Methodology: Micro-services architecture; SDLC; Agile; fundamental network topology – TCP, LAN, VPN, GSLB, GTM, etc.; familiarity with TDD and/or BDD; distributed systems; experience on cloud platforms such as Azure, GCP is a plus; familiarity with containerization/Kubernetes
Tools: ServiceNow, Jira, Confluence, BitBucket and/or GIT, IntelliJ, SQL Plus; familiarity with simple Unix tooling – putty, mPutty, exceed; (PL/)SQL Developer
Good understanding of ITIL Service Management framework such as Incident, Problem, and Change processes. Ability to self-manage a book of work and ensure clear transparency on progress with clear, timely communication of issues. Excellent communication skills, both written and verbal, with attention to detail. Ability to work in a Follow-the-Sun model, virtual teams and in a matrix structure Service Operations experience within a global operations context 6-9 yrs experience in IT in large corporate environments, specifically in the area of controlled production environments or in Financial Services Technology in a client-facing function Global Transaction Banking experience is a plus. Experience of end-to-end Level 2/3/4 management and good overview of Production/Operations Management overall Experience of run-book execution Experience of supporting complex application and infrastructure domains Good analytical, troubleshooting and problem-solving skills Working knowledge of incident tracking tools (i.e., Remedy, Heat etc.)
How We'll Support You Training and development to help you excel in your career Coaching and support from experts in your team A culture of continuous learning to aid progression A range of flexible benefits that you can tailor to suit your needs

About Us And Our Teams Please visit our company website for further information: https://www.db.com/company/company.htm We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
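For context on the Kafka streaming item in the skills list above, here is a minimal, hypothetical monitoring-style consumer using the Confluent Python client; the broker address, consumer group, and topic name are placeholders, not Deutsche Bank systems.

```python
# Illustrative sketch: tail an application topic the way a 24x7 support workbench
# might probe it. All connection details are assumptions for the example.
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "broker1:9092",   # assumed broker address
    "group.id": "prod-support-monitor",
    "auto.offset.reset": "latest",
})
consumer.subscribe(["payments.events"])    # assumed topic name

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None:
            continue
        if msg.error():
            print(f"Consumer error: {msg.error()}")   # would raise an alert in practice
            continue
        print(f"{msg.topic()}[{msg.partition()}]@{msg.offset()}: {msg.value()[:80]}")
finally:
    consumer.close()
```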

Posted 1 month ago

Apply

5.0 years

0 Lacs

Trivandrum, Kerala, India

Remote

Role: Senior Data Engineer with Databricks. Experience: 5+ Years Job Type: Contract Contract Duration: 6 Months Budget: 1.0 lakh per month Location: Remote

JOB DESCRIPTION: We are looking for a dynamic and experienced Senior Data Engineer – Databricks to design, build, and optimize robust data pipelines using the Databricks Lakehouse platform. The ideal candidate should have strong hands-on skills in Apache Spark, PySpark, cloud data services, and a good grasp of Python and Java. This role involves close collaboration with architects, analysts, and developers to deliver scalable and high-performing data solutions across AWS, Azure, and GCP.

ESSENTIAL JOB FUNCTIONS
1. Data Pipeline Development • Build scalable and efficient ETL/ELT workflows using Databricks and Spark for both batch and streaming data. • Leverage Delta Lake and Unity Catalog for structured data management and governance. • Optimize Spark jobs by tuning configurations, caching, partitioning, and serialization techniques.
2. Cloud-Based Implementation • Develop and deploy data workflows on AWS (S3, EMR, Glue), Azure (ADLS, ADF, Synapse), and/or GCP (GCS, Dataflow, BigQuery). • Manage and optimize data storage, access control, and pipeline orchestration using native cloud tools. • Use tools like Databricks Auto Loader and SQL Warehouses for efficient data ingestion and querying.
3. Programming & Automation • Write clean, reusable, and production-grade code in Python and Java. • Automate workflows using orchestration tools (e.g., Airflow, ADF, or Cloud Composer). • Implement robust testing, logging, and monitoring mechanisms for data pipelines.
4. Collaboration & Support • Collaborate with data analysts, data scientists, and business users to meet evolving data needs. • Support production workflows, troubleshoot failures, and resolve performance bottlenecks. • Document solutions, maintain version control, and follow Agile/Scrum processes.

Required Skills Technical Skills: • Databricks: Hands-on experience with notebooks, cluster management, Delta Lake, Unity Catalog, and job orchestration. • Spark: Expertise in Spark transformations, joins, window functions, and performance tuning. • Programming: Strong in PySpark and Java, with experience in data validation and error handling. • Cloud Services: Good understanding of AWS, Azure, or GCP data services and security models. • DevOps/Tools: Familiarity with Git, CI/CD, Docker (preferred), and data monitoring tools.

Experience: • 5–8 years of data engineering or backend development experience. • Minimum 1–2 years of hands-on work in Databricks with Spark. • Exposure to large-scale data migration, processing, or analytics projects. Certifications (nice to have): Databricks Certified Data Engineer Associate

Working Conditions Hours of work - Full-time hours; flexibility for remote work, with availability required during US timings. Overtime expectations - Overtime is generally not required as long as commitments are met. Work environment - Primarily remote; occasional on-site work may be needed only during client visits. Travel requirements - No travel required. On-call responsibilities - On-call duties during deployment phases. Special conditions or requirements - Not applicable.

Workplace Policies and Agreements Confidentiality Agreement: Required to safeguard client-sensitive data. Non-Compete Agreement: Must be signed to ensure proprietary model security. Non-Disclosure Agreement: Must be signed to ensure client confidentiality and security.
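As a sketch of the Auto Loader and Delta Lake work described under "Data Pipeline Development", the snippet below shows a minimal ingestion flow as it might look inside a Databricks notebook; the storage paths and table name are hypothetical, and the code assumes a Databricks runtime where Auto Loader (the cloudFiles source) is available.

```python
# Illustrative sketch: incrementally ingest JSON files with Auto Loader and
# land them in a Delta table. Paths and table names are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # provided automatically in Databricks

raw = (
    spark.readStream.format("cloudFiles")             # Auto Loader source
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/mnt/landing/_schemas/orders")
    .load("/mnt/landing/orders/")
)

(
    raw.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/bronze/_checkpoints/orders")
    .trigger(availableNow=True)                        # process new files, then stop
    .toTable("bronze.orders")                          # assumed metastore table
)
```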

Posted 1 month ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

When you join Verizon You want more out of a career. A place to share your ideas freely — even if they're daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the #VTeamLife.

What you'll be doing... We're seeking a skilled Lead Senior Data Engineering Analyst to join our high-performing team and propel our telecom business forward. You'll contribute to building cutting-edge data products and assets for our wireless and wireline operations, spanning areas like consumer analytics, network performance, and service assurance. In this role, you will develop deep expertise in various telecom domains. As part of the Data Architecture & Strategy team, you'll collaborate closely with IT and business stakeholders to design and implement user-friendly, robust data product solutions. This includes defining data quality and incorporating data classification and governance principles. Your responsibilities encompass: Collaborating with stakeholders to understand data requirements and translate them into efficient data models Defining the scope and purpose of data product solutions, collaborating with stakeholders to finalize project blueprints, and overseeing the design process through all phases of the release lifecycle. Designing, developing, and implementing data architecture solutions on GCP and Teradata to support our Telecom business. Designing data ingestion for both real-time and batch processing, ensuring efficient and scalable data acquisition for creating an effective data warehouse. Formulating end-to-end data solutions (Authoritative Data Source, Data Protection, Taxonomy Alignment) Maintaining meticulous documentation, including data design specifications, functional test cases, data lineage, and other relevant artifacts for all data product solution assets. Defining Data Architecture Strategy (Enterprise & Domain level) and Enterprise Data Model Standards & Ownership Proactively identifying opportunities for automation and performance optimization within your scope of work Collaborating effectively within a product-oriented organization, providing data expertise and solutions across multiple business units. Cultivating strong cross-functional relationships and establishing yourself as a subject matter expert in data and analytics within the organization. Acting as a mentor to junior team members

What we're looking for... You're curious about new technologies and the game-changing possibilities they create. You like to stay up-to-date with the latest trends and apply your technical expertise to solve business problems. You thrive in a fast-paced, innovative environment working as a phenomenal teammate to drive the best results and business outcomes.

You'll need to have… Bachelor's degree or four or more years of work experience. Four or more years of relevant work experience. Four or more years of relevant work experience in data architecture, data warehousing, or a related role. Strong grasp of data architecture principles, best practices, and methodologies.
Expertise in SQL for data analysis, data discovery, data profiling and solution design. Experience defining data standards and data quality, and implementing industry best practices for scalable and maintainable data models using data modeling tools like Erwin Proven experience with ETL, data warehousing concepts, and the data management lifecycle Skilled in creating technical documentation, including source-to-target mappings and SLAs. Experience with shell scripting and the Python programming language Understanding of Git version control and basic Git commands Hands-on experience with cloud services relevant to data engineering and architecture (e.g., BigQuery, Dataflow, Dataproc, Cloud Storage).

Even better if you have one or more of the following… Master's degree in Computer Science. Experience in the Telecommunications industry, with knowledge of wireless and wireline business domains. Experience with stream-processing systems, APIs, events, etc. Certification in GCP Data Engineer/Architect. Accuracy and attention to detail. Good problem-solving, analytical, and research capabilities. Good verbal and written communication. Experience presenting to and influencing stakeholders. Experience with large clusters, databases, BI tools, data quality and performance tuning. Experience in driving one or more smaller teams for technical delivery If Verizon and this role sound like a fit for you, we encourage you to apply even if you don't meet every "even better" qualification listed above. #AI&D

Where you'll be working In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager.

Scheduled Weekly Hours 40

Equal Employment Opportunity Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.
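To illustrate the SQL-based data profiling and BigQuery skills listed above, here is a small, hypothetical example using the BigQuery Python client; the project, dataset, table, and column names are placeholders rather than Verizon systems.

```python
# Illustrative sketch: a quick profiling query for a fact table in BigQuery.
# All resource names are assumptions for the example.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")  # assumed project id

profile_sql = """
    SELECT
      COUNT(*)                        AS row_count,
      COUNTIF(subscriber_id IS NULL)  AS null_subscriber_ids,
      COUNT(DISTINCT subscriber_id)   AS distinct_subscribers,
      MIN(event_ts)                   AS earliest_event,
      MAX(event_ts)                   AS latest_event
    FROM `example-project.network.usage_events`
"""

for row in client.query(profile_sql).result():
    print(dict(row))  # one summary row describing completeness and freshness
```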

Posted 1 month ago

Apply

8.0 years

2 - 7 Lacs

Hyderābād

On-site

Minimum qualifications: Bachelor's degree in Computer Science, a related technical field, or equivalent practical experience. 8 years of experience with software development in one or more programming languages (e.g., Python, C, C++, Java, JavaScript). 3 years of experience in a technical leadership role, overseeing projects, with 2 years of experience in a people management, supervision/team leadership role. Experience in one or more disciplines such as machine learning, recommendation systems, natural language processing, computer vision, pattern recognition, or artificial intelligence.

Preferred qualifications: Understanding of agentic AI/ML and Large Language Models (LLMs). Excellent coding skills.

About the job Like Google's own ambitions, the work of a Software Engineer goes beyond just Search. Software Engineering Managers have not only the technical expertise to take on and provide technical leadership to major projects, but also manage a team of Engineers. You not only optimize your own code but make sure Engineers are able to optimize theirs. As a Software Engineering Manager you manage your project goals, contribute to product strategy and help develop your team. Teams work all across the company, in areas such as information retrieval, artificial intelligence, natural language processing, distributed computing, large-scale system design, networking, security, data compression, user interface design; the list goes on and is growing every day. Operating with scale and speed, our exceptional software engineers are just getting started - and as a manager, you guide the way. With technical and leadership expertise, you manage engineers across multiple teams and locations, a large product budget and oversee the deployment of large-scale projects across multiple sites internationally. At Corp Eng, we build world-leading business solutions that scale a more helpful Google for everyone. As Google's IT organization, we provide end-to-end solutions for organizations across Google. We deliver the right tools, platforms, and experiences for all Googlers as they create more helpful products and services for everyone. In the simplest terms, we are Google for Googlers.

Responsibilities Manage a team of AI software engineers, fostering a collaborative and high-performing environment. This includes hiring, mentoring, performance management, and career development. Drive the design, development, and deployment of scalable and reliable Artificial Intelligence/Machine Learning (AI/ML) systems and infrastructure relevant to HR applications (e.g., talent acquisition, performance management, employee engagement, workforce planning). Collaborate with Product Managers and HR stakeholders to understand business needs, define product requirements, and translate them into technical specifications and project plans. Oversee the architecture and implementation of data pipelines using Google's data processing infrastructure (e.g., Beam, Dataflow) to support AI/ML initiatives. Stay up to date with the latest advancements in AI/ML and related technologies, evaluating their potential application within human resources and guiding the team's adoption of relevant innovations. Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status.
We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.

Posted 1 month ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description The Service360 Senior Data Engineer will be the trusted data advisor in GDI&A (Global Data Insights & Analytics) supporting the following teams: Ford Pro, FCSA (Ford Customer Service Analytics) and FCSD (Ford Customer Service Division) Business. This is an exciting opportunity that provides the Data Engineer a well-rounded experience. The position requires translating the customer's analytical needs into specific data products to be built in the GCP environment, by collaborating with the Product Owners, Technical Anchor and the Customers.

Responsibilities Work on a small agile team to deliver curated data products for the Product Organization. Work effectively with fellow data engineers, product owners, data champions and other technical experts. Minimum of 5 years of experience with progressive responsibilities in software development Minimum of 3 years of experience defining product vision, strategy, product roadmaps and creating and managing backlogs Experience wrangling, transforming and visualizing large data sets from multiple sources, using a variety of tools Proficiency in SQL is a must-have skill Excellent written and verbal communication skills Must be comfortable presenting to and interacting with cross-functional teams and customers Demonstrate technical knowledge and communication skills with the ability to advocate for well-designed solutions. Develop exceptional analytical data products using both streaming and batch ingestion patterns on Google Cloud Platform with solid data warehouse principles. Be the Subject Matter Expert in Data Engineering with a focus on GCP-native services and other well-integrated third-party technologies. Architect and implement sophisticated ETL pipelines, ensuring efficient data integration into BigQuery from diverse batch and streaming sources. Spearhead the development and maintenance of data ingestion and analytics pipelines using cutting-edge tools and technologies, including Python, SQL, and DBT/Dataform. Ensure the highest standards of data quality and integrity across all data processes. Manage data workflows using Astronomer and Terraform for cloud infrastructure, promoting best practices in Infrastructure as Code Rich experience in application support in GCP. Experienced in data mapping, impact analysis, root cause analysis, and documenting data lineage to support robust data governance. Develop comprehensive documentation for data engineering processes, promoting knowledge sharing and system maintainability. Utilize GCP monitoring tools to proactively address performance issues and ensure system resilience, while providing expert production support. Provide strategic guidance and mentorship to team members on data transformation initiatives, championing data utility within the enterprise.

Qualifications Experience working in GCP-native (or equivalent) services like BigQuery, Google Cloud Storage, PubSub, Dataflow, Dataproc etc. Experience working with Airflow for scheduling and orchestration of data pipelines. Experience working with Terraform to provision Infrastructure as Code. 2+ years professional development experience in Java or Python. Bachelor's degree in computer science or related scientific field. Experience in analysing complex data, organizing raw data, and integrating massive datasets from multiple data sources to build analytical domains and reusable data products.
Experience in working with architects to evaluate and productionalize data pipelines for data ingestion, curation, and consumption. Experience in working with stakeholders to formulate business problems as technical data requirements, identify and implement technical solutions while ensuring key business drivers are captured in collaboration with product management.
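As an illustration of the batch ingestion into BigQuery that this posting describes, the sketch below loads newline-delimited JSON from Cloud Storage with the BigQuery Python client; the bucket, dataset, and table names are hypothetical, not Ford systems.

```python
# Illustrative sketch: a simple batch load from GCS into a BigQuery table.
# All resource names are assumptions for the example.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")  # assumed project id

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
    autodetect=True,                                   # infer schema for the example
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
)

load_job = client.load_table_from_uri(
    "gs://example-bucket/service/2025-06-01/*.json",   # assumed landing path
    "example-project.service_analytics.repair_orders", # assumed destination table
    job_config=job_config,
)
load_job.result()  # block until the load completes
print(f"Loaded {load_job.output_rows} rows")
```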

Posted 1 month ago

Apply

4.0 - 6.0 years

1 - 4 Lacs

Hyderābād

On-site

Job Title: Senior Data Analyst – AdTech (Team Lead) Location: Hyderabad Experience Level: 4–6 Years Employment Type: Full-time Shift Timings: 5PM - 2AM IST

About the Role: We are looking for a highly experienced and hands-on Senior Data Analyst (AdTech) to lead our analytics team. This role is ideal for someone with a strong background in log-level data handling, cross-platform data engineering, and a solid command of modern BI tools. You'll play a key role in building scalable data pipelines, leading analytics strategy, and mentoring a team of analysts.

Key Responsibilities: Lead and mentor a team of data analysts, ensuring quality delivery and technical upskilling. Design, develop, and maintain scalable ETL/ELT pipelines using GCP tools (BigQuery, Dataflow, Cloud Composer, Cloud Functions, Pub/Sub). Ingest and process log-level data from platforms like Google Ad Manager, Google Analytics (GA4/UA), DV360, and other advertising and marketing tech sources. Build and optimize data pipelines from diverse sources via APIs, cloud connectors, and third-party tools (e.g., Supermetrics, Fivetran, Stitch). Integrate and manage data across multiple cloud platforms and data warehouses such as BigQuery, Snowflake, DOMO, and AWS (Redshift, S3). Own the creation of data models, data marts, and analytical layers to support dashboards and deep-dive analyses. Build and maintain scalable, intuitive dashboards using Looker Studio, Tableau, Power BI, or Looker. Partner with engineering, product, revenue ops, and client teams to gather requirements and drive strategic insights from data. Ensure data governance, security, and quality standards are followed across the analytics ecosystem.

Required Qualifications: 4–6 years of experience in data analytics or data engineering roles, with at least 1–2 years in a leadership capacity. Deep expertise working with log-level AdTech data—Google Ad Manager, Google Analytics, GA4, programmatic delivery logs, and campaign-level data. Strong knowledge of SQL and Google BigQuery for large-scale data querying and transformation. Hands-on experience building data pipelines using GCP tools (Dataflow, Composer, Cloud Functions, Pub/Sub, Cloud Storage). Proven experience integrating data from various APIs and third-party connectors. Experience working with multiple data warehouses: Snowflake, DOMO, AWS Redshift, etc. Strong skills in data visualization tools: Looker Studio, Tableau, Power BI, or Looker. Excellent stakeholder communication and documentation skills.

Preferred Qualifications: Scripting experience in Python or JavaScript for automation and custom ETL development. Familiarity with version control (e.g., Git), CI/CD pipelines, and workflow orchestration. Exposure to privacy regulations and consent-based data handling in digital advertising (GDPR, CCPA). Experience working in agile environments and managing delivery timelines across multiple stakeholders.
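To illustrate the API-based pipeline work mentioned above, here is a small, hypothetical sketch that pulls a daily delivery report from a reporting API and lands it in Cloud Storage for later BigQuery processing; the endpoint, bucket, and token are placeholders, not any specific vendor's API.

```python
# Illustrative sketch: fetch a daily ad-delivery report and land it in GCS.
# Endpoint, credentials, and bucket names are assumptions for the example.
import json
from datetime import date

import requests
from google.cloud import storage

REPORT_URL = "https://api.example-adserver.com/v1/reports/delivery"  # assumed endpoint

resp = requests.get(
    REPORT_URL,
    params={"date": date.today().isoformat()},
    headers={"Authorization": "Bearer <token>"},   # placeholder credential
    timeout=60,
)
resp.raise_for_status()

bucket = storage.Client(project="example-project").bucket("example-adtech-landing")
blob = bucket.blob(f"delivery/{date.today():%Y/%m/%d}/report.json")
blob.upload_from_string(json.dumps(resp.json()), content_type="application/json")
print(f"Wrote gs://{bucket.name}/{blob.name}")
```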

Posted 1 month ago

Apply

1.0 years

1 - 4 Lacs

Hyderābād

On-site

Job Title: Data Analyst – AdTech (1+ Years Experience) Location: Hyderabad Experience Level: 2–3 Years Employment Type: Full-time Shift Timings: 5PM - 2AM IST

About the Role: We are looking for a highly motivated and detail-oriented Data Analyst with 1+ years of experience to join our AdTech analytics team. In this role, you will be responsible for working with large-scale advertising and digital media datasets, building robust data pipelines, querying and transforming data using GCP tools, and delivering insights through visualization platforms like Looker Studio, Looker, Tableau, etc.

Key Responsibilities: Analyze AdTech data (e.g., ads.txt, programmatic delivery, campaign performance, revenue metrics) to support business decisions. Design, develop, and maintain scalable data pipelines using GCP-native tools (e.g., Cloud Functions, Dataflow, Composer). Write and optimize complex SQL queries in BigQuery for data extraction and transformation. Build and maintain dashboards and reports in Looker Studio to visualize KPIs and campaign performance. Collaborate with cross-functional teams including engineering, operations, product, and client teams to gather requirements and deliver analytics solutions. Monitor data integrity, identify anomalies, and work on data quality improvements. Provide actionable insights and recommendations based on data analysis and trends.

Required Qualifications: 1+ years of experience in a data analytics or business intelligence role. Hands-on experience with AdTech datasets and understanding of digital advertising concepts. Strong proficiency in SQL, particularly with Google BigQuery. Experience building and managing data pipelines using Google Cloud Platform (GCP) tools. Proficiency in Looker Studio. Strong problem-solving skills and attention to detail. Excellent communication skills with the ability to explain technical topics to non-technical stakeholders.

Preferred Qualifications: Experience with additional visualization tools such as Tableau, Power BI, or Looker (BI). Exposure to data orchestration tools like Apache Airflow (via Cloud Composer). Familiarity with Python for scripting or automation. Understanding of cloud data architecture and AdTech integrations (e.g., DV360, Ad Manager, Google Ads).

Posted 1 month ago

Apply

4.0 - 8.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

CloudWerx is looking for a dynamic SENIOR ENGINEER, DATA to become a vital part of our vibrant DATA ANALYTICS & ENGINEERING TEAM, working in HYDERABAD, INDIA. Join the energy and come be part of the momentum! As a Senior Cloud Data Engineer you will be at the forefront of cloud technology, architecting and implementing cutting-edge data solutions that drive business transformation. You'll have the opportunity to work with a diverse portfolio of clients, from innovative startups to industry leaders, solving complex data challenges using the latest GCP technologies. This role offers a unique blend of technical expertise and client interaction, allowing you to not only build sophisticated data systems but also to consult directly with clients, shaping their data strategies and seeing the real-world impact of your work. If you're passionate about pushing the boundaries of what's possible with cloud data engineering and want to be part of a team that's shaping the future of data-driven decision making, this is your chance to make a significant impact in a rapidly evolving field. Our goal is to have a sophisticated team equipped with expert technical skills in addition to keen business acumen. Each member of our team adds unique value to the business and the customer. CloudWerx is committed to a culture where we attract the best talent in the industry. We aim to be second-to-none when it comes to cloud consulting and business acceleration. This is an incredible opportunity to get involved in an engineering-focused cloud consulting company that provides the most elite technology resources to solve the toughest challenges. This role is a full-time opportunity in our Hyderabad Office.

INSIGHT ON YOUR IMPACT Lead technical discussions with clients, translating complex technical concepts into clear, actionable strategies that align with their business goals. Architect and implement innovative data solutions that transform our clients' businesses, enabling them to harness the full power of their data assets. Collaborate with cross-functional teams to design and optimize data pipelines that process petabytes of data, driving critical business decisions and insights. Mentor junior engineers and contribute to the growth of our data engineering practice, fostering a culture of continuous learning and innovation. Drive the adoption of cutting-edge GCP technologies, positioning our company and clients at the forefront of the cloud data revolution. Identify opportunities for process improvements and automation, increasing the efficiency and scalability of our consulting services. Collaborate with sales and pre-sales teams to scope complex data engineering projects, ensuring technical feasibility and alignment with client needs.

YOUR QUALIFICATION, YOUR INFLUENCE To be successful in the role, you must possess the following skills: Proven experience (typically 4-8 years) in data engineering, with a strong focus on Google Cloud Platform technologies. Deep expertise in GCP data services, particularly tools like BigQuery, Cloud Composer, Cloud SQL, and Dataflow, with the ability to architect complex data solutions. Strong proficiency in Python and SQL, with the ability to write efficient, scalable, and maintainable code.
Demonstrated experience in data modeling, database performance tuning, and cloud migration projects. Excellent communication skills, capable of explaining complex technical concepts to both technical and non-technical stakeholders. Proven ability to work directly with clients, understanding their business needs and translating them into technical solutions. Strong project management skills, including experience with Agile methodologies and tools like Jira. Ability to lead and mentor junior team members, fostering a culture of knowledge sharing and continuous improvement. Track record of staying current with emerging technologies and best practices in cloud data engineering. Experience working in a consulting or professional services environment, with the ability to manage multiple projects and priorities. Demonstrated problem-solving skills, with the ability to think creatively and innovatively to overcome technical challenges. Willingness to obtain relevant Google Cloud certifications if not already held. Ability to work collaboratively in a remote environment, with excellent time management and self-motivation skills. Cultural sensitivity and adaptability, with the ability to work effectively with diverse teams and clients across different time zones.

Our Diversity and Inclusion Commitment At CloudWerx, we are dedicated to creating a workplace that values and celebrates diversity. We believe that a diverse and inclusive environment fosters innovation, collaboration, and mutual respect. We are committed to providing equal employment opportunities for all individuals, regardless of background, and actively promote diversity across all levels of our organization. We welcome all walks of life, as we are committed to building a team that embraces and mirrors a wide range of perspectives and identities. Join us in our journey toward a more inclusive and equitable workplace.

Background Check Requirement All candidates for employment will be subject to pre-employment background screening for this position. All offers are contingent upon the successful completion of the background check. For additional information on the background check requirements and process, please reach out to us directly.

Our Story CloudWerx is an engineering-focused cloud consulting firm born in Silicon Valley - in the heart of hyper-scale and innovative technology. In a cloud environment we help businesses looking to architect, migrate, optimize, secure or cut costs. Our team has unique experience working in some of the most complex cloud environments at scale and can help businesses accelerate with confidence.

Posted 1 month ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Title: Senior Data Analyst – AdTech (Team Lead) Location: Hyderabad Experience Level: 4–6 Years Employment Type: Full-time Shift Timings: 5PM - 2AM IST

About The Role We are looking for a highly experienced and hands-on Senior Data Analyst (AdTech) to lead our analytics team. This role is ideal for someone with a strong background in log-level data handling, cross-platform data engineering, and a solid command of modern BI tools. You'll play a key role in building scalable data pipelines, leading analytics strategy, and mentoring a team of analysts.

Key Responsibilities Lead and mentor a team of data analysts, ensuring quality delivery and technical upskilling. Design, develop, and maintain scalable ETL/ELT pipelines using GCP tools (BigQuery, Dataflow, Cloud Composer, Cloud Functions, Pub/Sub). Ingest and process log-level data from platforms like Google Ad Manager, Google Analytics (GA4/UA), DV360, and other advertising and marketing tech sources. Build and optimize data pipelines from diverse sources via APIs, cloud connectors, and third-party tools (e.g., Supermetrics, Fivetran, Stitch). Integrate and manage data across multiple cloud platforms and data warehouses such as BigQuery, Snowflake, DOMO, and AWS (Redshift, S3). Own the creation of data models, data marts, and analytical layers to support dashboards and deep-dive analyses. Build and maintain scalable, intuitive dashboards using Looker Studio, Tableau, Power BI, or Looker. Partner with engineering, product, revenue ops, and client teams to gather requirements and drive strategic insights from data. Ensure data governance, security, and quality standards are followed across the analytics ecosystem.

Required Qualifications 4–6 years of experience in data analytics or data engineering roles, with at least 1–2 years in a leadership capacity. Deep expertise working with log-level AdTech data—Google Ad Manager, Google Analytics, GA4, programmatic delivery logs, and campaign-level data. Strong knowledge of SQL and Google BigQuery for large-scale data querying and transformation. Hands-on experience building data pipelines using GCP tools (Dataflow, Composer, Cloud Functions, Pub/Sub, Cloud Storage). Proven experience integrating data from various APIs and third-party connectors. Experience working with multiple data warehouses: Snowflake, DOMO, AWS Redshift, etc. Strong skills in data visualization tools: Looker Studio, Tableau, Power BI, or Looker. Excellent stakeholder communication and documentation skills.

Preferred Qualifications Scripting experience in Python or JavaScript for automation and custom ETL development. Familiarity with version control (e.g., Git), CI/CD pipelines, and workflow orchestration. Exposure to privacy regulations and consent-based data handling in digital advertising (GDPR, CCPA). Experience working in agile environments and managing delivery timelines across multiple stakeholders.

Posted 1 month ago

Apply

1.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Title: Data Analyst – AdTech (1+ Years Experience) Location: Hyderabad Experience Level: 2–3 Years Employment Type: Full-time Shift Timings: 5PM - 2AM IST

About The Role We are looking for a highly motivated and detail-oriented Data Analyst with 1+ years of experience to join our AdTech analytics team. In this role, you will be responsible for working with large-scale advertising and digital media datasets, building robust data pipelines, querying and transforming data using GCP tools, and delivering insights through visualization platforms like Looker Studio, Looker, Tableau, etc.

Key Responsibilities Analyze AdTech data (e.g., ads.txt, programmatic delivery, campaign performance, revenue metrics) to support business decisions. Design, develop, and maintain scalable data pipelines using GCP-native tools (e.g., Cloud Functions, Dataflow, Composer). Write and optimize complex SQL queries in BigQuery for data extraction and transformation. Build and maintain dashboards and reports in Looker Studio to visualize KPIs and campaign performance. Collaborate with cross-functional teams including engineering, operations, product, and client teams to gather requirements and deliver analytics solutions. Monitor data integrity, identify anomalies, and work on data quality improvements. Provide actionable insights and recommendations based on data analysis and trends.

Required Qualifications 1+ years of experience in a data analytics or business intelligence role. Hands-on experience with AdTech datasets and understanding of digital advertising concepts. Strong proficiency in SQL, particularly with Google BigQuery. Experience building and managing data pipelines using Google Cloud Platform (GCP) tools. Proficiency in Looker Studio. Strong problem-solving skills and attention to detail. Excellent communication skills with the ability to explain technical topics to non-technical stakeholders.

Preferred Qualifications Experience with additional visualization tools such as Tableau, Power BI, or Looker (BI). Exposure to data orchestration tools like Apache Airflow (via Cloud Composer). Familiarity with Python for scripting or automation. Understanding of cloud data architecture and AdTech integrations (e.g., DV360, Ad Manager, Google Ads).

Posted 1 month ago

Apply

4.0 years

0 Lacs

Ghaziabad, Uttar Pradesh, India

On-site

Responsibilities
As a Data Engineer, you will design, develop, and support data pipelines and related data products and platforms. Your primary responsibilities include designing and building data extraction, loading, and transformation pipelines across on-prem and cloud platforms. You will perform application impact assessments, requirements reviews, and develop work estimates. Additionally, you will develop test strategies and site reliability engineering measures for data products and solutions, participate in agile development and solution reviews, mentor junior Data Engineering Specialists, lead the resolution of critical operations issues, and perform technical data stewardship tasks, including metadata management, security, and privacy by design.

Required Skills:
● Design, develop, and support data pipelines and related data products and platforms.
● Design and build data extraction, loading, and transformation pipelines and data products across on-prem and cloud platforms.
● Perform application impact assessments, requirements reviews, and develop work estimates.
● Develop test strategies and site reliability engineering measures for data products and solutions.
● Participate in agile development and solution reviews.
● Mentor junior Data Engineers.
● Lead the resolution of critical operations issues, including post-implementation reviews.
● Perform technical data stewardship tasks, including metadata management, security, and privacy by design.
● Design and build data extraction, loading, and transformation pipelines using Python and other GCP data technologies.
● Demonstrate SQL and database proficiency in various data engineering tasks.
● Automate data workflows by setting up DAGs in tools like Control-M, Apache Airflow, and Prefect.
● Develop Unix scripts to support various data operations.
● Model data to support business intelligence and analytics initiatives.
● Utilize infrastructure-as-code tools such as Terraform, Puppet, and Ansible for deployment automation.
● Expertise in GCP data warehousing technologies, including BigQuery, Cloud SQL, Dataflow, Data Catalog, Cloud Composer, Google Cloud Storage, IAM, Compute Engine, Cloud Data Fusion, and Dataproc (good to have).

Qualifications:
● Bachelor's degree in Software Engineering, Computer Science, Business, Mathematics, or related field.
● 4+ years of data engineering experience.
● 2 years of data solution architecture and design experience.
● GCP Certified Data Engineer (preferred).

Interested candidates can send their resumes to riyanshi@etelligens.in
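As an illustration of the DAG-based workflow automation this listing mentions, here is a minimal Apache Airflow sketch that loads files from Cloud Storage into BigQuery and runs a follow-up check; the bucket, dataset, table, and task names are hypothetical assumptions.

    # Minimal Airflow DAG sketch: daily GCS-to-BigQuery load with a check step.
    # Bucket, dataset/table, and task names are hypothetical placeholders.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator
    from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator


    def validate_row_counts(**context):
        # Placeholder data-quality check; a real task would query BigQuery here.
        print("validating load for", context["ds"])


    with DAG(
        dag_id="daily_sales_load",
        start_date=datetime(2025, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        load = GCSToBigQueryOperator(
            task_id="load_gcs_to_bq",
            bucket="example-landing-bucket",
            source_objects=["sales/{{ ds }}/*.parquet"],
            destination_project_dataset_table="analytics.sales_daily",
            source_format="PARQUET",
            write_disposition="WRITE_TRUNCATE",
        )
        validate = PythonOperator(
            task_id="validate_row_counts",
            python_callable=validate_row_counts,
        )
        load >> validate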

Posted 1 month ago

Apply

0 years

0 Lacs

India

Remote

💼 Senior Backend Developer – Java & GCP
📍 Location: Remote (India)
🕒 Type: Contract (Long-term engagement)

We're looking for a skilled backend engineer with a strong foundation in Java and hands-on experience with Google Cloud technologies to join a high-performing team building cloud-native solutions.

🔧 Key Skills & Requirements
✅ Strong expertise in Java 8 and above
✅ Experience with Python is a big plus
✅ Hands-on experience with Google Cloud Platform (GCP) tools: Dataproc, Dataflow, BigQuery, Pub/Sub
✅ Proficient with containerization technologies: Kubernetes, OpenShift, Docker
✅ Solid understanding of CI/CD pipelines
✅ Familiarity with observability & monitoring tools like ELK or similar

📌 Why Join?
✔ Work on high-impact, scalable cloud systems
✔ Leverage modern DevOps and GCP practices
✔ 100% remote flexibility

Posted 1 month ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Position: Data Architect
Skills: GCP, DA, Development, SQL, Python, BigQuery, Dataproc, Dataflow, Data Pipelines
Exp: 10+ Yrs

Roles and Responsibilities
• 10+ years of relevant work experience, including previous experience leading data-related projects in the field of Reporting and Analytics.
• Design, build, and maintain scalable data lakes and data warehouses in the cloud (GCP).
• Expertise in gathering business requirements, analysing business needs, and defining the BI/DW architecture to support and help deliver technical solutions to complex business and technical requirements.
• Create solution prototypes and participate in technology selection; perform POCs and technical presentations.
• Architect, develop, and test scalable data warehouse and data pipeline architectures in cloud technologies (GCP).
• Experience in SQL and NoSQL DBMSs such as MS SQL Server, MySQL, PostgreSQL, DynamoDB, Cassandra, and MongoDB.
• Design and develop scalable ETL processes, including error handling.
• Expert in query and programming languages: MS SQL Server, T-SQL, PostgreSQL, MySQL, Python, R.
• Prepare data structures for advanced analytics and self-service reporting using MS SQL, SSIS, and SSRS.
• Write scripts for stored procedures, database snapshot backups, and data archiving.
• Experience with any of these cloud-based technologies:
o Power BI/Tableau, Azure Data Factory, Azure Synapse, Azure Data Lake
o AWS Redshift, Glue, Athena, AWS QuickSight
o Google Cloud Platform

Good to have:
• Agile development environment pairing DevOps with CI/CD pipelines
• AI/ML background

Interested candidates share CVs to dikshith.nalapatla@motivitylabs.com

Posted 1 month ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

WHAT YOU DO AT AMD CHANGES EVERYTHING

We care deeply about transforming lives with AMD technology to enrich our industry, our communities, and the world. Our mission is to build great products that accelerate next-generation computing experiences - the building blocks for the data center, artificial intelligence, PCs, gaming and embedded. Underpinning our mission is the AMD culture. We push the limits of innovation to solve the world's most important challenges. We strive for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives. AMD together we advance_

MTS SOFTWARE DEVELOPMENT ENGINEER

The Role
Performance modelling and evaluation of ACAP workloads to eliminate bottlenecks as early as possible and guide the architecture of future generation devices. This is a challenging role in the FPGA Silicon Architecture Group in the AECG business unit of AMD in Hyderabad.

About The Team
The AECG group in AMD designs cutting-edge FPGAs and Adaptable SoCs consisting of processor subsystems and associated peripherals, programmable fabric, memory controllers, I/O interfaces, and interconnect.

Key Responsibilities
Modelling and simulation of workload dataflow networks and clock-accurate SoC components.
Performance analysis and identification of bottlenecks.
Quick prototyping, long-term design decisions, and exploring novel architectures.
Enhancement of the existing tools and knowledge base.
Collaborating with architects in the development of next-generation devices.
Collaborating with customer-facing teams to identify scope for optimization in future market scenarios.
Breaking down system-level designs into simpler dataflow models to identify bottlenecks and capture memory and communication overheads.
Knowledge sharing with teammates through thorough documentation.

Preferred Experience
Preferred experience in SoC architecture or performance analysis.
Strong background in computer architecture, hardware performance metrics, and bottlenecks.
Experienced in modelling and simulation of hardware.
Experience in performance profiling, creating experiments to address various use cases, and doing design space exploration.
Good to have experience in creating designs for ACAP devices or HLS.
Good communication skills.

Academic Credentials
Bachelor's or Master's degree in Computer Science, Computer Engineering, Electrical Engineering, or equivalent.

Benefits offered are described: AMD benefits at a glance.

AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants' needs under the respective laws throughout all stages of the recruitment and selection process.

Posted 1 month ago

Apply

4.0 years

0 Lacs

India

Remote

GCP Data Engineer (Remote)
Type: Full-time
Rate: Market
Client: Telus

Required Skills:
● 4+ years of industry experience in software development, data engineering, business intelligence, or a related field, with experience in manipulating, processing, and extracting value from datasets.
● Design, build, and deploy internal applications to support our technology life cycle, collaboration and spaces, service delivery management, and data and business intelligence, among others.
● Build modular code for reusable pipelines and ingestion frameworks that simplify loading data into the data lake or data warehouse from multiple sources.
● Work closely with analysts and business process owners to translate business requirements into technical solutions.
● Coding experience in scripting and languages (Python, SQL, PySpark).
● Expertise in Google Cloud Platform (GCP) technologies in the data warehousing space (BigQuery, Google Composer, Airflow, CloudSQL, PostgreSQL, Oracle, GCP Workflows, Dataflow, Cloud Scheduler, Secret Manager, Batch, Cloud Logging, Cloud SDK, Google Cloud Storage, IAM, Vertex AI).
● Maintain the highest levels of development practices, including technical design, solution development, systems configuration, test documentation/execution, issue identification and resolution, and writing clean, modular, and self-sustaining code, with repeatable quality and predictability.
● Understanding of CI/CD processes using Pulumi, GitHub, Cloud Build, Cloud SDK, and Docker.
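To illustrate the modular, reusable ingestion framework idea in the listing above, here is a minimal config-driven sketch that loads several sources from Cloud Storage into BigQuery with one shared function; the bucket, dataset, and table names are hypothetical assumptions.

    # Minimal sketch of a config-driven ingestion step into BigQuery.
    # Bucket, dataset, and table names are hypothetical placeholders.
    from dataclasses import dataclass

    from google.cloud import bigquery


    @dataclass
    class SourceConfig:
        name: str
        gcs_uri: str          # e.g. "gs://landing-bucket/crm/*.csv"
        target_table: str     # e.g. "raw_zone.crm_accounts"
        source_format: str = "CSV"


    def ingest(client: bigquery.Client, cfg: SourceConfig) -> None:
        job_config = bigquery.LoadJobConfig(
            source_format=cfg.source_format,
            autodetect=True,
            write_disposition="WRITE_APPEND",
        )
        job = client.load_table_from_uri(cfg.gcs_uri, cfg.target_table, job_config=job_config)
        job.result()  # wait for the load job to finish
        print(f"loaded {cfg.name} into {cfg.target_table}")


    if __name__ == "__main__":
        client = bigquery.Client()
        sources = [
            SourceConfig("crm", "gs://landing-bucket/crm/*.csv", "raw_zone.crm_accounts"),
            SourceConfig("billing", "gs://landing-bucket/billing/*.csv", "raw_zone.billing_items"),
        ]
        for cfg in sources:
            ingest(client, cfg)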

Posted 1 month ago

Apply

0 years

0 Lacs

Mumbai, Maharashtra, India

Remote

Role: Database Engineer
Location: Remote

Skills and Experience
● Bachelor's degree in Computer Science, Information Systems, or a related field is desirable but not essential.
● Experience with data warehousing concepts and tools (e.g., Snowflake, Redshift) to support advanced analytics and reporting, aligning with the team's data presentation goals.
● Skills in working with APIs for data ingestion or connecting third-party systems, which could streamline data acquisition processes.
● Proficiency with tools like Prometheus, Grafana, or ELK Stack for real-time database monitoring and health checks beyond basic troubleshooting.
● Familiarity with continuous integration/continuous deployment (CI/CD) tools (e.g., Jenkins, GitHub Actions).
● Deeper expertise in cloud platforms (e.g., AWS Lambda, GCP Dataflow) for serverless data processing or orchestration.
● Knowledge of database development and administration concepts, especially with relational databases like PostgreSQL and MySQL.
● Knowledge of Python programming, including data manipulation, automation, and object-oriented programming (OOP), with experience in modules such as Pandas, SQLAlchemy, gspread, PyDrive, and PySpark.
● Knowledge of SQL and understanding of database design principles, normalization, and indexing.
● Knowledge of data migration, ETL (Extract, Transform, Load) processes, or integrating data from various sources.
● Knowledge of cloud-based databases, such as AWS RDS and Google BigQuery.
● Eagerness to develop import workflows and scripts to automate data import processes.
● Knowledge of data security best practices, including access controls, encryption, and compliance standards.
● Strong problem-solving and analytical skills with attention to detail.
● Creative and critical thinking.
● Strong willingness to learn and expand knowledge in data engineering.
● Familiarity with Agile development methodologies is a plus.
● Experience with version control systems, such as Git, for collaborative development.
● Ability to thrive in a fast-paced environment with rapidly changing priorities.
● Ability to work collaboratively in a team environment.
● Good and effective communication skills.
● Comfortable with autonomy and ability to work independently.
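As a sketch of the automated import workflows this role mentions (gspread, pandas, SQLAlchemy), the following loads a Google Sheet into PostgreSQL; the credentials file, sheet name, and connection string are hypothetical assumptions.

    # Minimal sketch: Google Sheet -> PostgreSQL import using gspread,
    # pandas, and SQLAlchemy. Credentials, sheet, and DSN are placeholders.
    import gspread
    import pandas as pd
    from sqlalchemy import create_engine


    def import_sheet_to_postgres() -> None:
        gc = gspread.service_account(filename="service_account.json")
        worksheet = gc.open("marketing_budget").sheet1
        df = pd.DataFrame(worksheet.get_all_records())

        # Light cleanup before loading; a real pipeline would validate types and nulls.
        df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]

        engine = create_engine("postgresql+psycopg2://user:pass@host:5432/analytics")
        df.to_sql("marketing_budget", engine, if_exists="replace", index=False)


    if __name__ == "__main__":
        import_sheet_to_postgres()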

Posted 1 month ago

Apply

45.0 years

0 Lacs

Pune, Maharashtra, India

On-site

We are seeking a highly experienced Senior Data Engineer with strong expertise in Google Cloud Platform (GCP) and exposure to machine learning engineering (MLE) to support high-impact banking initiatives. The ideal candidate will combine hands-on engineering skills, architectural insight, and a proven track record of building secure, scalable, and intelligent data solutions in financial services.

Location: Pune, India (Work from Office - completely onsite)
Experience Required: Minimum 4–5 years
Position Type: Full-time
Start Date: Immediate or as per notice period

Key Responsibilities:
Provide technical leadership on GCP data projects, collaborating with business, data science, and ML teams.
Design and implement scalable data pipelines using GCP tools (BigQuery, Dataflow, Composer, etc.).
Support MLOps workflows, including feature engineering and real-time inference.
Ensure secure, compliant, and high-quality data infrastructure aligned with banking standards.
Optimize BigQuery performance and cost efficiency for large-scale datasets.
Enable BI insights using tools like Power BI and Looker.
Own the end-to-end data lifecycle across development, deployment, and monitoring.

Required Skills:
6–10 years of experience in data engineering; 3+ on GCP.
Deep proficiency in GCP services (BigQuery, Dataflow, Composer, Dataproc).
Strong Python and SQL skills; familiarity with Terraform and CI/CD tools.
Experience supporting ML pipelines and maintaining compliance in regulated environments.

Preferred:
GCP certifications (Professional Data Engineer / Architect).
Familiarity with MLOps (Vertex AI, Kubeflow), financial data domains, and streaming data.
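For illustration, a minimal Apache Beam pipeline of the sort that would run on Dataflow for the pipeline work described in this listing: read rows from BigQuery, filter, and write results back. The project, dataset, table names, and threshold are hypothetical assumptions.

    # Minimal Beam/Dataflow sketch: read transactions from BigQuery, filter
    # large amounts, write to a results table. All names are placeholders.
    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions


    def run() -> None:
        options = PipelineOptions(
            runner="DataflowRunner",        # use "DirectRunner" for local testing
            project="my-bank-project",
            region="asia-south1",
            temp_location="gs://my-bank-temp/beam",
        )
        with beam.Pipeline(options=options) as p:
            (
                p
                | "ReadTransactions" >> beam.io.ReadFromBigQuery(
                    query="SELECT account_id, amount FROM `my-bank-project.raw.transactions`",
                    use_standard_sql=True,
                )
                | "KeepLargeAmounts" >> beam.Filter(lambda row: row["amount"] >= 10000)
                | "WriteFlagged" >> beam.io.WriteToBigQuery(
                    "my-bank-project:risk.flagged_transactions",
                    schema="account_id:STRING,amount:FLOAT",
                    write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
                    create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
                )
            )


    if __name__ == "__main__":
        run()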

Posted 1 month ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Roles And Responsibilities
Proficiency in building highly scalable ETL and streaming-based data pipelines using Google Cloud Platform (GCP) services and products like BigQuery and Cloud Dataflow.
Proficiency in large-scale data platforms and data processing systems such as Google BigQuery, Amazon Redshift, and Azure Data Lake.
Excellent Python, PySpark, and SQL development and debugging skills; exposure to other Big Data frameworks like Hadoop and Hive would be an added advantage.
Experience building systems to retrieve and aggregate data from event-driven messaging frameworks (e.g., RabbitMQ and Pub/Sub).

Secondary Skills: Cloud Bigtable, AI/ML solutions, Compute Engine, Cloud Data Fusion

(ref:hirist.tech)

Posted 1 month ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Position Overview
We are looking for an experienced Lead Data Engineer to join our dynamic team. If you are passionate about building scalable software solutions and working collaboratively with cross-functional teams to define requirements and deliver solutions, we would love to hear from you.

ShyftLabs is a growing data product company that was founded in early 2020 and works primarily with Fortune 500 companies. We deliver digital solutions built to help accelerate the growth of businesses in various industries, by focusing on creating value through innovation.

Job Responsibilities:
Develop and maintain data pipelines and ETL/ELT processes using Python
Design and implement scalable, high-performance applications
Work collaboratively with cross-functional teams to define requirements and deliver solutions
Develop and manage near real-time data streaming solutions using Pub/Sub or Beam
Contribute to code reviews, architecture discussions, and continuous improvement initiatives
Monitor and troubleshoot production systems to ensure reliability and performance

Basic Qualifications:
5+ years of professional software development experience with Python
Strong understanding of software engineering best practices (testing, version control, CI/CD)
Experience building and optimizing ETL/ELT processes and data pipelines
Proficiency with SQL and database concepts
Experience with data processing frameworks (e.g., Pandas)
Understanding of software design patterns and architectural principles
Ability to write clean, well-documented, and maintainable code
Experience with unit testing and test automation
Experience working with any cloud provider (GCP is preferred)
Experience with CI/CD pipelines and infrastructure as code
Experience with containerization technologies like Docker or Kubernetes
Bachelor's degree in Computer Science, Engineering, or related field (or equivalent experience)
Proven track record of delivering complex software projects
Excellent problem-solving and analytical thinking skills
Strong communication skills and ability to work in a collaborative environment

Preferred Qualifications:
Experience with GCP services, particularly Cloud Run and Dataflow
Experience with stream processing technologies (Pub/Sub)
Familiarity with big data technologies (Airflow)
Experience with data visualization tools and libraries
Knowledge of CI/CD pipelines with GitLab and infrastructure as code with Terraform
Familiarity with platforms like Snowflake, BigQuery, or Databricks
GCP Data Engineer certification

We are proud to offer a competitive salary alongside a strong insurance package. We pride ourselves on the growth of our employees, offering extensive learning and development resources.
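As an illustration of the near real-time streaming work described in this listing, here is a minimal Pub/Sub consumer sketch in Python; the project ID, subscription ID, and message fields are hypothetical assumptions.

    # Minimal Pub/Sub streaming-pull sketch. Project, subscription, and
    # message fields are hypothetical placeholders.
    import json
    from concurrent.futures import TimeoutError

    from google.cloud import pubsub_v1


    def callback(message: pubsub_v1.subscriber.message.Message) -> None:
        event = json.loads(message.data.decode("utf-8"))
        # A real pipeline would write this to BigQuery or a staging store.
        print("received event:", event.get("event_type"), event.get("order_id"))
        message.ack()


    def main() -> None:
        subscriber = pubsub_v1.SubscriberClient()
        subscription_path = subscriber.subscription_path("my-retail-project", "orders-sub")
        future = subscriber.subscribe(subscription_path, callback=callback)
        print("listening on", subscription_path)
        try:
            future.result(timeout=60)  # stream for 60 seconds in this sketch
        except TimeoutError:
            future.cancel()


    if __name__ == "__main__":
        main()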

Posted 1 month ago

Apply


5.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Job Description: Consultant Delivery (Data Engineer)

About Worldline
At Worldline, we are pioneers in payments technology, committed to creating innovative solutions that make financial transactions secure, accessible, and seamless worldwide. Our diverse team of professionals collaborates across cultures and disciplines, driving progress that benefits society and businesses of all sizes. We believe that diverse perspectives fuel innovation and are dedicated to fostering an inclusive environment where all individuals can thrive.

The Opportunity
We are seeking a highly skilled and knowledgeable Data Engineer to join our Data Management team on a transformative Move to Cloud (M2C) project. This role offers a unique opportunity to contribute to a critical initiative, migrating our data infrastructure to the cloud and optimizing our data pipelines for performance and scalability. We welcome applicants from all backgrounds and experiences, believing that our strength lies in our diversity.

Technical Skills & Qualifications
Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Experience: Minimum of 5 years of experience as a Data Engineer, with a strong focus on cloud-based solutions, preferably within the Google Cloud Platform (GCP) ecosystem.

Essential Skills:
Strong knowledge of version control systems and CI/CD pipelines.
Proficiency in GCP services, particularly DataProc, Dataflow, Cloud Functions, Workflows, Cloud Composer, and BigQuery.
Extensive experience with ETL tools, specifically dbt Labs, and a deep understanding of ETL best practices.
Proven ability to build and optimize data pipelines, architectures, and datasets from both structured and unstructured data sources.
Proficiency in SQL and Python, with experience using Spark.
Excellent analytical and problem-solving skills, with the ability to translate complex requirements into technical solutions.

Desirable Skills:
Relevant certifications in Google Cloud Platform or other data engineering credentials.

Preferred Skills
Experience migrating data from on-premises data warehouses (e.g., Oracle) to cloud-based solutions.
Experience working with large-scale datasets and complex data transformations.
Strong communication and interpersonal skills, with the ability to collaborate effectively within a team environment.

Why Join Us?
At Worldline, we believe that embracing diversity and promoting inclusion drives innovation and success. We foster a workplace where everyone feels valued and empowered to bring their authentic selves. We offer extensive training, mentorship, and development programs to support your growth and help you make a meaningful impact. Join a global team of passionate professionals shaping the future of payments technology, where your ideas, experiences, and perspectives are appreciated and celebrated. Learn more about life at Worldline at Jobs.worldline.com.

We are an Equal Opportunity Employer. We do not discriminate based on race, ethnicity, religion, color, national origin, sex (including pregnancy and childbirth), sexual orientation, gender identity or expression, age, disability, or any other legally protected characteristic. We are committed to creating a diverse and inclusive environment for all employees.

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies