
929 ETL Tool Jobs - Page 26

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 - 9.0 years

4 - 7 Lacs

Gurugram

Work from Office

Primary Skills: SQL (advanced level); SSAS (SQL Server Analysis Services) multidimensional and/or tabular models; MDX/DAX (strong querying capabilities); data modeling (star schema, snowflake schema).
Secondary Skills: ETL processes (SSIS or similar tools); Power BI / reporting tools; Azure Data Services (optional but a plus).

Role & Responsibilities:
- Design, develop, and deploy SSAS models (both tabular and multidimensional).
- Write and optimize MDX/DAX queries for complex business logic.
- Work closely with business analysts and stakeholders to translate requirements into robust data models.
- Design and implement ETL pipelines for data integration.
- Build reporting datasets and support BI teams in developing insightful dashboards (Power BI preferred).
- Optimize existing cubes and data models for performance and scalability.
- Ensure data quality, consistency, and governance standards.

Top Skill Set: SSAS (tabular + multidimensional modeling); strong MDX and/or DAX query writing; advanced SQL for data extraction and transformation; data modeling concepts (fact/dimension tables, slowly changing dimensions, etc.); ETL tools (SSIS preferred); Power BI or similar BI tools; understanding of OLAP and OLTP concepts; performance tuning (SSAS/SQL).

Skills: analytical skills, ETL processes (SSIS or similar tools), collaboration, multidimensional expressions (MDX), Power BI / reporting tools, SQL (advanced level), SQL proficiency, DAX, SSAS (multidimensional and tabular models), ETL, data modeling (star schema, snowflake schema), communication, Azure Data Services, data modeling, data visualization
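The data-modeling concepts this role lists (fact/dimension tables, slowly changing dimensions) can be sketched in a few lines. The schema, table names, and values below are hypothetical illustrations, not part of the posting:

```python
import sqlite3

# Minimal sketch of a Type 2 slowly changing dimension: when an attribute
# changes, close the current row and insert a new one, so facts can join
# to the point-in-time truth. All names and dates here are invented.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE dim_customer (
        customer_sk INTEGER PRIMARY KEY,   -- surrogate key
        customer_id TEXT,                  -- business key
        city TEXT,
        valid_from TEXT,
        valid_to TEXT                      -- NULL marks the current row
    )""")
conn.execute(
    "INSERT INTO dim_customer VALUES (1, 'C001', 'Pune', '2020-01-01', NULL)")

def scd2_update(conn, customer_id, new_city, as_of):
    """Close the current dimension row and open a new one if the city changed."""
    cur = conn.execute(
        "SELECT customer_sk, city FROM dim_customer "
        "WHERE customer_id = ? AND valid_to IS NULL", (customer_id,))
    row = cur.fetchone()
    if row and row[1] != new_city:
        conn.execute("UPDATE dim_customer SET valid_to = ? WHERE customer_sk = ?",
                     (as_of, row[0]))
        conn.execute(
            "INSERT INTO dim_customer (customer_id, city, valid_from, valid_to) "
            "VALUES (?, ?, ?, NULL)", (customer_id, new_city, as_of))

scd2_update(conn, "C001", "Gurugram", "2024-06-01")
rows = conn.execute(
    "SELECT city, valid_to FROM dim_customer ORDER BY customer_sk").fetchall()
# rows -> [('Pune', '2024-06-01'), ('Gurugram', None)]
```

A production warehouse would express the same mechanics with a MERGE statement or an ETL tool's SCD component; the row-versioning logic is identical.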

Posted 1 month ago

Apply

3.0 - 8.0 years

2 Lacs

Hyderabad

Work from Office

Key Responsibilities:
- Understand the program's service catalog and document the list of tasks to be performed for each service.
- Lead the design, development, and maintenance of ETL processes to extract, transform, and load data from various sources into our data warehouse.
- Implement best practices for data loading, ensuring optimal performance and data quality.
- Utilize your expertise in IDMC to establish and maintain data governance, data quality, and metadata management processes.
- Implement data controls to ensure compliance with data standards, security policies, and regulatory requirements.
- Collaborate with data architects to design and implement scalable, efficient data architectures that support business intelligence and analytics requirements.
- Work on data modeling and schema design to optimize database structures for ETL processes.
- Identify and implement performance optimization strategies for ETL processes, ensuring timely and efficient data loading.
- Troubleshoot and resolve issues related to data integration and performance bottlenecks.
- Collaborate with cross-functional teams, including data scientists, business analysts, and other engineering teams, to understand data requirements and deliver effective solutions.
- Provide guidance and mentorship to junior members of the data engineering team.
- Create and maintain comprehensive documentation for ETL processes, data models, and data flows, and keep it up to date with any changes to the data architecture or ETL workflows.
- Use Jira for task tracking and project management.
- Implement data quality checks and validation processes to ensure data integrity and reliability.

Required Skills:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Proven experience as a Senior ETL Data Engineer, with a focus on IDMC/IICS.
- Strong proficiency in ETL tools and frameworks (e.g., Informatica Cloud, Talend, Apache NiFi).
- Expertise in IDMC principles, including data governance, data quality, and metadata management.
- Solid understanding of data warehousing concepts and practices.
- Strong SQL skills and experience working with relational databases.
- Excellent problem-solving and analytical skills.

Qualified candidates should APPLY NOW for immediate consideration! Please hit APPLY to provide the required information, and we will be back in touch as soon as possible. Thank you!

ABOUT INNOVA SOLUTIONS: Founded in 1998 and headquartered in Atlanta, Georgia, Innova Solutions employs approximately 50,000 professionals worldwide and reports annual revenue approaching $3 billion. Through our global delivery centers across North America, Asia, and Europe, we deliver strategic technology and business transformation solutions to our clients, enabling them to operate as leaders within their fields.

Recent Recognitions:
- One of the largest IT consulting staffing firms in the USA, ranked #4 by Staffing Industry Analysts (SIA 2022)
- ClearlyRated Client Diamond Award winner (2020)
- One of the largest certified MBE companies in the NMSDC network (2022)
- Advanced Tier Services partner with AWS and Gold partner with Microsoft
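As one illustration of the data quality checks and validation processes this role calls for, here is a minimal sketch in Python. The rule set and field names are invented for illustration, not Innova's actual tooling:

```python
# Minimal row-level data quality checks of the kind an ETL load might run
# before committing a batch. Field names and rules here are hypothetical.
def validate_row(row):
    """Return a list of rule violations for one staged record."""
    errors = []
    if not row.get("customer_id"):
        errors.append("missing customer_id")
    if row.get("amount") is not None and row["amount"] < 0:
        errors.append("negative amount")
    if row.get("currency") not in {"INR", "USD", "EUR"}:
        errors.append("unknown currency")
    return errors

batch = [
    {"customer_id": "C1", "amount": 120.0, "currency": "INR"},
    {"customer_id": "",   "amount": -5.0,  "currency": "XXX"},
]
# Collect violations keyed by row index; clean rows produce no entry.
bad = {i: validate_row(r) for i, r in enumerate(batch) if validate_row(r)}
# bad -> {1: ['missing customer_id', 'negative amount', 'unknown currency']}
```

In a real pipeline the same checks would quarantine failing rows into a reject table rather than a dict, but the gate-before-load pattern is the same.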

Posted 1 month ago

Apply

3.0 - 6.0 years

3 - 7 Lacs

Pune

Work from Office

The Apex Group was established in Bermuda in 2003 and is now one of the world's largest fund administration and middle office solutions providers. Our business is unique in its ability to reach globally, service locally, and provide cross-jurisdictional services. With our clients at the heart of everything we do, our hard-working team has successfully delivered on an unprecedented growth and transformation journey, and we are now represented by circa 13,000 employees across 112 offices worldwide. Your career with us should reflect your energy and passion. That's why, at Apex Group, we will do more than simply empower you. We will work to supercharge your unique skills and experience. Take the lead and we'll give you the support you need to be at the top of your game. And we offer you the freedom to be a positive disrupter and turn big ideas into bold, industry-changing realities. For our business, for clients, and for you.

Middle Office - Analyst - Business Systems
Location: Pune
Experience: 3-6 years
Designation: Associate
Industry/Domain: ETL/mapping tools, VBA, SQL, capital markets knowledge, bank debts, Solvas

Apex Group Ltd has an immediate requirement for a Middle Office Tech Specialist. As an ETL techno-functional support specialist for Solvas, you will be the bridge between technical ETL processes and end users, ensuring the effective functioning and support of data integration solutions. Your role involves addressing user queries, providing technical support for ETL-related issues, and collaborating with both technical and non-technical teams to ensure a seamless data integration environment. You will contribute to the development, maintenance, and enhancement of ETL processes for the Solvas application, ensuring they align with business requirements.

Work Environment:
- Highly motivated, collaborative, and results driven.
- Growing business within a dynamic and evolving industry.
- Entrepreneurial approach to everything we do.
- Continual focus on process improvement and automation.

Functional/Business Expertise Required:
- Serve as the primary point of contact for end users seeking technical assistance related to Solvas applications, addressing queries related to ETL processes, data transformations, and data loads.
- Provide clear and concise explanations to non-technical users regarding ETL functionality and troubleshoot issues.
- Integrate client trade files into the Conversant system: design, develop, implement, and test technical solutions based on client and business requirements.
- Diagnose and troubleshoot ETL-related issues reported by end users or identified through monitoring systems.
- Work closely with business analysts and end users to understand and document ETL requirements.
- Monitor ETL jobs and processes to ensure optimal performance and identify potential issues.
- Create user documentation and guides to facilitate self-service issue resolution.
- Hands-on experience with any ETL tool is mandatory.
- Strong command of SQL, VBA, and advanced Excel.
- Good understanding of Solvas or any other loan operation system; good knowledge of Solvas bank debt workflows is mandatory.
- Intermediate knowledge of financial instruments, both listed and unlisted (OTC), including but not limited to derivatives, illiquid stocks, private equity, bank debts, and swaps.
- Understanding of the loan operations industry is necessary.
- Knowledge of market data provider applications (Bloomberg, Refinitiv, etc.).
- Proficiency in any loan operation system, preferably Solvas.
- An ability to work under pressure with changing priorities.
- Strong analytical and problem-solving skills.

Experience and Knowledge:
- 3+ years of related support/technical experience in a loan operation system and accounting system (Solvas/Geneva).
- Connect with operations to understand and resolve their issues.
- Experience working with data vendors (Bloomberg/Refinitiv/Markit).
- Able to handle reporting issues and new requirements raised by operations.
- Strong analytical, problem-solving, and troubleshooting abilities.
- Strong knowledge of Excel and Excel functions for business support.
- Create and maintain business documentation, including user manuals and guides.
- Experience with system upgrades/migrations/integrations.

Other Skills:
- Good team player with the ability to work on a local, regional, and global basis.
- Good communication and management skills.
- Good understanding of financial services/capital markets/fund administration.

Disclaimer: Unsolicited CVs sent to Apex (Talent Acquisition team or hiring managers) by recruitment agencies will not be accepted for this position. Apex operates a direct sourcing model; where agency assistance is required, the Talent Acquisition team will engage directly with our exclusive recruitment partners.

Posted 1 month ago

Apply

5.0 - 7.0 years

18 - 20 Lacs

Mumbai

Work from Office

Work Timing: Standard IST. 6-month contract. Experience in batch/real-time integrations with ODI 11g, customizing knowledge modules, and design/development expertise. Skilled in ETL processes and PL/SQL, and in building interfaces, packages, load plans, and sequences in ODI.

Required Candidate Profile: Experience with ODI master and work repositories, data modeling and ETL design, multi-system integration, error handling, automation, and object migration in ODI. Performance tuning, unit testing, and debugging of mappings.

Perks and benefits: MNC.

Posted 1 month ago

Apply

7.0 - 12.0 years

25 - 27 Lacs

Kolkata, Bengaluru

Hybrid

Required Experience:
• Design, develop, and maintain ETL/ELT workflows using Informatica IICS.
• Collaborate with business and technical teams to understand requirements and translate them into robust data integration solutions.
• Optimize data pipelines for performance and scalability.
• Integrate IICS solutions with cloud-based data stores such as Google BigQuery and cloud storage solutions.
• Develop data mappings, task flows, parameter files, and reusable objects.
• Manage deployments, migrations, and version control for IICS assets.
• Perform unit testing, debugging, and troubleshooting of ETL jobs.
• Document data flow and architecture as part of the SDLC.
• Work in an Agile environment and participate in sprint planning, reviews, and retrospectives.
• Provide mentorship and code reviews for junior developers, ensuring adherence to best practices and coding standards.

Skills & Qualifications:
• Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
• 7+ years of experience in ETL development with at least 2–3 years in Informatica IICS.
• Strong experience in data integration, transformation, and orchestration using IICS.
• Good working knowledge of cloud data platforms, preferably Google Cloud Platform (GCP).
• Hands-on experience with Google BigQuery (GBQ), including writing SQL queries, data ingestion, and optimization.
• Strong SQL skills and experience with RDBMS (e.g., Oracle, SQL Server, PostgreSQL).
• Experience integrating data from various sources including on-prem, SaaS applications, and cloud data lakes.
• Familiarity with data governance, data quality, and data cataloging tools.
• Understanding of REST APIs and experience with API integration in IICS.
• Excellent problem-solving skills and attention to detail.
• Strong communication skills and the ability to work effectively in a team.

Posted 1 month ago

Apply

6.0 - 8.0 years

15 - 18 Lacs

Bengaluru

Work from Office

Role Overview: For this newly created role, we are seeking a technically proficient Online Data Analyst to join our team. You will be responsible for extracting, transforming, and loading (ETL) data from various online sources using APIs, managing and querying large datasets with SQL, and ensuring the reliability and performance of our data systems through Azure monitoring and alerts. Your analytical skills will be essential in uncovering actionable insights from complex datasets, and you will work closely with cross-functional teams to support data-driven decision-making.

Key Responsibilities:
Data Collection and Management: Extract, transform, and load (ETL) data from various sources into our online applications. Utilize APIs to automate data integration from diverse platforms. Maintain and enhance existing data pipelines, ensuring data integrity and consistency.
Data Analysis: Conduct in-depth data analysis to uncover trends, patterns, and actionable insights. Utilize SQL for querying, managing, and manipulating large datasets. Create and maintain interactive dashboards and reports to present data insights to stakeholders.
Monitoring and Alerts: Implement and manage Azure monitoring and alerting systems to ensure data workflows and applications are functioning optimally. Proactively identify and troubleshoot issues in data processes, ensuring minimal downtime and maximum reliability.
Collaboration and Communication: Collaborate with cross-functional teams, including marketing, product development, and IT, to understand data needs and provide analytical support. Communicate complex data findings and recommendations to both technical and non-technical audiences. Contribute to continuous improvement of data processes, analytical methodologies, and best practices.

Critical Competencies for Success:
Educational Background: Bachelor's degree in Data Science, Statistics, Computer Science, or a related field.
Technical Skills: Proficient in SQL for data querying and manipulation. Experience with ETL processes and tools. Strong understanding of API integration and data automation. Hands-on experience with Azure monitoring and alerting tools. Knowledge of languages such as HTML and JavaScript is a plus.
Experience: Proven experience in a data analyst role or similar position. Demonstrated experience with online data sources and web analytics. Experience with cloud platforms, particularly Azure, is required.
Analytical Skills: Strong problem-solving skills and attention to detail. Ability to analyze large datasets and generate meaningful insights. Excellent statistical and analytical capabilities.
Soft Skills: Strong communication and presentation skills. Ability to work independently and collaboratively in a team environment. Good organizational skills and ability to manage multiple projects simultaneously.

Preferred candidate profile: Bachelor's degree in Data Science, Statistics, Computer Science, or a related field.
Perks and benefits: Industry standard.
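The core workflow this role describes, pulling structured data from an API and analyzing it with SQL, can be sketched in a few lines. The JSON payload below is stubbed in place of a real HTTP call, and the table and page names are hypothetical:

```python
import json
import sqlite3

# Sketch of ETL from an API into a SQL store. A real feed would come from
# an HTTP request; here the JSON response is stubbed for illustration.
payload = json.loads('[{"page": "/home", "visits": 120},'
                     ' {"page": "/docs", "visits": 45},'
                     ' {"page": "/home", "visits": 30}]')

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE page_views (page TEXT, visits INTEGER)")
# Named placeholders let the JSON dicts load directly as rows.
conn.executemany("INSERT INTO page_views VALUES (:page, :visits)", payload)

# SQL does the analysis: total visits per page, busiest first.
top = conn.execute(
    "SELECT page, SUM(visits) AS total FROM page_views "
    "GROUP BY page ORDER BY total DESC").fetchall()
# top -> [('/home', 150), ('/docs', 45)]
```

The same shape scales up directly: swap the stub for an authenticated API client and the in-memory database for the warehouse the dashboards read from.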

Posted 1 month ago

Apply

3.0 - 6.0 years

7 - 15 Lacs

Gurugram

Work from Office

Dear Candidate, greetings! Hiring for SSIS Developer - Gurgaon (work from office).
Responsibilities:
1. Must have experience with SSIS packages for ETL processes
2. End-to-end data migration
3. Must have experience with Oracle Cloud
Share your resume at abhishek@xinoe.com. Regards,

Posted 1 month ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Bengaluru

Work from Office

As a Data Engineer at IBM you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.

In this role, your responsibilities may include:
- Implementing and validating predictive models, and creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques
- Designing and implementing enterprise search applications such as Elasticsearch and Splunk for client requirements
- Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours
- Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results

Required education: Bachelor's Degree
Preferred education: High School Diploma/GED

Required technical and professional expertise:
- Good hands-on experience with dbt is required; ETL with DataStage and Snowflake preferred.
- Ability to use programming languages such as Java, Python, or Scala to build pipelines that extract and transform data from a repository to a data consumer
- Ability to use extract, transform, and load (ETL) tools and/or data integration or federation tools to prepare and transform data as needed
- Ability to use leading-edge tools such as Linux, SQL, Python, Spark, Hadoop, and Java

Preferred technical and professional experience:
- You thrive on teamwork and have excellent verbal and written communication skills
- Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions
- Ability to communicate results to technical and non-technical audiences
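The pipeline-building ability described above (extracting and transforming data from a repository to a data consumer) can be illustrated with a minimal extract-transform-load chain. The records, metric, and target store are invented for illustration:

```python
# A minimal extract -> transform -> load pipeline, sketched in pure Python.
# Source rows, the latency metric, and the target dict are all hypothetical.
def extract():
    """Stand-in repository: raw event rows as dictionaries."""
    return [
        {"user": "a", "ms": 1200},
        {"user": "b", "ms": 340},
        {"user": "a", "ms": 460},
    ]

def transform(rows):
    """Aggregate total latency per user, converted to seconds."""
    totals = {}
    for r in rows:
        totals[r["user"]] = totals.get(r["user"], 0) + r["ms"]
    return {u: ms / 1000 for u, ms in totals.items()}

def load(result, target):
    """Deliver the aggregate to the data consumer (here, a dict)."""
    target.update(result)

warehouse = {}
load(transform(extract()), warehouse)
# warehouse -> {'a': 1.66, 'b': 0.34}
```

Keeping the three stages as separate functions is what makes a pipeline reusable: the same transform can be pointed at a different extractor or loader without rewriting the logic, which is the property ETL frameworks and tools like dbt formalize.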

Posted 1 month ago

Apply

15.0 - 20.0 years

17 - 22 Lacs

Chennai

Work from Office

Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum 7.5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide effective solutions, ensuring that the data infrastructure is robust and scalable to meet the demands of the organization.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge in data engineering.
- Continuously evaluate and improve data processes to enhance efficiency and effectiveness.

Professional & Technical Skills:
- Must-have skills: Proficiency in Databricks Unified Data Analytics Platform.
- Experience with data integration and ETL tools.
- Strong understanding of data modeling and database design principles.
- Familiarity with cloud platforms and services related to data storage and processing.
- Knowledge of programming languages such as Python or Scala for data manipulation.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Databricks Unified Data Analytics Platform.
- This position is based in Chennai.
- A 15 years full time education is required.

Posted 1 month ago

Apply

15.0 - 20.0 years

17 - 22 Lacs

Chennai

Work from Office

Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum 7.5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide effective solutions, ensuring that the data infrastructure is robust and scalable to meet the demands of the organization.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge in data engineering.
- Continuously evaluate and improve data processes to enhance efficiency and effectiveness.

Professional & Technical Skills:
- Must-have skills: Proficiency in Databricks Unified Data Analytics Platform.
- Experience with data integration and ETL tools.
- Strong understanding of data modeling and database design principles.
- Familiarity with cloud platforms and services related to data storage and processing.
- Knowledge of programming languages such as Python or Scala for data manipulation.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Databricks Unified Data Analytics Platform.
- This position is based in Chennai.
- A 15 years full time education is required.

Posted 1 month ago

Apply

15.0 - 20.0 years

17 - 22 Lacs

Bengaluru

Work from Office

Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: PySpark
Good-to-have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide effective solutions, ensuring that the data infrastructure is robust and scalable to meet the demands of the organization.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge in data engineering.
- Continuously evaluate and improve data processes to enhance efficiency and effectiveness.

Professional & Technical Skills:
- Must-have skills: Proficiency in PySpark.
- Strong understanding of data modeling and database design principles.
- Experience with data warehousing solutions and ETL tools.
- Familiarity with cloud platforms such as AWS, Azure, or Google Cloud.
- Knowledge of data governance and data quality frameworks.

Additional Information:
- The candidate should have a minimum of 5 years of experience in PySpark.
- This position is based at our Bengaluru office.
- A 15 years full time education is required.

Posted 1 month ago

Apply

15.0 - 20.0 years

17 - 22 Lacs

Bengaluru

Work from Office

Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: PySpark
Good-to-have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide effective solutions, ensuring that the data infrastructure is robust and scalable to meet the demands of the organization.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge in data engineering.
- Continuously evaluate and improve data processes to enhance efficiency and effectiveness.

Professional & Technical Skills:
- Must-have skills: Proficiency in PySpark.
- Strong understanding of data modeling and database design principles.
- Experience with data warehousing solutions and ETL tools.
- Familiarity with cloud platforms such as AWS, Azure, or Google Cloud.
- Knowledge of data governance and data quality frameworks.

Additional Information:
- The candidate should have a minimum of 5 years of experience in PySpark.
- This position is based at our Bengaluru office.
- A 15 years full time education is required.

Posted 1 month ago

Apply

3.0 - 8.0 years

5 - 10 Lacs

Chennai

Work from Office

Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and contribute to the overall data strategy, ensuring that the data architecture aligns with business objectives and supports analytical needs.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation/contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Develop and optimize data pipelines to enhance data processing efficiency.
- Collaborate with stakeholders to gather requirements and translate them into technical specifications.

Professional & Technical Skills:
- Must-have skills: Proficiency in Databricks Unified Data Analytics Platform.
- Strong understanding of data modeling and database design principles.
- Experience with ETL tools and data integration techniques.
- Familiarity with cloud platforms and services related to data storage and processing.
- Knowledge of programming languages such as Python or Scala for data manipulation.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Chennai office.
- A 15 years full time education is required.

Posted 1 month ago

Apply

3.0 - 8.0 years

5 - 10 Lacs

Chennai

Work from Office

Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and contribute to the overall data strategy, ensuring that the data architecture aligns with business objectives and supports analytical needs.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation/contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Develop and optimize data pipelines to enhance data processing efficiency.
- Collaborate with stakeholders to gather requirements and translate them into technical specifications.

Professional & Technical Skills:
- Must-have skills: Proficiency in Databricks Unified Data Analytics Platform.
- Strong understanding of data modeling and database design principles.
- Experience with ETL tools and data integration techniques.
- Familiarity with cloud platforms and services related to data storage and processing.
- Knowledge of programming languages such as Python or Scala for data manipulation.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Chennai office.
- A 15 years full time education is required.

Posted 1 month ago

Apply

3.0 - 8.0 years

5 - 10 Lacs

Pune

Work from Office

Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to effectively migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and contribute to the overall data strategy of the organization, ensuring that data is accessible and reliable for decision-making purposes.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation/contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Collaborate with stakeholders to gather and analyze data requirements.
- Design and implement scalable data pipelines to support data processing needs.

Professional & Technical Skills:
- Must-have skills: Proficiency in Databricks Unified Data Analytics Platform.
- Strong understanding of data modeling and database design principles.
- Experience with ETL tools and data integration techniques.
- Familiarity with cloud platforms and services related to data storage and processing.
- Knowledge of programming languages such as Python or Scala for data manipulation.
Additional Information:- The candidate should have minimum 3 years of experience in Databricks Unified Data Analytics Platform.- This position is based at our Pune office.- A 15 years full time education is required. Qualification 15 years full time education
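To make the extract-transform-load workflow these postings describe concrete, here is a minimal, pure-Python sketch of an ETL step. It is illustrative only, not Databricks-specific: the CSV source, the `fact_sales` table, and the null-handling rule are all hypothetical stand-ins for a real source system and warehouse.

```python
import csv
import io
import sqlite3

# Extract: parse raw CSV text (a stand-in for a real source-system extract).
RAW = """id,name,amount
1,alice,100.5
2,bob,
3,carol,42.0
"""

def extract(text):
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    # Transform: drop rows with a missing amount, then cast types.
    out = []
    for r in rows:
        if r["amount"]:
            out.append((int(r["id"]), r["name"], float(r["amount"])))
    return out

def load(rows, conn):
    # Load: write idempotently into the target table (hypothetical schema).
    conn.execute(
        "CREATE TABLE IF NOT EXISTS fact_sales "
        "(id INTEGER PRIMARY KEY, name TEXT, amount REAL)"
    )
    conn.executemany("INSERT OR REPLACE INTO fact_sales VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW)), conn)
loaded = conn.execute("SELECT COUNT(*), SUM(amount) FROM fact_sales").fetchone()
print(loaded)  # (2, 142.5) - bob's row was dropped for the missing amount
```

The same three-stage shape carries over to Databricks or PySpark pipelines; only the extract connectors and the load target change.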

Posted 1 month ago

Apply


15.0 - 20.0 years

17 - 22 Lacs

Hyderabad

Work from Office

Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must Have Skills: PySpark
Good to Have Skills: NA
Minimum Experience: 5 year(s)
Educational Qualification: 15 years full time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide effective solutions, ensuring that the data infrastructure is robust and scalable enough to meet the demands of the organization.

Roles & Responsibilities:
- Act as an SME.
- Collaborate with and manage the team to perform.
- Take responsibility for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge in data engineering.
- Continuously evaluate and improve data processes to enhance efficiency and effectiveness.

Professional & Technical Skills:
- Must Have: Proficiency in PySpark.
- Strong understanding of data modeling and database design principles.
- Experience with data warehousing solutions and ETL tools.
- Familiarity with cloud platforms such as AWS, Azure, or Google Cloud.
- Knowledge of data governance and data quality best practices.

Additional Information:
- The candidate should have a minimum of 5 years of experience in PySpark.
- This position is based in Hyderabad.
- A 15 years full time education is required.
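The "data quality best practices" this posting asks for usually mean gating a pipeline so that rows failing checks are quarantined rather than loaded. The sketch below shows that pattern in plain Python; the required columns and the duplicate-id rule are hypothetical examples, not a specific framework's API.

```python
# Illustrative data-quality gate: rows failing null or uniqueness checks
# are quarantined with a reason, so the load stays clean and auditable.
def run_quality_checks(rows, required=("id", "email")):
    seen_ids = set()
    passed, quarantined = [], []
    for row in rows:
        missing = [col for col in required if not row.get(col)]
        duplicate = row.get("id") in seen_ids
        if missing or duplicate:
            quarantined.append((row, missing, duplicate))
        else:
            seen_ids.add(row["id"])
            passed.append(row)
    return passed, quarantined

rows = [
    {"id": 1, "email": "a@x.com"},
    {"id": 1, "email": "b@x.com"},   # duplicate id -> quarantined
    {"id": 2, "email": ""},          # missing email -> quarantined
    {"id": 3, "email": "c@x.com"},
]
ok, bad = run_quality_checks(rows)
print(len(ok), len(bad))  # 2 2
```

In a PySpark pipeline the same split is typically expressed with two `filter` passes or a library such as a schema/expectations framework, but the quarantine-with-reason structure is the same.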

Posted 1 month ago

Apply


15.0 - 20.0 years

17 - 22 Lacs

Navi Mumbai

Work from Office

Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must Have Skills: PySpark
Good to Have Skills: NA
Minimum Experience: 5 year(s)
Educational Qualification: 15 years full time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide effective solutions, ensuring that the data infrastructure is robust and scalable enough to meet the demands of the organization.

Roles & Responsibilities:
- Act as an SME.
- Collaborate with and manage the team to perform.
- Take responsibility for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge in data engineering.
- Continuously evaluate and improve data processes to enhance efficiency and effectiveness.

Professional & Technical Skills:
- Must Have: Proficiency in PySpark.
- Strong understanding of data modeling and database design principles.
- Experience with ETL tools and data integration techniques.
- Familiarity with cloud platforms such as AWS or Azure for data storage and processing.
- Knowledge of data warehousing concepts and best practices.

Additional Information:
- The candidate should have a minimum of 5 years of experience in PySpark.
- This position is based in Mumbai.
- A 15 years full time education is required.

Posted 1 month ago

Apply


3.0 - 8.0 years

5 - 10 Lacs

Bengaluru

Work from Office

Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must Have Skills: Databricks Unified Data Analytics Platform
Good to Have Skills: NA
Minimum Experience: 3 year(s)
Educational Qualification: 15 years full time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and contribute to the overall data strategy, ensuring that the data infrastructure is robust and scalable enough to meet the evolving needs of the organization.

Roles & Responsibilities:
- Perform independently and become an SME.
- Participate actively in team discussions.
- Contribute solutions to work-related problems.
- Assist in the design and implementation of data architecture to support data initiatives.
- Monitor and optimize data pipelines for performance and reliability.

Professional & Technical Skills:
- Must Have: Proficiency in Databricks Unified Data Analytics Platform.
- Strong understanding of data modeling and database design principles.
- Experience with ETL tools and processes.
- Familiarity with cloud platforms and services related to data storage and processing.
- Knowledge of programming languages such as Python or Scala for data manipulation.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Bengaluru office.
- A 15 years full time education is required.
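"Optimize data pipelines for performance and reliability" often comes down to handling transient failures (network blips, rate limits) without failing the whole run. A common pattern is retry with exponential backoff, sketched here in plain Python; the step function, attempt count, and delay are hypothetical choices, not a specific platform's mechanism.

```python
import time

# Illustrative reliability wrapper: retry a flaky pipeline step with
# exponential backoff before giving up and re-raising.
def run_with_retries(step, max_attempts=3, base_delay=0.01):
    for attempt in range(1, max_attempts + 1):
        try:
            return step()
        except Exception:
            if attempt == max_attempts:
                raise  # exhausted retries: surface the failure
            time.sleep(base_delay * 2 ** (attempt - 1))

calls = {"n": 0}

def flaky_step():
    # Simulated transient failure: succeeds on the third attempt.
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "loaded"

result = run_with_retries(flaky_step)
print(result, calls["n"])  # loaded 3
```

Orchestrators such as Databricks Jobs or Airflow expose retry counts and backoff as configuration; this shows the logic they implement for you.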

Posted 1 month ago

Apply


3.0 - 8.0 years

5 - 10 Lacs

Navi Mumbai

Work from Office

Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must Have Skills: Databricks Unified Data Analytics Platform
Good to Have Skills: NA
Minimum Experience: 3 year(s)
Educational Qualification: 15 years full time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs. Additionally, you will monitor and optimize data workflows to enhance performance and reliability, ensuring that data is accessible and actionable for stakeholders.

Roles & Responsibilities:
- Perform independently and become an SME.
- Participate actively in team discussions.
- Contribute solutions to work-related problems.
- Develop and maintain robust data pipelines to support data processing and analytics.
- Collaborate with data scientists and analysts to understand data needs and provide appropriate solutions.

Professional & Technical Skills:
- Must Have: Proficiency in Databricks Unified Data Analytics Platform.
- Strong understanding of data modeling and database design principles.
- Experience with ETL tools and processes for data migration and transformation.
- Familiarity with cloud platforms and services related to data storage and processing.
- Knowledge of programming languages such as Python or Scala for data manipulation.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Mumbai office.
- A 15 years full time education is required.

Posted 1 month ago

Apply


8.0 - 13.0 years

10 - 15 Lacs

Bengaluru

Work from Office

Very good experience with the Continuous Flow Graph tool used for point-based development. Design, develop, and maintain ETL processes using Ab Initio tools. Write, test, and deploy Ab Initio graphs, scripts, and other necessary components. Troubleshoot and resolve data-processing issues and improve performance.

Required Education: Bachelor's Degree
Preferred Education: Master's Degree

Required technical and professional expertise:
- 8 years of overall experience, with 5+ years relevant.
- Extract, transform, and load data from various sources into data warehouses, operational data stores, or other target systems.
- Work with different data formats, including structured, semi-structured, and unstructured data.

Preferred technical and professional experience:
- Effective communication and presentation skills.
- Industry expertise / specialization.
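A recurring task behind "work with semi-structured data" is flattening nested records (JSON, typically) into the tabular rows a warehouse expects. The sketch below shows one simple approach in plain Python; the record shape and the underscore-joined column naming are hypothetical conventions, not Ab Initio-specific behavior.

```python
import json

# Illustrative flattener: nested JSON objects become flat columns whose
# names join the path with underscores (e.g. customer.name -> customer_name).
def flatten(record, prefix=""):
    flat = {}
    for key, value in record.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            flat.update(flatten(value, name + "_"))
        else:
            flat[name] = value
    return flat

raw = '{"id": 7, "customer": {"name": "acme", "region": "apac"}, "amount": 12.5}'
row = flatten(json.loads(raw))
print(row)
# {'id': 7, 'customer_name': 'acme', 'customer_region': 'apac', 'amount': 12.5}
```

ETL tools generally offer a built-in equivalent (normalize/reformat components in Ab Initio, `json_normalize` in pandas); the point is the path-to-column mapping they all perform.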

Posted 1 month ago

Apply