15.0 - 20.0 years
9 - 13 Lacs
Hyderabad
Work from Office
Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must-have skills: Data Building Tool
Good-to-have skills: NA
Minimum experience: 5 years
Educational qualification: 15 years of full-time education

Summary: As a Data Platform Engineer, you will assist with the data platform blueprint and design, encompassing the relevant data platform components. Your typical day will involve collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models, while also engaging in discussions to refine and enhance the overall data architecture. You will be involved in various stages of the data platform lifecycle, ensuring that all components work harmoniously to support the organization's data needs and objectives.

Roles & Responsibilities:
- Expected to be a subject matter expert (SME).
- Collaborate with and manage the team to perform.
- Take responsibility for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Monitor and evaluate team performance to ensure alignment with project goals.

Professional & Technical Skills:
- Must-have: Proficiency in Data Building Tool.
- Strong understanding of data modeling and architecture principles.
- Experience with data integration techniques and tools.
- Familiarity with cloud-based data platforms and services.
- Ability to troubleshoot and resolve data-related issues efficiently.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Data Building Tool.
- This position is based at our Hyderabad office.
- 15 years of full-time education is required.
Posted 3 weeks ago
12.0 - 15.0 years
9 - 13 Lacs
Hyderabad
Work from Office
Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must-have skills: Data Building Tool
Good-to-have skills: NA
Minimum experience: 12 years
Educational qualification: 15 years of full-time education

Summary: As a Data Platform Engineer, you will assist with the data platform blueprint and design, encompassing the relevant data platform components. Your typical day will involve collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models, while also engaging in discussions to refine and enhance the overall data architecture. You will be involved in various stages of the data platform lifecycle, ensuring that all components work harmoniously to support the organization's data needs and objectives.

Roles & Responsibilities:
- Expected to be a subject matter expert (SME).
- Collaborate with and manage the team to perform.
- Take responsibility for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Expected to provide solutions to problems that apply across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities and understanding.
- Monitor and evaluate team performance to ensure alignment with project goals.

Professional & Technical Skills:
- Must-have: Proficiency in Data Building Tool.
- Strong understanding of data architecture principles and best practices.
- Experience with data integration techniques and tools.
- Familiarity with cloud-based data platforms and services.
- Ability to troubleshoot and resolve data-related issues efficiently.

Additional Information:
- The candidate should have a minimum of 12 years of experience in Data Building Tool.
- This position is based at our Hyderabad office.
- 15 years of full-time education is required.
Posted 3 weeks ago
12.0 - 15.0 years
9 - 13 Lacs
Hyderabad
Work from Office
Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must-have skills: Data Building Tool
Good-to-have skills: NA
Minimum experience: 12 years
Educational qualification: 15 years of full-time education

Summary: As a Data Platform Engineer, you will assist with the data platform blueprint and design, encompassing the relevant data platform components. Your typical day will involve collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models, while also engaging in discussions to refine and enhance the overall data architecture. You will be involved in various stages of the data platform lifecycle, ensuring that all components work harmoniously to support the organization's data needs and objectives. Your role will require you to navigate complex data environments, providing insights and recommendations that drive effective data management and governance practices.

Roles & Responsibilities:
- Expected to be a subject matter expert (SME).
- Collaborate with and manage the team to perform.
- Take responsibility for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Expected to provide solutions to problems that apply across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities and foster a culture of continuous improvement.
- Monitor and evaluate the performance of data platform components, making recommendations for enhancements and optimizations.

Professional & Technical Skills:
- Must-have: Proficiency in Data Building Tool.
- Strong understanding of data architecture principles and best practices.
- Experience with data integration techniques and tools.
- Familiarity with cloud-based data platforms and services.
- Ability to analyze and troubleshoot data-related issues effectively.

Additional Information:
- The candidate should have a minimum of 12 years of experience in Data Building Tool.
- This position is based at our Hyderabad office.
- 15 years of full-time education is required.
Posted 3 weeks ago
5.0 - 10.0 years
12 - 18 Lacs
Hyderabad
Remote
Role: Senior Data Engineer (Azure/Snowflake)
Duration: 6+ months
Location: Remote
Working Hours: 12:30 pm - 9:30 pm IST (3 am - 12 pm EST)

Job Summary: We are seeking a Senior Data Engineer with advanced hands-on experience in Snowflake and Azure to support the development and optimization of enterprise-grade data pipelines. This role is ideal for someone who enjoys deep technical work and solving complex data engineering challenges in a modern cloud environment.

Key Responsibilities:
- Build and enhance scalable data pipelines using Azure Data Factory, Snowflake, and Azure Data Lake.
- Develop and maintain ELT processes to ingest and transform data from various structured and semi-structured sources.
- Write optimized and reusable SQL for complex data transformations in Snowflake.
- Collaborate closely with analytics teams to ensure clean, reliable data delivery.
- Monitor and troubleshoot pipeline performance, data quality, and reliability.
- Participate in code reviews and contribute to best practices around data engineering standards and governance.

Qualifications:
- 5+ years of data engineering experience in enterprise environments.
- Deep hands-on experience with Snowflake, Azure Data Factory, Azure Blob/Data Lake, and SQL.
- Proficient in scripting for data workflows (Python or similar).
- Strong grasp of data warehousing concepts and ELT development best practices.
- Experience with version control tools (e.g., Git) and CI/CD processes for data pipelines.
- Detail-oriented with strong problem-solving skills and the ability to work independently.
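For illustration only (not part of the posting): a minimal Python sketch of the kind of reusable Snowflake transformation step this role describes, using the Snowflake Python connector. The account, credentials, and table names are hypothetical; real credentials would come from a secret store.

# Hypothetical example: run an idempotent MERGE from a staging table into a dimension.
import snowflake.connector

MERGE_SQL = """
MERGE INTO analytics.dim_customer AS tgt
USING staging.customer_raw AS src
  ON tgt.customer_id = src.customer_id
WHEN MATCHED THEN UPDATE SET tgt.email = src.email, tgt.updated_at = CURRENT_TIMESTAMP()
WHEN NOT MATCHED THEN INSERT (customer_id, email, updated_at)
  VALUES (src.customer_id, src.email, CURRENT_TIMESTAMP())
"""

def run_transformation() -> None:
    conn = snowflake.connector.connect(
        account="my_account", user="etl_user", password="***",      # hypothetical
        warehouse="TRANSFORM_WH", database="ANALYTICS_DB", schema="STAGING",
    )
    cur = conn.cursor()
    try:
        cur.execute(MERGE_SQL)                 # upsert staged rows into the dimension
        print(f"Rows affected: {cur.rowcount}")
    finally:
        cur.close()
        conn.close()

if __name__ == "__main__":
    run_transformation()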
Posted 4 weeks ago
15.0 - 20.0 years
9 - 13 Lacs
Chennai
Work from Office
Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must-have skills: Data Building Tool
Good-to-have skills: NA
Minimum experience: 5 years
Educational qualification: 15 years of full-time education

Summary: As a Data Platform Engineer, you will assist with the data platform blueprint and design, encompassing the relevant data platform components. Your typical day will involve collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models, while also engaging in discussions to refine and enhance the overall data architecture. You will be involved in various stages of the data platform lifecycle, ensuring that all components work harmoniously to support the organization's data needs and objectives.

Roles & Responsibilities:
- Expected to be a subject matter expert (SME).
- Collaborate with and manage the team to perform.
- Take responsibility for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Monitor and evaluate team performance to ensure alignment with project goals.

Professional & Technical Skills:
- Must-have: Proficiency in Data Building Tool.
- Strong understanding of data modeling techniques.
- Experience with data integration and ETL processes.
- Familiarity with cloud-based data platforms and services.
- Ability to troubleshoot and optimize data workflows.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Data Building Tool.
- This position is based at our Chennai office.
- 15 years of full-time education is required.
Posted 4 weeks ago
3.0 - 4.0 years
9 - 14 Lacs
Mumbai
Work from Office
Job Title: Alteryx Developer
Location: Mumbai, India
Experience: 8+ years
Notice Period: Immediate joiners preferred

About Us: Capco, a Wipro company, is a global technology and management consulting firm. Capco was awarded Consultancy of the Year at the British Bank Awards and ranked among the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With a presence across 32 cities globally, we support 100+ clients across the banking, financial services, and energy sectors. We are recognized for our deep transformation execution and delivery.

WHY JOIN CAPCO: You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers, and other key players in the industry - projects that will transform the financial services industry.

MAKE AN IMPACT: Innovative thinking, delivery excellence, and thought leadership to help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services.

#BEYOURSELFATWORK: Capco has a tolerant, open culture that values diversity, inclusivity, and creativity.

CAREER ADVANCEMENT: With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands.

DIVERSITY & INCLUSION: We believe that diversity of people and perspective gives us a competitive advantage.

Job Summary: We are seeking a highly experienced Alteryx Developer to join our dynamic team in Mumbai. The ideal candidate will have 8+ years of experience in data analytics, ETL processes, and workflow automation using Alteryx. The role requires strong problem-solving skills, hands-on experience in data transformation, and the ability to work in a fast-paced environment. Immediate joiners will be given preference.

Key Responsibilities:
- Design, develop, and maintain scalable Alteryx workflows and analytical solutions.
- Collaborate with business stakeholders to understand data requirements and deliver efficient ETL processes.
- Optimize workflows for performance and maintainability.
- Integrate Alteryx with various data sources such as SQL Server, Excel, APIs, and cloud platforms.
- Perform data cleansing, transformation, and validation.
- Document workflows, processes, and data pipelines.
- Work with cross-functional teams to ensure data quality and consistency.
- Provide production support and troubleshooting for existing workflows.

Required Skills and Qualifications:
- 8+ years of experience in data analytics or ETL development, with at least 3-4 years hands-on with Alteryx.
- Strong knowledge of Alteryx Designer, Server, and Gallery.
- Experience working with databases such as SQL Server, Oracle, and cloud data platforms.
- Proficient in writing complex SQL queries.
- Understanding of data governance, data quality, and best practices.
- Strong analytical and communication skills.
- Alteryx certification is a plus.
- Experience with reporting/visualization tools such as Tableau or Power BI is a bonus.

Preferred Qualifications:
- Background in the financial services, banking, or consulting domain.
- Experience with scripting languages (Python, R) within Alteryx is a plus.
- Knowledge of capital markets or corporate banking is an added advantage.

If you are keen to join us, you will be part of an organization that values your contributions, recognizes your potential, and provides ample opportunities for growth. For more information, visit www.capco.com. Follow us on Twitter, Facebook, LinkedIn, and YouTube.
Posted 4 weeks ago
2.0 - 7.0 years
4 - 9 Lacs
Coimbatore
Work from Office
Project Role: AI/ML Engineer
Project Role Description: Develops applications and systems that utilize AI tools and Cloud AI services, with a proper cloud or on-prem application pipeline of production-ready quality. Must be able to apply GenAI models as part of the solution; work may also include deep learning, neural networks, chatbots, and image processing.
Must-have skills: Google Cloud Machine Learning Services
Good-to-have skills: GCP Dataflow, Google Pub/Sub, Google Dataproc
Minimum experience: 2 years
Educational qualification: 15 years of full-time education

Summary: We are seeking a skilled GCP Data Engineer to join our dynamic team. The ideal candidate will design, build, and maintain scalable data pipelines and solutions on Google Cloud Platform (GCP). This role requires expertise in cloud-based data engineering and hands-on experience with GCP tools and services, ensuring efficient data integration, transformation, and storage for various business use cases.

Roles & Responsibilities:
- Design, develop, and deploy data pipelines using GCP services such as Dataflow, BigQuery, Pub/Sub, and Cloud Storage.
- Optimize and monitor data workflows for performance, scalability, and reliability.
- Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements and implement solutions.
- Implement data security and governance measures, ensuring compliance with industry standards.
- Automate data workflows and processes for operational efficiency.
- Troubleshoot and resolve technical issues related to data pipelines and platforms.
- Document technical designs, processes, and best practices to ensure maintainability and knowledge sharing.

Professional & Technical Skills:
Must have:
- Proficiency in GCP tools such as BigQuery, Dataflow, Pub/Sub, Cloud Composer, and Cloud Storage.
- Expertise in SQL and experience with data modeling and query optimization.
- Solid programming skills in Python for data processing and ETL development.
- Experience with CI/CD pipelines and version control systems (e.g., Git).
- Knowledge of data warehousing concepts, ELT/ETL processes, and real-time streaming.
- Strong understanding of data security, encryption, and IAM policies on GCP.
Good to have:
- Experience with Dialogflow or CCAI tools.
- Knowledge of machine learning pipelines and integration with AI/ML services on GCP.
- Certifications such as Google Professional Data Engineer or Google Cloud Architect.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Google Cloud Machine Learning Services and 3-5 years of overall experience.
- The ideal candidate will possess a strong educational background in computer science, mathematics, or a related field, along with a proven track record of delivering impactful data-driven solutions.

Qualification: 15 years of full-time education
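Illustrative sketch (not from the posting): one way a batch Cloud Storage-to-BigQuery ingestion step like those described above might look in Python with the google-cloud-bigquery client. The project, bucket, dataset, and table names are hypothetical.

# Hypothetical example: load newline-delimited JSON from GCS into a BigQuery table.
from google.cloud import bigquery

def load_events(project: str = "my-gcp-project") -> None:
    client = bigquery.Client(project=project)
    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
        autodetect=True,                                   # infer schema from the files
        write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
    )
    load_job = client.load_table_from_uri(
        "gs://my-landing-bucket/events/2024-01-01/*.json",  # hypothetical landing path
        f"{project}.analytics.raw_events",
        job_config=job_config,
    )
    load_job.result()                                       # block until the load finishes
    table = client.get_table(f"{project}.analytics.raw_events")
    print(f"Load complete; table now has {table.num_rows} rows.")

if __name__ == "__main__":
    load_events()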
Posted 1 month ago
4.0 - 5.0 years
6 - 7 Lacs
Karnataka
Work from Office
Develop and manage ETL pipelines using Python. Responsible for transforming and loading data efficiently from source to destination systems, ensuring clean and accurate data.
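For illustration, a minimal Python ETL sketch of the source-to-destination flow this posting describes; the file name, column names, and SQLite destination are hypothetical stand-ins for real source and target systems.

# Hypothetical example: extract a CSV, clean it, and load it into a destination table.
import sqlite3
import pandas as pd

def run_pipeline(src_csv: str = "orders_raw.csv", dest_db: str = "warehouse.db") -> None:
    # Extract
    df = pd.read_csv(src_csv)
    # Transform: drop duplicates, normalise column names, remove rows missing the key
    df = df.drop_duplicates()
    df.columns = [c.strip().lower() for c in df.columns]
    df = df.dropna(subset=["order_id"])
    # Load
    with sqlite3.connect(dest_db) as conn:
        df.to_sql("orders_clean", conn, if_exists="replace", index=False)
    print(f"Loaded {len(df)} clean rows into {dest_db}:orders_clean")

if __name__ == "__main__":
    run_pipeline()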
Posted 1 month ago
4.0 - 6.0 years
6 - 8 Lacs
Mumbai
Work from Office
Design and implement ETL solutions using IBM InfoSphere DataStage to integrate and process large datasets. You will develop, test, and optimize data pipelines to ensure smooth data transformation and loading. Expertise in IBM InfoSphere DataStage, ETL processes, and data integration is essential for this position.
Posted 1 month ago
8.0 - 12.0 years
6 - 10 Lacs
Noida
Work from Office
- Hands-on experience with ETL testing tools.
- Strong SQL skills.
- Experience with file processing (CSV, XML, JSON) and data validation techniques.
- Familiarity with scripting languages.
- Knowledge of cloud platforms (AWS, Azure) is a plus.
Posted 1 month ago
5.0 - 9.0 years
14 - 19 Lacs
Chennai
Work from Office
Project description: We are seeking a highly skilled Senior Power BI Developer with strong expertise in Power BI, SQL Server, and data modeling to join our Business Intelligence team. In this role, you will lead the design and development of interactive dashboards, robust data models, and data pipelines that empower business stakeholders to make informed decisions. You will work collaboratively with cross-functional teams and drive the standardization and optimization of our BI architecture.

Responsibilities:
Power BI Dashboard Development (UI Dashboards)
- Design, develop, and maintain visually compelling, interactive Power BI dashboards aligned with business needs.
- Collaborate with business stakeholders to gather requirements, develop mockups, and refine dashboard UX.
- Implement advanced Power BI features such as bookmarks, drill-throughs, dynamic tooltips, and DAX calculations.
- Conduct regular UX/UI audits and performance tuning on reports.
Data Modeling in SQL Server & Dataverse
- Build and manage scalable, efficient data models in Power BI, Dataverse, and SQL Server.
- Apply best practices in dimensional modeling (star/snowflake schema) to support analytical use cases.
- Ensure data consistency, accuracy, and alignment across multiple sources and business areas.
- Optimize models and queries for performance and load times.
Power BI Dataflows & ETL Pipelines
- Develop and maintain reusable Power BI Dataflows for centralized data transformations.
- Create ETL processes using Power Query, integrating data from diverse sources including SQL Server, Excel, APIs, and Dataverse.
- Automate data refresh schedules and monitor dependencies across datasets and reports.
- Ensure efficient data pipeline architecture for reuse, scalability, and maintenance.

Skills - Must have:
- Experience: 6+ years in Business Intelligence or Data Analytics with a strong focus on Power BI and SQL Server.
- Technical skills: Expert-level Power BI development, including DAX, custom visuals, and report optimization. Strong knowledge of SQL (T-SQL) and relational database design. Experience with Dataverse and Power Platform integration. Proficiency in Power Query, Dataflows, and ETL development.
- Modeling: Proven experience in dimensional modeling, star/snowflake schemas, and performance tuning.
- Data integration: Skilled in connecting and transforming data from various sources, including APIs, Excel, and cloud data services.
- Collaboration: Ability to work with stakeholders to define KPIs, business logic, and dashboard UX.

Nice to have: N/A
Other: Languages: English (C1 Advanced). Seniority: Senior.
Posted 1 month ago
3.0 - 5.0 years
12 - 18 Lacs
Gurugram
Hybrid
About you
Key responsibilities include:
- Designing and developing robust ETL pipelines and data integration workflows that meet business and technical requirements.
- Leveraging cloud-native tools and technologies to build scalable and secure data integration solutions.
- Collaborating with data architects, analysts, and other stakeholders to understand integration needs and ensure alignment with the data strategy.
- Ensuring data quality, consistency, and integrity throughout the ETL process.
- Monitoring and maintaining ETL jobs, performing troubleshooting, and optimizing performance.
- Implementing best practices for version control, testing, deployment, and documentation of integration solutions.

Additional information
Main activities:
- Develop the ETL interface in Ab Initio, with the flexibility to switch to any iPaaS tool as needed.
- Manage all technical and configuration aspects of the identified solution, demonstrating advanced competency in understanding functional requirements.
- Lead design and development activities to create the technical architecture of the interface.
- Collaborate closely with Solution Leads and Business Analysts to ensure a thorough understanding of requirements.

Required skills:
- Proficiency in Ab Initio development.
- Knowledge of Java/J2EE.
- Strong knowledge of Unix shell scripting.
- Familiarity with SQL and its applications in data management.

Preferred skills:
- Experience with data loading processes into Oracle SaaS applications using File-Based Data Import (FBDI).
- Understanding of SOAP and REST APIs.
- Knowledge of iPaaS tools and their application in cloud-based solutions.
Posted 1 month ago
3.0 - 7.0 years
10 - 14 Lacs
Hyderabad
Work from Office
Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data, and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits, and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities:
- Lead the migration of ETLs from an on-premises SQL Server-based data warehouse to Azure Cloud, Databricks, and Snowflake.
- Design, develop, and implement data platform solutions using Azure Data Factory (ADF), Self-hosted Integration Runtime (SHIR), Logic Apps, Azure Data Lake Storage Gen2 (ADLS Gen2), Blob Storage, and Databricks (PySpark).
- Review and analyze existing on-premises ETL processes developed in SSIS and T-SQL.
- Implement DevOps practices and CI/CD pipelines using GitHub Actions.
- Collaborate with cross-functional teams to ensure seamless integration and data flow.
- Optimize and troubleshoot data pipelines and workflows.
- Ensure data security and compliance with industry standards.
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary, or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
- 6+ years of experience as a Cloud Data Engineer.
- Hands-on experience with Azure Cloud data tools (ADF, SHIR, Logic Apps, ADLS Gen2, Blob Storage) and Databricks.
- Solid experience in ETL development using on-premises databases and ETL technologies.
- Experience with Python or other scripting languages for data processing.
- Experience with Agile methodologies.
- Proficiency in DevOps and CI/CD practices using GitHub Actions.
- Proven excellent problem-solving skills and ability to work independently.
- Proven solid communication and collaboration skills.
- Proven solid analytical skills and attention to detail.
- Proven ability to adapt to new technologies and learn quickly.

Preferred Qualifications:
- Certification in Azure or Databricks.
- Experience with data modeling and database design.
- Experience with development in Snowflake for data engineering and analytics workloads.
- Knowledge of data governance and data quality best practices.
- Familiarity with other cloud platforms (e.g., AWS, Google Cloud).
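Illustrative only: a short PySpark sketch, as it might appear in a Databricks job, of the ADLS-to-Delta ingestion pattern this posting mentions. The storage account, container paths, and column names are hypothetical.

# Hypothetical example: read raw CSVs from ADLS Gen2, clean them, and append to a Delta table.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("claims_ingest").getOrCreate()

raw = (
    spark.read.format("csv")
    .option("header", "true")
    .load("abfss://landing@mydatalake.dfs.core.windows.net/claims/2024/")  # hypothetical path
)

cleaned = (
    raw.dropDuplicates(["claim_id"])
       .withColumn("claim_amount", F.col("claim_amount").cast("double"))
       .withColumn("load_ts", F.current_timestamp())   # audit column for lineage
)

(
    cleaned.write.format("delta")
    .mode("append")
    .save("abfss://curated@mydatalake.dfs.core.windows.net/claims_clean/")
)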
Posted 1 month ago
5.0 - 10.0 years
8 - 13 Lacs
Gurugram
Work from Office
Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data, and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

We are seeking a highly skilled and experienced Senior Cloud Data Engineer to join our team for a Cloud Data Modernization project. The successful candidate will be responsible for migrating our on-premises Enterprise Data Warehouse (SQL Server) to a modern cloud-based data platform utilizing Azure Cloud data tools, Delta Lake, and Snowflake.

Primary Responsibilities:
- Lead the migration of ETLs from an on-premises SQL Server-based data warehouse to Azure Cloud, Databricks, and Snowflake.
- Design, develop, and implement data platform solutions using Azure Data Factory (ADF), Self-hosted Integration Runtime (SHIR), Logic Apps, Azure Data Lake Storage Gen2 (ADLS Gen2), Blob Storage, and Databricks (PySpark).
- Review and analyze existing on-premises ETL processes developed in SSIS and T-SQL.
- Implement DevOps practices and CI/CD pipelines using GitHub Actions.
- Collaborate with cross-functional teams to ensure seamless integration and data flow.
- Optimize and troubleshoot data pipelines and workflows.
- Ensure data security and compliance with industry standards.
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary, or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
- 6+ years of experience as a Cloud Data Engineer.
- Hands-on experience with Azure Cloud data tools (ADF, SHIR, Logic Apps, ADLS Gen2, Blob Storage) and Databricks.
- Solid experience in ETL development using on-premises databases and ETL technologies.
- Experience with Python or other scripting languages for data processing.
- Experience with Agile methodologies.
- Proficiency in DevOps and CI/CD practices using GitHub Actions.
- Proven excellent problem-solving skills and ability to work independently.
- Solid communication and collaboration skills.
- Solid analytical skills and attention to detail.
- Ability to adapt to new technologies and learn quickly.

Preferred Qualifications:
- Certification in Azure or Databricks.
- Experience with data modeling and database design.
- Experience with development in Snowflake for data engineering and analytics workloads.
- Knowledge of data governance and data quality best practices.
- Familiarity with other cloud platforms (e.g., AWS, Google Cloud).

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life.
Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
Posted 1 month ago
4.0 - 8.0 years
8 - 15 Lacs
Hyderabad
Hybrid
About the Company: DiLytics is a leading Information Technology (IT) services provider focused entirely on providing services in the Analytics, Business Intelligence, Data Warehousing, Data Integration, and Enterprise Performance Management areas. We have been growing for 12+ years and have offices in the US, Canada, and India. We are an employee-friendly company that offers an exciting, stress-free work culture and provides career paths where job enrichment and the flexibility to move across roles are inherent.

Key Responsibilities:
- Manage a team of ETL developers, assign tasks, and ensure timely delivery of projects and PoCs.
- Provide technical leadership and groom a team of ETL developers.
- Design and develop complex mappings, process flows, and ETL scripts.
- Perform data extraction and transformation using SQL queries to create the data sets required for dashboards.
- Optimize ETL processes for efficiency, scalability, and performance tuning.
- Utilize appropriate ETL tools and technologies (e.g., ODI, ADF, SSIS, Alteryx, Talend).
- Stay up to date on the latest ETL trends and technologies.
- Exposure to designing and developing BI reports and dashboards using Power BI, Tableau, and other tools to meet business analytics needs.

Skills Required:
- Bachelor's degree in computer science or a related field.
- Relevant experience of 4 to 8 years.
- Extensive experience in designing and implementing ETL processes.
- Experience in designing and developing ETL processes such as ETL control tables, error logging, auditing, data quality, etc.
- Expertise in data integration tool sets - Azure Data Factory, Oracle Data Integrator, SQL Server Integration Services, Talend, etc. - and PL/SQL.
- Exposure to one or more of these data visualization tools: OAC, Power BI, Tableau, OBIEE.
- Excellent written and verbal communication and interpersonal skills.
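For illustration, a minimal Python sketch of the ETL control-table and error-logging pattern the posting mentions, using SQLite so it is self-contained. The table, column, and step names are hypothetical.

# Hypothetical example: record each ETL step's start, finish, status, and errors in a control table.
import sqlite3
import traceback
from datetime import datetime

DDL = """CREATE TABLE IF NOT EXISTS etl_control (
    run_id INTEGER PRIMARY KEY AUTOINCREMENT,
    step_name TEXT, started_at TEXT, finished_at TEXT,
    status TEXT, rows_processed INTEGER, error_message TEXT)"""

INSERT = ("INSERT INTO etl_control "
          "(step_name, started_at, finished_at, status, rows_processed, error_message) "
          "VALUES (?, ?, ?, ?, ?, ?)")

def run_step(conn, step_name, step_fn):
    started = datetime.utcnow().isoformat()
    try:
        rows = step_fn()                       # the actual extract/transform/load work
        conn.execute(INSERT, (step_name, started, datetime.utcnow().isoformat(),
                              "SUCCESS", rows, None))
    except Exception:
        conn.execute(INSERT, (step_name, started, datetime.utcnow().isoformat(),
                              "FAILED", 0, traceback.format_exc()))
        raise
    finally:
        conn.commit()

if __name__ == "__main__":
    conn = sqlite3.connect("etl_audit.db")
    conn.execute(DDL)
    run_step(conn, "load_customers", lambda: 1250)   # stand-in for a real load step
    conn.close()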
Posted 1 month ago
3.0 - 6.0 years
5 - 12 Lacs
Chennai
Remote
Job Description: We are seeking a skilled Talend Developer with 3 to 6 years of hands-on experience in designing, developing, and optimizing ETL pipelines. The ideal candidate will be proficient in working with Talend, AWS, APIs, and databases, and can join on immediate to 15 days' notice.

Key Responsibilities:
- Design, develop, and maintain ETL workflows to extract data from AWS S3, transform it per business rules, load it into APIs, and retrieve results.
- Analyze existing ETL workflows and identify areas for performance and design improvements.
- Build scalable, dynamic ETL pipelines from scratch with future enhancement capability.
- Collaborate with data engineering and data science teams to ensure data consistency and integrity.
- Conduct comprehensive unit testing of ETL pipelines and troubleshoot performance issues.
- Deploy Talend pipelines across different environments using best practices and context variables.
- Create clear and comprehensive documentation of ETL processes, pipelines, and methodologies.

Required Skills & Experience:
- Minimum 3 years of Talend development experience.
- Strong expertise in Talend components for file, database, and API (GET & POST) integration.
- Experience with AWS services and incorporating them into Talend workflows.
- Proven experience in pipeline migration and multi-environment deployment.
- Proficient in SQL and relational databases.
- Working knowledge of Java or Python for automation and logic handling.
- Familiarity with Git for version control and Nexus for artifact management.
- Strong debugging and troubleshooting skills for ETL workflows.
- Excellent attention to detail and an analytical mindset.
- Effective communication and collaboration skills.

Benefits:
- Competitive salary and performance bonuses.
- Work on cutting-edge data engineering projects.
- Collaborative work culture.
- Learning and growth opportunities.

How to Apply: Interested candidates meeting the criteria and ready to join within 15 days, please apply directly via Naukri or send your updated resume to hanifarsangeetha@sightspectrum.com
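The posting describes Talend workflows that read files from S3, transform them per business rules, and post the results to an API. Talend itself is a graphical tool; the following is a rough Python equivalent of that flow for illustration only, with the bucket, endpoint, and field names all hypothetical.

# Hypothetical example: S3 object -> business-rule transform -> POST to API -> read result.
import json
import boto3
import requests

def process_object(bucket: str, key: str, api_url: str) -> dict:
    s3 = boto3.client("s3")
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    records = json.loads(body)

    # Transform: keep only active records and rename fields per (hypothetical) business rules
    payload = [
        {"customerId": r["id"], "email": r["email"].lower()}
        for r in records
        if r.get("status") == "active"
    ]

    resp = requests.post(api_url, json=payload, timeout=30)
    resp.raise_for_status()
    return resp.json()            # retrieve and return the API result

if __name__ == "__main__":
    result = process_object("my-landing-bucket", "incoming/customers.json",
                            "https://api.example.com/v1/customers/bulk")
    print(result)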
Posted 1 month ago
9.0 - 14.0 years
4 - 7 Lacs
Bengaluru
Work from Office
This role involves the development and application of engineering practice and knowledge in designing, managing, and improving processes for industrial operations, including procurement, supply chain, and facilities engineering and maintenance. Project and change management of industrial transformations are also included in this role.

Grade-specific focus: Industrial Operations Engineering.
- Fully competent in own area.
- Acts as a key contributor in a more complex/critical environment.
- Proactively acts to understand and anticipate client needs.
- Manages costs and profitability for a work area.
- Manages own agenda to meet agreed targets.
- Develops plans for projects in own area.
- Looks beyond the immediate problem to the wider implications.
- Acts as a facilitator and coach, and moves teams forward.
Posted 1 month ago
8.0 - 13.0 years
14 - 18 Lacs
Bengaluru
Work from Office
The Solution Architect - Data Engineer will design, implement, and manage data solutions for the insurance business, leveraging expertise in Cognos, DB2, Azure Databricks, ETL processes, and SQL. The role involves working with cross-functional teams to design scalable data architectures and enable advanced analytics and reporting, supporting the company's finance, underwriting, claims, and customer service operations.

Key Responsibilities:
- Data Architecture & Design: Design and implement robust, scalable data architectures and solutions in the insurance domain using Azure Databricks, DB2, and other data platforms.
- Data Integration & ETL Processes: Lead the development and optimization of ETL pipelines to extract, transform, and load data from multiple sources, ensuring data integrity and performance.
- Cognos Reporting: Oversee the design and maintenance of Cognos reporting systems, developing custom reports and dashboards to support business users in finance, claims, underwriting, and operations.
- Data Engineering: Design, build, and maintain data models, data pipelines, and databases to enable business intelligence and advanced analytics across the organization.
- Cloud Infrastructure: Develop and manage data solutions on Azure, including Databricks for data processing, ensuring seamless integration with existing systems (e.g., DB2, legacy platforms).
- SQL Development: Write and optimize complex SQL queries for data extraction, manipulation, and reporting purposes, with a focus on performance and scalability.
- Data Governance & Quality: Ensure data quality, consistency, and governance across all data solutions, implementing best practices and adhering to industry standards (e.g., GDPR, insurance regulations).
- Collaboration: Work closely with business stakeholders, data scientists, and analysts to understand business needs and translate them into technical solutions that drive actionable insights.
- Solution Architecture: Provide architectural leadership in designing data platforms, ensuring that solutions meet business requirements, are cost-effective, and can scale for future growth.
- Performance Optimization: Continuously monitor and tune the performance of databases, ETL processes, and reporting tools to meet service level agreements (SLAs).
- Documentation: Create and maintain comprehensive technical documentation, including architecture diagrams, ETL process flows, and data dictionaries.

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
- Proven experience as a Solution Architect or Data Engineer in the insurance industry, with a strong focus on data solutions.
- Hands-on experience with Cognos (for reporting and dashboarding) and DB2 (for database management).
- Proficiency in Azure Databricks for data processing, machine learning, and real-time analytics.
- Extensive experience in ETL development, data integration, and data transformation processes.
- Strong knowledge of Python and SQL (advanced query writing, optimization, and troubleshooting).
- Experience with cloud platforms (Azure preferred) and hybrid data environments (on-premises and cloud).
- Familiarity with data governance and regulatory requirements in the insurance industry (e.g., Solvency II, IFRS 17).
- Strong problem-solving skills, with the ability to troubleshoot and resolve complex technical issues related to data architecture and performance.
- Excellent verbal and written communication skills, with the ability to work effectively with both technical and non-technical stakeholders.

Preferred Qualifications:
- Experience with other cloud-based data platforms (e.g., Azure Data Lake, Azure Synapse, AWS Redshift).
- Knowledge of machine learning workflows, leveraging Databricks for model training and deployment.
- Familiarity with insurance-specific data models and their use in finance, claims, and underwriting operations.
- Certifications in Azure Databricks, Microsoft Azure, DB2, or related technologies.
- Knowledge of additional reporting tools (e.g., Power BI, Tableau) is a plus.

Key Competencies:
- Technical Leadership: Ability to guide and mentor development teams in implementing best practices for data architecture and engineering.
- Analytical Skills: Strong analytical and problem-solving skills, with a focus on optimizing data systems for performance and scalability.
- Collaborative Mindset: Ability to work effectively in a cross-functional team, communicating complex technical solutions in simple terms to business stakeholders.
- Attention to Detail: Meticulous attention to detail, ensuring high-quality data output and system performance.
Posted 1 month ago
10.0 - 15.0 years
2 - 6 Lacs
Bengaluru
Work from Office
Job Title: Senior SQL Developer
Experience: 10-15 years
Location: Bangalore

- Experience: Minimum of 10+ years in database development and management roles.
- SQL Mastery: Advanced expertise in crafting and optimizing complex SQL queries and scripts.
- AWS Redshift: Proven experience in managing, tuning, and optimizing large-scale Redshift clusters.
- PostgreSQL: Deep understanding of PostgreSQL, including query planning, indexing strategies, and advanced tuning techniques.
- Data Pipelines: Extensive experience in ETL development and integrating data from multiple sources into cloud environments.
- Cloud Proficiency: Strong experience with AWS services such as ECS, S3, KMS, Lambda, Glue, and IAM.
- Data Modeling: Comprehensive knowledge of data modeling techniques for both OLAP and OLTP systems.
- Scripting: Proficiency in Python, C#, or other scripting languages for automation and data manipulation.

Preferred Qualifications:
- Leadership: Prior experience in leading database or data engineering teams.
- Data Visualization: Familiarity with reporting and visualization tools such as Tableau, Power BI, or Looker.
- DevOps: Knowledge of CI/CD pipelines, infrastructure as code (e.g., Terraform), and version control (Git).
- Certifications: Any relevant certifications (e.g., AWS Certified Solutions Architect, AWS Certified Database - Specialty, PostgreSQL Certified Professional) will be a plus.
- Azure Databricks: Familiarity with Azure Databricks for data engineering and analytics workflows will be a significant advantage.

Soft Skills:
- Strong problem-solving and analytical capabilities.
- Exceptional communication skills for collaboration with technical and non-technical stakeholders.
- A results-driven mindset with the ability to work independently or lead within a team.

Qualification: Bachelor's or Master's degree in Computer Science, Information Systems, Engineering, or equivalent, with 10+ years of experience.
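Illustrative only: a small Python sketch of loading staged S3 data into Amazon Redshift with a COPY statement issued through psycopg2, the kind of pipeline work this role covers. The cluster endpoint, credentials, IAM role ARN, and table names are hypothetical.

# Hypothetical example: bulk-load Parquet files from S3 into a Redshift fact table.
import psycopg2

COPY_SQL = """
COPY analytics.fact_sales
FROM 's3://my-etl-bucket/sales/2024-06/'
IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
FORMAT AS PARQUET;
"""

def load_sales() -> None:
    conn = psycopg2.connect(
        host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",   # hypothetical
        port=5439, dbname="warehouse", user="etl_user", password="***",
    )
    try:
        with conn, conn.cursor() as cur:       # commits on success, rolls back on error
            cur.execute(COPY_SQL)              # bulk load the staged files
            cur.execute("ANALYZE analytics.fact_sales;")  # refresh planner statistics
    finally:
        conn.close()

if __name__ == "__main__":
    load_sales()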
Posted 1 month ago
6.0 - 10.0 years
10 - 15 Lacs
Chennai
Work from Office
Experience: 5-10 years in ETL development, with 3+ years in a leadership role and extensive hands-on experience in Informatica PowerCenter and Cloud Data Integration.

Job Overview: We are seeking a highly skilled and experienced Informatica Lead to join our IT team. The ideal candidate will lead a team of ETL developers and oversee the design, development, and implementation of ETL solutions using Informatica PowerCenter and Cloud Data Integration. This role requires expertise in data integration, leadership skills, and the ability to work in a dynamic environment to deliver robust data solutions for business needs.

Key Responsibilities:
ETL Development and Maintenance:
- Lead the design, development, and maintenance of ETL workflows and mappings using Informatica PowerCenter and Cloud Data Integration.
- Ensure the reliability, scalability, and performance of ETL solutions to meet business requirements.
- Optimize ETL processes for data integration, transformation, and loading into data warehouses and other target systems.
Solution Architecture and Implementation:
- Collaborate with architects and business stakeholders to define ETL solutions and data integration strategies.
- Develop and implement best practices for ETL design and development.
- Ensure seamless integration with on-premises and cloud-based data platforms.
Data Governance and Quality:
- Establish and enforce data quality standards and validation processes.
- Implement data governance and compliance policies to ensure data integrity and security.
- Perform root cause analysis and resolve data issues proactively.
Team Leadership:
- Manage, mentor, and provide technical guidance to a team of ETL developers.
- Delegate tasks effectively and ensure timely delivery of projects and milestones.
- Conduct regular code reviews and performance evaluations for team members.
Automation and Optimization:
- Develop scripts and frameworks to automate repetitive ETL tasks.
- Implement performance tuning for ETL pipelines and database queries.
- Explore opportunities to improve efficiency and streamline workflows.
Collaboration and Stakeholder Engagement:
- Work closely with business analysts, data scientists, and application developers to understand data requirements and deliver solutions.
- Communicate project updates, challenges, and solutions to stakeholders effectively.
- Act as the primary point of contact for Informatica-related projects and initiatives.

Academic Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or equivalent.
- Relevant certifications (e.g., Informatica Certified Specialist, Informatica Cloud Specialist) are a plus.

Experience:
- 6-10 years of experience in ETL development and data integration, with at least 3 years in a leadership role.
- Proven experience with Informatica PowerCenter, Informatica Cloud Data Integration, and large-scale ETL implementations.
- Experience in integrating data from various sources such as databases, flat files, and APIs.

Technical Skills:
- Strong expertise in Informatica PowerCenter, Informatica Cloud, and ETL frameworks.
- Proficiency in SQL, PL/SQL, and performance optimization techniques.
- Knowledge of cloud platforms such as AWS, Azure, or Google Cloud.
- Familiarity with big data tools such as Hive, Spark, or Snowflake is a plus.
- Strong understanding of data modeling concepts and relational database systems.

Soft Skills:
- Excellent leadership and project management skills.
- Strong analytical and problem-solving abilities.
- Effective communication and stakeholder management skills.
- Ability to work under tight deadlines in a fast-paced environment.
Posted 1 month ago
4.0 - 6.0 years
18 - 20 Lacs
Pune, Chennai, Bengaluru
Work from Office
We are hiring experienced ETL Developers (Ab Initio) for a leading MNC, with positions open in Pune, Chennai, and Bangalore. The ideal candidate should have 5+ years of hands-on experience in ETL development, with strong proficiency in Ab Initio, Unix, and SQL. Exposure to Hadoop and scripting languages like Shell or Python is a plus. This is a work-from-office role and requires candidates to be available for a face-to-face interview. Applicants should be able to join within 15 days to 1 month. Strong development background and the ability to work in a structured, fast-paced environment are essential.
Posted 1 month ago
5.0 - 7.0 years
8 - 16 Lacs
Bengaluru
Work from Office
We are looking for a Senior Data Engineer with deep experience in SnapLogic, SQL, ETL pipelines, and data warehousing, along with hands-on experience with Databricks, in designing scalable data solutions and working across cloud and big data environments.
Posted 1 month ago
8.0 - 10.0 years
15 - 20 Lacs
Noida
Work from Office
Job Description: We are looking for a highly skilled Senior System Engineer with expertise in server administration, automation, network security, and ETL development. The ideal candidate should have a strong understanding of system infrastructure design, security best practices, and database management.

Key Responsibilities:
- Administer and maintain Windows and Linux servers, ensuring high availability and performance.
- Develop and maintain automation scripts using Python, PowerShell, and Bash.
- Configure and manage VPNs, firewalls, and network security protocols.
- Implement server security best practices, including patch management, hardening, and access control.
- Design and manage ETL processes for data extraction, transformation, and loading.
- Work with SQL Server and PostgreSQL for database administration, optimization, and troubleshooting.
- Manage and support real-time data acquisition using OPC DA/HDA.
- Collaborate with cross-functional teams to design and implement scalable system infrastructure solutions.

Required Skills & Qualifications:
- 8+ years of experience in server administration (Windows & Linux).
- Strong scripting skills in Python, PowerShell, and Bash for automation.
- Expertise in network security, VPN configurations, and firewall management.
- Hands-on experience in ETL development and database management (SQL Server/PostgreSQL).
- Familiarity with real-time data acquisition using OPC DA/HDA.
- Knowledge of system infrastructure design principles and best practices.
- Experience in server security (patching, hardening, access control, monitoring, and compliance).
- Strong problem-solving and troubleshooting skills.

Preferred Qualifications:
- Experience with cloud platforms (AWS, Azure, or GCP).
- Knowledge of containerization (Docker, Kubernetes).
- Familiarity with CI/CD pipelines and DevOps tools.

Key Skills: ETL development processes; Windows & Linux server administration; real-time data acquisition using OPC DA/HDA; strong scripting skills in Python, PowerShell, and Bash for automation; expertise in network security, VPN configurations, and firewall management.
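For illustration, a small Python automation sketch in the spirit of the monitoring scripts this posting mentions: a disk-usage check and a TCP reachability check for database servers. The hosts, ports, and thresholds are hypothetical.

# Hypothetical example: warn on high disk usage and verify database ports are reachable.
import shutil
import socket

def check_disk(path: str = "/", warn_pct: float = 80.0) -> None:
    usage = shutil.disk_usage(path)
    used_pct = usage.used / usage.total * 100
    status = "WARN" if used_pct >= warn_pct else "OK"
    print(f"[{status}] {path} is {used_pct:.1f}% full")

def check_port(host: str, port: int, timeout: float = 3.0) -> None:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            print(f"[OK] {host}:{port} reachable")
    except OSError as exc:
        print(f"[FAIL] {host}:{port} unreachable: {exc}")

if __name__ == "__main__":
    check_disk("/")
    check_port("db01.internal.example.com", 5432)      # PostgreSQL (hypothetical host)
    check_port("mssql01.internal.example.com", 1433)   # SQL Server (hypothetical host)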
Posted 1 month ago
1.0 - 4.0 years
3 - 6 Lacs
Coimbatore
Work from Office
About Responsive: Responsive, formerly RFPIO, is the market leader in an emerging new category of SaaS solutions called Strategic Response Management. Responsive customers - including Google, Microsoft, BlackRock, T. Rowe Price, Adobe, Amazon, Visa, and Zoom - are using Responsive to manage business-critical responses to RFPs, RFIs, RFQs, security questionnaires, due diligence questionnaires, and other requests for information. Responsive has nearly 2,000 customers of all sizes and has been voted "best in class" by G2 for 13 straight quarters. It also counts more than 35% of the cloud SaaS leaders as customers, as well as more than 15 of the Fortune 100. Customers have used Responsive to close more than $300B in transactions to date.

About the Role: We are seeking a highly skilled Product Data Engineer with expertise in building, maintaining, and optimizing data pipelines using Python scripting. The ideal candidate will have experience working in a Linux environment, managing large-scale data ingestion, processing files in S3, and balancing disk space and warehouse storage efficiently. This role will be responsible for ensuring seamless data movement across systems while maintaining performance, scalability, and reliability.

Essential Functions:
- ETL Pipeline Development: Design, develop, and maintain efficient ETL workflows using Python to extract, transform, and load data into structured data warehouses.
- Data Pipeline Optimization: Monitor and optimize data pipeline performance, ensuring scalability and reliability in handling large data volumes.
- Linux Server Management: Work in a Linux-based environment, executing command-line operations, managing processes, and troubleshooting system performance issues.
- File Handling & Storage Management: Efficiently manage data files in Amazon S3, ensuring proper storage organization, retrieval, and archiving of data.
- Disk Space & Warehouse Balancing: Proactively monitor and manage disk space usage, preventing storage bottlenecks and ensuring warehouse efficiency.
- Error Handling & Logging: Implement robust error-handling mechanisms and logging systems to monitor data pipeline health.
- Automation & Scheduling: Automate ETL processes using cron jobs, Airflow, or other workflow orchestration tools.
- Data Quality & Validation: Ensure data integrity and consistency by implementing validation checks and reconciliation processes.
- Security & Compliance: Follow best practices in data security, access control, and compliance while handling sensitive data.
- Collaboration with Teams: Work closely with data engineers, analysts, and product teams to align data processing with business needs.

Education: Bachelor's degree in Computer Science, Data Engineering, or a related field.

Experience & Skills:
- 2+ years of experience in ETL development, data pipeline management, or backend data engineering.
- Proficiency in Python: strong hands-on experience in writing Python scripts for ETL processes.
- Linux expertise: experience working with Linux servers, command-line operations, and system performance tuning.
- Cloud storage management: hands-on experience with Amazon S3, including handling file storage, retrieval, and lifecycle policies.
- Data pipeline management: experience with ETL frameworks, data pipeline automation, and workflow scheduling (e.g., Apache Airflow, Luigi, or Prefect).
- SQL & database handling: strong SQL skills for data extraction, transformation, and loading into relational databases and data warehouses.
- Disk space & storage optimization: ability to manage disk space efficiently, balancing usage across different systems.
- Error handling & debugging: strong problem-solving skills to troubleshoot ETL failures, debug logs, and resolve data inconsistencies.
- Experience with cloud data warehouses (e.g., Snowflake, Redshift, BigQuery).
- Knowledge of message queues (Kafka, RabbitMQ) for data streaming.
- Familiarity with containerization tools (Docker, Kubernetes) for deployment.
- Exposure to infrastructure automation tools (Terraform, Ansible).

Knowledge, Ability & Skills:
- Strong analytical mindset and ability to handle large-scale data processing efficiently.
- Ability to work independently in a fast-paced, product-driven environment.
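Illustrative only: a minimal Python sketch of the S3 housekeeping and disk-space balancing described above, moving processed objects to a colder storage class and purging aged local staging files. The bucket name, prefix, local path, and retention values are hypothetical.

# Hypothetical example: archive processed S3 objects and reclaim local staging disk space.
import os
import time
import boto3

def archive_processed(bucket: str, prefix: str = "processed/") -> None:
    s3 = boto3.client("s3")
    for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            # Re-copy each object onto itself with a cheaper storage class
            s3.copy_object(
                Bucket=bucket, Key=obj["Key"],
                CopySource={"Bucket": bucket, "Key": obj["Key"]},
                StorageClass="STANDARD_IA",
                MetadataDirective="COPY",
            )

def purge_local(staging_dir: str = "/data/staging", max_age_days: int = 7) -> None:
    cutoff = time.time() - max_age_days * 86400
    for name in os.listdir(staging_dir):
        path = os.path.join(staging_dir, name)
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            os.remove(path)       # reclaim disk used by already-loaded files

if __name__ == "__main__":
    archive_processed("my-etl-bucket")
    purge_local()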
Posted 1 month ago
6.0 - 8.0 years
6 - 10 Lacs
Hyderabad
Work from Office
Diverse Lynx is looking for a DataStage Developer to join our dynamic team and embark on a rewarding career journey.
- Analyzing business requirements and translating them into technical specifications.
- Designing and implementing data integration solutions using DataStage.
- Extracting, transforming, and loading data from various sources into target systems.
- Developing and testing complex data integration workflows, including the use of parallel processing and data quality checks.
- Collaborating with database administrators, data architects, and stakeholders to ensure the accuracy and consistency of data.
- Monitoring performance and optimizing DataStage jobs to ensure they run efficiently and meet SLAs.
- Troubleshooting issues and resolving problems related to data integration.
- Knowledge of data warehousing, data integration, and data processing concepts.
- Strong problem-solving skills and the ability to think creatively and critically.
- Excellent communication and collaboration skills, with the ability to work effectively with technical and non-technical stakeholders.
Posted 1 month ago