3.0 - 7.0 years
2 - 10 Lacs
India
Remote
Job Title: ETL Automation Tester (SQL, Python, Cloud)
Location: [On-site / Remote / Hybrid – City, State or “Anywhere, USA”]
Employment Type: [Full-time / Contract / C2C / Part-time]
Note: The candidate must work US night shifts.

Job Summary: We are seeking a highly skilled ETL Automation Tester with expertise in SQL, Python scripting, and experience working with cloud technologies such as Azure, AWS, or GCP. The ideal candidate will be responsible for designing and implementing automated testing solutions to ensure the accuracy, performance, and reliability of ETL pipelines and data integration processes.

Key Responsibilities:
- Design and implement test strategies for ETL processes and data pipelines.
- Develop automated test scripts using Python and integrate them into CI/CD pipelines.
- Validate data transformations and data integrity across source, staging, and target systems (a minimal sketch follows this listing).
- Write complex SQL queries for test data creation, validation, and result comparison.
- Perform cloud-based testing on platforms such as Azure Data Factory, AWS Glue, or GCP Dataflow/BigQuery.
- Collaborate with data engineers, analysts, and DevOps teams to ensure seamless data flow and test coverage.
- Log, track, and manage defects through tools like JIRA, Azure DevOps, or similar.
- Participate in performance and volume testing for large-scale datasets.

Required Skills and Qualifications:
- 3–7 years of experience in ETL/data warehouse testing.
- Strong hands-on experience in SQL (joins, CTEs, window functions, aggregation).
- Proficient in Python for automation scripting and data manipulation.
- Solid understanding of ETL tools such as Informatica, Talend, SSIS, or custom Python-based ETL.
- Experience with at least one cloud platform:
  - Azure: Data Factory, Synapse, Blob Storage
  - AWS: Glue, Redshift, S3
  - GCP: Dataflow, BigQuery, Cloud Storage
- Familiarity with data validation, data quality, and data profiling techniques.
- Experience with CI/CD tools such as Jenkins, GitHub Actions, or Azure DevOps.
- Excellent problem-solving, communication, and documentation skills.

Preferred Qualifications:
- Knowledge of Apache Airflow, PySpark, or Databricks.
- Experience with containerization (Docker) and orchestration tools (Kubernetes).
- ISTQB or similar testing certification.
- Familiarity with Agile methodologies and Scrum ceremonies.

Job Types: Part-time, Contractual / Temporary, Freelance
Contract length: 6 months
Pay: ₹18,074.09 - ₹86,457.20 per month
Expected hours: 40 per week
Benefits: Work from home
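Much of the source-to-target validation described above reduces to automated checks that compare row counts and aggregates between systems. Below is a minimal sketch of that idea in Python, using an in-memory SQLite database as a stand-in for real source and target systems; the table and column names are hypothetical, not part of this posting.

```python
import sqlite3

# Stand-in for real source/target connections (hypothetical schema).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE src_orders (order_id INTEGER, amount REAL);
    CREATE TABLE tgt_orders (order_id INTEGER, amount REAL);
    INSERT INTO src_orders VALUES (1, 10.0), (2, 20.5), (3, 5.25);
    INSERT INTO tgt_orders VALUES (1, 10.0), (2, 20.5), (3, 5.25);
""")

def reconcile(table_src: str, table_tgt: str) -> dict:
    """Compare row counts and a simple amount checksum between source and target."""
    checks = {}
    for label, table in (("source", table_src), ("target", table_tgt)):
        row_count, amount_sum = conn.execute(
            f"SELECT COUNT(*), ROUND(SUM(amount), 2) FROM {table}"
        ).fetchone()
        checks[label] = {"rows": row_count, "amount_sum": amount_sum}
    checks["match"] = checks["source"] == checks["target"]
    return checks

result = reconcile("src_orders", "tgt_orders")
assert result["match"], f"Source/target mismatch: {result}"
print("Reconciliation passed:", result)
```

In practice the same check would run inside a pytest suite or a CI/CD stage, with the SQLite connection swapped for connections to the actual source system and target warehouse.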
Posted 4 days ago
0 years
0 Lacs
Chennai
On-site
- Talend: design, develop, and document existing Talend ETL processes, technical architecture, data pipelines, and performance scaling, using the tooling to integrate Talend data and ensure data quality in a big data environment.
- Snowflake SQL: write SQL queries against Snowflake and develop scripts (Unix, Python, etc.) to extract, load, and transform data.
- Hands-on experience with Snowflake utilities such as SnowSQL, SnowPipe, Python, Tasks, Streams, Time Travel, Optimizer, Metadata Manager, data sharing, and stored procedures (a brief sketch follows this listing).
- Perform data analysis, troubleshoot data issues, and provide technical support to end users.
- Develop and maintain data warehouse and ETL processes, ensuring data quality and integrity.
- Strong problem-solving capability and a continuous-improvement mindset.
- Talend / Snowflake certification is desirable.
- Excellent SQL coding skills.
- Excellent communication and documentation skills.
- Familiar with the Agile delivery process.
- Must be analytical, creative, and self-motivated.
- Works effectively within a global team environment.

About Virtusa: Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state of the art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
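For context, the Snowflake features named above (Time Travel, Streams, Tasks) can be exercised from Python via the Snowflake connector. This is a hedged sketch only: the account, credentials, warehouse, and table names are placeholders, and the SQL shows representative syntax rather than anything from this posting.

```python
# Requires the snowflake-connector-python package and a real Snowflake account.
import snowflake.connector

conn = snowflake.connector.connect(
    account="your_account", user="your_user", password="your_password",
    warehouse="COMPUTE_WH", database="ANALYTICS", schema="PUBLIC",
)
cur = conn.cursor()

# Time Travel: read a table as it looked one hour ago.
cur.execute("SELECT COUNT(*) FROM orders AT (OFFSET => -3600)")
print("row count one hour ago:", cur.fetchone()[0])

# Stream: capture change records on a staging table.
cur.execute("CREATE STREAM IF NOT EXISTS orders_stream ON TABLE orders_staging")

# Task: merge the captured changes into the target on a schedule.
cur.execute("""
    CREATE TASK IF NOT EXISTS load_orders
      WAREHOUSE = COMPUTE_WH
      SCHEDULE = '5 MINUTE'
    AS
      INSERT INTO orders SELECT order_id, amount FROM orders_stream
""")
cur.execute("ALTER TASK load_orders RESUME")

cur.close()
conn.close()
```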
Posted 4 days ago
8.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Role Description

Role Proficiency: This role requires proficiency in developing data pipelines, including coding and testing for ingesting, wrangling, transforming, and joining data from various sources. The ideal candidate should be adept in ETL tools like Informatica, Glue, Databricks, and DataProc, with strong coding skills in Python, PySpark, and SQL. This position demands independence and proficiency across various data domains. Expertise in data warehousing solutions such as Snowflake, BigQuery, Lakehouse, and Delta Lake is essential, including the ability to calculate processing costs and address performance issues. A solid understanding of DevOps and infrastructure needs is also required.

Outcomes:
- Act creatively to develop pipelines/applications by selecting appropriate technical options, optimizing application development, maintenance, and performance through design patterns and reusing proven solutions.
- Support the Project Manager in day-to-day project execution and account for the developmental activities of others.
- Interpret requirements and create optimal architecture and design solutions in accordance with specifications.
- Document and communicate milestones/stages for end-to-end delivery.
- Code using best standards; debug and test solutions to ensure best-in-class quality.
- Tune performance of code and align it with the appropriate infrastructure, understanding cost implications of licenses and infrastructure.
- Create data schemas and models effectively.
- Develop and manage data storage solutions, including relational databases, NoSQL databases, Delta Lakes, and data lakes.
- Validate results with user representatives, integrating the overall solution.
- Influence and enhance customer satisfaction and employee engagement within project teams.

Measures of Outcomes:
- TeamOne's adherence to engineering processes and standards
- TeamOne's adherence to schedule/timelines
- TeamOne's adherence to SLAs where applicable
- TeamOne's number of defects post delivery
- TeamOne's number of non-compliance issues
- TeamOne's reduction of recurrence of known defects
- TeamOne's quick turnaround of production bugs
- Completion of applicable technical/domain certifications
- Completion of all mandatory training requirements
- Efficiency improvements in data pipelines (e.g. reduced resource consumption, faster run times)
- TeamOne's average time to detect, respond to, and resolve pipeline failures or data issues
- TeamOne's number of data security incidents or compliance breaches

Outputs Expected:
Code: Develop data processing code with guidance, ensuring performance and scalability requirements are met. Define coding standards, templates, and checklists. Review code for team and peers.
Documentation: Create/review templates, checklists, guidelines, and standards for design/process/development. Create/review deliverable documents, including design documents, architecture documents, infra costing, business requirements, source-target mappings, test cases, and results.
Configure: Define and govern the configuration management plan. Ensure compliance from the team.
Test: Review/create unit test cases, scenarios, and execution. Review test plans and strategies created by the testing team. Provide clarifications to the testing team.
Domain Relevance: Advise data engineers on the design and development of features and components, leveraging a deeper understanding of business needs. Learn more about the customer domain and identify opportunities to add value. Complete relevant domain certifications.
Manage Project: Support the Project Manager with project inputs. Provide inputs on project plans or sprints as needed. Manage the delivery of modules.
Manage Defects: Perform defect root cause analysis (RCA) and mitigation. Identify defect trends and implement proactive measures to improve quality.
Estimate: Create and provide input for effort and size estimation and plan resources for projects.
Manage Knowledge: Consume and contribute to project-related documents, SharePoint libraries, and client universities. Review reusable documents created by the team.
Release: Execute and monitor the release process.
Design: Contribute to the creation of design (HLD, LLD, SAD)/architecture for applications, business components, and data models.
Interface With Customer: Clarify requirements and provide guidance to the Development Team. Present design options to customers. Conduct product demos. Collaborate closely with customer architects to finalize designs.
Manage Team: Set FAST goals and provide feedback. Understand team members' aspirations and provide guidance and opportunities. Ensure team members are upskilled. Engage the team in projects. Proactively identify attrition risks and collaborate with BSE on retention measures.
Certifications: Obtain relevant domain and technology certifications.

Skill Examples:
- Proficiency in SQL, Python, or other programming languages used for data manipulation.
- Experience with ETL tools such as Apache Airflow, Talend, Informatica, AWS Glue, Dataproc, and Azure ADF.
- Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud, particularly with data-related services (e.g. AWS Glue, BigQuery).
- Conduct tests on data pipelines and evaluate results against data quality and performance specifications.
- Experience in performance tuning.
- Experience in data warehouse design and cost improvements.
- Apply and optimize data models for efficient storage, retrieval, and processing of large datasets.
- Communicate and explain design/development aspects to customers.
- Estimate time and resource requirements for developing/debugging features/components.
- Participate in RFP responses and solutioning.
- Mentor team members and guide them in relevant upskilling and certification.

Knowledge Examples:
- Knowledge of various ETL services used by cloud providers, including Apache PySpark, AWS Glue, GCP DataProc/Dataflow, Azure ADF, and ADLF.
- Proficient in SQL for analytics and windowing functions.
- Understanding of data schemas and models.
- Familiarity with domain-related data.
- Knowledge of data warehouse optimization techniques.
- Understanding of data security concepts.
- Awareness of patterns, frameworks, and automation practices.

Additional Comments: We are seeking a highly experienced Senior Data Engineer to design, develop, and optimize scalable data pipelines in a cloud-based environment. The ideal candidate will have deep expertise in PySpark, SQL, Azure Databricks, and experience with either AWS or GCP. A strong foundation in data warehousing, ELT/ETL processes, and dimensional modeling (Kimball/star schema) is essential for this role.

Must-Have Skills:
- 8+ years of hands-on experience in data engineering or big data development.
- Strong proficiency in PySpark and SQL for data transformation and pipeline development (see the PySpark sketch following this listing).
- Experience working in Azure Databricks or equivalent Spark-based cloud platforms.
- Practical knowledge of cloud data environments – Azure, AWS, or GCP.
- Solid understanding of data warehousing concepts, including Kimball methodology and star/snowflake schema design.
- Proven experience designing and maintaining ETL/ELT pipelines in production.
- Familiarity with version control (e.g., Git), CI/CD practices, and data pipeline orchestration tools (e.g., Airflow, Azure Data Factory).

Skills: Azure Data Factory, Azure Databricks, PySpark, SQL
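As a point of reference for the PySpark and star-schema skills listed above, here is a minimal sketch of a fact-to-dimension join with a broadcast hint as a simple performance optimization. The table and column names are illustrative assumptions, not part of the role description.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("star-schema-sketch").getOrCreate()

# Tiny stand-ins for a fact table and a dimension table in a star schema.
fact_sales = spark.createDataFrame(
    [(1, "P1", 100.0), (2, "P2", 250.0), (3, "P1", 75.5)],
    ["sale_id", "product_key", "amount"],
)
dim_product = spark.createDataFrame(
    [("P1", "Widget"), ("P2", "Gadget")],
    ["product_key", "product_name"],
)

# Broadcasting the small dimension avoids shuffling the large fact table.
revenue_by_product = (
    fact_sales.join(F.broadcast(dim_product), "product_key")
    .groupBy("product_name")
    .agg(F.sum("amount").alias("total_revenue"))
)
revenue_by_product.show()
```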
Posted 4 days ago
10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Hi, greetings from Peoplefy Infosolutions!

We are hiring for one of our reputed MNC clients based in Pune. We are looking for candidates with 10+ years of experience who are currently working as a Data Architect.

Job Description: We are seeking a highly skilled and experienced Cloud Data Architect to design, implement, and manage scalable, secure, and efficient cloud-based data solutions. The ideal candidate will possess a strong combination of technical expertise, analytical skills, and the ability to collaborate effectively with cross-functional teams to translate business requirements into technical solutions.

Key Responsibilities:
- Design and implement data architectures, including data pipelines, data lakes, and data warehouses, on cloud platforms.
- Develop and optimize data models (e.g., star schema, snowflake schema) to support business intelligence and analytics.
- Leverage big data technologies (e.g., Hadoop, Spark, Kafka) to process and analyze large-scale datasets.
- Manage and optimize relational and NoSQL databases for performance and scalability.
- Develop and maintain ETL/ELT workflows using tools like Apache NiFi, Talend, or Informatica.
- Ensure data security and compliance with regulations such as GDPR and CCPA.
- Automate infrastructure deployment using CI/CD pipelines and Infrastructure as Code (IaC) tools (e.g., Terraform, CloudFormation).
- Collaborate with analytics teams to integrate machine learning frameworks and visualization tools (e.g., Tableau, Power BI).
- Provide technical leadership and mentorship to team members.

Interested candidates for the above position, kindly share your CV at sneh.ne@peoplefy.com.
Posted 4 days ago
6.0 years
0 Lacs
Delhi, India
Remote
Job Title: Senior Data Modeler
Experience Required: 6+ Years
Location: Remote
Employment Type: Full-time / Contract (Remote)
Domain: Data Engineering / Analytics / Data Warehousing

Job Summary: We are seeking an experienced and detail-oriented Data Modeler with a strong background in conceptual, logical, and physical data modeling. The ideal candidate will have in-depth knowledge of Snowflake architecture, data modeling best practices (Star/Snowflake schema), and advanced SQL scripting. You will be responsible for designing robust, scalable data models and working closely with data engineers, analysts, and business stakeholders.

Key Responsibilities:
1. Data Modeling:
- Design conceptual, logical, and physical data models.
- Create and maintain Star and Snowflake schemas for analytical reporting.
- Perform normalization and denormalization based on performance and reporting requirements.
- Work closely with business stakeholders to translate requirements into optimized data structures.
- Maintain data model documentation and the data dictionary.
2. Snowflake Expertise:
- Design and implement Snowflake schemas with optimal partitioning and clustering strategies.
- Perform performance tuning for complex queries and storage optimization.
- Implement Time Travel, Streams, and Tasks for data recovery and pipeline automation.
- Manage and secure data using Secure Views and Materialized Views.
- Optimize usage of Virtual Warehouses and storage costs.
3. SQL & Scripting:
- Write and maintain advanced SQL queries, including Common Table Expressions (CTEs), window functions, and recursive queries (see the illustrative SQL sketch below).
- Build automation scripts for data loading, transformation, and validation.
- Troubleshoot and optimize SQL queries for performance and accuracy.
- Support data migration and integration projects.

Required Skills & Qualifications:
- 6+ years of experience in Data Modeling and Data Warehouse design.
- Proven experience with the Snowflake platform (min. 2 years).
- Strong hands-on experience in Dimensional Modeling (Star/Snowflake schemas).
- Expert in SQL and scripting for automation and performance optimization.
- Familiarity with tools like Erwin, PowerDesigner, or similar data modeling tools.
- Experience working in Agile/Scrum environments.
- Strong analytical and problem-solving skills.
- Excellent communication and stakeholder engagement skills.

Preferred Skills (Nice to Have):
- Experience with ETL/ELT tools like dbt, Informatica, Talend, etc.
- Exposure to Cloud Platforms like AWS, Azure, or GCP.
- Familiarity with Data Governance and Data Quality frameworks.
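The "advanced SQL" this posting asks for (CTEs, window functions, recursive queries) is illustrated below in a small, self-contained Python script. SQLite is used only so the snippet runs anywhere; the same constructs exist in Snowflake. The employee hierarchy schema is hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employees (emp_id INTEGER, name TEXT, manager_id INTEGER, salary REAL);
    INSERT INTO employees VALUES
        (1, 'Asha', NULL, 200.0),
        (2, 'Bala', 1, 150.0),
        (3, 'Chen', 1, 140.0),
        (4, 'Dev',  2, 100.0);
""")

query = """
WITH RECURSIVE org AS (                        -- recursive CTE: walk the reporting hierarchy
    SELECT emp_id, name, manager_id, salary, 0 AS depth
    FROM employees WHERE manager_id IS NULL
    UNION ALL
    SELECT e.emp_id, e.name, e.manager_id, e.salary, org.depth + 1
    FROM employees e JOIN org ON e.manager_id = org.emp_id
)
SELECT name, depth, salary,
       RANK() OVER (PARTITION BY depth ORDER BY salary DESC) AS salary_rank_in_level
FROM org
ORDER BY depth, salary_rank_in_level;
"""

for row in conn.execute(query):
    print(row)
```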
Posted 5 days ago
12.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Organization: Leading global management consulting organization (one of the Big 3 consulting firms)
Role: Sr Data Architect
Experience: 10+ Yrs

WHAT YOU'LL DO
- Define and design the future-state data architecture for HR reporting, forecasting, and analysis products.
- Partner with Technology, Data Stewards, and various Product teams in an Agile work stream while meeting program goals and deadlines.
- Engage with line of business, operations, and project partners to gather process improvements.
- Lead the design/build of new models to efficiently deliver financial results to senior management.
- Evaluate data-related tools and technologies and recommend appropriate implementation patterns and standard methodologies to ensure our data ecosystem is always modern.
- Collaborate with Enterprise Data Architects in establishing and adhering to enterprise standards, while also performing POCs to ensure those standards are implemented.
- Provide technical expertise and mentorship to Data Engineers and Data Analysts in the data architecture.
- Develop and maintain processes, standards, policies, guidelines, and governance to ensure that a consistent framework and set of standards is applied across the company.
- Create and maintain conceptual/logical data models to identify key business entities and visual relationships.
- Work with business and IT teams to understand data requirements.
- Maintain a data dictionary consisting of table and column definitions.
- Review data models with both technical and business audiences.

YOU'RE GOOD AT
- Designing, documenting, and training the team on the overall processes and process flows for the data architecture.
- Resolving technical challenges in critical situations that require immediate resolution.
- Developing relationships with external stakeholders to maintain awareness of data and security issues and trends.
- Reviewing work from other tech team members and providing feedback for growth.
- Implementing data security policies that align with governance objectives and regulatory requirements.

YOU BRING (EXPERIENCE & QUALIFICATIONS)
Essential Education
- Minimum of a Bachelor's degree in Computer Science, Engineering, or a similar field.
- Additional certification in Data Management or cloud data platforms like Snowflake preferred.
Essential Experience & Job Requirements
- 12+ years of IT experience with a major focus on data warehouse/database-related projects.
- Expertise in cloud databases like Snowflake, Redshift, etc.
- Expertise in Data Warehousing Architecture; BI/analytical systems; data cataloguing; MDM; etc.
- Proficient in conceptual, logical, and physical data modelling.
- Proficient in documenting all architecture-related work performed.
- Proficient in data storage, ETL/ELT, and data analytics tools like AWS Glue, dbt/Talend, Fivetran, APIs, Tableau, Power BI, Alteryx, etc.
- Experience in building data solutions to support Comp Benchmarking, Pay Transparency / Pay Equity, and Total Rewards use cases preferred.
- Experience with cloud big data technologies such as AWS, Azure, GCP, and Snowflake a plus.
- Experience working with agile methodologies (Scrum, Kanban) and Meta Scrum with cross-functional teams (Product Owners, Scrum Master, Architects, and data SMEs) a plus.
- Excellent written, oral communication, and presentation skills to present architecture, features, and solution recommendations is a must.
Posted 5 days ago
5.0 - 9.0 years
0 Lacs
Panchkula, Haryana
On-site
We are seeking a skilled and experienced Lead/Senior ETL Engineer with 4-8 years of experience to join our dynamic data engineering team. As a Lead/Sr. ETL Engineer, you will play a crucial role in designing and developing high-performing ETL solutions, managing data pipelines, and ensuring seamless integration across systems. Your expertise in ETL tools, cloud platforms, scripting, and data modeling principles will be pivotal in building efficient, scalable, and reliable data solutions for enterprise-level implementations.

Key Skills:
- Proficiency in ETL tools such as SSIS, DataStage, Informatica, or Talend.
- In-depth understanding of Data Warehousing concepts, including Data Marts, Star/Snowflake schemas, and Fact & Dimension tables.
- Strong experience with relational databases like SQL Server, Oracle, Teradata, DB2, or MySQL.
- Solid scripting/programming skills in Python.
- Hands-on experience with cloud platforms like AWS or Azure.
- Knowledge of middleware architecture and enterprise data integration strategies.
- Familiarity with reporting/BI tools such as Tableau and Power BI.
- Ability to write and review high- and low-level design documents.
- Excellent communication skills and the ability to work effectively with cross-cultural, distributed teams.

Roles and Responsibilities:
- Design and develop ETL workflows and data integration strategies.
- Collaborate with cross-functional teams to deliver enterprise-grade middleware solutions.
- Coach and mentor junior engineers to support skill development and performance.
- Ensure timely delivery, escalate issues proactively, and manage QA and validation processes.
- Participate in planning, estimations, and recruitment activities.
- Work on multiple projects simultaneously, ensuring quality and consistency in delivery.
- Experience in Sales and Marketing data domains.
- Strong problem-solving abilities with a data-driven mindset.
- Ability to work independently and collaboratively in a fast-paced environment.
- Prior experience in global implementations and managing multi-location teams is a plus.

If you are a passionate Lead/Sr. ETL Engineer looking to make a significant impact in a dynamic environment, we encourage you to apply for this exciting opportunity. Thank you for considering a career with us. We look forward to receiving your application! For further inquiries, please contact us at careers@grazitti.com.

Location: Panchkula, India
Posted 5 days ago
6.0 years
12 - 18 Lacs
Delhi, India
Remote
Skills: Data Modeling, Snowflake, Schemas, Star Schema Design, SQL, Data Integration

Job Title: Senior Data Modeler
Experience Required: 6+ Years
Location: Remote
Employment Type: Full-time / Contract (Remote)
Domain: Data Engineering / Analytics / Data Warehousing

Job Summary: We are seeking an experienced and detail-oriented Data Modeler with a strong background in conceptual, logical, and physical data modeling. The ideal candidate will have in-depth knowledge of Snowflake architecture, data modeling best practices (Star/Snowflake schema), and advanced SQL scripting. You will be responsible for designing robust, scalable data models and working closely with data engineers, analysts, and business stakeholders.

Key Responsibilities:
Data Modeling:
- Design conceptual, logical, and physical data models.
- Create and maintain Star and Snowflake schemas for analytical reporting.
- Perform normalization and denormalization based on performance and reporting requirements.
- Work closely with business stakeholders to translate requirements into optimized data structures.
- Maintain data model documentation and the data dictionary.
Snowflake Expertise:
- Design and implement Snowflake schemas with optimal partitioning and clustering strategies.
- Perform performance tuning for complex queries and storage optimization.
- Implement Time Travel, Streams, and Tasks for data recovery and pipeline automation.
- Manage and secure data using Secure Views and Materialized Views.
- Optimize usage of Virtual Warehouses and storage costs.
SQL & Scripting:
- Write and maintain advanced SQL queries, including Common Table Expressions (CTEs), window functions, and recursive queries.
- Build automation scripts for data loading, transformation, and validation.
- Troubleshoot and optimize SQL queries for performance and accuracy.
- Support data migration and integration projects.

Required Skills & Qualifications:
- 6+ years of experience in Data Modeling and Data Warehouse design.
- Proven experience with the Snowflake platform (min. 2 years).
- Strong hands-on experience in Dimensional Modeling (Star/Snowflake schemas).
- Expert in SQL and scripting for automation and performance optimization.
- Familiarity with tools like Erwin, PowerDesigner, or similar data modeling tools.
- Experience working in Agile/Scrum environments.
- Strong analytical and problem-solving skills.
- Excellent communication and stakeholder engagement skills.

Preferred Skills (Nice to Have):
- Experience with ETL/ELT tools like dbt, Informatica, Talend, etc.
- Exposure to Cloud Platforms like AWS, Azure, or GCP.
- Familiarity with Data Governance and Data Quality frameworks.
Posted 5 days ago
6.0 - 10.0 years
0 Lacs
Haryana
On-site
You will be responsible for preparing data, developing models, testing them, and deploying them. This includes designing machine learning systems and self-running artificial intelligence (AI) software to automate predictive models. Your role will involve ensuring that algorithms generate accurate user recommendations. Additionally, you will work on turning unstructured data into useful information by auto-tagging images and converting text to speech. Solving complex problems with multi-layered data sets and optimizing existing machine learning libraries and frameworks will be part of your daily tasks. Your responsibilities will also include developing machine learning algorithms to analyze large volumes of historical data for making predictions. You will run tests, perform statistical analysis, interpret the results, and document machine learning processes.

As a Lead Engineer in ML and Data Engineering, you will oversee the technologies, tools, and techniques used within the team. Collaboration with the team on business requirements and the resulting designs is essential. You will ensure that development standards, policies, and procedures are adhered to, and drive change to implement efficient and effective strategies. Working closely with peers in the business to fully understand the business process and requirements is crucial. Maintenance, debugging, and problem-solving will also be part of your job responsibilities. Ensuring that all software developed within your team meets the specified business requirements, and showing flexibility to respond to the changing needs of the business, are key aspects of the role.

Technical skills:
- 4+ years of experience in Python and API development using Flask/Django.
- Proficiency in libraries such as Pandas, NumPy, Keras, SciPy, scikit-learn, PyTorch, TensorFlow, and Theano.
- Hands-on experience in Machine Learning (supervised & unsupervised) and familiarity with data analytics tools & libraries (a brief sketch follows this listing).
- Experience in cloud data pipelines and engineering (Azure/AWS).
- Familiarity with ETL pipelines / Databricks / Apache NiFi / Kafka / Talend will be beneficial.
- Ability to work independently on projects, with good written and verbal communication skills.
- A Bachelor's Degree in Computer Science/Engineering/BCA/MCA is essential for this role.

Desirable skills: 2+ years of experience in Java.
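As an illustration of the supervised machine learning workflow this role involves (prepare data, train, evaluate), here is a minimal scikit-learn sketch using its bundled iris dataset. Nothing in it is specific to the employer's stack.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Load a small, bundled dataset and hold out a test split.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42, stratify=y
)

# Train a simple classifier and evaluate it on the held-out data.
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
preds = model.predict(X_test)
print(f"test accuracy: {accuracy_score(y_test, preds):.3f}")
```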
Posted 5 days ago
8.0 - 12.0 years
0 Lacs
Karnataka
On-site
As an Associate Architect (IND) at Elevance Health, you will be responsible for designing and implementing scalable, high-performance ETL solutions for data ingestion, transformation, and loading. You will define and maintain data architecture standards, best practices, and governance policies while collaborating with data engineers, analysts, and business stakeholders to understand data requirements. Your role will involve optimizing existing ETL pipelines for performance, reliability, and scalability, and ensuring data quality, consistency, and security across all data flows.

In this position, you will lead the evaluation and selection of ETL tools and technologies, providing technical leadership and mentorship to junior data engineers. Additionally, you will be expected to document data flows, architecture diagrams, and technical specifications. Experience with Snowflake and Oracle would be beneficial.

To qualify for this role, you should hold a Bachelor's or Master's degree in Computer Science, Information Systems, or a related field, along with at least 8 years of experience in data engineering or ETL development. Strong expertise in ETL tools such as Informatica, Talend, Apache NiFi, SSIS, or similar is essential, as well as proficiency in SQL and experience with relational and NoSQL databases. Experience with cloud platforms like AWS, Azure, or Google Cloud, and familiarity with data modeling, data warehousing, and big data technologies are also required.

The ideal candidate will possess strong problem-solving and communication skills, along with good business communication skills. You should be committed, accountable, and able to communicate status to stakeholders in a timely manner. Collaboration and leadership skills are vital for this role, as you will be working with global teams.

At Carelon, we promise a world of limitless opportunities to our associates, fostering an environment that promotes growth, well-being, purpose, and a sense of belonging. Our focus on learning and development, innovative culture, comprehensive rewards, and competitive benefits make Carelon an equal opportunity employer dedicated to delivering the best results for our customers. If you require reasonable accommodation during the application process, please request the Reasonable Accommodation Request Form. This is a full-time position based in Bangalore.
Posted 5 days ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
As an Infoscion, your primary responsibility will be to interface with the client for quality assurance, issue resolution, and ensuring high customer satisfaction. You will be involved in understanding requirements, creating and reviewing designs, validating architecture, and delivering high levels of service offerings to clients in the technology domain. Your role will also include participating in project estimation, providing inputs for solution delivery, conducting technical risk planning, and performing code reviews and unit test plan reviews. Leading and guiding your teams towards developing optimized, high-quality code deliverables, ensuring continual knowledge management, and adhering to organizational guidelines and processes are key aspects of your job. If you are passionate about building efficient programs and systems, and helping clients navigate their digital transformation journey, this is the perfect opportunity for you.

In addition to the primary responsibilities, you are expected to have knowledge of more than one technology, understand the basics of architecture and design fundamentals, be familiar with testing tools, and have knowledge of agile methodologies. Understanding project life cycle activities on development and maintenance projects, estimation methodologies, quality processes, and the basics of the business domain (to comprehend business requirements) is essential. Your analytical abilities, strong technical skills, good communication skills, and understanding of technology and the domain will be crucial in this role. You should be able to demonstrate a sound understanding of software quality assurance principles, SOLID design principles, and modeling methods, and be aware of the latest technologies and trends. Excellent problem-solving, analytical, and debugging skills will be beneficial for excelling in this position.

Preferred Skills:
- Technology: Data Management - Data Integration - Talend
Posted 5 days ago
4.0 - 6.0 years
10 - 20 Lacs
Hyderabad
Work from Office
Job Title: ETL Developer – Talend & Snowflake
Location: [Hyderabad]
Experience: 4+ Years
Employment Type: [Full-Time]
Industry: [Insert Industry, e.g., IT Services, Banking, Healthcare]

Job Description: We are seeking a talented ETL Developer with hands-on experience in Talend Management Console on Cloud and Snowflake to join our growing data engineering team. The ideal candidate will play a critical role in designing, developing, and optimizing scalable data pipelines to support enterprise-level analytics and reporting across cloud-based environments.

Key Responsibilities:
- Design and develop robust ETL/ELT pipelines using Talend Management Console on Cloud to integrate data from on-premise and cloud-based sources.
- Implement and optimize data ingestion, transformation, and loading processes in Snowflake to support business intelligence and reporting needs.
- Manage, monitor, and deploy data jobs through Talend Cloud, ensuring high performance and automation.
- Collaborate with data architects, business stakeholders, and analytics teams to ensure integration solutions align with business goals and best practices.
- Troubleshoot and resolve issues related to ETL workflows, data latency, and system performance.
- Ensure data quality, consistency, and security across all data pipelines.
- Create and maintain comprehensive documentation for ETL processes, data mappings, workflows, and technical specifications.
- Participate in code reviews, sprint planning, and other Agile ceremonies as part of a collaborative development team.
- Work closely with Snowflake specialists to implement best practices in data warehousing.
- Implement robust error handling and data validation mechanisms for seamless data movement.

Required Qualifications:
- Minimum 2+ years of hands-on experience with Talend, particularly using Talend Management Console on Cloud.
- Strong expertise in Snowflake data warehouse and cloud data integration.
- Solid understanding of ETL/ELT concepts, data modeling, and data architecture principles.
- Proficiency in SQL and experience working with relational databases.
- Experience with cloud platforms such as AWS, Azure, or GCP.
- Knowledge of data quality, performance tuning, and job optimization techniques.
- Ability to troubleshoot complex data integration workflows and deliver solutions under tight deadlines.

Preferred Qualifications:
- Exposure to Informatica (optional but a plus).
- Experience with Python or shell scripting for automation tasks.
- Familiarity with CI/CD practices, DevOps tools, and version control (e.g., Git, Jenkins).
- Understanding of data governance and security practices in cloud-based environments.

Soft Skills:
- Strong analytical and problem-solving skills.
- Excellent verbal and written communication.
- Ability to work independently and as part of a cross-functional team.
- Detail-oriented with strong documentation and collaboration capabilities.
Posted 5 days ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
ZS is a place where passion changes lives. As a management consulting and technology firm focused on improving life and how we live it, our most valuable asset is our people. Here you’ll work side-by-side with a powerful collective of thinkers and experts shaping life-changing solutions for patients, caregivers and consumers, worldwide. ZSers drive impact by bringing a client first mentality to each and every engagement. We partner collaboratively with our clients to develop custom solutions and technology products that create value and deliver company results across critical areas of their business. Bring your curiosity for learning; bold ideas; courage and passion to drive life-changing impact to ZS. Our most valuable asset is our people . At ZS we honor the visible and invisible elements of our identities, personal experiences and belief systems—the ones that comprise us as individuals, shape who we are and make us unique. We believe your personal interests, identities, and desire to learn are part of your success here. Learn more about our diversity, equity, and inclusion efforts and the networks ZS supports to assist our ZSers in cultivating community spaces, obtaining the resources they need to thrive, and sharing the messages they are passionate about. What you’ll do: Collaborate with ZS internal teams and client teams to shape and implement high quality technology solutions that address critical business problems Develop a deep understanding of the business problems and effectively translate them into technical designs Lead modules and workstreams within projects while participating in hands on implementation Work with technical architects to validate the technical design and implementation approach Apply appropriate development methodologies and best practices to ensure exceptional client and project team experience Support the project lead in project delivery, including project planning, people management, staffing, and risk mitigation Manage a diverse team with various skill sets while providing mentorship and coaching to junior members Lead task planning and distribution among team members for timely completion of projects with high-quality results Guide the project deliverables such as business case development, solution vision and design, user requirements, prototypes, technical architecture, test cases, deployment plans and operations strategy What you’ll bring: Bachelor’s degree in Computer Science, Information Technology, or a related field, or equivalent work experience. Experience working in the AWS cloud platform. Data engineer with expertise in developing big data and data warehouse platforms. Experience working with structured and semi-structured data. Expertise in developing big data solutions, ETL/ELT pipelines for data ingestion, data transformation, and optimization techniques. Experience working directly with technical and business teams. Able to create technical documentation. Excellent problem-solving and analytical skills. Strong communication and collaboration abilities. 
- AWS (big data services): S3, Glue, Athena, EMR (see the boto3 sketch at the end of this listing)
- Programming: Python, Spark, SQL, MuleSoft, Talend, dbt
- Data warehouse: ETL, Redshift / Snowflake

Additional Skills: Strong communication skills, both verbal and written, with the ability to structure thoughts logically during discussions and presentations. Capability to simplify complex concepts into easily understandable frameworks and presentations. Proficiency in working within a virtual global team environment, contributing to the timely delivery of multiple projects. Travel to other offices as required to collaborate with clients and internal project teams.

Perks & Benefits: ZS offers a comprehensive total rewards package including health and well-being, financial planning, annual leave, personal growth and professional development. Our robust skills development programs, multiple career progression options and internal mobility paths, and collaborative culture empower you to thrive as an individual and global team member. We are committed to giving our employees a flexible and connected way of working. A flexible and connected ZS allows us to combine work from home and on-site presence at clients/ZS offices for the majority of our week. The magic of ZS culture and innovation thrives in both planned and spontaneous face-to-face connections.

Travel: Travel is a requirement at ZS for client-facing ZSers; business needs of your project and client are the priority. While some projects may be local, all client-facing ZSers should be prepared to travel as needed. Travel provides opportunities to strengthen client relationships, gain diverse experiences, and enhance professional growth by working in different environments and cultures.

Considering applying? At ZS, we're building a diverse and inclusive company where people bring their passions to inspire life-changing impact and deliver better outcomes for all. We are most interested in finding the best candidate for the job and recognize the value that candidates with all backgrounds, including non-traditional ones, bring. If you are interested in joining us, we encourage you to apply even if you don't meet 100% of the requirements listed above. ZS is an equal opportunity employer and is committed to providing equal employment and advancement opportunities without regard to any class protected by applicable law.

To Complete Your Application: Candidates must possess or be able to obtain work authorization for their intended country of employment. An online application, including a full set of transcripts (official or unofficial), is required to be considered. NO AGENCY CALLS, PLEASE.

Find Out More At: www.zs.com
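As a reference for the AWS skills listed in this posting (S3, Glue, Athena), the sketch below shows a common pattern: running an Athena query over S3-backed data from Python with boto3. The region, database, table, and results bucket are placeholders, not details from ZS or any client environment.

```python
import time

import boto3

athena = boto3.client("athena", region_name="us-east-1")  # placeholder region

# Kick off a query against a hypothetical Glue/Athena database and table.
run = athena.start_query_execution(
    QueryString="SELECT event_date, COUNT(*) AS events FROM raw_events GROUP BY event_date",
    QueryExecutionContext={"Database": "analytics_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = run["QueryExecutionId"]

# Poll until the query finishes, then fetch the result rows.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows:
        print([col.get("VarCharValue") for col in row["Data"]])
else:
    print("Athena query ended in state:", state)
```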
Posted 5 days ago
4.0 - 8.0 years
15 - 25 Lacs
Hyderabad
Work from Office
Job Description: We are looking for a Data Engineer with strong hands-on experience in ETL, cloud data platforms, and scripting to work on scalable data integration solutions.

Mandatory Skills:
- SQL – strong expertise in writing optimized queries and procedures.
- Data Warehousing (DWH) – good understanding of data modeling and warehouse architecture.
- Shell scripting or Python – for automation and custom transformation logic.
- ETL tool – experience with any ETL tool (Talend/Informatica/DataStage, etc.).
- Databricks – used for data transformation and processing.
- Azure Data Factory (ADF) – designing and orchestrating data pipelines.

Good to Have:
- Snowflake – for implementing scalable cloud data warehousing solutions.
- Azure ecosystem – general familiarity with Azure services including Data Lake and Storage.

Responsibilities:
- Build and maintain scalable ETL pipelines using Talend, ADF, and Databricks.
- Extract, transform, and load data from multiple source systems into Snowflake and/or Azure Data Lake.
- Interpret technical and functional designs and implement them effectively in the data pipeline.
- Collaborate with teams to ensure high data quality and performance.
- Support and guide ETL developers in resolving technical challenges and implementing best practices.
Posted 5 days ago
15.0 - 20.0 years
5 - 9 Lacs
Pune
Work from Office
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Talend ETL
Good to have skills: NA
Minimum 5 year(s) of experience is required.
Educational Qualification: 15 years full time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with various teams to understand their needs, developing solutions that align with business objectives, and ensuring that applications are optimized for performance and usability. You will also engage in problem-solving activities, providing support and guidance to your team members while continuously seeking opportunities for improvement in application design and functionality.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Facilitate knowledge sharing sessions to enhance team capabilities.
- Monitor application performance and implement necessary enhancements.

Professional & Technical Skills:
- Must To Have Skills: Proficiency in Talend ETL.
- Good To Have Skills: Experience with data integration tools and methodologies.
- Strong understanding of data warehousing concepts and practices.
- Experience in developing and maintaining ETL processes.
- Familiarity with database management systems and SQL.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Talend ETL.
- This position is based at our Pune office.
- A 15 years full time education is required.

Qualification: 15 years full time education
Posted 5 days ago
3.0 - 8.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: Talend ETL
Good to have skills: NA
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years full time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various stakeholders to gather requirements, overseeing the development process, and ensuring that the applications meet the specified needs. You will also engage in problem-solving discussions with your team, providing guidance and support to ensure successful project outcomes. Your role will require you to stay updated on industry trends and best practices to enhance application performance and user experience.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute in providing solutions to work related problems.
- Facilitate communication between technical teams and stakeholders to ensure alignment on project goals.
- Mentor junior team members, providing them with guidance and support in their professional development.

Professional & Technical Skills:
- Must To Have Skills: Proficiency in Talend ETL.
- Strong understanding of data integration processes and methodologies.
- Experience with data warehousing concepts and practices.
- Familiarity with SQL and database management systems.
- Ability to troubleshoot and resolve technical issues related to application performance.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Talend ETL.
- This position is based at our Bengaluru office.
- A 15 years full time education is required.

Qualification: 15 years full time education
Posted 5 days ago
3.0 - 8.0 years
5 - 9 Lacs
Chennai
Work from Office
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Talend ETL
Good to have skills: NA
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years full time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with team members to understand project needs, developing application features, and ensuring that the applications function seamlessly within the existing infrastructure. You will also engage in testing and troubleshooting to enhance application performance and user experience, while continuously seeking opportunities for improvement and innovation in application design and functionality.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute in providing solutions to work related problems.
- Assist in the documentation of application processes and workflows.
- Engage in code reviews to ensure quality and adherence to best practices.

Professional & Technical Skills:
- Must To Have Skills: Proficiency in Talend ETL.
- Good To Have Skills: Experience with data integration tools and methodologies.
- Strong understanding of data warehousing concepts and practices.
- Familiarity with SQL and database management systems.
- Experience in application testing and debugging techniques.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Talend ETL.
- This position is based at our Chennai office.
- A 15 years full time education is required.

Qualification: 15 years full time education
Posted 5 days ago
6.0 years
0 Lacs
Delhi, India
On-site
Job Title: Lead Azure Data Engineer
Experience Level: Mid to Senior Level
Location: Delhi
Duration: Full time
Experience Required: 6-8+ Years

Description: We are seeking a highly skilled and experienced Lead Azure Data Engineer to join our team. The ideal candidate will have a strong background in data engineering, with a focus on working with Databricks, PySpark, Scala-Spark, and advanced SQL. This role requires hands-on experience in implementing or migrating projects to Unity Catalog, optimizing performance on Databricks Spark, and orchestrating workflows using various tools.

Must Have Skills:
- MS Fabric
- ADF (Azure Data Factory)
- Azure Synapse

Key Responsibilities:
- Minimum 6 years of data engineering and analytics project delivery experience.
- At least 2 past Databricks migration projects (e.g., Hadoop to Databricks, Teradata to Databricks, Oracle to Databricks, Talend to Databricks, etc.).
- Hands-on with advanced SQL and PySpark and/or Scala Spark.
- At least 3 past Databricks projects involving performance optimization work (see the optimization sketch below).
- Design, develop, and optimize data pipelines and ETL processes using Databricks and Apache Spark.
- Implement and optimize performance on Databricks Spark, ensuring efficient data processing and management.
- Develop and validate data formulation and data delivery for Big Data projects.
- Collaborate with cross-functional teams to define, design, and implement data solutions that meet business requirements.
- Conduct performance tuning and optimization of complex queries and data models.
- Manage and orchestrate data workflows using tools such as Databricks Workflow, Azure Data Factory (ADF), Apache Airflow, and/or AWS Glue.
- Maintain and ensure data security, quality, and governance throughout the data lifecycle.

Technical Skills:
- Extensive experience with PySpark and Scala-Spark.
- Advanced SQL skills for complex data manipulation and querying.
- Proven experience in performance optimization on Databricks Spark across at least three projects.
- Hands-on experience with data formulation and data delivery validation in Big Data projects.
- Experience in data orchestration using at least two of the following: Databricks Workflow, Azure Data Factory (ADF), Apache Airflow, AWS Glue.
- Experience in Azure Synapse.
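The Databricks performance-optimization experience this posting emphasizes typically involves moves like the ones sketched below: enabling adaptive query execution, repartitioning before a wide aggregation, and compacting/Z-ordering a Delta table. This is a hedged illustration only; the table names are assumptions, and the OPTIMIZE/ZORDER statement assumes a Databricks or Delta Lake environment.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dbx-optimization-sketch").getOrCreate()

# Let Spark re-plan joins and shuffle partition counts at runtime (AQE).
spark.conf.set("spark.sql.adaptive.enabled", "true")

# Assumed, already-registered Delta table; replace with a real table name.
sales = spark.table("sales_delta")

# Repartition by the grouping key to reduce skew before a wide aggregation.
daily_revenue = (
    sales.repartition("event_date")
    .groupBy("event_date")
    .agg(F.sum("amount").alias("revenue"))
)
daily_revenue.write.mode("overwrite").saveAsTable("daily_revenue")

# Compact small files and co-locate rows commonly filtered by date (Delta Lake feature).
spark.sql("OPTIMIZE sales_delta ZORDER BY (event_date)")
```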
Posted 5 days ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About Hevo: Postman, Zepto, ThoughtSpot, Whatfix, Shopify, DoorDash, and thousands of other data-driven companies share one thing. They all use Hevo Data's fully managed Automated Pipelines to consolidate their data from multiple sources like Databases, Marketing Applications, Cloud Storage, SDKs, Streaming Services, etc. We are a San Francisco/Bangalore-based company with 2000+ customers spread across 40+ countries in domains such as e-commerce, financial technology, and healthcare. Strongly backed by marquee investors like Sequoia Capital, Chiratae, and Qualgro, we have raised $43 Million to date and are looking forward to our next phase of hyper-growth! Why Do we exist? Every company today wants to leverage cutting-edge technology - Artificial Intelligence, Machine Learning, and Predictive Analytics - to make smarter business decisions. Data is the foundational block on which these advanced techniques can be applied. However, every company's business users struggle to access accurate and reliable data. Data resides fragmented across the 100s of business software that businesses use. Most of the data operators spend time manually consolidating it, and they spend too little time deriving insights. If this ‘collection’ can be automated, it can lead to making business decisions faster, unlocking exponential growth, and delivering a superior experience to customers. At Hevo, our mission is to enable every company to be data-driven. We started on this journey 4 years back, and as the first step in this direction, we built our first product – “Data Pipeline” or simply “Pipeline”. Hevo Pipeline is a no-code platform that helps companies connect all their data sources within the company to get a unified view of their business. The platform offers integrations with 150+ data sources, such as Databases, SaaS applications, Advertising, and Channels. Today, we enable nearly 2000 companies across more than 40+ countries to be more data-driven. We aim to make the technology so simple that anyone can solve their data problems without being limited due to their lack of technical skills. Product Demo and our customers love us - Hevo is rated at the top of the G2 crowd in the Data Pipeline space: Hevo Data Reviews 2025: Details, Pricing, & Features | G2 About the Role: Platform Product Owner – Data Pipelines We’re looking for a product-driven, data-savvy Platform Product Owner to lead the evolution of Hevo’s Data Pipelines Platform. This role blends strategic product thinking with operational excellence and offers full ownership—from defining product outcomes to driving delivery health and platform reliability. You’ll work closely with Engineering, Architecture, and cross-functional teams to shape the platform roadmap, define user value, and ensure successful outcomes through measurable impact. If you're passionate about building scalable, high-impact data products—and excel at balancing strategy with execution—this role is for you. 
Key Responsibilities:
Product Ownership & Strategy
- Define and evolve the product vision and roadmap in collaboration with Product Leadership.
- Translate vision into a value-driven, structured product backlog focused on scalability, reliability, and user outcomes.
- Craft clear user stories with well-defined acceptance criteria and success metrics.
- Partner with Engineering and Architecture to design and iterate on platform capabilities aligned with long-term strategy.
- Analyze competitive products to identify experience gaps, technical differentiators, and new opportunities.
- Ensure platform capabilities deliver consistent value to internal teams and end users.
Product Operations & Delivery Insights
- Define and track key product health metrics (e.g., uptime, throughput, SLA adherence, adoption).
- Foster a metrics-first culture in product delivery, ensuring every backlog item ties to measurable outcomes.
- Triage bugs and feature requests, assess impact, and feed insights into prioritization and planning.
- Define post-release success metrics and establish feedback loops to evaluate feature adoption and performance.
- Build dashboards and reporting frameworks to increase visibility into product readiness, velocity, and operations.
- Improve practices around backlog hygiene, estimation accuracy, and story lifecycle management.
- Ensure excellence in release planning and launch execution to meet quality and scalability benchmarks.
Collaboration & Communication
- Champion the product vision and user needs across all stages of development.
- Collaborate with Support, Customer Success, and Product Marketing to ensure customer insights inform product direction.
- Develop enablement materials (e.g., internal walkthroughs, release notes) to support go-to-market and support teams.
- Drive alignment and accountability throughout the product lifecycle, from planning to post-release evaluation.

Qualifications:
Required
- Bachelor's degree in Computer Science or a related engineering field.
- 5+ years of experience as a Product Manager/Product Owner, with time spent on platform/infrastructure products at B2B startups.
- Hands-on experience with ETL tools or modern data platforms (e.g., Talend, Informatica, AWS Glue, Snowflake, BigQuery, Redshift, Databricks).
- Strong understanding of the product lifecycle with an operations-focused mindset.
- Proven ability to collaborate with engineering teams to build scalable, reliable features.
- Familiarity with data integration, APIs, connectors, and streaming/real-time data pipelines.
- Analytical mindset with experience tracking KPIs and making data-informed decisions.
- Excellent communication and cross-functional collaboration skills.
- Proficiency with agile product development tools (e.g., Jira, Aha!, Linear).
Preferred
- Experience in a data-intensive environment.
- Engineer-turned-Product Manager with a hands-on technical background.
- MBA from a Tier-1 institute.
Posted 5 days ago
8.0 - 12.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
- Experience working on BigID or Collibra.
- Knowledge of data classification and data products.
- Understanding of data loss and personal information security.
- Exposure to Snowflake, S3, Redshift, SharePoint, and Box.
- Understanding of connecting to various source systems.
- Deep understanding and practical knowledge of IDEs such as Eclipse, PyCharm, or any workflow designer.
- Experience with one or more of the following languages: Java, JavaScript, Groovy, Python.
- Deep understanding and hands-on experience of CI/CD processes and tooling such as GitHub.
- Experience working in DevOps teams based on Kubernetes tools.
- Hands-on experience in database concepts and a fair idea about data classification, lineage, and storage is a plus.
- Fantastic written and spoken English, interpersonal skills, and a collaborative approach to delivery.

Desirable Skills and Experience:
- Overall IT experience in the range of 8 to 12 years.
- A technical degree to validate the experience.
- Deep technical expertise.
- Display a solid understanding of the technology requested and problem-solving skills.
- Must be analytical and focused, and should be able to independently handle work with minimum supervision.
- Good collaborator management and a team player.
- Exposure to platforms like Talend Data Catalog, BigID, or Snowflake is an advantage.
- Basic AWS knowledge is a plus.
- Knowledge and experience of integration technologies like MuleSoft and SnapLogic.
- Excellent Jira skills, including the ability to rapidly generate JQL on the fly and save JQL queries, filters, views, etc., for publishing to fellow engineers and senior stakeholders.
- Creation of documentation in Confluence.
- Experience of Agile practices, preferably having been part of an Agile team for several years.
Posted 5 days ago
7.0 years
5 - 10 Lacs
Hyderābād
On-site
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.
Primary Responsibilities:
Become part of the Operations, Support & Maintenance team
We need someone technical who can review existing scripts and code to debug, fix, and enhance them
Write intermediate-level SQL and MongoDB queries to support customers with their issues and enhancement requests (see the sketch below)
Support applications/products/platforms during testing and post-production
Develop new code/scripts (not heads-down development)
Analyze and report on data; manage data (data validation, data clean-up)
Monitor scheduled jobs and take proactive action; resolve job failures and communicate to stakeholders
Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so
Required Qualifications:
Bachelor's Degree
7+ years of relational database experience (Oracle, SQL Server, DB2, etc.)
5+ years of ETL tool experience (Talend, SQL Server SSIS, Informatica, etc.)
5+ years of programming experience (e.g. Java, JavaScript, Visual Basic, etc.)
3+ years of experience with NoSQL (e.g. MongoDB)
3+ years of experience in the SDLC development process
2+ years of experience with a job scheduler (e.g. Rundeck, Tivoli)
Thorough understanding of production control, such as the change control process
Thorough understanding of REST API services
Preferred Qualifications:
Understanding of the following:
Vaults such as CyberArk, HashiCorp
Document management such as Nuxeo
Version control tooling (Git)
Healthcare terminology
Atlassian tools such as JIRA / Bitbucket / Crowd / Confluence
ETL / ELT tools such as Fivetran
Understanding of Agile methodology
At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone–of every race, gender, sexuality, age, location and income–deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.
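As a rough illustration of the "intermediate-level MongoDB query" skill mentioned above, here is a minimal aggregation sketch using pymongo. The connection string, database, collection, and field names are assumptions for the example, not details from the role.

```python
# Illustrative sketch only: a job-run audit store is assumed; adjust names to the real system.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # placeholder URI
jobs = client["ops_support"]["job_runs"]            # hypothetical collection

# Intermediate-level aggregation: failed runs per job per day,
# the kind of query used to spot recurring scheduler failures.
pipeline = [
    {"$match": {"status": "FAILED"}},
    {"$group": {
        "_id": {
            "job": "$job_name",
            "day": {"$dateToString": {"format": "%Y-%m-%d", "date": "$started_at"}},
        },
        "failures": {"$sum": 1},
    }},
    {"$sort": {"failures": -1}},
    {"$limit": 20},
]

for row in jobs.aggregate(pipeline):
    print(row["_id"]["day"], row["_id"]["job"], row["failures"])
```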
Posted 5 days ago
8.0 years
0 Lacs
Gurugram, Haryana, India
On-site
We have an opening for Senior Manager-Data Governance for a leading consulting firm based in Gurgaon location. Exp: 12yrs -15yrs Location : Gurgaon Mode: Hybrid Pls share your resume on leeba@mounttalent.com Job Description The ideal candidate will have a goal-oriented mindset and enjoy working with cross-functional teams to deliver Data Governance capabilities and support exceptional Data Products. They will have 8+ years of experience in Data Governance, preferably in Compensation, HR, or Finance data, and be well-versed in compensation data structures, analytics, and regulatory compliance challenges. Additionally, they should have strong communication skills with proven experience in stakeholder engagement, including working with senior business and technical teams to showcase the business benefits of Data Governance. Bachelor’s or higher degree in Computer Science, Mathematics, Statistics, Finance, HR, or related fields 5+ years of experience in Data Governance, Compensation Data Management, HR Data Analytics, or Finance Data Strong knowledge of total Compensation data concepts, benefits, base, bonus, pay structures, incentives, and compliance Understanding of Data Governance frameworks (DAMA DMBoK, EDM Council’s DCAM) is beneficial Hands-on experience using Data Governance tools (e.g., Collibra, Talend, Informatica) Experience with Compensation systems and data platforms such as Workday, SAP, or Snowflake Strong stakeholder engagement and communication skills to collaborate with diverse, global teams
Posted 5 days ago
7.0 - 12.0 years
10 - 20 Lacs
Bengaluru
Work from Office
8+ years of experience in database technologies: AWS Aurora PostgreSQL, NoSQL, DynamoDB, MongoDB, and Erwin data modeling
Experience with pg_stat_statements and query execution plans (see the sketch below)
Experience with Apache Kafka, AWS Kinesis, Airflow, and Talend
Experience with AWS CloudWatch, Prometheus, and Grafana
Required Candidate Profile
Experience with GDPR, SOC 2, Role-Based Access Control (RBAC), and encryption standards
Experience with AWS Multi-AZ, read replicas, failover strategies, and backup automation
Experience with Erwin, Lucidchart, Confluence, and JIRA
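A minimal sketch of how pg_stat_statements and execution plans are typically inspected, assuming the extension is installed. The DSN is a placeholder, and column names vary by PostgreSQL version (total_exec_time/mean_exec_time on 13+, total_time/mean_time on earlier releases).

```python
# Illustrative sketch: pull the slowest statements from pg_stat_statements,
# then look at the plan for a suspect query.
import psycopg2

conn = psycopg2.connect("dbname=app host=localhost user=dba")  # placeholder DSN

SLOW_QUERIES = """
    SELECT query,
           calls,
           round(total_exec_time::numeric, 1) AS total_ms,
           round(mean_exec_time::numeric, 1)  AS mean_ms,
           rows
    FROM pg_stat_statements
    ORDER BY total_exec_time DESC
    LIMIT 10;
"""

with conn, conn.cursor() as cur:
    cur.execute(SLOW_QUERIES)
    for query, calls, total_ms, mean_ms, rows in cur.fetchall():
        print(f"{mean_ms:>10} ms avg | {calls:>8} calls | {query[:80]}")

    # For a specific suspect statement, inspect its execution plan:
    cur.execute("EXPLAIN (ANALYZE, BUFFERS) SELECT 1;")  # replace SELECT 1 with the real query
    print("\n".join(row[0] for row in cur.fetchall()))
```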
Posted 5 days ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: ETL Developer
Location: Hyderabad (5 days WFO)
Experience Required: 4+ years as an ETL Developer
We are looking for a talented Talend Developer with hands-on experience in Talend Management Console on Cloud and Snowflake to join our growing team. The ideal candidate will play a key role in building and optimizing ETL/ELT data pipelines, integrating complex data systems, and ensuring high performance across cloud environments. While experience with Informatica is a plus, it is not mandatory for this role.
As a Talend Developer, you will be responsible for designing, developing, and maintaining data integration solutions to meet the organization’s growing data needs. You will collaborate with business stakeholders, data architects, and other data professionals to ensure the seamless and secure movement of data across platforms, ensuring scalability and performance.
Key Responsibilities:
Develop and maintain ETL/ELT data pipelines using Talend Management Console on Cloud to integrate data from various on-premises and cloud-based sources.
Design, implement, and optimize data flows for data ingestion, processing, and transformation in Snowflake to support analytical and reporting needs.
Utilize Talend Management Console on Cloud to manage, deploy, and monitor data integration jobs, ensuring robust pipeline management and process automation.
Collaborate with data architects to ensure that the data integration solutions align with business requirements and follow best practices.
Ensure data quality, performance, and scalability of Talend-based data solutions.
Troubleshoot, debug, and optimize existing ETL processes to ensure smooth and efficient data integration.
Document data integration processes, including design specifications, mappings, workflows, and performance optimizations.
Collaborate with the Snowflake team to implement best practices for data warehousing and data transformation.
Implement error-handling and data validation processes to ensure high levels of accuracy and data integrity (see the sketch below).
Provide ongoing support for Talend jobs, including post-deployment monitoring, troubleshooting, and optimization.
Participate in code reviews and collaborate in an agile development environment.
Required Qualifications:
2+ years of experience in Talend development, with a focus on using the Talend Management Console on Cloud for managing and deploying jobs.
Strong hands-on experience with the Snowflake data warehouse, including data integration and transformation.
Expertise in developing ETL/ELT workflows for data ingestion, processing, and transformation.
Experience with SQL and working with relational databases to extract and manipulate data.
Experience working in cloud environments (e.g., AWS, Azure, or GCP) with integration of cloud-based data platforms.
Strong knowledge of data integration, data quality, and performance optimization in Talend.
Ability to troubleshoot and resolve issues in data integration jobs and processes.
Solid understanding of data modeling concepts and best practices for building scalable data pipelines.
Preferred Qualifications:
Experience with Informatica is a plus but not mandatory.
Experience with scripting languages such as Python or Shell scripting for automation.
Familiarity with CI/CD pipelines and working in DevOps environments for continuous integration of Talend jobs.
Knowledge of data governance and data security practices in cloud environments.
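A minimal sketch of the kind of post-load validation step the listing mentions, assuming the snowflake-connector-python package. The account, credentials, warehouse, and table names are placeholders, not the project's actual configuration.

```python
# Illustrative sketch: a simple row-count reconciliation between a staging
# table and its target, the sort of check a Talend job's post-run hook could call.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345.eu-west-1",   # placeholder account locator
    user="etl_svc",
    password="***",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)

def row_count(table: str) -> int:
    """Return the row count of a fully qualified table."""
    with conn.cursor() as cur:
        cur.execute(f"SELECT COUNT(*) FROM {table}")
        return cur.fetchone()[0]

src = row_count("STAGE.ORDERS_RAW")    # hypothetical staging table
tgt = row_count("MART.FCT_ORDERS")     # hypothetical target table
if src != tgt:
    raise RuntimeError(f"Row count mismatch: staged {src} vs loaded {tgt}")
print(f"Validation passed: {tgt} rows loaded")
```

In practice this check would be extended with null-rate, duplicate-key, and checksum comparisons, but the structure stays the same: query both sides, compare, fail loudly on mismatch.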
Posted 5 days ago
7.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
Job Information
Date Opened: 07/23/2025
Job Type: Permanent
RSD NO: 10371
Industry: IT Services
Min Experience: 15+
Max Experience: 15+
City: Chennai
State/Province: Tamil Nadu
Country: India
Zip/Postal Code: 600018
Job Description
Job Summary: We are seeking a Data Architect to design and implement scalable, secure, and efficient data solutions that support Convey Health Solutions' business objectives. This role will focus on data modeling, cloud data platforms, ETL processes, and analytics solutions, ensuring compliance with healthcare regulations (HIPAA, CMS guidelines). The ideal candidate will collaborate with data engineers, BI analysts, and business stakeholders to drive data-driven decision-making.
Key Responsibilities:
Enterprise Data Architecture: Design and maintain the overall data architecture to support Convey Health Solutions’ data-driven initiatives.
Cloud & Data Warehousing: Architect cloud-based data solutions (AWS, Azure, Snowflake, BigQuery) to optimize scalability, security, and performance.
Data Modeling: Develop logical and physical data models for structured and unstructured data, supporting analytics, reporting, and operational processes.
ETL & Data Integration: Define strategies for data ingestion, transformation, and integration, leveraging ETL tools like Informatica, Talend, dbt, or Apache Airflow (see the sketch below).
Data Governance & Compliance: Ensure data quality, security, and compliance with HIPAA, CMS, and SOC 2 standards.
Performance Optimization: Optimize database performance, indexing strategies, and query performance for real-time analytics.
Collaboration: Partner with data engineers, software developers, and business teams to align data architecture with business objectives.
Technology Innovation: Stay up to date with emerging data technologies, AI/ML applications, and industry trends in healthcare data analytics.
Required Qualifications:
Education: Bachelor’s or Master’s degree in Computer Science, Information Systems, Data Engineering, or a related field.
Experience: 7+ years of experience in data architecture, data engineering, or related roles.
Technical Skills:
Strong expertise in SQL, NoSQL, and data modeling techniques
Hands-on experience with cloud data platforms (AWS Redshift, Snowflake, Google BigQuery, Azure Synapse)
Experience with ETL frameworks (Informatica, Talend, dbt, Apache Airflow, etc.)
Knowledge of big data technologies (Spark, Hadoop, Databricks)
Strong understanding of data security and compliance (HIPAA, CMS, SOC 2, GDPR)
Soft Skills: Strong analytical, problem-solving, and communication skills. Ability to work in a collaborative, agile environment.
Preferred Qualifications:
Experience in healthcare data management, claims processing, risk adjustment, or pharmacy benefit management (PBM).
Familiarity with AI/ML applications in healthcare analytics.
Certifications in cloud data platforms (AWS Certified Data Analytics, Google Professional Data Engineer, etc.).
At Indium, diversity, equity, and inclusion (DEI) are the cornerstones of our values. We champion DEI through a dedicated council, expert sessions, and tailored training programs, ensuring an inclusive workplace for all. Our initiatives, including the WE@IN women empowerment program and our DEI calendar, foster a culture of respect and belonging. Recognized with the Human Capital Award, we are committed to creating an environment where every individual thrives. Join us in building a workplace that values diversity and drives innovation.
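As a rough illustration of the orchestration side of the ETL frameworks listed above, here is a minimal Airflow 2.x DAG sketch. The DAG id, schedule, and task bodies are placeholders and do not represent the organization's actual pipelines.

```python
# Illustrative sketch: an ingest -> transform -> validate ordering in Airflow 2.x.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull from source systems")          # stub

def transform():
    print("apply transformations / dbt run")   # stub

def validate():
    print("run data-quality checks")           # stub

with DAG(
    dag_id="claims_daily_load",        # hypothetical pipeline name
    start_date=datetime(2025, 1, 1),
    schedule="0 2 * * *",              # 02:00 daily; use schedule_interval on Airflow < 2.4
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_validate = PythonOperator(task_id="validate", python_callable=validate)

    t_extract >> t_transform >> t_validate
```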
Posted 6 days ago