Pune, Maharashtra, India
Not disclosed
On-site
Full Time
Job Title: ETL Developer
Location: Pune/Hybrid
Experience: 5-7 Years
Budget: 10-13 LPA
Notice Period: Immediate to 15 days
Mandatory skills: ETL, SSIS, SSRS, SQL, Data warehousing concepts

Tech Stack Table (to be filled by the candidate):
Skills           | Experience | Rating out of 10
ETL              |            |
SSIS             |            |
SSRS             |            |
Data Warehousing |            |

Job Summary:
We are seeking a skilled ETL Developer with 5+ years of experience in designing, developing, and maintaining ETL processes using SSIS and working with large-scale MS SQL Server data warehouses. The ideal candidate will have a strong techno-functional understanding of data warehousing concepts and best practices.

Key Responsibilities:
· ETL Development: Design, develop, and maintain ETL processes to extract data from various sources, transform it according to business rules, and load it into target databases or data warehouses using SSIS or other ETL tools (see the sketch after this posting).
· SQL Development: Develop and optimize SQL queries, stored procedures, and functions for efficient data retrieval and manipulation.
· Data Management: Manage data warehousing concerns, including data governance, data architecture, data profiling, and data quality validation.
· Documentation: Document data flows using Visio, create business requirement documents in Word, and maintain source-to-target mapping documents in Excel.
· Troubleshooting: Identify and resolve production issues effectively.
· Engineering: Experience in both forward engineering (requirements to SQL logic) and reverse engineering (converting SQL logic to business requirements).
· Communication: Strong ability to communicate technical concepts to non-technical stakeholders.

Required Skills:
· Proficient hands-on experience with SSIS and SQL Server.
· Strong expertise in developing SSRS reports.
· Solid understanding of data warehousing concepts and principles.
· Excellent documentation skills, including ETL processes and source-to-target mappings.
· Strong communication and collaboration skills.

Qualifications:
· Bachelor's degree in Computer Science, Information Technology, or a related field.
· 5+ years of relevant ETL and data warehousing experience.
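The ETL Development responsibility above describes the classic extract-transform-load pattern. In this role it would normally be built as an SSIS package, but the pattern itself can be sketched in a few lines of Python. This is a minimal illustrative sketch only; the connection strings, table names (src.Orders, stg.FactOrders), and the currency business rule are hypothetical placeholders, not taken from the posting.

```python
# Minimal illustrative ETL sketch (not an SSIS package): extract rows from a
# source SQL Server table, apply a simple business rule, and load them into a
# warehouse staging table. All names and connection strings are placeholders.
import pyodbc

SRC_CONN = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=src-host;DATABASE=SalesDB;Trusted_Connection=yes;"
DWH_CONN = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=dwh-host;DATABASE=SalesDWH;Trusted_Connection=yes;"


def run_etl() -> None:
    with pyodbc.connect(SRC_CONN) as src, pyodbc.connect(DWH_CONN) as dwh:
        src_cur = src.cursor()
        dwh_cur = dwh.cursor()

        # Extract: pull raw rows from the source system.
        src_cur.execute("SELECT OrderID, Amount, Currency FROM src.Orders")

        rows = []
        for order_id, amount, currency in src_cur.fetchall():
            # Transform: hypothetical business rule, normalise amounts to INR.
            amount_inr = amount * 83.0 if currency == "USD" else amount
            rows.append((order_id, amount_inr))

        # Load: write the transformed rows into the warehouse staging table.
        dwh_cur.executemany(
            "INSERT INTO stg.FactOrders (OrderID, AmountINR) VALUES (?, ?)", rows
        )
        dwh.commit()


if __name__ == "__main__":
    run_etl()
```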
Gurugram, Haryana, India
Not disclosed
On-site
Full Time
Job Title: Senior PySpark Data Engineer
(Note to vendors: share only quality profiles.)
Location: Pune/Hybrid
Experience: 5-8 years
Budget: 8-11 LPA
Notice Period: Immediate to 15 days

Mandatory Skills:
· Python
· SQL
· ETL
· Informatica PowerCenter
· AWS/Azure
Good to Have:
· IDMC

Tech Stack Table (to be filled by the candidate):
Skills                  | Experience | Rating out of 10
Python                  |            |
SQL                     |            |
ETL                     |            |
Informatica PowerCenter |            |
AWS/Azure               |            |

Job Summary:
We are seeking a Senior PySpark Data Engineer with extensive experience in developing, optimizing, and maintaining data processing jobs using PySpark. The ideal candidate will possess a robust background in SQL and ETL processes, along with proficiency in cloud platforms such as AWS or Azure. This role requires excellent analytical skills and the ability to communicate effectively with both technical and non-technical stakeholders.

Key Responsibilities:
· Design, develop, and optimize PySpark jobs for enhanced performance and scalability (see the sketch after this posting).
· Collaborate with data architects and business analysts to understand data requirements and translate them into technical specifications.
· Redesign and maintain complex SQL queries and stored procedures to support data extraction and transformation processes.
· Utilize ETL tools, specifically Informatica PowerCenter, to build effective data pipelines.
· Troubleshoot and resolve data quality issues and performance bottlenecks.
· Mentor and provide technical guidance to a team of developers to enhance productivity and code quality.
· Stay updated with new technologies and practices to continually improve data processing capabilities.

Qualifications:
· Education: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
· Experience: 5-8 years of experience in data engineering, with a strong focus on PySpark and ETL processes.

Technical Skills:
Must-Have:
1. Extensive experience with PySpark, focusing on job optimization techniques.
2. Proficiency in SQL, with experience in SQL Server, MySQL, or other relational databases.
3. Strong knowledge of ETL concepts and tools, particularly Informatica PowerCenter and IDMC.
4. Excellent analytical and troubleshooting skills.
5. Strong communication skills for effective collaboration.
Good to Have:
1. Basic knowledge of Unix commands and shell scripting.
2. Experience in leading and mentoring development teams.
3. Familiarity with Azure/Fabric.

Kindly share profiles only in the tracker format below and paste the tracker into the body of the mail. Profiles without this tracker format and the Tech Stack Table will not be considered.

Tracker format:
S.no | Date | Position | Name of the Candidate | Mobile Number | Email id | Total Experience | Relevant Experience | Current CTC | Expected CTC | Notice Period / On Paper | Current Organisation | Current Location | Address with Pin code | Reason for leaving | DOB | Offer in hand | Vendor Name

Regards,
Damodar
+91-8976334593
info@d-techworks.com
D-TechWorks Pvt Ltd
USA | INDIA
www.d-techworks.com
Information Technology Services
Technology | Consulting | Development | Staff Augmentation
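The first responsibility above calls for optimizing PySpark jobs for performance and scalability. Below is a minimal sketch of two common optimization techniques: broadcasting a small dimension table to avoid a shuffle join, and repartitioning by the aggregation key before a wide aggregation. The file paths, column names, and tuning values are illustrative assumptions, not requirements from the posting.

```python
# Illustrative PySpark optimization sketch: broadcast join plus explicit
# repartitioning. Paths, columns, and config values are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("orders-daily-aggregate")
    # Allow auto-broadcast of small tables up to ~50 MB (tuning assumption).
    .config("spark.sql.autoBroadcastJoinThreshold", 50 * 1024 * 1024)
    .getOrCreate()
)

orders = spark.read.parquet("s3a://example-bucket/raw/orders/")        # large fact data
customers = spark.read.parquet("s3a://example-bucket/raw/customers/")  # small dimension

# Broadcast the small dimension table so the join avoids a full shuffle.
enriched = orders.join(F.broadcast(customers), on="customer_id", how="left")

# Repartition by the aggregation key before the wide aggregation so the
# shuffle produces evenly sized partitions.
daily_revenue = (
    enriched.repartition(200, "order_date")
    .groupBy("order_date", "region")
    .agg(F.sum("amount").alias("total_amount"))
)

daily_revenue.write.mode("overwrite").parquet(
    "s3a://example-bucket/curated/daily_revenue/"
)
```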
Pune, Maharashtra, India
Not disclosed
On-site
Full Time
Very urgent position for Data Engineer Lead

Job Title: Data Engineer Lead
Experience: 5-10 Years
Budget: 10-14 LPA (based on experience)
Location: Pune/Hybrid
Notice Period: Immediate to 15 days
Mandatory skills: Python, GCP, Spark, SQL (expert)

Tech Stack Table (to be filled by the candidate):
Skills       | Experience | Rating out of 10
Python       |            |
GCP          |            |
Spark        |            |
SQL (Expert) |            |

Lead Responsibilities:
· Lead and mentor a team of data engineers, providing technical guidance, setting best practices, and overseeing task execution for the migration project.
· Design, develop, and architect scalable ETL processes to extract, transform, and load petabytes of data from on-premises SQL Server to GCP Cloud SQL for PostgreSQL.
· Oversee the comprehensive analysis of existing SQL Server schemas, data types, stored procedures, and complex data models, defining strategies for their optimal conversion and refactoring for PostgreSQL.
· Establish and enforce rigorous data validation, quality, and integrity frameworks throughout the migration lifecycle, ensuring accuracy and consistency.
· Collaborate strategically with database administrators, application architects, business stakeholders, and security teams to define migration scope, requirements, and cutover plans.
· Lead the development and maintenance of advanced scripts (primarily Python) for automating large-scale migration tasks, complex data transformations, and reconciliation processes.
· Proactively identify, troubleshoot, and lead the resolution of complex data discrepancies, performance bottlenecks, and technical challenges during migration.
· Define and maintain comprehensive documentation standards for migration strategies, data mapping, transformation rules, and post-migration validation procedures.
· Ensure data governance, security, and compliance standards are meticulously applied throughout the migration process, including data encryption and access controls within GCP.
· Implement a schema conversion or custom schema mapping strategy for the SQL Server to PostgreSQL shift.
· Refactor and translate complex stored procedures and T-SQL logic to PostgreSQL-compatible constructs while preserving functional equivalence.
· Develop and execute comprehensive data reconciliation strategies to ensure consistency and parity between legacy and migrated datasets post-cutover (see the sketch after this posting).
· Design fallback procedures and lead post-migration verification and support to ensure business continuity.
· Ensure metadata cataloging and data lineage tracking using GCP-native or integrated tools.

Must-Have Skills:
· Expertise in data engineering, specifically for Google Cloud Platform (GCP).
· Deep understanding of relational database architecture, advanced schema design, data modeling, and performance tuning.
· Expert-level SQL proficiency, with extensive hands-on experience in both T-SQL (SQL Server) and PostgreSQL.
· Hands-on experience with data migration processes, including moving datasets from on-premises databases to cloud storage solutions.
· Proficiency in designing, implementing, and optimizing complex ETL/ELT pipelines for high-volume data movement, leveraging tools and custom scripting.
· Strong knowledge of GCP services: Cloud SQL, Dataflow, Pub/Sub, Cloud Storage, Dataproc, Cloud Composer, Cloud Functions, and BigQuery.
· Solid understanding of data governance, security, and compliance practices in the cloud, including the management of sensitive data during migration.
· Strong programming skills in Python or Java for building data pipelines and automating processes.
· Experience with real-time data processing using Pub/Sub, Dataflow, or similar GCP services.
· Experience with CI/CD practices and tools like Jenkins, GitLab, or Cloud Build for automating the data engineering pipeline.
· Knowledge of data modeling and best practices for structuring cloud data storage for optimal query performance and analytics in GCP.
· Familiarity with observability and monitoring tools in GCP (e.g., Stackdriver, Prometheus) for real-time data pipeline visibility and alerting.

Good-to-Have Skills:
· Direct experience with GCP Database Migration Service, Storage Transfer Service, or similar cloud-native migration tools.
· Familiarity with data orchestration using tools like Cloud Composer (based on Apache Airflow) for managing workflows.
· Experience with containerization tools like Docker and Kubernetes for deploying data pipelines in a scalable manner.
· Exposure to DataOps tools and methodologies for managing data workflows.
· Experience with machine learning platforms like AI Platform in GCP to integrate with data pipelines.
· Familiarity with data lake architecture and the integration of BigQuery with Google Cloud Storage or Dataproc.

Kindly share profiles only in the tracker format below and paste the tracker into the body of the mail. Profiles without this tracker format will not be considered.

Tracker format:
Sl. No | Date | Position | Name of the Candidate | Mobile Number | Email id | Total Experience | Relevant Experience | Current CTC | Expected CTC | Notice Period / On Paper | Current Organisation | Current Location | Address with Pin code | Reason for leaving | DOB | Offer in hand | Vendor Name | Comments
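One responsibility above is developing data reconciliation strategies between the legacy SQL Server data and the migrated Cloud SQL PostgreSQL data. A minimal Python sketch of a row-count reconciliation follows; the connection settings and table list are hypothetical placeholders, and a real migration check would typically also compare checksums or key aggregates, not just counts.

```python
# Illustrative post-migration reconciliation sketch: compare per-table row
# counts between the legacy SQL Server source and the Cloud SQL PostgreSQL
# target. Connection details and table names are hypothetical placeholders.
import pyodbc
import psycopg2

SQLSERVER_CONN = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=legacy-host;DATABASE=SalesDB;UID=migrator;PWD=secret"
)
POSTGRES_DSN = "host=10.0.0.5 dbname=salesdb user=migrator password=secret"

TABLES = ["orders", "customers", "invoices"]  # tables in scope (placeholder list)


def count_rows(cursor, table: str) -> int:
    cursor.execute(f"SELECT COUNT(*) FROM {table}")
    return cursor.fetchone()[0]


def reconcile() -> None:
    with pyodbc.connect(SQLSERVER_CONN) as mssql, psycopg2.connect(POSTGRES_DSN) as pg:
        mssql_cur, pg_cur = mssql.cursor(), pg.cursor()
        for table in TABLES:
            src_count = count_rows(mssql_cur, table)
            tgt_count = count_rows(pg_cur, table)
            status = "OK" if src_count == tgt_count else "MISMATCH"
            print(f"{table}: source={src_count} target={tgt_count} -> {status}")


if __name__ == "__main__":
    reconcile()
```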
Hyderabad, Telangana, India
Not disclosed
On-site
Full Time
Apply if you are:
• Passionate about finding pesky bugs and fixing them to deliver a flawless, bug-free product.
• Comfortable in a fast-paced, distributed organization, with a strong sense of urgency.
• Highly productive under pressure, and you pride yourself on producing timely, accurate issue diagnostics and test reports.
• Strongly committed to extending your technical knowledge and maintaining expertise across a wide range of testing and automation.
• An excellent communicator who is detail-oriented, driven, mission-driven, and a team player.
• Enthusiastic to contribute and lead, and excited to be in a startup environment.

Your Role:
• Responsible for test infrastructure, allowing the company to exceed its quality standards.
• Work closely with the Engineering teams to reproduce customer issues, define test strategies, build frameworks, automate customer use cases, and assist with customer escalations.
• Design and implement robust test cases that integrate well with modern CI/CD deployment technologies.
• Own the end-to-end test cycle for product features, from test design to test automation, including issue reproduction from customer escalations.
• Maintain existing automated test cases for functional, regression, and performance test suites.
• Work side-by-side with engineers on testing and automating test cases in sprint cycles.
• Identify, document, and debug software defects.
• Maintain and enhance in-house test frameworks.
• Create test data sources and validate data ingestion and connectivity.
• Indexing and search testing.
• Permission and access control testing, role-based testing.
• AI chat and AI agent testing.
• UI/UX and usability testing.

Requirements
Qualifications / Experience / Technical Skills
• Bachelor's Degree in Computer Science or equivalent education.
• 3+ years of Software Quality Engineering.
• Expert in distributed systems, container orchestration (Docker, K8s), and cloud technologies (AWS, Azure, GCP).
• Knowledge of at least one programming language (Java, C++, etc.) and one scripting language (Ruby, JavaScript, etc.).
• Knowledge of common web application vulnerabilities such as the OWASP Top 10.
• Understanding of authentication protocols (OAuth, SAML), role-based access control, and general security best practices.
• Experience with test automation frameworks (Playwright, RSpec, …); see the sketch after this posting.
• Proficient in testing REST APIs and data ingestion pipelines, and familiar with indexing/search technologies (e.g., Elasticsearch, Solr).
• Experience testing AI-driven applications or chatbots.
• Understanding of various test methodologies: regression, functional, performance.
• Able to analyze and triage automated test runs and take appropriate actions for failure classification.
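The requirements above mention test automation frameworks such as Playwright and proficiency in testing REST APIs. Below is a minimal pytest sketch illustrating both kinds of check; the base URL, endpoint, response fields, and page selectors are hypothetical placeholders, not taken from the posting.

```python
# Illustrative test-automation sketch using pytest: one REST API check with
# requests and one browser smoke check with Playwright's Python binding.
# URLs, endpoints, and selectors are hypothetical placeholders.
import requests
from playwright.sync_api import sync_playwright

BASE_URL = "https://app.example.com"  # placeholder test environment


def test_search_api_returns_results():
    # REST API regression check: the search endpoint should respond 200 and
    # return a non-empty result list for a known query.
    resp = requests.get(f"{BASE_URL}/api/search", params={"q": "invoice"}, timeout=10)
    assert resp.status_code == 200
    body = resp.json()
    assert isinstance(body.get("results"), list)
    assert len(body["results"]) > 0


def test_login_page_renders():
    # UI smoke check with Playwright: the login page should load and show
    # the email input field.
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto(f"{BASE_URL}/login")
        assert page.locator("input[name='email']").is_visible()
        browser.close()
```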