6.0 - 11.0 years
3 - 6 Lacs
Noida
Work from Office
We are looking for a skilled Snowflake Ingress/Egress Specialist with 6 to 12 years of experience to manage and optimize data flow into and out of our Snowflake data platform. This role involves implementing secure, scalable, and high-performance data pipelines, ensuring seamless integration with upstream and downstream systems, and maintaining compliance with data governance policies.

Roles and Responsibility:
- Design, implement, and monitor data ingress and egress pipelines in and out of Snowflake.
- Develop and maintain ETL/ELT processes using tools like Snowpipe, Streams, Tasks, and external stages (S3, Azure Blob, GCS).
- Optimize data load and unload processes for performance, cost, and reliability.
- Coordinate with data engineering and business teams to support data movement for analytics, reporting, and external integrations.
- Ensure data security and compliance by managing encryption, masking, and access controls during data transfers.
- Monitor data movement activities using Snowflake Resource Monitors and Query History.

Requirements:
- Bachelor's degree in Computer Science, Information Systems, or a related field.
- 6-12 years of experience in data engineering, cloud architecture, or Snowflake administration.
- Hands-on experience with Snowflake features such as Snowpipe, Streams, Tasks, External Tables, and Secure Data Sharing.
- Proficiency in SQL, Python, and data movement tools (e.g., AWS CLI, Azure Data Factory, Google Cloud Storage Transfer).
- Experience with data pipeline orchestration tools such as Apache Airflow, dbt, or Informatica.
- Strong understanding of cloud storage services (S3, Azure Blob, GCS) and working with external stages.
- Familiarity with network security, encryption, and data compliance best practices.
- Snowflake certification (SnowPro Core or Advanced) is preferred.
- Experience with real-time streaming data (Kafka, Kinesis) is desirable.
- Knowledge of DevOps tools (Terraform, CI/CD pipelines) is a plus.
- Strong communication and documentation skills are essential.
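As a concrete flavour of the ingress work this role describes, here is a minimal sketch of a bulk load from an S3 external stage into Snowflake using the snowflake-connector-python package; the account, warehouse, stage, and table names are hypothetical placeholders, not anything from the posting.

```python
import snowflake.connector

# Connection parameters are placeholders for illustration only
conn = snowflake.connector.connect(
    user="LOADER", password="...", account="xy12345",
    warehouse="LOAD_WH", database="ANALYTICS", schema="RAW",
)
cur = conn.cursor()
try:
    # Bulk-load Parquet files staged in S3 into a raw table
    cur.execute("""
        COPY INTO raw_orders
        FROM @s3_orders_stage/orders/2024/
        FILE_FORMAT = (TYPE = PARQUET)
        MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
    """)
    print(cur.fetchall())  # per-file load results
finally:
    cur.close()
    conn.close()
```

For continuous ingestion, the same COPY statement would typically be attached to a Snowpipe so that new files load automatically on arrival.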
Posted 2 weeks ago
5.0 - 10.0 years
4 - 8 Lacs
Noida
Work from Office
We are looking for a skilled Senior Azure Data Engineer with 5 to 10 years of experience to design and implement scalable data pipelines using Azure technologies, driving data transformation, analytics, and machine learning. The ideal candidate will have a strong background in data engineering and proficiency in Python, PySpark, and Spark Pools.

Roles and Responsibility:
- Design and implement scalable Databricks data pipelines using PySpark.
- Transform raw data into actionable insights through data analysis and machine learning.
- Build, deploy, and maintain machine learning models using MLlib or TensorFlow.
- Optimize cloud data integration from Azure Blob Storage, Data Lake, and SQL/NoSQL sources.
- Execute large-scale data processing using Spark Pools and fine-tune configurations for efficiency.
- Collaborate with cross-functional teams to identify business requirements and develop solutions.

Requirements:
- Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
- Minimum 5 years of experience in data engineering, with at least 3 years specializing in Azure Databricks, PySpark, and Spark Pools.
- Proficiency in Python, PySpark, Pandas, NumPy, SciPy, Spark SQL, DataFrames, RDDs, Delta Lake, Databricks Notebooks, and MLflow.
- Hands-on experience with Azure Data Lake, Blob Storage, Synapse Analytics, and other relevant technologies.
- Strong understanding of data modeling, data warehousing, and ETL processes.
- Experience with agile development methodologies and version control systems.
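A minimal PySpark sketch of the kind of Databricks pipeline this posting describes: raw JSON from ADLS cleaned and written as a partitioned Delta table. The storage paths and column names are assumptions for illustration only.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Ingest raw landing-zone data (path is a placeholder)
raw = spark.read.json("abfss://landing@exampleacct.dfs.core.windows.net/orders/")

clean = (raw
         .dropDuplicates(["order_id"])
         .withColumn("order_ts", F.to_timestamp("order_ts"))
         .withColumn("order_date", F.to_date("order_ts"))
         .filter(F.col("amount") > 0))

# Persist as a partitioned Delta table for downstream analytics
(clean.write
      .format("delta")
      .mode("overwrite")
      .partitionBy("order_date")
      .save("abfss://curated@exampleacct.dfs.core.windows.net/orders/"))
```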
Posted 2 weeks ago
3.0 - 6.0 years
7 - 11 Lacs
Bengaluru
Work from Office
We are looking for a skilled Data Engineer with 3 to 6 years of experience in processing data pipelines using Databricks, PySpark, and SQL on cloud distributions like AWS. The ideal candidate should have hands-on experience with Databricks, Spark, SQL, and the AWS Cloud platform, especially S3, EMR, Databricks, Cloudera, etc.

Roles and Responsibility:
- Design and develop large-scale data pipelines using Databricks, Spark, and SQL.
- Optimize data operations using Databricks and Python.
- Develop solutions to meet business needs, reflecting a clear understanding of the objectives, practices, and procedures of the corporation, department, and business unit.
- Evaluate alternative risks and solutions before taking action.
- Utilize all available resources efficiently.
- Collaborate with cross-functional teams to achieve business goals.

Requirements:
- Experience working in projects involving data engineering and processing.
- Proficiency in large-scale data operations using Databricks and overall comfort with Python.
- Familiarity with AWS compute, storage, and IAM concepts.
- Experience with S3 Data Lake as the storage tier.
- ETL background with Talend or AWS Glue is a plus.
- Cloud warehouse experience with Snowflake is a huge plus.
- Strong analytical and problem-solving skills.
- Relevant experience with ETL methods and retrieving data from dimensional data models and data warehouses.
- Strong experience with relational databases and data access methods, especially SQL.
- Excellent collaboration and cross-functional leadership skills.
- Excellent communication skills, both written and verbal.
- Ability to manage multiple initiatives and priorities in a fast-paced, collaborative environment.
- Ability to leverage data assets to respond to complex questions that require timely answers.
- Working knowledge of migrating relational and dimensional databases on the AWS Cloud platform.
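Much of the S3-data-lake work this posting names boils down to Spark SQL over Parquet. A minimal sketch with placeholder bucket paths and columns:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("s3_lake_query").getOrCreate()

# Register a temp view over the S3 data lake tier (paths are placeholders)
spark.read.parquet("s3://example-lake/curated/sales/") \
     .createOrReplaceTempView("sales")

daily = spark.sql("""
    SELECT sale_date, SUM(amount) AS total_amount
    FROM sales
    GROUP BY sale_date
""")

# Write the aggregate back to a marts tier of the same lake
daily.write.mode("overwrite").parquet("s3://example-lake/marts/daily_sales/")
```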
Posted 2 weeks ago
5.0 - 8.0 years
3 - 7 Lacs
Noida
Work from Office
We are looking for a skilled Data Engineer with 5 to 8 years of experience to join our team at Apptad Technologies Pvt Ltd. The ideal candidate will have a strong background in data engineering and excellent problem-solving skills.

Roles and Responsibility:
- Design, develop, and implement data pipelines and architectures.
- Collaborate with cross-functional teams to identify and prioritize project requirements.
- Develop and maintain large-scale data systems and databases.
- Ensure data quality and integrity through data validation and testing procedures.
- Optimize data processing workflows for improved performance and efficiency.
- Troubleshoot and resolve technical issues related to data engineering projects.

Requirements:
- Strong understanding of data engineering principles and practices.
- Experience with data modeling, database design, and data warehousing concepts.
- Proficiency in programming languages such as Python or Java.
- Excellent problem-solving skills and attention to detail.
- Ability to work collaboratively in a team environment.
- Strong communication and interpersonal skills.
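To make the data-validation responsibility concrete, here is a minimal pandas sketch of the kind of check such a role might script; the file name and columns are hypothetical.

```python
import pandas as pd

def validate_orders(df: pd.DataFrame) -> list:
    """Return a list of data-quality violations found in an orders extract."""
    issues = []
    if df["order_id"].duplicated().any():
        issues.append("duplicate order_id values")
    if df["amount"].lt(0).any():
        issues.append("negative amounts")
    if df["customer_id"].isna().any():
        issues.append("missing customer_id")
    return issues

df = pd.read_csv("orders.csv")  # hypothetical extract
problems = validate_orders(df)
if problems:
    raise ValueError(f"Validation failed: {problems}")
```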
Posted 2 weeks ago
10.0 - 12.0 years
3 - 7 Lacs
Noida
Work from Office
We are looking for a skilled Data Engineer Specialist with expertise in Snowflake to join our team in Hyderabad and Bangalore. The ideal candidate will have 10-12 years of experience designing and implementing large-scale data lake/warehouse integrations.

Roles and Responsibility:
- Design and implement scalable data pipelines using AWS technologies such as ETL, Kafka, DMS, Glue, Lambda, and Step Functions.
- Develop automated workflows using Apache Airflow to ensure smooth and efficient data processing and orchestration.
- Design, implement, and maintain Snowflake data warehouses, ensuring optimal performance, scalability, and seamless data availability.
- Automate cloud infrastructure provisioning using Terraform and CloudFormation.
- Create high-performance logical and physical data models using Star and Snowflake schemas.
- Provide guidance on data security best practices and ensure secure coding and data handling procedures.

Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 10-12 years of experience designing and implementing large-scale data lake/warehouse integrations with diverse data storage solutions.
- Certifications: AWS Certified Data Analytics - Specialty or AWS Certified Solutions Architect (preferred); Snowflake Advanced Architect and/or Snowflake Core certification (required).
- Strong working knowledge of programming languages such as Python, R, Scala, PySpark, and SQL (including stored procedures).
- Solid understanding of CI/CD pipelines, DevOps principles, and infrastructure-as-code practices using tools like Terraform, JFrog, Jenkins, and CloudFormation.
- Excellent analytical and troubleshooting skills, with the ability to solve complex data engineering issues and optimize data workflows.
- Strong interpersonal and communication skills, with the ability to work across teams and with stakeholders to drive data-centric projects.
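The Airflow orchestration described above might look like the following minimal DAG sketch (Airflow 2.x API); the DAG id, task names, and callables are placeholders rather than anything from the posting.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull staged files from S3")  # placeholder extract step

def load_to_snowflake():
    print("COPY INTO warehouse tables")  # placeholder load step

with DAG(
    dag_id="s3_to_snowflake",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # 'schedule' is the Airflow 2.4+ spelling
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_load = PythonOperator(task_id="load", python_callable=load_to_snowflake)
    t_extract >> t_load  # run extract before load
```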
Posted 2 weeks ago
4.0 - 8.0 years
4 - 7 Lacs
Noida
Work from Office
We are looking for a skilled SSAS Data Engineer with 4 to 8 years of experience to join our team. The ideal candidate will have a strong background in Computer Science, Information Technology, or a related field.

Roles and Responsibility:
- Develop, deploy, and manage OLAP cubes and tabular models.
- Collaborate with data teams to design and implement effective data solutions.
- Troubleshoot and resolve issues related to SSAS and data models.
- Monitor system performance and optimize queries for efficiency.
- Implement data security measures and backup procedures.
- Stay updated with the latest SSAS and BI technologies and best practices.

Requirements:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Strong understanding of data warehousing, ETL processes, OLAP concepts, and data modeling concepts.
- Proficiency in SQL, MDX, and DAX query languages.
- Experience with data visualization tools like Power BI.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration abilities.
- Experience working in an Agile environment.
Posted 2 weeks ago
5.0 - 10.0 years
8 - 12 Lacs
Chennai
Work from Office
We are looking for a skilled Senior Specialist - Data Engineering with 5 to 10 years of experience to join our team at Apptad Technologies Pvt Ltd. The ideal candidate will have a strong background in data engineering and excellent technical skills.

Roles and Responsibility:
- Design, develop, and implement large-scale data pipelines and architectures.
- Collaborate with cross-functional teams to identify and prioritize project requirements.
- Develop and maintain complex data systems and databases.
- Ensure data quality, integrity, and security.
- Optimize data processing workflows for improved performance and efficiency.
- Stay updated with industry trends and emerging technologies.

Requirements:
- Strong understanding of data engineering principles and practices.
- Experience with big data technologies such as Hadoop, Spark, and NoSQL databases.
- Excellent programming skills in languages like Java, Python, or Scala.
- Strong problem-solving skills and attention to detail.
- Ability to work collaboratively in a team environment.
- Effective communication and interpersonal skills.

For more information, please contact us at 6566536.
Posted 2 weeks ago
5.0 - 10.0 years
8 - 15 Lacs
Kochi
Remote
We are seeking a highly skilled ETL/Data Engineer with expertise in Informatica DEI BDM to design and implement robust data pipelines handling medium- to large-scale datasets. The role involves building efficient ETL frameworks that support batch …
Posted 2 weeks ago
6.0 - 9.0 years
18 - 20 Lacs
Bengaluru
Hybrid
Job Title: Data Engineer
Experience Range: 6-9 years
Location: Bengaluru
Notice period: Immediate to 15 days

Job Summary: We are looking for a skilled Data Engineer to design, build, and maintain robust, scalable data pipelines and infrastructure. This role is essential in enabling data accessibility, quality, and insights across the organization. You will work with modern cloud and big data technologies such as Azure Databricks, Snowflake, and DBT, collaborating with cross-functional teams to power data-driven decision-making.

Key Responsibilities (external candidates):
- Data Pipeline Development: Build and optimize data pipelines to ingest, transform, and load data from multiple sources using Azure Databricks, Snowflake, and DBT.
- Data Modeling & Architecture: Design efficient data models and structures within Snowflake, ensuring optimal performance and accessibility.
- Data Transformation: Implement standardized and reusable data transformations in DBT for reliable analytics and reporting.
- Performance Optimization: Monitor and tune data workflows for performance, scalability, and fault tolerance.
- Cross-Team Collaboration: Partner with data scientists, analysts, and business users to support analytics and machine learning projects with reliable, well-structured datasets.

Additional Responsibilities (internal candidates):
- Implement and manage CI/CD pipelines using tools such as Jenkins, Azure DevOps, or GitHub.
- Develop data lake solutions using Scala and Python in a Hadoop/Spark ecosystem.
- Work with Azure Data Factory and orchestration tools to schedule, monitor, and maintain workflows.
- Apply deep understanding of Hadoop architecture, Spark, Hive, and storage optimization.

Mandatory Skills:
- Hands-on experience with Azure Databricks (data processing and orchestration), Snowflake (data warehousing), DBT (data transformation), and Azure Data Factory (pipeline orchestration).
- Strong SQL and data modeling capabilities.
- Proficiency in Scala and Python for data engineering use cases.
- Experience with big data ecosystems: Hadoop, Spark, Hive.
- Knowledge of CI/CD pipelines (Jenkins, GitHub, Azure DevOps).

Qualifications:
- Bachelor's degree in Computer Science, Data Engineering, or a related field.
- 6-9 years of relevant experience in data engineering or data infrastructure roles.
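As a flavour of the DBT transformation work this role centers on, here is a minimal sketch of a dbt Python model (supported in dbt-core 1.3+ on adapters such as Databricks and Snowflake); the model names and columns are hypothetical.

```python
# models/marts/fct_order_totals.py: a dbt Python model (dbt-core >= 1.3)
import pyspark.sql.functions as F  # on the Databricks adapter, refs are PySpark DataFrames


def model(dbt, session):
    dbt.config(materialized="table")

    orders = dbt.ref("stg_orders")      # hypothetical staging models
    payments = dbt.ref("stg_payments")

    # Join payments onto orders and aggregate to one row per order
    return (orders
            .join(payments, on="order_id", how="left")
            .groupBy("order_id", "customer_id")
            .agg(F.sum("amount").alias("total_amount")))
```

The same logic is more commonly written as a SQL model; the Python form is useful when a transformation needs DataFrame-level logic that is awkward in SQL.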
Posted 2 weeks ago
6.0 - 8.0 years
3 - 7 Lacs
Noida
Work from Office
Company: Apptad Technologies Pvt Ltd. (Employment Firms/Recruitment Services Firms). Experience: 6 to 8 years. Title: Data Engineer. Ref: 6566581.

Job Title: Data Engineer
Job Location: Remote
Job Type: Full time

Apptad is looking for a Data Engineer profile. It's a full-time/long-term job opportunity with us. The candidate should have advanced Python, advanced SQL, and PySpark.
- Python programming language (Level: Advanced). Key concepts: multi-threading, multi-processing, regular expressions, exception handling, etc. Libraries: Pandas, NumPy, etc.
- Data modelling and data transformation (Level: Advanced). Key areas: data processing on structured and unstructured data.
- Relational databases (Level: Advanced). Key areas: query optimization, query building, experience with ORMs like SQLAlchemy, exposure to databases such as MSSQL, Postgres, Oracle, etc.
- Functional and object-oriented programming (OOP) (Level: Intermediate).
- Problem solving for feature development (Level: Intermediate).
- Good experience working with AWS Cloud and its data engineering services, such as Athena, AWS Batch jobs, etc.
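A small sketch combining the advanced-Python topics the posting names (multi-threading, regex via pandas, exception handling); the file and column names are hypothetical.

```python
import concurrent.futures

import pandas as pd

FILES = ["orders_jan.csv", "orders_feb.csv"]  # hypothetical extracts

def load_and_clean(path: str) -> pd.DataFrame:
    df = pd.read_csv(path)
    # Normalise phone numbers with a regular expression (strip non-digits)
    df["phone"] = df["phone"].astype(str).str.replace(r"\D", "", regex=True)
    return df

frames = []
# I/O-bound file loads can run in parallel threads
with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(load_and_clean, f) for f in FILES]
    for future in concurrent.futures.as_completed(futures):
        try:
            frames.append(future.result())
        except FileNotFoundError as exc:  # per-file exception handling
            print(f"skipped: {exc}")

combined = pd.concat(frames, ignore_index=True) if frames else pd.DataFrame()
```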
Posted 2 weeks ago
4.0 - 7.0 years
5 - 8 Lacs
Noida
Work from Office
Company: Apptad Technologies Pvt Ltd. (Employment Firms/Recruitment Services Firms). Experience: 4 to 7 years. Title: IB Field Data Engineer. Ref: 6566554.

Job Title: IB Field Data Engineer
Job Location: Bangalore
Job Type: 6-month contract
Client: Genpact
No. of positions: 2
Duration: Immediate

Apptad is looking for an IB Field Data Engineer profile. It is a long-term job opportunity with us. Skills: medical devices, field engineering, RCA, field failure data analysis, MS Office (Excel), Power BI, SQL, Tableau.
Posted 2 weeks ago
4.0 - 6.0 years
3 - 6 Lacs
Noida
Work from Office
Company: Apptad Technologies Pvt Ltd. (Employment Firms/Recruitment Services Firms). Experience: 4 to 6 years. Title: Data Building Tool. Ref: 6566428.

Summary: As a Data Platform Engineer, you will assist with the data platform blueprint and design, collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models. You will play a crucial role in shaping the data platform components.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute to providing solutions to work-related problems.
- Collaborate with cross-functional teams to design and implement data platform solutions.
- Develop and maintain data pipelines for efficient data processing.
- Optimize data storage and retrieval processes for improved performance.
- Implement data governance policies and ensure data quality standards are met.
- Stay updated with industry trends and best practices in data engineering.

Professional & Technical Skills:
- Must-have skills: proficiency in Data Building Tool.
- Strong understanding of data modeling and database design principles.
- Experience in ETL processes and data integration techniques.
- Knowledge of cloud platforms and services for data storage and processing.
- Hands-on experience with data visualization tools for reporting and analysis.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Data Building Tool.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.
Posted 2 weeks ago
5.0 - 10.0 years
4 - 8 Lacs
Noida
Work from Office
Company: Apptad Technologies Pvt Ltd. (Employment Firms/Recruitment Services Firms). Experience: 5 to 12 years. Title: Azure Data Engineer. Ref: 6566567.

Job Title: Azure Data Engineer
Job Type: Full time
Job Location: Bangalore

We are looking for a skilled Azure Data Engineer to design, develop, and maintain data solutions on the Microsoft Azure cloud platform. The ideal candidate will have experience in data engineering, data pipeline development, ETL/ELT processes, and cloud-based data services. They will be responsible for implementing scalable and efficient data architectures, ensuring data quality, and optimizing data workflows.

Key Responsibilities:
- Design and implement data pipelines using Azure Data Factory (ADF), Azure Databricks, and Azure Synapse Analytics.
- Develop and optimize ETL/ELT processes to extract, transform, and load data from various sources into Azure Data Lake, Azure SQL Database, and Azure Synapse.
- Work with Azure Data Lake Storage (ADLS) and Azure Blob Storage to manage large-scale structured and unstructured data.
- Implement data modeling, data partitioning, and indexing techniques for optimized performance in Azure-based databases.
- Develop and maintain real-time and batch processing solutions using Azure Stream Analytics and Event Hub.
- Implement data governance, data security, and compliance best practices using Azure Purview, RBAC, and encryption mechanisms.
- Optimize query performance and improve data accessibility through SQL tuning and indexing strategies.
- Collaborate with data scientists, analysts, and business stakeholders to define and implement data solutions that support business insights and analytics.
- Monitor and troubleshoot data pipeline failures, performance issues, and cloud infrastructure challenges.
- Stay updated with the latest advancements in Azure data services, big data technologies, and cloud computing.

Required Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, Data Science, or a related field.
- 5-8 years of experience in data engineering, cloud data platforms, and ETL development.
- Strong expertise in Azure services such as Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks (PySpark, Scala, or Python), Azure Data Lake Storage (ADLS), Azure Blob Storage, Azure SQL Database/Cosmos DB, Azure Functions and Logic Apps, and Azure DevOps for CI/CD automation.
- Proficiency in SQL, Python, Scala, or Spark for data transformation and processing.
- Experience with big data frameworks (Apache Spark, Hadoop) and data pipeline orchestration.
- Hands-on experience in data warehousing concepts, dimensional modelling, and performance optimization.
- Understanding of data security, governance, and compliance frameworks.
- Experience with CI/CD pipelines, Terraform, ARM templates, or Infrastructure as Code (IaC).
- Knowledge of Power BI or other visualization tools is a plus.
- Strong problem-solving and troubleshooting skills.

Preferred Qualifications:
- Familiarity with SAP, Salesforce, or third-party APIs for data integration.
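To illustrate the ADLS/Blob Storage side of this role, a minimal sketch using the azure-storage-blob SDK; the connection string, container, and paths are placeholders.

```python
from azure.storage.blob import BlobServiceClient

# Connection string and names are placeholders for illustration
service = BlobServiceClient.from_connection_string("<connection-string>")
container = service.get_container_client("landing")

# Upload a daily extract into the data lake's landing zone
with open("orders_2024-06-01.csv", "rb") as data:
    container.upload_blob(
        name="orders/2024/06/01/orders.csv", data=data, overwrite=True
    )

# List what landed for that month
for blob in container.list_blobs(name_starts_with="orders/2024/06/"):
    print(blob.name, blob.size)
```

In a production pipeline this step would usually be an ADF copy activity; the SDK form is handy for ad hoc loads and tests.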
Posted 2 weeks ago
2.0 - 5.0 years
5 - 9 Lacs
Ahmedabad
Work from Office
Responsibilities:
- Interpret and analyze business use-cases and feature requests into technical designs and development tasks.
- Accountable for analyzing new assignments, developing scripts and tests to validate data, handling change requests, and providing quick and efficient solutions.
- Deliver real, measurable business value with ownership and accountability for the results.
- Actively participate in regular design and code reviews.
- Work with highly skilled teammates in a collaborative and fast-paced team environment.
- Adhere to and apply software engineering practices and implement automation across all elements of solution delivery.
- Explore the whole of S&P Global Ratings by performing automation assessments on ideas submitted from all internal departments.

Expected Behaviors:
- Perspective: Understands the team's broader goals and how their work plays a role, applying their skills or knowledge to situations where the output is defined.
- Impact: Takes initiative to provide support to the team, paying close attention to detail and identifying potential issues, while delivering as part of a broader agenda.
- Emotional Intelligence: Actively listens, seeking to understand other perspectives and concerns, fostering inclusivity and collaboration.
- Collaboration: Collaborates effectively with colleagues to achieve common goals, working beyond immediate tasks to contribute to team success.
- Time Management: Consistently delivers high-quality work and meets deadlines, assessing urgency in tasks allocated to them.
- Adaptability: Resourceful and solution-oriented, able to shift priorities and actions as team objectives shift.
- Creativity: Generates new ideas and thinks outside the box, proactively seeking opportunities for innovation and improvement.
- Communication: Communicates effectively, expressing thoughts clearly and persuasively, and actively participates in collaborative efforts/team discussions.
- Leadership: Acts as a team player by being supportive and collaborative, demonstrating commitment to work and team.

What We're Looking For:
- Bachelor's degree in Computer Science, Engineering, or a related discipline, or equivalent experience.
- Proficiency with at least one complementary programming language (Python, etc.).
- Understanding of CI/CD pipelines.
- Intermediate SQL knowledge.
- Database and data frames knowledge.
- Knowledge of financial domains, preferred.
- Experience with Power BI/Tableau.
- Experience writing test stories and conducting unit testing (UAT) to validate, debug, and document issues and release succession.
- Experience with Agile software development processes.
- Exposure to cloud-based infrastructures, preferably AWS and Databricks.
- Fluent in English.

Additional Preferred Qualifications:
- Team player who can coordinate multiple projects and prioritize effectively against a timeline.
- Demonstrates a thorough understanding of information systems, business processes, the key drivers, and measures of success, while choosing the proper methodologies and policies to support broad business goals.
- Excellent aptitude for learning, experimenting, and picking up new technologies quickly.
- Ability to work in cross-functional, multi-geography teams, displaying cultural sensitivity and championing a global mindset.
- Aptitude to solve complex problems, critical thinking, and out-of-the-box thinking.
- Ability to present own ideas and solutions, as well as guide technical discussions.
- Ability to work in a team-oriented environment while also working independently.
- Office presence twice a week.
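As a flavour of the scripted validation and unit testing mentioned above, a minimal pytest-style sketch; the transformation and the rating codes are invented for illustration.

```python
import pandas as pd

def normalise_ratings(df: pd.DataFrame) -> pd.DataFrame:
    """Hypothetical transformation under test: trims and upper-cases rating codes."""
    out = df.copy()
    out["rating"] = out["rating"].str.strip().str.upper()
    return out

# Discovered and run by pytest; plain asserts need no extra imports
def test_normalise_ratings_strips_and_uppercases():
    raw = pd.DataFrame({"rating": [" aa+ ", "bbb"]})
    result = normalise_ratings(raw)
    assert result["rating"].tolist() == ["AA+", "BBB"]

def test_normalise_ratings_preserves_row_count():
    raw = pd.DataFrame({"rating": ["a", "b", "c"]})
    assert len(normalise_ratings(raw)) == len(raw)
```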
Posted 2 weeks ago
8.0 - 13.0 years
8 - 18 Lacs
Chennai
Work from Office
About the Role: 7+ years of experience in managing Data & Analytics service delivery, preferably within a Managed Services or consulting environment.

Responsibilities:
- Serve as the primary owner for all managed service engagements across all clients, ensuring SLAs and KPIs are met consistently.
- Continuously improve the operating model, including ticket workflows, escalation paths, and monitoring practices.
- Coordinate triaging and resolution of incidents and service requests raised by client stakeholders.
- Collaborate with client and internal cluster teams to manage operational roadmaps, recurring issues, and enhancement backlogs.
- Lead a 40+ member team of Data Engineers and Consultants across offices, ensuring high-quality delivery and adherence to standards.
- Support the transition from project mode to Managed Services, including knowledge transfer, documentation, and platform walkthroughs.
- Ensure documentation is up to date for architecture, SOPs, and common issues.
- Contribute to service reviews, retrospectives, and continuous improvement planning.
- Report on service metrics, root cause analyses, and team utilization to internal and client stakeholders.
- Participate in resourcing and onboarding planning in collaboration with engagement managers, resourcing managers, and internal cluster leads.
- Act as a coach and mentor to junior team members, promoting skill development and a strong delivery culture.

Qualifications:
- ETL or ELT: Azure Data Factory, Databricks, Synapse, dbt (any two, mandatory).
- Data Warehousing: Azure SQL Server/Redshift/BigQuery/Databricks/Snowflake (any one, mandatory).
- Data Visualization: Looker, Power BI, Tableau (basic understanding to support stakeholder queries).
- Cloud: Azure (mandatory); AWS or GCP (good to have).
- SQL and Scripting: Ability to read/debug SQL and Python scripts.
- Monitoring: Azure Monitor, Log Analytics, Datadog, or equivalent tools.
- Ticketing & Workflow Tools: Freshdesk, Jira, ServiceNow, or similar.
- DevOps: Containerization technologies (e.g., Docker, Kubernetes), Git, CI/CD pipelines (exposure preferred).

Required Skills:
- Strong understanding of data engineering and analytics concepts, including ELT/ETL pipelines, data warehousing, and reporting layers.
- Experience in ticketing, issue triaging, SLAs, and capacity planning for BAU operations.
- Hands-on understanding of SQL and scripting languages (Python preferred) for debugging/troubleshooting.
- Proficient with cloud platforms like Azure and AWS; familiarity with DevOps practices is a plus.
- Familiarity with orchestration and data pipeline tools such as ADF, Synapse, dbt, Matillion, or Fabric.
- Understanding of monitoring tools, incident management practices, and alerting systems (e.g., Datadog, Azure Monitor, PagerDuty).
- Strong stakeholder communication, documentation, and presentation skills.
- Experience working with global teams and collaborating across time zones.
Posted 2 weeks ago
5.0 - 10.0 years
18 - 25 Lacs
Mumbai, Thane
Work from Office
Role & responsibilities:
- Assess the current Synapse Analytics workspace, including pipelines, notebooks, datasets, and SQL scripts.
- Rebuild or refactor Synapse pipelines, notebooks, and data models using Fabric-native services.
- Collaborate with data engineers, architects, and business stakeholders to ensure functional parity post-migration.
- Validate data integrity and performance in the new environment.
- Document the migration process, architectural decisions, and any required support materials.
- Provide knowledge transfer and guidance to internal teams on Microsoft Fabric capabilities.

Preferred candidate profile:
- Proven experience with Azure Synapse Analytics (workspaces, pipelines, dedicated/serverless SQL pools, Spark notebooks).
- 5 years of Azure Synapse cloud experience; given how new Fabric is, candidates will likely have only 1 to 2 years of Fabric experience.
- Hands-on experience with Microsoft Fabric (Data Factory, OneLake, Power BI integration).
- Strong proficiency in SQL, Python, and Spark.
- Solid understanding of data modeling, ETL/ELT pipelines, and data integration patterns.
- Familiarity with Azure Data Lake, Azure Data Factory, and Power BI.
- Experience with Lakehouse architecture and Delta Lake in Microsoft Fabric.
- Experience with CI/CD practices for data pipelines.
- Excellent communication skills and ability to work cross-functionally.

Nice-to-Have Skills:
- Familiarity with DataOps or DevOps practices in Azure environments.
- Prior involvement in medium- to large-scale cloud platform migrations.
- Knowledge of security and governance features in Microsoft Fabric.
- Knowledge of the Dynamics Dataverse link to Fabric.
Posted 2 weeks ago
3.0 - 7.0 years
8 - 11 Lacs
Pune
Work from Office
Job Summary (JD: Data Scientist): The Data Scientist III is a subject matter expert in data analytics and business analysis. They perform analysis, design, development, and optimization activities in support of data quality and data validation. They also play a critical role in defining and creating the future state of data and analytics in the enterprise. Team members are expected to build trust and relationships through effective communication with both technical and non-technical partners within multiple departments to facilitate success. A successful individual in this role is a leader who has, and imparts, business knowledge, data analytics, and business analysis expertise related to the sources and uses of data for business analytics and decisioning.
Posted 2 weeks ago
5.0 - 10.0 years
7 - 11 Lacs
Pune
Work from Office
About the Role: We're looking for a Data Engineer to help build reliable and scalable data pipelines that power reports, dashboards, and business decisions at Hevo. You'll work closely with engineering, product, and business teams to make sure data is accurate, available, and easy to use.

Key Responsibilities:
- Independently design and implement scalable ELT workflows using tools like Hevo, dbt, Airflow, and Fivetran.
- Ensure the availability, accuracy, and timeliness of datasets powering analytics, dashboards, and operations.
- Collaborate with Platform and Engineering teams to address issues related to ingestion, schema design, and transformation logic.
- Escalate blockers and upstream issues proactively to minimize delays for stakeholders.
- Maintain strong documentation and ensure discoverability of all models, tables, and dashboards.
- Own end-to-end pipeline quality, minimizing escalations or errors in models and dashboards.
- Implement data observability practices such as freshness checks, lineage tracking, and incident alerts.
- Regularly audit and improve accuracy across business domains.
- Identify gaps in instrumentation, schema evolution, and transformation logic.
- Ensure high availability and data freshness through monitoring, alerting, and incident resolution processes.
- Set up internal SLAs, runbooks, and knowledge bases (data catalog, transformation logic, FAQs).
- Improve onboarding material and templates for future engineers and analysts.

Required Skills & Experience:
- 3-5 years of experience in Data Engineering, Analytics Engineering, or related roles.
- Proficient in SQL and Python for data manipulation, automation, and pipeline creation.
- Strong understanding of ELT pipelines, schema management, and data transformation concepts.
- Experience with the modern data stack: dbt, Airflow, Hevo, Fivetran, Snowflake, Redshift, or BigQuery.
- Solid grasp of data warehousing concepts: OLAP/OLTP, star/snowflake schemas, relational and columnar databases.
- Understanding of REST APIs, webhooks, and event-based data ingestion.
- Strong debugging skills and ability to troubleshoot issues across systems.

Preferred Background:
- Experience in high-growth industries such as eCommerce, FinTech, or hyper-commerce environments.
- Experience working with or contributing to a data platform (ELT/ETL tools, observability, lineage, etc.).

Core Competencies:
- Excellent communication and problem-solving skills.
- Attention to detail and a self-starter mindset.
- High ownership and urgency in execution.
- Collaborative and coachable team player.
- Strong prioritization and resilience under pressure.
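As one example of the freshness checks mentioned above, a minimal sketch against Snowflake (any warehouse exposing a load timestamp would work the same way); the table, column, and SLA are assumptions.

```python
from datetime import datetime, timedelta, timezone

import snowflake.connector  # warehouse choice is illustrative

FRESHNESS_SLA = timedelta(hours=6)

conn = snowflake.connector.connect(user="...", password="...", account="...")
cur = conn.cursor()
# Assumes loaded_at is a TIMESTAMP_TZ so the driver returns an aware datetime
cur.execute("SELECT MAX(loaded_at) FROM analytics.fct_orders")
last_loaded = cur.fetchone()[0]

age = datetime.now(timezone.utc) - last_loaded
if age > FRESHNESS_SLA:
    # In practice this would alert via Slack/PagerDuty rather than raise
    raise RuntimeError(
        f"fct_orders is stale: last load {age} ago (SLA {FRESHNESS_SLA})"
    )
```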
Posted 2 weeks ago
2.0 - 7.0 years
40 - 45 Lacs
Chandigarh
Remote
We are seeking a highly skilled and motivated Data Science Engineer with strong experience in AI/ML, data engineering, and cloud infrastructure (AWS). You will play a critical role in shaping intelligent, scalable, and data-driven solutions that deliver meaningful impact for our clients. As part of a cross-functional team, you will design and build end-to-end data pipelines, develop predictive models, and deploy production-ready data products across a variety of industries.

Key Responsibilities:
- Collaborate with engineering teams, data scientists, and clients to build and deploy impactful data products.
- Design, develop, and maintain scalable and cost-efficient data pipelines on AWS.
- Build and integrate AI/ML models to enhance product intelligence and automation.
- Develop backend and data components using Python, SQL, and PySpark.
- Leverage AI frameworks and tools to deploy models in production environments.
- Work directly with stakeholders to understand and translate their data needs into technical solutions.
- Implement and manage cloud infrastructure including PostgreSQL, Redshift, Airflow, and MongoDB.
- Follow best practices in data architecture, model training, testing, and performance optimization.

Required Qualifications:
- Experience: minimum 3 years in data engineering, software development, or related roles; at least 2 years of hands-on experience applying AI/ML algorithms to real-world use cases; proven experience building and managing AWS-based cloud infrastructure; strong background in data analysis, mining, and model interpretability.
- Technical skills: programming languages: Python, SQL, PySpark; frameworks/tools: Airflow, Django (optional), scikit-learn, TensorFlow or PyTorch; databases: PostgreSQL, Redshift, MongoDB; experience with real-time and batch data workflows.
- Soft skills: strong problem-solving and logical reasoning abilities; excellent communication skills for client interactions and internal collaboration; ability to work in a fast-paced, dynamic environment.

Nice to Have:
- Experience with MLOps and CI/CD for ML pipelines.
- Exposure to BI tools like Tableau, Power BI, or Metabase.
- Knowledge of data security and governance on AWS.
Posted 2 weeks ago
3.0 - 8.0 years
20 - 30 Lacs
Bengaluru
Hybrid
Role & responsibilities:
- Design, develop, and optimize complex SQL queries, stored procedures, and data models for Oracle-based systems.
- Create and maintain efficient data pipelines for extract, transform, and load (ETL) processes using Informatica or Python.
- Implement data quality controls and validation processes to ensure data integrity.
- Collaborate with cross-functional teams to understand business requirements and translate them into technical specifications.
- Document database designs, procedures, and configurations to support knowledge sharing and system maintenance.
- Troubleshoot and resolve database performance issues through query optimization and indexing strategies.
- Integrate Oracle systems with cloud services, particularly AWS S3 and related technologies.
- Participate in code reviews and contribute to best practices for database development.
- Support migration of data and processes from legacy systems to modern cloud-based solutions.
- Work within an Agile framework, participating in sprint planning, refinement, and retrospectives.

Required Qualifications:
- 3+ years of experience with Oracle databases, including advanced SQL and PL/SQL development.
- Strong knowledge of data modelling principles and database design.
- Proficiency with Python for data processing and automation.
- Experience implementing and maintaining data quality controls.
- Experience with AI-assisted development (GitHub Copilot, etc.).
- Ability to reverse engineer existing database schemas and understand complex data relationships.
- Experience with version control systems, preferably Git/GitHub.
- Excellent written communication skills for technical documentation.
- Demonstrated ability to work within Agile development methodologies.
- Knowledge of domain concepts, particularly security reference data, fund reference data, transactions, orders, holdings, and fund accounting.

Additional Qualifications:
- Experience with ETL tools like Informatica and Control-M.
- Unix shell scripting skills for data processing and automation.
- Familiarity with CI/CD pipelines for database code.
- Experience with AWS services, particularly S3, Lambda, and Step Functions.
- Knowledge of database security best practices.
- Experience with data visualization tools (Power BI).
- Familiarity with domains (security reference, trades, orders, holdings, funds, accounting, index, etc.).
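To illustrate the Oracle-to-S3 integration responsibility, a minimal sketch using python-oracledb and boto3; all connection details, table names, and the bucket are placeholders.

```python
import csv
import io

import boto3
import oracledb  # python-oracledb, thin mode needs no Oracle client install

# Connection details and object names are placeholders
conn = oracledb.connect(user="app", password="...", dsn="dbhost/orclpdb1")
cur = conn.cursor()
cur.execute(
    "SELECT fund_id, nav_date, nav FROM fund_nav "
    "WHERE nav_date = TO_DATE(:d, 'YYYY-MM-DD')",
    d="2024-06-01",
)

# Serialize the result set to CSV in memory
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow([col[0] for col in cur.description])  # header from cursor metadata
writer.writerows(cur)                                  # cursor iterates row tuples

# Land the extract in the S3 data lake
boto3.client("s3").put_object(
    Bucket="example-funds-lake",
    Key="fund_nav/2024-06-01.csv",
    Body=buf.getvalue().encode("utf-8"),
)
```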
Posted 2 weeks ago
6.0 - 11.0 years
8 - 14 Lacs
Hyderabad, Pune
Work from Office
The Data Scientist (Generative AI & NLP Specialist) will be responsible for designing, developing, and deploying AI models and solutions that meet our business needs. With 4+ years of hands-on data science experience and at least 2+ years working in Generative AI, you will bring specialized expertise in LLMs and NLP. Project experience in NLP is a must, and experience in developing AI agents will be considered a strong plus. This role suits a creative, analytical, and proactive individual focused on pushing the capabilities of AI within our projects.

Primary Skills:
- Develop and implement AI models focused on NLP tasks such as text classification, entity recognition, sentiment analysis, and language generation.
- Leverage deep knowledge of Large Language Models (LLMs) to design, fine-tune, and deploy high-impact solutions across various business domains.
- Collaborate with cross-functional teams (data engineers, product managers, and domain experts) to define problem statements, build robust data pipelines, and integrate models into production systems.
- Stay current with advancements in Generative AI and NLP; research and evaluate new methodologies to drive innovation and maintain a competitive edge.
- Build, test, and optimize AI agents for automated tasks and enhanced user experiences where applicable.
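As a small example of the NLP tasks listed above, a sentiment-analysis sketch using the Hugging Face transformers pipeline; the model choice and sample texts are illustrative.

```python
from transformers import pipeline

# Model choice is illustrative; any sentiment model from the Hub would do
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

reviews = [
    "The onboarding flow was quick and painless.",
    "Support never replied to my ticket.",
]
# The pipeline batches the inputs and returns one {label, score} dict each
for review, result in zip(reviews, classifier(reviews)):
    print(f"{result['label']:>8} ({result['score']:.2f})  {review}")
```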
Posted 2 weeks ago
5.0 - 9.0 years
7 - 9 Lacs
Bengaluru
Work from Office
Role & responsibilities:
- Design, build, and maintain scalable ETL pipelines to ingest and transform data from diverse sources.
- Work extensively with Azure Data Lake, Azure Data Factory (ADF), and related Azure services for data integration and orchestration.
- Extract data from various structured and unstructured sources using APIs, flat files, databases, or cloud endpoints.
- Implement data flows to securely move and transform data into Azure Data Lake storage layers.
- Write, optimize, and maintain complex SQL queries for data transformation and analysis.
- Ensure high data quality, integrity, and reliability throughout the data pipeline.
- Collaborate with data analysts, BI developers, and stakeholders to understand data requirements and deliver robust solutions.
- Document ETL processes, data mapping, and data lineage for transparency and maintenance.
- Monitor, troubleshoot, and optimize ETL pipelines for performance and cost efficiency.

Preferred candidate profile:
- 3+ years of hands-on experience in ETL development and data engineering.
- Strong expertise in Azure Data Lake, Azure Data Factory (ADF), and pipeline creation.
- Proficiency in SQL and T-SQL for data manipulation and transformation.
- Solid understanding of ETL principles and the data lifecycle (ingestion to storage).
- Experience working with REST APIs or other APIs to extract and load data from external sources.
- Clear understanding of the differences between a data lake and a data warehouse.
- Familiarity with data governance, metadata management, and data quality standards.
- Experience with JSON/XML data structures, flat files, and cloud-native storage.
- Strong analytical, debugging, and problem-solving skills.

Nice to Have:
- Experience with Power BI or other BI/visualization tools.
- Knowledge of Azure Synapse, Databricks, or Databricks Notebooks.
- Basic scripting in Python or PySpark.
- Exposure to CI/CD pipelines using Azure DevOps or Git.
Posted 2 weeks ago
5.0 - 8.0 years
6 - 12 Lacs
Chennai
Work from Office
- Design and develop scalable cloud-based data solutions on Google Cloud Platform (GCP).
- Build and optimize Python-based ETL pipelines and data workflows.
- Work with NoSQL databases (Bigtable, Firestore, MongoDB) for high-performance data management.
Posted 2 weeks ago
8.0 - 12.0 years
35 - 40 Lacs
Bengaluru
Hybrid
Key Responsibilities:
- Develop and maintain scalable data pipelines in Databricks to support the migration of data from Oracle-based Data Warehouse (DWH) and Operational Data Store (ODS) systems.
- Analyze and understand existing Oracle schemas, stored procedures, and data transformation logic.
- Translate PL/SQL logic into PySpark/Databricks SQL (see the sketch after this posting).
- Develop and maintain Delta Lake-based datasets with appropriate partitioning, indexing, and optimization strategies.
- Perform data validation and reconciliation post-migration, including row count, data integrity, and accuracy checks.
- Build reusable and modular data ingestion and transformation frameworks using Databricks Notebooks, Jobs, and Workflows.
- Collaborate with DevOps teams for CI/CD pipeline integration and efficient deployment.
- Optimize performance of existing pipelines and queries in Databricks.
- Document architecture, transformation logic, and data lineage clearly for operational transparency.
- Work with existing ETL code in Pentaho, TIBCO, and Perl to support legacy ETL understanding and potential migration to PySpark/Databricks SQL.

Mandatory Skills & Experience:
- 6 to 8 years of experience in data engineering, with strong experience in Oracle DWH/ODS environments.
- Minimum 3+ years of hands-on experience in Databricks (including PySpark, SQL, Delta Lake, Workflows).
- Strong understanding of Lakehouse architecture, cloud data platforms, and big data processing.
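A minimal sketch of the PL/SQL-to-PySpark translation referenced above; the source query, paths, and columns are invented for illustration.

```python
# Conceptual Oracle source being migrated:
#   SELECT account_id, TRUNC(txn_date) AS txn_day, SUM(amount) AS daily_total
#   FROM ods.transactions
#   GROUP BY account_id, TRUNC(txn_date);

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Migrated ODS table, landed as Delta (path is a placeholder)
txns = spark.read.format("delta").load("/mnt/ods/transactions")

daily = (txns
         .withColumn("txn_day", F.to_date("txn_date"))  # TRUNC(txn_date) equivalent
         .groupBy("account_id", "txn_day")
         .agg(F.sum("amount").alias("daily_total")))

daily.write.format("delta").mode("overwrite").save("/mnt/dwh/daily_account_totals")
```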
Posted 2 weeks ago
7.0 - 12.0 years
30 - 35 Lacs
Pune
Work from Office
Role Description: The Engineer is responsible for managing or performing work across multiple areas of the bank's overall IT platform/infrastructure, including analysis, development, and administration. It may also involve taking functional oversight of engineering delivery for specific departments. Work includes:
- Planning and developing entire engineering solutions to accomplish business goals.
- Building reliability and resiliency into solutions with appropriate testing and reviewing throughout the delivery lifecycle.
- Ensuring maintainability and reusability of engineering solutions.
- Ensuring solutions are well architected and can be integrated successfully into the end-to-end business process flow.
- Reviewing engineering plans and quality to drive re-use and improve engineering capability.
- Participating in industry forums to drive adoption of innovative technologies, tools, and solutions in the bank.

Your Role - What You'll Do: As a SQL Engineer, you will be responsible for the design, development, and optimization of complex database systems. You will write efficient SQL queries and stored procedures, and possess expertise in data modeling, performance optimization, and working with large-scale relational databases.

Key Responsibilities:
- Design, develop, and optimize complex SQL queries, stored procedures, views, and functions.
- Work with large datasets to perform data extraction, transformation, and loading (ETL).
- Develop and maintain scalable database schemas and models.
- Troubleshoot and resolve database-related issues, including performance bottlenecks and data quality concerns.
- Maintain data security and compliance with data governance policy.

Skills You'll Need (must have):
- 8+ years of hands-on experience with SQL in relational databases: SQL Server, Oracle, MySQL, PostgreSQL.
- Strong working experience with PL/SQL and T-SQL.
- Strong understanding of data modelling, normalization, and relational DB design.

Desirable skills that will help you excel:
- Ability to write highly performant, heavily resilient queries in Oracle/PostgreSQL/MSSQL.
- Working knowledge of database modelling techniques like Star Schema, Fact-Dimension models, and Data Vault.
- Awareness of database tuning methods such as AWR reports, indexing, partitioning of data sets, and defining tablespace sizes and user roles.
- Hands-on experience with ETL tools: Pentaho, Informatica, StreamSets.
- Good experience in performance tuning, query optimization, and indexing.
- Hands-on experience with object storage and scheduling tools.
- Experience with cloud-based data services like data lakes, data pipelines, and machine learning platforms.

Educational Qualifications:
- Bachelor's degree in Computer Science/Engineering or a relevant technology or science field.
- Technology certifications from any industry-leading cloud provider.
Posted 2 weeks ago
6091 Jobs | Paris,France