3.0 - 8.0 years
8 - 18 Lacs
Gurugram, Bengaluru
Hybrid
Role & Responsibilities
- Excellent problem-solving skills, with the ability to analyze business problems systematically and deliver effective, right-sized solutions in a timely manner.
- Proven analysis and design/development skills on AWS platforms.
- Develop, test, and implement changes following Software Development Life Cycle methodologies and quality concepts.

Technical Skills
- Strong knowledge of AWS Glue, Python, and PySpark.
- Understanding of AWS data/automation services: Step Functions, Lambda, SNS, SQS.
- Hands-on experience in ETL projects.
- Good knowledge of data warehousing concepts.
- Good analytical and problem-solving skills.
- Experience with Git version control.
- Strong knowledge of relational databases (Oracle/PostgreSQL).
- Good understanding of the Software Development Life Cycle.

Nice to Have
- Working knowledge of job scheduling tools such as Autosys.
- Experience working in an Agile environment.
- AWS certification.
- Experience with PowerShell/batch scripts.
- Good organizational skills, with the ability to handle several tasks efficiently.

Education & Experience
- Bachelor's or Master's degree in Computer Science, IT, or a related technical field.
- 3 to 8 years of overall IT experience, with at least 3 years on AWS Glue/Python/PySpark.
- Strong knowledge of AWS Glue/PySpark and relational databases (Oracle, PostgreSQL, SQL Server).
- Good understanding of AWS data/automation services: Step Functions, Lambda, SNS, SQS.
- Excellent analytical and logical thinking.
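For illustration, a minimal sketch of the kind of Glue/PySpark job this posting describes; the catalog database, table, and S3 bucket names are hypothetical, and the script assumes it runs inside an AWS Glue job environment:

```python
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog (database/table names are hypothetical)
source = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders")

# Basic cleansing with Spark DataFrames
df = source.toDF().dropDuplicates(["order_id"]).filter("order_total > 0")

# Write partitioned Parquet back to S3 (bucket is a placeholder)
df.write.mode("overwrite").partitionBy("order_date") \
    .parquet("s3://example-bucket/curated/orders/")

job.commit()
```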
Posted 2 hours ago
9.0 - 14.0 years
22 - 37 Lacs
Pune, Chennai, Bengaluru
Hybrid
Role & Responsibilities
We are looking for a Senior Data Engineer with strong expertise in SQL, Python, Azure Synapse, Azure Data Factory, Snowflake, and Databricks. The ideal candidate should have a solid understanding of SQL (DDL, DML, query optimization) and ETL pipelines, along with a learning mindset to adapt to evolving technologies.

Key Responsibilities:
- Collaborate with business and IT stakeholders to define business and functional requirements for data solutions.
- Design and implement scalable ETL/ELT pipelines using Azure Data Factory, Databricks, and Snowflake.
- Develop detailed technical designs, data flow diagrams, and future-state data architecture.
- Evangelize modern data modelling practices, including entity-relationship models, star schema, and Kimball methodology.
- Ensure data governance, quality, and validation by working closely with quality engineering teams.
- Write, optimize, and troubleshoot complex SQL queries, including DDL, DML, and performance tuning.
- Work with Azure Synapse, Azure Data Lake, and Snowflake for large-scale data processing.
- Implement DevOps and CI/CD best practices for automated data pipeline deployments.
- Support real-time streaming data processing with Spark, Kafka, or similar technologies.
- Provide technical mentorship and guide team members on best practices in SQL, ETL, and cloud data solutions.
- Stay up to date with emerging cloud and data engineering technologies and demonstrate a continuous learning mindset.

Required Skills & Qualifications:
- SQL expertise: strong hands-on experience with DDL, DML, query optimization, and performance tuning.
- Programming languages: proficiency in Python or Java for data processing and automation.
- Data modelling: good understanding of entity-relationship modelling, star schema, and Kimball methodology.
- Cloud data engineering: hands-on experience with Azure Synapse, Azure Data Factory, Azure Data Lake, Databricks, and Snowflake.
- ETL development: experience building scalable ETL/ELT pipelines and data ingestion workflows; ability to learn and apply Snowflake concepts as needed.
- Communication skills: strong presentation and communication skills to engage both technical and business stakeholders in strategic discussions.
- Financial services domain (optional): knowledge of financial services.

Good-to-Have Skills:
- DevOps & CI/CD: experience with Git, Jenkins, Docker, and automated deployments.
- Streaming data processing: experience with Spark, Kafka, or real-time event-driven architectures.
- Data governance & security: understanding of data security, compliance, and governance frameworks.
- AWS: knowledge of AWS cloud data solutions (Glue, Redshift, Athena, etc.) is a plus.
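As a rough illustration of the Kimball-style modelling this role calls for, a small PySpark sketch deriving a star-schema fact and dimension from a hypothetical raw orders feed:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("star-schema-demo").getOrCreate()

# Hypothetical raw orders feed (schema given as a DDL string)
orders = spark.createDataFrame(
    [(1, "2024-01-05", "C001", "P100", 2, 19.99),
     (2, "2024-01-05", "C002", "P200", 1, 5.49)],
    "order_id INT, order_date STRING, customer_id STRING, "
    "product_id STRING, qty INT, price DOUBLE")

# Dimension: one row per customer (surrogate keys omitted for brevity)
dim_customer = orders.select("customer_id").distinct()

# Fact table keyed to dimensions; grain = one row per order line
fact_sales = (orders
    .withColumn("order_date", F.to_date("order_date"))
    .withColumn("revenue", F.col("qty") * F.col("price"))
    .select("order_id", "order_date", "customer_id",
            "product_id", "qty", "revenue"))

fact_sales.show()
```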
Posted 8 hours ago
5.0 - 10.0 years
5 - 11 Lacs
Noida, Hyderabad
Work from Office
Job Description:
- 5+ years of overall IT experience, including hands-on experience with Big Data technologies.
- Mandatory: hands-on experience in Python and PySpark; build PySpark applications using Spark DataFrames in Python.
- Experience optimizing Spark jobs that process huge volumes of data.
- Hands-on experience with version control tools like Git.
- Experience with Amazon's analytics services: Amazon EMR, Amazon Athena, AWS Glue.
- Experience with Amazon's compute services (AWS Lambda, Amazon EC2), storage (S3), and related services such as SNS.
- Good to have: knowledge of data warehousing concepts (dimensions, facts, schemas: snowflake, star, etc.).
- Experience with columnar storage formats: Parquet, Avro, ORC.
- Well versed with compression techniques: Snappy, Gzip.
- Good to have: knowledge of at least one AWS database (Aurora, RDS, Redshift, ElastiCache, DynamoDB).
- Keywords: Big Data, AWS, Python, PySpark, AWS services (IAM, Lambda, EMR, Glue).
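A brief sketch of the columnar-format and compression points above; the output paths and row count are arbitrary, and this assumes a local PySpark installation:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("columnar-demo").getOrCreate()

df = spark.range(1_000_000).withColumnRenamed("id", "event_id")

# Parquet with Snappy (Spark's default codec): columnar and splittable
df.write.mode("overwrite") \
    .option("compression", "snappy") \
    .parquet("/tmp/events_parquet")

# ORC with zlib (Gzip-family) compression for comparison
df.write.mode("overwrite") \
    .option("compression", "zlib") \
    .orc("/tmp/events_orc")
```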
Posted 8 hours ago
8.0 - 13.0 years
16 - 25 Lacs
Bengaluru
Remote
Must-Have Skills:
- Strong experience with Google BigQuery: data modeling, query optimization, performance tuning.
- Proficient in building and managing data pipelines and ETL/ELT workflows.
- Solid SQL skills and experience working with large datasets.
- Experience with Looker: creating/modifying dashboards and an understanding of LookML.
- Experience with version control (e.g., Git) and CI/CD for data solutions.
- Ability to work in Agile environments and with remote teams.

Good-to-Have Skills:
- Exposure to GCP services beyond BigQuery (e.g., Dataflow, Cloud Functions).
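To illustrate the BigQuery modeling and optimization skills listed above, a hedged sketch using the google-cloud-bigquery client; the project, dataset, and table names are placeholders:

```python
from google.cloud import bigquery  # pip install google-cloud-bigquery

client = bigquery.Client()  # uses application-default credentials

# Partitioned + clustered table cuts scan cost for date-bounded queries
ddl = """
CREATE TABLE IF NOT EXISTS `my_project.analytics.events`
(
  event_ts TIMESTAMP,
  user_id  STRING,
  action   STRING
)
PARTITION BY DATE(event_ts)
CLUSTER BY user_id
"""
client.query(ddl).result()

# Pruned query: only the 2024-01-01 partition is scanned
sql = """
SELECT user_id, COUNT(*) AS actions
FROM `my_project.analytics.events`
WHERE DATE(event_ts) = '2024-01-01'
GROUP BY user_id
"""
for row in client.query(sql).result():
    print(row.user_id, row.actions)
```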
Posted 1 day ago
2.0 - 6.0 years
8 - 12 Lacs
Bengaluru
Work from Office
Role & Responsibilities
- Develop and maintain scalable ETL/ELT pipelines using Databricks (PySpark, Delta Lake).
- Design and optimize data models in AWS Redshift for performance and scalability.
- Manage Redshift clusters and EC2-based deployments, ensuring reliability and cost efficiency.
- Integrate data from diverse sources (structured/unstructured) into centralized data platforms.
- Implement data quality checks, monitoring, and logging across pipelines.
- Collaborate with data scientists, analysts, and business stakeholders to deliver high-quality datasets.

Required Skills & Experience:
- 3-6 years of experience in data engineering.
- Strong expertise in Databricks (Spark, Delta Lake, notebooks, job orchestration).
- Hands-on experience with AWS Redshift (cluster management, performance tuning, workload optimization).
- Proficiency with AWS EC2, S3, and related AWS services.
- Strong SQL and Python skills.
- Experience with CI/CD and version control (Git).

Preferred Candidate Profile
We are seeking a skilled Data Engineer with hands-on experience in Databricks and AWS Redshift (including EC2 deployments) to design, build, and optimize data pipelines that support analytics and business intelligence initiatives.
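A minimal sketch of the Databricks/Delta Lake pipeline work described here, using Delta's MERGE for an idempotent upsert; the table path and data are hypothetical, and on Databricks the Spark session and Delta configuration are already provided:

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # pre-configured on Databricks

updates = spark.createDataFrame(
    [("C001", "GOLD"), ("C003", "SILVER")],
    "customer_id STRING, tier STRING")

# Existing Delta table (path is a placeholder)
target = DeltaTable.forPath(spark, "/mnt/lake/dim_customer")

# Idempotent upsert: update matched keys, insert new ones
(target.alias("t")
 .merge(updates.alias("s"), "t.customer_id = s.customer_id")
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())
```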
Posted 1 day ago
9.0 - 14.0 years
22 - 37 Lacs
Noida, Delhi / NCR
Hybrid
Primary Responsibilities:
- Collaborate with cross-functional teams to understand business requirements and provide database solutions that support company objectives.
- Work with software design and development teams on database architectures and data modelling, building database schemas, tables, procedures, and views.
- Work with system administrators on hardware and software installations and configurations.
- Administer, monitor, and maintain PostgreSQL databases, ensuring their integrity, security, and high performance.
- Design, implement, and maintain the database architecture, including high-availability and disaster-recovery solutions.
- Manage database environments on cloud platforms.
- Participate in database testing and quality assurance processes to validate database changes and updates.
- Analyze and tune PostgreSQL databases for optimal efficiency by identifying and addressing performance bottlenecks, optimizing queries, and fine-tuning database configurations.
- Analyze and sustain capacity and performance requirements, including effective use of indexes, parallel query execution, and other DBMS features such as the query store.
- Create and manage efficient database indexes to enhance query performance and reduce database access times.
- Maintain thorough documentation of database configurations, procedures, and best practices.
- Add and remove users, administer roles and permissions, audit, and check for security problems.

Required Qualifications:
- 8+ years of experience as a PostgreSQL data engineer, covering design, implementation, and management.
- 5+ years of strong experience with tools like Terraform, Kubernetes, Docker, and Packer.
- 5+ years of hands-on experience with CI/CD tools (GitHub Actions, Jenkins, etc.).
- 4+ years of proven experience managing databases on cloud platforms.
- 3+ years of experience handling PII (Personally Identifiable Information) or PHI (Personal Health Information).
- 5+ years of experience with change management systems such as ServiceNow and Jira.
- 5+ years of experience with SQL waits, locking, blocking, and resource contention.

Preferred Qualifications:
- Bachelor's degree in Computer Science or a related field.
- Cloud platforms: familiarity with AWS, Azure, or GCP services.
- Monitoring & logging: knowledge of monitoring tools (e.g., Prometheus, Grafana, Datadog) and logging tools (e.g., ELK Stack).
- Version control: experience with Git and branching strategies.
- Infrastructure as code: proficiency in tools like Terraform and Ansible.
- System implementation & integration: proven experience in system implementation and integration projects.
- Consulting skills: ability to consult with clients and stakeholders to understand their needs and provide expert advice.
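By way of illustration of the indexing and tuning duties above, a short psycopg2 sketch that adds an index and checks the plan; the connection details, table, and column names are placeholders:

```python
import psycopg2  # pip install psycopg2-binary

# Connection parameters are placeholders
conn = psycopg2.connect(host="localhost", dbname="appdb",
                        user="dba", password="secret")
conn.autocommit = True
cur = conn.cursor()

# Index to support a frequent "latest orders per customer" lookup
cur.execute("CREATE INDEX IF NOT EXISTS idx_orders_customer "
            "ON orders (customer_id, created_at DESC)")

# Verify the planner actually uses it
cur.execute("EXPLAIN ANALYZE SELECT * FROM orders "
            "WHERE customer_id = %s ORDER BY created_at DESC LIMIT 10",
            ("C001",))
for line, in cur.fetchall():
    print(line)

cur.close()
conn.close()
```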
Posted 1 day ago
5.0 - 10.0 years
9 - 12 Lacs
Pune
Work from Office
Hiring for a leading MNC for the position of Data Engineer, based at Kharadi (Pune).
Designation: Data Engineer
Shift Timing: 12 PM to 9 PM (cab facility provided)
Work Mode: Work from Office
Key Responsibilities:
- Liaise with stakeholders to define data requirements
- Manage Snowflake & SQL databases
- Build and optimize semantic models for reporting
- Lead modern data architecture adoption
- Reverse engineer complex data structures
- Mentor peers on data governance best practices
- Champion Agile/SCRUM methodologies
Preferred Candidates:
- Experience: 5+ years in data engineering/BI roles
- Strong ETL, data modelling, governance, and lineage documentation
- Expertise in Snowflake, Azure (SQL Server, Data Factory, Logic Apps, App Services), Power BI
- Advanced SQL & Python (OOP, JSON/XML)
- Experience with medallion architecture, Fivetran, DBT
- Application development using Python, Streamlit, Flask, Node.js, Power Apps
- Agile/Scrum project management
- Bachelor's/Master's in Math, Stats, CS, IT, or Engineering
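As a loose illustration of the Python/Streamlit application development this posting mentions, a tiny dashboard sketch; in practice the data would come from Snowflake rather than being hard-coded:

```python
import pandas as pd
import streamlit as st  # pip install streamlit; run with: streamlit run app.py

st.title("Sales Semantic Model - Demo")

# Hard-coded rows stand in for a Snowflake query result
df = pd.DataFrame({
    "region": ["North", "South", "North", "South"],
    "month": ["2024-01", "2024-01", "2024-02", "2024-02"],
    "revenue": [120000, 95000, 130000, 99000],
})

region = st.selectbox("Region", sorted(df["region"].unique()))
filtered = df[df["region"] == region]

st.dataframe(filtered)
st.bar_chart(filtered.set_index("month")["revenue"])
```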
Posted 1 day ago
6.0 - 10.0 years
14 - 22 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
We seek a senior-level AWS Data Engineer who shares our passion for innovation and change. This role is critical to helping our business partners evolve and adapt to consumers' personalized expectations in this new technological era. We are looking for a highly skilled and motivated Data Engineer to join our dynamic team. The ideal candidate will have a strong background in designing, developing, and managing data pipelines, working with cloud technologies, and optimizing data workflows. You will play a key role in supporting our data-driven initiatives and ensuring the seamless integration and analysis of large datasets.

What will help you succeed:
- Fluent English.
- Python, PySpark, Spark SQL, and SQL.
- AWS data services, including S3 (and S3 Tables), Glue, EMR, EC2, Athena, Redshift, Step Functions, and Lambda.
- Design scalable data models: develop and maintain conceptual, logical, and physical data models for structured and semi-structured data in AWS environments.
- Optimize data pipelines: work closely with data engineers to align data models with AWS-native data pipeline design and ETL best practices.
- AWS cloud data services: design and implement data solutions leveraging AWS Redshift, Athena, Glue, S3, Lake Formation, and AWS-native ETL workflows.
- Design, develop, and maintain scalable data pipelines and ETL processes using AWS services (Glue, Lambda, Redshift).
- Write efficient, reusable, and maintainable Python and PySpark scripts for data processing and transformation.
- Write complex SQL queries and optimize them for performance and scalability.
- Monitor, troubleshoot, and improve data pipelines for reliability and performance.
- Focus on ETL automation using Python and PySpark: design, build, and maintain efficient data pipelines, ensuring data quality and integrity for various applications.

This job can be filled in Pune, Bangalore, or Hyderabad.
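A hedged sketch of the Step Functions/Lambda orchestration pattern named above, using boto3; the state machine ARN, Glue job name, and event shape are all assumptions, and AWS credentials/region are expected to be configured:

```python
import json
import boto3  # AWS credentials/region assumed to be configured


def handler(event, context):
    """Hypothetical Lambda step: kick off a Glue job for the given run date."""
    glue = boto3.client("glue")
    run = glue.start_job_run(
        JobName="orders-etl",  # placeholder Glue job name
        Arguments={"--run_date": event["run_date"]},
    )
    return {"JobRunId": run["JobRunId"]}


if __name__ == "__main__":
    # Trigger the surrounding Step Functions workflow (ARN is a placeholder)
    sfn = boto3.client("stepfunctions")
    resp = sfn.start_execution(
        stateMachineArn=("arn:aws:states:ap-south-1:123456789012:"
                         "stateMachine:etl-orchestrator"),
        input=json.dumps({"run_date": "2024-01-01"}),
    )
    print("Started:", resp["executionArn"])
```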
Posted 1 day ago
5.0 - 10.0 years
6 - 16 Lacs
Bangalore Rural, Bengaluru
Work from Office
Job Summary:
We are seeking a highly skilled and experienced Data Modeler to join our data engineering team. The ideal candidate will bring deep expertise in designing scalable and efficient data models for cloud platforms, particularly with a strong background in Oracle data warehousing and Databricks Lakehouse architecture. You will play a critical role in our strategic migration from an on-prem Oracle data warehouse to a modern cloud-based Databricks platform.

Required Skills & Experience:
- 5+ years of hands-on experience in data modeling, including conceptual, logical, and physical design.
- Proven experience migrating large-scale Oracle DWH environments to Databricks Lakehouse or similar platforms.
- Strong expertise in Oracle database schemas, PL/SQL, and performance tuning.
- Proficiency in Databricks, Delta Lake, Spark SQL, and DataFrame APIs.
- Experience designing models optimized for cloud platforms (preferably AWS or Azure).
- Deep knowledge of dimensional modeling techniques (star/snowflake).
- Familiarity with tools and practices for metadata management, data lineage, and governance.
- Strong analytical and communication skills, with the ability to work collaboratively in Agile teams.
- Ability to document and communicate data model designs to both technical and non-technical stakeholders.
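To make the migration work concrete, a small sketch of pulling an Oracle table into Delta via Spark's JDBC reader; the JDBC URL, schema, and paths are placeholders, and the Oracle driver JAR must be available on the cluster:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# JDBC details are placeholders
src = (spark.read.format("jdbc")
       .option("url", "jdbc:oracle:thin:@//oracle-host:1521/ORCLPDB1")
       .option("dbtable", "DWH.SALES_FACT")
       .option("user", "etl_user")
       .option("password", "secret")
       .option("fetchsize", 10000)  # larger fetches reduce round trips
       .load())

# Land as Delta (bronze layer) so downstream models can evolve incrementally
src.write.format("delta").mode("overwrite") \
    .save("/mnt/lake/bronze/sales_fact")
```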
Posted 1 day ago
0.0 years
3 - 4 Lacs
Gurugram
Work from Office
We are looking for a motivated and enthusiastic Trainee Data Engineer to join our Engineering team. This is an excellent opportunity for recent graduates to start their career in data engineering, work with modern technologies, and learn from experienced professionals. The candidate should be eager to learn, curious about data, and willing to contribute to building scalable and reliable data systems.

Responsibilities:
- Understand and align with the values and vision of the organization.
- Adhere to all company policies and procedures.
- Support the development and maintenance of data pipelines under supervision.
- Assist in handling data ingestion, processing, and storage tasks.
- Learn and contribute to database management and basic data modeling.
- Collaborate with team members to understand project requirements.
- Document assigned tasks, processes, and workflows.
- Stay proactive in learning new tools, technologies, and best practices in data engineering.

Required Candidate Profile:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Fresh graduates or candidates with up to 1 year of experience are eligible.

Apply link: https://leewayhertz.zohorecruit.in/jobs/Careers/32567000019403095/Trainee-Data-Engineer?source=CareerSite

LeewayHertz is an equal opportunity employer and does not discriminate based on race, color, religion, sex, age, disability, national origin, sexual orientation, gender identity, or any other protected status. We encourage a diverse range of applicants.
Posted 2 days ago
18.0 - 25.0 years
15 - 30 Lacs
Hyderabad
Work from Office
Greetings from Technogen!
Thank you for taking the time to review this opportunity; your experience and expertise appear relevant to a current opening with our client.
About Technogen: https://technogenindia.com/
Technogen India Pvt. Ltd. is a boutique talent & IT solutions company, founded in 2008, serving global customers for nearly two decades. Talent Solutions: we assist GCCs, global MNCs, and IT majors with their critical and unique IT talent needs through Recruitment Process Outsourcing (RPO), contract staffing, permanent hiring, Hire-Train-Deploy (HTD), Build-Operate-Transfer (BOT), and offshore staffing.

Job Title: Data Engineer
Required Experience: 8 years
Work Mode: WFO, 4 days from office
Shift Time: UK shift, 12:00 PM to 9:00 PM IST
Location: Hyderabad

Job Summary:
We are seeking a Data Engineer and Problem Manager, based out of our Technology & Innovation Center in Hyderabad, India, reporting to the IT Director for Enterprise Data and Analytics. The person in this role will be responsible for managing, monitoring, and maintaining scalable data integration and analytics pipelines to support enterprise reporting and data-driven decision-making. The role requires close collaboration with cross-functional teams to integrate data from various source systems into a centralized, cloud-based data warehouse, primarily leveraging tools such as Google BigQuery, Python, SQL, DBT, and Cloud Composer (Airflow). The Data Engineer will also be responsible for implementing data quality checks, managing orchestration workflows, and delivering business-ready datasets aligned with the enterprise data strategy.

What Your Impact Will Be:
- Experience in incident management, problem management, and RCA activity; ITIL certification preferred.
- Experience in O2C and R2P business processes.
- Monitor and analyze data integration pipelines that ingest structured and semi-structured data from enterprise systems (e.g., ERP, CRM, e-commerce, order management) into a centralized cloud data warehouse using Google BigQuery.
- Build analytics-ready pipelines that transform raw data into trusted, curated datasets for reporting, dashboards, and advanced analytics.
- Implement transformation logic using DBT to create modular, maintainable, and reusable data models that evolve with business needs.
- Apply BigQuery best practices, including partitioning, clustering, and query optimization, to ensure high performance and scalability.
- Automate data workflows using Cloud Composer (Airflow), ensuring reliable execution, task dependency management, and timely data delivery.
- Develop efficient, reusable Python and SQL code for data ingestion, transformation, validation, and performance tuning across the pipeline lifecycle.
- Establish robust data quality checks and testing strategies to validate both technical accuracy and alignment with business logic.
- Collaborate with cross-functional teams, including data analysts, BI developers, and product owners, to understand integration needs and deliver impactful, business-aligned data solutions.
- Leverage modern ETL platforms such as Ascend.io, Databricks, Dataflow, or Fivetran to accelerate development and improve observability and orchestration.
- Contribute to technical documentation, CI/CD workflows, and monitoring processes to drive transparency, reliability, and continuous improvement across the data engineering ecosystem.

What We're Looking For:
- Bachelor's or master's degree in Computer Science, Data Engineering, Information Systems, or a related technical field.
- 8+ years of hands-on experience in data engineering, with a focus on data integration, warehousing, and analytics pipelines.
- Hands-on experience troubleshooting, diagnosing problems, finding root causes, and communicating them to the development team.
- Techno-functional knowledge of ERP application integration in the O2C and R2P areas.
- Hands-on experience with:
  - Google BigQuery as a centralized data warehousing and analytics platform.
  - Python for scripting, data processing, and integration logic.
  - SQL for data transformation, complex querying, and performance tuning.
  - DBT for building modular, maintainable, and reusable transformation models.
  - Airflow / Cloud Composer for orchestration, dependency management, and job scheduling.
- Solid understanding of ITIL incident management and problem management.
- Strong knowledge of data testing frameworks, validation methods, and best practices.

Preferred Skills (Optional):
- Experience with Ascend.io or comparable ETL platforms such as Databricks, Dataflow, or Fivetran.
- Familiarity with data cataloging and governance tools like Collibra.
- Knowledge of CI/CD practices, Git-based workflows, and infrastructure automation tools.
- Exposure to event-driven or real-time streaming pipelines using tools like Pub/Sub or Kafka.
- Strong problem-solving and analytical mindset, with the ability to identify innovative solutions and quickly learn new technologies, programming languages, and frameworks.
- Excellent communication skills, both written and verbal.
- Ability to work in a fast-paced and collaborative environment.
- Good experience with Agile methodologies such as Scrum and Kanban, and with managing IT backlogs.

What It's Like to Work Here:
We are a purpose-driven company aiming to empower the next generation to explore the wonder of childhood and reach their full potential. We live up to our purpose through the following behaviors:
- We collaborate: being part of the company means being part of one team with shared values and common goals. Every person counts, and working closely together always brings better results. Partnership is our process, and our collective capabilities are our superpowers.
- We innovate: we always aim to find new and better ways to create innovative products and experiences. No matter where you work in the organization, you can make a difference and have real impact. We welcome new ideas and value new initiatives that challenge conventional thinking.
- We execute: we are a performance-driven company. We strive for excellence and are focused on pursuing best-in-class outcomes. We believe in accountability and ownership, and know that our people are at their best when they are empowered to create and deliver results.
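For illustration, a minimal Cloud Composer (Airflow 2) DAG of the ingest/transform/validate shape described above; the DAG id, schedule, and dbt commands are assumptions rather than details from the posting:

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

default_args = {"retries": 2, "retry_delay": timedelta(minutes=5)}

with DAG(
    dag_id="bq_daily_load",          # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule_interval="0 3 * * *",   # daily at 03:00
    catchup=False,
    default_args=default_args,
) as dag:
    # Each step shells out; real tasks might use GCS/BigQuery operators instead
    ingest = BashOperator(task_id="ingest",
                          bash_command="echo 'pull raw files'")
    transform = BashOperator(task_id="dbt_run",
                             bash_command="dbt run --select marts")
    validate = BashOperator(task_id="dbt_test",
                            bash_command="dbt test --select marts")

    ingest >> transform >> validate
```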
Posted 2 days ago
5.0 - 10.0 years
15 - 25 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
Tech Stack: AWS Big Data
- Expertise in ETL, SQL, Python, and AWS tools such as Redshift, S3, Glue, Data Pipeline, and Lambda is a must.
- Good to have: knowledge of Glue Workflows, Step Functions, QuickSight, Athena, Terraform, and Docker.

Responsibilities:
- Assist in the analysis, design, and development of a roadmap, design patterns, and implementation based on a current vs. future state from an architecture viewpoint.
- Participate in data-related technical and business discussions about the future serverless architecture.
- Work with our enterprise customers to migrate data into the cloud.
- Set up scalable ETL processes to move data into a cloud warehouse.
- Deep understanding of data warehousing, dimensional modelling, ETL architecture, data conversion/transformation, database design, data warehouse optimization, data mart development, etc.

Location: Remote; Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad
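A short sketch of the S3-to-Redshift loading pattern implied above; since Redshift speaks the PostgreSQL wire protocol, psycopg2 can issue the COPY, with the cluster endpoint, bucket, and IAM role as placeholders:

```python
import psycopg2  # Redshift accepts PostgreSQL-protocol connections

conn = psycopg2.connect(
    host="my-cluster.xxxx.ap-south-1.redshift.amazonaws.com",  # placeholder
    port=5439, dbname="dev", user="etl", password="secret")
conn.autocommit = True
cur = conn.cursor()

# Bulk-load partitioned Parquet from S3; the IAM role ARN is a placeholder
cur.execute("""
    COPY analytics.orders
    FROM 's3://example-bucket/curated/orders/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy'
    FORMAT AS PARQUET
""")

cur.close()
conn.close()
```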
Posted 2 days ago
3.0 - 5.0 years
6 - 10 Lacs
Bangalore Rural, Bengaluru
Work from Office
Job Description
Experience: 4+ years
Required Skills:
- Use SQL to query databases, extract and manipulate data for reporting, and perform complex joins and aggregations to generate insights related to growth metrics.
- Strong in Python.
- Proficient in ETL (Extract, Transform, Load) processes, data warehousing solutions (Databricks), and big data technologies (e.g., Hadoop, Spark).
- Proficient in Structured Streaming and the Delta file format.
- Must have experience in data streaming or Kafka.

Note: Interested candidates can send their resume to jyotiprakash@mirafra.com
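To illustrate the Structured Streaming plus Kafka requirement, a minimal PySpark sketch; the broker address and topic are placeholders, and the spark-sql-kafka package must be on the classpath:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka-stream").getOrCreate()

events = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")  # placeholder
          .option("subscribe", "growth-events")              # placeholder topic
          .load())

# Kafka values arrive as bytes; decode, then count events per minute
counts = (events
          .selectExpr("CAST(value AS STRING) AS v", "timestamp")
          .groupBy(F.window("timestamp", "1 minute"))
          .count())

query = (counts.writeStream
         .outputMode("complete")
         .format("console")
         .start())
query.awaitTermination()
```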
Posted 2 days ago
5.0 - 9.0 years
20 - 32 Lacs
Pune, Chennai, Bengaluru
Hybrid
If interested, please share your CV with the details below filled in to snidafazli@altimetrik.com:
- Name (as per Aadhaar card):
- Number:
- Email ID:
- Current CTC:
- Fixed CTC:
- Expected CTC:
- Holding any offers:
- Current company:
- Payroll company:
- Notice period:
- Exact LWD:
- Current location:
- Preferred location:
- Total experience:
- Relevant experience (in years) for each of the following:
  - Python:
  - Git:
  - GenAI:
  - MLOps:
Posted 2 days ago
10.0 - 20.0 years
37 - 50 Lacs
Hyderabad/Secunderabad
Hybrid
Job Objective:
We are looking for an Azure Data and Analytics Solutions Architect, focused on data engineering, to join our growing Data and Analytics consulting organization. The role is key to the digital journey of our globally distributed clients, with Data & AI at its core. This role drives exploratory sessions to identify opportunities and arrive at Data and Analytics solutions that may involve technology, process, and people. Once solutions are identified, the role orchestrates, architects, and technically drives them, going hands-on when and where needed. The role also mentors, trains, and enables consultants at various levels of experience and expertise. The potential applicant is expected to hold opinions grounded in market research, technical trends, and lessons from experience.

Required Qualifications:
Education: BE, ME/MTech, MCA, MSc, MBA, or equivalent industry experience.

Preferred Qualifications & Skills:
- 10-18 years of relevant experience architecting, designing, developing, and delivering data solutions on premise and, predominantly, on Azure Cloud Data & AI services.
- Batch, real-time, and hybrid solutions with high velocity and large volumes; experience with traditional big data frameworks like Hadoop is beneficial.
- Architectural advisory for big data solutions on Azure and the Microsoft technical stack.
- Pre-sales and account mining to find new opportunities and propose the right solutions for a given context.
- Data warehousing concepts, dimensional modelling, tabular modelling, star and snowflake models, MDX/DAX, etc.
- Strong technical knowledge, including hands-on experience with most of: SQL, SQL Warehouse, Azure Data Factory, Azure storage accounts, Data Lake, Databricks, Azure Functions, Synapse, Stream Analytics, and Power BI or another visualization tool.
- Working with NoSQL databases like Cosmos DB.
- Working with various file formats and storage types.
- ETL- and ELT-based data orchestration for batch and real-time data.
- Strong programming skills: experience and expertise in one of Java, Python, Scala, or C#/.NET.
- Driving decisions collaboratively, resolving conflicts, and ensuring follow-through, with exceptional verbal and written communication skills.
- Experience on real-time, end-to-end projects using Agile/Waterfall methodologies and associated tools.

Responsibilities:
- Understand the client scenario, derive or understand business requirements, and propose the right data solutions architecture using both on-premise and cloud-based services.
- Create scalable and efficient data architecture that respects data integrity, quality, security, and reuse, laying the foundation for present and future scalable solutions.
- Communicate not only in data and technical terms but also in functional and business terms with stakeholders; focus on true business value delivery through efficient architecture and technical solutions.
- Have an opinion and advise clients on the right path aligned with their business, data, and technical strategies, in line with market trends in Data & AI solutions.
- Establish data architecture built on modern data-driven principles such as flexibility at scale, parallel and distributed processing, and democratized data access that makes teams more productive.
- Provide thought leadership: points of view, ideation, and webinars; be the custodian of best practices.
- Ensure that solutions exhibit high levels of performance, security, scalability, maintainability, appropriate reusability, and reliability upon deployment.
- Maintain and upgrade technical skills, keeping up to date with market trends; educate and guide both customers and colleagues.

Nice-to-Have Skills:
- Experience or knowledge of other cloud-based data solutions such as AWS and GCP.
- Expertise or knowledge in visualization/reporting tools such as Qlik, Power BI, or Tableau.
- Expertise in one or more data domains.
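As one concrete example of the Cosmos DB work mentioned above, a brief azure-cosmos sketch; the endpoint, key, database, and container names are placeholders:

```python
from azure.cosmos import CosmosClient  # pip install azure-cosmos

# Endpoint and key are placeholders
client = CosmosClient("https://myaccount.documents.azure.com:443/",
                      credential="<account-key>")
db = client.get_database_client("retail")
container = db.get_container_client("orders")

# Upsert a document (assumes the container's partition key is /customerId)
container.upsert_item({"id": "1", "customerId": "C001", "total": 42.5})

# Parameterized query across partitions
query = "SELECT c.customerId, c.total FROM c WHERE c.total > @min"
for item in container.query_items(
        query=query,
        parameters=[{"name": "@min", "value": 10}],
        enable_cross_partition_query=True):
    print(item)
```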
Posted 2 days ago
6.0 - 11.0 years
25 - 35 Lacs
Bengaluru
Work from Office
Hi,
(On-the-spot offer) We are looking for suitable candidates for a face-to-face drive for a Tier A company on Saturday, 23rd Aug 2025, in Bengaluru, for the role of Gen AI Developer. Interested candidates can send their resume to Kanishk.mittal@thehrsolution.in. Face-to-face interview, direct payroll role, with on-the-spot selection and joining.
Experience range: 4 to 12 years
Location: Bengaluru, India

JD - Gen AI Engineer/Developer
Role Overview:
We are seeking a skilled OpenAI Developer to join our dynamic team. The ideal candidate will have a robust understanding of AI and machine learning, with particular expertise in integrating OpenAI models with Microsoft's Power Platform (including Power Automate and Power Virtual Agents), Azure Cognitive Services, and agentic AI frameworks. This role also requires experience deploying AI solutions on Azure and building modern Autogen-based architectures for intelligent multi-agent systems.

Key Responsibilities:
- Design, develop, and optimize OpenAI-powered applications, including multi-agent frameworks, chatbots, and AI-based customer support systems.
- Leverage agentic technologies (e.g., the Autogen framework) to implement AI agents capable of collaborating and completing complex tasks autonomously.
- Build and maintain backend services using Python or .NET to support OpenAI integrations.
- Work with custom datasets, applying techniques such as chunking, embedding, and vector search for model fine-tuning and retrieval-augmented generation (RAG).
- Integrate Azure Cognitive Services (e.g., Text Analytics, Translator, Form Recognizer) to enhance the functionality and intelligence of AI solutions.
- Ensure scalable and secure deployment of AI solutions on Azure, using services like Azure Functions, Azure App Service, and Azure Kubernetes Service (AKS).
- Regularly evaluate and fine-tune GPT models to ensure high performance and relevance.
- Collaborate with cross-functional teams, including product, design, and QA, to ensure seamless development and deployment pipelines.
- Stay up to date with advancements in OpenAI, Azure AI, and agent-based technologies, and proactively contribute innovative ideas.

Required Skills and Experience:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Proficiency in Python for backend and AI integration development.
- Strong experience with OpenAI GPT models and their practical applications.
- Experience with agentic technologies (e.g., Autogen, LangGraph, CrewAI) for multi-agent systems.
- Familiarity with Microsoft Azure Cognitive Services and Azure AI Studio.
- Solid grasp of machine learning fundamentals, including training, fine-tuning, and evaluating language models.
- Practical knowledge of Natural Language Processing (NLP) techniques.
- Hands-on experience with CI/CD tools such as Google Cloud Build, Jenkins, GitHub Actions, or Azure DevOps.
- Ability to work collaboratively in an agile, team-oriented environment under tight deadlines.
- Excellent problem-solving, debugging, and analytical skills.

Desirable Skills:
- Master's degree or higher in AI, ML, or a related technical field.
- Prior experience in chatbot development for customer service or enterprise use cases.
- Certification(s) in Microsoft Azure, OpenAI, or machine learning technologies.
- Experience deploying and managing OpenAI solutions in Azure environments.
- Familiarity with RAG architectures, Azure AI Search, and blob storage indexing.
- Knowledge of Copilot Studio, Power Platform AI Builder, or conversational AI tools.
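A hedged sketch of the RAG pattern this JD references, using the OpenAI Python client; the model names are current public ones, and the retrieved context is hard-coded so the example stays self-contained (a real system would fetch it from a vector store):

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Embed the user question for vector search (the index lookup itself is elided)
q = "How do I reset my billing address?"
emb = client.embeddings.create(model="text-embedding-3-small", input=q)
query_vector = emb.data[0].embedding  # would be sent to a vector store

# Retrieved context is hard-coded here for self-containment
context = "Billing addresses can be changed under Account > Billing."

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": f"Answer using only this context:\n{context}"},
        {"role": "user", "content": q},
    ],
)
print(resp.choices[0].message.content)
```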
Posted 2 days ago
5.0 - 10.0 years
15 - 25 Lacs
Chennai
Work from Office
Hiring for a top IT company.
Designation: AWS Data Engineer
Skills: AWS, Glue, Lambda, SQL, Python, Redshift, S3
Location: Chennai
Experience: 5+ years
Call: Akshita - 9785478741, Surbhi - 8058357806, Ambika - 9672301543
Thanks,
Team Converse
Posted 3 days ago
8.0 - 10.0 years
12 - 16 Lacs
Indore, Hyderabad, Ahmedabad
Work from Office
Notice Period: immediate joiners or those available within 15 days preferred.
Share your resume with: current CTC, expected CTC, notice period, and preferred job location.

Primary Skills:
- MSSQL, Redshift, Snowflake
- T-SQL, LinkSQL, stored procedures
- ETL pipeline development
- Query optimization & indexing
- Schema design & partitioning
- Data quality, SLAs, data refresh
- Source control (Git/Bitbucket), CI/CD
- Data modeling, versioning
- Performance tuning & troubleshooting

What You Will Do:
- Design scalable, partitioned schemas for MSSQL, Redshift, and Snowflake.
- Optimize complex queries, stored procedures, indexing, and performance tuning.
- Build and maintain robust data pipelines to ensure timely, reliable delivery of data.
- Own SLAs for data refreshes, ensuring reliability and consistency.
- Collaborate with engineers, analysts, and DevOps to align data models with product and business needs.
- Troubleshoot performance issues, implement proactive monitoring, and improve workflows.
- Enforce best practices for data security, governance, and compliance.
- Use schema migration/versioning tools for database changes.

What You'll Bring:
- Bachelor's or Master's in Computer Science, Engineering, or a related field.
- 8+ years of experience in database engineering or backend data systems.
- Expertise in MySQL, Redshift, Snowflake, and schema optimization.
- Strong experience writing functions, procedures, and robust SQL scripts.
- Proficiency with ETL processes, data modeling, and data freshness SLAs.
- Experience handling production performance issues as the go-to database expert.
- Hands-on with Git, CI/CD pipelines, and data observability tools.
- Strong problem-solving, collaboration, and analytical skills.

If you're interested and meet the above criteria, please share your resume with your current CTC, expected CTC, notice period, and preferred job location. Immediate or 15-day joiners will be prioritized.
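To ground the T-SQL/stored-procedure and refresh-SLA duties above, a small pyodbc sketch; the connection string, tables, and procedure are hypothetical:

```python
import pyodbc  # pip install pyodbc; requires an ODBC driver for SQL Server

# Connection string values are placeholders
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=db-host;"
    "DATABASE=analytics;UID=etl;PWD=secret"
)
cur = conn.cursor()

# A simple T-SQL stored procedure that refreshes a reporting table
cur.execute("""
CREATE OR ALTER PROCEDURE dbo.refresh_daily_sales AS
BEGIN
    TRUNCATE TABLE dbo.daily_sales;
    INSERT INTO dbo.daily_sales (sale_date, revenue)
    SELECT CAST(created_at AS DATE), SUM(amount)
    FROM dbo.orders
    GROUP BY CAST(created_at AS DATE);
END
""")
conn.commit()

# A scheduled refresh job would call the proc and log the SLA timestamp
cur.execute("EXEC dbo.refresh_daily_sales")
conn.commit()
cur.close()
conn.close()
```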
Posted 3 days ago
4.0 - 8.0 years
9 - 19 Lacs
Gurugram, Chennai, Bengaluru
Work from Office
Role & Responsibilities
Qualifications: Bachelor's degree in Information Technology or a related field.
Experience: 4+ years in data engineering or related roles.
- Proven experience with Informatica IDMC.
- Strong Unix/Linux skills, including scripting and system administration.
- Proficiency with AWS cloud services and RDS databases.
- Proficiency in batch orchestration tools such as AutoSys and Apache Airflow.
- Excellent problem-solving and analytical skills.
- Strong understanding of data integration and ETL processes.
- Ability to work in a fast-paced, dynamic environment.
- Strong communication and interpersonal skills.
- Proficiency with CI/CD pipelines and DevOps practices.
Posted 3 days ago
9.0 - 13.0 years
30 - 45 Lacs
Bengaluru
Remote
Lead Data Engineer - What You Will Do:
As a PR3 Lead Data Engineer, you will be instrumental in driving our data strategy, ensuring data quality, and leading the technical execution of a small, impactful team. Your responsibilities will include:

Team Leadership:
- Establish the strategic vision for the evolution of our data products and technology solutions, then provide technical leadership and guidance for a small team of Data Engineers executing the roadmap.
- Champion and enforce best practices for data quality, governance, and architecture within your team's work.
- Embody a product mindset over the team's data.
- Oversee the team's use of Agile methodologies (e.g., Scrum, Kanban), ensuring smooth and predictable delivery, with an overt focus on continuous improvement.

Data Expertise & Domain Knowledge:
- Actively seek out, propose, and implement cutting-edge approaches to data transfer, transformation, analytics, and data warehousing to drive innovation.
- Design and implement scalable, robust, and high-quality ETL processes to support growing business demand for information, delivering data as a reliable service that directly influences decision making.
- Develop a profound understanding and "feel" for the business meaning, lineage, and context of each data field within our domain.

Communication & Stakeholder Partnership:
- Collaborate with other engineering teams and business partners, proactively managing dependencies and holding them accountable for their contributions to ensure successful project delivery.
- Actively engage with data consumers to achieve a deep understanding of their specific data usage, pain points, and current gaps, then plan initiatives to implement improvements collaboratively.
- Clearly articulate project goals, technical strategies, progress, challenges, and business value to both technical and non-technical audiences.
- Produce clear, concise, and comprehensive documentation.

Your Qualifications:
At Vista, we value the experience and potential that individual team members add to our culture. Please don't hesitate to apply even if you don't meet the exact qualifications; we look forward to learning more about you!
- Bachelor's or Master's degree in computer science, data engineering, or a related field.
- 10+ years of professional experience, with at least 6 years of hands-on data engineering (specifically in e-commerce or direct-to-consumer) and 4 years of team leadership.
- Demonstrated experience leading a team of data engineers, providing technical guidance, and coordinating project execution.
- Stakeholder management experience and excellent communication skills.
- Strong knowledge of SQL and data warehousing concepts is a must.
- Strong knowledge of data modeling concepts and hands-on experience designing complex multi-dimensional data models.
- Strong hands-on experience designing and managing scalable ETL pipelines in cloud environments with large-volume datasets (both structured and unstructured).
- Proficiency with cloud services in AWS (preferred), including S3, EMR, RDS, Step Functions, Fargate, Glue, etc.
- Critical hands-on experience with cloud-based data platforms (Snowflake strongly preferred).
- Data visualization experience with reporting and data tools (preferably Looker with LookML skills).
- Coding mastery in at least one modern programming language: Python (strongly preferred), Java, Golang, PySpark, etc.
- Strong knowledge of production standards such as versioning, CI/CD, data quality, documentation, and automation.
- Problem-solving and multi-tasking ability in a fast-paced, globally distributed environment.

Nice to Have:
- Experience with API development on enterprise platforms, with GraphQL APIs being a clear plus.
- Hands-on experience designing DBT data pipelines.
- Knowledge of finance, accounting, supply chain, logistics, operations, or procurement data.
- Experience managing work in Jira and writing documentation in Confluence.
- Proficiency in AWS account management, including IAM, infrastructure, and monitoring for health, security, and cost optimization.
- Experience with Gen AI/ML tools for enhancing data pipelines or automating analysis.

Why You'll Love Working Here:
There is a lot to love about working at Vista. We are an award-winning remote-first company. We're an inclusive community. We're growing (which means you can too). And to help orient us all in the same direction, we have our Vista Behaviors, which exemplify the behavioral attributes that make us a culturally strong and high-performing team.

Our Team: Enterprise Business Solutions
Vista's Enterprise Business Solutions (EBS) domain is working to make our company one of the most data-driven organizations in support of Finance, Supply Chain, and HR functions. The cross-functional team includes product owners, analysts, technologists, data engineers, and more, all focused on providing Vista with cutting-edge tools and data we can use to deliver jaw-dropping customer value. EBS team members are empowered to learn new skills, communicate openly, and be active problem-solvers.

Join our EBS domain as a Lead Data Engineer! This lead-level role is responsible for the work of a small team of data engineers, focusing not only on implementations but also on operations and support. The Lead Data Engineer will implement best practices, data standards, and reporting tools, and will oversee the work of other data engineers while remaining an individual contributor. This role has significant opportunity to shape ETL development and the implementation of new solutions. We will look to the Lead Data Engineer to modernize data technology solutions in EBS, including modern warehousing, finance, and HR datasets and integration technologies. The role requires an in-depth understanding of cloud data integration tools and cloud data warehousing, with a strong, pronounced ability to lead and execute initiatives to tangible results.
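As an illustration of the production data-quality standards this role emphasizes, a small check script against Snowflake; the connection values, tables, and thresholds are assumptions:

```python
import snowflake.connector  # pip install snowflake-connector-python

EXPECTATIONS = {
    # table: (minimum row count, column that must never be NULL)
    "marts.fct_orders": (1_000, "order_id"),
    "marts.dim_customer": (100, "customer_id"),
}

# Connection values are placeholders
conn = snowflake.connector.connect(
    account="myorg-myaccount", user="CI_BOT", password="secret",
    warehouse="WH_XS", database="ANALYTICS")
cur = conn.cursor()

failures = []
for table, (min_rows, key_col) in EXPECTATIONS.items():
    cur.execute(f"SELECT COUNT(*), COUNT_IF({key_col} IS NULL) FROM {table}")
    total, nulls = cur.fetchone()
    if total < min_rows:
        failures.append(f"{table}: only {total} rows (expected >= {min_rows})")
    if nulls:
        failures.append(f"{table}: {nulls} NULL values in {key_col}")

cur.close()
conn.close()
if failures:
    raise SystemExit("Data quality failed:\n" + "\n".join(failures))
print("All data quality checks passed")
```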
Posted 3 days ago
4.0 - 8.0 years
0 Lacs
Karnataka
On-site
A career at HARMAN Technology Services (HTS) offers you the opportunity to be part of a global, multi-disciplinary team dedicated to leveraging the power of technology to drive innovation and shape the future. At HARMAN HTS, you will tackle challenges by creating cutting-edge solutions that combine the physical and digital realms, making technology a dynamic force for addressing challenges and meeting humanity's needs. Working at the forefront of cross-channel UX, cloud computing, insightful data analysis, IoT, and mobility, you will empower companies to innovate, enter new markets, and enhance customer experiences.

As a Data Engineer - Microsoft Fabric at HARMAN, your primary responsibility will be to develop and implement data engineering projects, including enterprise data hubs or big data platforms. You will design and establish data pipelines to enhance the efficiency and repeatability of data science projects, ensuring that data architecture solutions align with business requirements and organizational needs. Collaborating with stakeholders, you will identify data requirements, develop data models, and create data flow diagrams. Working closely with cross-functional teams, you will integrate, transform, and load data across various platforms and systems effectively, while also implementing data governance policies to ensure secure and efficient data management.

To excel in this role, you should possess expertise in ETL and data integration tools such as Informatica, Qlik Talend, and Apache NiFi, along with knowledge of cloud computing platforms like AWS, Azure, or Google Cloud. Proficiency in programming languages such as Python, Java, or Scala, as well as experience with data visualization tools like Tableau, Power BI, or QlikView, is essential. Additionally, familiarity with analytics, machine learning concepts, relational databases (e.g., MySQL, PostgreSQL, Oracle), and NoSQL databases (e.g., MongoDB, Cassandra) is required. A strong background in big data technologies such as Hadoop, Spark, Snowflake, Databricks, and Kafka will be beneficial, along with expertise in data modeling, data warehousing, and data integration techniques.

As a key contributor to the growth of the Center of Excellence (COE) and a leader in influencing client revenues through data and analytics solutions, you will guide a team of data engineers, oversee the development and deployment of data solutions, and define new data services and offerings. Your role will involve building strong client relationships, aligning with business goals, and driving innovation in data services. You will also stay updated on the latest data trends, collaborate with stakeholders, and communicate the capabilities and achievements of the Data team effectively.

To be eligible for this position, you should have 4-5 years of experience in the information technology industry, with a focus on data engineering and architecture, and a proven track record of leading and setting up data practices in IT services or niche organizations. A master's or bachelor's degree in relevant fields such as computer science, data science, or engineering is preferred, along with experience in creating data and analytics solutions across various domains. Strong problem-solving, communication, and collaboration skills, along with expertise in data visualization and reporting tools, are essential for success in this role.

At HARMAN, we offer employee discounts on our premium products, professional development opportunities through HARMAN University, and an inclusive work environment that fosters personal and professional growth. Join our talented team at HARMAN and be part of a culture that values diversity, encourages innovation, and supports individuality. If you are ready to make a lasting impact through innovation and technology, we invite you to join our talent community today.
Posted 5 days ago
5.0 - 9.0 years
0 Lacs
Pune, Maharashtra
On-site
Do you have a curious mind, want to be involved in the latest technology trends, and like to solve problems that have a meaningful benefit to hundreds of users across the bank? Join our Tech Services - Group Chief Technology Office team and become a core contributor to the execution of the bank's global AI Strategy, particularly helping the bank deploy AI models quickly and efficiently!

We are looking for an experienced Data Engineer or ML Engineer to drive the delivery of an innovative ecosystem of tools and services. In this AI-focused role, you will contribute to the development of an SDK for Data Producers across the firm to build high-quality autonomous Data Products for cross-divisional consumption, and for Data Consumers (e.g., Data Scientists, Quantitative Analysts, Model Developers, Model Validators, and AI agents) to easily discover and access data and build AI use-cases.

Responsibilities may include:
- Country lead of other Senior Engineers.
- Direct interaction with product owners and internal users to identify requirements, develop technical solutions, and execute.
- Leading development of an SDK (Software Development Kit) to automatically capture data product, dataset, and AI/ML model metadata, also leveraging LLMs to generate descriptive information about assets.
- Integration and publication of metadata into UBS's AI use-case inventory, model artifact registry, and Enterprise Data Mesh data product and dataset catalogue for discovery and regulatory compliance purposes.
- Design and implementation of services that seamlessly collect runtime evidence and operational information about a data product or model and publish it to appropriate visualization tools.
- Creation of a collection of starters/templates that accelerate the creation of new data products by leveraging the latest tools and services and providing diverse and rich experiences to the Devpod ecosystem.
- Design and implementation of data contracts and fine-grained access mechanisms to enable data consumption on a "need to know" basis.

You will be part of the Data Mesh & AI Tooling team, a newly established function within Group Chief Technology Office. We provide solutions to help the firm embrace Artificial Intelligence and Machine Learning. We work with the divisions and functions of the firm to provide innovative solutions that integrate with their existing platforms to provide new and enhanced capabilities. One of our current aims is to help a data scientist get a model into production in an accelerated timeframe with the appropriate controls and security. We offer a number of key capabilities: data discovery that uses AI/ML to help users find data and obtain access in a secure and controlled manner; an AI inventory that describes the models that have been built, helping users build their own use cases and validate them with Model Risk Management; a containerized model development environment for users to experiment and produce their models; and a streamlined MLOps process that helps them track their experiments and promote their models.

At UBS, we embrace flexible ways of working when the role permits. We offer different working arrangements like part-time, job-sharing, and hybrid (office and home) working. Our purpose-led culture and global infrastructure help us connect, collaborate, and work together in agile ways to meet all our business needs. UBS is the world's largest and the only truly global wealth manager. We operate through four business divisions: Global Wealth Management, Personal & Corporate Banking, Asset Management, and the Investment Bank. Our global reach and the breadth of our expertise set us apart from our competitors. We have a presence in all major financial centers in more than 50 countries. UBS is an Equal Opportunity Employer. We respect and seek to empower each individual and support the diverse cultures, perspectives, skills, and experiences within our workforce.
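A toy sketch of the metadata-capturing SDK idea described above: a decorator records dataset metadata each time a producer function runs. The registry, names, and fields are invented purely for illustration:

```python
import functools
import json
import time

REGISTRY = []  # stand-in for the data-product / AI inventory the posting mentions


def track_dataset(name: str, owner: str):
    """Hypothetical SDK decorator: record metadata whenever a producer runs."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            start = time.time()
            result = fn(*args, **kwargs)
            REGISTRY.append({
                "dataset": name,
                "owner": owner,
                "produced_by": fn.__name__,
                "runtime_s": round(time.time() - start, 3),
            })
            return result
        return inner
    return wrap


@track_dataset(name="trades_daily", owner="markets-data")
def build_trades_daily():
    return [{"trade_id": 1, "notional": 1_000_000}]


build_trades_daily()
print(json.dumps(REGISTRY, indent=2))
```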
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Ahmedabad, Gujarat
On-site
You should have at least 5 years of experience working as a Data Engineer. Your expertise should include a strong background in Azure Cloud services and proficiency in tools such as Azure Databricks, PySpark, and Delta Lake. It is essential to have solid experience in Python and FastAPI for API development, as well as familiarity with Azure Functions for serverless API deployments. Experience managing ETL pipelines using Apache Airflow is also required. Hands-on experience with databases like PostgreSQL and MongoDB is necessary. Strong SQL skills and the ability to work with large datasets are key for this role.
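For illustration of the FastAPI requirement, a minimal API sketch; the dataset contents are hard-coded stand-ins for data that would really come from the lakehouse:

```python
from fastapi import FastAPI  # pip install fastapi uvicorn

app = FastAPI(title="Dataset API")

# Hard-coded rows stand in for data served from the lakehouse
ROWS = {"orders": [{"order_id": 1, "total": 19.99}]}


@app.get("/datasets/{name}")
def read_dataset(name: str, limit: int = 10):
    """Return up to `limit` records from a named dataset."""
    return {"name": name, "rows": ROWS.get(name, [])[:limit]}

# Run locally with: uvicorn app:app --reload
```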
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Andhra Pradesh
On-site
As a Senior Data Engineer, you will be responsible for designing, implementing, and maintaining scalable data pipelines for our organization. Your primary location of work will be in Visakhapatnam, Andhra Pradesh. This is a permanent position with a compensation package that aligns with industry standards. Your key responsibilities will include developing efficient data processing solutions, optimizing data workflows, and ensuring the reliability and integrity of our data infrastructure. You will collaborate with cross-functional teams to understand data requirements and contribute to the overall data strategy of the company. The ideal candidate for this role will have a strong background in data engineering, experience with big data technologies, and a proven track record of delivering high-quality data solutions. Additionally, strong analytical skills, attention to detail, and the ability to work in a fast-paced environment are essential for success in this position. If you are passionate about leveraging data to drive business insights and are looking for a challenging opportunity to make a significant impact, we encourage you to apply for this Senior Data Engineer position.
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
Karnataka
On-site
A career at HARMAN Technology Services (HTS) offers you the opportunity to be part of a global, multi-disciplinary team dedicated to leveraging the power of technology to drive innovation and shape the future. At HARMAN HTS, you will tackle challenges by creating cutting-edge solutions that combine the physical and digital realms, making technology a dynamic force for problem-solving and meeting the needs of humanity. You will work at the forefront of cross-channel UX, cloud technologies, insightful data, IoT, and mobility, empowering companies to develop new digital business models, enter new markets, and enhance customer experiences.

As a Data Engineer - Microsoft Fabric at HARMAN, you will be responsible for developing and implementing data engineering projects, including enterprise data hubs, big data platforms, data lakehouses, and more. Your role will involve creating data pipelines to streamline data science projects, designing and implementing data architecture solutions, collaborating with stakeholders to identify data requirements, and ensuring effective data integration, transformation, and loading across various platforms and systems. You will also play a key role in developing and implementing data governance policies, evaluating new technologies to enhance data management processes, and ensuring compliance with regulatory standards for data security.

To excel in this role, you should have the ability to evaluate, design, and develop ETL jobs and data integration approaches, along with cloud-native data platform experience in AWS or the Microsoft stack. You should stay updated on the latest data trends, possess robust knowledge of ETL, data transformation, and data standardization approaches, and be able to lead and guide a team of data engineers effectively. Additionally, you should have experience working on data and analytics solutions, a strong educational background in computer science or related fields, and a proven track record in creating data and analytics solutions.

At HARMAN, we offer access to employee discounts on a range of world-class products, professional development opportunities through HARMAN University, and an inclusive work environment that fosters personal and professional growth. We believe in creating a supportive culture where every employee is valued, empowered, and encouraged to share their ideas and unique perspectives. Join us at HARMAN and be part of a team that is committed to innovation, excellence, and making a lasting impact in the world of technology.
Posted 1 week ago
The data engineer job market in India is rapidly growing as organizations across various industries are increasingly relying on data-driven insights to make informed decisions. Data engineers play a crucial role in designing, building, and maintaining data pipelines to ensure that data is accessible, reliable, and secure for analysis.
The average salary range for data engineer professionals in India varies based on experience and location. Entry-level data engineers can expect to earn anywhere between INR 4-6 lakhs per annum, while experienced professionals with 5+ years of experience can earn upwards of INR 15 lakhs per annum.
The typical career progression for a data engineer in India may include roles such as Junior Data Engineer, Data Engineer, Senior Data Engineer, Lead Data Engineer, and eventually Chief Data Engineer. As professionals gain more experience and expertise in handling complex data infrastructure, they may move into management roles such as Data Engineering Manager.
In addition to strong technical skills in data engineering, professionals in this field are often expected to have knowledge of programming languages such as Python, SQL, and Java. Familiarity with cloud platforms like AWS, GCP, or Azure, as well as proficiency in data warehousing technologies, is also beneficial for data engineers.
As you explore data engineer jobs in India, remember to showcase your technical skills, problem-solving abilities, and experience in handling large-scale data projects during interviews. Stay updated with the latest trends in data engineering and continuously upskill to stand out in this competitive job market. Prepare thoroughly, apply confidently, and seize the opportunities that come your way!