6.0 - 11.0 years
12 - 22 Lacs
Chennai
Hybrid
Greetings from Getronics! We have permanent opportunities for GCP Data Engineers at our Chennai location. Hope you are doing well! This is Jogeshwari from the Getronics Talent Acquisition team. Please find the company profile and job description below. If interested, please share your updated resume, a recent professional photograph, and Aadhaar proof at the earliest to jogeshwari.k@getronics.com.

Company: Getronics (permanent role)
Client: Automobile industry
Experience required: 6+ years in IT, with a minimum of 4+ years in GCP data engineering
Location: Chennai

Skills required:
- GCP Data Engineer with Hadoop, Spark/PySpark, and Google Cloud Platform (GCP) services: BigQuery, Dataflow, Pub/Sub, Bigtable, Data Fusion, Dataproc, Cloud Composer, Cloud SQL, Compute Engine, Cloud Functions, and App Engine.
- 6+ years of professional experience in data engineering, data product development, and software product launches.
- 4+ years of cloud data engineering experience building scalable, reliable, and cost-effective production batch and streaming data pipelines using:
  - Data warehouses such as Google BigQuery.
  - Workflow orchestration tools such as Airflow.
  - Relational database management systems such as MySQL, PostgreSQL, and SQL Server.
  - Real-time data streaming platforms such as Apache Kafka and GCP Pub/Sub.

Looking for candidates with immediate to 30 days' notice only.

Regards,
Jogeshwari
Senior Specialist
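As a minimal, hedged sketch of the kind of streaming pipeline this posting describes (Pub/Sub into BigQuery via Apache Beam on Dataflow); the project, topic, and table names are illustrative assumptions, and the target table is assumed to exist:

```python
# Illustrative sketch only: Pub/Sub -> parse -> BigQuery with Apache Beam.
# Project, topic, and table names are hypothetical placeholders.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(streaming=True)  # pass --runner=DataflowRunner to run on Dataflow

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadEvents" >> beam.io.ReadFromPubSub(topic="projects/my-project/topics/events")
        | "Parse" >> beam.Map(lambda raw: json.loads(raw.decode("utf-8")))
        | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
            "my-project:analytics.events",  # assumed pre-created table
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )
```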
Posted 1 week ago
14.0 - 16.0 years
25 - 32 Lacs
Hyderabad, Bengaluru, Delhi / NCR
Hybrid
Job Title: Business Intelligence Engineering Lead
Locations open: Hyderabad, Chennai, Bangalore, Delhi/NCR

Position Description:
The Data Engineering team is looking for an experienced Business Intelligence Engineering Lead. The successful candidate will contribute to all aspects of the organization's data management, business intelligence, and analytics capabilities, including leading end-to-end Power BI reporting and dashboarding initiatives.

Roles and Responsibilities:
- Create and design policies for effective data management, reporting, and analytics across the organization.
- Formulate techniques for quality data collection to ensure the adequacy, accuracy, and legitimacy of data.
- Devise and implement efficient, secure, and scalable procedures for data handling, modelling, and visualization, with attention to all technical and business aspects.
- Establish rules and procedures for data sharing, visualization, and usage across departments and with external stakeholders.
- Understand the client's current insurance data business model and propose operational and application improvements.
- Analyse user needs and software requirements to determine the feasibility of designs within time and cost constraints.
- Create data models and designs to meet specific business and reporting needs, using Power BI and data warehouse best practices.
- Design and implement Power BI dashboards and reports that deliver actionable insights and business metrics across teams.
- Create source-to-target mappings and ETL designs for integrating new or modified data streams into the data warehouse.
- Develop SSIS packages and Power BI Dataflows to load producer/agent files and populate the staging and target data warehouse schemas.
- Monitor and analyse information and data systems, evaluate reporting performance, and identify enhancements (new technologies, upgrades, automation).
- Ensure databases, data models, and dashboards are secured and properly governed to prevent data breaches and losses.
- Troubleshoot data-related issues, report performance metrics, and authorize system maintenance and enhancements.

Skills Required:
- Strong experience with Power BI Desktop, Power BI Service, Power Query, DAX, and Dataflows.
- Ability to develop and optimize interactive dashboards, KPIs, paginated reports, and automated refreshes.
- Solid understanding of data storytelling and visualization best practices.
- Excellent SQL and T-SQL/PL-SQL skills; strong background in data modelling, warehousing (star/snowflake schemas), and ETL pipelines.
- Experience with the Microsoft BI stack: SSIS, SSRS, SSAS, and Power Pivot.
- Experience navigating large, complex, matrixed organizations.
- Knowledge of Big Data technologies and Azure data services is a plus.
- High-level analytical and critical thinking skills.
- Strong communication skills and the ability to work collaboratively with cross-functional teams.

Experience Required:
- BSc/BA in Computer Science, Engineering, or a related field.
- 14-16 years of experience in data engineering and business intelligence.
- Minimum 5-6 years of experience with the Microsoft BI stack (SSIS, SSRS, SSAS).
- At least 3-4 years of hands-on experience with Power BI and DAX.
- Strong track record of delivering high-performance reports and data solutions.
- Experience with data governance, quality, and security standards.

Share resume at standon@vbeyondapac.com
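As a small, hedged illustration of the star-schema work this role centres on: the kind of fact-to-dimension query a Power BI dataset might sit on, pulled into Python for validation. The server, database, table, and column names are hypothetical assumptions:

```python
# Illustrative sketch only: query a hypothetical star schema (fact + dimensions)
# from SQL Server via pyodbc; names below are placeholders, not a real schema.
import pandas as pd
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=dw-server;"
    "DATABASE=InsuranceDW;Trusted_Connection=yes;"  # hypothetical DSN details
)

kpi_sql = """
SELECT d.calendar_year,
       p.product_line,
       SUM(f.written_premium) AS total_written_premium
FROM   fact_policy f
JOIN   dim_date    d ON f.date_key    = d.date_key
JOIN   dim_product p ON f.product_key = p.product_key
GROUP BY d.calendar_year, p.product_line;
"""

# Pull the KPI slice for reconciliation against the Power BI report.
kpi = pd.read_sql(kpi_sql, conn)
print(kpi.head())
```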
Posted 1 week ago
3.0 - 8.0 years
5 - 15 Lacs
Bengaluru
Work from Office
Utilizes software engineering principles to deploy and maintain fully automated data transformation pipelines that combine a large variety of storage and computation technologies to handle a distribution of data types and volumes in support of data architecture design.

Key Responsibilities:
A Data Engineer designs data products and data pipelines that are resilient to change, modular, flexible, scalable, reusable, and cost-effective.
- Design, develop, and maintain data pipelines and ETL processes using Microsoft Azure services (e.g., Azure Data Factory, Azure Synapse, Azure Databricks, Microsoft Fabric).
- Utilize Azure data storage accounts for organizing and maintaining data pipeline outputs (e.g., Azure Data Lake Storage Gen2 and Azure Blob Storage).
- Collaborate with data scientists, data analysts, data architects, and other stakeholders to understand data requirements and deliver high-quality data solutions.
- Optimize data pipelines in the Azure environment for performance, scalability, and reliability.
- Ensure data quality and integrity through data validation techniques and frameworks.
- Develop and maintain documentation for data processes, configurations, and best practices.
- Monitor and troubleshoot data pipeline issues to ensure timely resolution.
- Stay current with industry trends and emerging technologies to ensure our data solutions remain cutting-edge.
- Manage the CI/CD process for deploying and maintaining data solutions.
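As a minimal sketch of a pipeline step in this stack: a Databricks-style PySpark job reading raw data from ADLS Gen2, transforming it, and writing curated output. Storage account, container, and column names are assumptions, and `spark` is the session Databricks provides:

```python
# Illustrative sketch only: raw -> curated step on Azure Databricks.
# Paths and columns are hypothetical; `spark` is the built-in Databricks session.
from pyspark.sql import functions as F

raw = spark.read.json("abfss://raw@mydatalake.dfs.core.windows.net/events/")

curated = (
    raw.filter(F.col("event_type").isNotNull())          # basic data validation
       .withColumn("event_date", F.to_date("event_ts"))  # derive partition column
)

(curated.write
        .mode("overwrite")
        .partitionBy("event_date")
        .parquet("abfss://curated@mydatalake.dfs.core.windows.net/events/"))
```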
Posted 1 week ago
8.0 - 11.0 years
20 - 35 Lacs
Bengaluru
Work from Office
• 8+ years of experience in designing and developing enterprise data solutions.
• 3+ years of hands-on experience with Snowflake.
• 3+ years of experience in Python development.
• Strong expertise in SQL and Python for data processing and transformation.
• Experience with Spark, Scala, and Python in production environments.
• Hands-on experience with data orchestration tools (e.g., Airflow, Informatica, Automic).
• Knowledge of metadata management and data lineage.
• Strong problem-solving skills with an ability to work in a fast-paced, agile environment.
• Excellent communication and collaboration skills.
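As a minimal, hedged sketch of how this stack fits together (Airflow orchestrating a Snowflake transformation from Python); the connection details, SQL, and DAG name are assumptions, and Airflow 2.4+ is assumed for the `schedule` argument:

```python
# Illustrative sketch only: a daily Airflow task running a Snowflake step.
# Account, credentials, table names, and SQL are hypothetical placeholders.
from datetime import datetime

import snowflake.connector
from airflow import DAG
from airflow.operators.python import PythonOperator


def load_daily_orders():
    conn = snowflake.connector.connect(
        account="my_account", user="etl_user", password="***",  # hypothetical
        warehouse="ETL_WH", database="ANALYTICS", schema="PUBLIC",
    )
    try:
        conn.cursor().execute(
            "INSERT INTO orders_curated "
            "SELECT * FROM orders_raw WHERE load_date = CURRENT_DATE()"
        )
    finally:
        conn.close()


with DAG("daily_orders", start_date=datetime(2024, 1, 1),
         schedule="@daily", catchup=False) as dag:
    PythonOperator(task_id="load_daily_orders", python_callable=load_daily_orders)
```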
Posted 1 week ago
7.0 - 12.0 years
22 - 37 Lacs
Bengaluru, Mumbai (All Areas)
Hybrid
Hiring: Data Engineering Senior Software Engineer / Tech Lead / Senior Tech Lead
Locations: Mumbai & Bengaluru - Hybrid (3 days from office) | Shift: 2 PM to 11 PM IST
Experience: 5 to 12+ years (based on role and grade)

Open Grades/Roles:
- Senior Software Engineer: 5-8 years
- Tech Lead: 7-10 years
- Senior Tech Lead: 10-12+ years

Job Description - Data Engineering Team

Core Responsibilities (common to all levels):
- Design, build, and optimize ETL/ELT pipelines using tools like Pentaho, Talend, or similar
- Work on traditional databases (PostgreSQL, MSSQL, Oracle) and MPP/modern systems (Vertica, Redshift, BigQuery, MongoDB)
- Collaborate cross-functionally with BI, Finance, Sales, and Marketing teams to define data needs
- Participate in data modeling (ER/DW/star schema), data quality checks, and data integration
- Implement solutions involving messaging systems (Kafka), REST APIs, and scheduler tools (Airflow, Autosys, Control-M)
- Ensure code versioning and documentation standards are followed (Git/Bitbucket)

Additional Responsibilities by Grade:

Senior Software Engineer (5-8 years):
- Focus on hands-on development of ETL pipelines, data models, and data inventory
- Assist in architecture discussions and POCs
- Good to have: Tableau/Cognos, Python/Perl scripting, GCP exposure

Tech Lead (7-10 years):
- Lead mid-sized data projects and small teams
- Decide on ETL strategy (push down/push up) and performance tuning
- Strong working knowledge of orchestration tools, resource management, and agile delivery

Senior Tech Lead (10-12+ years):
- Drive data architecture, infrastructure decisions, and internal framework enhancements
- Oversee large-scale data ingestion, profiling, and reconciliation across systems
- Mentor junior leads and own stakeholder delivery end-to-end
- Advantageous: experience with AdTech/marketing data and the Hadoop ecosystem (Hive, Spark, Sqoop)

Must-Have Skills (all levels):
- ETL tools: Pentaho / Talend / SSIS / Informatica
- Databases: PostgreSQL, Oracle, MSSQL, Vertica / Redshift / BigQuery
- Orchestration: Airflow / Autosys / Control-M / JAMS
- Modeling: dimensional modeling, ER diagrams
- Scripting: Python or Perl (preferred)
- Agile environment, Git-based version control
- Strong communication and documentation
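As a small, hedged sketch of the messaging piece of this stack (Kafka feeding a downstream ETL step), assuming the kafka-python client; the topic, broker address, and record fields are hypothetical:

```python
# Illustrative sketch only: consume JSON events from Kafka for staging/ETL.
# Topic, brokers, group id, and payload fields are hypothetical placeholders.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "orders",
    bootstrap_servers=["broker1:9092"],
    group_id="etl-loader",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    record = message.value
    # hand the record off to a staging load / transformation step here
    print(record["order_id"], record["amount"])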
Posted 1 week ago
7.0 - 12.0 years
9 - 14 Lacs
Bengaluru
Work from Office
Location: Bangalore/Hyderabad/Pune
Experience level: 7+ years

About the Role:
We are seeking a highly skilled Snowflake Developer to join our team in Bangalore. The ideal candidate will have extensive experience in designing, implementing, and managing Snowflake-based data solutions. This role involves developing data architectures and ensuring the effective use of Snowflake to drive business insights and innovation.

Key Responsibilities:
- Design and implement scalable, efficient, and secure Snowflake solutions to meet business requirements.
- Develop data architecture frameworks, standards, and principles, covering modeling, metadata, security, and reference data.
- Implement Snowflake-based data warehouses, data lakes, and data integration solutions.
- Manage data ingestion, transformation, and loading processes to ensure data quality and performance.
- Collaborate with business stakeholders and IT teams to develop data strategies and ensure alignment with business goals.
- Drive continuous improvement by leveraging the latest Snowflake features and industry trends.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, Data Science, or a related field.
- 8+ years of experience in data architecture, data engineering, or a related field.
- Extensive experience with Snowflake, including designing and implementing Snowflake-based solutions.
- Must have exposure to working with Airflow.
- Proven track record of contributing to data projects and working in complex environments.
- Familiarity with cloud platforms (e.g., AWS, GCP) and their data services.
- Snowflake certification (e.g., SnowPro Core, SnowPro Advanced) is a plus.
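As a minimal, hedged sketch of a Snowflake ingestion step of the sort this role owns: loading staged files with COPY INTO via the Python connector. Account, stage, table, and file-format details are assumptions:

```python
# Illustrative sketch only: bulk-load staged CSV files into a Snowflake table.
# Account, credentials, stage, and table names are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="developer", password="***",  # hypothetical
    warehouse="LOAD_WH", database="RAW", schema="SALES",
)

conn.cursor().execute("""
    COPY INTO sales_raw
    FROM @sales_stage/daily/
    FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
    ON_ERROR = 'CONTINUE'   -- skip bad rows; review load history afterwards
""")
conn.close()
```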
Posted 1 week ago
8.0 - 10.0 years
10 - 12 Lacs
Bengaluru
Work from Office
Location: Bangalore/Hyderabad/Pune
Experience level: 8+ years

About the Role:
We are looking for a technical, hands-on Lead Data Engineer to help drive the modernization of our data transformation workflows. We currently rely on legacy SQL scripts orchestrated via Airflow, and we are transitioning to a modular, scalable, CI/CD-driven DBT-based data platform. The ideal candidate has deep experience with DBT and modern data stack design, and has previously led similar migrations, improving code quality, lineage visibility, performance, and engineering best practices.

Key Responsibilities:
- Lead the migration of legacy SQL-based ETL logic to DBT-based transformations
- Design and implement a scalable, modular DBT architecture (models, macros, packages)
- Audit and refactor legacy SQL for clarity, efficiency, and modularity
- Improve CI/CD pipelines for DBT: automated testing, deployment, and code quality enforcement
- Collaborate with data analysts, platform engineers, and business stakeholders to understand current gaps and define future data pipelines
- Own the Airflow orchestration redesign where needed (e.g., DBT Cloud/API hooks or airflow-dbt integration)
- Define and enforce coding standards, review processes, and documentation practices
- Coach junior data engineers on DBT and SQL best practices
- Improve lineage and impact analysis using DBT's built-in tools and metadata

Must-Have Qualifications:
- 8+ years of experience in data engineering
- Proven success in migrating legacy SQL to DBT, with visible results
- Deep understanding of DBT best practices, including model layering, Jinja templating, testing, and packages
- Proficiency in SQL performance tuning, modular SQL design, and query optimization
- Experience with Airflow (Composer, MWAA), including DAG refactoring and task orchestration
- Hands-on experience with modern data stacks (e.g., Snowflake, BigQuery)
- Familiarity with data testing and CI/CD for analytics workflows
- Strong communication and leadership skills; comfortable working cross-functionally

Nice-to-Have:
- Experience with DBT Cloud or DBT Core integrations with Airflow
- Familiarity with data governance and lineage tools (e.g., dbt docs, Alation)
- Exposure to Python (for custom Airflow operators, macros, or utilities)
- Previous experience mentoring teams through modern data stack transitions
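As a small, hedged sketch of wiring DBT into orchestration as this posting describes: invoking a tagged selection of models programmatically. This assumes dbt-core 1.5+ (which provides `dbtRunner`); the selector and error handling are illustrative:

```python
# Illustrative sketch only: run a tagged slice of a DBT project from Python,
# e.g. inside a custom Airflow task. Assumes dbt-core >= 1.5; the tag is hypothetical.
from dbt.cli.main import dbtRunner

runner = dbtRunner()

# Equivalent to the CLI: dbt build --select tag:nightly
result = runner.invoke(["build", "--select", "tag:nightly"])

if not result.success:
    # Fail the orchestrating task so retries/alerts fire.
    raise RuntimeError("DBT build failed; see logs for failing models/tests")
```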
Posted 1 week ago
5.0 - 7.0 years
7 - 9 Lacs
Bengaluru
Work from Office
Sr. Python Developer
Experience: 5+ years
Location: Bangalore/Hyderabad

Job Overview:
We are seeking an experienced and highly skilled Senior Data Engineer to join our team. This role requires a combination of software development and data engineering expertise. The ideal candidate will have advanced knowledge of Python and SQL, a solid understanding of API creation (specifically REST APIs and FastAPI), and experience in building reusable and configurable frameworks.

Key Responsibilities:
- Develop APIs & microservices: Design, build, and maintain scalable, high-performance REST APIs using FastAPI and other frameworks.
- Data engineering: Work on data pipelines, ETL processes, and data processing for robust data solutions.
- System architecture: Collaborate on the design and implementation of configurable and reusable frameworks to streamline processes.
- Collaborate with cross-functional teams: Work closely with software engineers, data scientists, and DevOps teams to build end-to-end solutions that cater to both application and data needs.
- Slack app development: Design and implement Slack integrations and custom apps as required for team productivity and automation.
- Code quality: Ensure high-quality coding standards through rigorous testing, code reviews, and maintainable code.
- SQL expertise: Write efficient, optimized SQL queries for data storage, retrieval, and analysis.
- Microservices architecture: Build and manage microservices that are modular, scalable, and decoupled.

Required Skills & Experience:
- Programming languages: Expert in Python, with solid experience building APIs and microservices.
- Web frameworks & APIs: Strong hands-on experience designing RESTful APIs with FastAPI (Flask optional).
- Data engineering expertise: Strong knowledge of SQL, relational databases, and ETL processes. Experience with cloud-based data solutions is a plus.
- API & microservices architecture: Proven ability to design, develop, and deploy APIs and microservices architectures.
- Slack app development: Experience integrating Slack apps or creating custom Slack workflows.
- Reusable framework development: Ability to design modular, configurable frameworks that can be reused across teams and systems.
- Excellent problem-solving skills: Ability to break down complex problems and deliver practical solutions.
- Software development experience: Strong software engineering fundamentals, including version control, debugging, and deployment best practices.

Why Join Us?
- Growth opportunities: You'll work with cutting-edge technologies and continuously improve your technical skills.
- Collaborative culture: A dynamic and inclusive team where your ideas and contributions are valued.
- Competitive compensation: We offer a competitive salary, comprehensive benefits, and a flexible work environment.
- Innovative projects: Be part of projects that have real-world impact and help shape the future of data and software development.

If you're passionate about working on both data and software engineering, and enjoy building scalable and efficient systems, apply today and help us innovate!
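As a minimal sketch of the FastAPI work this role names: a typed REST endpoint backed by an in-memory store. The service name, route, and data are illustrative assumptions:

```python
# Illustrative sketch only: a small typed FastAPI endpoint.
# Service name, routes, and the in-memory store are hypothetical.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="metrics-service")


class Metric(BaseModel):
    name: str
    value: float


FAKE_STORE = {"daily_active_users": 1234.0}  # placeholder for a real backend


@app.get("/metrics/{name}", response_model=Metric)
def get_metric(name: str) -> Metric:
    if name not in FAKE_STORE:
        raise HTTPException(status_code=404, detail="metric not found")
    return Metric(name=name, value=FAKE_STORE[name])

# Run locally with: uvicorn app:app --reload
```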
Posted 1 week ago
4.0 - 6.0 years
6 - 8 Lacs
Bengaluru
Work from Office
- Bachelor's or Master's degree in Computer Science, Information Technology, Data Science, or a related field.
- Minimum 4 years of relevant experience.
- Proficient in Python, with hands-on experience building ETL pipelines for data extraction, transformation, and validation.
- Strong SQL skills for working with structured data.
- Familiar with Grafana or Kibana for data visualization and monitoring dashboards.
- Experience with databases such as MongoDB, Elasticsearch, and MySQL.
- Comfortable working in Linux environments using common Unix tools.
- Hands-on experience with Git, Docker, and virtual machines.
Posted 1 week ago
4.0 - 6.0 years
6 - 8 Lacs
Bengaluru
Work from Office
About the Role:
We are seeking a skilled and detail-oriented Data Migration Specialist with hands-on experience in Alteryx and Snowflake. The ideal candidate will be responsible for analyzing existing Alteryx workflows, documenting their logic and data transformation steps, and converting them into optimized, scalable SQL queries and processes in Snowflake. Solid SQL expertise and a strong understanding of data warehousing concepts are essential. This role plays a critical part in our cloud modernization and data platform transformation initiatives.

Key Responsibilities:
- Analyze and interpret complex Alteryx workflows to identify data sources, transformations, joins, filters, aggregations, and output steps.
- Document the logical flow of each Alteryx workflow, including inputs, business logic, and outputs.
- Translate Alteryx logic into equivalent SQL scripts optimized for Snowflake, ensuring accuracy and performance.
- Write advanced SQL queries and stored procedures, and use Snowflake-specific features such as Streams, Tasks, Cloning, Time Travel, and Zero-Copy Cloning.
- Implement data ingestion strategies using Snowpipe, stages, and external tables.
- Optimize Snowflake performance through query tuning, partitioning, clustering, and caching strategies.
- Collaborate with data analysts, engineers, and stakeholders to validate transformed logic against expected results.
- Handle data cleansing, enrichment, aggregation, and business logic implementation within Snowflake.
- Suggest improvement and automation opportunities during migration.
- Conduct unit testing and support UAT (User Acceptance Testing) for migrated workflows.
- Maintain version control, documentation, and an audit trail for all converted workflows.

Required Skills:
- Bachelor's or Master's degree in Computer Science, Information Technology, Data Science, or a related field.
- At least 4 years of hands-on experience designing and developing scalable data solutions on the Snowflake Data Cloud platform, including designing and implementing Snowflake-based solutions.
- 1+ years of experience with Alteryx Designer, including advanced workflow development and debugging.
- Strong proficiency in SQL, with 3+ years specifically working with Snowflake or other cloud data warehouses.
- Python programming experience focused on data engineering.
- Experience with data APIs and batch/stream processing.
- Solid understanding of data transformation logic: joins, unions, filters, formulas, aggregations, pivots, and transpositions.
- Experience in performance tuning and optimization of SQL queries in Snowflake.
- Familiarity with Snowflake features such as CTEs, window functions, Tasks, Streams, stages, and external tables.
- Exposure to migration or modernization projects from ETL tools (such as Alteryx or Informatica) to SQL-based cloud platforms.
- Strong documentation skills and attention to detail.
- Experience working in Agile/Scrum development environments.
- Good communication and collaboration skills.
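As a hedged sketch of the Snowflake-native features named above (Streams and Tasks): a stream tracking changes on a raw table and a scheduled task consuming them. Object names, the schedule, and the apply logic are illustrative assumptions:

```python
# Illustrative sketch only: create a change stream and a scheduled task in
# Snowflake from Python. Names, schedule, and SQL are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="migration_user", password="***",  # hypothetical
    warehouse="XFORM_WH", database="ANALYTICS", schema="STAGING",
)
cur = conn.cursor()

# Track inserts/updates/deletes on the raw table.
cur.execute("CREATE OR REPLACE STREAM orders_changes ON TABLE orders_raw")

# Apply new rows every 15 minutes, but only when the stream has data.
cur.execute("""
    CREATE OR REPLACE TASK apply_order_changes
      WAREHOUSE = XFORM_WH
      SCHEDULE = '15 MINUTE'
    WHEN SYSTEM$STREAM_HAS_DATA('ORDERS_CHANGES')
    AS
      INSERT INTO orders_curated
      SELECT * FROM orders_changes WHERE METADATA$ACTION = 'INSERT'
""")

cur.execute("ALTER TASK apply_order_changes RESUME")  # tasks are created suspended
conn.close()
```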
Posted 1 week ago
7.0 - 12.0 years
9 - 14 Lacs
Bengaluru
Work from Office
Lead Python Developer
Experience: 7+ years
Location: Bangalore/Hyderabad

Job Overview:
We are seeking an experienced and highly skilled Senior Data Engineer to join our team. This role requires a combination of software development and data engineering expertise. The ideal candidate will have advanced knowledge of Python and SQL, a solid understanding of API creation (specifically REST APIs and FastAPI), and experience in building reusable and configurable frameworks.

Key Responsibilities:
- Develop APIs & microservices: Design, build, and maintain scalable, high-performance REST APIs using FastAPI and other frameworks.
- Data engineering: Work on data pipelines, ETL processes, and data processing for robust data solutions.
- System architecture: Collaborate on the design and implementation of configurable and reusable frameworks to streamline processes.
- Collaborate with cross-functional teams: Work closely with software engineers, data scientists, and DevOps teams to build end-to-end solutions that cater to both application and data needs.
- Slack app development: Design and implement Slack integrations and custom apps as required for team productivity and automation.
- Code quality: Ensure high-quality coding standards through rigorous testing, code reviews, and maintainable code.
- SQL expertise: Write efficient, optimized SQL queries for data storage, retrieval, and analysis.
- Microservices architecture: Build and manage microservices that are modular, scalable, and decoupled.

Required Skills & Experience:
- Programming languages: Expert in Python, with solid experience building APIs and microservices.
- Web frameworks & APIs: Strong hands-on experience designing RESTful APIs with FastAPI (Flask optional).
- Data engineering expertise: Strong knowledge of SQL, relational databases, and ETL processes. Experience with cloud-based data solutions is a plus.
- API & microservices architecture: Proven ability to design, develop, and deploy APIs and microservices architectures.
- Slack app development: Experience integrating Slack apps or creating custom Slack workflows.
- Reusable framework development: Ability to design modular, configurable frameworks that can be reused across teams and systems.
- Excellent problem-solving skills: Ability to break down complex problems and deliver practical solutions.
- Software development experience: Strong software engineering fundamentals, including version control, debugging, and deployment best practices.

Why Join Us?
- Growth opportunities: You'll work with cutting-edge technologies and continuously improve your technical skills.
- Collaborative culture: A dynamic and inclusive team where your ideas and contributions are valued.
- Competitive compensation: We offer a competitive salary, comprehensive benefits, and a flexible work environment.
- Innovative projects: Be part of projects that have real-world impact and help shape the future of data and software development.

If you're passionate about working on both data and software engineering, and enjoy building scalable and efficient systems, apply today and help us innovate!
Posted 1 week ago
7.0 - 11.0 years
30 - 35 Lacs
Bengaluru
Hybrid
Lead Data Engineer

We're hiring: Lead Data Engineer | Bangalore | 7-11 years of experience
Location: Bangalore (hybrid)
Position type: Permanent
Mode of interview: Face to face
Experience: 7-11 years
Skills: Snowflake, ETL tools (Informatica/BODS/DataStage), scripting (Python/PowerShell/shell), SQL, data warehousing

Candidates available for a face-to-face discussion can apply. Interested? Send your updated CV to radhika@theedgepartnership.com, or connect with me on LinkedIn: https://www.linkedin.com/in/radhika-gm-00b20a254/

Skills and Qualifications (Functional and Technical)

Functional skills:
- Team player: Support peers, team, and department management.
- Communication: Excellent verbal, written, and interpersonal communication skills.
- Problem solving: Excellent problem-solving skills, incident management, root cause analysis, and proactive solutions to improve quality.
- Partnership and collaboration: Develop and maintain partnerships with business and IT stakeholders.
- Attention to detail: Ensure accuracy and thoroughness in all tasks.

Technical/business skills:
- Data engineering: Experience in designing and building data warehouses and data lakes; good knowledge of data warehouse principles and concepts; technical expertise in large-scale data warehousing applications and databases such as Oracle, Netezza, Teradata, and SQL Server; experience with public cloud-based data platforms, especially Snowflake and AWS.
- Data integration: Expertise in designing and developing complex data pipelines using industry-leading ETL tools such as SAP BusinessObjects Data Services (BODS), Informatica Intelligent Cloud Services (IICS), or IBM DataStage; experience with ELT tools such as DBT, Fivetran, and AWS Glue.
- SQL and scripting: Expert in SQL, with development experience in at least one scripting language (e.g., Python); adept at tracing and resolving data integrity issues.
- Architecture: Strong knowledge of data architecture, data design patterns, modeling, and cloud data solutions (Snowflake, AWS Redshift, Google BigQuery).
- Data modeling: Expertise in logical and physical data models using relational or dimensional modeling practices, and in high-volume ETL/ELT processes.
- Performance: Tuning of data pipelines and database objects to deliver optimal performance.
- DevOps: Experience with GitLab version control and CI/CD processes.
- Experience working in the financial industry is a plus.
Posted 1 week ago
11.0 - 14.0 years
13 - 16 Lacs
Hyderabad
Work from Office
We are looking for a skilled Senior Azure/Fabric Data Engineer with 11-14 years of experience to join our team at Apps Associates (I) Pvt. Ltd (IT Services & Consulting).

Roles and Responsibilities:
- Design and implement scalable data pipelines using Azure and Fabric.
- Develop and maintain large-scale data architectures to support business intelligence and analytics.
- Collaborate with cross-functional teams to identify and prioritize project requirements.
- Ensure data quality and integrity by implementing robust testing and validation procedures.
- Optimize data storage and retrieval processes for improved performance and efficiency.
- Provide technical guidance and mentorship to junior team members.

Job Requirements:
- Strong understanding of data engineering principles and practices.
- Experience with Azure and Fabric technologies is highly desirable.
- Excellent problem-solving skills and attention to detail.
- Ability to work collaboratively in a fast-paced environment.
- Strong communication and interpersonal skills.
- Experience with agile development methodologies is preferred.
Posted 1 week ago
2.0 - 5.0 years
4 - 5 Lacs
Dhule
Work from Office
The Development Lead will oversee the design, development, and delivery of advanced data solutions using Azure Databricks, SQL, and data visualization tools like Power BI. The role involves leading a team of developers, managing data pipelines, and creating insightful dashboards and reports to drive data-driven decision-making across the organization. The individual will ensure best practices are followed in data architecture, development, and reporting while maintaining alignment with business objectives.

Key Responsibilities:
- Data integration & ETL processes: Design, build, and optimize ETL pipelines to manage the flow of data from various sources into data lakes, data warehouses, and reporting platforms.
- Data visualization & reporting: Lead the development of interactive dashboards and reports using Power BI, ensuring that business users have access to actionable insights and performance metrics.
- SQL development & optimization: Write, optimize, and review complex SQL queries for data extraction, transformation, and reporting, ensuring high performance and scalability across large datasets.
- Azure cloud solutions: Implement and manage cloud-based solutions using Azure services (Azure Databricks, Azure SQL Database, Data Lake) to support business intelligence and reporting initiatives.
- Collaboration with stakeholders: Work closely with business leaders and cross-functional teams to understand reporting and analytics needs, translating them into technical requirements and actionable data solutions.
- Quality assurance & best practices: Implement and maintain development best practices, ensuring code quality, version control, and adherence to data governance standards.
- Performance monitoring & tuning: Continuously monitor the performance of data systems, reporting tools, and dashboards to ensure they meet SLAs and business requirements.
- Documentation & training: Create and maintain comprehensive documentation for all data solutions, including architecture diagrams, ETL workflows, and data models. Provide training and support to end users on Power BI reports and dashboards.

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
- Proven experience as a Development Lead or Senior Data Engineer with expertise in Azure Databricks, SQL, Power BI, and data reporting/visualization.
- Hands-on experience in Azure Databricks for large-scale data processing and analytics, including Delta Lake, Spark SQL, and integration with Azure Data Lake.
- Strong expertise in SQL for querying, data transformation, and database management.
- Proficiency in Power BI for developing advanced dashboards, data models, and reporting solutions.
- Experience in ETL design and data integration across multiple systems, with a focus on performance optimization.
- Knowledge of Azure cloud architecture, including Azure SQL Database, Data Lake, and other relevant services.
- Experience leading agile development teams, with a strong focus on delivering high-quality, scalable solutions.
- Strong problem-solving skills, with the ability to troubleshoot and resolve complex data and reporting issues.
- Excellent communication skills, with the ability to interact with both technical and non-technical stakeholders.

Preferred Qualifications:
- Knowledge of additional Azure services (e.g., Azure Synapse, Data Factory, Logic Apps) is a plus.
- Experience in Power BI for data visualization and custom calculations.
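As a minimal, hedged sketch of the Delta Lake piece this posting names: upserting a staged batch into a curated Delta table with MERGE on Databricks. Paths, the join key, and storage layout are assumptions; `spark` is the session the Databricks runtime provides:

```python
# Illustrative sketch only: Delta Lake MERGE (upsert) on Azure Databricks.
# Paths and the order_id key are hypothetical; requires the delta-spark package.
from delta.tables import DeltaTable

updates = spark.read.parquet("abfss://staging@lake.dfs.core.windows.net/orders/")

target = DeltaTable.forPath(
    spark, "abfss://curated@lake.dfs.core.windows.net/orders/"
)

(target.alias("t")
       .merge(updates.alias("s"), "t.order_id = s.order_id")
       .whenMatchedUpdateAll()      # refresh rows that already exist
       .whenNotMatchedInsertAll()   # insert brand-new rows
       .execute())
```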
Posted 1 week ago
4.0 - 6.0 years
1 - 2 Lacs
Mumbai, New Delhi, Bengaluru
Work from Office
Responsibilities:
- Design and implement scalable data pipelines to ingest, process, and analyze large volumes of structured and unstructured data from various sources.
- Develop and optimize data storage solutions, including data warehouses, data lakes, and NoSQL databases, to support efficient data retrieval and analysis.
- Implement data processing frameworks and tools such as Apache Hadoop, Spark, Kafka, and Flink to enable real-time and batch data processing.
- Collaborate with data scientists and analysts to understand data requirements and develop solutions that enable advanced analytics, machine learning, and reporting.
- Ensure data quality, integrity, and security by implementing best practices for data governance, metadata management, and data lineage.
- Monitor and troubleshoot data pipelines and infrastructure to ensure reliability, performance, and scalability.
- Develop and maintain ETL (Extract, Transform, Load) processes to integrate data from various sources and transform it into usable formats.
- Stay current with emerging technologies and trends in big data and cloud computing, and evaluate their applicability to enhance our data engineering capabilities.
- Document data architectures, pipelines, and processes to ensure clear communication and knowledge sharing across the team.

Requirements:
- Minimum 4 to maximum 6 years of relevant experience.
- Strong programming skills in Java, Python, or Scala.
- Strong understanding of data modelling, data warehousing, and ETL processes.
- Strong understanding of Big Data technologies and their architectures, including Hadoop, Spark, and NoSQL databases.

Locations: Mumbai, Delhi NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote
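As a minimal, hedged sketch of the real-time processing this posting describes: Spark Structured Streaming reading from Kafka and landing data in a lake. Broker, topic, and paths are assumptions, and the spark-sql-kafka connector package is assumed to be on the classpath:

```python
# Illustrative sketch only: Kafka -> Spark Structured Streaming -> data lake.
# Requires the spark-sql-kafka connector; broker, topic, and paths are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-stream").getOrCreate()

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker1:9092")
         .option("subscribe", "events")
         .load()
         .select(F.col("value").cast("string").alias("payload"))
)

query = (
    events.writeStream.format("parquet")
          .option("path", "s3a://datalake/events/")
          .option("checkpointLocation", "s3a://datalake/_checkpoints/events/")
          .start()
)
query.awaitTermination()
```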
Posted 1 week ago
6.0 - 8.0 years
15 - 18 Lacs
Bengaluru
Work from Office
We are seeking an experienced professional in AI and machine learning with a strong focus on large language models (LLMs) for a 9-month project. The role involves hands-on expertise in developing and deploying agentic systems and working with transformer architectures, fine-tuning, prompt engineering, and task adaptation. Responsibilities include leveraging reinforcement learning or similar methods for goal-oriented autonomous systems, deploying models using MLOps practices, and managing large datasets in production environments. The ideal candidate should excel in Python, ML libraries (Hugging Face Transformers, TensorFlow, PyTorch), data engineering principles, and cloud platforms (AWS, GCP, Azure). Strong analytical and communication skills are essential to solve complex challenges and articulate insights to stakeholders.
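As a minimal, hedged sketch of the Hugging Face Transformers tooling this role names: loading a small placeholder model and generating a completion. The model choice and prompt are assumptions, not project specifics:

```python
# Illustrative sketch only: text generation with the transformers pipeline API.
# "gpt2" is a small placeholder model, not the project's actual LLM.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Summarize the goal of an agentic data-quality monitor:"
result = generator(prompt, max_new_tokens=60, do_sample=False)

print(result[0]["generated_text"])
```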
Posted 1 week ago
3.0 - 6.0 years
25 - 30 Lacs
Chennai
Work from Office
Zalaris is looking for a Senior Data Engineer to join our dynamic team and embark on a rewarding career journey. Responsibilities include:
- Liaising with coworkers and clients to elucidate the requirements for each task.
- Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed.
- Reformulating existing frameworks to optimize their functioning.
- Testing such structures to ensure that they are fit for use.
- Preparing raw data for manipulation by data scientists.
- Detecting and correcting errors in your work.
- Ensuring that your work remains backed up and readily accessible to relevant coworkers.
- Remaining up to date with industry standards and technological advancements that will improve the quality of your outputs.
Posted 1 week ago
3.0 - 6.0 years
25 - 30 Lacs
Pune
Work from Office
Diverse Lynx is looking for a Data Engineer to join our dynamic team and embark on a rewarding career journey. Responsibilities include:
- Liaising with coworkers and clients to elucidate the requirements for each task.
- Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed.
- Reformulating existing frameworks to optimize their functioning.
- Testing such structures to ensure that they are fit for use.
- Preparing raw data for manipulation by data scientists.
- Detecting and correcting errors in your work.
- Ensuring that your work remains backed up and readily accessible to relevant coworkers.
- Remaining up to date with industry standards and technological advancements that will improve the quality of your outputs.
Posted 1 week ago
3.0 - 5.0 years
3 - 7 Lacs
Hyderabad
Work from Office
Total years of experience: 3-5 years
Relevant years of experience: 3+ years

Detailed JD (Roles and Responsibilities):

Must-have skills:
- Proficient in data engineering
- Hands-on experience in Python, Azure Data Factory, Azure Databricks (PySpark), and ETL
- Knowledge of Data Lake storage (storage containers) and MSSQL
- A quick, enthusiastic learner willing to work on new technologies as requirements demand
- Configuring and deploying using Azure DevOps pipelines
- Airflow

Good-to-have skills:
- SQL knowledge and experience working with relational databases
- Understanding of banking domain concepts
- Understanding of project lifecycles: waterfall and agile

Work experience: 3-5 years of experience in data engineering
Mandatory skills: Azure Databricks, Azure Data Factory, and Python coding skills
Posted 1 week ago
3.0 - 5.0 years
8 - 9 Lacs
Hyderabad, Pune, Chennai
Work from Office
Total years of experience: 3-5 years
Relevant years of experience: 3+ years

Detailed JD (Roles and Responsibilities):

Must-have skills:
- Proficient in data engineering
- Hands-on experience in Python, Azure Data Factory, Azure Databricks (PySpark), and ETL
- Knowledge of Data Lake storage (storage containers) and MSSQL
- A quick, enthusiastic learner willing to work on new technologies as requirements demand
- Configuring and deploying using Azure DevOps pipelines
- Airflow

Good-to-have skills:
- SQL knowledge and experience working with relational databases
- Understanding of banking domain concepts
- Understanding of project lifecycles: waterfall and agile

Work experience: 3-5 years of experience in data engineering
Posted 1 week ago
8.0 - 12.0 years
30 - 35 Lacs
Bengaluru
Work from Office
Technical Skills Required:
- ETL concepts: Strong understanding of Extract, Transform, Load (ETL) processes; ability to design, develop, and maintain robust ETL pipelines.
- Database fundamentals: Proficiency in working with relational databases (e.g., MySQL, PostgreSQL, Oracle, or MS SQL Server); knowledge of database design and optimization techniques.
- Basic data visualization: Ability to create simple dashboards or reports using visualization tools (e.g., Tableau, Power BI, or similar).
- Query optimization: Expertise in writing efficient, optimized queries to handle large datasets.
- Testing and documentation: Experience in validating data accuracy and integrity through rigorous testing; ability to document data workflows, processes, and technical specifications clearly.

Key Responsibilities:
- Data engineering tasks: Design, develop, and implement scalable data pipelines to support business needs; ensure data quality and integrity through testing and monitoring; optimize ETL processes for performance and reliability.
- Database management: Manage and maintain databases, ensuring high availability and security; troubleshoot database-related issues and optimize performance.
- Collaboration: Work closely with data analysts, data scientists, and other stakeholders to understand and deliver on data requirements; provide support for data-related technical issues and propose solutions.
- Documentation and reporting: Create and maintain comprehensive documentation for data workflows and technical processes; develop simple reports or dashboards to visualize key metrics and trends.
- Learning and adapting: Stay updated with new tools, technologies, and methodologies in data engineering; adapt quickly to new challenges and project requirements.

Additional Requirements:
- Strong communication skills, both written and verbal.
- Analytical mindset with the ability to solve complex data problems.
- Quick learner, willing to adopt new tools and technologies as needed.
- Flexibility to work in shifts, if required.

Preferred Skills (not mandatory):
- Experience with cloud platforms (e.g., AWS, Azure, or GCP).
- Familiarity with big data technologies such as Hadoop or Spark.
- Basic understanding of machine learning concepts and data science workflows.
Posted 1 week ago
5.0 - 10.0 years
10 - 15 Lacs
New Delhi, Chennai, Bengaluru
Work from Office
We are looking for an experienced Data Engineer with a strong background in data engineering, storage, and cloud technologies. The role involves designing, building, and optimizing scalable data pipelines, ETL/ELT workflows, and data models for efficient analytics and reporting.

The ideal candidate must have strong SQL expertise, including complex joins, stored procedures, and certificate-auth-based queries. Experience with NoSQL databases such as Firestore, DynamoDB, or MongoDB is required, along with proficiency in data modeling and warehousing solutions like BigQuery (preferred), Redshift, or Snowflake. The candidate should have hands-on experience with ETL/ELT pipelines using Airflow, dbt, Kafka, or Spark. Proficiency in scripting languages such as PySpark, Python, or Scala is essential. Strong hands-on experience with Google Cloud Platform (GCP) is a must.

Additionally, experience with visualization tools such as Google Looker Studio, LookerML, Power BI, or Tableau is preferred. Good-to-have skills include exposure to Master Data Management (MDM) systems and an interest in Web3 data and blockchain analytics.
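As a minimal sketch of the warehouse side of this role: running an analytical query on BigQuery from Python with the official client. The project, dataset, and column names are illustrative assumptions:

```python
# Illustrative sketch only: a simple analytical query via google-cloud-bigquery.
# Project, dataset, and table names are hypothetical placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

sql = """
    SELECT DATE(event_ts) AS day, COUNT(*) AS events
    FROM `my-project.analytics.events`
    GROUP BY day
    ORDER BY day DESC
    LIMIT 7
"""

for row in client.query(sql).result():
    print(row.day, row.events)
```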
Posted 1 week ago
6091 Jobs | Paris,France