
2846 Data Engineering Jobs - Page 25

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 - 9.0 years

0 Lacs

Navi Mumbai, Maharashtra

On-site

Arcadis is the world's leading company delivering sustainable design, engineering, and consultancy solutions for natural and built assets. We are more than 36,000 people, in over 70 countries, dedicated to improving quality of life. Everyone has an important role to play. With the power of many curious minds, together we can solve the world's most complex challenges and deliver more impact together.

Individual Accountabilities - Collaboration:
- Collaborates with domain architects in the DSS, OEA, EUS, and HaN towers and, where appropriate, the respective business stakeholders in architecting data solutions for their data service needs.
- Collaborates with the Data Engineering and Data Software Engineering teams to effectively communicate the data architecture to be implemented.
- Contributes to prototype or proof-of-concept efforts.
- Collaborates with the InfoSec organization to understand corporate security policies and how they apply to data solutions.
- Collaborates with the Legal and Data Privacy organization to understand the latest policies so they may be incorporated into every data architecture solution.
- Suggests architecture designs involving Ontologies, in partnership with the MDM team.

Technical Skills & Design:
- Significant experience working with structured and unstructured data at scale, and comfort with a variety of stores (key-value, document, columnar, etc.) as well as traditional RDBMS and data warehouses.
- Deep understanding of modern data services in leading cloud environments; able to select and assemble data services with maximum cost efficiency while meeting business requirements of speed, continuity, and data integrity.
- Creates data architecture artifacts such as architecture diagrams, data models, and design documents.
- Guides domain architects on the value of a modern data and analytics platform.
- Researches, designs, tests, and evaluates new technologies, platforms, and third-party products.
- Working experience with Azure Cloud, Data Mesh, MS Fabric, Ontologies, MDM, IoT, BI solutions, and AI would be a great asset.
- Expert troubleshooting skills and experience.

Leadership:
- Mentors aspiring data architects, typically operating in data engineering and software engineering roles.

Key Shared Accountabilities:
- Leads medium to large data services projects.
- Provides technical partnership to product owners.
- Shares stewardship, with domain architects, of the Arcadis data ecosystem.
- Actively participates in the Arcadis Tech Architect community.

Key Profile Requirements:
- Minimum of 7 years of experience in designing and implementing modern solutions as part of a variety of data ingestion and transformation pipelines.
- Minimum of 5 years of experience with best-practice design principles and approaches for a range of application styles and technologies, to help guide and steer decisions.
- Experience working in large-scale development and cloud environments.

Why Arcadis? We can only achieve our goals when everyone is empowered to be their best. We believe everyone's contribution matters. It's why we are pioneering a skills-based approach, where you can harness your unique experience and expertise to carve your career path and maximize the impact we can make together. You'll do meaningful work, and no matter what role, you'll be helping to deliver sustainable solutions for a more prosperous planet. Make your mark, on your career, your colleagues, your clients, your life and the world around you. Together, we can create a lasting legacy. Join Arcadis. Create a Legacy.
Our Commitment to Equality, Diversity, Inclusion & Belonging: We want you to be able to bring your best self to work every day, which is why we take equality and inclusion seriously and hold ourselves to account for our actions. Our ambition is to be an employer of choice and provide a great place to work for all our people.

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Punjab

On-site

Responsibilities:
- Responsible for selling one or more of the following Digital Engineering Services/SaaS offerings for the US market. Customer industries: Automotive, e-commerce, Media, Logistics, Energy, Hi-Tech, Industrial SaaS companies, and ISVs.
- Offerings include: Cloud Services/Solutions, Application Development, Full Stack Development, DevOps, Mobile App Development, SRE, ServiceNow, and workflow automation; Data Analytics, Data Engineering, Governance and Pipelining, Data Lake Development and Re-Architecture, DataOps, and MLOps; Industrial IoT.
- Prospecting: Identify and research potential clients as part of daily outreach activities.
- Outbound Calling: Reach out to prospects via phone and email to introduce our IT services and solutions and build initial interest.
- Sales Pipeline Management: Maintain accurate and up-to-date records of leads, opportunities, and client interactions in the CRM system.
- Achieve Targets: Meet or exceed monthly and quarterly lead quotas and targets.

Must-Have Qualifications:
- Education: B.Tech / B.E. in Engineering.
- Experience: 3 to 5 years of experience with a proven track record of success in inside sales; 3+ years of experience selling Digital Engineering Services to U.S. and European customers (Germany/Ireland preferred).
- Domain Knowledge: Domain knowledge in Digital Engineering Services (must have).
- Vertical Knowledge/Experience: as described in the JD.
- Excellent communication and interpersonal skills.
- Familiarity with CRM software and sales automation tools.

Job Types: Full-time, Permanent
Benefits: Food provided, Health insurance, Leave encashment, Provident Fund
Schedule: Evening shift, Fixed shift, Monday to Friday, Night shift, US shift
Experience: total work: 2 years (Required)
Work Location: In person
Speak with the employer: +91 9034340735

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Data Scientist at Amdocs in Pune, you will be responsible for the design, development, modification, debugging, and maintenance of software systems. Your role will involve hands-on work on GenAI use cases and developing Databricks jobs for data ingestion for learning. You will create partnerships with project stakeholders to provide technical assistance for important decisions and work on the development and implementation of GenAI use cases in live production as per business/user requirements.

Your technical skills should include mandatory expertise in deep learning engineering (mostly MLOps), strong NLP/LLM experience, and processing text using LLMs. You should be proficient in PySpark/Databricks and Python programming, building backend applications using Python and deep learning frameworks, and deploying models while building APIs (FastAPI, Flask). Experience working with GPUs, vector databases such as Milvus, Azure Cognitive Search, and Qdrant, as well as transformers and Hugging Face models such as Llama, Mixtral, and embedding models is essential. It would be good to have knowledge and experience in Kubernetes, Docker, cloud experience working with VMs and Azure Storage, and sound data engineering experience.

In this role, you will be challenged to design and develop new software applications, providing you with opportunities for personal growth in a growing organization. The job involves minimal travel and is located in Pune. Join Amdocs and help build the future to make it amazing by unlocking innovative potential for next-generation communication and media experiences for end users and enterprise customers.
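For illustration (not part of the posting), a minimal sketch of the embedding-based retrieval step such a GenAI role typically involves, using the sentence-transformers library; the model name and documents are placeholder assumptions:

```python
# Minimal retrieval sketch: embed documents, then find the closest match to a query.
# Assumes `pip install sentence-transformers`; the model is an illustrative choice.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedding model

docs = ["Databricks job for data ingestion", "FastAPI model-serving endpoint"]
doc_vecs = model.encode(docs, convert_to_tensor=True)

query_vec = model.encode("How do we ingest data?", convert_to_tensor=True)
scores = util.cos_sim(query_vec, doc_vecs)  # cosine similarity, shape (1, len(docs))
print(docs[int(scores.argmax())])
```

In production the document vectors would live in a vector database such as Milvus or Qdrant rather than in memory.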

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Tamil Nadu

On-site

As a data engineer, you will be expected to be proficient in Python, SQL, and either Java or Scala, especially for Spark/Beam pipelines. Experience with BigQuery, Dataflow, Apache Beam, Airflow, and Kafka will be beneficial for this role. You will be responsible for building scalable batch and streaming pipelines to support machine learning or campaign analytics. Familiarity with ad tech, bid logs, or event tracking pipelines is considered a plus.

Your primary role will involve constructing the foundational data infrastructure to handle the ingestion, processing, and serving of bid logs, user events, and attribution data from various sources. Key responsibilities include building scalable data pipelines for real-time and batch ingestion from DSPs, attribution tools, and order management systems. You will need to design clean and queryable data models to facilitate machine learning training and campaign optimization. Additionally, you will be required to enable data joins across 1st, 2nd, and 3rd-party data sets such as device, app, geo, and segment information. Optimizing pipelines for freshness, reliability, and cost efficiency is crucial, along with supporting event-level logging of auction wins, impressions, conversions, and click paths.

The ideal candidate for this role should possess skills in Apache Beam, Airflow, Kafka, Scala, SQL, BigQuery, attribution, Java, Dataflow, Spark, machine learning, and Python. If you are enthusiastic about data engineering and have a background in building scalable data pipelines, this position could be a great fit for you.
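For illustration (not part of the posting), a minimal batch pipeline sketch with Apache Beam's Python SDK, of the kind this role describes; file paths, field names, and the parsing logic are hypothetical:

```python
# Count auction wins per campaign from newline-delimited JSON bid logs.
import json
import apache_beam as beam

with beam.Pipeline() as p:
    (
        p
        | "ReadBidLogs" >> beam.io.ReadFromText("bid_logs.jsonl")
        | "Parse" >> beam.Map(json.loads)
        | "WinsOnly" >> beam.Filter(lambda e: e.get("event") == "auction_win")
        | "KeyByCampaign" >> beam.Map(lambda e: (e["campaign_id"], 1))
        | "CountWins" >> beam.CombinePerKey(sum)
        | "Format" >> beam.MapTuple(lambda campaign, wins: f"{campaign},{wins}")
        | "Write" >> beam.io.WriteToText("wins_by_campaign")
    )
```

The same pipeline can run locally or on Dataflow by switching the pipeline runner options.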

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

Pune, Maharashtra

On-site

You are a talented Data Engineer with a strong background in data engineering, and we are seeking your expertise to design, build, and maintain data pipelines using various technologies, with a focus on the Microsoft Azure cloud platform.

Your responsibilities will include designing, developing, and implementing data pipelines using Azure Data Factory (ADF) or other orchestration tools. You will write efficient SQL queries for data extraction, transformation, and loading (ETL) into Azure Synapse Analytics. Using PySpark and Python, you will handle complex data processing tasks on large datasets within Azure Databricks. Collaborating with data analysts to understand data requirements and ensure data quality is a key aspect of the role. You will also design and develop data lakes and warehouses, implement data governance practices for security and compliance, monitor and maintain data pipelines for optimal performance, and develop unit tests for data pipeline code. Working collaboratively with other engineers and data professionals in an Agile development environment is essential.

Preferred Skills & Experience:
- Good knowledge of PySpark and working knowledge of Python
- Full-stack Azure data engineering skills (Azure Data Factory, Databricks, and Synapse Analytics)
- Experience handling large datasets
- Hands-on experience designing and developing data lakes and warehouses

Job Types: Full-time, Permanent
Schedule:
- Day shift
- Monday to Friday
Application Question(s):
- When can you join? (Mention in days.)
- Are you serving your notice period? (Yes/No)
- What is your current and expected CTC?
Education:
- Bachelor's (Preferred)
Experience:
- Total work: 6 years (Required)
- Data engineering (Azure): 6 years (Required)
Location:
- Pune, Maharashtra (Required)
Work Location: In person
Only immediate joiners are preferred.
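For illustration (not part of the posting), a minimal PySpark sketch of the kind of Databricks transformation this role describes; paths and column names are hypothetical:

```python
# Read raw orders, keep completed ones, and write a curated daily-spend table.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

orders = spark.read.parquet("/raw/orders")  # illustrative path

curated = (
    orders
    .filter(F.col("status") == "COMPLETE")
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("customer_id", "order_date")
    .agg(F.sum("amount").alias("daily_spend"))
)

curated.write.mode("overwrite").parquet("/curated/daily_spend")
```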

Posted 1 week ago

Apply

16.0 - 20.0 years

0 Lacs

Karnataka

On-site

A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.

As part of our Analytics and Insights Consumption team, you'll analyze data to drive useful insights for clients to address core business issues or to drive strategic outcomes. You'll use visualization, statistical and analytics models, AI/ML techniques, ModelOps, and other techniques to develop these insights. Candidates with 16+ years of hands-on experience are required for this position.

**Required Skills:**
- 15 years of relevant experience in pharma & life sciences analytics, with knowledge of industry trends, regulations, and challenges.
- Proven track record of working within the pharma and life sciences domain, addressing industry-specific issues and leveraging domain knowledge to drive results.
- Knowledge of drug development processes, clinical trials, regulatory compliance, market access strategies, and commercial operations.
- Strong knowledge of healthcare industry trends, regulations, and challenges.
- Proficiency in data analysis and statistical modeling techniques.
- Good knowledge of statistics, data analysis, hypothesis testing, and preparation for machine learning use cases.
- Expertise in GenAI, AI/ML, and data engineering.
- Experience with machine learning frameworks and tools (e.g., scikit-learn, mlr, caret, H2O, TensorFlow, PyTorch, MLlib).
- Familiarity with programming in SQL and Python/PySpark to guide teams.
- Familiarity with visualization tools (e.g., Tableau, Power BI, AWS QuickSight).
- Excellent problem-solving and critical-thinking abilities.
- Strong communication and presentation skills, with the ability to effectively convey complex concepts to both technical and non-technical stakeholders.
- Leadership skills, with the ability to manage and mentor a team.
- Project management skills, with the ability to prioritize tasks and meet deadlines.

**Responsibilities:**
- Lead and manage the pharma life sciences analytics team, providing guidance, mentorship, and support to team members.
- Collaborate with cross-functional teams to identify business challenges and develop data-driven solutions tailored to the pharma and life sciences sector.
- Leverage in-depth domain knowledge across the pharma life sciences value chain, including R&D, drug manufacturing, commercial, pricing, product planning, product launch, market access, and revenue management.
- Utilize data science, GenAI, AI/ML, and data engineering tools to extract, transform, and analyze data, generating insights and actionable recommendations.
- Develop and implement statistical models and predictive analytics to support decision-making and improve healthcare outcomes.
- Stay up-to-date with industry trends, regulations, and best practices, ensuring compliance and driving innovation.
- Present findings and recommendations to clients and internal stakeholders, effectively communicating complex concepts in a clear and concise manner.
- Collaborate with clients to understand their business objectives and develop customized analytics solutions to meet their needs.
- Manage multiple projects simultaneously, ensuring timely delivery and high-quality results.
- Continuously evaluate and improve analytics processes and methodologies, driving efficiency and effectiveness.
- Stay informed about emerging technologies and advancements in the pharma life sciences space, identifying opportunities for innovation and growth to provide thought leadership and subject matter expertise.

**Professional And Educational Background:** BE / B.Tech / MCA / M.Sc / M.E / M.Tech / MBA
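For illustration (not part of the posting), a minimal predictive-modeling sketch with scikit-learn, of the kind of statistical model the role mentions; the bundled dataset stands in for real clinical or commercial data:

```python
# Train a classifier on a public dataset and report a standard quality metric.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```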

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Hyderabad, Telangana

On-site

Tezo is a new-generation Digital & AI solutions provider with a history of creating remarkable outcomes for our customers. We bring exceptional experiences using cutting-edge analytics, data proficiency, technology, and digital excellence.

Job Overview: The AWS Architect with data engineering skills will be responsible for designing, implementing, and managing scalable, robust, and secure cloud infrastructure and data solutions on AWS. This role requires a deep understanding of AWS services, data engineering best practices, and the ability to translate business requirements into effective technical solutions.

Key Responsibilities:
- Architecture Design: Design and architect scalable, reliable, and secure AWS cloud infrastructure. Develop and maintain architecture diagrams, documentation, and standards.
- Data Engineering: Design and implement ETL pipelines using AWS services such as Glue, Lambda, and Step Functions. Build and manage data lakes and data warehouses using AWS services like S3, Redshift, and Athena. Ensure data quality, data governance, and data security across all data platforms.
- AWS Services Management: Utilize a wide range of AWS services (EC2, S3, RDS, Lambda, DynamoDB, etc.) to support various workloads and applications. Implement and manage CI/CD pipelines using AWS CodePipeline, CodeBuild, and CodeDeploy. Monitor and optimize the performance, cost, and security of AWS resources.
- Collaboration and Communication: Work closely with cross-functional teams including software developers, data scientists, and business stakeholders. Provide technical guidance and mentorship to team members on best practices in AWS and data engineering.
- Security and Compliance: Ensure that all cloud solutions follow security best practices and comply with industry standards and regulations. Implement and manage IAM policies, roles, and access controls.
- Innovation and Improvement: Stay up to date with the latest AWS services, features, and best practices. Continuously evaluate and improve existing systems, processes, and architectures.

(ref: hirist.tech)
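For illustration (not part of the posting), a minimal boto3 sketch of the ingestion step such a role describes: land a file in S3 and kick off a Glue job. The bucket, key, and job name are hypothetical, and AWS credentials are assumed to be configured:

```python
# Upload a raw extract to the data lake, then start a (hypothetical) Glue job.
import boto3

s3 = boto3.client("s3")
s3.upload_file("daily_extract.csv", "example-data-lake", "raw/daily_extract.csv")

glue = boto3.client("glue")
run = glue.start_job_run(
    JobName="curate-daily-extract",  # hypothetical Glue job name
    Arguments={"--input_path": "s3://example-data-lake/raw/daily_extract.csv"},
)
print("Started Glue run:", run["JobRunId"])
```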

Posted 1 week ago

Apply

10.0 - 14.0 years

0 Lacs

Karnataka

On-site

As a Senior Manager, Engineering - Technology at a leading global management consulting firm in Bangalore, with over 10 years of experience, your role involves taking ownership of technical envisioning, feasibility, scoping timelines, and executing enterprise-grade software applications. You will excel in project management, ensuring smooth operations and end-to-end responsibility for project deliverables across areas such as Software Engineering, Data Engineering, Data Science, and DevOps. Your responsibilities also include conducting design and code reviews, providing constructive feedback to team members, and collaborating with cross-functional teams to support case teams, development teams, and clients. In addition to technical oversight, you will lead the selling process with partners and clients, write proposal documents, and present value propositions related to software development.

People management and collaboration are essential aspects of this role: you will manage software development teams, conduct learning-needs assessments, and upskill team members as needed. Creating a supportive and inclusive work environment where team members feel empowered to share their ideas and opinions is crucial. Your problem-solving skills and mentoring abilities will be put to use as you guide the team through software delivery life cycles, provide technical advice, and coach developers to build future-ready engineering teams. Strong experience in building cloud-native PaaS solutions, object-oriented design principles, and polyglot programming is required for this role.

To be successful in this position, you should hold a Bachelor's or Master's degree in computer science engineering or a related field, with a minimum of 10-13 years of software development experience, including significant experience in engineering management. Strong leadership qualities, excellent communication skills, proactive organization, and the ability to manage geographically dispersed teams are key attributes. Your expertise in project management, performance evaluation, and change management will be valuable in driving complex projects to successful outcomes. Contributions to open-source projects, blogs, or forums in relevant technologies will be considered an advantage.

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

The ideal candidate for the Azure Data Engineer position will have 4-6 years of experience in designing, implementing, and maintaining data solutions on Microsoft Azure. As an Azure Data Engineer at our organization, you will be responsible for designing efficient data architecture diagrams, developing and maintaining data models, and ensuring data integrity, quality, and security. You will also work on data processing, data integration, and building data pipelines to support various business needs.

Your role will involve collaborating with product and project leaders to translate data needs into actionable projects, providing technical expertise on data warehousing and data modeling, and mentoring other developers to ensure compliance with company policies and best practices. You will be expected to maintain documentation, contribute to the company's knowledge database, and actively participate in team collaboration and problem-solving activities.

We are looking for a candidate with a Bachelor's degree in Computer Science, Information Technology, or a related field, along with proven experience as a Data Engineer focusing on Microsoft Azure. Proficiency in SQL and experience with Azure data services such as Azure Data Factory, Azure SQL Database, Azure Databricks, and Azure Synapse Analytics is required. A strong understanding of data architecture, data modeling, data integration, ETL/ELT processes, and data security standards is essential. Excellent problem-solving, collaboration, and communication skills are also important for this role.

As part of our team, you will have the opportunity to work on exciting projects across various industries such as high-tech, communication, media, healthcare, retail, and telecom. We offer a collaborative environment where you can expand your skills by working with a diverse team of talented individuals. GlobalLogic prioritizes work-life balance and provides professional development opportunities, excellent benefits, and fun perks for its employees.

Join us at GlobalLogic, a leader in digital engineering, where we help brands design and build innovative products, platforms, and digital experiences for the modern world. Headquartered in Silicon Valley, GlobalLogic operates design studios and engineering centers worldwide, serving customers in industries such as automotive, communications, financial services, healthcare, manufacturing, media and entertainment, semiconductor, and technology.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

The SQL Server Data Engineer will be responsible for the development and sustainment of the SQL Server warehouse, ensuring its operational readiness (security, health, and performance), executing data loads, and performing data modeling in support of multiple development teams. The data warehouse supports an enterprise application suite of program management tools. You must be capable of working independently and collaboratively.

Responsibilities:
- Manage SQL Server databases through multiple product lifecycle environments, from development to mission-critical production systems.
- Troubleshoot issues related to memory-optimized tables in MS SQL Server 2019.
- Configure and maintain database servers and processes, including monitoring of system health and performance, to ensure high levels of performance, availability, and security.
- Apply data modeling techniques to ensure development and implementation support efforts meet integration and performance expectations.
- Independently analyze, solve, and correct issues in real time, providing end-to-end problem resolution.
- Assist developers with complex query tuning and schema refinement.

Requirements: The ideal candidate must have:
- A minimum of 5 years of relevant experience in data modeling, database and data warehouse management, and performance tuning on Microsoft SQL Server databases.
- 3+ years of experience with programming and database scripting.
- Experience with clustered configurations, redundancy, backup, and recovery.
- Good knowledge of SSIS development, T-SQL, views, stored procedures, functions, and triggers.
- Experience with AWS and Azure cloud will be an added advantage.
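For illustration (not part of the posting), a minimal sketch of querying SQL Server from Python via pyodbc, the kind of scripting this role involves; every connection value and the table name are placeholder assumptions:

```python
# Run a parameterized T-SQL query against SQL Server.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=example-host;DATABASE=WarehouseDB;"
    "UID=etl_user;PWD=example;TrustServerCertificate=yes;"
)
cur = conn.cursor()

# Parameterized queries avoid SQL injection and encourage plan reuse.
cur.execute(
    "SELECT TOP 10 OrderID, Amount FROM dbo.Orders WHERE OrderDate >= ?",
    "2024-01-01",
)
for row in cur.fetchall():
    print(row.OrderID, row.Amount)
conn.close()
```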

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Haryana

On-site

Genpact is a global professional services and solutions firm focused on delivering outcomes that shape the future. With over 125,000 employees in more than 30 countries, we are driven by curiosity, agility, and the desire to create lasting value for our clients. Our purpose is the relentless pursuit of a world that works better for people, serving and transforming leading enterprises, including Fortune Global 500 companies, through deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

We are currently seeking applications for the position of Lead Consultant - Databricks Developer (AWS). As a Databricks Developer in this role, you will be responsible for solving cutting-edge real-world problems to meet both functional and non-functional requirements.

Responsibilities:
- Stay updated on new and emerging technologies and explore their potential applications for service offerings and products.
- Collaborate with architects and lead engineers to design solutions that meet functional and non-functional requirements.
- Demonstrate knowledge of relevant industry trends and standards.
- Showcase strong analytical and technical problem-solving skills.
- Possess excellent coding skills, particularly in Python or Scala, with a preference for Python.

Minimum qualifications:
- Bachelor's degree in CS, CE, CIS, IS, MIS, or an engineering discipline, or equivalent work experience.
- Experience in the Data Engineering domain.
- Completed at least 2 end-to-end projects in Databricks.

Additional qualifications:
- Familiarity with Delta Lake, dbConnect, db API 2.0, and Databricks workflows orchestration.
- Understanding of the Databricks Lakehouse concept and its implementation in enterprise environments.
- Ability to create complex data pipelines.
- Strong knowledge of data structures and algorithms.
- Proficiency in SQL and Spark SQL.
- Experience in performance optimization to enhance efficiency and reduce costs.
- Experience with both batch and streaming data pipelines.
- Extensive knowledge of the Spark and Hive data processing frameworks.
- Experience with cloud platforms (Azure, AWS, GCP) and common services such as ADLS/S3, ADF/Lambda, CosmosDB/DynamoDB, ASB/SQS, and cloud databases.
- Skilled in writing unit and integration test cases.
- Excellent communication skills and experience working in teams of 5 or more.
- Positive attitude towards learning new skills and upskilling.
- Knowledge of Unity Catalog and basic governance.
- Understanding of Databricks SQL Endpoint.
- Experience in CI/CD, building pipelines for Databricks jobs.
- Exposure to migration projects for building unified data platforms.
- Familiarity with DBT, Docker, and Kubernetes.

This is a full-time position based in Gurugram, India. The job was posted on August 5, 2024, with an unposting date of October 4, 2024.
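For illustration (not part of the posting), a minimal Delta Lake sketch on a Databricks-style Spark runtime where Delta is available; paths and column names are hypothetical:

```python
# Deduplicate raw events and append them to a partitioned Delta table.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-sketch").getOrCreate()

events = spark.read.json("/raw/events")  # illustrative path

(events
    .dropDuplicates(["event_id"])
    .write.format("delta")
    .mode("append")
    .partitionBy("event_date")
    .save("/delta/events"))

# Delta tables can then be queried with Spark SQL.
spark.read.format("delta").load("/delta/events").createOrReplaceTempView("events")
spark.sql("SELECT event_date, COUNT(*) AS n FROM events GROUP BY event_date").show()
```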

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

Sales / BDM (Information Technology) - Only candidates with IT sales experience should apply.

Job Title: Sales & Business Development Manager - Big Data Services, Delhi/NCR

Job Summary: Looking for a seasoned sales expert with a proven track record of selling Big Data services into enterprises. The sales expert will help match our technical skills with the requirements of customers, set up assessment meetings, and help with closing sales.

Key Responsibilities:
1. Sales Execution: Develop and implement a strategic sales plan focused on expertise in BI, Data Analytics, Data Engineering, and AI/ML. Prepare realistic revenue plans and deliver on the committed revenue numbers.
2. Domain Knowledge: Understanding of Big Data tools and the ability to match them with the requirements of customers.
3. Prospecting: Identify and target organizations in need of advanced Big Data and AI/ML solutions, with a focus on workload and AI/ML pipelines.
4. Solution Presentation: Effectively communicate the unique value proposition of our services, including the use of self-developed tools to provide the best outcome.
5. Client Engagement: Build and nurture strong relationships with key decision-makers within client organizations who are looking to harness Big Data and AI/ML technologies.
6. Proposal Development: Prepare tailored proposals that showcase how our specialized services can address each client's specific needs.
7. Negotiation and Closing: Skilfully negotiate contracts, pricing, and terms to successfully close deals.
8. Account Management: Continuously engage with existing clients to identify opportunities for upselling and cross-selling specialized solutions.
9. Sales Reporting: Maintain accurate records of sales activities, pipeline, and forecasts using CRM tools.
10. Building and Growing the Sales Team: As sales proliferate, the sales expert will help in recruiting and building the sales team.

Qualifications:
- Bachelor's degree in engineering, sales, or a related field (Master's degree preferred).
- A proven track record of at least 5-7 years in selling services for Big Data, BI, Data Analytics, and AI/ML.
- Good knowledge of Big Data, BI, and Data Analytics tools (Power BI, Azure Synapse, AWS EMR, Spark, Hadoop, etc.).
- Exceptional communication, negotiation, and presentation skills.
- Ability to work independently and collaboratively as part of a team.
- Experience selling services in at least two of North America, EMEA, APAC, and India.
- Representing the company at developer/user conferences and visiting customers (globally).

Please write in with CTC, ECTC, and NP along with your CV to alsvidindia@gmail.com or call 08355934128.

Posted 1 week ago

Apply

4.0 - 9.0 years

20 - 35 Lacs

Chennai

Hybrid

Hi there! I am hiring for a client of mine seeking a Data Engineer to join their team in Chennai. An overview of the position is below.

Role & responsibilities:
- Design, develop, test, and support ETL/ELT solutions automating data-loading processes in line with architectural standards and best practices.
- Follow the migration process to move ETL objects from development to QA, stage, and production environments.

Preferred candidate profile:
- At least 5 years of working experience in SAP Data Services (BODS) in ETL projects and support work.
- Experience in SAP Data Services (BODS) version 4.2/4.3.
- Strong knowledge of relational data design and structured query language (SQL).
- Strong knowledge of Python programming for scripting, data transformation, automation, and integration with external APIs and services.
- Experience with Snowflake, SAP HANA, PostgreSQL, MS-SQL, and Oracle.
- Experience writing technical specifications and testing documents.
- Experience with the Data Services Management Console for monitoring, executing, and scheduling jobs.
- Experience with file transfer protocols.

Posted 1 week ago

Apply

10.0 - 20.0 years

20 - 30 Lacs

Pune

Remote

Role & responsibilities:
- Minimum 10+ years of experience developing, designing, and implementing data engineering solutions.
- Collaborate with data engineers and architects to design and optimize data models for Snowflake Data Warehouse.
- Optimize query performance and data storage in Snowflake by utilizing clustering, partitioning, and other optimization techniques.
- Experience working on projects housed within an Amazon Web Services (AWS) cloud environment.
- Experience working on projects housed within Tableau and DBT.
- Work closely with business stakeholders to understand requirements and translate them into technical solutions.
- Excellent presentation and communication skills, both written and verbal, with the ability to problem-solve and design in an environment with unclear requirements.
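For illustration (not part of the posting), a minimal sketch using the snowflake-connector-python package to inspect how well a table is clustered, the kind of optimization work this role mentions; all connection values and the table name are placeholder assumptions:

```python
# Check clustering quality for a Snowflake table.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example-account",
    user="etl_user",
    password="example",
    warehouse="ANALYTICS_WH",
    database="SALES",
    schema="PUBLIC",
)
cur = conn.cursor()

# SYSTEM$CLUSTERING_INFORMATION reports how well a table is clustered on the
# given expression -- useful when tuning partition pruning performance.
cur.execute("SELECT SYSTEM$CLUSTERING_INFORMATION('ORDERS', '(ORDER_DATE)')")
print(cur.fetchone()[0])
conn.close()
```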

Posted 1 week ago

Apply

7.0 - 12.0 years

15 - 30 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

Lead Data Engineer with DBT, Snowflake, SQL, and data warehousing expertise. Design, build, and maintain scalable data pipelines, ensure data quality, and solve complex data problems. Adaptability across ETL tools is essential. SAP data and enterprise platform experience is a plus.

Posted 1 week ago

Apply

5.0 - 10.0 years

12 - 22 Lacs

Hyderabad

Work from Office

Location: Hyderabad
Work Model: Hybrid, 3 days per week in office (Tue, Wed, Thu)
Experience Required: 5+ years
Employment Type: Full-time
Mandatory Skills: Advanced SQL; Tableau / Snowflake / Teradata. Python would be a plus.

Job Summary & Key Responsibilities:
- Design, develop, and support dashboards and reports.
- Provide analysis to support business management and executive decision-making.
- Design and develop effective SQL scripts to transform and aggregate large data sets, create derived metrics, and embed them into business solutions.
- An effective communicator with demonstrated experience handling larger and more complex business analytics projects.
- 6+ years of relevant experience in the design, development, and maintenance of business intelligence, reporting, and data applications.
- Advanced hands-on experience with Tableau or similar BI dashboard visualisation tools.
- Expertise in programmatically processing large data sets from multiple source systems: integration, data mining, summarisation, and presentation of results to an executive audience.
- Advanced knowledge of SQL in related technologies such as Oracle and Teradata, including performance debugging and tuning activities.
- Strong understanding of dev-to-production processes, User Acceptance and Production Validation testing, waterfall and agile development, and code deployment methodologies.
- Experience with core data warehousing concepts: dimensional data modeling, RDBMS, OLAP, ROLAP.
- Experience with ETL tools used to automate manual processes (SQL scripts, Airflow/NiFi).
- Experience with R, Python, Hadoop.
Only immediate joiners.

Thanks & Regards,
Milki Bisht - 9151206474
Email: milki.bisht@nlbtech.in
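For illustration (not part of the posting), a minimal Airflow 2.x sketch of the kind of scheduled ETL automation this role mentions; the DAG name, schedule, and the callable's body are hypothetical:

```python
# A daily DAG with a single Python task standing in for a SQL transformation.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def aggregate_daily_metrics():
    # Placeholder for the real work, e.g. running SQL against
    # Teradata/Snowflake and publishing derived metrics.
    print("aggregating daily metrics...")

with DAG(
    dag_id="daily_metrics",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="aggregate", python_callable=aggregate_daily_metrics)
```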

Posted 1 week ago

Apply

5.0 - 8.0 years

7 - 10 Lacs

Gurugram

Work from Office

About the Opportunity
Job Type: Permanent
Application Deadline: 31 July 2025

Title: Senior Analyst - Data Science
Department: Enterprise Data & Analytics
Location: Gurgaon
Reports To: Gaurav Shekhar
Level: Data Scientist 4

About your team: Join the Enterprise Data & Analytics team, collaborating across Fidelity's global functions to empower the business with data-driven insights that unlock business opportunities, enhance client experiences, and drive strategic decision-making.

About your role: As a key contributor within the Enterprise Data & Analytics team, you will lead the development of machine learning and data science solutions for Fidelity Canada. This role is designed to turn advanced analytics into real-world impact: driving growth, enhancing client experiences, and informing high-stakes decisions. You'll design, build, and deploy ML models on cloud and on-prem platforms, leveraging tools like AWS SageMaker, Snowflake, Adobe, and Salesforce. Collaborating closely with business stakeholders, data engineers, and technology teams, you'll translate complex challenges into scalable AI solutions. You'll also champion the adoption of cloud-based analytics, contribute to MLOps best practices, and support the team through mentorship and knowledge sharing. This is a high-impact role for a hands-on problem solver who thrives on ownership, innovation, and seeing their work directly influence strategic outcomes.

About you: You have 4-7 years of experience working in the data science domain, with a strong track record of delivering advanced machine learning solutions for business. You're skilled in developing models for classification, forecasting, and recommender systems, and hands-on with frameworks like scikit-learn, TensorFlow, or PyTorch. You bring deep expertise in developing and deploying models on AWS SageMaker, strong business problem-solving abilities, and familiarity with emerging GenAI trends. A background in engineering, mathematics, or economics from a Tier 1 institution is preferred.

For starters, we'll offer you a comprehensive benefits package. We'll value your wellbeing and support your development. And we'll be as flexible as we can about where and when you work, finding a balance that works for all of us. It's all part of our commitment to making you feel motivated by the work you do and happy to be part of our team.

Posted 1 week ago

Apply

2.0 - 6.0 years

1 - 4 Lacs

Mysuru

Work from Office

Job Overview: We are seeking an experienced and highly skilled Senior Data Engineer to join our team. This role requires a combination of software development and data engineering expertise. The ideal candidate will have advanced knowledge of Python and SQL, a solid understanding of API creation (specifically REST APIs and FastAPI), and experience in building reusable and configurable frameworks.

Key Responsibilities:
- Develop APIs & Microservices: Design, build, and maintain scalable, high-performance REST APIs using FastAPI and other frameworks.
- Data Engineering: Work on data pipelines, ETL processes, and data processing for robust data solutions.
- System Architecture: Collaborate on the design and implementation of configurable and reusable frameworks to streamline processes.
- Collaborate with Cross-Functional Teams: Work closely with software engineers, data scientists, and DevOps teams to build end-to-end solutions that cater to both application and data needs.
- Slack App Development: Design and implement Slack integrations and custom apps as required for team productivity and automation.
- Code Quality: Ensure high-quality coding standards through rigorous testing, code reviews, and writing maintainable code.
- SQL Expertise: Write efficient and optimized SQL queries for data storage, retrieval, and analysis.
- Microservices Architecture: Build and manage microservices that are modular, scalable, and decoupled.

Required Skills & Experience:
- Programming Languages: Expert in Python, with solid experience building APIs and microservices.
- Web Frameworks & APIs: Strong hands-on experience with FastAPI and Flask (optional), designing RESTful APIs.
- Data Engineering Expertise: Strong knowledge of SQL, relational databases, and ETL processes. Experience with cloud-based data solutions is a plus.
- API & Microservices Architecture: Proven ability to design, develop, and deploy APIs and microservices architectures.
- Slack App Development: Experience with integrating Slack apps or creating custom Slack workflows.
- Reusable Framework Development: Ability to design modular and configurable frameworks that can be reused across various teams and systems.
- Excellent Problem-Solving Skills: Ability to break down complex problems and deliver practical solutions.
- Software Development Experience: Strong software engineering fundamentals, including version control, debugging, and deployment best practices.

Why Join Us?
- Growth Opportunities: You'll work with cutting-edge technologies and continuously improve your technical skills.
- Collaborative Culture: A dynamic and inclusive team where your ideas and contributions are valued.
- Competitive Compensation: We offer a competitive salary, comprehensive benefits, and a flexible work environment.
- Innovative Projects: Be a part of projects that have a real-world impact and help shape the future of data and software development.

If you're passionate about working on both data and software engineering, and enjoy building scalable and efficient systems, apply today and help us innovate!
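For illustration (not part of the posting), a minimal FastAPI sketch of the kind of REST endpoint this role describes; the resource model and routes are hypothetical:

```python
# A tiny REST API: create and fetch orders from an in-memory store.
# Run with: uvicorn app:app --reload   (assumes `pip install fastapi uvicorn`)
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Order(BaseModel):
    order_id: int
    amount: float

ORDERS: dict[int, Order] = {}  # in-memory store, illustration only

@app.post("/orders")
def create_order(order: Order) -> Order:
    ORDERS[order.order_id] = order
    return order

@app.get("/orders/{order_id}")
def read_order(order_id: int) -> Order:
    return ORDERS[order_id]
```

Pydantic models give request validation and response serialization for free, which is one reason FastAPI suits data-heavy services.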

Posted 1 week ago

Apply

2.0 - 5.0 years

3 - 6 Lacs

Mumbai

Work from Office

Sr. Python Developer | Experience: 5+ years | Location: Bangalore/Hyderabad

Job Overview: We are seeking an experienced and highly skilled Senior Data Engineer to join our team. This role requires a combination of software development and data engineering expertise. The ideal candidate will have advanced knowledge of Python and SQL, a solid understanding of API creation (specifically REST APIs and FastAPI), and experience in building reusable and configurable frameworks.

Key Responsibilities:
- Develop APIs & Microservices: Design, build, and maintain scalable, high-performance REST APIs using FastAPI and other frameworks.
- Data Engineering: Work on data pipelines, ETL processes, and data processing for robust data solutions.
- System Architecture: Collaborate on the design and implementation of configurable and reusable frameworks to streamline processes.
- Collaborate with Cross-Functional Teams: Work closely with software engineers, data scientists, and DevOps teams to build end-to-end solutions that cater to both application and data needs.
- Slack App Development: Design and implement Slack integrations and custom apps as required for team productivity and automation.
- Code Quality: Ensure high-quality coding standards through rigorous testing, code reviews, and writing maintainable code.
- SQL Expertise: Write efficient and optimized SQL queries for data storage, retrieval, and analysis.
- Microservices Architecture: Build and manage microservices that are modular, scalable, and decoupled.

Required Skills & Experience:
- Programming Languages: Expert in Python, with solid experience building APIs and microservices.
- Web Frameworks & APIs: Strong hands-on experience with FastAPI and Flask (optional), designing RESTful APIs.
- Data Engineering Expertise: Strong knowledge of SQL, relational databases, and ETL processes. Experience with cloud-based data solutions is a plus.
- API & Microservices Architecture: Proven ability to design, develop, and deploy APIs and microservices architectures.
- Slack App Development: Experience with integrating Slack apps or creating custom Slack workflows.
- Reusable Framework Development: Ability to design modular and configurable frameworks that can be reused across various teams and systems.
- Excellent Problem-Solving Skills: Ability to break down complex problems and deliver practical solutions.
- Software Development Experience: Strong software engineering fundamentals, including version control, debugging, and deployment best practices.

Why Join Us?
- Growth Opportunities: You'll work with cutting-edge technologies and continuously improve your technical skills.
- Collaborative Culture: A dynamic and inclusive team where your ideas and contributions are valued.
- Competitive Compensation: We offer a competitive salary, comprehensive benefits, and a flexible work environment.
- Innovative Projects: Be a part of projects that have a real-world impact and help shape the future of data and software development.

If you're passionate about working on both data and software engineering, and enjoy building scalable and efficient systems, apply today and help us innovate!

Posted 1 week ago

Apply

1.0 - 4.0 years

2 - 5 Lacs

Gurugram

Work from Office

Location: Bangalore/Hyderabad/Pune
Experience level: 8+ years

About the Role: We are looking for a technical and hands-on Lead Data Engineer to help drive the modernization of our data transformation workflows. We currently rely on legacy SQL scripts orchestrated via Airflow, and we are transitioning to a modular, scalable, CI/CD-driven DBT-based data platform. The ideal candidate has deep experience with DBT and modern data stack design, and has previously led similar migrations, improving code quality, lineage visibility, performance, and engineering best practices. A sketch of a typical DBT-under-Airflow orchestration follows this listing.

Key Responsibilities:
- Lead the migration of legacy SQL-based ETL logic to DBT-based transformations.
- Design and implement a scalable, modular DBT architecture (models, macros, packages).
- Audit and refactor legacy SQL for clarity, efficiency, and modularity.
- Improve CI/CD pipelines for DBT: automated testing, deployment, and code quality enforcement.
- Collaborate with data analysts, platform engineers, and business stakeholders to understand current gaps and define future data pipelines.
- Own the Airflow orchestration redesign where needed (e.g., DBT Cloud/API hooks or airflow-dbt integration).
- Define and enforce coding standards, review processes, and documentation practices.
- Coach junior data engineers on DBT and SQL best practices.
- Provide lineage and impact analysis improvements using DBT's built-in tools and metadata.

Must-Have Qualifications:
- 8+ years of experience in data engineering.
- Proven success in migrating legacy SQL to DBT, with visible results.
- Deep understanding of DBT best practices, including model layering, Jinja templating, testing, and packages.
- Proficient in SQL performance tuning, modular SQL design, and query optimization.
- Experience with Airflow (Composer, MWAA), including DAG refactoring and task orchestration.
- Hands-on experience with modern data stacks (e.g., Snowflake, BigQuery).
- Familiarity with data testing and CI/CD for analytics workflows.
- Strong communication and leadership skills; comfortable working cross-functionally.

Nice-to-Have:
- Experience with DBT Cloud or DBT Core integrations with Airflow.
- Familiarity with data governance and lineage tools (e.g., dbt docs, Alation).
- Exposure to Python (for custom Airflow operators/macros or utilities).
- Previous experience mentoring teams through modern data stack transitions.
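For illustration (not part of the posting), one way to orchestrate DBT from Airflow: invoking the dbt CLI via BashOperator. The project path, selector, and schedule are hypothetical; DBT Cloud hooks or the airflow-dbt provider are alternatives:

```python
# Run dbt models daily, then run dbt tests only if the models succeed.
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="dbt_transformations",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="cd /opt/analytics/dbt_project && dbt run --select staging+",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="cd /opt/analytics/dbt_project && dbt test",
    )
    dbt_run >> dbt_test  # run models first, then validate them
```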

Posted 1 week ago

Apply

6.0 - 9.0 years

9 - 13 Lacs

Chennai

Work from Office

Experience: 6+ years as an Azure Data Engineer, including at least one end-to-end implementation in Microsoft Fabric.

Responsibilities:
- Lead the design and implementation of Microsoft Fabric-centric data platforms and data warehouses.
- Develop and optimize ETL/ELT processes within the Microsoft Azure ecosystem, effectively utilizing relevant Fabric solutions.
- Ensure data integrity, quality, and governance throughout the Microsoft Fabric environment.
- Collaborate with stakeholders to translate business needs into actionable data solutions.
- Troubleshoot and optimize existing Fabric implementations for enhanced performance.

Skills:
- Solid foundational knowledge of data warehousing, ETL/ELT processes, and data modeling (dimensional, normalized).
- Design and implement scalable and efficient data pipelines using Data Factory (Data Pipeline, Dataflow Gen2, etc.) in Fabric, PySpark notebooks, Spark SQL, and Python. This includes data ingestion, transformation, and loading processes.
- Experience ingesting data from SAP systems such as SAP ECC/S4HANA/SAP BW will be a plus.
- Nice to have: the ability to develop dashboards or reports using tools like Power BI.
- Coding fluency: proficiency in SQL, Python, or other languages for data scripting, transformation, and automation.

Posted 1 week ago

Apply

10.0 - 20.0 years

16 - 20 Lacs

Kolkata

Work from Office

Mode of work: Hybrid or Remote (Hyderabad or Pune candidates preferred for the hybrid working mode; candidates from anywhere in India are fine for remote).

Job Description for AI Architect:

Client: Genzeon
Position: AI Architect

Mandatory Skills:
- SQL
- Data Science
- AI/ML
- GenAI
- Data Engineering
Optional Skills: Data Modelling

Role & Responsibilities:
- Strong Python programming expertise.
- Hands-on experience with AI-related Python tasks (e.g., data analysis, modeling).
- Experience with analyzing discrete datasets.
- Proficiency in working with pre-trained AI models.
- Problem-solving ability in real-world AI use cases.

Python Proficiency:
- Speed and accuracy in writing code.
- Understanding of syntax and debugging without external help.
- Ability to solve complex problems with minimal reliance on Google.

AI & Data Analysis Skills:
- Hands-on expertise in data manipulation and analysis.
- Understanding of AI modeling and implementation.

Problem-Solving & Debugging:
- Ability to analyze error logs and fix issues quickly.
- Logical approach to troubleshooting without relying on trivial searches.

Practical Application & Use Cases:
- Ability to apply AI/ML techniques to real-world datasets.
- Experience with implementing pre-trained models effectively.

Interview Task:
- Given a dataset, candidates must perform multivariate analysis.
- Implement modeling using pre-trained models.
- Demonstrate debugging ability without excessive reliance on Google.
- Live coding assessment with an open-book approach (Google allowed, but candidates should not rely on searches for basic algorithms).

Role-Specific Expertise:
- If applying for AI Architect, the assessment will focus on AI-related tasks.
- If applying for Data Engineer, tasks will be aligned with data engineering requirements.

Location: Hyderabad, Bangalore, Chennai, Pune, Noida, Delhi, or anywhere in India / multiple locations.

Posted 1 week ago

Apply

10.0 - 15.0 years

12 - 18 Lacs

Bengaluru

Work from Office

Mode: Contract

As an Azure Data Architect, you will:
- Lead architectural design and migration strategies, especially from Oracle to Azure Data Lake.
- Architect and build end-to-end data pipelines leveraging Databricks, Spark, and Delta Lake.
- Design secure, scalable data solutions integrating ADF, SQL Data Warehouse, and on-prem/cloud systems.
- Optimize cloud resource usage and pipeline performance.
- Set up CI/CD pipelines with Azure DevOps.
- Mentor team members and align architecture with business needs.

Qualifications:
- 10-15 years in Data Engineering/Architecture roles.
- Extensive hands-on experience with Databricks, Azure Data Factory, and Azure SQL Data Warehouse; data integration, migration, cluster configuration, and performance tuning; Azure DevOps and cloud monitoring tools.
- Excellent interpersonal and stakeholder management skills.

Posted 1 week ago

Apply

8.0 - 10.0 years

9 - 13 Lacs

Mumbai

Work from Office

Company Overview: Zorba Consulting India is a leading consultancy firm focused on delivering innovative solutions and strategies to enhance business performance. With a commitment to excellence, we prioritize collaboration, integrity, and customer-centric values in our operations. Our mission is to empower organizations by transforming data into actionable insights and enabling data-driven decision-making. We are dedicated to fostering a culture of continuous improvement and supporting our team members' professional development.

Role Responsibilities:
- Design and implement data pipelines using MS Fabric.
- Develop data models to support business intelligence and analytics.
- Manage and optimize ETL processes for data extraction, transformation, and loading.
- Collaborate with cross-functional teams to gather and define data requirements.
- Ensure data quality and integrity in all data processes.
- Implement best practices for data management, storage, and processing.
- Conduct performance tuning for data storage and retrieval for enhanced efficiency.
- Generate and maintain documentation for data architecture and data flow.
- Participate in troubleshooting data-related issues and implement solutions.
- Monitor and optimize cloud-based solutions for scalability and resource efficiency.
- Evaluate emerging technologies and tools for potential incorporation in projects.
- Assist in designing data governance frameworks and policies.
- Provide technical guidance and support to junior data engineers.
- Participate in code reviews and ensure adherence to coding standards.
- Stay updated with industry trends and best practices in data engineering.

Qualifications:
- 8+ years of experience in data engineering roles.
- Strong expertise in MS Fabric and related technologies.
- Proficiency in SQL and relational database management systems.
- Experience with data warehousing solutions and data modeling.
- Hands-on experience in ETL tools and processes.
- Knowledge of cloud computing platforms (Azure, AWS, GCP).
- Familiarity with Python or similar programming languages.
- Ability to communicate complex concepts clearly to non-technical stakeholders.
- Experience in implementing data quality measures and data governance.
- Strong problem-solving skills and attention to detail.
- Ability to work independently in a remote environment.
- Experience with data visualization tools is a plus.
- Excellent analytical and organizational skills.
- Bachelor's degree in Computer Science, Engineering, or related field.
- Experience in Agile methodologies and project management.

Posted 1 week ago

Apply

6.0 - 8.0 years

8 - 10 Lacs

Chennai

Work from Office

We are seeking a highly experienced Big Data Lead with strong expertise in Apache Spark, Spark SQL, and Spark Streaming. The ideal candidate should have extensive hands-on experience with the Hadoop ecosystem, a solid grasp of multiple programming languages including Java, Scala, and Python, and a proven ability to design and implement data processing pipelines in distributed environments.

Roles & Responsibilities:
- Lead the design and development of scalable data processing pipelines using Apache Spark, Spark SQL, and Spark Streaming.
- Work with Java, Scala, and Python to implement big data solutions.
- Design efficient data ingestion pipelines leveraging Sqoop, Kafka, HDFS, and MapReduce.
- Optimize and troubleshoot Spark jobs for performance and reliability.
- Interface with relational databases (Oracle, MySQL, SQL Server) and NoSQL databases.
- Work within Unix/Linux environments, employing tools like Git, Jenkins, and CI/CD pipelines.
- Collaborate with cross-functional teams to ensure delivery of robust big data solutions.
- Ensure code quality through unit testing, BDD/TDD practices, and automated testing frameworks.

Competencies Required:
- 6+ years of hands-on experience in Apache Spark, Spark SQL, and Spark Streaming.
- Strong proficiency in Java, Scala, and Python as applied to Spark applications.
- In-depth experience with the Hadoop ecosystem: HDFS, MapReduce, Hive, HBase, Sqoop, and Kafka.
- Proficiency in working with both relational and NoSQL databases.
- Hands-on experience with build and automation tools like Maven, Gradle, and Jenkins, and version control systems like Git.
- Experience working in Linux/Unix environments and developing RESTful services.
- Familiarity with modern testing methodologies including unit testing, BDD, and TDD.
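For illustration (not part of the posting), a minimal Spark Structured Streaming sketch of the kind of pipeline this role describes: consume Kafka events and count them per minute. The broker, topic, and column names are hypothetical, and the Kafka source requires the spark-sql-kafka package on the cluster:

```python
# Windowed event counts over a Kafka stream, printed to the console.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("stream-sketch").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "clickstream")
    .load()
)

# The Kafka source exposes a `timestamp` column for each record.
counts = (
    events
    .groupBy(F.window("timestamp", "1 minute"))
    .agg(F.count("*").alias("events"))
)

query = (
    counts.writeStream.outputMode("complete")
    .format("console")
    .start()
)
query.awaitTermination()
```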

Posted 1 week ago

Apply