
1188 ADF Jobs - Page 38

JobPe aggregates job listings for easy access; you apply directly on the original job portal.

0 years

0 Lacs

India

Remote

Role: Data Platform Architect
Location: India (Remote)
Type of Employment: Contract
Required Skills: C, C#, .NET, Python, Scala, Databricks, ADF, Event Hubs, Event Grid, ADLS, ADX (Azure Data Explorer), Fluentd, Azure App Services; architecting new solutions
Responsibilities:
- Design and architect scalable data platform solutions.
- Lead implementation of data pipelines and integration workflows.
- Collaborate with stakeholders to define data strategies.
- Ensure platform performance, security, and compliance.
- Hands-on experience architecting solutions for a data platform holding 1-2 petabytes of data.
Required Skills:
- Strong experience in data engineering and architecture.
- Expertise in modern data engineering tools and Azure services.
- Strong programming background in C-family languages and Python.
- Strong problem-solving and analytical skills.
- Excellent communication and teamwork abilities.
- Azure certifications; experience with real-time data processing.
Preferred:
- Knowledge of industry best practices and standards.
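For context on the streaming side of such a role, here is a minimal, hypothetical sketch (not taken from the posting) of landing Event Hubs messages into ADLS Gen2 as a Delta table with Spark Structured Streaming on Databricks; the namespace, topic, schema, and storage paths are placeholder assumptions.

```python
# Hypothetical sketch: Event Hubs (Kafka-compatible endpoint) -> ADLS Gen2 Delta table.
# All names, paths, and credentials below are placeholders, not values from the posting.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("eventhub-to-adls").getOrCreate()

EH_SERVER = "mynamespace.servicebus.windows.net:9093"   # Event Hubs Kafka endpoint
EH_TOPIC = "telemetry"
EH_CONN = "Endpoint=sb://...;SharedAccessKeyName=...;SharedAccessKey=..."  # keep in a secret scope

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", EH_SERVER)
    .option("subscribe", EH_TOPIC)
    .option("kafka.security.protocol", "SASL_SSL")
    .option("kafka.sasl.mechanism", "PLAIN")
    .option(
        "kafka.sasl.jaas.config",
        "org.apache.kafka.common.security.plain.PlainLoginModule required "
        f'username="$ConnectionString" password="{EH_CONN}";',
    )
    .load()
)

schema = StructType([
    StructField("device_id", StringType()),
    StructField("reading", StringType()),
    StructField("event_time", TimestampType()),
])

events = raw.select(from_json(col("value").cast("string"), schema).alias("e")).select("e.*")

(
    events.writeStream.format("delta")
    .option("checkpointLocation", "abfss://bronze@mylake.dfs.core.windows.net/_chk/telemetry")
    .start("abfss://bronze@mylake.dfs.core.windows.net/telemetry")
)
```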

Posted 1 month ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Description
As a member of the Support organization, your focus is to deliver post-sales support and solutions to the Oracle customer base while serving as an advocate for customer needs. This involves resolving post-sales non-technical customer inquiries via phone and electronic means, as well as technical questions regarding the use of and troubleshooting for our Electronic Support Services. As a primary point of contact for customers, you are responsible for facilitating customer relationships with Support and providing advice and assistance to internal Oracle employees on diverse customer situations and escalated issues.
Career Level - IC4
Responsibilities
Education & Experience: BE, BTech, MCA, CA or equivalent preferred. Other qualifications with adequate experience may be considered. 5+ years of relevant working experience.
Functional/Technical Knowledge & Skills:
Must have a good understanding of the following Oracle Cloud Financials version 12+ capabilities. We are looking for a techno-functional person with real-time hands-on functional/product and/or technical experience, and/or who has worked with L2 or L3 level support, and/or has equivalent knowledge. We expect the candidate to have:
- Strong business process knowledge and concepts.
- Implementation/support experience in any of the areas: ERP - Cloud Financial modules like GL, AP, AR, FA, IBY, PA, CST, ZX and PSA; or HCM - Core HR, Benefits, Absence, T&L, Payroll, Compensation, Talent Management; or SCM - Inventory, OM, Procurement. The candidate must have hands-on experience in a minimum of any 5 modules across the above pillars.
- Ability to relate product functionality to business processes, and thus offer implementation advice to customers on how to meet their various business scenarios using Oracle Cloud Financials.
- Technically strong with expert skills in SQL, PL/SQL, OTBI/BIP/FRS reports, FBDI, ADFDI, BPM workflows, ADF Faces, BI Extract for FTP, Payment Integration and Personalisation.
- Strong problem-solving skills.
- Strong customer interaction and service orientation, so you can understand customers' critical situations, provide the appropriate response, and mobilise organisational resources while setting realistic expectations with customers.
- Strong operations management and innovation orientation, so you can continually improve processes, methods, tools, and utilities.
- Strong team player, so you leverage each other's strengths; you will often collaborate with peers within and across teams.
- Strong learning orientation, so you keep abreast of emerging business models/processes, applications product solutions, product features and technology features, and use this learning to deliver value to customers on a daily basis.
- High flexibility, so you remain agile in a fast-changing business and organisational environment.
- Create and maintain appropriate documentation for architecture, design, technical, implementation, support and test activities.
Personal Attributes:
Self-driven and result-oriented; strong problem-solving/analytical skills; strong customer support and relationship skills; effective communication (verbal and written); focus on relationships (internal and external); strong willingness to learn new things and share them with others; influencing/negotiating; team player; customer-focused; confident and decisive; values expertise (maintaining professional expertise in own discipline); enthusiasm; flexibility; organizational skills; values and enjoys coaching/knowledge transfer; values and enjoys teaching technical courses.
Note: Shift working is mandatory. The candidate should be open to working evening and night shifts on a rotation basis.
Career Level - IC3/IC4/IC5
About Us
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.

Posted 1 month ago

Apply

5.0 - 9.0 years

15 - 16 Lacs

Gurugram

Work from Office

Strong experience in SQL development with solid expertise in AWS cloud services. Proficient in Azure Data Factory (ADF) for building and managing data pipelines in cloud-based data integration solutions. Mail: kowsalya.k@srsinfoway.com
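As one illustration of the SQL-driven pipeline work described above, here is a minimal, hypothetical sketch of a watermark-based incremental extract from SQL Server using pyodbc; the table, column, and connection details are placeholders, not details from the posting.

```python
# Hypothetical sketch: incremental extract from SQL Server using a watermark column.
# Connection string, table, and column names are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=sales;UID=etl_user;PWD=<secret>"
)
cur = conn.cursor()

# Read the last processed watermark kept in a small control table.
cur.execute("SELECT last_loaded_at FROM etl.watermark WHERE table_name = ?", "orders")
last_loaded_at = cur.fetchone()[0]

# Pull only rows changed since the watermark.
cur.execute(
    "SELECT order_id, customer_id, amount, updated_at "
    "FROM dbo.orders WHERE updated_at > ? ORDER BY updated_at",
    last_loaded_at,
)
rows = cur.fetchall()

# Hand the rows to the next stage (staging table, file, ADF-triggered copy, etc.)
# and advance the watermark only after a successful load.
if rows:
    new_watermark = rows[-1].updated_at
    cur.execute(
        "UPDATE etl.watermark SET last_loaded_at = ? WHERE table_name = ?",
        new_watermark, "orders",
    )
    conn.commit()
```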

Posted 1 month ago

Apply

7.0 years

4 - 9 Lacs

Gurgaon

Remote

AHEAD builds platforms for digital business. By weaving together advances in cloud infrastructure, automation and analytics, and software delivery, we help enterprises deliver on the promise of digital transformation.
At AHEAD, we prioritize creating a culture of belonging, where all perspectives and voices are represented, valued, respected, and heard. We create spaces to empower everyone to speak up, make change, and drive the culture at AHEAD. We are an equal opportunity employer, and do not discriminate based on an individual's race, national origin, color, gender, gender identity, gender expression, sexual orientation, religion, age, disability, marital status, or any other protected characteristic under applicable law, whether actual or perceived. We embrace all candidates that will contribute to the diversification and enrichment of ideas and perspectives at AHEAD.
AHEAD is looking for a Sr. Data Engineer (L3 support) to work closely with our dynamic project teams (both on-site and remotely). This Data Engineer will be responsible for hands-on engineering of data platforms that support our clients' advanced analytics, data science, and other data engineering initiatives. This consultant will build and support modern data environments that reside in the public cloud or multi-cloud enterprise architectures. The Data Engineer will have responsibility for working on a variety of data projects. This includes orchestrating pipelines using modern data engineering tools/architectures as well as design and integration of existing transactional processing systems. The appropriate candidate must be a subject matter expert in managing data platforms.
Responsibilities:
- A Sr. Data Engineer should be able to build, operationalize and monitor data processing systems.
- Create robust and automated pipelines to ingest and process structured and unstructured data from various source systems into analytical platforms, using batch and streaming mechanisms and leveraging the cloud-native toolset.
- Implement custom applications using tools such as Event Hubs, ADF and other cloud-native tools as required to address streaming use cases.
- Engineer and maintain ELT processes for loading the data lake (Cloud Storage, Data Lake Gen2).
- Leverage the right tools for the right job to deliver testable, maintainable, and modern data solutions.
- Respond to customer/team inquiries and escalations and assist in troubleshooting and resolving challenges.
- Work with other scrum team members to estimate and deliver work inside of a sprint.
- Research data questions, identify root causes, and interact closely with business users and technical resources.
- Possess ownership and leadership skills to collaborate effectively with Level 1 and Level 2 teams.
- Must have experience in raising tickets with Microsoft and engaging with them to address any service or tool outages in production.
Qualifications:
- 7+ years of professional technical experience
- 5+ years of hands-on Data Architecture and Data Modelling, at SME level
- 5+ years of experience building highly scalable data solutions using Azure Data Factory, Spark, Databricks, Python
- 5+ years of experience working in cloud environments (AWS and/or Azure)
- 3+ years with programming languages such as Python, Spark and Spark SQL
- Strong knowledge of the architecture of ADF and Databricks
- Able to work with Level 1 and Level 2 teams to resolve platform outages in production environments
- Strong client-facing communication and facilitation skills
- Strong sense of urgency, ability to set priorities and perform the job with little guidance
- Excellent written and verbal interpersonal skills and the ability to build and maintain collaborative and positive working relationships at all levels
- Strong interpersonal and communication skills (written and oral) required
- Should be able to work in shifts
- Should have knowledge of the Azure DevOps process
Key Skills: Azure Data Factory, Azure Databricks, Python, ETL/ELT, Spark, Data Lake, Data Engineering, Event Hubs, Azure Delta, Spark Streaming
Why AHEAD: Through our daily work and internal groups like Moving Women AHEAD and RISE AHEAD, we value and benefit from diversity of people, ideas, experience, and everything in between. We fuel growth by stacking our office with top-notch technologies in a multi-million-dollar lab, by encouraging cross-department training and development, and by sponsoring certifications and credentials for continued learning.
USA Employment Benefits include: Medical, Dental, and Vision Insurance; 401(k); paid company holidays; paid time off; paid parental and caregiver leave; plus more! See https://www.aheadbenefits.com/ for additional details.
The compensation range indicated in this posting reflects the On-Target Earnings ("OTE") for this role, which includes a base salary and any applicable target bonus amount. This OTE range may vary based on the candidate's relevant experience, qualifications, and geographic location.
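Since the role calls for L3 support and resolving pipeline outages, here is a minimal, hypothetical sketch of pulling failed ADF pipeline runs for triage with the azure-mgmt-datafactory SDK; the subscription, resource group, and factory names are placeholders.

```python
# Hypothetical sketch: list ADF pipeline runs that failed in the last 24 hours.
# Subscription, resource group, and factory names are placeholders.
from datetime import datetime, timedelta, timezone

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    RunFilterParameters, RunQueryFilter, RunQueryFilterOperand, RunQueryFilterOperator,
)

client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

now = datetime.now(timezone.utc)
filters = RunFilterParameters(
    last_updated_after=now - timedelta(hours=24),
    last_updated_before=now,
    filters=[RunQueryFilter(
        operand=RunQueryFilterOperand.STATUS,
        operator=RunQueryFilterOperator.EQUALS,
        values=["Failed"],
    )],
)

runs = client.pipeline_runs.query_by_factory("my-rg", "my-data-factory", filters)
for run in runs.value:
    # Each failed run is a candidate for a ticket / escalation.
    print(run.run_id, run.pipeline_name, run.status, run.message)
```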

Posted 1 month ago

Apply

5.0 years

4 - 16 Lacs

Gurgaon

On-site

Position: SQL + ADF (Azure Data Factory)
Experience Required: Minimum 5+ Years
Location: Gurgaon / Hybrid
Job Type: Permanent
Work Timings: 1 PM – 10 PM
Notice Period: Immediate only
Mode of Interview: Virtual
Required Experience: Must have strong experience in SQL development. Must have experience in AWS Cloud. Must have experience in ADF (Azure Data Factory).
Job Type: Full-time
Pay: ₹400,000.00 - ₹1,600,000.00 per year
Benefits: Health insurance
Schedule: Day shift, Monday to Friday
Work Location: In person

Posted 1 month ago

Apply

7.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together.
Primary Responsibilities:
- Design, develop, and implement scalable data pipelines using Azure Databricks
- Develop PySpark-based data transformations and integrate structured and unstructured data from various sources
- Optimize Databricks clusters for performance, scalability, and cost-efficiency within the Azure ecosystem
- Monitor, troubleshoot, and resolve performance bottlenecks in Databricks workloads
- Manage orchestration and scheduling of end-to-end data pipelines using tools like Apache Airflow, ADF scheduling, and Logic Apps
- Collaborate effectively with the Architecture team in designing solutions, and with product owners in validating the implementations
- Implement best practices to enable data quality, monitoring, logging, and alerting for failure scenarios and exception handling
- Document step-by-step processes to troubleshoot potential issues and deliver cost-optimized cloud solutions
- Provide technical leadership, mentorship, and best practices for junior data engineers
- Stay up to date with Azure and Databricks advancements to continuously improve data engineering capabilities
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.
Required Qualifications:
- B.Tech or equivalent
- 7+ years of overall experience in the IT industry and 6+ years of experience in data engineering, with 3+ years of hands-on experience in Azure Databricks
- Hands-on experience with Delta Lake, Lakehouse architecture, and data versioning
- Experience with CI/CD pipelines for data engineering solutions (Azure DevOps, Git)
- Solid knowledge of performance tuning, partitioning, caching, and cost optimization in Databricks
- Deep understanding of data warehousing, data modeling (Kimball/Inmon), and big data processing
- Solid expertise in the Azure ecosystem, including Azure Synapse, Azure SQL, ADLS, and Azure Functions
- Proficiency in PySpark, Python and SQL for data processing in Databricks
- Proven excellent written and verbal communication skills
- Proven excellent problem-solving skills and ability to work independently
- Proven ability to balance multiple and competing priorities and execute accordingly
- Proven highly self-motivated with excellent interpersonal and collaborative skills
- Proven ability to anticipate risks and obstacles and develop plans for mitigation
- Proven excellent documentation experience and skills
Preferred Qualifications:
- Azure certifications (DP-203, AZ-304, etc.)
- Experience in infrastructure as code, scheduling as code, and automating operational activities using Terraform scripts
At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.
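To illustrate the orchestration piece mentioned above (Airflow scheduling a Databricks job), here is a minimal, hypothetical Airflow 2.x DAG sketch using the Databricks provider; the notebook path, cluster spec, and connection ID are placeholder assumptions, not details from the role.

```python
# Hypothetical sketch: an Airflow DAG that runs a Databricks notebook nightly.
# Requires apache-airflow-providers-databricks; names and paths are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.databricks.operators.databricks import DatabricksSubmitRunOperator

with DAG(
    dag_id="nightly_bronze_to_silver",
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",   # 02:00 UTC daily
    catchup=False,
) as dag:
    run_transform = DatabricksSubmitRunOperator(
        task_id="run_silver_transform",
        databricks_conn_id="databricks_default",
        new_cluster={
            "spark_version": "14.3.x-scala2.12",
            "node_type_id": "Standard_DS3_v2",
            "num_workers": 2,
        },
        notebook_task={"notebook_path": "/Repos/data-eng/silver_transform"},
    )
```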

Posted 1 month ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Overview
Seeking an Associate Manager, Data Operations, to support our growing data organization. In this role, you will assist in maintaining data pipelines and corresponding platforms (on-prem and cloud) while working closely with global teams on DataOps initiatives.
- Support the day-to-day operations of data pipelines, ensuring data governance, reliability, and performance optimization on Microsoft Azure. Hands-on experience with Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and real-time streaming architectures is preferred.
- Assist in ensuring the availability, scalability, automation, and governance of enterprise data pipelines supporting analytics, AI/ML, and business intelligence.
- Contribute to DataOps programs, aligning with business objectives, data governance standards, and enterprise data strategy.
- Help implement real-time data observability, monitoring, and automation frameworks to improve data reliability, quality, and operational efficiency.
- Support the development of governance models and execution roadmaps to enhance efficiency across Azure, AWS, GCP, and on-prem environments.
- Work on CI/CD integration, data pipeline automation, and self-healing capabilities to improve enterprise-wide DataOps processes.
- Collaborate with cross-functional teams to support and maintain next-generation Data & Analytics platforms while promoting an agile and high-performing DataOps culture.
- Assist in the adoption of Data & Analytics technology transformations, ensuring automation for proactive issue identification and resolution.
- Partner with cross-functional teams to support process improvements, best practices, and operational efficiencies within DataOps.
Responsibilities
- Assist in the implementation and optimization of enterprise-scale data pipelines using Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and Azure Stream Analytics.
- Support data ingestion, transformation, orchestration, and storage workflows, ensuring data reliability, integrity, and availability.
- Help ensure seamless batch, real-time, and streaming data processing, focusing on high availability and fault tolerance.
- Contribute to DataOps automation efforts, including CI/CD for data pipelines, automated testing, and version control using Azure DevOps and Terraform.
- Collaborate with Data Engineering, Analytics, AI/ML, CloudOps, and Business Intelligence teams to support data-driven decision-making.
- Assist in aligning DataOps practices with regulatory and security requirements by working with IT, data stewards, and compliance teams.
- Support data operations and sustainment activities, including testing and monitoring processes for global products and projects.
- Participate in data capture, storage, integration, governance, and analytics efforts, working alongside cross-functional teams.
- Assist in managing day-to-day DataOps activities, ensuring adherence to service-level agreements (SLAs) and business requirements.
- Engage with SMEs and business stakeholders to ensure data platform capabilities align with business needs.
- Contribute to Agile work intake and execution processes, helping to maintain efficiency in data platform teams.
- Help troubleshoot and resolve issues related to cloud infrastructure and data services in collaboration with technical teams.
- Support the development and automation of operational policies and procedures, improving efficiency and resilience.
- Assist in incident response and root cause analysis, contributing to self-healing mechanisms and mitigation strategies.
- Foster a customer-centric approach, advocating for operational excellence and continuous improvement in service delivery.
- Help build a collaborative, high-performing team culture, promoting automation and efficiency within DataOps.
- Adapt to shifting priorities and support cross-functional teams in maintaining productivity and achieving business goals.
- Utilize technical expertise in cloud and data operations to support service reliability and scalability.
Qualifications
- 5+ years of technology work experience in a large-scale global organization, with CPG industry experience preferred.
- 5+ years of experience in Data & Analytics roles, with hands-on expertise in data operations and governance.
- 2+ years of experience working within a cross-functional IT organization, collaborating with multiple teams.
- Experience in a lead or senior support role, with a focus on DataOps execution and delivery.
- Strong communication skills, with the ability to collaborate with stakeholders and articulate technical concepts to non-technical audiences.
- Analytical and problem-solving abilities, with a focus on prioritizing customer needs and operational improvements.
- Customer-focused mindset, ensuring high-quality service delivery and operational efficiency.
- Growth mindset, with a willingness to learn and adapt to new technologies and methodologies in a fast-paced environment.
- Experience supporting data operations in a Microsoft Azure environment, including data pipeline automation.
- Familiarity with Site Reliability Engineering (SRE) principles, such as monitoring, automated issue remediation, and scalability improvements.
- Understanding of operational excellence in complex, high-availability data environments.
- Ability to collaborate across teams, building strong relationships with business and IT stakeholders.
- Basic understanding of data management concepts, including master data management, data governance, and analytics.
- Knowledge of data acquisition, data catalogs, data standards, and data management tools.
- Strong execution and organizational skills, with the ability to follow through on operational plans and drive measurable results.
- Adaptability in a dynamic, fast-paced environment, with the ability to shift priorities while maintaining productivity.
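As a small illustration of the data observability this posting mentions, here is a hypothetical PySpark freshness and volume check for a Delta table; the table path, column names, and thresholds are assumptions, not details from the role.

```python
# Hypothetical sketch: basic freshness and row-count checks on a Delta table.
# Table path, column names, and thresholds are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("dataops-observability").getOrCreate()

TABLE_PATH = "abfss://silver@mylake.dfs.core.windows.net/orders"
MIN_ROWS_TODAY = 10_000          # expected minimum daily volume

df = spark.read.format("delta").load(TABLE_PATH)

# Compare timestamps inside Spark to avoid timezone handling in Python.
row = df.selectExpr(
    "max(ingested_at) as latest",
    "(max(ingested_at) >= current_timestamp() - interval 2 hours) as is_fresh",
).collect()[0]
todays_rows = df.where("ingested_at >= current_date()").count()

problems = []
if not row["is_fresh"]:
    problems.append(f"stale data: latest ingested_at = {row['latest']}")
if todays_rows < MIN_ROWS_TODAY:
    problems.append(f"low volume: {todays_rows} rows today")

if problems:
    # In practice this would raise an alert (Teams, Log Analytics, PagerDuty, ...).
    raise RuntimeError("; ".join(problems))
```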

Posted 1 month ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Experience: 5+ years
Notice Period: Immediate joiners
Work Timings: 1 PM – 10 PM
Location: Gurgaon
Work Mode: Hybrid
Strong experience in SQL development, along with experience in AWS cloud and good experience in ADF.
Role Responsibilities
- Design, develop, and implement database solutions using SQL Server and Azure Data Factory (ADF).
- Create and optimize complex SQL queries to ensure efficiency and effectiveness in data fetching.
- Build and manage data pipelines using ADF for data ingestion and transformation.
- Collaborate with stakeholders to gather requirements and understand data needs.
- Perform database maintenance tasks, including backups and recovery.
- Analyze and enhance SQL performance to reduce execution time.
- Work with Data Analysts to help visualize data and support reporting needs.
- Conduct code reviews and provide constructive feedback to peers.
- Document development processes and ensure adherence to best practices.
- Support system testing and troubleshoot issues as they arise.
- Participate in team meetings to discuss project updates and challenges.
- Ensure data security and compliance with relevant regulations.
- Continuously learn and apply the latest industry trends in database technologies.
- Assist in training junior developers and onboarding new team members.
- Contribute to agile project management by updating task progress in tracking tools.
Qualifications
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Minimum of 3 years of experience as a SQL Developer.
- Proficiency in SQL Server and ADF.
- Hands-on experience with ETL tools and methodologies.
- Strong understanding of database architectures and design.
- Familiarity with cloud technologies, especially Azure.
- Experience with version control systems like Git.
- Solid analytical and problem-solving skills.
- Ability to work effectively in a collaborative team environment.
- Excellent verbal and written communication skills.
- Knowledge of data warehousing concepts is a plus.
- Certifications in SQL or Azure Data Services are an advantage.
- Capability to handle multiple tasks and prioritize effectively.
- Detail-oriented with a focus on quality deliverables.
- Commitment to continuous learning and professional development.
Skills: agile methodologies, performance tuning, data analysis, ADF, Azure Data Factory, cloud technologies, data warehousing, Git, problem solving, Azure Data Factory (ADF), SQL Server, database management, Azure, SQL, team collaboration, version control systems, ETL tools
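For the "build and manage data pipelines using ADF" part of this role, a minimal, hypothetical sketch of triggering a parameterized ADF pipeline run and polling its status with the azure-mgmt-datafactory SDK follows; the subscription, factory, and pipeline names are placeholders.

```python
# Hypothetical sketch: trigger an ADF pipeline run with parameters and poll its status.
# Subscription, resource group, factory, and pipeline names are placeholders.
import time

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

RG, FACTORY, PIPELINE = "my-rg", "my-data-factory", "ingest_orders"

run = client.pipelines.create_run(
    RG, FACTORY, PIPELINE,
    parameters={"load_date": "2024-06-01", "source_system": "erp"},
)

# Poll until the run reaches a terminal state.
while True:
    status = client.pipeline_runs.get(RG, FACTORY, run.run_id).status
    if status in ("Succeeded", "Failed", "Cancelled"):
        break
    time.sleep(30)

print(f"Pipeline {PIPELINE} finished with status: {status}")
```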

Posted 1 month ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

- Ability to develop value-creating strategies and models that enable clients to innovate, drive growth and increase their business profitability
- Good knowledge of software configuration management systems
- Awareness of the latest technologies and industry trends
- Logical thinking and problem-solving skills, along with an ability to collaborate
- Understanding of the financial processes for various types of projects and the various pricing models available
- Ability to assess current processes, identify improvement areas and suggest technology solutions
- Knowledge of one or two industry domains
- Client interfacing skills
- Project and team management
Primary skills: Technology → Big Data → Big Table; Technology → Cloud Integration → Azure Data Factory (ADF); Technology → Data on Cloud - Platform → AWS

Posted 1 month ago

Apply

0 years

0 Lacs

Anupgarh, Rajasthan, India

Remote

We are a multinational beverage and food corporation founded in 1885, with operations in more than 14 countries and more than 15,000 employees. We have the largest beverage portfolio in the region, and we count on strategic partners such as PepsiCo and AB InBev. Over the last year we have expanded globally, which has led us to organize into 4 business units: apex (transformation), cbc (distribution), beliv (beverage innovation) and bia (food); and as part of our dynamic expansion and growth strategy we are looking for talent to join our corporation. Apply directly at getonbrd.com.
Role Functions
Design and implement scalable, efficient, and maintainable data engineering solutions using technologies such as Azure Data Factory (ADF), Databricks and Unity Catalog, applying layered architectures (Bronze/Silver/Gold), ETL/ELT automation with data quality validation, and integrity strategies such as idempotent pipelines and SCD handling. The goal is to guarantee reliable data, optimized for cost and performance, aligned with business needs, and backed by robust documentation and code standards (PEP8, Git) to support its evolution and governance.
Role Requirements
- Coordinate the operation of the different environments where data processing workloads run.
- Extract, transform and load data so that it is aligned with business needs.
- Build efficient integrations that enable ingestion of the data required by the business logic.
- Build continuous integration flows that effectively validate the developed pipelines.
- Mentor junior engineers in good practices and scalable solutions.
- Propose and implement technology improvements that optimize data flows.
Main Challenges
- Requires judgment to design, implement and maintain an efficient, scalable and intuitive data structure.
- Requires judgment and experience to follow code best practices when developing market-competitive functionality.
- Process exponentially growing data volumes without letting cloud costs spiral.
- Implement data quality mechanisms that do not impact processing speed.
GETONBRD Job ID: 53848
Conditions
Health coverage: Global Mobility Apex, S.A. pays or copays health insurance for employees.
Computer provided: Global Mobility Apex, S.A. provides a computer for your work.
Informal dress code: No dress code is enforced.
Remote work policy: Locally remote only. The position is 100% remote, but candidates must reside in Chile, Colombia, Ecuador, Peru, Mexico, Guatemala or El Salvador.

Posted 1 month ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Title: Data Analyst (Snowflake)
Job ID: POS-10121
Primary Skill: Python
Secondary Skills: Snowflake, ADF, and SQL
Location: Hyderabad
Mode of Work: Work from Office
Experience: 5-7 Years
About The Job
Are you someone with an in-depth understanding of ETL and a strong background in developing Snowflake and ADF ETL-based solutions, who can develop, document, unit test, and maintain ETL applications and deliver successful code meeting customer expectations? If yes, this opportunity can be the next step in your career. Read on.
We are looking for a Snowflake and ADF developer to join our Data Leverage team, a team of high-energy individuals who thrive in a rapid-pace and agile product development environment. As a Developer, you will provide accountability in the ETL and Data Integration space, from the development phase through delivery. You will work closely with the Project Manager, Technical Lead, and client teams. Your prime responsibilities will be to develop bug-free code with proper unit testing and documentation. You will provide inputs to planning, estimation, scheduling, and coordination of technical activities related to ETL-based applications. You will be responsible for meeting development schedules and delivering high-quality ETL-based solutions that meet technical specifications and design requirements, ensuring customer satisfaction. You are expected to possess good knowledge of the Snowflake and ADF tools.
Know Your Team
At ValueMomentum's Engineering Center, we are a team of passionate engineers who thrive on tackling complex business challenges with innovative solutions while transforming the P&C insurance value chain. We achieve this through a strong engineering foundation and by continuously refining our processes, methodologies, tools, agile delivery teams, and core engineering archetypes. Our core expertise lies in six key areas: Cloud Engineering, Application Engineering, Data Engineering, Core Engineering, Quality Engineering, and Domain expertise. Join a team that invests in your growth. Our Infinity Program empowers you to build your career with role-specific skill development, leveraging immersive learning platforms. You'll have the opportunity to showcase your talents by contributing to impactful projects.
Responsibilities
- Developing Modern Data Warehouse solutions using Snowflake and ADF.
- Ability to provide solutions that are forward-thinking in the data engineering and analytics space.
- Good understanding of star and snowflake dimensional modeling.
- Good knowledge of Snowflake security, Snowflake SQL, and designing other Snowflake objects.
- Hands-on experience with Snowflake utilities such as SnowSQL, Snowpipe, Tasks, Streams, Time Travel, Cloning, the Optimizer, data sharing, stored procedures, and UDFs.
- Good understanding of Databricks Data and Databricks Delta Lake architecture.
- Experience in Azure Data Factory (ADF) to design, implement and manage complex data integration and transformation workflows.
- Good understanding of SDLC and Agile methodologies.
- Strong problem-solving and analytical skills, with proven strength in applying root-cause analysis.
- Ability to communicate verbally and in technical writing to all levels of the organization.
- Strong teamwork and interpersonal skills at all levels.
- Dedicated to excellence in one's work; strong organizational skills; detail-oriented and thorough.
- Hands-on experience with support activities; able to create and resolve tickets in Jira, ServiceNow, Azure DevOps.
Requirements
- Strong experience in Snowflake and ADF.
- Experience of working in an Onsite/Offshore model.
- 5+ years of experience in Snowflake and ADF development.
About The Company
Headquartered in New Jersey, US, ValueMomentum is the largest standalone provider of IT Services and Solutions to Insurers. Our industry focus, expertise in technology backed by R&D, and our customer-first approach uniquely position us to deliver the value we promise and drive momentum to our customers' initiatives. ValueMomentum is amongst the top 10 insurance-focused IT services firms in North America by number of customers. Leading insurance firms trust ValueMomentum with their Digital, Data, Core, and IT Transformation initiatives.
Benefits
We at ValueMomentum offer you a congenial environment to work and grow in the company of experienced professionals. Some benefits that are available to you are:
- Competitive compensation package.
- Career Advancement: individual career development, coaching and mentoring programs for professional and leadership skill development.
- Comprehensive training and certification programs.
- Performance Management: goal setting, continuous feedback and year-end appraisal; reward and recognition for extraordinary performers.
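As a small illustration of the Snowflake-plus-ADF style loading described here, a hypothetical sketch using the Snowflake Python connector to run a staged COPY INTO follows; the account, stage, and table names are placeholders, and the files are assumed to have been landed by an upstream ADF copy activity.

```python
# Hypothetical sketch: load staged CSV files into a Snowflake table with COPY INTO.
# Account, credentials, stage, and table names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="myorg-myaccount",
    user="etl_user",
    password="<secret>",
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="RAW",
)

try:
    cur = conn.cursor()
    # Files are assumed to have been landed in the stage (e.g. by an ADF copy activity).
    cur.execute("""
        COPY INTO RAW.ORDERS
        FROM @RAW.ORDERS_STAGE/2024/06/01/
        FILE_FORMAT = (TYPE = CSV FIELD_OPTIONALLY_ENCLOSED_BY = '"' SKIP_HEADER = 1)
        ON_ERROR = 'ABORT_STATEMENT'
    """)
    for file_name, status, rows_parsed, rows_loaded, *_ in cur.fetchall():
        print(file_name, status, rows_loaded, "rows loaded")
finally:
    conn.close()
```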

Posted 1 month ago

Apply

7.0 - 10.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

It's fun to work in a company where people truly BELIEVE in what they are doing! We're committed to bringing passion and customer focus to the business.
Experience: 7 to 10 years
Mandatory Skills: Data Model Design, ER diagrams, Data Warehouse, Data Strategy, hands-on experience in design and architecture for enterprise data applications.
Good to have: Python, PySpark, Databricks, Azure services (ADLS, ADF, ADB).
Good communication and problem-solving skills. Some understanding of the CPG domain.
If you like wild growth and working with happy, enthusiastic over-achievers, you'll enjoy your career with us! Not the right fit? Let us know you're interested in a future opportunity by clicking Introduce Yourself in the top-right corner of the page, or create an account to set up email alerts as new job postings become available that meet your interest!

Posted 1 month ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

K&K Talents is an international recruiting agency that has been providing technical resources globally since 1993. This position is with one of our clients in India, who is actively hiring candidates to expand their teams.
Title: Data Engineer (SQL & ADF)
Location: Gurgaon, India - Hybrid
Employment Type: Full-time Permanent
Notice Period: Immediate
Role:
We are seeking a skilled and proactive SQL + ADF Developer to join our client's data engineering team. The ideal candidate will have strong hands-on experience in SQL development, Azure Data Factory (ADF), and working knowledge of AWS cloud services. You will be responsible for building and maintaining scalable data integration solutions that support our business intelligence and analytics needs.
Responsibilities:
- Develop, optimize, and maintain complex SQL queries, stored procedures, and scripts for large-scale data operations.
- Design and implement data pipelines using Azure Data Factory (ADF) for ETL/ELT processes.
- Integrate and move data between on-premise and cloud-based sources (Azure/AWS).
- Work with AWS services (e.g., S3, RDS, Glue, Lambda) for hybrid-cloud data workflows.
- Collaborate with data analysts, architects, and business teams to understand data requirements.
- Monitor, debug, and optimize ADF pipelines for performance and reliability.
- Document data flows, logic, and pipeline configurations for operational transparency.
- Participate in code reviews and follow data engineering best practices.
Required Skills:
- Experience in SQL development, including performance tuning and stored procedures.
- Hands-on experience with Azure Data Factory (ADF) and building data pipelines.
- Working experience with AWS cloud services for data storage or movement.
- Experience with relational databases such as SQL Server, PostgreSQL, or MySQL.
- Good understanding of data integration concepts, scheduling, and monitoring.
- Strong problem-solving and analytical skills.
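For the hybrid-cloud data movement this role mentions (AWS S3 alongside Azure), here is a minimal, hypothetical sketch copying an object from S3 into Azure Blob Storage with boto3 and azure-storage-blob; the bucket, container, key, and credential details are placeholders.

```python
# Hypothetical sketch: copy one object from AWS S3 into Azure Blob Storage.
# Bucket, container, key names, and credentials are placeholders.
import os

import boto3
from azure.storage.blob import BlobServiceClient

S3_BUCKET = "my-source-bucket"
S3_KEY = "exports/orders/2024-06-01.parquet"
AZURE_CONTAINER = "landing"

# boto3 picks up credentials from the environment or an instance profile.
s3 = boto3.client("s3")
body = s3.get_object(Bucket=S3_BUCKET, Key=S3_KEY)["Body"].read()  # fine for small files

blob_service = BlobServiceClient.from_connection_string(
    os.environ["AZURE_STORAGE_CONNECTION_STRING"]
)
blob_client = blob_service.get_blob_client(container=AZURE_CONTAINER, blob=S3_KEY)
blob_client.upload_blob(body, overwrite=True)

print(f"Copied s3://{S3_BUCKET}/{S3_KEY} to container '{AZURE_CONTAINER}'")
```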

Posted 1 month ago

Apply

0 years

0 Lacs

India

Remote

Required Skills: YOE: 8+
Mode of work: Remote
Design, develop, modify, and test software applications for the healthcare industry in an agile environment. Duties include:
- Develop, support/maintain and deploy software to support a variety of business needs
- Provide technical leadership in the design, development, testing, deployment and maintenance of software solutions
- Design and implement platform and application security for applications
- Perform advanced query analysis and performance troubleshooting
- Coordinate with senior-level stakeholders to ensure the development of innovative software solutions to complex technical and creative issues
- Re-design software applications to improve maintenance cost, testing functionality, platform independence and performance
- Manage user stories and project commitments in an agile framework to rapidly deliver value to customers
- Deploy and operate software solutions using a DevOps model
Required skills: Azure Delta Lake, ADF, Databricks, PySpark, Oozie, Airflow, Big Data technologies (HBase, Hive), CI/CD (GitHub/Jenkins)
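Given the mix of Hive-era and Delta Lake tooling in this stack, a minimal, hypothetical sketch of copying a Hive table into a partitioned Delta table with PySpark follows; the database, table, and path names are placeholders.

```python
# Hypothetical sketch: copy a Hive table into a partitioned Delta table on ADLS.
# Database, table, and path names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import to_date

spark = (
    SparkSession.builder.appName("hive-to-delta")
    .enableHiveSupport()
    .getOrCreate()
)

source = spark.table("claims_db.claims_raw")

(
    source.withColumn("claim_date", to_date("claim_ts"))
    .write.format("delta")
    .mode("overwrite")
    .partitionBy("claim_date")
    .save("abfss://bronze@mylake.dfs.core.windows.net/claims_raw")
)
```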

Posted 1 month ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Greetings from TCS!
Job Title: Data Scientist / AI/ML Engineer
Required Skillset: AI/ML
Location: Hyderabad, Kolkata, Delhi, Chennai (last option)
Experience Range: 6-10 years
Job Description
Must-Have: AI/ML, Azure ML Studio, AI/ML on Databricks, Python and CI/CD DevOps.
- Supervised and unsupervised ML and predictive analytics using Python
- Feature generation through data exploration and SME requirements
- Relational database querying
- Applying computational algorithms and statistical methods to structured and unstructured data
- Communicating results through data visualizations
- Programming languages: Python, PySpark
- Big Data technologies: Spark with PySpark
- Cloud technologies: Azure (ADF, Databricks, Storage Account usage, WebApp, Key Vault, SQL Server, Function App, Logic App, Synapse, Azure Machine Learning, Azure DevOps)
- RBAC maintenance for Azure roles
- GitHub branching and management
- Terraform scripting for Azure IAA
- Optional: GCP (BigQuery, Dataproc, Cloud Storage)
Thanks & Regards,
Ria Aarthi A.
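To illustrate the "supervised ML with Python" requirement in general terms, here is a minimal, hypothetical scikit-learn sketch on synthetic data; it is not tied to any TCS project or dataset.

```python
# Hypothetical sketch: a basic supervised classification workflow with scikit-learn.
# Uses synthetic data; nothing here reflects an actual client dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5_000, n_features=20, n_informative=8, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

print(classification_report(y_test, model.predict(X_test)))
```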

Posted 1 month ago

Apply

2.0 years

0 Lacs

Kochi, Kerala, India

On-site

Role Description
Job Summary: We are seeking an experienced ADF Developer to design, build, and maintain data integration solutions using Azure Data Factory, with exposure to Azure Databricks (ADB). The ideal candidate will have hands-on expertise in ETL pipelines, data engineering, and Azure cloud services to support enterprise data initiatives.
Key Responsibilities
- Design and develop scalable ETL pipelines using ADF.
- Integrate ADB for advanced data transformation tasks.
- Optimize and troubleshoot ADF pipelines and queries (SQL, Python, Scala).
- Implement robust data validation, error handling, and performance tuning.
- Collaborate with data architects, analysts, and DevOps teams.
- Maintain technical documentation and support ongoing solution improvements.
Required Qualifications
- Bachelor's/Master's in Computer Science or a related field.
- 2+ years of hands-on ADF experience.
- Strong skills in Python, SQL, and/or Scala.
- Familiarity with ADB and Azure cloud services.
- Solid knowledge of ETL, data warehousing, and performance optimization.
Preferred
- Microsoft Azure Data Engineer certification.
- Exposure to Spark, Hadoop, Git, Agile practices, and domain-specific projects (finance, healthcare, retail).
- Understanding of data governance and compliance.
Skills: ADF, ADB, DataStage
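For the data validation and error handling responsibility above, a minimal, hypothetical PySpark sketch splitting incoming records into valid and quarantined sets follows; the column names and storage paths are placeholders.

```python
# Hypothetical sketch: route invalid records to a quarantine path before loading.
# Column names and storage paths are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("adf-validation-step").getOrCreate()

df = spark.read.parquet("abfss://landing@mylake.dfs.core.windows.net/customers/2024-06-01/")

# Basic rules: key and signup_date must be present, email must contain '@'.
valid_cond = (
    col("customer_id").isNotNull()
    & col("signup_date").isNotNull()
    & col("email").isNotNull()
    & col("email").contains("@")
)

valid = df.where(valid_cond)
invalid = df.exceptAll(valid)   # everything that failed at least one rule

valid.write.mode("append").format("delta").save(
    "abfss://silver@mylake.dfs.core.windows.net/customers")
invalid.write.mode("append").format("delta").save(
    "abfss://quarantine@mylake.dfs.core.windows.net/customers")

print(f"loaded={valid.count()} quarantined={invalid.count()}")
```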

Posted 1 month ago

Apply

7.0 years

0 Lacs

Kochi, Kerala, India

On-site

Role Description
Job Summary: We are seeking a seasoned ADF Developer to design, implement, and optimize data integration solutions using Azure Data Factory (ADF) as the primary tool, with added experience in Azure Databricks (ADB) as a plus. The ideal candidate has strong ETL, data engineering, and cloud expertise within the Azure ecosystem.
Key Responsibilities
- Design and develop ETL pipelines using ADF; integrate ADB for complex transformations.
- Write optimized Python, SQL, or Scala code for large-scale data processing.
- Configure ADF pipelines, datasets, linked services, and triggers.
- Ensure high data quality through robust validation, testing, and error handling.
- Optimize pipeline and query performance; troubleshoot issues proactively.
- Collaborate with data architects, analysts, and DevOps teams.
- Maintain clear documentation of pipeline logic and data flows.
- Support users and ensure minimal disruption to business operations.
Required Skills
- 7+ years of hands-on ADF experience.
- Strong in Python, SQL, and/or Scala.
- Experience with ETL, data modeling, and Azure cloud tools.
- Familiarity with Azure Databricks.
- Excellent problem-solving and communication skills.
Preferred
- Microsoft Azure Data Engineer Associate certification.
- Experience with Spark, Hadoop, Git, Agile, and data governance.
- Domain exposure: finance, healthcare, or retail.
Skills: ADF, ADB, DataStage

Posted 1 month ago

Apply

7.0 years

0 Lacs

Indore, Madhya Pradesh, India

On-site

About The Job
About Beyond Key: We are a Microsoft Gold Partner and a Great Place to Work-certified company. "Happy Team Members, Happy Clients" is a principle we hold dear. We are an international IT consulting and software services firm committed to providing cutting-edge services and products that satisfy our clients' global needs. Our company was established in 2005, and since then we've expanded our team to include more than 350 talented, skilled software professionals. Our clients come from the United States, Canada, Europe, Australia, the Middle East, and India, and we create and design IT solutions for them. If you need any more details, you can find them at https://www.beyondkey.com/about.
Job Title: Senior Data Engineer (Power BI, ADF & MS Fabric)
Experience: 7+ years
Location: Indore / Pune (Hybrid/Onsite)
Job Type: Full-time
Open Positions: 1
Key Responsibilities
- Design, develop, and maintain interactive Power BI dashboards and reports with advanced DAX, Power Query, and custom visuals.
- Build and optimize end-to-end data solutions using Microsoft Fabric (OneLake, Lakehouse, Data Warehouse).
- Develop and automate ETL/ELT pipelines using Azure Data Factory (ADF) and Fabric Data Pipelines.
- Architect and manage modern data warehousing solutions (star/snowflake schema) using Fabric Warehouse, Azure Synapse, or SQL Server.
- Implement data modeling, performance tuning, and optimization for large-scale datasets.
- Collaborate with business teams to translate requirements into scalable Fabric-based analytics solutions.
- Ensure data governance, security, and compliance across BI platforms.
- Mentor junior team members on Fabric, Power BI, and cloud data best practices.
Required Skills & Qualifications
- 7+ years of hands-on experience in Power BI, SQL, data warehousing, and ETL/ELT.
- Strong expertise in Microsoft Fabric (Lakehouse, Warehouse, ETL workflows, Delta Lake).
- Proficient in Azure Data Factory (ADF) for orchestration and data integration.
- Advanced SQL (query optimization, stored procedures, partitioning).
- Experience with data warehousing (dimensional modeling, SCDs, fact/dimension tables).
- Knowledge of Power BI Premium/Fabric capacity, deployment pipelines, and DAX patterns.
- Familiarity with Databricks, PySpark, or Python (for advanced analytics) is a plus.
- Strong problem-solving and stakeholder management skills.
Preferred Qualifications
- Microsoft certifications (PL-300: Power BI, DP-600: Fabric Analytics Engineer).
- Experience with Azure DevOps (CI/CD for Fabric/Power BI deployments).
- Domain knowledge in BFSI, Retail, or Manufacturing.

Posted 1 month ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

JD for a Databricks Data Engineer
Key Responsibilities:
- Design, develop, and maintain high-performance data pipelines using Databricks and Apache Spark.
- Implement medallion architecture (Bronze, Silver, Gold layers) for efficient data processing.
- Optimize Delta Lake tables, partitioning, Z-ordering, and performance tuning in Databricks.
- Develop ETL/ELT processes using PySpark, SQL, and Databricks Workflows.
- Manage Databricks clusters, jobs, and notebooks for batch and real-time data processing.
- Work with Azure Data Lake, AWS S3, or GCP Cloud Storage for data ingestion and storage.
- Implement CI/CD pipelines for Databricks jobs and notebooks using DevOps tools.
- Monitor and troubleshoot performance bottlenecks, cluster optimization, and cost management.
- Ensure data quality, governance, and security using Unity Catalog, ACLs, and encryption.
- Collaborate with Data Scientists, Analysts, and Business Teams to deliver insights.
Required Skills & Experience:
- 5+ years of hands-on experience in Databricks, Apache Spark, and Delta Lake.
- Strong SQL, PySpark, and Python programming skills.
- Experience in Azure Data Factory (ADF), AWS Glue, or GCP Dataflow.
- Expertise in performance tuning, indexing, caching, and parallel processing.
- Hands-on experience with Lakehouse architecture and Databricks SQL.
- Strong understanding of data governance, lineage, and cataloging (e.g., Unity Catalog).
- Experience with CI/CD pipelines (Azure DevOps, GitHub Actions, or Jenkins).
- Familiarity with Airflow, Databricks Workflows, or orchestration tools.
- Strong problem-solving skills with experience in troubleshooting Spark jobs.
Nice to Have:
- Hands-on experience with Kafka, Event Hubs, or real-time streaming in Databricks.
- Certifications in Databricks, Azure, AWS, or GCP.
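To make the medallion-architecture responsibility concrete, here is a minimal, hypothetical PySpark sketch promoting deduplicated bronze records into a silver Delta table with MERGE; the table paths, key column, and timestamp column are placeholders.

```python
# Hypothetical sketch: upsert the latest bronze records into a silver Delta table.
# Paths, key columns, and timestamp columns are placeholders.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, row_number
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("bronze-to-silver").getOrCreate()

BRONZE = "abfss://bronze@mylake.dfs.core.windows.net/orders"
SILVER = "abfss://silver@mylake.dfs.core.windows.net/orders"

# Keep only the newest version of each order_id from the bronze batch.
w = Window.partitionBy("order_id").orderBy(col("updated_at").desc())
updates = (
    spark.read.format("delta").load(BRONZE)
    .withColumn("rn", row_number().over(w))
    .where("rn = 1")
    .drop("rn")
)

silver = DeltaTable.forPath(spark, SILVER)
(
    silver.alias("s")
    .merge(updates.alias("u"), "s.order_id = u.order_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```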

Posted 1 month ago

Apply

0.0 - 5.0 years

0 Lacs

Pune, Maharashtra

On-site

Job details
Employment Type: Full-Time
Location: Pune, Maharashtra, India
Job Category: Information Systems
Job Number: WD30237233
Job Description
At Johnson Controls, we're shaping the future to create a world that's safe, comfortable, and sustainable. Our global team creates innovative, integrated solutions that make cities more...
Software Developer – Data Solutions (ETL)
Johnson Controls is seeking an experienced ETL Developer responsible for designing, implementing, and managing ETL processes. The successful candidate will work closely with data architects, business analysts, and stakeholders to ensure data is extracted, transformed, and loaded accurately and efficiently for reporting and analytics purposes.
Key Responsibilities
- Design, develop, and implement ETL processes to extract data from various sources
- Transform data to meet business requirements and load it into data warehouses or databases
- Optimize ETL processes for performance and reliability
- Collaborate with data architects and analysts to define data requirements and ensure data quality
- Monitor ETL jobs and resolve issues as they arise
- Create and maintain documentation of ETL processes and workflows
- Participate in data modeling and database design
Qualifications
- Bachelor's degree in Computer Science, Information Technology, or a related field
- 3 to 5 years of experience as an ETL Developer or similar role
- Strong knowledge of ETL tools: ADF, Synapse. Snowflake experience is mandatory. Multi-cloud experience is a plus.
- Proficient in SQL for data manipulation and querying
- Experience with data warehousing concepts and methodologies
- Knowledge of scripting languages (e.g., Python, Shell) is a plus
- Excellent problem-solving skills and attention to detail
- Strong communication skills to collaborate with technical and non-technical stakeholders
- Candidates should be flexible / willing to work across this delivery landscape, which includes but is not limited to Agile Applications Development, Support and Deployment
- Expert-level experience with Azure Data Lake, Azure Data Factory, Synapse, Azure Blob, Azure Storage Explorer, Snowflake, Snowpark
What we offer
Competitive salary and a comprehensive benefits package, including health, dental, and retirement plans. Opportunities for continuous professional development, training programs, and career advancement within the company. A collaborative, innovative, and inclusive work environment that values diversity and encourages creative problem-solving.

Posted 1 month ago

Apply

0.0 - 5.0 years

0 Lacs

Pune, Maharashtra

On-site

Job details
Employment Type: Full-Time
Location: Pune, Maharashtra, India
Job Category: Information Systems
Job Number: WD30237243
Job Description
At Johnson Controls, we're shaping the future to create a world that's safe, comfortable, and sustainable. Join us and be part of a team that prioritizes innovation and customer satisfaction.
What you will do:
- Design, develop, and implement ETL processes to extract data from various sources
- Transform data to meet business requirements and load it into data warehouses or databases
- Optimize ETL processes for performance and reliability
- Collaborate with data architects and analysts to define data requirements and ensure data quality
- Monitor ETL jobs and resolve issues as they arise
- Create and maintain documentation of ETL processes and workflows
- Participate in data modeling and database design requirements, and provide appropriate solutions
What we look for:
Required:
- Bachelor's degree in Computer Science, Information Technology, or a related field
- 3 to 5 years of experience as an ETL Developer or similar role
- Strong knowledge of ETL tools: ADF, Synapse. Snowflake experience is mandatory. Multi-cloud experience is a plus.
- Proficient in SQL for data manipulation and querying
- Experience with data warehousing concepts and methodologies
- Knowledge of scripting languages (e.g., Python, Shell) is a plus
- Excellent problem-solving skills and attention to detail
- Strong communication skills to collaborate with technical and non-technical stakeholders
- Candidates should be flexible / willing to work across this delivery landscape, which includes but is not limited to Agile Applications Development, Support and Deployment
- Expert-level experience with Azure Data Lake, Azure Data Factory, Synapse, Azure Blob, Azure Storage Explorer, Snowflake, Snowpark
What we offer:
Competitive salary and a comprehensive benefits package, including health, dental, and retirement plans. Opportunities for continuous professional development, training programs, and career advancement within the company. A collaborative, innovative, and inclusive work environment that values diversity and encourages creative problem-solving.

Posted 1 month ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

Skills: Oracle CC&B, Oracle Cloud, Java, PL/SQL, Oracle ADF, Oracle JET
Greetings from Colan Infotech!
Job Title: Oracle CC&B Developer & Administrator (OCI)
Location: Remote
Department: IT / Enterprise Applications
Job Summary
We are looking for a highly skilled Oracle Customer Care & Billing (CC&B) Developer & Administrator with experience managing CC&B on Oracle Cloud Infrastructure (OCI). This role is critical to supporting and enhancing our utility billing platform through custom development, system upgrades, issue resolution, and infrastructure management. The ideal candidate is technically strong, detail-oriented, and experienced in both back-end and front-end CC&B development.
Key Responsibilities
Development & Customization
- Design and develop enhancements and custom modules for Oracle CC&B using Java, PL/SQL, Oracle ADF, and Oracle JET.
- Implement business rules, workflows, batch processes, and UI changes based on stakeholder requirements.
- Build RESTful APIs and integrations with internal and third-party systems (e.g., MDM, GIS, payment gateways).
Upgrades & Maintenance
- Lead full-lifecycle CC&B upgrades, including planning, testing, migration, and production deployment.
- Apply and test Oracle patches and interim fixes; resolve any post-patch issues.
OCI Administration
- Manage CC&B environments hosted on Oracle Cloud Infrastructure (OCI), including Compute, Autonomous Database, Load Balancers, and Object Storage.
- Configure and monitor system performance using Oracle Enterprise Manager (OEM).
- Implement backup, recovery, and high-availability strategies aligned with security best practices.
Support & Issue Resolution
- Provide daily operational support and issue resolution for the CC&B application and infrastructure.
- Perform root cause analysis and deliver long-term fixes for recurring issues.
- Monitor, tune, and optimize system performance (JVM, SQL, WebLogic).
Documentation & Collaboration
- Maintain detailed documentation including technical specs, runbooks, and support procedures.
- Collaborate with QA, infrastructure, and business teams to ensure smooth operations and releases.
- Use Bitbucket for version control and code collaboration.
Required Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 5+ years of hands-on experience with Oracle CC&B development and administration.
- Proven experience with CC&B upgrades, patching, and environment management.
- Strong development skills in Java (8+), PL/SQL, Oracle ADF, and Oracle JET.
- Solid experience with OCI components including Compute, Autonomous Database, IAM, and networking.
- Proficiency with Oracle Enterprise Manager (OEM) for monitoring and diagnostics.
- Experience using Bitbucket or similar version control platforms.
- Strong problem-solving and communication skills.
- Ability to work both independently and as part of a cross-functional team.
Preferred Qualifications
- Experience with Oracle SOA Suite or Oracle Integration Cloud.
- Knowledge of utility billing processes and customer service workflows.
- Experience working in agile or hybrid project environments.
Interested candidates, send your updated resume to kumudha.r@colanonline.com

Posted 1 month ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Role: Azure Data Engineer
Experience: Minimum 3-5 years
Location: Spaze ITech Park, Sector-49, Gurugram
Working Days: Monday to Friday (9:00 AM - 6:00 PM)
Joining: < 15 days

About Us
Panamoure is a UK-based group with an offshore centre in Gurgaon, India. We are known as the ultimate Business and Technology Change partner for our clients, including PE groups and ambitious mid-market businesses. Panamoure is a fast-paced and dynamic management consultancy delivering Business and Technology change services to the UK's fastest-growing companies. Our ability to deliver exceptional quality to our clients has seen us grow rapidly over the last 36 months, and we have ambitious plans to scale substantially further. As part of this growth we are looking to expand both our UK and India teams with bright, ambitious and talented individuals who want to learn and grow with the business.

Primary Skills
The Azure Data Engineer will be responsible for developing, maintaining, and optimizing data pipelines and SQL databases using Azure Data Factory (ADF), Microsoft Fabric and other Azure services. The role requires expertise in SQL Server, ETL/ELT processes, and data modeling to support business intelligence and operational applications. The ideal candidate will collaborate with cross-functional teams to deliver reliable, scalable, and high-performing data solutions.

Key Responsibilities
- Design, develop, and manage SQL databases, tables, stored procedures, and T-SQL queries.
- Develop and maintain Azure Data Factory (ADF) pipelines to automate data ingestion, transformation, and integration.
- Build and optimize ETL/ELT processes to transfer data between Azure Data Lake, SQL Server, and other systems (a minimal illustrative sketch follows this posting).
- Design and implement Microsoft Fabric Lakehouses for structured and unstructured data storage.
- Build scalable ETL/ELT pipelines to move and transform data across Azure Data Lake, SQL Server, and external data sources.
- Develop and implement data modeling strategies using star schema, snowflake schema, and dimensional models to support analytics use cases.
- Integrate Azure Data Lake Storage (ADLS) with Microsoft Fabric for scalable, secure, and cost-effective data storage.
- Monitor, troubleshoot, and optimize data pipelines using Azure Monitor, Log Analytics, and Fabric monitoring capabilities.
- Ensure data integrity, consistency, and security following data governance frameworks such as Azure Purview.
- Collaborate with DevOps teams to implement CI/CD pipelines for automated data pipeline deployment.
- Utilize Azure Monitor, Log Analytics, and Application Insights for pipeline monitoring and performance optimization.
- Stay updated on Azure Data Services and Microsoft Fabric innovations, recommending enhancements for performance and scalability.

Requirements
- 4+ years of experience in data engineering with strong expertise in SQL development.
- Proficiency in SQL Server, T-SQL, and query optimization techniques.
- Hands-on experience with Azure Data Factory (ADF), Azure Synapse Analytics, and Azure SQL Database.
- Solid understanding of ETL/ELT processes, data integration patterns, and data transformation.
- Practical experience with Microsoft Fabric components: Fabric Dataflows for self-service data preparation; Fabric Lakehouses for unified data storage; Fabric Synapse Real-Time Analytics for streaming data insights; Fabric Direct Lake mode with Power BI for optimized performance.
- Strong understanding of Azure Data Lake Storage (ADLS) for efficient data management.
- Proficiency in Python or Scala for data transformation tasks.
- Experience with Azure DevOps, Git, and CI/CD pipeline automation.
- Knowledge of data governance practices, including data lineage, sensitivity labels, and RBAC.
- Experience with Infrastructure-as-Code (IaC) using Terraform or ARM templates.
- Understanding of data security protocols such as data encryption and network security groups (NSGs).
- Familiarity with streaming services like Azure Event Hub or Kafka is a plus.
- Excellent problem-solving, communication, and team collaboration skills.
- Azure Data Engineer Associate (DP-203) and Microsoft Fabric Analytics certifications are desirable.

What We Offer
- Opportunity to work with modern data architectures and Microsoft Fabric innovations.
- Competitive salary and benefits package, tailored to experience and qualifications.
- Opportunities for professional growth and development in a supportive and collaborative environment.
- A culture that values diversity, creativity, and a commitment to excellence.

Benefits And Perks
- Provident Fund
- Health Insurance
- Flexible Timing
- Office Lunch Provided

How To Apply
Interested candidates should submit their resume and a cover letter detailing their data engineering experience, SQL expertise, and familiarity with Microsoft Fabric to hr@panamoure.com. We look forward to adding a skilled Azure Data Engineer to our team! (ref:hirist.tech)
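For context on the kind of SQL/ELT work this posting describes, here is a minimal, purely illustrative sketch of an incremental load that an ADF pipeline might trigger against Azure SQL / SQL Server. It is not part of the posting; the server, database, table, column and credential names are all hypothetical placeholders.

```python
# Minimal sketch: a T-SQL incremental upsert of the kind an ADF pipeline might
# invoke via a stored-procedure or script activity. All names are hypothetical.
import pyodbc

MERGE_SQL = """
MERGE INTO dbo.DimCustomer AS tgt
USING staging.Customer AS src
    ON tgt.CustomerID = src.CustomerID
WHEN MATCHED THEN
    UPDATE SET tgt.Name = src.Name,
               tgt.Email = src.Email,
               tgt.UpdatedAt = SYSUTCDATETIME()
WHEN NOT MATCHED BY TARGET THEN
    INSERT (CustomerID, Name, Email, UpdatedAt)
    VALUES (src.CustomerID, src.Name, src.Email, SYSUTCDATETIME());
"""

def run_incremental_load(conn_str: str) -> None:
    """Execute the MERGE against the warehouse database via ODBC."""
    with pyodbc.connect(conn_str) as conn:
        conn.execute(MERGE_SQL)  # Connection.execute opens a cursor implicitly
        conn.commit()

if __name__ == "__main__":
    # Connection details are placeholders; in practice they would come from an
    # ADF linked service or Key Vault rather than being hard-coded.
    run_incremental_load(
        "DRIVER={ODBC Driver 18 for SQL Server};"
        "SERVER=myserver.database.windows.net;DATABASE=mydb;"
        "UID=etl_user;PWD=<secret>"
    )
```

In an ADF-orchestrated design the pipeline typically supplies only parameters (watermarks, batch IDs) and lets the database or a Databricks/Fabric notebook do the heavy transformation, which keeps the pipeline itself thin and testable.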

Posted 1 month ago

Apply

8.0 years

0 Lacs

India

Remote

Hi, please go through the requirements below and, if interested, forward your resume along with your contact information to raja@covetitinc.com

Role: Data Engineer
Location: Remote

JOB PURPOSE
This position will help design, develop and provide operational support for data integration/ETL projects and activities. He or she will also guide and mentor other data engineers; coordinate, assign and oversee tasks related to ETL projects; and work with functional analysts, end users and other BI team members to design effective ETL solutions and data integration pipelines.

ESSENTIAL FUNCTIONS AND RESPONSIBILITIES
The following are the essential functions of this position. This position may be responsible for performing additional duties and tasks as needed and assigned.
- Technical design, development, testing and documentation of Data Warehouse/ETL projects.
- Perform data profiling and logical/physical data modelling to build new ETL designs and solutions.
- Develop, implement and deploy ETL solutions to update the data warehouse and data marts.
- Maintain quality control, document technical specs and unit testing to ensure accuracy and quality of BI data.
- Implement, stabilize and establish a DevOps process for version control and deployment from non-prod to prod environments.
- Troubleshoot, debug and diagnose ETL issues.
- Provide production support and work with other IT team members and end users to resolve data refresh issues; provide off-hours operational support as needed.
- Performance tuning and enhancement of SQL and ETL processes, and preparation of related technical documentation.
- Work with the offshore team to coordinate development work and operational support.
- Keep abreast of the latest ETL technologies and plan their effective use.
- Be a key player in planning the migration of our EDW system to a modern global data warehouse architecture.
- Assess and implement new EDW/cloud technologies to help evolve the EDW architecture for efficiency and performance.
- Communicate clearly and professionally with users, peers, and all levels of management, in both written and verbal form.
- Lead ETL tasks and activities related to BI projects; assign, coordinate and follow up on activities to meet ETL project timelines; follow through and ensure proper closure of service request issues.
- Help with AI/ML projects as assigned.
- Perform code reviews on ETL/report changes where appropriate.
- Coordinate with the DBA team on migration, configuration and tuning of ETL code.
- Act as a mentor for other data engineers in the BI team.
- Adhere to the processes and work policies defined by management.
- Perform other duties as needed.

MINIMUM QUALIFICATIONS
The requirements listed below are representative of the education, knowledge, skill and/or ability required for this position.

Education/Certifications:
Requires a minimum of 8 years of related experience with a Bachelor's degree in computer science, MIS, data science or a related field; or 6 years and a Master's degree.

Experience, Skills, Knowledge and/or Abilities:
- Understanding of ERP business processes (Order to Cash, Procure to Pay, Record to Report, etc.), data warehouse and BI concepts, and the ability to apply educational and practical experience to improve business intelligence applications and provide simplified, standardized solutions that achieve the business objectives.
- Expert knowledge of data warehouse architecture; well versed in modern data warehouse concepts, EDW and Data Lake/cloud architecture.
- Expertise in dimensional modeling and star schema designs, including best practices for the use of indexes, partitioning, and data loading (a minimal illustrative sketch follows this posting).
- Advanced experience in SQL, writing stored procedures and tuning SQL, preferably using Oracle PL/SQL.
- Strong experience with data integration using ADF (Azure Data Factory).
- Well versed in database administration tasks and working with DBAs to monitor and resolve SQL/ETL issues and tune performance.
- Experience with DevOps processes in ADF, preferably using GitHub; experience with other version control tools helpful.
- Experience in troubleshooting data warehouse refresh issues and validating BI report data against source systems.
- Excellent communication skills.
- Ability to organize and handle multiple tasks simultaneously.
- Ability to mentor and coordinate activities for other data engineers as needed.

PREFERRED QUALIFICATIONS
The education, knowledge, skills and/or abilities listed below are preferred qualifications in addition to the minimum qualifications stated above.

Additional Experience, Skills, Knowledge and/or Abilities:
- Experience working with Oracle EBS or another major ERP system such as SAP.
- Experience with AI/ML; experience in R, Python, PySpark a plus.
- Experience with cloud EDW technologies such as Databricks, Snowflake, Synapse.
- Experience with Microsoft Fabric, Data Lakehouse concepts and related reporting capabilities.

PHYSICAL REQUIREMENTS / ADVERSE WORKING CONDITIONS
The physical requirements listed in this section include, but are not limited to, the motor/physical abilities, skills, and/or demands required of the position in order to successfully undertake the essential duties and responsibilities of this position. In accordance with the Americans with Disabilities Act (ADA), reasonable accommodations may be made to allow qualified individuals with a disability to perform the essential functions and responsibilities of the position. No additional physical requirements or essential functions for this position.
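As a purely illustrative companion to the dimensional-modeling qualifications above, here is a minimal PySpark sketch of a star-schema fact load with surrogate-key lookups. It is not part of the posting, and the role's actual stack centres on Oracle PL/SQL and ADF; all table and column names are hypothetical placeholders.

```python
# Minimal sketch of a star-schema fact load: staged rows are joined to
# dimension tables to resolve surrogate keys before loading the fact table.
# All table/column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("fact_sales_load").getOrCreate()

stage = spark.table("stg.sales_orders")          # raw staged orders
dim_customer = spark.table("dw.dim_customer")    # carries surrogate key customer_sk
dim_product = spark.table("dw.dim_product")      # carries surrogate key product_sk

fact_sales = (
    stage
    .join(dim_customer, on="customer_id", how="left")
    .join(dim_product, on="product_id", how="left")
    .select(
        "order_id",
        "customer_sk",
        "product_sk",
        F.to_date("order_date").alias("order_date_key"),
        F.col("quantity").cast("int").alias("quantity"),
        F.col("amount").cast("decimal(18,2)").alias("amount"),
    )
)

# Append the resolved rows into the fact table; a production pipeline would
# add idempotency (merge/partition overwrite) and late-arriving-dimension handling.
fact_sales.write.mode("append").saveAsTable("dw.fact_sales")
```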

Posted 1 month ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

We are seeking an experienced and strategic Data Architect to design, build, and optimize scalable, secure, and high-performance data solutions. You will play a pivotal role in shaping our data infrastructure, working with technologies such as Databricks, Azure Data Factory, Unity Catalog, and Spark, while aligning with best practices in data governance, pipeline automation, and performance optimization.

Key Responsibilities:
- Design and develop scalable data pipelines using Databricks and the Medallion architecture (Bronze, Silver, Gold layers); a minimal illustrative sketch follows this posting.
- Architect and implement data governance frameworks using Unity Catalog and related tools.
- Write efficient PySpark and SQL code for data transformation, cleansing, and enrichment.
- Build and manage data workflows in Azure Data Factory (ADF), including triggers, linked services, and integration runtimes.
- Optimize queries and data structures for performance and cost-efficiency.
- Develop and maintain CI/CD pipelines using GitHub for automated deployment and version control.
- Collaborate with cross-functional teams to define data strategies and drive data quality initiatives.
- Implement best practices for DevOps, CI/CD, and infrastructure-as-code in data engineering.
- Troubleshoot and resolve performance bottlenecks across Spark, ADF, and Databricks pipelines.
- Maintain comprehensive documentation of architecture, processes, and workflows.

Requirements:
- Bachelor's or master's degree in computer science, information systems, or a related field.
- Proven experience as a Data Architect or Senior Data Engineer.
- Strong knowledge of Databricks, Azure Data Factory, Spark (PySpark), and SQL.
- Hands-on experience with data governance, security frameworks, and catalog management.
- Proficiency in cloud platforms (preferably Azure).
- Experience with CI/CD tools and version control systems like GitHub.
- Strong communication and collaboration skills.
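For illustration only, here is a minimal PySpark sketch of the Bronze-to-Silver Medallion step and Unity Catalog three-level naming (catalog.schema.table) referenced above. It is not part of the posting; all catalog, schema, table and column names are hypothetical placeholders.

```python
# Minimal sketch of a Bronze -> Silver Medallion step on Databricks.
# Names follow Unity Catalog's catalog.schema.table convention; all are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: raw events as they landed from the source system.
bronze = spark.table("main.bronze.orders_raw")

# Silver: cleansed, deduplicated, typed records ready for modeling.
silver = (
    bronze
    .dropDuplicates(["order_id"])
    .filter(F.col("order_id").isNotNull())
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .withColumn("_ingested_at", F.current_timestamp())
)

silver.write.mode("overwrite").saveAsTable("main.silver.orders")
```

Keeping the Bronze layer untouched and pushing all cleansing into Silver is what makes the layered design reproducible: downstream Gold tables can always be rebuilt from raw data.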

Posted 1 month ago

Apply