
1611 ADF Jobs - Page 27

JobPe aggregates job listings for easy access, but you apply directly on the original job portal.

0 years

0 Lacs

Noida

On-site

Req ID: 321505 NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Test Analyst to join our team in Bangalore, Karnataka (IN-KA), India (IN). Job Duties: Understand business requirements and develop test cases. Work with the tech team and the client to validate and finalise test cases. Use Jira or an equivalent test management tool to record test cases, expected results, and outcomes, and to assign defects. Execute tests during the SIT and UAT phases. Test reporting and documentation. Basic knowledge of Snowflake, SQL, ADF (optional), and Fivetran (optional). Minimum Skills Required: Test case development; Jira knowledge for recording test cases, expected results, and outcomes, and assigning defects; test reporting and documentation; basic knowledge of Snowflake, SQL, ADF (optional), and Fivetran (optional). About NTT DATA NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here. If you'd like more information on your EEO rights under the law, please click here. For Pay Transparency information, please click here.

Posted 1 month ago

Apply

4.0 - 7.0 years

15 - 30 Lacs

Hyderabad

Remote

Experience Required: 5 to 7 years. Mode of work: Remote (mandatory). Primary Skills: Azure Data Factory, SQL, Python/Scala. Notice Period: Immediate joiners / permanent (able to join by July 4th, 2025). 5 to 7 years of experience with Big Data technologies. Experience with the Microsoft Azure cloud platform. Design, implement, and maintain data pipelines for data ingestion, processing, and transformation in Azure Data Factory. Create and maintain data storage solutions including Azure SQL Database, Azure Data Lake, and Azure Blob Storage. Create and maintain ETL (Extract, Transform, Load) operations using Azure Data Factory or comparable technologies. Implement data validation and cleansing procedures to ensure the quality, integrity, and dependability of the data. Improve the scalability, efficiency, and cost-effectiveness of data pipelines. Monitor and resolve data pipeline problems to guarantee consistency and availability of the data. Interested candidates can share their resume, or refer a friend, to Pavithra.tr@enabledata.com for a quick response.

Posted 1 month ago

Apply

5.0 years

0 Lacs

India

Remote

This is a full-time remote contract position. You may need to provide a few hours of overlap with US time zones. You may need to go through a background verification process in which your claimed experience, education certificates, and references will be verified, so please do not apply if you are not comfortable with this verification process. This is a client-facing role, hence excellent communication in English is a must. Minimum Experience: 5+ years. About the role: Our client is about to start an ERP replacement. They plan to move away from the AWS platform to an Azure data lake feeding Snowflake. We need a resource who can be a Snowflake thought leader and who has Microsoft Azure data engineering expertise. Key Responsibilities: Data Ingestion & Orchestration (Transformation & Cleansing): Design and maintain Azure Data Factory (ADF) pipelines to extract data from sources like ERPs (SAP, Oracle), UKG, SharePoint, and REST APIs; configure scheduled/event-driven loads for automated data ingestion; transform and cleanse data by developing logic in ADF for Bronze-to-Silver layer transformations; implement data quality checks to ensure accuracy and consistency. Snowflake Data Warehousing: Design and optimize data models, creating tables, views, and stored procedures for the Silver and Gold layers; implement ETL/ELT processes within Snowflake to transform curated Silver data into highly optimized analytical Gold structures; tune queries and data loads for performance. Data Lake Management: Implement Azure Data Lake Gen2 solutions following the medallion architecture (Bronze, Silver); manage partitioning, security, and governance to ensure efficient and secure data storage. Collaboration & Documentation: Partner with stakeholders to convert data needs into technical solutions, document pipelines and models, and uphold best practices through code reviews. Monitoring & Support: Track pipeline performance, resolve issues, and deploy alerting/logging for proactive data integrity and issue detection. Data Visualization and Scripting: Proficient in tools like Power BI, DAX, and Power Query for creating insightful reports; skilled in Python for data processing and analysis to support data engineering tasks. Required Skills & Qualifications: 5+ years of experience in data engineering, data warehousing, or ETL development. Microsoft Azure proficiency: Azure Data Factory (ADF) experience designing, developing, and deploying complex data pipelines; Azure Data Lake Storage Gen2 hands-on experience with data ingestion, storage, and organization. Expertise in Snowflake data warehousing and ETL/ELT: understanding of Snowflake architecture; SQL proficiency for manipulation and querying; experience with Snowpipe, tasks, streams, and stored procedures. Strong understanding of data warehousing concepts and ETL/ELT principles. Data Formats & Integration: experience with various data formats (e.g., Parquet, CSV, JSON) and data integration patterns. Data Visualization: experience with Power BI, DAX, Power Query. Scripting: Python for data processing and analysis.
Soft Skills: Problem-solving, attention to detail, communication, and collaboration. Nice-to-Have Skills: Version control (e.g., Git), Agile/Scrum methodologies, and data governance and security best practices.
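To make the Silver-to-Gold ELT work described in this listing concrete, here is a minimal, hedged sketch using the snowflake-connector-python package; the warehouse, database, schema, table, and column names are hypothetical placeholders, not details from the posting.

```python
# Minimal sketch: promote curated Silver data into a Gold analytical table in Snowflake.
# All object names (ANALYTICS_WH, EDW, SILVER.ORDERS, GOLD.DAILY_ORDER_SUMMARY) are hypothetical.
import os
import snowflake.connector

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="ANALYTICS_WH",
    database="EDW",
)

merge_sql = """
    MERGE INTO GOLD.DAILY_ORDER_SUMMARY AS tgt
    USING (
        SELECT order_date, customer_id,
               SUM(order_amount) AS total_amount,
               COUNT(*) AS order_count
        FROM SILVER.ORDERS
        GROUP BY order_date, customer_id
    ) AS src
    ON tgt.order_date = src.order_date AND tgt.customer_id = src.customer_id
    WHEN MATCHED THEN UPDATE SET total_amount = src.total_amount, order_count = src.order_count
    WHEN NOT MATCHED THEN INSERT (order_date, customer_id, total_amount, order_count)
        VALUES (src.order_date, src.customer_id, src.total_amount, src.order_count)
"""

cur = conn.cursor()
cur.execute(merge_sql)      # idempotent upsert from the Silver layer into the Gold layer
print(cur.fetchone())       # MERGE returns counts of rows inserted/updated
cur.close()
conn.close()
```

In the architecture the posting describes, ADF pipelines would land raw data in ADLS Gen2 (Bronze) and a Snowflake task or stream would typically schedule a statement like this.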

Posted 1 month ago

Apply

4.0 years

0 Lacs

India

On-site

About PTR Global PTR Global is a leader in providing innovative workforce solutions, dedicated to optimizing talent acquisition and management processes. Our commitment to excellence has earned us the trust of businesses looking to enhance their talent strategies. We cultivate a dynamic and collaborative environment that empowers our employees to excel and contribute to our clients' success. Job Summary We are seeking a highly skilled ETL Developer to join our team in Chennai. The ideal candidate will be responsible for designing, developing, and maintaining ETL processes, as well as data warehouse design and modeling, to support our data integration and business intelligence initiatives. This role requires proficiency in T-SQL, Azure Data Factory (ADF), and SSIS, along with excellent problem-solving and communication skills. Responsibilities Design, develop, and maintain ETL processes to support data integration and business intelligence initiatives. Utilize T-SQL to write complex queries and stored procedures for data extraction and transformation. Implement and manage ETL processes using SSIS (SQL Server Integration Services). Design and model data warehouses to support reporting and analytics needs. Ensure data accuracy, quality, and integrity through effective testing and validation procedures. Collaborate with business analysts and stakeholders to understand data requirements and deliver solutions that meet their needs. Monitor and troubleshoot ETL processes to ensure optimal performance and resolve any issues promptly. Document ETL processes, workflows, and data mappings to ensure clarity and maintainability. Stay current with industry trends and best practices in ETL development, data integration, and data warehousing. Must Haves Minimum 4+ years of experience as an ETL Developer or in a similar role. Proficiency in T-SQL for writing complex queries and stored procedures. Experience with SSIS (SQL Server Integration Services) for developing and managing ETL processes. Knowledge of ADF (Azure Data Factory) and its application in ETL processes. Experience in data warehouse design and modeling. Knowledge of Microsoft's Azure cloud suite, including Data Factory, Data Storage, Blob Storage, Power BI, and Power Automate. Strong problem-solving and analytical skills. Excellent communication and interpersonal skills. Strong attention to detail and commitment to data quality. Bachelor's degree in Computer Science, Information Technology, or a related field is preferred.

Posted 1 month ago

Apply

7.0 years

0 Lacs

India

Remote

Senior Azure Developer (Remote / WFH) Summary: As a Senior Azure Developer, you will lead the design, development, and implementation of complex cloud-based applications on the Microsoft Azure platform. You will provide technical leadership and mentor junior and mid-level developers. Responsibilities: Lead the design and development of cloud-based applications. Collaborate with stakeholders to define project requirements. Write high-quality, scalable, and maintainable code. Conduct code reviews and provide technical guidance. Implement and manage CI/CD pipelines. Ensure the security and performance of applications. Troubleshoot and resolve advanced technical issues. Optimize application architecture and performance. Create and maintain detailed documentation. Stay updated with the latest Azure technologies and industry trends. Qualifications: Bachelor's degree in Computer Science, Information Technology, or a related field. 7+ years of experience in cloud development. Expert understanding of Microsoft Azure services. Proficiency in programming languages such as C#, JavaScript, or Python. Excellent problem-solving and analytical skills. Strong communication and leadership abilities. Experience with Agile methodologies. Preferred Certifications: Microsoft Certified - Azure DevOps Engineer Expert and Microsoft Certified - Azure Solutions Architect Expert. Required Knowledge and Skills: Expert knowledge of Azure services like Azure App Service, Azure Functions, and Azure Storage. Leading the design and architecture of Azure-based applications, ensuring scalability, security, and performance. Proficiency in RESTful APIs and web services. Experience with version control systems like Git. Strong knowledge of SQL and NoSQL databases. In-depth understanding of DevOps practices. Experience with CI/CD pipelines. Strong understanding of networking concepts. Knowledge of security best practices in cloud environments. Ability to write clean, maintainable code. Experience with performance optimization. Hands-on experience writing automated test cases in the NUnit/xUnit/MSTest frameworks. Hands-on experience with Azure containerization services. Hands-on experience with ADF or Synapse. Technologies, Coding Languages, and Methodologies: Microsoft Azure (Key Vault, Service Bus Queues, Storage Queues, Topics, Blob Storage, Azure Container services (Kubernetes, Docker), App Services [Web Apps, Logic Apps, Function Apps], Azure Functions (timer-triggered, durable), Azure AI services); Azure SQL, Cosmos DB; .NET Core (latest version); APIs, APIM; Angular/React; JavaScript, Python; SQL, Azure SQL, Cosmos DB; Azure containerization services (Docker, Kubernetes); ADF or Synapse; NUnit/xUnit/MSTest framework; Git; Agile methodologies; CI/CD pipelines; IaC (Infrastructure as Code) - ARM/Bicep/Terraform; Azure DevOps. Outcomes: Lead the design and development of complex cloud-based applications. Collaborate effectively with stakeholders. Write high-quality and scalable code. Provide technical leadership and mentorship. Implement and manage CI/CD pipelines. Ensure application security and performance. Troubleshoot and resolve advanced technical issues. Optimize application architecture and performance. Create and maintain detailed documentation. Stay updated with Azure technologies and industry trends.
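As a hedged illustration of the Azure SDK work this role lists (Key Vault and Blob Storage both appear in the technology stack), a minimal Python sketch might look like the following; the vault URL, secret name, storage account, and container name are hypothetical.

```python
# Minimal sketch: read a secret from Azure Key Vault and upload a blob.
# Vault URL, secret name, and container below are hypothetical placeholders.
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient
from azure.storage.blob import BlobServiceClient

credential = DefaultAzureCredential()  # works with managed identity, CLI login, etc.

secret_client = SecretClient(vault_url="https://my-app-kv.vault.azure.net", credential=credential)
storage_conn_str = secret_client.get_secret("storage-connection-string").value

blob_service = BlobServiceClient.from_connection_string(storage_conn_str)
container = blob_service.get_container_client("reports")
container.upload_blob(name="daily/summary.json", data=b'{"status": "ok"}', overwrite=True)
```

The same pattern (credential from the environment, secrets from Key Vault, data to Blob Storage) carries over to the C#/.NET stack the posting emphasizes.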

Posted 1 month ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Title: SSE – DevOps Engineer Mode of work: Work from Office Experience: 4 - 10 Years of Experience Know your team At ValueMomentum’s Engineering Center, we are a team of passionate engineers who thrive on tackling complex business challenges with innovative solutions while transforming the P&C insurance value chain. We achieve this through strong engineering foundation and continuously refining our processes, methodologies, tools, agile delivery teams, and core engineering archetypes. Our core expertise lies in six key areas: Cloud Engineering, Application Engineering, Data Engineering, Core Engineering, Quality Engineering, and Domain expertise. Join a team that invests in your growth. Our Infinity Program empowers you to build your career with role-specific skill development leveraging immersive learning platforms. You'll have the opportunity to showcase your talents by contributing to impactful projects. Requirements - Must Have: 5+ years in DevOps with strong data pipeline experience Build and maintain CI/CD pipelines for Azure Data Factory and Databricks notebooks The role demands deep expertise in Databricks, including the automation of unit, integration, and QA testing workflows. Additionally, strong data architecture skills are essential, as the position involves implementing CI/CD pipelines for schema updates. Strong experience with Azure DevOps Pipelines, YAML builds, and release workflows. Proficiency in scripting languages like Python, PowerShell, Terraform Working knowledge of Azure services: ADF, Databricks, DABs, ADLS Gen2, Key Vault, ADO . Maintain infrastructure-as-code practices Collaborate with Data Engineers and Platform teams to maintain development, staging, and production environments. Monitor and troubleshoot pipeline failures and deployment inconsistencies. About ValueMomentum ValueMomentum is a leading solutions provider for the global property & casualty insurance industry, supported by deep domain and technology capabilities. We offer a comprehensive suite of advisory, development, implementation, and maintenance services across the entire P&C insurance value chain. This includes Underwriting, Claims, Distribution, and more, empowering insurers to stay ahead with sustained growth, high performance, and enhanced stakeholder value. Trusted by over 75 insurers, ValueMomentum is one of the largest standalone insurance-focused solutions providers to the US insurance industry. Our culture – Our fuel At ValueMomentum, we believe in making employees win by nurturing them from within, collaborating and looking out for each other. People first - Empower employees to succeed. Nurture leaders - Nurture from within. Enjoy wins – Recognize and celebrate wins. Collaboration – Foster a culture of collaboration and people-centricity. Diversity – Committed to diversity, equity, and inclusion. Fun – Create a fun and engaging work environment. Warm welcome – Provide a personalized onboarding experience. Company Benefits Compensation - Competitive compensation package comparable to the best in the industry. Career Growth - Career development, comprehensive training & certification programs, and fast track growth for high potential associates. Benefits: Comprehensive health benefits and life insurance.
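Since this role centers on CI/CD and monitoring for Azure Data Factory pipelines, here is a minimal, hedged sketch of a post-deployment smoke test that triggers an ADF pipeline and polls its status with the azure-mgmt-datafactory package; the subscription ID, resource group, factory, pipeline name, and parameter are hypothetical.

```python
# Minimal sketch: trigger an ADF pipeline after a release and poll it until it finishes.
# Subscription ID, resource group, factory, and pipeline names are hypothetical.
import time
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"
RESOURCE_GROUP = "rg-data-platform"
FACTORY_NAME = "adf-dataplatform-prod"
PIPELINE_NAME = "pl_ingest_daily"

client = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

run = client.pipelines.create_run(
    RESOURCE_GROUP, FACTORY_NAME, PIPELINE_NAME,
    parameters={"run_mode": "smoke_test"},   # hypothetical pipeline parameter
)

# Poll until the run reaches a terminal state.
while True:
    status = client.pipeline_runs.get(RESOURCE_GROUP, FACTORY_NAME, run.run_id).status
    if status in ("Succeeded", "Failed", "Cancelled"):
        break
    time.sleep(30)

print(f"Pipeline {PIPELINE_NAME} finished with status: {status}")
assert status == "Succeeded", "post-deployment smoke test failed"
```

A step like this would typically sit at the end of an Azure DevOps release stage so a failed pipeline run fails the deployment rather than going unnoticed.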

Posted 1 month ago

Apply

8.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Title: Senior Data Engineer Location: Noida | Gurgaon | Hyderabad | Bangalore (Hybrid – 2 days/month in office) Experience: 8+ years Employment Type: Full-time | Hybrid Skills: PySpark | Databricks | ADF | Big Data | Hadoop | Hive About the Role: We are looking for a highly experienced and results-driven Senior Data Engineer to join our growing team. This role is ideal for a data enthusiast who thrives in managing and optimizing big data pipelines using modern cloud and big data tools. You’ll play a key role in designing scalable data architectures and enabling data-driven decision-making across the organization. Key Responsibilities: Design, build, and maintain scalable and efficient data pipelines using PySpark and Databricks. Develop ETL workflows and orchestrate data pipelines using Azure Data Factory (ADF). Work with structured and unstructured data across the Hadoop ecosystem (HDFS, Hive, Spark). Optimize data processing and storage for high performance and reliability. Collaborate with data scientists, analysts, and business teams to ensure data availability and quality. Implement data governance, data quality, and security best practices. Monitor and troubleshoot production data pipelines and jobs. Document technical solutions and standard operating procedures. Required Skills & Qualifications: 8+ years of hands-on experience in data engineering and big data technologies. Proficiency in PySpark, Databricks, and Azure Data Factory (ADF). Strong experience with Big Data technologies: Hadoop, Hive, Spark, HDFS. Solid understanding of data modeling, warehousing concepts, and performance tuning. Familiarity with cloud data platforms, preferably Azure. Strong SQL skills and experience in managing large-scale data systems. Excellent problem-solving, debugging, and communication skills. Nice to Have: Experience with Delta Lake, Apache Airflow, or Kafka. Exposure to CI/CD for data pipelines. Knowledge of data lake architectures and data mesh principles.
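A minimal, hedged PySpark sketch of the kind of pipeline step this posting describes (cleansing raw data and writing it to a Delta table) is shown below; the storage paths and column names are hypothetical placeholders.

```python
# Minimal PySpark sketch: read raw events, deduplicate and cleanse, write a Delta table.
# Storage paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze_to_silver_events").getOrCreate()

bronze = spark.read.json("abfss://bronze@datalake.dfs.core.windows.net/events/")

silver = (
    bronze
    .filter(F.col("event_id").isNotNull())                     # drop malformed rows
    .withColumn("event_ts", F.to_timestamp("event_ts"))        # normalize types
    .withColumn("country", F.upper(F.trim(F.col("country"))))  # basic cleansing
    .dropDuplicates(["event_id"])                               # deduplicate on the business key
)

(silver.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("country")
    .save("abfss://silver@datalake.dfs.core.windows.net/events/"))
```

In practice ADF would orchestrate a notebook or job containing logic like this, with monitoring and alerting around the run as the responsibilities above describe.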

Posted 1 month ago

Apply

7.0 years

0 Lacs

Gurugram, Haryana, India

On-site

About ProcDNA ProcDNA is a global consulting firm. We fuse design thinking with cutting-edge technology to create game-changing Commercial Analytics and Technology solutions for our clients. We're a passionate team of 275+ across 6 offices, all growing and learning together since our launch during the pandemic. Here, you won't be stuck in a cubicle - you'll be out in the open water, shaping the future with brilliant minds. At ProcDNA, innovation isn't just encouraged; it's ingrained in our DNA. Ready to join our epic growth journey? What We Are Looking For You’ll be driving the adoption of the latest technologies in our solutions, bringing in thought leadership to guide clients on complex data management problems, and driving business performance. You will work with the leadership team to bring subject matter expertise in areas such as Big Data, ETL, Reporting, CRM, Data Warehousing, MDM, DevOps, Software Development, etc. We are seeking an individual who not only possesses the requisite expertise but also thrives in the dynamic landscape of a fast-paced global firm. What You’ll Do Leading end-to-end data management solution projects for multiple clients across data engineering and BI technologies. Responsible for creating a project management plan and ensuring adherence to project timelines. Integrate multiple data sources into one visualization to tell a story. Interact with customers to understand their business problems and provide best-in-class analytics solutions. Interact with Data Platform leaders and understand data flows that integrate into Tableau/analytics. Understand data governance, quality, security, and integrate analytics with these enterprise platforms. Interact with UX/UI global functions and design best-in class visualization for customers, harnessing all product capabilities. Must have 7 - 10 years of data warehousing and data engineering. Experience in interacting with Life Science clients directly, discussing requirements, and stakeholder management. Experience in requirement gathering and designing enterprise warehouse solutions from scratch. Hands-on experience with ETL tools like ADF, Databricks, and Informatica; experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc; experience in data warehouse: SQL/NoSQL, Amazon Redshift, Snowflake, Apache Hive, HDFS, etc. BI tools knowledge and experience in leading the implementation of dashboards. Deep understanding of data governance and data quality management frameworks. Strong communication and presentation skills with a strong problem-solving attitude. Excellent analytical, problem-solving, and debugging skills, with a strong ability to quickly learn and comprehend business processes and problems to effectively develop technical solutions to their requirements. Skills: mdm,sql,hdfs,data warehousing,big data,devops,cloud,amazon redshift,snowflake,pharmaceutical consulting,data management,apache hive,azure,reporting,problem-solving,luigi,informatica,analytical skills,presentation skills,data governance,adf,data engineering,crm,databricks,bi technologies,airflow,team management,business technology,aws,azkaban,software development,etl,client management,data quality management,life science

Posted 1 month ago

Apply

7.0 years

0 Lacs

Indore, Madhya Pradesh, India

On-site

About Beyond Key We are a Microsoft Gold Partner and a Great Place to Work-certified company. "Happy Team Members, Happy Clients" is a principle we hold dear. We are an international IT consulting and software services firm committed to providing cutting-edge services and products that satisfy our clients' global needs. Our company was established in 2005, and since then we've expanded our team to include more than 350 talented, skilled software professionals. Our clients come from the United States, Canada, Europe, Australia, the Middle East, and India, and we create and design IT solutions for them. If you need any more details, you can get them at https://www.beyondkey.com/about. Job Description We are looking for a Senior Data Engineer with strong expertise in Power BI, Azure Data Factory (ADF), and SSIS to design, develop, and optimize data solutions that support strategic decision-making. The ideal candidate will bring hands-on experience with data warehousing, advanced analytics, and cloud-based data pipelines, and will work closely with cross-functional teams to drive scalable data architectures. Key Responsibilities Design, develop, and maintain interactive Power BI dashboards & reports with advanced DAX, Power Query, and custom visuals. Develop and automate ETL/ELT pipelines using Azure Data Factory (ADF) and SSIS. Architect and manage modern data warehousing solutions (Star/Snowflake schema) using Azure or on-premises SQL Server. Implement data modeling, performance tuning, and optimization for large-scale datasets. Collaborate with business teams to translate requirements into scalable analytics solutions. Ensure data governance, security, and compliance across BI platforms. Mentor junior team members on Azure, Power BI, and cloud data best practices. Required Skills & Qualifications 7+ years of hands-on experience in Power BI, SQL, Data Warehousing, and ETL/ELT. Proficient in Azure Data Factory (ADF) for orchestration and data integration. Advanced SQL (query optimization, stored procedures, partitioning). Experience with data warehousing (dimensional modeling, SCD, fact/dimension tables). Knowledge of Power BI Premium/Fabric capacity, deployment pipelines, and DAX patterns. Knowledge of building and optimizing end-to-end data solutions using Microsoft Fabric (OneLake, Lakehouse, Data Warehouse). Familiarity with Databricks, PySpark, or Python (for advanced analytics) is a plus. Strong problem-solving and communication skills. Preferred Qualifications Microsoft Certifications (PL-300: Power BI, DP-600: Fabric Analytics Engineer). Experience with Azure DevOps (CI/CD for Fabric/Power BI deployments). Domain knowledge in BFSI, Healthcare, Retail, or Manufacturing.

Posted 1 month ago

Apply

7.0 years

0 Lacs

Pune, Maharashtra, India

On-site

About ProcDNA ProcDNA is a global consulting firm. We fuse design thinking with cutting-edge technology to create game-changing Commercial Analytics and Technology solutions for our clients. We're a passionate team of 275+ across 6 offices, all growing and learning together since our launch during the pandemic. Here, you won't be stuck in a cubicle - you'll be out in the open water, shaping the future with brilliant minds. At ProcDNA, innovation isn't just encouraged; it's ingrained in our DNA. Ready to join our epic growth journey? What We Are Looking For You’ll be driving the adoption of the latest technologies in our solutions, bringing in thought leadership to guide clients on complex data management problems, and driving business performance. You will work with the leadership team to bring subject matter expertise in areas such as Big Data, ETL, Reporting, CRM, Data Warehousing, MDM, DevOps, Software Development, etc. We are seeking an individual who not only possesses the requisite expertise but also thrives in the dynamic landscape of a fast-paced global firm. What You’ll Do Leading end-to-end data management solution projects for multiple clients across data engineering and BI technologies. Responsible for creating a project management plan and ensuring adherence to project timelines. Integrate multiple data sources into one visualization to tell a story. Interact with customers to understand their business problems and provide best-in-class analytics solutions. Interact with Data Platform leaders and understand data flows that integrate into Tableau/analytics. Understand data governance, quality, security, and integrate analytics with these enterprise platforms. Interact with UX/UI global functions and design best-in class visualization for customers, harnessing all product capabilities. Must have 7 - 10 years of data warehousing and data engineering. Experience in interacting with Life Science clients directly, discussing requirements, and stakeholder management. Experience in requirement gathering and designing enterprise warehouse solutions from scratch. Hands-on experience with ETL tools like ADF, Databricks, and Informatica; experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc; experience in data warehouse: SQL/NoSQL, Amazon Redshift, Snowflake, Apache Hive, HDFS, etc. BI tools knowledge and experience in leading the implementation of dashboards. Deep understanding of data governance and data quality management frameworks. Strong communication and presentation skills with a strong problem-solving attitude. Excellent analytical, problem-solving, and debugging skills, with a strong ability to quickly learn and comprehend business processes and problems to effectively develop technical solutions to their requirements. Skills: mdm,sql,hdfs,data warehousing,big data,devops,cloud,amazon redshift,snowflake,pharmaceutical consulting,data management,apache hive,azure,reporting,problem-solving,luigi,informatica,analytical skills,presentation skills,data governance,adf,data engineering,crm,databricks,bi technologies,airflow,team management,business technology,aws,azkaban,software development,etl,client management,data quality management,life science

Posted 1 month ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Overview We are PepsiCo PepsiCo is one of the world's leading food and beverage companies with more than $79 billion in net revenue and a global portfolio of diverse and beloved brands. We have a complementary food and beverage portfolio that includes 22 brands that each generate more than $1 billion in annual retail sales. PepsiCo's products are sold in more than 200 countries and territories around the world. PepsiCo's strength is its people. We are over 250,000 game changers, mountain movers and history makers, located around the world, and united by a shared set of values and goals. We believe that acting ethically and responsibly is not only the right thing to do, but also the right thing to do for our business. At PepsiCo, we aim to deliver top-tier financial performance over the long term by integrating sustainability into our business strategy, leaving a positive imprint on society and the environment. We call this Winning with pep+ (PepsiCo Positive). For more information on PepsiCo and the opportunities it holds, visit www.pepsico.com. PepsiCo Data Analytics & AI Overview: With data deeply embedded in our DNA, PepsiCo Data, Analytics and AI (DA&AI) transforms data into consumer delight. We build and organize business-ready data that allows PepsiCo's leaders to solve their problems with the highest degree of confidence. Our platform of data products and services ensures data is activated at scale. This enables new revenue streams, deeper partner relationships, new consumer experiences, and innovation across the enterprise. The Data Science pillar in DA&AI will be the organization where Data Scientists and ML Engineers report within the broader DA&AI organization. DS will also lead, facilitate and collaborate with the larger DS community across PepsiCo. DS will provide the talent for the development and support of DS components and their life cycle within DA&AI products, and will support "pre-engagement" activities as requested and validated by the prioritization framework of DA&AI. Data Scientist - Gurugram and Hyderabad: The role will work on developing Machine Learning (ML) and Artificial Intelligence (AI) projects. The specific scope of this role is to develop ML solutions in support of ML/AI projects using big analytics toolsets in a CI/CD environment. Analytics toolsets may include DS tools/Spark/Databricks, and other technologies offered by Microsoft Azure or open-source toolsets. This role will also help automate the end-to-end cycle with Machine Learning Services and Pipelines. Responsibilities Delivery of key Advanced Analytics/Data Science projects within time and budget, particularly around DevOps/MLOps and Machine Learning models in scope. Collaborate with data engineers and ML engineers to understand data and models and leverage various advanced analytics capabilities. Ensure on-time and on-budget delivery which satisfies project requirements, while adhering to enterprise architecture standards. Use big data technologies to help process data and build scaled data pipelines (batch to real time). Automate the end-to-end ML lifecycle with Azure Machine Learning and Azure/AWS/GCP pipelines. Set up cloud alerts, monitors, dashboards, and logging and troubleshoot machine learning infrastructure. Automate ML model deployments. Qualifications Minimum 3 years of hands-on work experience in data science / machine learning. Minimum 3 years of SQL experience. Experience in DevOps and Machine Learning (ML) with hands-on experience with one or more cloud service providers.
BE/BS in Computer Science, Math, Physics, or other technical fields. Data Science - Hands-on experience and strong knowledge of building machine learning models, supervised and unsupervised. Programming Skills - Hands-on experience in statistical programming languages like Python and database query languages like SQL. Statistics - Good applied statistical skills, including knowledge of statistical tests, distributions, regression, and maximum likelihood estimators. Any Cloud - Experience in Databricks and ADF is desirable. Familiarity with Spark, Hive, and Pig is an added advantage. Model deployment experience will be a plus. Experience with version control systems like GitHub and CI/CD tools. Experience in exploratory data analysis. Knowledge of MLOps / DevOps and deploying ML models is required. Experience using MLflow, Kubeflow, etc. will be preferred. Experience executing and contributing to MLOps automation infrastructure is good to have. Exceptional analytical and problem-solving skills.
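Because the posting lists MLflow and ML lifecycle automation as preferred skills, here is a minimal, hedged sketch of basic experiment tracking; the dataset, experiment name, and hyperparameters are illustrative placeholders only.

```python
# Minimal sketch: train a scikit-learn model and track it with MLflow.
# Dataset, hyperparameters, and experiment name are illustrative placeholders.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("demand-forecast-poc")

with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 6}
    model = RandomForestRegressor(**params, random_state=42).fit(X_train, y_train)

    mse = mean_squared_error(y_test, model.predict(X_test))

    mlflow.log_params(params)                  # record hyperparameters
    mlflow.log_metric("mse", mse)              # record the evaluation metric
    mlflow.sklearn.log_model(model, "model")   # persist the trained model artifact
```

In an MLOps setup like the one described, a CI/CD pipeline would run a script like this on Databricks or Azure ML and promote the logged model to a registry for deployment.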

Posted 1 month ago

Apply

2.0 - 6.0 years

4 - 8 Lacs

Noida, Pune, Bengaluru

Work from Office

Type: Permanent, Work from Office Location: Chennai Budget: Competitive, as per industry standards Looking for: Immediate Joiners Responsibilities: Design and develop customizations, extensions, and reports in Oracle Fusion applications. Collaborate with functional consultants to understand business requirements and provide technical solutions. Develop and implement integrations using Oracle Integration Cloud (OIC), BI Publisher, and other tools. Debug and resolve issues in Oracle Fusion modules. Maintain technical documentation for solutions provided. Ensure compliance with best practices in Fusion application development. Skills Required: Hands-on experience in Oracle Fusion technical development. Strong skills in Oracle Integration Cloud (OIC), BI Publisher, and ADF. Good understanding of Oracle Fusion modules (Finance, SCM, HCM, etc.). Strong problem-solving and communication skills. Location: Remote-Delhi / NCR,Bangalore/Bengaluru,Hyderabad/Secunderabad,Chennai,Pune,Kolkata,Ahmedabad,Mumbai

Posted 1 month ago

Apply

2.0 years

0 Lacs

India

Remote

Welcome to Veradigm! Our Mission is to be the most trusted provider of innovative solutions that empower all stakeholders across the healthcare continuum to deliver world-class outcomes. Our Vision is a Connected Community of Health that spans continents and borders. With the largest community of clients in healthcare, Veradigm is able to deliver an integrated platform of clinical, financial, connectivity and information solutions to facilitate enhanced collaboration and exchange of critical patient information. Veradigm Life Veradigm is here to transform health, insightfully. Veradigm delivers a unique combination of point-of-care clinical and financial solutions, a commitment to open interoperability, a large and diverse healthcare provider footprint, along with industry proven expert insights. We are dedicated to simplifying the complicated healthcare system with next-generation technology and solutions, transforming healthcare from the point-of-patient care to everyday life. For more information, please explore Veradigm.com. Job Description For Software Engineer Job Title: Software Engineer Job Responsibilities What will your job look like: The primary purpose of this role is to perform specification, design, coding, testing, and documentation in the areas of development and maintenance. Responsible for creating low-level designs for complex software modules and subsystems. Provide technical guidance to the team, ensuring the successful implementation of advanced software solutions. The ideal candidate will excel at translating business requirements into detailed and comprehensive functional requirements, thereby significantly contributing to the success of our projects. An Ideal Candidate Will Have 2+ years of experience as a software engineer. SQL database experience (Redshift, PostgreSQL, MySQL, Snowflake or similar). Key areas include understanding database design principles, writing efficient queries, and utilizing advanced features. Specific items include database design, data manipulation (CRUD operations), querying data (SELECT statements with various clauses like WHERE, GROUP BY, ORDER BY, JOINs), data modeling, and understanding database concepts like primary and foreign keys. Excellent programming skills in ADF (Azure Data Factory) pipelines, including data movement, data transformation, authentication, and control activities. Excellent programming skills in Python, Java, C#, C++, or a similar language. At least 1 year working as a software developer on large distributed systems and client-server architectures. 2 years of Python development using frameworks like Flask, Django, Jinja, SQLAlchemy. Experience building and deploying applications using Amazon Web Services or similar cloud infrastructure. Software development in the life sciences industry preferred. Validated software development in a regulated environment preferred. Development/testing of ETL. Experience with Apache HTTP, NGINX, Tomcat, or Jetty. Experience with standard build tools and version control systems (e.g., Git, Jenkins). Broad understanding of internet protocols and network programming. Benefits Veradigm believes in empowering our associates with the tools and flexibility to bring the best version of themselves to work. Through our generous benefits package with an emphasis on work/life balance, we give our employees the opportunity to allow their careers to flourish.
Quarterly Company-Wide Recharge Days Flexible Work Environment (Remote/Hybrid Options) Peer-based incentive “Cheer” awards “All in to Win” bonus Program Tuition Reimbursement Program To know more about the benefits and culture at Veradigm, please visit the links mentioned below: - https://veradigm.com/about-veradigm/careers/benefits/ https://veradigm.com/about-veradigm/careers/culture/ Veradigm is proud to be an equal opportunity workplace dedicated to pursuing and hiring a diverse and inclusive workforce. Thank you for reviewing this opportunity! Does this look like a great match for your skill set? If so, please scroll down and tell us more about yourself!
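The SQL skills this posting spells out (SELECT with WHERE, GROUP BY, ORDER BY, and JOINs) can be illustrated with a small self-contained Python sketch; sqlite3 stands in for Redshift/PostgreSQL/Snowflake so the example runs anywhere, and the schema is hypothetical.

```python
# Minimal sketch of the querying skills described: a JOIN with GROUP BY / ORDER BY.
# sqlite3 is used only so the example is self-contained; the schema is hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE patients (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE visits (id INTEGER PRIMARY KEY,
                         patient_id INTEGER REFERENCES patients(id),
                         visit_date TEXT, charge REAL);
    INSERT INTO patients VALUES (1, 'Asha'), (2, 'Ravi');
    INSERT INTO visits VALUES (1, 1, '2024-01-05', 120.0), (2, 1, '2024-02-10', 80.0),
                              (3, 2, '2024-01-20', 200.0);
""")

query = """
    SELECT p.name, COUNT(v.id) AS visit_count, SUM(v.charge) AS total_charge
    FROM patients AS p
    JOIN visits AS v ON v.patient_id = p.id
    WHERE v.visit_date >= '2024-01-01'
    GROUP BY p.name
    ORDER BY total_charge DESC
"""

for name, visit_count, total_charge in conn.execute(query):
    print(f"{name}: {visit_count} visits, {total_charge:.2f} billed")
```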

Posted 1 month ago

Apply

4.0 years

0 Lacs

India

Remote

Welcome to Veradigm! Our Mission is to be the most trusted provider of innovative solutions that empower all stakeholders across the healthcare continuum to deliver world-class outcomes. Our Vision is a Connected Community of Health that spans continents and borders. With the largest community of clients in healthcare, Veradigm is able to deliver an integrated platform of clinical, financial, connectivity and information solutions to facilitate enhanced collaboration and exchange of critical patient information. Veradigm Life Veradigm is here to transform health, insightfully. Veradigm delivers a unique combination of point-of-care clinical and financial solutions, a commitment to open interoperability, a large and diverse healthcare provider footprint, along with industry proven expert insights. We are dedicated to simplifying the complicated healthcare system with next-generation technology and solutions, transforming healthcare from the point-of-patient care to everyday life. For more information, please explore Veradigm.com. Job Description For Sr Software Engineer Job Title: Sr Software Engineer Job Responsibilities What will your job look like: The primary purpose of this role is to perform specification, design, coding, testing, and documentation in the areas of development and maintenance. Responsible for creating low-level designs for complex software modules and subsystems. Provide technical guidance to the team, ensuring the successful implementation of advanced software solutions. The ideal candidate will excel at translating business requirements into detailed and comprehensive functional requirements, thereby significantly contributing to the success of our projects. An Ideal Candidate Will Have 4+ years of experience as a software engineer. SQL database experience (Redshift, PostgreSQL, MySQL, Snowflake or similar). Key areas include understanding database design principles, writing efficient queries, and utilizing advanced features. Specific items include database design, data manipulation (CRUD operations), querying data (SELECT statements with various clauses like WHERE, GROUP BY, ORDER BY, JOINs), data modeling, and understanding database concepts like primary and foreign keys. Excellent programming skills in ADF (Azure Data Factory) pipelines, including data movement, data transformation, authentication, and control activities. Excellent programming skills in Python, Java, C#, C++, or a similar language. At least 1 year working as a software developer on large distributed systems and client-server architectures. 3+ years of Python development using frameworks like Flask, Django, Jinja, SQLAlchemy. Experience building and deploying applications using Amazon Web Services or similar cloud infrastructure. Software development in the life sciences industry preferred. Validated software development in a regulated environment preferred. Development/testing of ETL. Experience with Apache HTTP, NGINX, Tomcat, or Jetty. Experience with standard build tools and version control systems (e.g., Git, Jenkins). Broad understanding of internet protocols and network programming. Benefits Veradigm believes in empowering our associates with the tools and flexibility to bring the best version of themselves to work. Through our generous benefits package with an emphasis on work/life balance, we give our employees the opportunity to allow their careers to flourish.
Quarterly Company-Wide Recharge Days Flexible Work Environment (Remote/Hybrid Options) Peer-based incentive “Cheer” awards “All in to Win” bonus Program Tuition Reimbursement Program To know more about the benefits and culture at Veradigm, please visit the links mentioned below: - https://veradigm.com/about-veradigm/careers/benefits/ https://veradigm.com/about-veradigm/careers/culture/ Veradigm is proud to be an equal opportunity workplace dedicated to pursuing and hiring a diverse and inclusive workforce. Thank you for reviewing this opportunity! Does this look like a great match for your skill set? If so, please scroll down and tell us more about yourself!

Posted 1 month ago

Apply

4.0 years

0 Lacs

Greater Kolkata Area

On-site

Line of Service Advisory Industry/Sector Not Applicable Specialism Oracle Management Level Associate Job Description & Summary At PwC, our people in business application consulting specialise in consulting services for a variety of business applications, helping clients optimise operational efficiency. These individuals analyse client needs, implement software solutions, and provide training and support for seamless integration and utilisation of business applications, enabling clients to achieve their strategic objectives. Those in Oracle technology at PwC will focus on utilising and managing the Oracle suite of software and technologies for various purposes within an organisation. You will be responsible for tasks such as installation, configuration, administration, development, and support of Oracle products and solutions. Why PWC At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations. Job Description & Summary Managing business performance in today's complex and rapidly changing business environment is crucial for any organization's short-term and long-term success. However, ensuring a streamlined end-to-end Oracle Fusion technical landscape that seamlessly adapts to the changing business environment is crucial from a process and compliance perspective. As part of the Technology Consulting - Business Applications - Oracle Practice team, we leverage opportunities around digital disruption, new-age operating models and best-in-class practices to deliver technology-enabled transformation to our clients. Responsibilities: Extensive experience in Oracle ERP/Fusion SaaS/PaaS project implementations as a technical developer.
Completed at least 2 full Oracle Cloud (Fusion) implementations. Extensive knowledge of the database structure for ERP/Oracle Cloud (Fusion). Extensively worked on BI Publisher reports, FBDI/OTBI Cloud and Oracle Integration (OIC). Mandatory skill sets: BI Publisher reports, FBDI/OTBI Cloud and Oracle Integration (OIC). Preferred skill sets: database structure for ERP/Oracle Cloud (Fusion). Years of experience required: Minimum 4 years of Oracle Fusion experience. Education Qualification: BE/BTech, MBA. Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Master of Business Administration, Bachelor of Engineering Degrees/Field of Study preferred: Certifications (if blank, certifications not specified) Required Skills Oracle Fusion Middleware (OFM) Optional Skills Accepting Feedback, Active Listening, Business Transformation, Communication, Design Automation, Emotional Regulation, Empathy, Inclusion, Intellectual Curiosity, Optimism, Oracle Application Development Framework (ADF), Oracle Business Intelligence (BI) Publisher, Oracle Cloud Infrastructure, Oracle Data Integration, Process Improvement, Process Optimization, Strategic Technology Planning, Teamwork, Well Being Desired Languages (If blank, desired languages not specified) Travel Requirements Not Specified Available for Work Visa Sponsorship? No Government Clearance Required? No Job Posting End Date

Posted 1 month ago

Apply

5.0 - 8.0 years

13 - 16 Lacs

Gāndhīnagar

On-site

Company Name: PIB Techco India Pvt Ltd Location: Gandhinagar, Gujarat Job title: Sr. DevOps Engineer Requirements: Must have: We are seeking a highly skilled DevOps Engineer with 5–8 years of professional hands-on experience, particularly in managing Azure DevOps CI/CD pipelines and automating deployments across cloud-based data solutions. The ideal candidate should be capable of handling end-to-end deployment processes for Azure DevOps projects involving Azure Data Factory (ADF), Databricks, SQL, Python, Azure Data Lake Storage (ADLS) and Power BI repositories. Key Responsibilities: - Design, implement, and manage automated deployment pipelines for ADF, Databricks notebooks, SQL scripts, Python-based data processing and Power BI projects. - Manage build and release pipelines for various environments including Dev, UAT, and Production. - Enable environment consistency across Dev, UAT, and Production with automated application deployments using Azure CI/CD pipelines, PowerShell, and CLI scripts. - Proficient in Python, Bash, or PowerShell. - Collaborate with dataops and data engineering teams to enable smooth integration and deployment across Dev, UAT, and Production environments. - Monitor pipeline health and performance, troubleshoot deployment failures, and ensure version control and rollback mechanisms are in place. - Support end-to-end project delivery including requirement gathering, pipeline design, development, testing automation, deployment, and post-deployment support. - Implement robust branching strategies, Git workflows, and automated testing frameworks. - Maintain version control practices using Azure DevOps Repos. - Monitor, log, and troubleshoot deployment issues using Azure Monitor, Log Analytics, or cloud-native tools. Nice to have: - Familiarity with Azure Data Factory (ADF), Databricks, SQL, Python, Azure Data Lake Storage (ADLS) and Power BI repositories; Docker, Kubernetes, or managed services like AKS/EKS. - Experience working with Agile methodologies, Test-Driven Development (TDD), and implementing CI/CD pipelines using tools like Azure DevOps Pipelines or AWS CodePipeline. - Exposure to data modelling tools like Erwin or ER/Studio to support DevOps in metadata and schema management. - Exposure to leading reporting and visualization tools such as Power BI, particularly in automating report deployment and integration workflows. - Experience with API integrations and supporting infrastructure-as-code for connecting various systems and services. Job Types: Full-time, Permanent Pay: ₹1,300,000.00 - ₹1,600,000.00 per year Benefits: Health insurance, Provident Fund Schedule: Day shift, Monday to Friday Work Location: In person Application Deadline: 01/07/2025 Expected Start Date: 01/07/2025
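As a hedged sketch of the automated Databricks notebook deployment this role describes, the snippet below imports a notebook into a workspace through the Databricks Workspace API (POST /api/2.0/workspace/import) using plain requests; the workspace URL, token environment variable, local file, and target path are hypothetical.

```python
# Minimal sketch: deploy a notebook to a Databricks workspace as a release-pipeline step.
# Workspace URL, token env var, local file, and target path are hypothetical placeholders.
import base64
import os
import requests

DATABRICKS_HOST = "https://adb-1234567890123456.7.azuredatabricks.net"
TOKEN = os.environ["DATABRICKS_TOKEN"]          # injected by the release pipeline

with open("notebooks/silver_transform.py", "rb") as f:
    content_b64 = base64.b64encode(f.read()).decode("utf-8")

resp = requests.post(
    f"{DATABRICKS_HOST}/api/2.0/workspace/import",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "path": "/Repos/prod/etl/silver_transform",
        "format": "SOURCE",
        "language": "PYTHON",
        "content": content_b64,
        "overwrite": True,
    },
    timeout=30,
)
resp.raise_for_status()
print("Notebook deployed:", resp.status_code)
```

In an Azure DevOps setup, a script task like this (or the Databricks CLI) would run per environment so Dev, UAT, and Production stay consistent, as the responsibilities above require.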

Posted 1 month ago

Apply

0 years

7 - 9 Lacs

Calcutta

On-site

Line of Service Advisory Industry/Sector Not Applicable Specialism Microsoft Management Level Senior Associate Job Description & Summary At PwC, our people in software and product innovation focus on developing cutting-edge software solutions and driving product innovation to meet the evolving needs of clients. These individuals combine technical experience with creative thinking to deliver innovative software products and solutions. Those in software engineering at PwC will focus on developing innovative software solutions to drive digital transformation and enhance business performance. In this field, you will use your knowledge to design, code, and test cutting-edge applications that revolutionise industries and deliver exceptional user experiences. Why PWC At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations. Job Description & Summary: We are seeking a Data Engineer to design, develop, and maintain data ingestion processes to a data platform built using Microsoft technologies, ensuring data quality and integrity. The role involves collaborating with data architects and business analysts to implement solutions using tools like ADF and Azure Databricks, and requires strong SQL skills. Responsibilities: Key responsibilities include developing, testing, and optimizing ETL workflows and maintaining documentation. ETL development experience in the Microsoft data track is required. Work with the business team to translate business requirements into technical requirements. Demonstrated expertise in Agile methodologies, including Scrum, Kanban, or SAFe. Mandatory skill sets: · Strong proficiency in Azure Databricks, including Spark and Delta Lake. · Experience with Azure Data Factory, Azure Data Lake Storage, and Azure SQL Database. · Proficiency in data integration and ETL processes and T-SQL. · Experience working in Python for data engineering. · Experience working with Postgres databases. · Experience working with graph databases. · Experience in architecture design and data modelling. Good To Have Skill Sets: · Unity Catalog / Purview. · Familiarity with Fabric/Snowflake service offerings. · Visualization tool – Power BI. Preferred skill sets: Hands-on knowledge of Python and PySpark, and strong SQL knowledge. ETL and data warehousing experience is a must.
Relevant certifications (any one, e.g., Databricks Data Engineer Associate, Microsoft Certified: Azure Data Engineer Associate, or Azure Solutions Architect) are mandatory. Years of experience required: 5+ years. Education qualification: Bachelor's degree in Computer Science, IT, or a related field. Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Bachelor of Engineering Degrees/Field of Study preferred: Certifications (if blank, certifications not specified) Required Skills Data Engineering Optional Skills Acceptance Test Driven Development (ATDD), Accepting Feedback, Active Listening, Analytical Thinking, Android, API Management, Appian (Platform), Application Development, Application Frameworks, Application Lifecycle Management, Application Software, Business Process Improvement, Business Process Management (BPM), Business Requirements Analysis, C#.NET, C++ Programming Language, Client Management, Code Review, Coding Standards, Communication, Computer Engineering, Computer Science, Continuous Integration/Continuous Delivery (CI/CD), Creativity {+ 46 more} Desired Languages (If blank, desired languages not specified) Travel Requirements Available for Work Visa Sponsorship? Government Clearance Required? Job Posting End Date

Posted 1 month ago

Apply

3.0 years

7 - 9 Lacs

Calcutta

On-site

Line of Service Advisory Industry/Sector FS X-Sector Specialism Data, Analytics & AI Management Level Associate Job Description & Summary At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage. *Why PWC At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us . At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations. " Responsibilities: Senior Associate Exp : 3 - 6 Years Location: Kolkata Technical Skills: · Strong expertise in Azure Databricks, Azure Data Factory (ADF), PySpark, SQL Server, and Python. · Solid understanding of Azure Functions and their application in data processing workflows. · Understanding of DevOps practices and CI/CD pipelines for data solutions. · Experience with other ETL tools such as Informatica Intelligent Cloud Services (IICS) is a plus. · Strong problem-solving skills and ability to work independently and collaboratively in a fast-paced environment. · Excellent communication skills to effectively convey technical concepts to non-technical stakeholders. Key Responsibilities: · Develop, maintain, and optimize scalable data pipelines using Azure Databricks, Azure Data Factory (ADF), and PySpark. · Collaborate with data architects and business stakeholders to translate requirements into technical solutions. · Implement and manage data integration processes using SQL Server and Python. · Design and deploy Azure Functions to support data processing workflows. · Monitor and troubleshoot data pipeline performance and reliability issues. · Ensure data quality, security, and compliance with industry standards and best practices. · Document technical specifications and maintain clear and concise project documentation. Mandatory skill sets: Azure Databricks, Azure Data Factory (ADF), and PySpark. Preferred skill sets: Azure Databricks, Azure Data Factory (ADF), and PySpark. 
Years of experience required: 3-6 Years
Education qualification: B.E. (B.Tech) / M.E. / M.Tech
Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Bachelor of Engineering, Master of Engineering; Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: Microsoft Azure
Optional Skills: Accepting Feedback, Active Listening, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Communication, Competitive Advantage, Continuous Process Improvement, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Emotional Regulation, Empathy, Inclusion, Industry Trend Analysis, Intellectual Curiosity, Java (Programming Language), Market Development {+ 11 more}
Desired Languages (If blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date
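As a rough illustration of the pipeline work this posting describes (Azure Databricks, ADF and PySpark), the following is a minimal sketch of a PySpark cleansing step reading files landed in ADLS Gen2 and writing a curated table. The storage account, container, paths, column names and table names are hypothetical placeholders, not details from the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_daily_load").getOrCreate()

# Read raw CSV files landed in ADLS Gen2 (path is illustrative)
raw = (spark.read
       .option("header", "true")
       .csv("abfss://raw@examplestorage.dfs.core.windows.net/orders/2025/06/"))

# Basic cleansing and typing before the data is exposed downstream
clean = (raw
         .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
         .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
         .dropDuplicates(["order_id"])
         .filter(F.col("amount").isNotNull()))

# Write to a curated table for downstream reporting
clean.write.mode("overwrite").saveAsTable("curated.orders_daily")
```

In practice a notebook like this would typically be parameterised and triggered from an ADF pipeline rather than run standalone.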

Posted 1 month ago

Apply

3.0 years

7 - 9 Lacs

Calcutta

On-site

Line of Service: Advisory
Industry/Sector: FS X-Sector
Specialism: Operations
Management Level: Senior Associate
Job Description & Summary: At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage.
Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.
At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
Job Description & Summary – Senior Associate – Azure Data Engineer
Responsibilities:
Role: Senior Associate
Exp: 3 - 6 Years
Location: Kolkata
Technical Skills:
· Strong expertise in Azure Databricks, Azure Data Factory (ADF), PySpark, SQL Server, and Python.
· Solid understanding of Azure Functions and their application in data processing workflows.
· Understanding of DevOps practices and CI/CD pipelines for data solutions.
· Experience with other ETL tools such as Informatica Intelligent Cloud Services (IICS) is a plus.
· Strong problem-solving skills and ability to work independently and collaboratively in a fast-paced environment.
· Excellent communication skills to effectively convey technical concepts to non-technical stakeholders.
Key Responsibilities:
· Develop, maintain, and optimize scalable data pipelines using Azure Databricks, Azure Data Factory (ADF), and PySpark.
· Collaborate with data architects and business stakeholders to translate requirements into technical solutions.
· Implement and manage data integration processes using SQL Server and Python.
· Design and deploy Azure Functions to support data processing workflows.
· Monitor and troubleshoot data pipeline performance and reliability issues.
· Ensure data quality, security, and compliance with industry standards and best practices.
· Document technical specifications and maintain clear and concise project documentation.
Mandatory skill sets: Azure Databricks, Azure Data Factory (ADF), and PySpark.
Preferred skill sets: Azure Databricks, Azure Data Factory (ADF), and PySpark.
Years of experience required: 3-6 Years
Education qualification: B.E. (B.Tech) / M.E. / M.Tech
Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Bachelor of Engineering, Master of Engineering; Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: Microsoft Azure, PySpark
Optional Skills: Accepting Feedback, Active Listening, Analytical Thinking, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Communication, Competitive Advantage, Continuous Process Improvement, Creativity, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Embracing Change, Emotional Regulation, Empathy, Inclusion, Industry Trend Analysis {+ 16 more}
Desired Languages (If blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date
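This posting also calls out Azure Functions in data processing workflows. Below is a hedged sketch, using the Azure Functions Python v2 programming model, of a blob-triggered function that performs a lightweight validation step before downstream pipelines pick a file up. The container name, connection setting and validation logic are illustrative assumptions, not requirements from the posting.

```python
import csv
import io
import logging

import azure.functions as func

app = func.FunctionApp()

@app.blob_trigger(arg_name="blob", path="landing/{name}", connection="AzureWebJobsStorage")
def validate_landing_file(blob: func.InputStream):
    """Reject empty files and log a simple row count before ADF/Databricks processes them."""
    data = blob.read().decode("utf-8")
    rows = list(csv.reader(io.StringIO(data)))
    if len(rows) <= 1:
        logging.warning("File %s has no data rows", blob.name)
        return
    logging.info("File %s passed validation with %d data rows", blob.name, len(rows) - 1)
```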

Posted 1 month ago

Apply

5.0 - 10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Role: Azure Data Engineer
Experience: 5-10 years
Notice Period: Immediate to 15 days
Location: Hyderabad
We are seeking a highly skilled Data Engineer to join our dynamic team.
Job Description
Mandate Skills: Databricks, BI, and ADP
Proficient in the Azure Data Platform (Storage, ADF, Databricks, DevOps).
Strong SQL skills and data model design (MS SQL, Databricks SQL).
Experience with Azure SQL Database, Azure Cosmos DB, and Azure Blob Storage.
Expertise in designing and implementing ETL processes using SSIS, Python, or PowerShell.
Fabric/Power BI (full lifecycle of models/reports: design, test, deployment, performance optimization/monitoring).
Familiarity with data modeling principles and techniques.
Excellent understanding of data security and compliance regulations.
Proficiency in Azure DevOps for continuous integration and deployment.
Ability to work in a fast-paced, collaborative environment.
Regards,
ValueLabs
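As a rough illustration of the Python-based ETL work listed above (Blob Storage plus Azure SQL Database), here is a minimal sketch that pulls a CSV from Azure Blob Storage and bulk-inserts it into a staging table. Connection strings, container, blob and table names are placeholders invented for the example.

```python
import csv
import io

import pyodbc
from azure.storage.blob import BlobServiceClient

STORAGE_CONN = "DefaultEndpointsProtocol=...;AccountName=...;AccountKey=..."  # placeholder
SQL_CONN = ("Driver={ODBC Driver 18 for SQL Server};"
            "Server=tcp:example.database.windows.net;Database=staging;...")   # placeholder

# Extract: download the CSV export from Blob Storage
blob_service = BlobServiceClient.from_connection_string(STORAGE_CONN)
blob = blob_service.get_blob_client(container="exports", blob="customers.csv")
rows = list(csv.DictReader(io.StringIO(blob.download_blob().readall().decode("utf-8"))))

# Load: bulk insert into an Azure SQL staging table
with pyodbc.connect(SQL_CONN) as conn:
    cur = conn.cursor()
    cur.fast_executemany = True
    cur.executemany(
        "INSERT INTO stg.Customers (CustomerId, Email, Country) VALUES (?, ?, ?)",
        [(r["CustomerId"], r["Email"], r["Country"]) for r in rows],
    )
    conn.commit()
```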

Posted 1 month ago

Apply

6.0 - 10.0 years

0 - 3 Lacs

Pune, Chennai, Bengaluru

Hybrid

Experience with cloud database platforms (e.g. Azure SQL Database, Snowflake) or on-prem database platforms such as MS SQL Server, with good SQL skills.
Experience working with Azure Data Factory (ADF).
Experience with data modelling.
Experience of working with a variety of stakeholders including product owners, delivery managers and architects.
Experience with performance tuning and relational database design, particularly in the development of business intelligence solutions.
Experience with data migration strategies from on-premises to cloud.
Experience with data warehouse concepts.
Experience of communicating and documenting technical design proposals.
Experience with DataOps: automating the promotion and release of data engineering artefacts, automating testing, and pipeline optimisation.
Experience with data masking policies, GDPR, auditing access and securing sensitive data sets.
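To make the data-masking requirement in this list more concrete, here is a hedged sketch of applying SQL Server / Azure SQL dynamic data masking to sensitive columns from a Python deployment script. The table, column names and connection string are hypothetical; a real rollout would be driven by the organisation's own masking policy.

```python
import pyodbc

SQL_CONN = ("Driver={ODBC Driver 18 for SQL Server};"
            "Server=tcp:example.database.windows.net;Database=dw;...")  # placeholder

# T-SQL dynamic data masking: obscure PII for non-privileged readers
MASKING_DDL = """
ALTER TABLE dbo.Customer
    ALTER COLUMN Email ADD MASKED WITH (FUNCTION = 'email()');
ALTER TABLE dbo.Customer
    ALTER COLUMN PhoneNumber ADD MASKED WITH (FUNCTION = 'partial(0,"XXX-XXX-",4)');
"""

with pyodbc.connect(SQL_CONN, autocommit=True) as conn:
    conn.execute(MASKING_DDL)
```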

Posted 1 month ago

Apply

5.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

The BI Data Engineer is a key role within the Enterprise Data team. We are looking for an expert Azure data engineer with deep data engineering, ADF integration and database development experience. This is a unique opportunity to be involved in delivering leading-edge business analytics using the latest cutting-edge BI tools, such as cloud-based databases, self-service analytics and leading visualisation tools, enabling the company's aim to become a fully digital organisation.
Job Description:
Key Responsibilities:
Build enterprise data engineering and integration solutions using the latest Azure platform: Azure Data Factory, Azure SQL Database, Azure Synapse and Azure Fabric.
Develop enterprise ETL and integration routines using ADF.
Evaluate emerging data engineering technologies, standards and capabilities.
Partner with business stakeholders, product managers, and data scientists to understand business objectives and translate them into technical solutions.
Work with DevOps, engineering, and operations teams to implement CI/CD pipelines and ensure smooth deployment of data engineering solutions.
Required Skills And Experience
Technical Expertise:
Expertise in the Azure platform including Azure Data Factory, Azure SQL Database, Azure Synapse and Azure Fabric.
Exposure to Databricks and lakehouse architecture & technologies.
Extensive knowledge of data modeling, ETL processes and data warehouse design principles.
Experience in machine learning and AI services in Azure.
Professional Experience:
5+ years of experience in database development using SQL
5+ years of integration and data engineering experience
5+ years of experience using Azure SQL DB, ADF and Azure Synapse
2+ years of experience using Power BI
Comprehensive understanding of data modelling
Relevant certifications in data engineering, machine learning, AI.
Key Competencies:
Expertise in data engineering and database development.
Familiarity with Microsoft Fabric technologies including OneLake, Lakehouse and Data Factory.
Strong understanding of data governance, compliance, and security frameworks.
Proven ability to drive innovation in data strategy and cloud solutions.
A deep understanding of business intelligence workflows and the ability to align technical solutions to them.
Strong database design skills, including an understanding of both normalised and dimensional form databases.
In-depth knowledge and experience of data-warehousing strategies and techniques, e.g. Kimball data warehousing.
Experience in cloud-based data integration tools like Azure Data Factory.
Experience in Azure DevOps or Jira is a plus.
Experience working with finance data is highly desirable.
Familiarity with agile development techniques and objectives.
Location: DGS India - Mumbai - Thane Ashar IT Park
Brand: Dentsu
Time Type: Full time
Contract Type: Permanent
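As a small illustration of the Kimball-style dimensional modelling this role mentions, here is a hedged PySpark sketch of a fact-table load that resolves surrogate keys from conformed dimensions before writing to the warehouse. All table and column names are placeholders assumed for the example.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("fact_sales_load").getOrCreate()

sales_staging = spark.table("staging.sales")
dim_customer = spark.table("dw.dim_customer").select("customer_key", "customer_id")
dim_date = spark.table("dw.dim_date").select("date_key", "calendar_date")

# Resolve surrogate keys by joining staging rows to conformed dimensions
fact_sales = (sales_staging
              .join(dim_customer, on="customer_id", how="left")
              .join(dim_date, sales_staging.sale_date == dim_date.calendar_date, how="left")
              .select(
                  "customer_key",
                  "date_key",
                  F.col("quantity").cast("int").alias("quantity"),
                  F.col("net_amount").cast("decimal(18,2)").alias("net_amount"),
              ))

# Late-arriving dimension members map to an 'unknown' key (-1) rather than being dropped
fact_sales = fact_sales.fillna({"customer_key": -1, "date_key": -1})
fact_sales.write.mode("append").saveAsTable("dw.fact_sales")
```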

Posted 1 month ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Oracle ADF
Good to have skills: Python (Programming Language), Node.js, React.js
Minimum 5 Year(s) Of Experience Is Required
Educational Qualification: 15 years full time education
Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with team members to understand project specifications, developing application features, and ensuring that the applications are aligned with business needs. You will also engage in testing and troubleshooting to enhance application performance and user experience, while continuously seeking opportunities for improvement and innovation in application development processes.
Roles & Responsibilities:
- Expected to be an SME.
- Collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge.
- Analyze user requirements and translate them into technical specifications.
Professional & Technical Skills:
- Must To Have Skills: Proficiency in Oracle ADF.
- Good To Have Skills: Experience with Python (Programming Language), Node.js, React.js.
- Strong understanding of application development methodologies.
- Experience with database management and SQL.
- Familiarity with web services and RESTful APIs.
Additional Information:
- The candidate should have minimum 5 years of experience in Oracle ADF.
- This position is based in Pune.
- A 15 years full time education is required.
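This posting lists REST/web-service familiarity alongside Python as a good-to-have skill. The snippet below is a minimal, hedged sketch of consuming a paginated REST endpoint in Python; the URL, query parameters and response fields are invented for illustration and are not part of any Oracle ADF API.

```python
import requests

BASE_URL = "https://api.example.com/v1/orders"  # placeholder endpoint

def fetch_all_orders(session: requests.Session, page_size: int = 100):
    """Yield order records page by page until the API reports no next page."""
    page = 1
    while True:
        resp = session.get(BASE_URL, params={"page": page, "size": page_size}, timeout=30)
        resp.raise_for_status()
        payload = resp.json()
        yield from payload.get("items", [])
        if not payload.get("hasNext"):
            break
        page += 1

with requests.Session() as s:
    s.headers.update({"Authorization": "Bearer <token>"})  # placeholder auth
    for order in fetch_all_orders(s):
        print(order.get("id"))
```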

Posted 1 month ago

Apply

0.0 - 8.0 years

13 - 16 Lacs

Gandhinagar, Gujarat

On-site

Company Name: PIB Techco India Pvt Ltd
Location: Gandhinagar, Gujarat
Job title: Sr. DevOps Engineer
Requirements:
Must have: We are seeking a highly skilled DevOps Engineer with 5–8 years of professional hands-on experience, particularly in managing Azure DevOps CI/CD pipelines and automating deployments across cloud-based data solutions. The ideal candidate should be capable of handling end-to-end deployment processes for Azure DevOps projects involving Azure Data Factory (ADF), Databricks, SQL, Python, Azure Data Lake Storage (ADLS) and Power BI repositories.
Key Responsibilities:
- Design, implement, and manage automated deployment pipelines for ADF, Databricks notebooks, SQL scripts, Python-based data processing and Power BI projects.
- Manage build and release pipelines for various environments including Dev, UAT, and Production.
- Enable environment consistency across Dev, UAT, and Production with automated application deployments using Azure CI/CD pipelines, PowerShell, and CLI scripts.
- Proficient in Python, Bash, or PowerShell.
- Collaborate with DataOps and data engineering teams to enable smooth integration and deployment across Dev, UAT, and Production environments.
- Monitor pipeline health and performance, troubleshoot deployment failures, and ensure version control and rollback mechanisms are in place.
- Support end-to-end project delivery including requirement gathering, pipeline design, development, testing automation, deployment, and post-deployment support.
- Implement robust branching strategies, Git workflows, and automated testing frameworks.
- Maintain version control practices using Azure DevOps Repos.
- Monitor, log, and troubleshoot deployment issues using Azure Monitor, Log Analytics, or cloud-native tools.
Nice to have:
- Familiarity with Azure Data Factory (ADF), Databricks, SQL, Python, Azure Data Lake Storage (ADLS) and Power BI repositories, Docker, Kubernetes, or managed services like AKS/EKS.
- Experience working with Agile methodologies, Test-Driven Development (TDD), and implementing CI/CD pipelines using tools like Azure DevOps Pipelines or AWS CodePipeline.
- Exposure to data modelling tools like Erwin or ER/Studio to support DevOps in metadata and schema management.
- Exposure to leading reporting and visualization tools such as Power BI, particularly in automating report deployment and integration workflows.
- Experience with API integrations and supporting infrastructure-as-code for connecting various systems and services.
Job Types: Full-time, Permanent
Pay: ₹1,300,000.00 - ₹1,600,000.00 per year
Benefits: Health insurance, Provident Fund
Schedule: Day shift, Monday to Friday
Work Location: In person
Application Deadline: 01/07/2025
Expected Start Date: 01/07/2025
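One of the deployment-automation steps described above, pushing Databricks notebooks from a CI/CD agent, can be sketched with the Databricks Workspace Import REST API (api/2.0/workspace/import). This is a hedged example: the host, token and paths would come from pipeline variables in a real Azure DevOps release, and the file names here are placeholders.

```python
import base64
import os

import requests

DATABRICKS_HOST = os.environ["DATABRICKS_HOST"]    # e.g. https://adb-xxxx.azuredatabricks.net
DATABRICKS_TOKEN = os.environ["DATABRICKS_TOKEN"]  # injected as a secret pipeline variable

def deploy_notebook(local_path: str, workspace_path: str) -> None:
    """Upload a local Python notebook source file to the target workspace, overwriting any existing copy."""
    with open(local_path, "rb") as fh:
        content = base64.b64encode(fh.read()).decode("ascii")
    resp = requests.post(
        f"{DATABRICKS_HOST}/api/2.0/workspace/import",
        headers={"Authorization": f"Bearer {DATABRICKS_TOKEN}"},
        json={
            "path": workspace_path,
            "format": "SOURCE",
            "language": "PYTHON",
            "content": content,
            "overwrite": True,
        },
        timeout=60,
    )
    resp.raise_for_status()

deploy_notebook("notebooks/ingest_orders.py", "/Shared/prod/ingest_orders")
```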

Posted 1 month ago

Apply

0 years

0 Lacs

India

On-site

About NTT DATA
NTT DATA is a $30+ billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long term success. We invest over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure, and connectivity. We are also one of the leading providers of digital and AI infrastructure in the world. NTT DATA is part of NTT Group and headquartered in Tokyo. Visit us at nttdata.com
Job Description for Data Engineer
Experience with the following key deployment automations is required:
5+ years of experience in Azure Data Factory (ADF)
Azure SQL
PySpark
Delta Lake (Databricks)
Databricks deployment pipelines
Automation of ADF deployments
Automation of database deployments (Azure SQL, Delta Lake)
Automation of Databricks deployments
Deployment of Python, Django, and React-based microservices to Azure services such as Function Apps, Container Apps, Web Apps, and Azure Kubernetes Service (AKS)
Job Mode: Hybrid (2 days in a week)
Job Location: Any NTT DATA office
Note: Preferred only candidates who can join within a 30-day time frame.
#NTTData #LI-NorthAmerica
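As a hedged illustration of the Delta Lake (Databricks) skill listed above, the sketch below performs an idempotent upsert into a Delta table using the DeltaTable merge API. Storage paths, the join key and table layout are placeholders assumed for the example.

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("orders_upsert").getOrCreate()

# Incoming batch staged by an ADF copy activity (path is illustrative)
updates = spark.read.format("parquet").load(
    "abfss://staging@examplelake.dfs.core.windows.net/orders/")

target = DeltaTable.forPath(
    spark, "abfss://curated@examplelake.dfs.core.windows.net/orders/")

# Upsert on the business key so reruns of the same batch are safe
(target.alias("t")
 .merge(updates.alias("s"), "t.order_id = s.order_id")
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())
```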

Posted 1 month ago

Apply