8.0 - 13.0 years
22 - 37 Lacs
Noida, Pune, Bengaluru
Work from Office
Desired Profile - Collect, analyse, and document all business and functional requirements for the Data Lake infrastructure. Support advancements in Business Analytics to ensure the system meets evolving business needs. Profile new and existing data sources to define and refine the data warehouse model. Collaborate with architects and stakeholders to define data workflows and strategies. Drive process improvements to optimize data handling and performance. Perform deep data analysis to ensure accuracy, consistency, and quality of data. Work with QA resources on test planning to ensure quality and consistency within the data lake and Data Warehouse. Gather data governance requirements and ensure implementation of data governance practices within the Data Lake infrastructure. Collaborate with functional users to gather and define metadata for the Data Lake. Key Skills: Azure Data Factory, Synapse, Power BI, Data Lake, SQL, KQL, Azure Security, data integration, Oracle EBS, cloud computing, data visualization, CI/CD pipelines, communication skills. Please share your CV at parul@mounttalent.com
Posted 1 month ago
8.0 - 13.0 years
25 - 35 Lacs
Pune, Gurugram, Bengaluru
Hybrid
Roles & Responsibilities: Build strong relationships and channels of communication with other team members. When necessary, challenge the team on their estimation values to gain a deeper understanding of the product from a business, design, and technical perspective. Support the team in building a trusting and respectful environment where issues can be discussed openly and in a calm and friendly way. Facilitate all reporting on scrum health and help to identify key learnings and areas of improvement. Actively help the team in becoming self-organized and support them in aligning to the 12 principles of agile. Display strong communication skills and be comfortable dealing with conflict resolution to facilitate continuous improvement and empowerment. Manage and mitigate dependencies and support the team in accomplishing the sprint goal. Collaborate effectively with Scrum Leads on standards and best practices. Report accurately to management, depicting the true picture, and resolve impediments on a daily basis. Scrum Masters must facilitate all Scrum rituals, including Daily Stand-ups, Backlog Grooming, Estimation Sessions, Sprint Planning, and Retrospectives. The candidate should also have experience in Business Analysis and should have worked as a Business Analyst in the past. Key Skills: 8+ years of experience working as a scrum master. Experienced in working with Atlassian tools. Experienced in assisting product owners on product backlogs. Experienced in coaching team members. Excellent verbal and written communication skills.
Posted 1 month ago
5.0 - 7.0 years
15 - 25 Lacs
Pune, Ahmedabad
Hybrid
Key Responsibilities: Design, develop, and optimize data pipelines and ETL/ELT workflows using Microsoft Fabric, Azure Data Factory, and Azure Synapse Analytics. Implement Lakehouse and Warehouse architectures within Microsoft Fabric, supporting medallion (bronze-silver-gold) data layers. Collaborate with business and analytics teams to build scalable and reliable data models (star/snowflake) using Azure SQL, Power BI, and DAX. Utilize Azure Analysis Services, Power BI Semantic Models, and Microsoft Fabric Dataflows for analytics delivery. Very good hands-on experience with Python for data transformation and processing. Apply CI/CD best practices and manage code through Git version control. Ensure data security, lineage, and quality using data governance best practices and Microsoft Purview (if applicable). Troubleshoot and improve performance of existing data pipelines and models. Participate in code reviews, testing, and deployment activities. Communicate effectively with stakeholders across geographies and time zones. Required Skills: Hands-on experience with Microsoft Fabric (Lakehouse, Warehouse, Dataflows, Pipelines). Strong knowledge of Azure Synapse Analytics, Azure Data Factory, Azure SQL, and Azure Analysis Services. Proficiency in Power BI and DAX for data visualization and analytics modeling. Strong Python skills for scripting and data manipulation. Experience in dimensional modeling, star/snowflake schemas, and Kimball methodologies. Familiarity with CI/CD pipelines, DevOps, and Git-based versioning. Understanding of data governance, data cataloging, and quality management practices. Excellent verbal and written communication skills.
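For illustration only, here is a minimal PySpark sketch of the kind of bronze-to-silver step a medallion (bronze-silver-gold) Lakehouse pipeline like the one described above might contain; the table names, columns, and Fabric/Databricks Delta runtime are assumptions, not details from this posting.

```python
# Hypothetical bronze -> silver cleansing step in a medallion Lakehouse.
# Assumes a Fabric/Databricks-style runtime where Delta tables are available.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze_to_silver").getOrCreate()

# Read raw (bronze) orders exactly as ingested from the source system
bronze = spark.read.table("bronze.orders")

# Cleanse and conform: de-duplicate on the business key, fix types, drop bad rows
silver = (
    bronze.dropDuplicates(["order_id"])
          .withColumn("order_date", F.to_date("order_date"))
          .withColumn("amount", F.col("amount").cast("double"))
          .filter(F.col("amount") > 0)
)

# Persist the conformed (silver) layer as a managed Delta table
silver.write.mode("overwrite").format("delta").saveAsTable("silver.orders")
```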
Posted 1 month ago
5.0 - 10.0 years
13 - 23 Lacs
Mumbai, Hyderabad, Bengaluru
Hybrid
Primarily looking for a Data Engineer with expertise in processing data pipelines using Databricks, PySpark and SQL on cloud distributions like AWS. Must have: AWS, Databricks. Good to have: PySpark, Snowflake, Talend. Requirements: The candidate must be experienced in working on relevant projects. Other ideal qualifications include experience in processing data pipelines using Databricks Spark SQL on Hadoop distributions like AWS EMR, Databricks, Cloudera, etc. Should be very proficient in performing large-scale data operations using Databricks and overall very comfortable using Python. Familiarity with AWS compute, storage and IAM concepts. Experience in working with an S3 Data Lake as the storage tier. Any ETL background (Talend, AWS Glue, etc.) is a plus but not required. Cloud warehouse experience (Snowflake, etc.) is a huge plus. Carefully evaluates alternative risks and solutions before taking action. Optimizes the use of all available resources. Develops solutions to meet business needs that reflect a clear understanding of the objectives, practices and procedures of the corporation, department and business unit. Skills: Hands-on experience with Databricks, Spark SQL and the AWS cloud platform, especially S3, EMR, Databricks, Cloudera, etc. Experience in shell scripting. Exceptionally strong analytical and problem-solving skills. Relevant experience with ETL methods and with retrieving data from dimensional data models and data warehouses. Strong experience with relational databases and data access methods, especially SQL. Excellent collaboration and cross-functional leadership skills. Excellent communication skills, both written and verbal. Ability to manage multiple initiatives and priorities in a fast-paced, collaborative environment. Ability to leverage data assets to respond to complex questions that require timely answers. Working knowledge of migrating relational and dimensional databases on the AWS cloud platform. Mandatory Skills: Apache Spark, Databricks, Java, Python, Scala, Spark SQL.
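As a hedged illustration of the S3-data-lake work described above (not the client's actual pipeline), the sketch below reads Parquet from a hypothetical S3 prefix and aggregates it with Spark SQL; it assumes a Databricks or EMR runtime with S3 access already configured.

```python
# Hypothetical S3 data-lake read plus a Spark SQL aggregation.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("s3_lake_example").getOrCreate()

# Read Parquet files landed in the raw zone of an S3 data lake (bucket is a placeholder)
events = spark.read.parquet("s3://example-data-lake/raw/events/")
events.createOrReplaceTempView("events")

# Aggregate with Spark SQL and write the result to the curated zone
daily = spark.sql("""
    SELECT event_date, event_type, COUNT(*) AS event_count
    FROM events
    GROUP BY event_date, event_type
""")
daily.write.mode("overwrite").parquet("s3://example-data-lake/curated/daily_events/")
```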
Posted 1 month ago
3.0 - 6.0 years
20 - 25 Lacs
Bengaluru
Hybrid
Join us as a Data Engineer II in Bengaluru! Build scalable data pipelines using Python, SQL, AWS, Airflow, and Kafka. Drive real-time & batch data systems across analytics, ML, and product teams. A hybrid work option is available. Required Candidate profile 3+ yrs in data engineering with strong Python, SQL, AWS, Airflow, Spark, Kafka, Debezium, Redshift, ETL & CDC experience. Must know data lakes, warehousing, and orchestration tools.
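For context, a minimal Airflow DAG of the kind such a role typically owns is sketched below; the DAG id, tasks, and schedule are hypothetical, and a real pipeline would replace the print statements with actual extract/load logic (e.g. CDC via Debezium into Redshift).

```python
# Hypothetical daily batch DAG (Airflow 2.x style); task bodies are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # e.g. pull incremental rows from a source database or CDC stream
    print("extracting...")


def load():
    # e.g. copy staged files into the warehouse (Redshift, etc.)
    print("loading...")


with DAG(
    dag_id="daily_events_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task
```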
Posted 1 month ago
5.0 - 10.0 years
11 - 21 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Job Title: Senior Data Engineer ADF | Snowflake | DBT | Databricks Experience: 5 to 8 Years Locations: Pune / Hyderabad / Gurgaon / Bangalore (Hybrid) Job Type: Full Time, Permanent Job Description: We are hiring for a Senior Data Engineer role with strong expertise in Azure Data Factory (ADF) , Snowflake , DBT , and Azure Databricks . The ideal candidate will be responsible for designing, building, and maintaining scalable cloud-based data pipelines and enabling high-quality data delivery for analytics and reporting. Key Responsibilities Build and manage ETL/ELT pipelines using ADF, Snowflake, DBT, and Databricks Create parameterized, reusable components within ADF pipelines Perform data transformations and modeling in Snowflake using DBT Use Databricks for data processing using PySpark/SQL Collaborate with stakeholders to define and implement data solutions Optimize data workflows for performance, scalability , and cost-efficiency Ensure data quality, governance, and documentation standards Mandatory Skills Azure Data Factory (ADF) Snowflake DBT (Data Build Tool) Azure Databricks Strong SQL and data modeling experience Good-to-Have Skills Azure Data Lake, Azure Synapse, Blob Storage CI/CD using Azure DevOps or GitHub Python scripting, PySpark Power BI/Tableau integration Experience in metadata/data governance tools Role Requirements Education : Bachelors/Masters degree in Computer Science, Data Engineering, or related field Certifications : Azure or Snowflake certification is a plus Strong problem-solving and communication skills Keywords: Azure Data Factory, ADF, Snowflake, DBT, Azure Databricks, PySpark, SQL, Data Engineer, Azure Data Lake, ETL, ELT, Azure Synapse, Power BI, CI/CD
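To make the ELT idea concrete, here is a hedged sketch of the kind of push-down transformation that DBT or ADF would orchestrate in Snowflake, written directly against the Snowflake Python connector; the account, credentials, and table names are placeholders, not details from this posting.

```python
# Hypothetical ELT step: build a conformed dimension from a staged table in Snowflake.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",   # placeholder
    user="example_user",         # placeholder
    password="***",              # use a secret store in practice
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="STAGING",
)
try:
    cur = conn.cursor()
    # Keep only the latest record per customer_id, typical of a dimension build
    cur.execute("""
        CREATE OR REPLACE TABLE ANALYTICS.CORE.DIM_CUSTOMER AS
        SELECT customer_id, TRIM(customer_name) AS customer_name, country
        FROM ANALYTICS.STAGING.RAW_CUSTOMER
        QUALIFY ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY load_ts DESC) = 1
    """)
finally:
    conn.close()
```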
Posted 1 month ago
3.0 - 7.0 years
12 - 22 Lacs
Bengaluru
Hybrid
Job Description: We are looking for a talented and motivated Data Analyst / BI Developer with 3-5 years of experience to join our team. The ideal candidate will have a strong background in SQL, experience with dashboard creation using Tableau, and hands-on knowledge of either AWS Redshift (or other AWS cloud data warehouse services) or Databricks. A problem solver with excellent solution-finding abilities and a proactive, independent work ethic is essential. As a key contributor to the team, you will work with various business stakeholders to deliver actionable insights and drive data-driven decision-making within the organization. A strong understanding of the US healthcare ecosystem will be an added advantage. Key Responsibilities: Develop, design, and maintain dashboards and reports using Tableau to support business decision-making. Write and optimize complex SQL queries to extract, manipulate, and analyze data from multiple sources. Collaborate with cross-functional teams to understand business needs and translate them into effective data solutions. Work with AWS Redshift and/or Databricks for data extraction, transformation, and loading (ETL) processes. Proactively identify and resolve data issues, acting as a solution finder to overcome challenges and drive improvements. Work independently, taking ownership of tasks and ensuring high-quality deliverables within deadlines. Be a strong team player, contributing to team knowledge sharing and fostering a collaborative environment. Apply knowledge of US healthcare systems to help build relevant data solutions and insights. Required Skills & Qualifications: Minimum 3 years of experience in data analysis, business intelligence, or related roles. Strong expertise in SQL for data querying and manipulation. Extensive experience creating dashboards and reports using Tableau and Power BI. Hands-on experience working with AWS Redshift and/or Databricks. Proven problem-solving skills with a focus on providing actionable data solutions. Self-motivated and able to work independently, while being a proactive team player. Experience or strong understanding of US healthcare systems and data-related needs will be a plus. Excellent communication skills with the ability to work across different teams and stakeholders. Additional Details: Work Mode: Hybrid. Notice Period: Preferably looking for immediate joiners. Job Location: Cessna Business Park, Kadubeesanahalli, Bangalore. Interested candidates can share their updated CV to the below mail ID. Contact Person - Pawan, Contact No. - 8951873995, Mail ID - pawanbehera@infinitiresearch.com
Posted 1 month ago
5.0 - 9.0 years
6 - 15 Lacs
Pune
Work from Office
Greetings from Infosys BPM Ltd.! You are kindly invited for the Infosys BPM Walk-In Drive on 28th June 2025 at Pune. Note: Please carry a copy of this email to the venue and make sure you register your application before attending the walk-in. Please mention your Candidate ID on top of the resume. Please use the below link to apply and register your application: https://career.infosys.com/jobdesc?jobReferenceCode=PROGEN-HRODIRECT-216785 Interview Information: Interview Date: 28th June 2025. Interview Time: 10:00 AM till 01:00 PM. Interview Venue: Taluka Mulshi, Plot No. 1, Pune, Phase 1, Building B1 Ground floor, Hinjewadi Rajiv Gandhi Infotech Park, Pune, Maharashtra-411057. Documents to Carry: Please carry 2 sets of your updated CV (hard copy). Carry any 2 photo identity proofs (PAN Card mandatory / Driving License / Voter's ID card / Passport). About the Job: We're seeking a skilled Azure Data Engineer to join our dynamic team and contribute to our data management and analytics initiatives. Job Role: Azure Data Engineer. Job Location: Pune. Experience: 5+ Yrs. Skills: SQL + ETL + Azure + Python + PySpark + Databricks. Job Description: As an Azure Data Engineer, you will play a crucial role in designing, implementing, and maintaining our data infrastructure on the Azure platform. You will collaborate with cross-functional teams to develop robust data pipelines, optimize data workflows, and ensure data integrity and reliability. Responsibilities: Design, develop, and deploy data solutions on Azure, leveraging SQL Azure, Azure Data Factory, and Databricks. Build and maintain scalable data pipelines to ingest, transform, and load data from various sources into Azure data repositories. Implement data security and compliance measures to safeguard sensitive information. Collaborate with data scientists and analysts to support their data requirements and enable advanced analytics and machine learning initiatives. Optimize and tune data workflows for performance and efficiency. Troubleshoot data-related issues and provide timely resolution. Stay updated with the latest Azure data services and technologies and recommend best practices for data engineering. Qualifications: Bachelor's degree in Computer Science, Information Technology, or a related field. Proven experience as a data engineer, preferably in a cloud environment. Strong proficiency in SQL Azure for database design, querying, and optimization. Hands-on experience with Azure Data Factory for ETL/ELT workflows. Familiarity with Azure Databricks for big data processing and analytics. Experience with other Azure data services such as Azure Synapse Analytics, Azure Cosmos DB, and Azure Data Lake Storage is a plus. Solid understanding of data warehousing concepts, data modeling, and dimensional modeling. Excellent problem-solving and communication skills. Regards, Infosys BPM
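For candidates unfamiliar with the stack, the sketch below shows one common pattern behind such pipelines: an incremental upsert of newly landed files into a Delta table. The storage path, table, and key column are assumptions for illustration, and it presumes a Databricks (or delta-spark) environment.

```python
# Hypothetical incremental upsert (MERGE) of an ADF-landed increment into a Delta table.
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.appName("incremental_upsert").getOrCreate()

# Newly landed increment, e.g. written by an ADF copy activity (path is a placeholder)
updates = spark.read.parquet("abfss://landing@examplelake.dfs.core.windows.net/customers/")

# Merge into the curated Delta table on the business key
target = DeltaTable.forName(spark, "curated.customers")
(
    target.alias("t")
          .merge(updates.alias("s"), "t.customer_id = s.customer_id")
          .whenMatchedUpdateAll()
          .whenNotMatchedInsertAll()
          .execute()
)
```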
Posted 1 month ago
0.0 - 5.0 years
0 - 108 Lacs
Kolkata
Work from Office
We are an AI company, led by Adithiyaa Tulshan, with 15 years of experience in AI. We are looking for associates who are hungry to adapt to the change driven by AI and deliver value for clients. Please fill in this form: https://forms.gle/BUcqTK3gBHARPcxv5 Benefits: Flexi working, Work from home, Overtime allowance, Annual bonus, Sales incentives, Performance bonus, Joining bonus, Retention bonus, Referral bonus, Career break/sabbatical
Posted 1 month ago
4.0 - 8.0 years
30 - 37 Lacs
Bengaluru
Work from Office
ECMS ID/ Title: 525632. Number of Openings: 1. Duration of contract: 6. No. of years of experience: Relevant 4-8 years. Detailed job description - Skill Set: Attached. Mandatory Skills*: Azure Data Factory, PySpark notebooks, Spark SQL, and Python. Good to Have Skills: ETL processes, SQL, Azure Data Factory, Data Lake, Azure Synapse, Azure SQL, Databricks, etc. Vendor Billing range: 9000-10000/Day. Remote option available (Yes/No): Hybrid mode. Work location: Most preferable Pune and Hyderabad. Start date: Immediate. Client Interview / F2F applicable: Yes. Background check process to be followed (Before onboarding / After onboarding): BGV Agency: Post
Posted 1 month ago
6.0 - 7.0 years
14 - 17 Lacs
Hyderabad
Work from Office
As a Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark Framework with Python or Scala on Hadoop and the Azure Cloud Data Platform. Responsibilities: Experienced in building data pipelines to ingest, process, and transform data from files, streams and databases. Process the data with Spark, Python, PySpark and Hive, HBase or other NoSQL databases on the Azure Cloud Data Platform or HDFS. Experienced in developing efficient software code for multiple use cases leveraging the Spark Framework, using Python or Scala and Big Data technologies, for various use cases built on the platform. Experience in developing streaming pipelines. Experience working with Hadoop / Azure ecosystem components to implement scalable solutions to meet the ever-increasing data volumes, using big data/cloud technologies such as Apache Spark, Kafka, any cloud computing etc. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Total 6-7+ years of experience in Data Management (DW, DL, Data Platform, Lakehouse) and Data Engineering skills. Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark / Python or Scala; minimum 3 years of experience on Cloud Data Platforms on Azure; experience in Databricks / Azure HDInsight / Azure Data Factory, Synapse, SQL Server DB. Good to excellent SQL skills. Preferred technical and professional experience: Certification in Azure and Databricks, or Cloudera Spark certified developers.
Posted 1 month ago
4.0 - 9.0 years
12 - 16 Lacs
Kochi
Work from Office
As a Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark Framework with Python or Scala on Hadoop and the Azure Cloud Data Platform. Responsibilities: Experienced in building data pipelines to ingest, process, and transform data from files, streams and databases. Process the data with Spark, Python, PySpark and Hive, HBase or other NoSQL databases on the Azure Cloud Data Platform or HDFS. Experienced in developing efficient software code for multiple use cases leveraging the Spark Framework, using Python or Scala and Big Data technologies, for various use cases built on the platform. Experience in developing streaming pipelines. Experience working with Hadoop / Azure ecosystem components to implement scalable solutions to meet the ever-increasing data volumes, using big data/cloud technologies such as Apache Spark, Kafka, any cloud computing etc. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark / Python or Scala; minimum 3 years of experience on Cloud Data Platforms on Azure; experience in Databricks / Azure HDInsight / Azure Data Factory, Synapse, SQL Server DB. Good to excellent SQL skills. Exposure to streaming solutions and message brokers like Kafka technologies. Preferred technical and professional experience: Certification in Azure and Databricks, or Cloudera Spark certified developers.
Posted 1 month ago
4.0 - 9.0 years
12 - 16 Lacs
Kochi
Work from Office
As a Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark Framework with Python or Scala on Hadoop and the AWS Cloud Data Platform. Responsibilities: Experienced in building data pipelines to ingest, process, and transform data from files, streams and databases. Process the data with Spark, Python, PySpark, Scala, and Hive, HBase or other NoSQL databases on Cloud Data Platforms (AWS) or HDFS. Experienced in developing efficient software code for multiple use cases leveraging the Spark Framework, using Python or Scala and Big Data technologies, for various use cases built on the platform. Experience in developing streaming pipelines. Experience working with Hadoop / AWS ecosystem components to implement scalable solutions to meet the ever-increasing data volumes, using big data/cloud technologies such as Apache Spark, Kafka, any cloud computing etc. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark / Python or Scala; minimum 3 years of experience on Cloud Data Platforms on AWS; experience in AWS EMR / AWS Glue / Databricks, AWS Redshift, DynamoDB. Good to excellent SQL skills. Exposure to streaming solutions and message brokers like Kafka technologies. Preferred technical and professional experience: Certification in AWS and Databricks, or Cloudera Spark certified developers.
Posted 1 month ago
10.0 - 11.0 years
8 - 12 Lacs
Noida
Work from Office
What We're Looking For: Solid understanding of data pipeline architecture, cloud infrastructure, and best practices in data engineering. Strong grip on SQL Server, Oracle, Azure SQL, and working with APIs. Skilled in data analysis – identify discrepancies, recommend fixes. Proficient in at least one programming language: Python, Java, or C#. Hands-on experience with Azure Data Factory (ADF), Logic Apps, Runbooks. Knowledge of PowerShell scripting and the Azure environment. Excellent problem-solving, analytical, and communication skills. Able to collaborate effectively and manage evolving project priorities. Roles and Responsibilities: Senior Data Engineer - Azure & Databricks. Development and maintenance of data pipelines, modernisation of the cloud data platform. At least 8 years of experience in the Data Engineering space. At least 4 years of experience in Apache Spark / Databricks. At least 4 years of experience in Python and at least 7 years in SQL and the ETL stack.
Posted 1 month ago
4.0 - 7.0 years
25 - 35 Lacs
Hyderabad
Hybrid
Looking for immediate joiners only. Please share your updated CV, CTC, ECTC, Notice Period/LWD @ monika.yadav@ness.com Job Description: Data Engineer. As a Data Engineer, you will develop and maintain data pipelines. You will be involved in the design of data solutions for ESG. You will implement and manage clusters for streaming using technologies like Postgres, Oracle, Scala, Azure Data Lake, Spark, Kafka, Databricks, ETL and Advanced SQL. You will be responsible for: Converting existing manual & semi-automated data ingress/egress processes to automated data pipelines. Creating data pipelines for AI/ML using Python/PySpark. The full operational lifecycle of the data platform, including creating a streaming platform & helping with Kafka apps development. Implementing scalable solutions to meet the ever-increasing data volumes, using big data/cloud technologies such as Apache Spark, Kafka, any cloud computing etc. Job Requirements: 4 years of experience in Big Data technologies. Experience in developing data processing flows using Python/PySpark. Hands-on experience in data ingestion, data cleansing, ETL, Data Mart creation and exposing data for consumers. Strong experience in Oracle / PostgreSQL. Experience in implementing and managing large-scale clusters for streaming: Kafka, Flink, Druid, NoSQL DB (MongoDB) etc. Experience with Elasticsearch, Splunk, Kibana or similar technologies. Good to have experience in a Business Intelligence tool (Qlik Sense). Knowledge of microservices. Familiarity with packages such as NumPy/pandas is desirable. Qualifications: Bachelor's degree in Computer Science, Information Technology, or a similar field (minimum qualification). Experience in Big Data technologies. Experience in developing data processing flows using Python/PySpark. Hands-on experience in data ingestion, data cleansing, ETL, Data Mart creation and exposing data for consumers. Strong experience in Oracle / PostgreSQL. Experience in implementing and managing large-scale clusters for streaming: Kafka, Flink, Druid, NoSQL DB (MongoDB) etc. Experience with Elasticsearch, Splunk, Kibana or similar technologies. Good to have experience in a Business Intelligence tool (Qlik Sense). Knowledge of microservices. Familiarity with packages such as NumPy/pandas is desirable.
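As a rough, hedged sketch of the streaming side of this role (broker, topic, schema, and paths are all hypothetical), a Spark Structured Streaming job reading ESG-style records from Kafka into Delta might look like this; it assumes the spark-sql-kafka connector is available on the cluster.

```python
# Hypothetical Kafka -> Delta structured streaming job.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("kafka_stream_example").getOrCreate()

schema = StructType([
    StructField("instrument_id", StringType()),
    StructField("esg_score", DoubleType()),
])

# Read the raw Kafka stream (broker address and topic are placeholders)
raw = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "esg-scores")
         .load()
)

# Parse the JSON payload into typed columns
parsed = raw.select(F.from_json(F.col("value").cast("string"), schema).alias("r")).select("r.*")

# Append to a Delta path with checkpointing for restart safety
query = (
    parsed.writeStream.format("delta")
          .option("checkpointLocation", "/tmp/checkpoints/esg")
          .outputMode("append")
          .start("/tmp/delta/esg_scores")
)
```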
Posted 1 month ago
8.0 - 13.0 years
25 - 30 Lacs
Pune
Work from Office
Job Title: Data Engineer - Data Solutions Delivery + Data Catalog & Quality Engineer. About Advanced Energy: Advanced Energy Industries, Inc. (NASDAQ: AEIS) enables design breakthroughs and drives growth for leading semiconductor and industrial customers. Our precision power and control technologies, along with our applications know-how, inspire close partnerships and innovation in thin-film and industrial manufacturing. We are proud of our rich heritage and award-winning technologies, and we value the talents and contributions of all Advanced Energy's employees worldwide. Department: Data and Analytics. Team: Data Solutions Delivery Team. Job Summary: We are seeking a highly skilled Data Engineer to join our Data and Analytics team. As a member of the Data Solutions Delivery team, you will be responsible for designing, building, and maintaining scalable data solutions. The ideal candidate should have extensive knowledge of Databricks, Azure Data Factory, and Google Cloud, along with strong data warehousing skills from data ingestion to reporting. Familiarity with the manufacturing and supply chain domains is highly desirable. Additionally, the candidate should be well-versed in data engineering, data product, and data platform concepts, data mesh, medallion architecture, and establishing enterprise data catalogs using tools like Ataccama, Collibra, or Microsoft Purview. The candidate should also have proven experience in implementing data quality practices using tools like Great Expectations, Deequ, etc. Key Responsibilities: Design, build, and maintain scalable data solutions using Databricks, ADF, and Google Cloud. Develop and implement data warehousing solutions, including ETL processes, data modeling, and reporting. Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions. Ensure data integrity, quality, and security across all data platforms. Provide expertise in data engineering, data product, and data platform concepts. Implement data mesh principles and medallion architecture to build scalable data platforms. Establish and maintain enterprise data catalogs using tools like Ataccama, Collibra, or Microsoft Purview. Implement data quality practices using tools like Great Expectations, Deequ, etc. Work closely with the manufacturing and supply chain teams to understand domain-specific data requirements. Develop and maintain documentation for data solutions, data flows, and data models. Act as an individual contributor, picking up tasks from technical solution documents and delivering high-quality results. Qualifications: Bachelor's degree in Computer Science, Information Technology, or a related field. Proven experience as a Data Engineer or in a similar role. In-depth knowledge of Databricks, Azure Data Factory, and Google Cloud. Strong data warehousing skills, including ETL processes, data modelling, and reporting. Familiarity with manufacturing and supply chain domains. Proficiency in data engineering, data product, and data platform concepts, data mesh, and medallion architecture. Experience in establishing enterprise data catalogs using tools like Ataccama, Collibra, or Microsoft Purview. Proven experience in implementing data quality practices using tools like Great Expectations, Deequ, etc. Excellent problem-solving and analytical skills. Strong communication and collaboration skills. Ability to work independently and as part of a team. Preferred Qualifications: Master's degree in a related field. Experience with cloud-based data platforms and tools. Certification in Databricks, Azure, or Google Cloud. As part of our total rewards philosophy, we believe in offering and maintaining competitive compensation and benefits programs for our employees to attract and retain a talented, highly engaged workforce. Our compensation programs are focused on equitable, fair pay practices, including market-based base pay and an annual pay-for-performance incentive plan, and we offer a strong benefits package in each of the countries in which we operate. Advanced Energy is committed to diversity in its workforce, including Equal Employment Opportunity for Minorities, Females, Protected Veterans, and Individuals with Disabilities. We are committed to protecting and respecting your privacy. We take your privacy seriously and will only use your personal information to administer your application in accordance with RA No. 10173, also known as the Data Privacy Act of 2012.
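To illustrate the data-quality angle in a library-agnostic way, the snippet below encodes the kind of rules a tool such as Great Expectations or Deequ would manage (non-null keys, uniqueness, valid ranges); the dataframe and column names are invented for the example.

```python
# Hypothetical, minimal data-quality checks of the kind Great Expectations/Deequ formalize.
import pandas as pd


def run_quality_checks(df: pd.DataFrame) -> dict:
    """Return a simple pass/fail report for a handful of data-quality rules."""
    return {
        "no_null_keys": bool(df["customer_id"].notna().all()),
        "unique_keys": bool(df["customer_id"].is_unique),
        "valid_amounts": bool((df["order_amount"] >= 0).all()),
    }


orders = pd.DataFrame({
    "customer_id": [1, 2, 2, None],
    "order_amount": [100.0, 250.5, -5.0, 80.0],
})
# All three rules fail for this sample: a null key, a duplicate key, a negative amount
print(run_quality_checks(orders))
```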
Posted 1 month ago
1.0 - 4.0 years
9 - 13 Lacs
Bengaluru
Work from Office
Mandate 2: Employees will have the freedom to work remotely all through the year. These employees, who form a large majority, will come together in their base location for a week, once every quarter. Analytics Engineer - Swiggy. About Swiggy: Swiggy, founded in 2014, is India's leading tech-driven on-demand delivery platform. With a vision to enhance the urban consumer's quality of life through unparalleled convenience, Swiggy connects millions of consumers with a vast network of restaurants and stores across 500+ cities. Our growth stems from cutting-edge technology, innovative thinking, and well-informed decision-making. Join the Swiggy Analytics team to collaborate on decoding hyperlocal trends and impact the entire value chain. Role and Responsibilities: As an Analytics Engineer at Swiggy, you will be at the heart of our data-driven approach, collaborating with cross-functional teams to transform raw data into actionable insights. Your role will encompass: Collaboration with the Data Engineering team - Partner closely with our Data Engineering team to ensure efficient data collection, storage, and processing. Collaborate in designing and optimizing data pipelines for seamless data movement. Work jointly on data architecture decisions to enhance analytics capabilities. Performance tuning and query optimization - Dive into large, complex datasets to create efficient and optimized queries for analysis. Identify bottlenecks and optimize data processing pipelines for improved performance. Implement best practices for query optimization, ensuring swift data retrieval. DataOps excellence - Contribute to the DataOps framework, automating data processes and enhancing data quality. Implement monitoring and alerting systems to ensure smooth data operations. Collaborate with the team to develop self-serve platforms for recurring analysis. Qualifications and Skills: Bachelor's or Master's degree in Engineering, Mathematics, Statistics, or a related quantitative field. 2-4 years of analytics, data science, or related experience. Proficiency (2-4 years) in SQL, R, Python, Excel, etc., for effective data manipulation. Hands-on experience with Snowflake and Spark/Databricks, adept at Query Profiles and bottleneck identification. What We Expect: Apply creative thinking to solve real-world problems using data-driven insights. Embrace a "fail fast, learn faster" approach in a dynamic, fast-paced environment. Exhibit proficient verbal and written communication skills. Thrive in an unstructured environment, demonstrating attention to detail and self-direction. Foster collaboration and partnerships across functions. Join us as an Analytics Engineer at Swiggy to contribute significantly to our data ecosystem, drive operational efficiency, and be an integral part of our data-driven journey. Your expertise will play a pivotal role in influencing our strategic decisions and reshaping the food delivery landscape.
Posted 1 month ago
1.0 - 4.0 years
4 - 8 Lacs
Gurugram
Work from Office
Company Overview Leading with our core values of Quality, Integrity, and Opportunity, MedInsight is one of the healthcare industrys most trusted solutions for healthcare intelligence Our company purpose is to empower easy, data-driven decision-making on important healthcare questions Through our products, education, and services, MedInsight is making an impact on healthcare by helping to drive better outcomes for patients while reducing waste Over 300 leading healthcare organizations have come to rely on MedInsight analytic solutions for healthcare cost and care management, MedInsight is a subsidiary of Milliman; a global, employee-owned consultancy providing actuarial consulting, retirement funding and healthcare financing, enterprise risk management and regulatory compliance, data analytics and business transformation as well as a range of other consulting and technology solutions, Position Summary: We are seeking an entry-level Business Analyst to join our team to assist in all phases of the software development life cycle Individuals interested in this position must have the ability to assist in the management of multiple projects in varying stages of implementation Initially this position will focus on assisting in the development and support of our analytic products, requiring you to work closely with clinicians, actuaries, technical resources, and clients In addition to being a self-starter with the ability to work both independently and in a cross-functional team environment, candidates interested in this position must have the ability to: Manage multiple priorities in a fast-paced environment, Work independently on assigned tasks, (i-e , plan, organize, delegate, problem solve and meet established deadlines), Learn quickly, Follow-through, Prioritize work under time pressure, Execute work with exceptional attention to detail on all project tasks, Primary Responsibilities Responsibilities specific to the Business Analyst role include: Perform tasks in support of a systems development lifecycle including requirements definition, functional specifications, use case scenarios, test plans, documentation, training, and implementation, Gain an in-depth understanding of customer data for purposes of modeling, mapping and integrating data into a data warehouse, Work with client and market advocates (such as product management and sales/marketing) to determine system/business objectives and solutions, Train new users on the system and assisting with change management for new releases; developing test plans; testing new applications and maintenance releases, Work directly with end users in utilizing the client developed products, Establish, review, and validate action and decisions relative to system design and programming, Work with customers to evaluate and ensure the overall integrity of data and reporting results in meeting business needs and use requirements, Provide timely responses to technical and business-related questions/issues and provide consultative advice on use of data in meeting customer objectives, Skills And Requirements Candidates must be team players with excellent interpersonal skills They must also have solid, proven experience developing commercial quality business software applications, Requirements Bachelors degree in Mathematics, Statistics, Engineering, Physical Sciences, Pharmacy or equivalent work experience, Ability to analyze user requests, define requirements, develop project plans and report conclusions, Ability to work creatively and flexibly, both 
independently and as part of a team, Working knowledge of healthcare data, Attention to fine details and work processes, Desire and ability to learn new skills, Good organizational, and written and oral communications skills, Preferred Skills Working knowledge of Databricks, Python, SQL, and relational database systems, Working knowledge of standard clinical terminologies and coding systems, Experience with Excel and Microsoft Office products,
Posted 1 month ago
5.0 - 10.0 years
7 - 17 Lacs
Pune, Chennai, Bengaluru
Hybrid
Hi All, We are looking for an Azure Data Engineer. No. of positions: 10. Location: Pune, Chennai, Bangalore, Coimbatore. Role: Permanent. Key skills: Azure Data Factory (primary), Azure Databricks Spark (PySpark, SQL). Band: B2, B3, C1. Experience: 5 to 15 years. Must-have skills: Cloud certified in one of these categories - Azure Data Engineer; Azure Data Factory, Azure Databricks Spark (PySpark or Scala), SQL, data ingestion, curation; Semantic modelling / optimization of data model to work within Rahona; Experience in Azure ingestion from on-prem sources, e.g. mainframe, SQL Server, Oracle; Experience in Sqoop / Hadoop; Microsoft Excel (for metadata files with requirements for ingestion); Any other certificate in Azure/AWS/GCP and data engineering hands-on experience in cloud; Strong programming skills with at least one of Python, Scala or Java; Strong SQL skills (T-SQL or PL-SQL); Data files movement via mailbox; Source-code versioning/promotion tools, e.g. Git/Jenkins; Orchestration tools, e.g. Autosys, Oozie; Source-code versioning with Git. Nice-to-have skills: Experience working with mainframe files; Experience in an Agile environment, JIRA/Confluence tools. If your skills and experience match the above role, do apply by sharing your resume to Indu.sharma@wipro.com. Note: Preference will be given to candidates with an immediate to 45-day notice period. Thanks & Regards, Indu Sharma, TA Specialist
Posted 1 month ago
7.0 - 12.0 years
20 - 25 Lacs
Bengaluru
Work from Office
Client calls, guide towards optimized, cloud-native architectures, future state of their data platform, strategic recommendations and Microsoft Fabric integration. Desired Skills and experience Candidates should have a B.E./B.Tech/MCA/MBA in Finance, Information Systems, Computer Science or a related field 7+ years of experience as a Data and Cloud architecture with client stakeholders AZ Data Platform Expertise: Synapse, Databricks, Azure Data Factory (ADF), Azure SQL (DW/DB), Power BI (PBI). Define modernization roadmaps and target architecture. Strong understanding of data governance best practices for data quality, Cataloguing, and lineage. Proven ability to lead client engagements and present complex findings. Excellent communication skills, both written and verbal Extremely strong organizational and analytical skills with strong attention to detail Strong track record of excellent results delivered to internal and external clients Able to work independently without the needs for close supervision and collaboratively as part of cross-team efforts Experience with delivering projects within an agile environment Experience in project management and team management Key responsibilities include: Lead all interviews workshops to capture current/future needs. Direct the technical review of Azure (AZ) infrastructure (Databricks, Synapse Analytics, Power BI) and critical on-premises (on-prem) systems. Come up with architecture designs (Arch. Designs), focusing on refined processing strategies and Microsoft Fabric. Understand and refine the Data Governance (Data Gov.) roadmap, including data cataloguing (Data Cat.), lineage, and quality. Lead project deliverables, ensuring actionable and strategic outputs. Evaluate and ensure quality of deliverables within project timelines Develop a strong understanding of equity market domain knowledge Collaborate with domain experts and business stakeholders to understand business rules/logics Ensure effective, efficient, and continuous communication (written and verbally) with global stakeholders Independently troubleshoot difficult and complex issues on dev, test, UAT and production environments Responsible for end-to-end delivery of projects, coordination between client and internal offshore teams and manage client queries Demonstrate high attention to detail, should work in a dynamic environment whilst maintaining high quality standards, a natural aptitude to develop good internal working relationships and a flexible work ethic Responsible for Quality Checks and adhering to the agreed Service Level Agreement (SLA) / Turn Around Time (TAT)
Posted 1 month ago
7.0 - 12.0 years
4 - 8 Lacs
Bengaluru
Work from Office
Conduct technical analyses of existing data pipelines, ETL processes, and on-premises/cloud system, identify technical bottlenecks, evaluate migration complexities, and propose optimizations. Desired Skills and experience Candidates should have a B.E./B.Tech/MCA/MBA in Finance, Information Systems, Computer Science or a related field 7+ years of experience as a Data and Cloud architecture with client stakeholders Strong experience in Synapse Analytics, Databricks, ADF, Azure SQL (DW/DB), SSIS. Strong experience in Advanced PS, Batch Scripting, C# (.NET 3.0). Expertise on Orchestration systems with ActiveBatch and AZ orchestration tools. Strong understanding of data warehousing, DLs, and Lakehouse concepts. Excellent communication skills, both written and verbal Extremely strong organizational and analytical skills with strong attention to detail Strong track record of excellent results delivered to internal and external clients Able to work independently without the needs for close supervision and collaboratively as part of cross-team efforts Experience with delivering projects within an agile environment Experience in project management and team management Key responsibilities include: Understand and review PowerShell (PS), SSIS, Batch Scripts, and C# (.NET 3.0) codebases for data processes. Assess the complexity of trigger migration across Active Batch (AB), Synapse, ADF, and Azure Databricks (ADB). Define usage of Azure SQL DW, SQL DB, and Data Lake (DL) for various workloads, proposing transitions where beneficial. Analyze data patterns for optimization, including direct raw-to-consumption loading and zone elimination (e.g., stg/app zones). Understand requirements for external tables (Lakehouse) Lead project deliverables, ensuring actionable and strategic outputs. Evaluate and ensure quality of deliverables within project timelines Develop a strong understanding of equity market domain knowledge Collaborate with domain experts and business stakeholders to understand business rules/logics Ensure effective, efficient, and continuous communication (written and verbally) with global stakeholders Independently troubleshoot difficult and complex issues on dev, test, UAT and production environments Responsible for end-to-end delivery of projects, coordination between client and internal offshore teams and manage client queries Demonstrate high attention to detail, should work in a dynamic environment whilst maintaining high quality standards, a natural aptitude to develop good internal working relationships and a flexible work ethic Responsible for Quality Checks and adhering to the agreed Service Level Agreement (SLA) / Turn Around Time (TAT)
Posted 1 month ago
5.0 - 7.0 years
4 - 8 Lacs
Gurugram
Work from Office
Job Responsibilities Design and developing complex applications. An innovative, result-orientated individual, seeking challenges in order to utilize the knowledge and experience they have gained working across a number of clients. Development of real-time, multi-threaded application. Desired Skills and Experience Candidate Profile 5+ years of industry experience in software development using Java, Spring Boot and SQL. Proficient in using Java 8 features such as lambda expressions, streams, and functional interfaces. Experience with newer versions of Java and their enhancements. Strong understanding and practical experience with various data structures (arrays, linked lists, stacks, queues, trees, graphs) and algorithms (sorting, searching, dynamic programming, etc.). Experience in full software development lifecycle (SDLC) including requirements gathering, design, coding, testing, and deployment. Familiar with Spring, Hibernate, Maven, Gradle, and other Java-related frameworks and tools. Proficient in SQL and experience with databases like MySQL, PostgreSQL, or Oracle. Experience working with technologies such as Kafka, MongoDB, Apache Spark/DataBricks, and Azure Cloud Good experience of API/Microservices, Publisher/Subscriber and related data integration patterns Having experience in Unit Testing with Junit or any other similar framework Strong understanding of OOP and Design Patterns Working with users, senior management and stake holders across multiple disciplines Mentoring and developing technical colleagues. Code management knowledge (e.g., version control, code branching merging, continuous integration delivery, build deployment strategies, testing lifecycle) Experience in managing stakeholder expectations (client and project team) and generating relevant reports. Excellent project tracking and monitoring skills Good decision making and problem-solving skills. Adaptable, flexible and ability to prioritize and work in tight schedules. Ability to manage pressure, ambiguity and change. Good understanding of all knowledge areas in software development including requirement gathering, designing, development, testing, maintenance, quality control etc. Preferred experience with Agile methodology and knowledge of Financial Services/Asset Management Industry Ensure quality of deliverables within project timelines Independently manage daily client communication, especially over calls Drives the work towards completion with accuracy and timely deliverables. Good to have Financial Services knowledge Key Responsibilities A candidate needs to interact with the global financial clients regularly and will be responsible for final delivery of work including: Translate client requirements into actionable software solutions. Understand the business requirements from the customers. Direct and manage project development from beginning to end. Effectively communicate project expectations to team members in a timely and clear manner Communicate with relevant stakeholders on an ongoing basis. Identify and manage project dependencies and critical path. Guide the team to implement industry best practices. Working as a part of a team developing new enhancement and revamping the existing trade limit persistence and pre trade risk check micro services (LMS) based on the clients own low latency framework. 
Designing and developing the persistence cache layer, which will use the MONGO persistence for storing. Design and development work for SMS integration to send out the 2FA code and for other business reasons. Migrating the existing Couchbase DB-based limit document processing system to a new AMPS-based processing microservice. Design and implement the system from scratch; build enhancements and feature requests using Java and Spring Boot. Build prototypes of application solutions as needed. Be involved in both development and maintenance of the systems. Work collaboratively in a global setting; should be eager to learn new technologies. Provide support for any implemented solutions including incident, problem, and defect management, and appropriately cross-train other members so that they are able to support the solutions. Responsible for extending and maintaining the existing codebase with a focus on quality, re-usability, maintainability and consistency. Independently troubleshoot difficult and complex issues on production and other environments. Demonstrate high attention to detail, should work in a dynamic environment whilst maintaining high quality standards, a natural aptitude to develop good internal working relationships and a flexible work ethic. Responsible for Quality Checks and adhering to the agreed Service Level Agreement (SLA) / Turn Around Time (TAT)
Posted 1 month ago
3.0 - 7.0 years
4 - 8 Lacs
Bengaluru
Work from Office
As a member of the Data and Technology practice, you will be working on advanced AI ML engagements tailored for the investment banking sector. This includes developing and maintaining data pipelines, ensuring data quality, and enabling data-driven insights. Your core responsibility will be to build and manage scalable data infrastructure that supports our proof-of-concept initiatives (POCs) and full-scale solutions for our clients. You will work closely with data scientists, DevOps engineers, and clients to understand their data requirements, translate them into technical tasks, and develop robust data solutions. Your primary duties will encompass: Develop, optimize, and maintain scalable and reliable data pipelines using tools such as Python, SQL, and Spark. Integrate data from various sources including APIs, databases, and cloud storage solutions such as Azure, Snowflake, and Databricks. Implement data quality checks and ensure the accuracy and consistency of data. Manage and optimize data storage solutions, ensuring high performance and availability. Work closely with data scientists and DevOps engineers to ensure seamless integration of data pipelines and support machine learning model deployment. Monitor and optimize the performance of data workflows to handle large volumes of data efficiently. Create detailed documentation of data processes. Implement security best practices and ensure compliance with industry standards. Experience / Skills 5+ years of relevant experience in: Experience in a data engineering role , preferably within the financial services industry . Strong experience with data pipeline tools and frameworks such as Python, SQL, and Spark. Proficiency in cloud platforms, particularly Azure, Snowflake, and Databricks. Experience with data integration from various sources including APIs and databases. Strong understanding of data warehousing concepts and practices. Excellent problem-solving skills and attention to detail. Strong communication skills, both written and oral, with a business and technical aptitude. Additionally, desired skills: Familiarity with big data technologies and frameworks. Experience with financial datasets and understanding of investment banking metrics. Knowledge of visualization tools (e.g., PowerBI). Education Bachelors or Masters in Science or Engineering disciplines such as Computer Science, Engineering, Mathematics, Physics, etc.
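As a hedged example of the API-to-lake integration mentioned above (the endpoint, fields, and output path are placeholders, not a real client API), a minimal ingestion step might look like this:

```python
# Hypothetical REST API ingestion staged as Parquet for downstream loading.
import requests
import pandas as pd


def fetch_page(url: str, page: int) -> list:
    """Fetch one page of results from a paginated JSON API (response schema assumed)."""
    resp = requests.get(url, params={"page": page}, timeout=30)
    resp.raise_for_status()
    return resp.json()["results"]


records = []
for page in range(1, 4):  # paginate over the first few pages
    records.extend(fetch_page("https://api.example.com/v1/trades", page))

trades = pd.DataFrame.from_records(records)
trades.to_parquet("landing/trades.parquet", index=False)  # requires pyarrow or fastparquet
```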
Posted 1 month ago
4.0 - 6.0 years
5 - 9 Lacs
Gurugram
Work from Office
We are looking for an experienced Data Scientist to work at one of our global biopharma customers on a range of Biostats models consulting and development engagements. They are expected to bring extensive knowledge in best practices on R package development, model development and deployment on Databricks, collaboration with version control systems, and familiarity with other topics such as data architecture and cloud infrastructure. Desired Skills and experience Candidates should have a B.E./B.Tech/MCA/MBA in Finance, Information Systems, Computer Science or a related field Strong experience in R programming and package development Proficiency with GitHub and unit testing frameworks. Strong documentation and communication skills. A background or work experience in biostatistics or a similar discipline (Preferred). Expert knowledge in Survival Analysis (Preferred) Statistical model deployment, and end-to-end MLOps is nice to have. Having worked extensively on cloud infrastructure, preferably Databricks and Azure. Shiny development is nice to have. Can work with customer stakeholders to understand business processes and workflows and can design solutions to optimize processes via streamlining and automation. DevOps experience and familiarity with software release process. Familiar with agile delivery methods. Excellent communication skills, both written and verbal Extremely strong organizational and analytical skills with strong attention to detail Strong track record of excellent results delivered to internal and external clients Able to work independently without the needs for close supervision and also collaboratively as part of cross-team efforts Experience with delivering projects within an agile environment Key responsibilities include: Evaluate and document R packages, including metadata and user-focused use cases. Develop unit tests aligned with best practices in R package development. Collaborate closely with internal stakeholders. Design and implement technical solutions for Survival Analysis, based on statistical and business requirements Develop professional quality R packages. Provide consultancy on Biostats model development and deployment best practices. Review and optimize code, integrate existing modelling code into packages. Design and implement end to end modelling and deployment process on Databricks. Support and collaborate with adjacent teams (e.g. products, IT) to integrate the modelling solution. Continually innovate with the team and the customer on using modern tooling to improve model development and deployment. Demonstrate high attention to detail, should work in a dynamic environment whilst maintaining high quality standards, a natural aptitude to develop good internal working relationships and a flexible work ethic Responsible for Quality Checks and adhering to the agreed Service Level Agreement (SLA) / Turn Around Time (TAT)
Posted 1 month ago
4.0 - 6.0 years
2 - 6 Lacs
Gurugram
Work from Office
As a key member of the DTS team, you will primarily collaborate closely with a leading global hedge fund on data engagements. Partner with the data strategy and sourcing team on data requirements, working directly on processes that develop the inputs to our models. Migrate processes from MATLAB to Databricks, moving to a more modern approach to updating them. Desired Skills and Experience - Essential skills: 4-6 years of experience with data analytics. Skilled in Databricks using SQL. Working knowledge of Snowflake and Python. Hands-on experience on large datasets and data structures using SQL. Experience working with financial and/or alternative data products. Excellent analytical and strong problem-solving skills. Exposure to S&P Capital IQ. Exposure to data models on Databricks. Education: B.E./B.Tech in Computer Science or related field. Key Responsibilities: Ability to write data processes in Databricks using SQL. Develop ELT processes for data preparation. SQL expertise to understand data sources and data structures. Document the developed data processes. Assist with related data tasks for model inputs within the Databricks environment. Take data from S&P Capital IQ, prep it, and get it ready for the model. Key Metrics: SQL, Databricks, Snowflake, S&P Capital IQ, Data Structures. Behavioral Competencies: Good communication (verbal and written). Experience in managing client stakeholders.
Posted 1 month ago