8.0 - 13.0 years
4 - 8 Lacs
Pune
Work from Office
Project Role: Software Configuration Engineer
Project Role Description: Implement the configuration management plan as directed by the Configuration Lead. Assist in the design of software configuration and customization to meet the business process design and application requirements.
Must-have skills: Spring Boot
Good-to-have skills: Java
Minimum 7.5 years of experience is required.
Educational Qualification: 15 years of full-time education
Summary: As a Software Configuration Engineer, you will be responsible for implementing the configuration management plan as directed by the Configuration Lead. You will assist in the design of software configuration and customization to meet the business process design and application requirements. Your day will involve collaborating with the team to ensure smooth configuration processes and customization.
Roles & Responsibilities:
- Expected to be an SME
- Collaborate with and manage the team to perform
- Engage with multiple teams and contribute to key decisions
- Provide solutions to problems for the immediate team and across multiple teams
- Assist in the design of software configuration and customization
- Excellent presentation and communication skills
Professional & Technical Skills:
- Must-have skills: proficiency in Spring Boot and Java
- 5-12 years of experience implementing Java Spring Boot projects with high SLAs for data availability and data quality; exposure to cloud technologies (Azure preferred) is a major plus
- 8+ years of strong delivery experience in backend development with Java Spring Boot, J2EE, and REST; experience with DevOps toolsets such as GitLab and Jenkins, and TDD/BDD tools such as PyTest and Cucumber
- Hands-on experience with build tools like Maven/Gradle; experience with Kubernetes/OpenShift, containerization (Docker, Podman, or similar), and cloud-native technologies and frameworks (e.g., Spring Boot)
- Hands-on experience with PostgreSQL or similar (RDBMS concepts)
- Experience with any cloud platform, preferably Azure development including Databricks, Azure Services, ADLS, etc.
- While not necessary, experience with Kafka and Elasticsearch will be a plus
- A real passion for and experience of Agile working practices, with a strong desire to work with baked-in quality subject areas such as TDD, BDD, test automation, and DevOps principles
Additional Information:
- The candidate should have a minimum of 8 years of experience in Spring Boot.
- This position is based at our client office in Pune (Kharadi). We are looking for candidates who are willing to work 3 days a week from the client office.
- 15 years of full-time education is required.
Posted 1 month ago
2.0 - 5.0 years
5 - 9 Lacs
Chennai
Work from Office
Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: Microsoft Azure Data Services, Microsoft Azure Analytics Services
Minimum 12 years of experience is required.
Educational Qualification: Full-time education
Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. Your typical day will involve collaborating with the team to develop and implement solutions that align with the organization's goals and objectives. You will utilize your expertise in Databricks Unified Data Analytics Platform to create efficient and effective applications that enhance business processes and drive innovation.
Roles & Responsibilities:
- Expected to be an SME; collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Expected to provide solutions to problems that apply across multiple teams.
- Collaborate with stakeholders to gather requirements and understand business needs.
- Design and develop applications using Databricks Unified Data Analytics Platform.
- Configure and customize applications to meet specific business process requirements.
- Perform code reviews and ensure adherence to coding standards.
- Provide technical guidance and mentorship to junior team members.
Professional & Technical Skills:
- Must-have skills: proficiency in Databricks Unified Data Analytics Platform.
- Good-to-have skills: experience with Microsoft Azure Data Services and Microsoft Azure Analytics Services.
- Strong understanding of statistical analysis and machine learning algorithms.
- Experience with data visualization tools such as Tableau or Power BI.
- Hands-on experience implementing machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering.
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity.
Additional Information:
- The candidate should have a minimum of 12 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Chennai office.
- Full-time education is required.
Posted 1 month ago
4.0 - 6.0 years
1 - 5 Lacs
Gurugram, Bengaluru
Hybrid
Role & responsibilities:
- Data Analysis & Insights: Analyze and transform large datasets to identify actionable insights into customer experience touchpoints. Use statistical techniques and tools to improve success metrics and answer business questions.
- Reporting & Visualization: Create dashboards, reports, and visualizations to communicate analytical findings effectively. Present insights to senior leaders using data storytelling and actionable recommendations.
- Consultation & Collaboration.
Preferred candidate profile: 4-6 years of experience in analytics, statistics, or informatics.
Technical Skills:
- Proficiency in SQL, Python, and data visualization tools (e.g., Power BI).
- Experience with Databricks and handling large datasets.
- Bonus: familiarity with Azure Analysis Services, NLP, or machine learning techniques.
Behavioral Skills: Strong analytical mindset with excellent problem-solving and storytelling abilities.
Posted 1 month ago
12.0 - 20.0 years
22 - 37 Lacs
Bengaluru
Hybrid
12+ years of experience in Data Architecture. Strong in Azure Data Services & Databricks, including Delta Lake & Unity Catalog. Experience in Azure Synapse, Purview, ADF, DBT, Apache Spark, DWH, data lakes, NoSQL, and OLTP. Notice period: immediate. Contact: sachin@assertivebs.com
Posted 1 month ago
5.0 - 10.0 years
9 - 19 Lacs
Bengaluru
Remote
5+ years of experience with Python, PySpark, SQL, and SparkSQL.
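For context on what this stack looks like in practice, here is a minimal sketch showing the same aggregation expressed both through the PySpark DataFrame API and through SparkSQL; the tiny in-memory dataset and column names are invented for illustration.

```python
# The same aggregation two ways: PySpark DataFrame API vs. SparkSQL.
# Data and column names are placeholders, purely for illustration.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("pyspark_vs_sparksql").getOrCreate()
df = spark.createDataFrame(
    [("north", 10), ("south", 5), ("north", 7)], ["region", "sales"]
)

# DataFrame API version.
df.groupBy("region").agg(F.sum("sales").alias("total")).show()

# SparkSQL version over a temporary view of the same data.
df.createOrReplaceTempView("sales")
spark.sql("SELECT region, SUM(sales) AS total FROM sales GROUP BY region").show()
```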
Posted 1 month ago
5.0 - 10.0 years
20 - 35 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Location: Bangalore, Hyderabad, Chennai
Notice Period: Immediate to 20 days
Experience: 5+ years (relevant: 5+ years)
Skills: Data Engineering, Azure, Python, Pandas, SQL, PySpark, Databricks, data pipelines, Synapse
Posted 1 month ago
3.0 - 8.0 years
20 - 30 Lacs
Chennai
Hybrid
Job Title: Senior Data Engineer - Data Products
Location: Chennai, India
Open Roles: 2
Mode: Hybrid
About the Role
Are you a hands-on data engineer who thrives on solving complex data challenges and building modern cloud-native solutions? We're looking for two experienced Senior Data Engineers to join our growing Data Engineering team. This is an exciting opportunity to work on cutting-edge data platform initiatives that power advanced analytics, AI solutions, and digital transformation across a global enterprise. In this role, you'll help design and build reusable, scalable, and secure data pipelines on a multi-cloud infrastructure, while collaborating with cross-functional teams in a highly agile environment.
What You'll Do
- Design and build robust data pipelines and ETL frameworks using modern tools and cloud platforms.
- Implement lakehouse architecture (Bronze/Silver/Gold layers) and support data product publishing via Unity Catalog.
- Work with structured and unstructured enterprise data, including ERP, CRM, and product data systems.
- Optimize pipeline performance, reliability, and security across AWS and Azure environments.
- Automate infrastructure using IaC tools like Terraform and AWS CDK.
- Collaborate closely with data scientists, analysts, and platform teams to deliver actionable data products.
- Participate in agile ceremonies, conduct code reviews, and contribute to team knowledge sharing.
- Ensure compliance with data privacy, cybersecurity, and governance policies.
What You Bring
- 3+ years of hands-on experience in data engineering roles.
- Strong command of SQL and Python; experience with Scala is a plus.
- Proficiency in cloud platforms (AWS, Azure), Databricks, DBT, Airflow, and version control tools like GitLab.
- Hands-on experience implementing lakehouse architectures and multi-hop data flows using Delta Lake.
- Background in working with enterprise data systems like SAP, Salesforce, and other business-critical platforms.
- Familiarity with DevOps, DataOps, and agile delivery methods (Jira, Confluence).
- Strong understanding of data security, privacy compliance, and production-grade pipeline management.
- Excellent communication skills and ability to work in global, multicultural teams.
Why Join Us?
- Opportunity to work with modern data technologies in a complex, enterprise-scale environment.
- Be part of a collaborative, forward-thinking team that values innovation and continuous learning.
- Hybrid work model that offers both flexibility and team engagement.
- A role where you can make a real impact by contributing to digital transformation and data-driven decision-making.
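As a rough illustration of the lakehouse multi-hop pattern this posting describes, the sketch below promotes raw Bronze records to a cleaned Silver Delta table in PySpark. It assumes a Delta-enabled cluster (e.g., Databricks); the table paths, column names, and cleaning rules are hypothetical, and real pipelines would follow the team's Unity Catalog conventions.

```python
# Minimal Bronze -> Silver hop with PySpark + Delta Lake (illustrative only).
# Paths and columns are made up; real names would come from Unity Catalog.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze_to_silver").getOrCreate()

# Bronze: raw ingested records, kept as-is.
bronze = spark.read.format("delta").load("/lake/bronze/erp_orders")

# Silver: deduplicated, typed, and filtered for downstream data products.
silver = (
    bronze
    .dropDuplicates(["order_id"])
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .filter(F.col("order_id").isNotNull())
)

silver.write.format("delta").mode("overwrite").save("/lake/silver/erp_orders")
```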
Posted 1 month ago
4.0 - 7.0 years
13 - 17 Lacs
Pune
Hybrid
Role: Performance Testing Specialist - Databricks Pipelines
Job Seniority: Advanced (4-6 years) or Experienced (3-4 years)
Location: Magarpatta City, Pune
Unit: Amdocs Data and Intelligence
Mandatory Skills (all skills must appear in the resume under roles and responsibilities):
- Strong understanding of Databricks, Apache Spark, and performance tuning techniques for distributed data processing systems.
- Hands-on experience in Spark (PySpark/Scala) performance profiling, partitioning strategies, and job parallelization.
- 2+ years of experience in performance testing and load simulation of data pipelines.
- Solid skills in SQL and Snowflake, and in analyzing performance via query plans and optimization hints.
- Familiarity with Azure Databricks, Azure Monitor, Log Analytics, or similar observability tools.
- Proficient in scripting (Python/Shell) for test automation and pipeline instrumentation.
- Experience with DevOps tools such as Azure DevOps, GitHub Actions, or Jenkins for automated testing.
- Comfortable working in Unix/Linux environments and writing shell scripts for monitoring and debugging.
- Excellent communication skills.
Notice Period: Only candidates serving their notice period who can join in June (15 days to immediate).
This is a C2H role. Interested candidates, share your resume at dipti.bhaisare@in.experis.com
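For flavor, here is a hypothetical sketch of the kind of partitioning experiment this role involves: timing the same aggregation under different shuffle-partition settings. The dataset and settings are invented; real profiling would lean on the Spark UI, stage metrics, and representative data volumes rather than wall-clock timing alone.

```python
# Illustrative Spark performance probe: compare wall time of one
# aggregation across shuffle-partition settings. A sketch only; real
# tuning would use the Spark UI and cluster metrics.
import time
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("partition_probe").getOrCreate()

# Synthetic workload: 50M rows hashed into 1,000 buckets.
df = spark.range(0, 50_000_000).withColumn("bucket", F.col("id") % 1000)

for shuffle_partitions in (8, 64, 200):
    spark.conf.set("spark.sql.shuffle.partitions", shuffle_partitions)
    start = time.time()
    df.groupBy("bucket").agg(F.count("*").alias("n")).collect()
    print(f"{shuffle_partitions=} took {time.time() - start:.1f}s")
```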
Posted 1 month ago
2.0 - 4.0 years
12 - 15 Lacs
Navi Mumbai
Work from Office
Defines, designs, develops, and tests software components/applications using Microsoft Azure (Databricks, Data Factory, Data Lake Storage, Logic Apps, Azure Key Vault, ADLS). Strong SQL skills, structured and unstructured datasets, data modeling.
Required Candidate profile
Must have: Databricks, Python, SQL, PySpark; Big Data ecosystem; Spark ecosystem; Azure (ADF, ADB, Logic Apps, Azure SQL Database, Azure Key Vault, ADLS, Synapse); AWS; data modeling; ETL methodology.
Posted 1 month ago
5.0 - 10.0 years
5 - 15 Lacs
Chennai
Work from Office
About the Role
We are seeking a highly skilled Senior Azure Data Solutions Architect to design and implement scalable, secure, and efficient data solutions supporting enterprise-wide analytics and business intelligence initiatives. You will lead the architecture of modern data platforms, drive cloud migration, and collaborate with cross-functional teams to deliver robust Azure-based solutions.
Key Responsibilities
- Architect and implement end-to-end data solutions using Azure services (Data Factory, Databricks, Data Lake, Synapse, Cosmos DB).
- Design robust and scalable data models, including relational, dimensional, and NoSQL schemas.
- Develop and optimize ETL/ELT pipelines and data lakes using Azure Data Factory, Databricks, and open formats such as Delta and Iceberg.
- Integrate data governance, quality, and security best practices into all architecture designs.
- Support analytics and machine learning initiatives through structured data pipelines and platforms.
- Collaborate with data engineers, analysts, data scientists, and business stakeholders to align solutions with business needs.
- Drive CI/CD integration with Databricks using Azure DevOps and tools like DBT.
- Monitor system performance, troubleshoot issues, and optimize data infrastructure for efficiency and reliability.
- Stay current with Azure platform advancements and recommend improvements.
Required Skills & Experience
- Extensive hands-on experience with Azure services: Data Factory, Databricks, Data Lake, Azure SQL, Cosmos DB, Synapse.
- Expertise in data modeling and design (relational, dimensional, NoSQL).
- Proven experience with ETL/ELT processes, data lakes, and modern lakehouse architectures.
- Proficiency in Python, SQL, Scala, and/or Java.
- Strong knowledge of data governance, security, and compliance frameworks.
- Experience with CI/CD, Azure DevOps, and infrastructure as code (Terraform or ARM templates).
- Familiarity with BI and analytics tools such as Power BI or Tableau.
- Excellent communication, collaboration, and stakeholder management skills.
- Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field.
Preferred Qualifications
- Experience in regulated industries (finance, healthcare, etc.).
- Familiarity with data cataloging, metadata management, and machine learning integration.
- Leadership experience guiding teams and presenting architectural strategies to leadership.
Why Join Us?
- Work on cutting-edge cloud data platforms in a collaborative, innovative environment.
- Lead strategic data initiatives that impact enterprise-wide decision-making.
- Competitive compensation and opportunities for professional growth.
Posted 1 month ago
8.0 - 12.0 years
30 - 35 Lacs
Bengaluru
Work from Office
Good-to-have skills: cloud, SQL, data analysis.
Location: Pune - Kharadi - WFO, 3 days/week.
Job Description: We are seeking a highly skilled and experienced Python Lead to join our team. The ideal candidate will have strong expertise in Python coding and development, along with good-to-have skills in cloud technologies, SQL, and data analysis.
Key Responsibilities:
- Lead the development of high-quality, scalable, and robust Python applications.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Ensure the performance, quality, and responsiveness of applications.
- Develop RESTful applications using frameworks like Flask, Django, or FastAPI.
- Utilize Databricks, PySpark SQL, and strong data analysis skills to drive data solutions.
- Implement and manage modern data solutions using Azure Data Factory, Data Lake, and Databricks.
Mandatory Skills:
- Proven experience with cloud platforms (e.g., AWS).
- Strong proficiency in Python, PySpark, and R, and familiarity with additional programming languages such as C++, Rust, or Java.
- Expertise in designing ETL architectures for batch and streaming processes, database technologies (OLTP/OLAP), and SQL.
- Experience with Apache Spark and multi-cloud platforms (AWS, GCP, Azure).
- Knowledge of data governance and GxP data contexts; familiarity with the Pharma value chain is a plus.
Good-to-Have Skills:
- Experience with modern data solutions via Azure.
- Knowledge of principles summarized in the Microsoft Cloud Adoption Framework.
- Additional expertise in SQL and data analysis.
Educational Qualifications: Bachelor's/Master's degree or equivalent with a focus on software engineering.
If you are a passionate Python developer with a knack for cloud technologies and data analysis, we would love to hear from you. Join us in driving innovation and building cutting-edge solutions!
Posted 1 month ago
5.0 - 7.0 years
25 - 30 Lacs
Bengaluru
Work from Office
Job Title: Senior Data Engineer / Technical Lead
Location: Bangalore
Employment Type: Full-time
Role Summary
We are seeking a highly skilled and motivated Senior Data Engineer/Technical Lead to take ownership of the end-to-end delivery of a key project involving data lake transitions, data warehouse maintenance, and enhancement initiatives. The ideal candidate will bring strong technical leadership, excellent communication skills, and hands-on expertise with modern data engineering tools and platforms. Experience in Databricks and JIRA is highly desirable. Knowledge of the supply chain and finance domains is a plus, or a willingness to quickly ramp up in these areas is expected.
Key Responsibilities
- Delivery Management: Lead and manage data lake transition initiatives under the Gold framework. Oversee delivery of enhancements and defect fixes related to the enterprise data warehouse.
- Technical Leadership: Design and develop efficient, scalable data pipelines using Python, PySpark, and SQL. Ensure adherence to coding standards, performance benchmarks, and data quality goals. Conduct performance tuning and infrastructure optimization for data solutions. Provide code reviews, mentorship, and technical guidance to the engineering team.
- Collaboration & Stakeholder Engagement: Collaborate with business stakeholders (particularly the Laboratory Products team) to gather, interpret, and refine requirements. Communicate technical solutions and project progress clearly to both technical and non-technical audiences.
- Tooling and Technology Use: Leverage tools such as Databricks, Informatica, AWS Glue, Google DataProc, and Airflow for ETL and data integration. Use JIRA to manage project workflows, track defects, and report progress.
- Documentation and Best Practices: Create and review documentation including architecture, design, testing, and deployment artifacts. Define and promote reusable templates, checklists, and best practices for data engineering tasks.
- Domain Adaptation: Apply or gain knowledge in the supply chain and finance domains to enhance project outcomes and align with business needs.
Skills and Qualifications
- Technical Proficiency: Strong hands-on experience in Python, PySpark, and SQL. Expertise with ETL tools such as Informatica, AWS Glue, Databricks, and Google Cloud DataProc. Deep understanding of data warehousing solutions (e.g., Snowflake, BigQuery, Delta Lake, lakehouse architectures). Familiarity with performance tuning, cost optimization, and data modeling best practices.
- Platform & Tools: Proficient in working with cloud platforms like AWS, Azure, or Google Cloud. Experience in version control and configuration management practices. Working knowledge of JIRA and Agile methodologies.
- Certifications (preferred but not required): Certifications in cloud technologies, ETL platforms, or a relevant domain (e.g., AWS Data Engineer, Databricks Data Engineer, supply chain certification).
Expected Outcomes
- Timely and high-quality delivery of data engineering solutions.
- Reduction in production defects and improved pipeline performance.
- Increased team efficiency through reuse of components and automation.
- Positive stakeholder feedback and high team engagement.
- Consistent adherence to SLAs, security policies, and compliance guidelines.
Performance Metrics
- Adherence to project timelines and engineering standards.
- Reduction in post-release defects and production issues.
- Improvement in data pipeline efficiency and resource utilization.
- Resolution time for pipeline failures and data issues.
- Completion of required certifications and training.
Preferred Background
- Background or exposure to supply chain or finance domains.
- Willingness to work during morning US East hours.
- Ability to work independently and drive initiatives with minimal oversight.
Required Skills: Databricks, Data Warehousing, ETL, SQL
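Since the tooling list above names Airflow for orchestration, here is a minimal, hypothetical sketch of what scheduling one daily ETL step in an Airflow 2.x DAG can look like; the dag_id, task name, and load logic are placeholders, not this team's actual pipeline.

```python
# Minimal, hypothetical Airflow DAG: one daily-scheduled ETL task.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def run_daily_load(**context):
    # Placeholder for the real extract/transform/load call.
    print("running warehouse load for", context["ds"])

with DAG(
    dag_id="daily_warehouse_load",   # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    load = PythonOperator(task_id="daily_load", python_callable=run_daily_load)
```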
Posted 1 month ago
5.0 - 7.0 years
7 - 9 Lacs
Pune
Work from Office
New Opportunity: Full Stack Engineer
Location: Pune (Onsite)
Company: Apptware Solutions
Experience: 4+ years
We're looking for a skilled Full Stack Engineer to join our team. If you have experience in building scalable applications and working with modern technologies, this role is for you.
Role & Responsibilities
- Develop product features to help customers easily transform data.
- Design, implement, deploy, and support client-side and server-side architectures, including web applications, CLI, and SDKs.
Minimum Requirements
- 4+ years of experience as a Full Stack Developer or in a similar role.
- Hands-on experience in a distributed engineering role with direct operational responsibility (on-call experience preferred).
- Proficiency in at least one back-end language (Node.js, TypeScript, Python, or Go).
- Front-end development experience with Angular or React, HTML, CSS.
- Strong understanding of web applications, backend APIs, CI/CD pipelines, and testing frameworks.
- Familiarity with NoSQL databases (e.g., DynamoDB) and AWS services (Lambda, API Gateway, Cognito, etc.).
- Bachelor's degree in Computer Science, Engineering, Math, or equivalent experience.
- Strong written and verbal communication skills.
Preferred Skills
- Experience with AWS Glue, Spark, or Athena.
- Strong understanding of SQL and data engineering best practices.
- Exposure to analytical EDWs (Snowflake, Databricks, BigQuery, Cloudera, Teradata).
- Experience in B2B applications, SaaS offerings, or startups is a plus.
(ref:hirist.tech)
Posted 1 month ago
2.0 - 5.0 years
3 - 12 Lacs
Kolkata, Pune, Bengaluru
Work from Office
Company Name: Tech Mahindra
Experience: 2-5 Years
Location: Bangalore/Hyderabad
Interview Mode: Virtual
Interview Rounds: 2-3
Notice Period: Immediate to 30 days
Roles and Responsibilities:
- Design, develop, and maintain large-scale data pipelines using Azure Data Factory (ADF) to extract, transform, and load data from various sources into Azure Databricks.
- Collaborate with cross-functional teams to understand business requirements and design scalable solutions for big data processing using PySpark on Azure Data Lake Storage.
- Develop complex SQL queries to optimize database performance and troubleshoot issues in real time.
- Ensure high availability of the system by implementing monitoring tools and performing regular maintenance tasks.
Job Requirements:
- 2-5 years of experience in designing and developing large-scale data systems on the Microsoft Azure platform.
- Strong understanding of Azure Data Factory (ADF), Azure Databricks, and Azure Data Lake Storage concepts.
- Proficiency in writing efficient Python code using PySpark for big data processing.
Posted 1 month ago
5.0 - 7.0 years
5 - 16 Lacs
Hyderabad, Bengaluru
Work from Office
Company Name: Tech Mahindra
Experience: 5-7 Years
Location: Bangalore/Hyderabad
Interview Mode: Virtual
Interview Rounds: 2-3
Notice Period: Immediate to 30 days
Roles and Responsibilities:
- Design, develop, test, deploy, and maintain large-scale data pipelines using Azure Data Factory (ADF) to integrate various data sources into a centralized data lake.
- Collaborate with cross-functional teams to gather requirements for data processing needs and design solutions that meet business objectives.
- Develop complex SQL queries to extract insights from large datasets stored in Azure Databricks or other relational databases.
- Troubleshoot issues related to ADF pipeline failures, data quality problems, and performance optimization.
Job Requirements:
- 5-7 years of experience in designing and developing large-scale data pipelines using ADF.
- Strong understanding of Azure Databricks, including its architecture, features, and best practices.
- Proficiency in writing complex SQL queries for querying large datasets stored in relational databases.
- Experience working with PySpark on AWS EMR clusters.
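As a hedged illustration of the "complex SQL queries to extract insights" the posting mentions, the sketch below runs a window-function query over a Databricks-style table via SparkSQL, ranking each customer's most recent order. The parquet path, table, and column names are invented for the example.

```python
# Illustrative "complex SQL" over Spark: ROW_NUMBER() window function
# picking each customer's latest order. Names and paths are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql_insights").getOrCreate()
spark.read.parquet("/data/orders").createOrReplaceTempView("orders")

latest = spark.sql("""
    SELECT customer_id, order_id, amount
    FROM (
        SELECT *,
               ROW_NUMBER() OVER (
                   PARTITION BY customer_id ORDER BY order_ts DESC
               ) AS rn
        FROM orders
    ) AS t
    WHERE rn = 1
""")
latest.show()
```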
Posted 1 month ago
5.0 - 8.0 years
5 - 15 Lacs
Kochi
Work from Office
Job Summary: We are looking for a seasoned Data Engineer with 5-8 years of experience, specializing in Microsoft Fabric. The ideal candidate will play a key role in designing, building, and optimizing scalable data pipelines and models. You will work closely with analytics and business teams to drive data integration, ensure quality, and support data-driven decision-making in a modern cloud environment.
Key Responsibilities:
- Design, develop, and optimize end-to-end data pipelines using Microsoft Fabric (Data Factory, Dataflows Gen2).
- Create and maintain data models, semantic models, and data marts for analytical and reporting purposes.
- Develop and manage SQL-based ETL processes, integrating various structured and unstructured data sources.
- Collaborate with BI developers and analysts to develop Power BI datasets, dashboards, and reports.
- Implement robust data integration solutions across diverse platforms and sources (on-premises, cloud).
- Ensure data integrity, quality, and governance through automated validation and error-handling mechanisms.
- Work with business stakeholders to understand data requirements and translate them into technical specifications.
- Optimize data workflows for performance and cost-efficiency in a cloud-first architecture.
- Provide mentorship and technical guidance to junior data engineers.
Required Skills:
- Strong hands-on experience with Microsoft Fabric, including Dataflows Gen2, Pipelines, and OneLake.
- Proficiency in Power BI, including building reports and dashboards and working with semantic models.
- Solid understanding of data modeling techniques: star schema, snowflake, normalization/denormalization.
- Deep experience with SQL, stored procedures, and query optimization.
- Experience in data integration from diverse sources such as APIs, flat files, databases, and streaming data.
- Knowledge of data governance, lineage, and data catalog capabilities within the Microsoft ecosystem.
Posted 1 month ago
10.0 - 15.0 years
15 - 30 Lacs
Pallavaram
Work from Office
Data Engineering Lead
Company Name: Blackstraw.ai
Office Location: Chennai (Work from Office)
Job Type: Full-time
Experience: 10-15 Years
Candidates who can join immediately will be preferred.
Job Description: As a lead data engineer you will oversee data architecture, ETL processes, and analytics pipelines, ensuring efficiency, scalability, and quality.
Key Responsibilities:
- Work with clients to understand their data; based on that understanding, build the data structures and pipelines.
- Work on the application end to end, collaborating with UI and other development teams.
- Work with various cloud providers such as Azure & AWS.
- Engineer data using the Hadoop/Spark ecosystem.
- Design, build, optimize, and support new and existing data pipelines.
- Orchestrate jobs using tools such as Oozie, Airflow, etc.
- Develop programs for cleaning and processing data.
- Build data pipelines to migrate and load the data into HDFS, either on-prem or in the cloud.
- Develop data ingestion/processing/integration pipelines effectively.
- Create Hive data structures and metadata, and load the data into data lakes / big data warehouse environments.
- Optimize (performance-tune) data pipelines to minimize cost.
- Keep code version control and the git repository up to date.
- Explain the data pipeline to internal and external stakeholders.
- Build and maintain CI/CD for the data pipelines.
- Manage unit testing of all data pipelines.
Tech Stack:
- Minimum of 5+ years of working experience with Spark and Hadoop ecosystems.
- Minimum of 4+ years of working experience designing data streaming pipelines.
- Expert in Python, Scala, or Java.
- Experience in data ingestion and integration into a data lake using Hadoop ecosystem tools such as Sqoop, Spark, SQL, Hive, Airflow, etc.
- Experience optimizing (performance tuning) data pipelines.
- Minimum of 3+ years of experience with NoSQL and Spark Streaming.
- Knowledge of Kubernetes and Docker is a plus.
- Experience with cloud services, either Azure or AWS.
- Experience with on-prem distributions such as Cloudera/HortonWorks/MapR.
- Basic understanding of CI/CD pipelines.
- Basic knowledge of the Linux environment and commands.
Preferred Qualifications:
- Bachelor's degree in computer science or a related field.
- Proven experience with big data ecosystem tools such as Sqoop, Spark, SQL, API, Hive, Oozie, Airflow, etc.
- Solid experience in all phases of the SDLC with 10+ years of experience (plan, design, develop, test, release, maintain, and support).
- Hands-on experience using Azure's data engineering stack.
- Implemented projects using programming languages such as Scala or Python.
- Working experience with complex SQL data-merging techniques such as windowing functions.
- Hands-on experience with on-prem distribution tools such as Cloudera/HortonWorks/MapR.
- Excellent communication, presentation, and problem-solving skills.
Key Traits:
- Excellent communication skills.
- Self-motivated and willing to work as part of a team.
- Able to collaborate and coordinate with onshore and offshore teams.
- A problem solver, proactive in tackling the challenges that come their way.
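To give a concrete flavor of the Sqoop-style ingestion step this posting describes, here is a minimal sketch using Spark's JDBC reader to pull a relational table and land it as a partitioned Hive table in the lake. It assumes the appropriate JDBC driver is on the cluster classpath; the connection string, credentials, and table names are all placeholders.

```python
# Sketch of a JDBC -> data lake ingestion step (Sqoop-style pull into a
# Hive table, here via Spark's JDBC source). All names are hypothetical.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("jdbc_ingest")
    .enableHiveSupport()      # needed to write managed Hive tables
    .getOrCreate()
)

orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:mysql://source-db:3306/sales")  # placeholder source
    .option("dbtable", "orders")
    .option("user", "etl_user")
    .option("password", "***")
    .load()
)

# Land the pull as a partitioned Hive table in the lake.
orders.write.mode("append").partitionBy("order_date").saveAsTable("lake.orders")
```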
Posted 1 month ago
12.0 - 18.0 years
50 - 80 Lacs
Hyderabad
Work from Office
Executive Director - Data Management
Company Overview
Accordion is a global private equity-focused financial consulting firm specializing in driving value creation through services rooted in Data & Analytics and powered by technology. Accordion works at the intersection of Private Equity sponsors and portfolio companies' management teams across every stage of the investment lifecycle. We provide hands-on, execution-oriented support, driving value through the office of the CFO by building data and analytics capabilities and identifying and implementing strategic work, rooted in data and analytics. Accordion is headquartered in New York City with 10 offices worldwide. Join us and make your mark on our company.
Data & Analytics (Accordion | Data & Analytics)
Accordion's Data & Analytics (D&A) practice in India delivers cutting-edge, intelligent solutions to a global clientele, leveraging a blend of domain knowledge, sophisticated technology tools, and deep analytics capabilities to tackle complex business challenges. We partner with Private Equity clients and their Portfolio Companies across diverse sectors, including Retail, CPG, Healthcare, Media & Entertainment, Technology, and Logistics. D&A team members deliver data and analytical solutions designed to streamline reporting capabilities and enhance business insights across vast and complex data sets ranging from Sales, Operations, Marketing, Pricing, Customer Strategies, and more. Working at Accordion in India means joining 800+ analytics, data science, finance, and technology experts in a high-growth, agile, and entrepreneurial environment to transform how portfolio companies drive value. It also means making your mark on Accordion's future by embracing a culture rooted in collaboration and a firm-wide commitment to building something great, together. Join us and experience a better way to work!
Location: Hyderabad, Telangana
Role Overview: Accordion is looking for an experienced Enterprise Data Architect to lead the strategy, design, and implementation of data architectures across all its data management projects. He/she will be part of the technology team and will possess in-depth knowledge of distinct types of data architectures and frameworks, including distributed large-scale implementations. He/she will collaborate closely with the client partnership team to design and recommend robust and scalable data architecture to clients and work with engineering teams to implement the same in on-premises or cloud-based environments. He/she will be a data evangelist and will conduct knowledge-sharing sessions in the company on various data management topics to spread awareness of data architecture principles and improve the overall capabilities of the team. The Enterprise Data Architect will also conduct design review sessions to validate/verify implementations and emphasize and implement best practices, followed by exhaustive documentation in line with the design philosophy. He/she will have excellent communication skills and will possess industry-standard certification in the data architecture areas.
What You will do:
- Partner with clients to understand their business and create comprehensive requirements to enable development of optimal data architecture.
- Translate business requirements into logical and physical design of databases, data warehouses, and data streams.
- Analyze, plan, and define the data architecture framework, including security, reference data, metadata, and master data.
- Create elaborate data management processes and procedures and consult with Senior Management to share the knowledge.
- Collaborate with client and internal project teams to devise and implement data strategies, build models, and assess shareholder needs and goals.
- Develop application programming interfaces (APIs) to extract and store data in the most optimal manner.
- Align business requirements with technical architecture and collaborate with the technical teams for implementation and tracking purposes.
- Research and track the latest developments in the field to maintain expertise about the latest best practices and techniques within the industry.
Ideally, you have:
- An undergraduate degree (B.E/B.Tech.); tier-1/tier-2 colleges are preferred.
- 12+ years of experience in a related field.
- Experience in designing logical & physical data architectures in various RDBMS (SQL Server, Oracle, MySQL, etc.), non-RDBMS (MongoDB, Cassandra, etc.), and data warehouse (Azure Synapse, AWS Redshift, Google BigQuery, Snowflake, etc.) environments.
- Deep knowledge of and implementation experience with modern data warehouse principles using Kimball & Inmon models or Data Vault, including their application based on data quality requirements.
- In-depth knowledge of any one cloud-based infrastructure (AWS, Azure, Google Cloud) for solution design, development, and delivery is mandatory.
- Proven ability to take initiative, be innovative, and drive ideas through to completion.
- An analytical mind with a strong problem-solving attitude.
- Excellent communication skills, both written and verbal.
- Any Enterprise Data Architect certification will be an added advantage.
Why Explore a Career at Accordion:
- High growth environment: Semi-annual performance management and promotion cycles coupled with a strong meritocratic culture enable a fast track to leadership responsibility.
- Cross-domain exposure: Interesting and challenging work streams across industries and domains that always keep you excited, motivated, and on your toes.
- Entrepreneurial environment: Intellectual freedom to make decisions and own them. We expect you to spread your wings and assume larger responsibilities.
- Fun culture and peer group: Non-bureaucratic and fun working environment; a strong peer environment that will challenge you and accelerate your learning curve.
Other benefits for full-time employees:
- Health and wellness programs that include employee health insurance covering immediate family members and parents, term life insurance for employees, free health camps for employees, discounted health services (including vision and dental) for employees and family members, free doctor's consultations, counsellors, etc.
- Corporate meal card options for ease of use and tax benefits.
- Team lunches, company-sponsored team outings, and celebrations.
- Robust leave policy to support work-life balance. Specially designed leave structure to support women employees for maternity and related requests.
- Reward and recognition platform to celebrate professional and personal milestones.
- A positive & transparent work environment, including various employee engagement and employee benefit initiatives to support personal and professional learning and development.
Posted 1 month ago
3.0 - 8.0 years
10 - 20 Lacs
Chennai
Hybrid
Roles & Responsibilities:
- We are looking for a strong Senior Data Engineer who will be majorly responsible for designing, building, and maintaining ETL/ELT pipelines.
- Integrate data from multiple sources or vendors to provide holistic insights from data.
- Build and manage Data Lake and Data Warehouse solutions, design data models, create ETL processes, implement data quality mechanisms, etc.
- Perform EDA (exploratory data analysis) required to troubleshoot data-related issues and assist in the resolution of data issues.
- Experience in client interaction, oral and written, is expected.
- Experience in mentoring juniors and providing required guidance to the team.
Required Technical Skills:
- Extensive experience in languages such as Python, PySpark, and SQL (basic and advanced).
- Strong experience in data warehousing, ETL, data modeling, building ETL pipelines, and data architecture.
- Must be proficient in Redshift, Azure Data Factory, Snowflake, etc.
- Hands-on experience with cloud services like AWS S3, Glue, Lambda, CloudWatch, Athena, etc.
- Good to have: knowledge of Dataiku and big data technologies; basic knowledge of BI tools like Power BI, Tableau, etc. will be a plus.
- Sound knowledge of data management, data operations, data quality, and data governance.
- Knowledge of SFDC and Waterfall/Agile methodology.
- Strong knowledge of the pharma domain / life sciences commercial data operations.
Qualifications:
- Bachelor's or Master's in Engineering/MCA or an equivalent degree.
- 4-6 years of relevant industry experience as a Data Engineer.
- Experience working on pharma syndicated data such as IQVIA, Veeva, Symphony; claims, CRM, sales, open data, etc.
- High motivation, good work ethic, maturity, self-organization, and personal initiative.
- Ability to work collaboratively and provide support to the team.
- Excellent written and verbal communication skills.
- Strong analytical and problem-solving skills.
Location: Chennai, India
Posted 1 month ago
1.0 - 5.0 years
7 - 11 Lacs
Bengaluru
Work from Office
Role: Data Scientist
Location: Bangalore
Timings: Full time (as per company timings)
Notice Period: Immediate joiners only
Experience: 5 years
We are looking for a highly motivated and skilled Data Scientist to join our growing team. The ideal candidate should possess a robust background in data science, machine learning, and statistical analysis, with a passion for uncovering insights from complex datasets. This role demands hands-on experience in Python and various ML libraries, strong business acumen, and effective communication skills for translating data insights into strategic decisions.
Key Responsibilities
- Develop, implement, and optimize machine learning models for predictive analytics and business decision-making.
- Work with both structured and unstructured data to extract valuable insights and patterns.
- Leverage Python and standard ML libraries (NumPy, Pandas, SciPy, Scikit-Learn, TensorFlow, PyTorch, Keras, Matplotlib) for data modeling and analysis.
- Design and build data pipelines for streamlined data processing and integration.
- Conduct Exploratory Data Analysis (EDA) to identify trends, anomalies, and business opportunities.
- Partner with cross-functional teams to embed data-driven strategies into core business operations.
- Create compelling data stories through visualization techniques to convey findings to non-technical stakeholders.
- Stay abreast of the latest ML/AI innovations and industry best practices.
Required Skills & Qualifications
- 5 years of proven experience in data science and machine learning.
- Proficient in Python and key data science libraries.
- Experience with ML frameworks such as TensorFlow, Keras, or PyTorch.
- Strong understanding of SQL and relational databases.
- Solid grounding in statistical analysis, hypothesis testing, and feature engineering.
- Familiarity with data visualization tools like Matplotlib, Seaborn, or Plotly.
- Demonstrated ability to work with large datasets and solve complex analytical problems.
- Excellent communication and data storytelling skills.
- Knowledge of Marketing Mix Modeling is a plus.
Preferred Skills
- Hands-on experience with cloud platforms like AWS, Azure, or GCP.
- Exposure to big data technologies such as Hadoop, Spark, or Databricks.
- Familiarity with NLP, computer vision, or deep learning.
- Understanding of A/B testing and experimental design methodologies.
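As a minimal sketch of the modeling loop this role describes (scikit-learn pipeline, train/test split, evaluation), consider the example below. The CSV file, feature set, and "churned" target are invented placeholders; a real project would add proper feature engineering and validation.

```python
# Minimal scikit-learn modeling sketch; dataset and target are placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

df = pd.read_csv("customers.csv")            # hypothetical input file
X, y = df.drop(columns=["churned"]), df["churned"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# Scale features, then fit a simple baseline classifier.
model = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```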
Posted 1 month ago
1.0 - 5.0 years
10 - 14 Lacs
Pune
Work from Office
Technical Project Manager
Company Overview: At Codvo, software and people transformations go hand in hand. We are a global empathy-led technology services company. Product innovation and mature software engineering are part of our core DNA. Respect, Fairness, Growth, Agility, and Inclusiveness are the core values that we aspire to live by each day. We continue to expand our digital strategy, design, architecture, and product management capabilities to offer expertise, outside-the-box thinking, and measurable results.
Responsibilities:
- Lead and manage end-to-end data and analytics projects, ensuring timely delivery and alignment with business objectives.
- Collaborate with cross-functional teams, including data scientists, analysts, engineers, and business stakeholders, to define project scope, goals, and deliverables.
- Develop detailed project plans, including timelines, milestones, resource allocation, and risk management strategies.
- Monitor project progress, identify potential issues, and implement corrective actions to ensure project success.
- Facilitate effective communication and collaboration among team members and stakeholders.
- Ensure data quality, integrity, and security throughout the project lifecycle.
- Stay updated with the latest trends and technologies in data and analytics to drive continuous improvement and innovation.
- Provide regular project updates and reports to senior management and stakeholders.
- Effective leadership, interpersonal, and communication skills; ability to stay calm and composed to deliver under pressure.
- Strategic thinkers with adequate cost control/management experience would be a plus.
- Strong knowledge of change, risk, and resource management is required.
- Thorough understanding of project/program management techniques and methods from initiation to closure.
- Working knowledge of program/project management tools like JIRA, Azure DevOps Boards, Basecamp, and MS Project.
- Excellent problem-solving ability, with escalation-handling experience.
Qualifications:
- Bachelor's degree in Computer Science, Information Technology, Data Science, or a related field; a Master's degree is a plus.
- Proven experience as a Technical Project Manager, preferably in data and analytics projects.
- Strong understanding of data management, analytics, and visualization tools and technologies.
- Excellent project management skills, including the ability to manage multiple projects simultaneously.
- Proficiency in project management software (e.g., JIRA, MS Project, ADO).
- Strong analytical and problem-solving skills.
- Excellent communication and interpersonal skills.
- Ability to work effectively in a fast-paced, dynamic environment.
Preferred Skills:
- Experience with big data technologies (e.g., Hadoop, Spark, Azure, Databricks).
- Knowledge of machine learning and artificial intelligence.
- Certification in project management (e.g., PMP, PRINCE2).
Work Location: Remote / Pune
Work timings: 2.30 pm - 11.30 pm
Posted 1 month ago
5.0 - 9.0 years
13 - 17 Lacs
Bengaluru
Work from Office
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.
Primary Responsibilities:
- As a Tech Lead, work as an individual contributor as well as a people manager
- Work on data pipelines and databases
- Work on data-intensive applications or systems
- Lead the team, with the soft skills needed to do so
- Review code and designs, and mentor team members
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so
Required Qualifications:
- Graduate degree or equivalent experience
- Experience working on Databricks
- Well versed with Apache Spark, Azure, SQL, PySpark, Airflow, Hadoop, UNIX, etc.
- Proven ability to work on a big data technology stack in the cloud and on-prem
- Proven ability to communicate effectively with the team
- Proven ability to lead and mentor the team
- Proven soft skills for people management
Posted 1 month ago
3.0 - 7.0 years
10 - 14 Lacs
Hyderabad
Work from Office
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.
Primary Responsibilities:
- Lead the migration of ETLs from the on-premises SQL Server-based data warehouse to Azure Cloud, Databricks, and Snowflake
- Design, develop, and implement data platform solutions using Azure Data Factory (ADF), Self-hosted Integration Runtime (SHIR), Logic Apps, Azure Data Lake Storage Gen2 (ADLS Gen2), Blob Storage, and Databricks (PySpark)
- Review and analyze existing on-premises ETL processes developed in SSIS and T-SQL
- Implement DevOps practices and CI/CD pipelines using GitActions
- Collaborate with cross-functional teams to ensure seamless integration and data flow
- Optimize and troubleshoot data pipelines and workflows
- Ensure data security and compliance with industry standards
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so
Required Qualifications:
- 6+ years of experience as a Cloud Data Engineer
- Hands-on experience with Azure Cloud data tools (ADF, SHIR, Logic Apps, ADLS Gen2, Blob Storage) and Databricks
- Solid experience in ETL development using on-premises databases and ETL technologies
- Experience with Python or other scripting languages for data processing
- Experience with Agile methodologies
- Proficiency in DevOps and CI/CD practices using GitActions
- Proven excellent problem-solving skills and ability to work independently
- Proven solid communication and collaboration skills
- Proven solid analytical skills and attention to detail
- Proven ability to adapt to new technologies and learn quickly
Preferred Qualifications:
- Certification in Azure or Databricks
- Experience with data modeling and database design
- Experience with development in Snowflake for data engineering and analytics workloads
- Knowledge of data governance and data quality best practices
- Familiarity with other cloud platforms (e.g., AWS, Google Cloud)
Posted 1 month ago
5.0 - 9.0 years
13 - 18 Lacs
Hyderabad
Work from Office
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.
We are seeking a highly skilled and experienced Technical Delivery Lead to join our team for a Cloud Data Modernization project. The successful candidate will be responsible for managing and leading the migration of an on-premises Enterprise Data Warehouse (SQL Server) to a modern cloud-based data platform utilizing Azure Cloud data tools and Snowflake. This platform will enable offshore (non-US) resources to build and develop Reporting, Analytics, and Data Science solutions.
Primary Responsibilities:
- Manage and lead the migration of the on-premises SQL Server Enterprise Data Warehouse to Azure Cloud and Snowflake
- Design, develop, and implement data platform solutions using Azure Data Factory (ADF), Self-hosted Integration Runtime (SHIR), Logic Apps, Azure Data Lake Storage Gen2 (ADLS Gen2), Blob Storage, Databricks, and Snowflake
- Manage and guide the development of cloud-native ETLs and data pipelines using modern technologies on Azure Cloud, Databricks, and Snowflake
- Implement and oversee DevOps practices and CI/CD pipelines using GitActions
- Collaborate with cross-functional teams to ensure seamless integration and data flow
- Optimize and troubleshoot data pipelines and workflows
- Ensure data security and compliance with industry standards
- Provide technical leadership and mentorship to the engineering team
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so
Required Qualifications:
- 8+ years of experience in a Cloud Data Engineering role, with 3+ years in a leadership or technical delivery role
- Hands-on experience with Azure Cloud data tools (ADF, SHIR, Logic Apps, ADLS Gen2, Blob Storage), Databricks, and Snowflake
- Experience with Python or other scripting languages for data processing
- Experience with Agile methodologies and project management tools
- Solid experience in developing cloud-native ETLs and data pipelines using modern technologies on Azure Cloud, Databricks, and Snowflake
- Proficiency in DevOps and CI/CD practices using GitActions
- Proven excellent problem-solving skills and ability to work independently
- Proven solid communication and collaboration skills; solid analytical skills and attention to detail
- Proven track record of successful project delivery in a cloud environment
Preferred Qualifications:
- Certification in Azure or Snowflake
- Experience working with automated ETL conversion tools used during cloud migrations (SnowConvert, BladeBridge, etc.)
- Experience with data modeling and database design
- Knowledge of data governance and data quality best practices
- Familiarity with other cloud platforms (e.g., AWS, Google Cloud)
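As a hedged sketch of one small piece of such a migration, the example below reconciles row counts between a source SQL Server table and its migrated Snowflake counterpart, using the widely available pyodbc and snowflake-connector-python packages. The DSN, credentials, account identifier, and table names are placeholders, and a real validation suite would compare checksums and samples, not just counts.

```python
# Migration sanity check (illustrative): compare row counts between the
# on-prem SQL Server source and the migrated Snowflake table.
# All connection details and table names below are placeholders.
import pyodbc
import snowflake.connector

SRC_TABLE = "dbo.fact_claims"   # hypothetical source table

# Source count from SQL Server via ODBC.
mssql = pyodbc.connect("DSN=onprem_dw;UID=etl;PWD=***")
src_count = mssql.cursor().execute(
    f"SELECT COUNT(*) FROM {SRC_TABLE}").fetchone()[0]

# Target count from Snowflake.
sf = snowflake.connector.connect(
    account="myorg-myaccount", user="ETL", password="***",
    warehouse="LOAD_WH", database="DW", schema="PUBLIC",
)
dst_count = sf.cursor().execute(
    "SELECT COUNT(*) FROM FACT_CLAIMS").fetchone()[0]

assert src_count == dst_count, f"row counts differ: {src_count} vs {dst_count}"
print("counts match:", src_count)
```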
Posted 1 month ago
4.0 - 7.0 years
10 - 14 Lacs
Hyderabad
Work from Office
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.
Primary Responsibilities:
- Data Pipeline Management: Oversee the design, deployment, and maintenance of data pipelines to ensure they are optimized and highly available
- Data Collection and Storage: Build and maintain systems for data collection, storage, and processing
- ETL Processes: Develop and manage ETL (Extract, Transform, Load) processes to convert raw data into usable formats
- Collaboration: Work closely with data analysts, data scientists, and other stakeholders to gather technical requirements and ensure data quality
- System Monitoring: Monitor existing metrics, analyze data, and identify opportunities for system and process improvements
- Data Governance: Ensure data compliance and security needs are met in system construction
- Mentorship: Oversee and mentor junior data engineers, ensuring proper execution of their duties
- Reporting: Develop queries for ad hoc business projects and ongoing reporting
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so
Required Qualifications:
- Bachelor's degree in engineering or equivalent experience
- Minimum 3-4 years of experience in SQL (joins, stored procedures, performance tuning), Azure, PySpark, Databricks, and the big data ecosystem
- Flexibility to work in different shift timings
- Flexibility to work as a DevOps Engineer
At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
Posted 1 month ago