10.0 years
15 - 17 Lacs
India
Remote
Note: This is a remote role with occasional office visits. Candidates from Mumbai or Pune will be preferred.

About The Company
A fast-growing enterprise technology consultancy operating at the intersection of cloud computing, big-data engineering and advanced analytics. The team builds high-throughput, real-time data platforms that power AI, BI and digital products for Fortune 500 clients across finance, retail and healthcare. By combining Databricks Lakehouse architecture with modern DevOps practices, they unlock insight at petabyte scale while meeting stringent security and performance SLAs.

Role & Responsibilities
- Architect end-to-end data pipelines (ingestion → transformation → consumption) using Databricks, Spark and cloud object storage.
- Design scalable data warehouses/marts that enable self-service analytics and ML workloads.
- Translate logical data models into physical schemas; own database design, partitioning and lifecycle management for cost-efficient performance.
- Implement, automate and monitor ETL/ELT workflows, ensuring reliability, observability and robust error handling.
- Tune Spark jobs and SQL queries, optimizing cluster configurations and indexing strategies to achieve sub-second response times.
- Provide production support and continuous improvement for existing data assets, championing best practices and mentoring peers.

Skills & Qualifications

Must-Have
- 6–10 years building production-grade data platforms, including 3+ years of hands-on Apache Spark/Databricks experience.
- Expert proficiency in PySpark, Python and advanced SQL, with a track record of performance-tuning distributed jobs.
- Demonstrated ability to model data warehouses/marts and orchestrate ETL/ELT pipelines with tools such as Airflow or dbt.
- Hands-on with at least one major cloud platform (AWS or Azure) and modern lakehouse / data-lake patterns.
- Strong problem-solving skills, DevOps mindset and commitment to code quality; comfortable mentoring fellow engineers.

Preferred
- Deep familiarity with the AWS analytics stack (Redshift, Glue, S3) or the broader Hadoop ecosystem.
- Bachelor's or Master's degree in Computer Science, Engineering or a related field.
- Experience building streaming pipelines (Kafka, Kinesis, Delta Live Tables) and real-time analytics solutions.
- Exposure to ML feature stores, MLOps workflows and data-governance/compliance frameworks.
- Relevant professional certifications (Databricks, AWS, Azure) or notable open-source contributions.

Benefits & Culture Highlights
- Remote-first & flexible hours with 25+ PTO days and comprehensive health cover.
- Annual training budget & certification sponsorship (Databricks, AWS, Azure) to fuel continuous learning.
- Inclusive, impact-focused culture where engineers shape the technical roadmap and mentor a vibrant data community.

Skills: data modeling, big data technologies, team leadership, agile methodologies, performance tuning, data, AWS, Airflow
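For readers unfamiliar with the ingestion → transformation → consumption pattern this role describes, the sketch below shows a minimal PySpark version of such a pipeline. It is illustrative only and not taken from the employer's codebase; the bucket path, column names, and table name are hypothetical.

```python
# Minimal sketch of an ingest -> transform -> consume pipeline in PySpark.
# Paths, columns, and table names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-pipeline").getOrCreate()

# Ingestion: read raw JSON events from cloud object storage
raw = spark.read.json("s3://example-bucket/raw/orders/")

# Transformation: fix types, drop malformed rows, derive a partition column
clean = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .dropna(subset=["order_id", "order_ts"])
       .dropDuplicates(["order_id"])
)

# Consumption: write a partitioned table that BI/ML workloads can query
# (assumes the target database already exists in the metastore)
(clean.write
      .mode("overwrite")
      .partitionBy("order_date")
      .format("parquet")
      .saveAsTable("analytics.orders"))
```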
Posted 1 day ago
0 years
10 - 12 Lacs
India
Remote
Note: This is a remote role with occasional office visits. Candidates from Mumbai or Pune will be preferred.

About The Company
Operating at the forefront of cloud analytics, big-data platform engineering, and enterprise AI, our teams design mission-critical data infrastructure for global clients across finance, retail, telecom, and emerging tech. We build distributed ingestion pipelines on Azure & Databricks, unlock real-time insights with Spark/Kafka, and automate delivery through modern DevOps so businesses can act on high-fidelity data, fast.

Role & Responsibilities
- Engineer robust data pipelines: build scalable batch & streaming workflows with Apache Spark, Kafka, and Azure Data Factory/Databricks.
- Implement Delta Lakehouse layers: design bronze-silver-gold medallion architecture to guarantee data quality and lineage.
- Automate CI/CD for ingestion: create Git-based workflows, containerized builds, and automated testing to ship reliable code.
- Craft clean, test-driven Python: develop modular PySpark/Pandas services, enforce SOLID principles, and maintain git-versioned repos.
- Optimize performance & reliability: profile jobs, tune clusters, and ensure SLAs for throughput, latency, and cost.
- Collaborate in Agile squads: partner with engineers, analysts, and consultants to translate business questions into data solutions.

Skills & Qualifications

Must-Have
- 1-2 years hands-on with Apache Spark or Kafka and Python (PySpark/Pandas/Polars).
- Experience building Delta Lake / medallion architectures on Azure or Databricks.
- Proven ability to design event-driven pipelines and write unit/integration tests.
- Git-centric workflow knowledge plus CI/CD tooling (GitHub Actions, Azure DevOps).

Preferred
- Exposure to SQL/relational & NoSQL stores and hybrid lakehouse integrations.
- STEM/computer-science degree or equivalent foundation in algorithms and OOP.

Benefits & Culture Highlights
- Flexible, remote-first teams: outcome-driven culture with quarterly hackathons and dedicated learning budgets.
- Growth runway: clear promotion paths from Associate to Senior Engineer, backed by certified Azure & Databricks training.
- Inclusive collaboration: small, empowered Agile squads that value knowledge-sharing, mentorship, and transparent feedback.

Skills: modern javascript, cloud, vector databases, angular, pipelines, CI, containerization, ML, AWS, LangChain, shell scripting, MLOps, performance testing, knowledge-graph design (RDF/OWL/SPARQL), data, feature engineering, CI/CD, Python, AWS services (SageMaker, Bedrock, Lambda), synthetic-data augmentation, generative AI, data cataloging, metadata management, lineage, data governance
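The bronze-silver-gold (medallion) layering mentioned above is easiest to see in code. The following is an illustrative sketch only, not part of the job description: a single bronze-to-silver hop in PySpark with Delta Lake (as typically available on Databricks), where all paths, table names, and columns are hypothetical.

```python
# Illustrative medallion-architecture sketch: one bronze -> silver hop on Delta Lake.
# Assumes a Databricks-style runtime where the Delta format is available.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-demo").getOrCreate()

# Bronze: land raw events as-is, keeping an ingestion timestamp for lineage
bronze = (spark.read.json("/mnt/landing/events/")
               .withColumn("ingested_at", F.current_timestamp()))
bronze.write.format("delta").mode("append").save("/mnt/bronze/events")

# Silver: de-duplicate and standardise types so downstream (gold) tables stay clean
silver = (spark.read.format("delta").load("/mnt/bronze/events")
               .dropDuplicates(["event_id"])
               .withColumn("event_ts", F.to_timestamp("event_ts")))
silver.write.format("delta").mode("overwrite").save("/mnt/silver/events")
```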
Posted 1 day ago
10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
We are seeking an experienced PMT Technical Project Manager with a minimum of 10 years of experience in managing data projects. The ideal candidate will have a strong background in data management, MDM, Databricks, Azure, and lakehouse architectures, along with a solid understanding of data domains. As a PMT Project Manager, you will be responsible for overseeing and coordinating all aspects of data projects, including planning, budgeting, and execution. You will work closely with cross-functional teams to ensure the successful delivery of projects on time and within budget. The ideal candidate will have excellent leadership and communication skills, as well as a proven track record of successfully managing data projects. This is a great opportunity for a driven and experienced professional to join our team and make a significant impact in the data industry. The total years of work experience required for this position is 12, and the work mode is Work from Virtusa Office.
Posted 1 day ago
3.0 years
0 Lacs
Greater Madurai Area
On-site
Job Requirements

Role Description
As a BI Developer, you will be responsible for transforming raw data into actionable insights that drive business decisions. As part of the BI and Reporting team, you will work closely with the Data Operations team, Database Administrators, Data Business Partners and business stakeholders to develop data analytics solutions, create interactive reports, and optimize BI workflows using SQL, Python, Databricks, Power BI, and Tableau. Your expertise in data modelling, visualization, and reporting will be crucial in shaping data-driven strategies.

Key Responsibilities
- Develop data models and interactive dashboards using Power BI, Tableau, and automated reporting to track key performance indicators (KPIs) relevant to business functions.
- Write complex SQL queries, leverage Python for data manipulation and predictive analytics, and optimize ETL processes for efficient data handling across multiple domains.
- Work with Databricks for large-scale data processing and implement AWS/Azure-based cloud solutions, ensuring scalability and performance of BI applications.
- Maintain data accuracy, consistency, and security across platforms, ensuring high-quality BI applications tailored to SCM, finance, sales, and marketing needs.
- Partner with business teams, communicate complex findings effectively to non-technical stakeholders, and drive a data-centric culture across departments.

Required Skills & Qualifications
Education: Bachelor's degree in Data Science, Computer Science, Business Analytics, or a related field.
Experience: 3+ years in BI, data analytics, or reporting roles.

Technical Expertise
- SQL: strong proficiency in writing queries and optimizing databases.
- Python: experience in data manipulation and automation.
- Databricks: hands-on experience with cloud-based data processing.
- Visualization tools: Power BI, Tableau.

Soft Skills
- Strong analytical thinking and problem-solving abilities.
- Excellent communication and stakeholder management skills.
- Ability to translate business needs into technical solutions.
Posted 1 day ago
8.0 - 12.0 years
14 - 24 Lacs
Pune
Work from Office
Role & Responsibilities

Experience: 8-10 years in the Data and Analytics domain with expertise in the Microsoft data tech stack.
Leadership: experience in managing teams of 8-10 members.
Technical Skills:
- Expertise in tools such as Microsoft Fabric, Azure Synapse Analytics, Azure Data Factory, Power BI, SQL Server, Azure Databricks, etc.
- Strong understanding of data architecture, pipelines, and governance.
- Understanding of another data platform such as Snowflake, Google BigQuery or Amazon Redshift is a plus and a good-to-have skill.
- Tech stack: dbt and Databricks or Snowflake; Microsoft BI - Power BI, Synapse and Fabric.
Project Management: proficiency in project management methodologies (Agile, Scrum, or Waterfall).

Key Responsibilities

Project Delivery & Management:
- Be involved in project delivery; help define the project plan and ensure timelines are met.
- Maintain quality control and ensure client satisfaction at all stages.

Team Leadership & Mentorship:
- Lead, mentor, and manage a team of 5 to 8 professionals.
- Conduct performance evaluations and provide opportunities for skill enhancement.
- Foster a collaborative and high-performance work environment.

Client Engagement:
- Act as the primary point of contact on the technical front.
- Understand client needs and ensure expectations are met or exceeded.
- Conduct bi-weekly and monthly project reviews with the customer.

Technical Expertise & Innovation:
- Stay updated with the latest trends in Microsoft data technologies (Microsoft Fabric, Azure Synapse, Power BI, SQL Server, Azure Data Factory, etc.).
- Provide technical guidance and support to the team.

Regards,
Ruchita Shete
Busisol Sourcing Pvt. Ltd.
Tel No: 7738389588
Email id: ruchita@busisol.net
Posted 1 day ago
5.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Job Family: Data Science & Analysis (India)
Travel Required: None
Clearance Required: None

What You Will Do
- Design, develop, and maintain robust, scalable, and efficient data pipelines and ETL/ELT processes.
- Lead and execute data engineering projects from inception to completion, ensuring timely delivery and high quality.
- Build and optimize data architectures for operational and analytical purposes.
- Collaborate with cross-functional teams to gather and define data requirements.
- Implement data quality, data governance, and data security practices.
- Manage and optimize cloud-based data platforms (Azure/AWS).
- Develop and maintain Python/PySpark libraries for data ingestion, processing and integration with both internal and external data sources.
- Design and optimize scalable data pipelines using Azure Data Factory and Spark (Databricks).
- Work with stakeholders, including the Executive, Product, Data and Design teams, to assist with data-related technical issues and support their data infrastructure needs.
- Develop frameworks for data ingestion, transformation, and validation.
- Mentor junior data engineers and guide best practices in data engineering.
- Evaluate and integrate new technologies and tools to improve data infrastructure.
- Ensure compliance with data privacy regulations (HIPAA, etc.).
- Monitor performance and troubleshoot issues across the data ecosystem.
- Automate deployment of data pipelines using GitHub Actions / Azure DevOps.

What You Will Need
- Bachelor's or master's degree in Computer Science, Information Systems, Statistics, Math, Engineering, or a related discipline.
- Minimum 5+ years of solid hands-on experience in data engineering and cloud services.
- Extensive working experience with advanced SQL and a deep understanding of SQL.
- Good experience in Azure Data Factory (ADF), Databricks, Python and PySpark.
- Good experience with modern data storage concepts: data lake, lakehouse.
- Experience with other cloud services (AWS) and data processing technologies will be an added advantage.
- Ability to enhance, develop and resolve defects in ETL processes using cloud services.
- Experience handling large volumes (multiple terabytes) of incoming data from clients and third-party sources in various formats such as text, CSV, EDI X12 files and Access databases.
- Experience with software development methodologies (Agile, Waterfall) and version control tools.
- Highly motivated, strong problem solver, self-starter, and fast learner with demonstrated analytic and quantitative skills.
- Good communication skills.

What Would Be Nice To Have
- AWS ETL platform: Glue, S3.
- One or more programming languages such as Java, .NET.
- Experience in the US healthcare domain and insurance claim processing.

What We Offer
Guidehouse offers a comprehensive, total rewards package that includes competitive compensation and a flexible benefits package that reflects our commitment to creating a diverse and supportive workplace.

About Guidehouse
Guidehouse is an Equal Opportunity Employer – Protected Veterans, Individuals with Disabilities or any other basis protected by law, ordinance, or regulation. Guidehouse will consider for employment qualified applicants with criminal histories in a manner consistent with the requirements of applicable law or ordinance, including the Fair Chance Ordinance of Los Angeles and San Francisco.
If you have visited our website for information about employment opportunities, or to apply for a position, and you require an accommodation, please contact Guidehouse Recruiting at 1-571-633-1711 or via email at RecruitingAccommodation@guidehouse.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodation. All communication regarding recruitment for a Guidehouse position will be sent from Guidehouse email domains including @guidehouse.com or guidehouse@myworkday.com. Correspondence received by an applicant from any other domain should be considered unauthorized and will not be honored by Guidehouse. Note that Guidehouse will never charge a fee or require a money transfer at any stage of the recruitment process and does not collect fees from educational institutions for participation in a recruitment event. Never provide your banking information to a third party purporting to need that information to proceed in the hiring process. If any person or organization demands money related to a job opportunity with Guidehouse, please report the matter to Guidehouse’s Ethics Hotline. If you want to check the validity of correspondence you have received, please contact recruiting@guidehouse.com. Guidehouse is not responsible for losses incurred (monetary or otherwise) from an applicant’s dealings with unauthorized third parties. Guidehouse does not accept unsolicited resumes through or from search firms or staffing agencies. All unsolicited resumes will be considered the property of Guidehouse and Guidehouse will not be obligated to pay a placement fee.
Posted 1 day ago
0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Role Description
- Production monitoring and troubleshooting in on-prem ETL and AWS environments.
- Working experience using ETL DataStage along with DB2.
- Awareness of tools such as Dynatrace, AppDynamics, Postman and AWS CI/CD.
- Software code development experience in ETL batch processing and AWS cloud.
- Software code management, repository updates and reuse.
- Implementation and/or configuration, management, and maintenance of software.
- Implementation and configuration of SaaS and public, private and hybrid cloud-based PaaS solutions.
- Integration of SaaS and PaaS solutions with Data Warehouse Application Systems, including SaaS and PaaS upgrade management.
- Configuration, maintenance and support for the entire DWA Application Systems landscape, including but not limited to supporting DWA Application Systems components and tasks required to deliver business processes and functionality (e.g., logical layers of databases, data marts, logical and physical data warehouses, middleware, interfaces, shell scripts, massive data transfer and uploads, web development, mobile app development, web services and APIs).
- DWA Application Systems support for day-to-day changes and business continuity and for addressing key business, regulatory, legal or fiscal requirements.
- Support for all third-party specialized DWA Application Systems.
- DWA Application Systems configuration and collaboration with the infrastructure service supplier required to provide application access to external/third parties.
- Integration with internal and external systems (e.g., direct application interfaces, logical middleware configuration and application program interface (API) use and development).
- Collaboration with third-party suppliers such as the infrastructure service supplier and enterprise public cloud providers.
- Documentation and end-user training of new functionality.
- All activities required to support business process application functionality and to deliver the required application and business functions to end users in an integrated service delivery model across the DWA application development lifecycle (e.g., plan, deliver, run).
- Maintain data quality and run batch schedules; operations and maintenance.
- Deploy code to all environments (Prod, UAT, Performance, SIT, etc.).
- Address all open tickets within the SLA.
- CDK (TypeScript), CFT (YAML).

Nice to have
- GitHub
- Scripting: Bash/sh
- Security-minded, with knowledge of best practices
- Python
- Databricks & Snowflake

Skills: Databricks, DataStage, CloudOps, production support
Posted 1 day ago
10.0 years
20 - 25 Lacs
Gurugram, Haryana, India
On-site
- 7–10 years of data engineering experience, with 5+ years on Databricks and Apache Spark.
- Expert-level hands-on experience with Databricks and AWS (S3, Glue, EMR, Kinesis, Lambda, IAM, CloudWatch).
- Primary language: Python; strong skills in Spark SQL.
- Deep understanding of Lakehouse architecture, Delta Lake, Parquet, Iceberg.
- Strong experience with Databricks Workflows, Unity Catalog, runtime upgrades, and cost optimization.
- Experience with Databricks native monitoring tools and Datadog integration.
- Security and compliance expertise across data governance and infrastructure layers.
- Experience with CI/CD automation using Terraform, CloudFormation, and Git.
- Hands-on experience with disaster recovery and multi-region architecture.
- Strong problem-solving, debugging, and documentation skills.

Skills: S3, pipeline engineering, lakehouse architecture, AWS, Lambda, IAM, Parquet, Delta Lake, Datadog, Spark SQL, CI/CD automation, runtime upgrades, Terraform, Kinesis, Glue, platform governance, Python, CloudWatch, Databricks, observability, Git, Iceberg, Apache Spark, Databricks Workflows, CloudFormation, Unity Catalog, disaster recovery, EMR
Posted 1 day ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Greetings from TCS!

TCS is hiring for Databricks / Python + PySpark.

Required Skill Set: Python, SQL, PySpark
Desired Experience Range: 5 to 8 years
Job Location: Gurgaon

Must-Have
- Actively engage in the development, testing, deployment, operation, monitoring and refinement of data services.
- Manage incidents/problems, apply fixes and resolve systematic issues; triage issues with stakeholders and identify and implement solutions to restore productivity.
- Experience with the design, build and implementation of data engineering pipelines using SQL, Python and Databricks (or Snowflake).
- Experience with data solutions in the cloud (preferably AWS) as well as on-premises assets like Oracle.
- Experience with PySpark is desirable.
- Good experience with stored procedures.

HR Recruitment
Anshika Varma
Posted 1 day ago
3.0 - 6.0 years
25 - 32 Lacs
Hyderabad
Work from Office
Senior Data Scientist - Gen AI

Experience: 3-6 years
Salary: INR 25-33 Lacs per annum
Preferred Notice Period: within 30 days
Shift: 10:00 AM to 7:00 PM IST
Opportunity Type: Onsite (Hyderabad)
Placement Type: Permanent
(Note: This is a requirement for one of Uplers' clients.)

Must-have skills required: Gen AI
Good-to-have skills: MDM platforms

Blend360 (one of Uplers' clients) is looking for a Senior Data Scientist - Gen AI who is passionate about their work, eager to learn and grow, and committed to delivering exceptional results. If you are a team player with a positive attitude and a desire to make a difference, then we want to hear from you.

Role Overview
We are hiring a Senior Data Scientist (Generative AI) to spearhead the development of advanced AI-powered classification and matching systems on Databricks. You will contribute to flagship programs like the Diageo AI POC by building RAG pipelines, deploying agentic AI workflows, and scaling LLM-based solutions for high-precision entity matching and MDM modernization.

Key Responsibilities
- Design and implement end-to-end AI pipelines for product classification, fuzzy matching, and deduplication using LLMs, RAG, and Databricks-native workflows.
- Develop scalable, reproducible AI solutions within Databricks notebooks and job clusters, leveraging Delta Lake, MLflow, and Unity Catalog.
- Engineer Retrieval-Augmented Generation (RAG) workflows using vector search and integrate them with Python-based matching logic.
- Build agent-based automation pipelines (rule-driven + GenAI agents) for anomaly detection, compliance validation, and harmonization logic.
- Implement explainability, audit trails, and governance-first AI workflows aligned with enterprise-grade MDM needs.
- Collaborate with data engineers, BI teams, and product owners to integrate GenAI outputs into downstream systems.
- Contribute to modular system design and documentation for long-term scalability and maintainability.

Qualifications
- Bachelor's/Master's in Computer Science, Artificial Intelligence, or a related field.
- 5+ years of overall data science experience with 2+ years in Generative AI / LLM-based applications.
- Deep experience with the Databricks ecosystem: Delta Lake, MLflow, DBFS, Databricks Jobs & Workflows.
- Strong Python and PySpark skills with the ability to build scalable data pipelines and AI workflows in Databricks.
- Experience with LLMs (e.g., OpenAI, LLaMA, Mistral) and frameworks like LangChain or LlamaIndex.
- Working knowledge of vector databases (e.g., FAISS, Chroma) and prompt engineering for classification/retrieval.
- Exposure to MDM platforms (e.g., Stibo STEP) and familiarity with data harmonization challenges.
- Experience with explainability frameworks (e.g., SHAP, LIME) and AI audit tooling.

Preferred Skills
- Knowledge of agentic AI architectures and multi-agent orchestration.
- Familiarity with Azure Data Hub and enterprise data ingestion frameworks.
- Understanding of data governance, lineage, and regulatory compliance in AI systems.

Interview Process
- Online assessment
- Technical screenings - 2
- Technical interviews - 2
- Project review
- Client interview

How to apply for this opportunity (easy 3-step process):
1. Click on Apply! and register or log in on our portal.
2. Upload your updated resume and complete the screening form.
3. Increase your chances of getting shortlisted and meet the client for the interview!

About Our Client
Our vision is to build a company of world-class people that helps our clients optimize business performance through data, technology and analytics.
The company has two divisions:
- Data Science Solutions: we work at the intersection of data, technology and analytics.
- Talent Solutions: we live and breathe the digital and talent marketplace.

About Uplers
Uplers is the #1 hiring platform for SaaS companies, designed to help you hire top product and engineering talent quickly and efficiently. Our end-to-end AI-powered platform combines artificial intelligence with human expertise to connect you with the best engineering talent from India. With over 1M deeply vetted professionals, Uplers streamlines the hiring process, reducing lengthy screening times and ensuring you find the perfect fit. Companies like GitLab, Twilio, TripAdvisor, and Airbnb trust Uplers to scale their tech and digital teams effectively and cost-efficiently. Experience a simpler, faster, and more reliable hiring process with Uplers today.
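As context for the RAG and vector-search responsibilities described in this posting, here is a minimal, illustrative retrieval sketch using FAISS (one of the vector stores the qualifications list mentions). The embeddings are random placeholders standing in for a real embedding model, and nothing here is taken from the client's actual pipeline.

```python
# Illustrative RAG retrieval step with FAISS. Embeddings are random placeholders;
# in a real pipeline they would come from an embedding model over the document corpus.
import numpy as np
import faiss

dim = 384                                                   # embedding dimensionality (model-dependent)
doc_vectors = np.random.rand(1000, dim).astype("float32")   # placeholder corpus embeddings

index = faiss.IndexFlatL2(dim)   # exact L2 nearest-neighbour index
index.add(doc_vectors)           # index the corpus

query_vector = np.random.rand(1, dim).astype("float32")     # placeholder query embedding
distances, doc_ids = index.search(query_vector, 5)          # top-5 nearest documents

# doc_ids holds the indices of the closest documents; their text would be passed
# to an LLM prompt as retrieved context for classification or entity matching.
print(doc_ids)
```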
Posted 1 day ago
0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we're helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.

Title and Summary
Senior Data Engineer – Product Data & Analytics Team

Overview
The Product Data & Analytics team builds internal analytic partnerships, strengthening focus on the health of the business, portfolio and revenue optimization opportunities, initiative tracking, new product development and go-to-market strategies. Are you excited about data assets and the value they bring to an organization? Are you an evangelist for data-driven decision making? Are you motivated to be part of a global analytics team that builds large-scale analytical capabilities supporting end users across 6 continents? Do you want to be the go-to resource for data analytics in the company? The ideal candidate has a knack for seeing solutions in sprawling data sets and the business mindset to convert insights into strategic opportunities for our company.

Role & Responsibilities
- Work closely with global and regional teams to architect, develop, and maintain data engineering, advanced reporting and data visualization capabilities on large volumes of data to support analytics and reporting needs across products, markets and services.
- Obtain data from multiple sources; collate, analyze, and triangulate information to develop reliable fact bases. Effectively use tools to manipulate large-scale databases, synthesizing data insights.
- Execute cross-functional projects using advanced modeling and analysis techniques to discover insights that will guide strategic decisions and uncover optimization opportunities.
- Build, develop and maintain data models, reporting systems, dashboards and performance metrics that support key business decisions.
- Extract intellectual capital from engagement work and actively share tools, methods and best practices across projects.
- Provide first-level insights/conclusions/assessments and present findings via Tableau/Power BI dashboards, Excel and PowerPoint.
- Apply quality control, data validation, and cleansing processes to new and existing data sources.
- Lead, mentor and guide more junior team members.
- Communicate results and business impacts of insight initiatives to stakeholders in leadership, technology, sales, marketing and product teams.
- Bring your passion and expertise.

All About You
- Experience in data management, data mining, data analytics, data reporting, data product development and quantitative analysis.
- Financial institution or payments experience a plus.
- Experience presenting data findings in a readable and insight-driven format; experience building support decks.
- Advanced SQL skills, with the ability to write optimized queries for large data sets (big data).
- Experience on platforms/environments: Cloudera Hadoop, big-data technology stack, SQL Server, Microsoft BI stack.
- Experience with data visualization tools such as Looker, Tableau, Power BI.
- Experience with Python, R, Databricks a plus.
- Experience with SQL Server Integration Services (SSIS), SQL Server Analysis Services (SSAS) and SQL Server Reporting Services (SSRS) will be an added advantage.
- Excellent problem-solving, quantitative and analytical skills.
- In-depth technical knowledge, drive and ability to learn new technologies.
- Strong attention to detail and quality.
- Team player with excellent communication skills.
- Must be able to interact with management and internal stakeholders and collect requirements.
- Must be able to perform in a team, use judgment and operate under ambiguity.

Education
Bachelor's or master's degree in Computer Science, Information Technology, Engineering, Mathematics or Statistics.

Additional Competencies
- Excellent English, quantitative, technical, and communication (oral/written) skills.
- Analytical/problem-solving ability.
- Strong attention to detail and quality.
- Creativity/innovation.
- Self-motivated; operates with a sense of urgency.
- Project management/risk mitigation.
- Able to prioritize and perform multiple tasks simultaneously.

Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization and, therefore, it is expected that every person working for, or on behalf of, Mastercard is responsible for information security and must:
- Abide by Mastercard's security policies and practices;
- Ensure the confidentiality and integrity of the information being accessed;
- Report any suspected information security violation or breach; and
- Complete all periodic mandatory security trainings in accordance with Mastercard's guidelines.

R-245986
Posted 1 day ago
7.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Experience: 7+ years
Salary: Confidential (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Hybrid (Ahmedabad)
Placement Type: Full-time permanent position
(Note: This is a requirement for one of Uplers' clients - Inferenz.)

What do you need for this opportunity?
Must-have skills required: Databricks, SQL, Python, ETL tools, Data Modelling, Data Warehousing

Inferenz is looking for:
Position: Lead Data Engineer (Databricks)
Location: Ahmedabad, Pune
Required Experience: 7 to 10 years
Preferred: immediate joiner

Key Responsibilities
- Lead the design, development, and optimization of data solutions using Databricks, ensuring they are scalable, efficient, and secure.
- Collaborate with cross-functional teams to gather and analyse data requirements, translating them into robust data architectures and solutions.
- Develop and maintain ETL pipelines, leveraging Databricks and integrating with Azure Data Factory as needed.
- Implement machine learning models and advanced analytics solutions, incorporating Generative AI to drive innovation.
- Ensure data quality, governance, and security practices are adhered to, maintaining the integrity and reliability of data solutions.
- Provide technical leadership and mentorship to junior engineers, fostering an environment of learning and growth.
- Stay updated on the latest trends and advancements in data engineering, Databricks, Generative AI, and Azure Data Factory to continually enhance team capabilities.

Required Skills & Qualifications
- Bachelor's or master's degree in Computer Science, Information Technology, or a related field.
- 7 to 10 years of experience in data engineering, with a focus on Databricks.
- Proven expertise in building and optimizing data solutions using Databricks and integrating with Azure Data Factory / AWS Glue.
- Proficiency in SQL and programming languages such as Python or Scala.
- Strong understanding of data modelling, ETL processes, and Data Warehousing / Data Lakehouse concepts.
- Familiarity with cloud platforms, particularly Azure, and containerization technologies such as Docker.
- Excellent analytical, problem-solving, and communication skills.
- Demonstrated leadership ability with experience mentoring and guiding junior team members.

Preferred Qualifications
- Experience with Generative AI technologies and their applications.
- Familiarity with other cloud platforms, such as AWS or GCP.
- Knowledge of data governance frameworks and tools.

How to apply for this opportunity:
Step 1: Click on Apply! and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meet the client for the interview!

About Uplers
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 1 day ago
5.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
We're Hiring: Databricks Administrator (5+ Years Experience) | Contractual | PAN India | Hybrid Work Model

What we're looking for:
- Minimum 5 years of experience in Databricks administration
- Strong background in cluster management, access control, and job monitoring
- Proficiency in scripting (Python, Bash, or PowerShell)
- Experience with AWS or Azure
- Prior work experience in a Tier 1 company is a must

This is a contract-to-hire opportunity offering flexibility, challenging projects, and a chance to work with some of the best minds in data.
Posted 1 day ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Flutter Entertainment
Flutter Entertainment is the world's largest sports betting and iGaming operator, with 13.9 million average monthly players worldwide and an annual revenue of $14Bn in 2024. We have a portfolio of iconic brands, including Paddy Power, Betfair, FanDuel, PokerStars, Junglee Games and Sportsbet. Flutter Entertainment is listed on both the New York Stock Exchange (NYSE) and the London Stock Exchange (LSE). In 2024, we were recognized in TIME's 100 Most Influential Companies under the 'Pioneers' category, a testament to our innovation and impact. Our ambition is to transform global gaming and betting to deliver long-term growth and a positive, sustainable future for our sector. Together, we are Changing the Game!

Working at Flutter is a chance to work with a growing portfolio of brands across a range of opportunities. We will support you every step of the way to help you grow. Just like our brands, we ensure our people have everything they need to succeed.

Flutter Entertainment India
Our Hyderabad office, located in one of India's premier technology parks, is the Global Capability Center for Flutter Entertainment. A center of expertise and innovation, this hub is now home to over 900+ talented colleagues working across Customer Service Operations, Data and Technology, Finance Operations, HR Operations, Procurement Operations, and other key enabling functions. We are committed to crafting impactful solutions for all our brands and divisions to power Flutter's incredible growth and global impact. With the scale of a leader and the mindset of a challenger, we're dedicated to creating a brighter future for our customers, colleagues, and communities.

Overview of the Role
We are looking for a Data Engineer with 3 to 5 years of experience to help design, build, and maintain the next-generation data platform for our Sisal team. This role will leverage modern cloud technologies, infrastructure as code (IaC), and advanced data processing techniques to drive business value from our data assets. You will collaborate with cross-functional teams to ensure data availability, quality, and reliability while applying expertise in Databricks on AWS, Python, CI/CD, and Agile methodologies to deliver scalable and efficient solutions.

Key Responsibilities
- Design and implement scalable ETL processes and data pipelines that integrate with diverse data sources.
- Build streaming and batch data processing solutions using Databricks on AWS.
- Develop and optimize Lakehouse architectures; work with big-data access patterns to process large-scale datasets efficiently.
- Drive automation and efficiency using CI/CD pipelines, IaC, and DevOps practices.
- Improve database performance, implement best practices for data governance, and enhance data security.

Required Skills
- 3 to 5 years of experience in data engineering and ETL pipeline development.
- Hands-on experience with Databricks on AWS.
- Proven experience designing and implementing scalable data warehousing solutions.
- Expertise in AWS data services, particularly DynamoDB, Glue, Athena, EMR, Redshift, Lambda, and Kinesis.
- Strong programming skills in Python (PySpark/Spark SQL experience preferred) and Java.

Desirable / Preferred Skills
- Knowledge of streaming data processing (e.g., Kafka, Kinesis, Spark Streaming).
- Experience with CI/CD tools and automation (Git, Jenkins, Ansible, shell scripting, unit/integration testing).
- Familiarity with Agile methodologies and DevOps best practices.

Benefits We Offer
- Access to Learnerbly, Udemy, and a Self-Development Fund for upskilling.
- Career growth through Internal Mobility Programs.
- Comprehensive health insurance for you and your dependents.
- Well-Being Fund and 24/7 Assistance Program for holistic wellness.
- Hybrid model: 2 office days/week with flexible leave policies, including maternity, paternity, and sabbaticals.
- Free meals, cab allowance, and a home office setup allowance.
- Employer PF contribution, gratuity, Personal Accident & Life Insurance.
- Sharesave Plan to purchase discounted company shares.
- Volunteering leave and team events to build connections.
- Recognition through the Kudos Platform and Referral Rewards.

Why Choose Us
Flutter is an equal-opportunity employer and values the unique perspectives and experiences that everyone brings. Our message to colleagues and stakeholders is clear: everyone is welcome, and every voice matters. We have ambitious growth plans and goals for the future. Here's an opportunity for you to play a pivotal role in shaping the future of Flutter Entertainment India.
Posted 1 day ago
40.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Amgen
Amgen harnesses the best of biology and technology to fight the world’s toughest diseases and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what’s known today.
About The Role
Role Description: We are seeking a seasoned Engineering Manager (Data Engineering) to lead the end-to-end management of enterprise data assets and operational data workflows. This role is critical in ensuring the availability, quality, consistency, and timeliness of data across platforms and functions, supporting analytics, reporting, compliance, and digital transformation initiatives. You will be responsible for day-to-day data operations, manage a team of data professionals, and drive process excellence in data intake, transformation, validation, and delivery. You will work closely with cross-functional teams including data engineering, analytics, IT, governance, and business stakeholders to align operational data capabilities with enterprise needs.
Roles & Responsibilities:
Lead and manage the enterprise data operations team, responsible for data ingestion, processing, validation, quality control, and publishing to various downstream systems.
Define and implement standard operating procedures for data lifecycle management, ensuring accuracy, completeness, and integrity of critical data assets.
Oversee and continuously improve daily operational workflows, including scheduling, monitoring, and troubleshooting data jobs across cloud and on-premise environments.
Establish and track key data operations metrics (SLAs, throughput, latency, data quality, incident resolution) and drive continuous improvements.
Partner with data engineering and platform teams to optimize pipelines, support new data integrations, and ensure scalability and resilience of operational data flows.
Collaborate with data governance, compliance, and security teams to maintain regulatory compliance, data privacy, and access controls.
Serve as the primary escalation point for data incidents and outages, ensuring rapid response and root cause analysis.
Build strong relationships with business and analytics teams to understand data consumption patterns, prioritize operational needs, and align with business objectives.
Drive adoption of best practices for documentation, metadata, lineage, and change management across data operations processes.
Mentor and develop a high-performing team of data operations analysts and leads.
Functional Skills:
Must-Have Skills:
Experience managing a team of data engineers in biotech/pharma domain companies.
Experience designing and maintaining data pipelines and analytics solutions that extract, transform, and load data from multiple source systems.
Demonstrated hands-on experience with cloud platforms (AWS) and the ability to architect cost-effective and scalable data solutions.
Experience managing data workflows in cloud environments such as AWS, Azure, or GCP.
Strong problem-solving skills with the ability to analyze complex data flow issues and implement sustainable solutions.
Working knowledge of SQL, Python, or scripting languages for process monitoring and automation.
Experience collaborating with data engineering, analytics, IT operations, and business teams in a matrixed organization.
Familiarity with data governance, metadata management, access control, and regulatory requirements (e.g., GDPR, HIPAA, SOX).
Excellent leadership, communication, and stakeholder engagement skills.
Well versed in full-stack development, DataOps automation, logging frameworks, and pipeline orchestration tools.
Strong analytical and problem-solving skills to address complex data challenges.
Effective communication and interpersonal skills to collaborate with cross-functional teams.
Good-to-Have Skills:
Data Engineering Management experience in Biotech/Life Sciences/Pharma.
Experience using graph databases such as Stardog, MarkLogic, Neo4j, or AllegroGraph.
Education and Professional Certifications
Any degree and 9–13 years of experience.
AWS Certified Data Engineer preferred.
Databricks certification preferred.
Scaled Agile (SAFe) certification preferred.
Soft Skills:
Excellent analytical and troubleshooting skills
Strong verbal and written communication skills
Ability to work effectively with global, virtual teams
High degree of initiative and self-motivation
Ability to manage multiple priorities successfully
Team-oriented, with a focus on achieving team goals
Strong presentation and public speaking skills
EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
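To illustrate the kind of process monitoring and batch validation described in the must-have skills, here is a minimal PySpark sketch of a completeness check before publishing a dataset; the table names, column, and threshold are hypothetical examples, not details from the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically in Databricks notebooks

# Hypothetical ingested batch; the table name is illustrative only.
orders = spark.read.table("ops.orders_raw")

# Completeness check: share of rows missing a business-critical key.
total = orders.count()
missing = orders.filter(F.col("order_id").isNull()).count()
null_rate = missing / total if total else 0.0

# Publish only if the batch meets the quality threshold; otherwise fail fast
# so the incident can be triaged before downstream consumers see bad data.
if null_rate > 0.01:
    raise ValueError(f"order_id null rate {null_rate:.2%} exceeds the 1% threshold")

orders.write.mode("overwrite").saveAsTable("ops.orders_validated")
```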
Posted 1 day ago
0 years
0 Lacs
India
On-site
Visionqual IT Services, a Hyderabad-based IT services company, is looking for candidates with 4+ years of experience for a SAP Databricks Consultant role.
1. Proficiency in Databricks
2. Understanding of Medallion Architecture
3. Understanding of the AWS environment
4. Good to have: SAP Datasphere skills
5. Proficiency in SQL for data manipulation and optimization
6. Strong understanding of data warehouse concepts and dimensional modeling
7. Advanced knowledge of DAX, M language, and Power Query for sophisticated data modeling
8. Strong expertise in semantic modeling principles and best practices
9. Extensive experience with custom visualizations and complex dashboard design
10. Good to have: Python programming skills
Interested candidates who can join immediately may share their profiles with sateesh.varma@visionqual.com and info@visionqual.com.
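As background on the Medallion Architecture mentioned above, the following is a minimal PySpark sketch of a bronze → silver → gold flow on Delta Lake; the storage paths, table contents, and column names are hypothetical examples only.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: raw SAP extract landed as-is (path and schema are illustrative).
bronze = spark.read.format("delta").load("s3://lake/bronze/sap_sales_orders")

# Silver: cleansed, de-duplicated, conformed records.
silver = (
    bronze
    .dropDuplicates(["order_id"])
    .filter(F.col("order_date").isNotNull())
)
silver.write.format("delta").mode("overwrite").save("s3://lake/silver/sap_sales_orders")

# Gold: business-level aggregate ready for reporting and dashboards.
gold = (
    silver.groupBy("sales_org", "order_date")
    .agg(F.sum("net_value").alias("daily_net_value"))
)
gold.write.format("delta").mode("overwrite").save("s3://lake/gold/daily_sales")
```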
Posted 1 day ago
15.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Description
Lead the design, development, and implementation of scalable data pipelines and ELT processes using Databricks, DLT, dbt, Airflow, and other tools.
Collaborate with stakeholders to understand data requirements and deliver high-quality data solutions.
Optimize and maintain existing data pipelines to ensure data quality, reliability, and performance.
Develop and enforce data engineering best practices, including coding standards, testing, and documentation.
Mentor junior data engineers, providing technical leadership and fostering a culture of continuous learning and improvement.
Monitor and troubleshoot data pipeline issues, ensuring timely resolution and minimal disruption to business operations.
Stay up to date with the latest industry trends and technologies, and proactively recommend improvements to our data engineering practices.
Qualifications
Degree in Management Information Systems (MIS), Data Science or a related field.
15 years of experience in data engineering and/or architecture, with a focus on big data technologies.
Extensive production experience with Databricks, Apache Spark, and other related technologies.
Familiarity with orchestration and ELT tools like Airflow, dbt, etc.
Expert SQL knowledge.
Proficiency in programming languages such as Python, Scala, or Java.
Strong understanding of data warehousing concepts.
Experience with cloud platforms such as Azure, AWS, and Google Cloud.
Excellent problem-solving skills and the ability to work in a fast-paced, collaborative environment.
Strong communication and leadership skills, with the ability to effectively mentor and guide.
Experience with machine learning and data science workflows.
Knowledge of data governance and security best practices.
Certification in Databricks, Azure, Google Cloud or related technologies.
Job: Information Technology
Primary Location: India-Maharashtra-Mumbai
Schedule: Full-time
Travel: No
Req ID: 250903
Job Hire Type: Experienced
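As an illustration of the Airflow-orchestrated ELT work described above, here is a minimal sketch that triggers a Databricks notebook run on a daily schedule; it assumes Airflow 2.4+ with the Databricks provider installed, and the connection ID, notebook path, and cluster settings are hypothetical placeholders.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.databricks.operators.databricks import DatabricksSubmitRunOperator

# Hypothetical daily ELT orchestration: Airflow submits a one-time Databricks run.
with DAG(
    dag_id="daily_elt_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    run_transformations = DatabricksSubmitRunOperator(
        task_id="run_transformations",
        databricks_conn_id="databricks_default",
        json={
            "new_cluster": {
                "spark_version": "14.3.x-scala2.12",  # illustrative runtime version
                "node_type_id": "i3.xlarge",
                "num_workers": 2,
            },
            "notebook_task": {"notebook_path": "/Repos/elt/transform_orders"},
        },
    )
```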
Posted 1 day ago
10.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Title: Databricks & AWS Lakehouse Engineer
Budget: Max 25 LPA
Location: Gurugram
Client: HCL
Strong hands-on skills in Python (primary), Spark SQL, pipeline engineering, CI/CD automation, observability, and platform governance.
7–10 years of data engineering experience, with 5+ years on Databricks and Apache Spark.
Expert-level hands-on experience with Databricks and AWS (S3, Glue, EMR, Kinesis, Lambda, IAM, CloudWatch).
Primary language: Python; strong skills in Spark SQL.
Deep understanding of Lakehouse architecture, Delta Lake, Parquet, and Iceberg.
Strong experience with Databricks Workflows, Unity Catalog, runtime upgrades, and cost optimization.
Experience with Databricks-native monitoring tools and Datadog integration.
Security and compliance expertise across data governance and infrastructure layers.
Experience with CI/CD automation using Terraform, CloudFormation, and Git.
Hands-on experience with disaster recovery and multi-region architecture.
Strong problem-solving, debugging, and documentation skills.
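For context on the Delta Lake and cost-optimization responsibilities listed above, here is a minimal sketch of routine table maintenance on Databricks; the Unity Catalog table name and Z-order columns are hypothetical examples.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # available by default in Databricks notebooks

# Illustrative Unity Catalog table; adjust the catalog/schema to the actual environment.
table = "main.analytics.page_events"

# Compact small files and co-locate frequently filtered columns to reduce scan cost.
spark.sql(f"OPTIMIZE {table} ZORDER BY (event_date, user_id)")

# Remove data files no longer referenced by the Delta transaction log
# (the default retention window applies unless explicitly overridden).
spark.sql(f"VACUUM {table}")
```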
Posted 1 day ago
7.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Lead Data Engineer (Databricks)
Experience: 7–10 Years
Salary: Competitive
Preferred Notice Period: Within 30 Days
Opportunity Type: Hybrid (Ahmedabad)
Placement Type: Permanent
(Note: This is a requirement for one of Uplers' clients.)
Must-have skills: Databricks, SQL or Python, ETL tools or Data Modelling or Data Warehousing
Inferenz (one of Uplers' clients) is looking for:
About Inferenz:
At Inferenz, our team of innovative technologists and domain experts helps businesses accelerate growth through digital enablement and navigate their industries with data, cloud and AI services and solutions. We dedicate our resources to increasing efficiency and gaining a greater competitive advantage by leveraging various next-generation technologies. Our technology expertise has helped us deliver innovative solutions in key industries such as Healthcare & Life Sciences, Consumer & Retail, Financial Services and emerging industries.
Our main capabilities and solutions:
Data Strategy & Architecture
Data & Cloud Migration
Data Quality & Governance
Data Engineering
Predictive Analytics
Machine Learning/Artificial Intelligence
Generative AI
Specialties: Data and Cloud Strategy, Data Modernization, On-Premise to Cloud Migration, SQL to Snowflake Migration, Hadoop to Snowflake Migration, Cloud Data Platforms and Warehouses, Data Engineering and Pipelines, Data Virtualization, Business Intelligence, Data Democratization, Marketing Analytics, Attribution Modelling, Machine Learning, Computer Vision, Natural Language Processing and Augmented Reality.
Job Description
Key Responsibilities:
Lead the design, development, and optimization of data solutions using Databricks, ensuring they are scalable, efficient, and secure.
Collaborate with cross-functional teams to gather and analyse data requirements, translating them into robust data architectures and solutions.
Develop and maintain ETL pipelines, leveraging Databricks and integrating with Azure Data Factory as needed.
Implement machine learning models and advanced analytics solutions, incorporating Generative AI to drive innovation.
Ensure data quality, governance, and security practices are adhered to, maintaining the integrity and reliability of data solutions.
Provide technical leadership and mentorship to junior engineers, fostering an environment of learning and growth.
Stay updated on the latest trends and advancements in data engineering, Databricks, Generative AI, and Azure Data Factory to continually enhance team capabilities.
Required Skills & Qualifications:
Bachelor’s or master’s degree in Computer Science, Information Technology, or a related field.
7 to 10 years of experience in data engineering, with a focus on Databricks.
Proven expertise in building and optimizing data solutions using Databricks and integrating with Azure Data Factory/AWS Glue.
Proficiency in SQL and programming languages such as Python or Scala.
Strong understanding of data modelling, ETL processes, and Data Warehousing/Data Lakehouse concepts.
Familiarity with cloud platforms, particularly Azure, and containerization technologies such as Docker.
Excellent analytical, problem-solving, and communication skills.
Demonstrated leadership ability with experience mentoring and guiding junior team members.
Preferred Qualifications:
Experience with Generative AI technologies and their applications.
Familiarity with other cloud platforms, such as AWS or GCP.
Knowledge of data governance frameworks and tools.
How to apply for this opportunity:
Easy 3-step process:
1. Click on Apply and register or log in to our portal.
2. Upload your updated resume and complete the screening form.
3. Increase your chances of getting shortlisted and meet the client for the interview!
About Uplers:
Our goal is to make hiring and getting hired reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant product and engineering job opportunities and progress in their careers. (Note: There are many more opportunities apart from this on the portal.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
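As an illustration of the data modelling and warehousing skills this role calls for, here is a minimal PySpark sketch that loads a surrogate-keyed dimension and a conformed fact table; the staging tables, columns, and target schema are hypothetical examples only.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical staging inputs; names are illustrative only.
sales = spark.read.table("staging.sales")
customers = spark.read.table("staging.customers")

# Dimension: deduplicated customer attributes with a generated surrogate key.
dim_customer = (
    customers
    .select("customer_id", "customer_name", "segment")
    .dropDuplicates(["customer_id"])
    .withColumn("customer_sk", F.monotonically_increasing_id())
)

# Fact: transactional grain joined to the dimension's surrogate key.
fact_sales = (
    sales.join(dim_customer.select("customer_id", "customer_sk"), "customer_id", "left")
    .select("customer_sk", "order_id", "order_date", "amount")
)

dim_customer.write.format("delta").mode("overwrite").saveAsTable("dw.dim_customer")
fact_sales.write.format("delta").mode("overwrite").saveAsTable("dw.fact_sales")
```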
Posted 1 day ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Location: India
Job Type: Full-time
Experience Level: Mid-Level/Senior
Must-have skills: Strong proficiency in PySpark, Python, SQL, and Azure Data Factory.
Good-to-have skills: Working knowledge of Azure Synapse Analytics, Azure Functions, Logic Apps workflows, Log Analytics, and Azure DevOps.
Job Summary
We are looking for a highly skilled Azure Data Engineer / Databricks Developer to join our data and analytics team. The ideal candidate will have deep expertise in building robust, scalable, and efficient data solutions using Azure cloud services and Apache Spark on Databricks. You will be instrumental in developing end-to-end data pipelines that support advanced analytics and business intelligence initiatives.
Key Responsibilities
Design and implement scalable data pipelines using Databricks, Azure Data Factory, Azure SQL, and other Azure services.
Write efficient PySpark / Spark SQL code for data transformation, cleansing, and enrichment.
Implement data ingestion from various sources including structured, semi-structured, and unstructured data.
Optimize data processing workflows for performance, cost, and reliability.
Collaborate with data analysts and stakeholders to understand data needs and deliver high-quality datasets.
Ensure data governance, security, and compliance using Azure-native tools.
Participate in code reviews, documentation, and deployment of data solutions using DevOps practices.
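To illustrate the PySpark transformation, cleansing, and enrichment work listed in the responsibilities, here is a minimal sketch that reads a semi-structured landing zone and publishes a curated Delta table; the ADLS path, columns, and target table are hypothetical examples.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Illustrative ADLS Gen2 landing path; in practice the account and container
# usually come from pipeline configuration (e.g., an Azure Data Factory parameter).
raw_path = "abfss://raw@examplelake.dfs.core.windows.net/customers/"

# Cleanse and enrich the ingested records before publishing for analytics.
customers = (
    spark.read.json(raw_path)
    .dropDuplicates(["customer_id"])
    .withColumn("email", F.lower(F.trim(F.col("email"))))
    .withColumn("ingest_date", F.current_date())
)

customers.write.format("delta").mode("append").saveAsTable("analytics.customers_clean")
```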
Posted 1 day ago