10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
We are seeking an experienced PMT Technical Project Manager with a minimum of 10 years of experience in managing data projects. The ideal candidate will have a strong background in data management, MDM, Databricks, Azure, and lakehouse platforms, along with a sound understanding of data domains. As a PMT Project Manager, you will be responsible for overseeing and coordinating all aspects of data projects, including planning, budgeting, and execution. You will work closely with cross-functional teams to ensure the successful delivery of projects on time and within budget. The ideal candidate will have excellent leadership and communication skills, as well as a proven track record of successfully managing data projects. This is a great opportunity for a driven and experienced professional to join our team and make a significant impact in the data industry. The total work experience required for this position is 12 years, and the work mode is Work from Virtusa Office.
Posted 3 days ago
3.0 years
0 Lacs
Greater Madurai Area
On-site
Job Requirements

Role Description
As a BI Developer, you will be responsible for transforming raw data into actionable insights that drive business decisions. As part of the BI and Reporting team, you will work closely with the Data Operations team, Database Administrators, Data Business Partners, and business stakeholders to develop data analytics solutions, create interactive reports, and optimize BI workflows using SQL, Python, Databricks, Power BI, and Tableau. Your expertise in data modelling, visualization, and reporting will be crucial in shaping data-driven strategies.

Key Responsibilities
- Develop data models and interactive dashboards using Power BI, Tableau, and automated reporting to track key performance indicators (KPIs) relevant to business functions.
- Write complex SQL queries, leverage Python for data manipulation and predictive analytics, and optimize ETL processes for efficient data handling across multiple domains (see the sketch after this listing).
- Work with Databricks for large-scale data processing and implement AWS/Azure-based cloud solutions, ensuring scalability and performance of BI applications.
- Maintain data accuracy, consistency, and security across platforms, ensuring high-quality BI applications tailored to SCM, finance, sales, and marketing needs.
- Partner with business teams, communicate complex findings effectively to non-technical stakeholders, and drive a data-centric culture across departments.

Required Skills & Qualifications
Education: Bachelor's degree in Data Science, Computer Science, Business Analytics, or a related field.
Experience: 3+ years in BI, data analytics, or reporting roles.
Technical Expertise:
- SQL: strong proficiency in writing queries and optimizing databases.
- Python: experience in data manipulation and automation.
- Databricks: hands-on experience with cloud-based data processing.
- Visualization tools: Power BI, Tableau.
Soft Skills:
- Strong analytical thinking and problem-solving abilities.
- Excellent communication and stakeholder management skills.
- Ability to translate business needs into technical solutions.
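To make the SQL-plus-Python responsibility concrete, a minimal KPI sketch in Spark SQL on Databricks (all table, column, and schema names are hypothetical, not the employer's actual model):

```python
# Hypothetical KPI query of the kind a BI Developer here might write:
# month-over-month revenue change per business unit, via Spark SQL.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kpi-demo").getOrCreate()

kpis = spark.sql("""
    SELECT
        business_unit,
        order_month,
        SUM(revenue) AS revenue,
        SUM(revenue) - LAG(SUM(revenue)) OVER (
            PARTITION BY business_unit ORDER BY order_month
        ) AS mom_change
    FROM sales.orders            -- hypothetical table
    GROUP BY business_unit, order_month
""")

# The result would typically feed a Power BI or Tableau dataset.
kpis.write.mode("overwrite").saveAsTable("reporting.revenue_kpis")
```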
Posted 3 days ago
8.0 - 12.0 years
14 - 24 Lacs
Pune
Work from Office
Role & Responsibilities

Experience: 8-10 years in the Data and Analytics domain with expertise in the Microsoft data tech stack.
Leadership: experience in managing teams of 8-10 members.
Technical Skills:
- Expertise in tools such as Microsoft Fabric, Azure Synapse Analytics, Azure Data Factory, Power BI, SQL Server, and Azure Databricks.
- Strong understanding of data architecture, pipelines, and governance.
- Understanding of another data platform such as Snowflake, Google BigQuery, or Amazon Redshift is a plus and a good-to-have skill.
- Tech stack: dbt and Databricks or Snowflake; Microsoft BI (Power BI, Synapse, and Fabric).
Project Management: proficiency in project management methodologies (Agile, Scrum, or Waterfall).

Key Responsibilities
Project Delivery & Management:
- Take part in project delivery; help define the project plan and ensure delivery timelines are met.
- Maintain quality control and ensure client satisfaction at all stages.
Team Leadership & Mentorship:
- Lead, mentor, and manage a team of 5 to 8 professionals.
- Conduct performance evaluations and provide opportunities for skill enhancement.
- Foster a collaborative and high-performance work environment.
Client Engagement:
- Act as the primary point of contact on the technical front.
- Understand client needs and ensure expectations are met or exceeded.
- Conduct bi-weekly and monthly project reviews with the customer.
Technical Expertise & Innovation:
- Stay updated on the latest trends in Microsoft data technologies (Microsoft Fabric, Azure Synapse, Power BI, SQL Server, Azure Data Factory, etc.).
- Provide technical guidance and support to the team.

Regards,
Ruchita Shete
Busisol Sourcing Pvt. Ltd.
Tel No: 7738389588
Email id: ruchita@busisol.net
Posted 3 days ago
5.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Job Family: Data Science & Analysis (India)
Travel Required: None
Clearance Required: None

What You Will Do
- Design, develop, and maintain robust, scalable, and efficient data pipelines and ETL/ELT processes.
- Lead and execute data engineering projects from inception to completion, ensuring timely delivery and high quality.
- Build and optimize data architectures for operational and analytical purposes.
- Collaborate with cross-functional teams to gather and define data requirements.
- Implement data quality, data governance, and data security practices.
- Manage and optimize cloud-based data platforms (Azure/AWS).
- Develop and maintain Python/PySpark libraries for data ingestion, processing, and integration with both internal and external data sources.
- Design and optimize scalable data pipelines using Azure Data Factory and Spark (Databricks); a minimal sketch appears below.
- Work with stakeholders, including the Executive, Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs.
- Develop frameworks for data ingestion, transformation, and validation.
- Mentor junior data engineers and guide best practices in data engineering.
- Evaluate and integrate new technologies and tools to improve data infrastructure.
- Ensure compliance with data privacy regulations (HIPAA, etc.).
- Monitor performance and troubleshoot issues across the data ecosystem.
- Automate deployment of data pipelines using GitHub Actions / Azure DevOps.

What You Will Need
- Bachelor's or master's degree in Computer Science, Information Systems, Statistics, Math, Engineering, or a related discipline.
- Minimum 5+ years of solid hands-on experience in data engineering and cloud services.
- Extensive working experience with advanced SQL and a deep understanding of SQL.
- Good experience in Azure Data Factory (ADF), Databricks, Python, and PySpark.
- Good experience with modern data storage concepts: data lake, lakehouse.
- Experience with other cloud services (AWS) and data processing technologies is an added advantage.
- Ability to enhance, develop, and resolve defects in ETL processes using cloud services.
- Experience handling large volumes (multiple terabytes) of incoming data from clients and third-party sources in various formats such as text, CSV, EDI X12 files, and Access databases.
- Experience with software development methodologies (Agile, Waterfall) and version control tools.
- Highly motivated, strong problem solver, self-starter, and fast learner with demonstrated analytic and quantitative skills.
- Good communication skills.

What Would Be Nice To Have
- AWS ETL platform: Glue, S3.
- One or more programming languages such as Java or .NET.
- Experience in the US health care domain and insurance claim processing.

What We Offer
Guidehouse offers a comprehensive, total rewards package that includes competitive compensation and a flexible benefits package that reflects our commitment to creating a diverse and supportive workplace.

About Guidehouse
Guidehouse is an Equal Opportunity Employer - Protected Veterans, Individuals with Disabilities, or any other basis protected by law, ordinance, or regulation. Guidehouse will consider for employment qualified applicants with criminal histories in a manner consistent with the requirements of applicable law or ordinance, including the Fair Chance Ordinance of Los Angeles and San Francisco.
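A minimal sketch of the ingestion-and-validation work described above, as it might run in a Databricks job triggered from ADF (paths, schema, and table names are hypothetical):

```python
# Land raw client files, apply a basic data-quality gate, write to Delta.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

raw = (spark.read
       .option("header", True)
       .csv("abfss://landing@clientdata.dfs.core.windows.net/claims/*.csv"))

# Reject rows missing a claim id or carrying a negative/unparseable amount.
valid = raw.filter(F.col("claim_id").isNotNull() &
                   (F.col("amount").cast("double") >= 0))
rejected = raw.subtract(valid)

valid.withColumn("_ingested_at", F.current_timestamp()) \
     .write.format("delta").mode("append").saveAsTable("bronze.claims")
rejected.write.format("delta").mode("append").saveAsTable("quarantine.claims")
```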
If you have visited our website for information about employment opportunities, or to apply for a position, and you require an accommodation, please contact Guidehouse Recruiting at 1-571-633-1711 or via email at RecruitingAccommodation@guidehouse.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodation. All communication regarding recruitment for a Guidehouse position will be sent from Guidehouse email domains including @guidehouse.com or guidehouse@myworkday.com. Correspondence received by an applicant from any other domain should be considered unauthorized and will not be honored by Guidehouse. Note that Guidehouse will never charge a fee or require a money transfer at any stage of the recruitment process and does not collect fees from educational institutions for participation in a recruitment event. Never provide your banking information to a third party purporting to need that information to proceed in the hiring process. If any person or organization demands money related to a job opportunity with Guidehouse, please report the matter to Guidehouse’s Ethics Hotline. If you want to check the validity of correspondence you have received, please contact recruiting@guidehouse.com. Guidehouse is not responsible for losses incurred (monetary or otherwise) from an applicant’s dealings with unauthorized third parties. Guidehouse does not accept unsolicited resumes through or from search firms or staffing agencies. All unsolicited resumes will be considered the property of Guidehouse and Guidehouse will not be obligated to pay a placement fee.
Posted 3 days ago
0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Role Description
- Production monitoring and troubleshooting in on-premises ETL and AWS environments.
- Working experience using ETL (DataStage) along with DB2.
- Awareness of tools such as Dynatrace, AppDynamics, Postman, and AWS CI/CD.
- Software code development experience in ETL batch processing and the AWS cloud.
- Software code management, repository updates, and reuse.
- Implementation and/or configuration, management, and maintenance of software.
- Implementation and configuration of SaaS and public, private, and hybrid cloud-based PaaS solutions.
- Integration of SaaS and PaaS solutions with Data Warehouse Application Systems, including SaaS and PaaS upgrade management.
- Configuration, maintenance, and support for the entire DWA Application Systems landscape, including but not limited to supporting DWA Application Systems components and tasks required to deliver business processes and functionality (e.g., logical layers of databases, data marts, logical and physical data warehouses, middleware, interfaces, shell scripts, massive data transfers and uploads, web development, mobile app development, web services, and APIs).
- DWA Application Systems support for day-to-day changes and business continuity, and for addressing key business, regulatory, legal, or fiscal requirements.
- Support for all third-party specialized DWA Application Systems.
- DWA Application Systems configuration and collaboration with the infrastructure service supplier required to provide application access to external/third parties.
- Integration with internal and external systems (e.g., direct application interfaces, logical middleware configuration, and application program interface (API) use and development).
- Collaboration with third-party suppliers such as the infrastructure service supplier and enterprise public cloud providers.
- Documentation and end-user training for new functionality.
- All activities required to support business process application functionality and to deliver the required application and business functions to end users in an integrated service delivery model across the DWA application development lifecycle (e.g., plan, deliver, run).
- Maintain data quality and run batch schedules; operations and maintenance.
- Deploy code to all environments (Prod, UAT, Performance, SIT, etc.).
- Address all open tickets within the SLA.
- CDK (TypeScript), CFT (YAML).

Nice to have: GitHub, scripting (Bash/sh), a security-minded approach with knowledge of best practices, Python, Databricks & Snowflake.

Skills: Databricks, DataStage, CloudOps, production support
Posted 3 days ago
10.0 years
20 - 25 Lacs
Gurugram, Haryana, India
On-site
- 7-10 years of data engineering experience, with 5+ years on Databricks and Apache Spark.
- Expert-level hands-on experience with Databricks and AWS (S3, Glue, EMR, Kinesis, Lambda, IAM, CloudWatch).
- Primary language: Python; strong skills in Spark SQL.
- Deep understanding of Lakehouse architecture, Delta Lake, Parquet, and Iceberg.
- Strong experience with Databricks Workflows, Unity Catalog, runtime upgrades, and cost optimization (see the sketch after this listing).
- Experience with Databricks-native monitoring tools and Datadog integration.
- Security and compliance expertise across data governance and infrastructure layers.
- Experience with CI/CD automation using Terraform, CloudFormation, and Git.
- Hands-on experience with disaster recovery and multi-region architecture.
- Strong problem-solving, debugging, and documentation skills.

Skills: S3, pipeline engineering, Lakehouse architecture, AWS, Lambda, IAM, Parquet, Delta Lake, Datadog, Spark SQL, CI/CD automation, runtime upgrades, Terraform, Kinesis, Glue, platform governance, Python, CloudWatch, Databricks, observability, Git, Iceberg, Apache Spark, Databricks Workflows, CloudFormation, Unity Catalog, disaster recovery, EMR
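An illustrative fragment of the Lakehouse housekeeping this role covers, combining a partitioned Delta write with Databricks' OPTIMIZE/VACUUM maintenance commands (the S3 path, table, and columns are hypothetical):

```python
# Write a partitioned Delta table, then compact it to cut query cost.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

events = spark.read.json("s3://raw-bucket/events/")   # hypothetical source
(events.write.format("delta")
       .partitionBy("event_date")
       .mode("append")
       .saveAsTable("lakehouse.events"))

# Databricks-specific maintenance: compact small files, co-locate a hot
# filter column, and trim stale snapshots to a 7-day retention window.
spark.sql("OPTIMIZE lakehouse.events ZORDER BY (customer_id)")
spark.sql("VACUUM lakehouse.events RETAIN 168 HOURS")
```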
Posted 3 days ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Greetings from TCS!

TCS is hiring for Databricks / Python + PySpark.

Required Skill Set: Python, SQL, PySpark
Desired Experience Range: 5 to 8 years
Job Location: Gurgaon

Must-Have
- Actively engage in the development, testing, deployment, operation, monitoring, and refinement of data services.
- Manage incidents/problems, apply fixes, and resolve systematic issues; triage issues with stakeholders and identify and implement solutions to restore productivity.
- Design, build, and implementation experience with data engineering pipelines using SQL, Python, and Databricks (or Snowflake); a minimal sketch follows this listing.
- Experience with data solutions in the cloud (preferably AWS) as well as on-premises assets like Oracle.
- Experience with PySpark is desirable.
- Good experience with stored procedures.

HR Recruitment
Anshika Varma
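As a rough sketch of the cloud-plus-on-premises pattern the posting mentions, a JDBC pull from Oracle into a Delta staging table (every connection detail below is a placeholder, and dbutils is available only inside Databricks notebooks):

```python
# Pull an on-premises Oracle table into Delta via JDBC.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

orders = (spark.read.format("jdbc")
          .option("url", "jdbc:oracle:thin:@//onprem-db:1521/ORCL")
          .option("dbtable", "SALES.ORDERS")
          .option("user", "etl_user")
          # Credential comes from a Databricks secret scope, never hard-coded.
          .option("password", dbutils.secrets.get("etl", "oracle-pw"))
          .option("fetchsize", 10000)   # larger fetch batches for bulk reads
          .load())

orders.write.format("delta").mode("overwrite").saveAsTable("staging.orders")
```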
Posted 3 days ago
3.0 - 6.0 years
25 - 32 Lacs
Hyderabad
Work from Office
Senior Data Scientist - Gen AI

Experience: 3-6 years
Salary: INR 25-33 Lacs per annum
Preferred Notice Period: within 30 days
Shift: 10:00 AM to 7:00 PM IST
Opportunity Type: Onsite (Hyderabad)
Placement Type: Permanent
(*Note: This is a requirement for one of Uplers' clients.)

Must-have skills: Gen AI
Good-to-have skills: MDM platforms

Blend360 (one of Uplers' clients) is looking for a Senior Data Scientist - Gen AI who is passionate about their work, eager to learn and grow, and committed to delivering exceptional results. If you are a team player with a positive attitude and a desire to make a difference, then we want to hear from you.

Role Overview
We are hiring a Senior Data Scientist (Generative AI) to spearhead the development of advanced AI-powered classification and matching systems on Databricks. You will contribute to flagship programs like the Diageo AI POC by building RAG pipelines, deploying agentic AI workflows, and scaling LLM-based solutions for high-precision entity matching and MDM modernization. (A rough sketch of such a matching pipeline follows this listing.)

Key Responsibilities
- Design and implement end-to-end AI pipelines for product classification, fuzzy matching, and deduplication using LLMs, RAG, and Databricks-native workflows.
- Develop scalable, reproducible AI solutions within Databricks notebooks and job clusters, leveraging Delta Lake, MLflow, and Unity Catalog.
- Engineer Retrieval-Augmented Generation (RAG) workflows using vector search and integrate them with Python-based matching logic.
- Build agent-based automation pipelines (rule-driven + GenAI agents) for anomaly detection, compliance validation, and harmonization logic.
- Implement explainability, audit trails, and governance-first AI workflows aligned with enterprise-grade MDM needs.
- Collaborate with data engineers, BI teams, and product owners to integrate GenAI outputs into downstream systems.
- Contribute to modular system design and documentation for long-term scalability and maintainability.

Qualifications
- Bachelor's/Master's in Computer Science, Artificial Intelligence, or a related field.
- 5+ years of overall Data Science experience, with 2+ years in Generative AI / LLM-based applications.
- Deep experience with the Databricks ecosystem: Delta Lake, MLflow, DBFS, Databricks Jobs & Workflows.
- Strong Python and PySpark skills with the ability to build scalable data pipelines and AI workflows in Databricks.
- Experience with LLMs (e.g., OpenAI, LLaMA, Mistral) and frameworks like LangChain or LlamaIndex.
- Working knowledge of vector databases (e.g., FAISS, Chroma) and prompt engineering for classification/retrieval.
- Exposure to MDM platforms (e.g., Stibo STEP) and familiarity with data harmonization challenges.
- Experience with explainability frameworks (e.g., SHAP, LIME) and AI audit tooling.

Preferred Skills
- Knowledge of agentic AI architectures and multi-agent orchestration.
- Familiarity with Azure Data Hub and enterprise data ingestion frameworks.
- Understanding of data governance, lineage, and regulatory compliance in AI systems.

Interview Process
- Online assessment
- Technical screenings (2)
- Technical interviews (2)
- Project review
- Client interview

How to apply for this opportunity (easy 3-step process):
1. Click on Apply and register or log in on our portal.
2. Upload your updated resume and complete the screening form.
3. Increase your chances of getting shortlisted and meet the client for the interview!

About Our Client
Our vision is to build a company of world-class people that helps our clients optimize business performance through data, technology, and analytics.
The company has two divisions:
- Data Science Solutions: we work at the intersection of data, technology, and analytics.
- Talent Solutions: we live and breathe the digital and talent marketplace.

About Uplers
Uplers is the #1 hiring platform for SaaS companies, designed to help you hire top product and engineering talent quickly and efficiently. Our end-to-end AI-powered platform combines artificial intelligence with human expertise to connect you with the best engineering talent from India. With over 1M deeply vetted professionals, Uplers streamlines the hiring process, reducing lengthy screening times and ensuring you find the perfect fit. Companies like GitLab, Twilio, TripAdvisor, and Airbnb trust Uplers to scale their tech and digital teams effectively and cost-efficiently. Experience a simpler, faster, and more reliable hiring process with Uplers today.
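A rough sketch of the RAG-style fuzzy matching described in the listing above, using FAISS (named in the qualifications) with an off-the-shelf sentence encoder; the catalog rows, model choice, and index size are illustrative assumptions, not the client's actual pipeline:

```python
# Retrieve nearest catalog entries for a noisy product description; the
# candidates would then go to an LLM prompt that decides match/no-match
# and explains its reasoning, providing the audit trail the role asks for.
import numpy as np
import faiss
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")   # any sentence encoder works

catalog = ["Johnnie Walker Black Label 70cl", "Guinness Draught 440ml can"]
index = faiss.IndexFlatIP(384)                    # inner product; 384 = MiniLM dim
vecs = model.encode(catalog, normalize_embeddings=True)
index.add(np.asarray(vecs, dtype="float32"))

query = model.encode(["JW Black 0.7L bottle"], normalize_embeddings=True)
scores, ids = index.search(np.asarray(query, dtype="float32"), k=1)
print(catalog[ids[0][0]], scores[0][0])           # best candidate + similarity
```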
Posted 3 days ago
0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we're helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.

Title and Summary: Senior Data Engineer - Product Data & Analytics Team

Overview
The Product Data & Analytics team builds internal analytic partnerships, strengthening focus on the health of the business, portfolio and revenue optimization opportunities, initiative tracking, new product development and go-to-market strategies. Are you excited about data assets and the value they bring to an organization? Are you an evangelist for data-driven decision making? Are you motivated to be part of a global analytics team that builds large-scale analytical capabilities supporting end users across six continents? Do you want to be the go-to resource for data analytics in the company? The ideal candidate has a knack for seeing solutions in sprawling data sets and the business mindset to convert insights into strategic opportunities for our company.

Role & Responsibilities
- Work closely with global and regional teams to architect, develop, and maintain data engineering, advanced reporting, and data visualization capabilities on large volumes of data to support analytics and reporting needs across products, markets, and services.
- Obtain data from multiple sources; collate, analyze, and triangulate information to develop reliable fact bases. Effectively use tools to manipulate large-scale databases, synthesizing data insights.
- Execute cross-functional projects using advanced modeling and analysis techniques to discover insights that will guide strategic decisions and uncover optimization opportunities.
- Build, develop, and maintain data models, reporting systems, dashboards, and performance metrics that support key business decisions.
- Extract intellectual capital from engagement work and actively share tools, methods, and best practices across projects.
- Provide first-level insights/conclusions/assessments and present findings via Tableau/Power BI dashboards, Excel, and PowerPoint.
- Apply quality control, data validation, and cleansing processes to new and existing data sources.
- Lead, mentor, and guide more junior team members.
- Communicate results and business impacts of insight initiatives to stakeholders in leadership, technology, sales, marketing, and product teams.

Bring your passion and expertise.

All About You
- Experience in data management, data mining, data analytics, data reporting, data product development, and quantitative analysis.
- Financial institution or payments experience a plus.
- Experience presenting data findings in a readable and insight-driven format; experience building support decks.
- Advanced SQL skills; ability to write optimized queries for large data sets (big data). A sample pattern follows this listing.
- Experience with platforms/environments: Cloudera Hadoop, the big data technology stack, SQL Server, Microsoft BI stack.
- Experience with data visualization tools such as Looker, Tableau, Power BI.
- Experience with Python, R, Databricks a plus.
- Experience with SQL Server Integration Services (SSIS), SQL Server Analysis Services (SSAS), and SQL Server Reporting Services (SSRS) is an added advantage.
- Excellent problem-solving, quantitative, and analytical skills.
- In-depth technical knowledge, drive, and ability to learn new technologies.
- Strong attention to detail and quality.
- Team player with excellent communication skills.
- Must be able to interact with management and internal stakeholders and collect requirements.
- Must be able to perform in a team, use judgment, and operate under ambiguity.

Education
Bachelor's or master's degree in Computer Science, Information Technology, Engineering, Mathematics, or Statistics.

Additional Competencies
- Excellent English, quantitative, technical, and communication (oral/written) skills
- Analytical/problem solving
- Strong attention to detail and quality
- Creativity/innovation
- Self-motivated; operates with a sense of urgency
- Project management/risk mitigation
- Able to prioritize and perform multiple tasks simultaneously

Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization; therefore, it is expected that every person working for, or on behalf of, Mastercard is responsible for information security and must:
- Abide by Mastercard's security policies and practices;
- Ensure the confidentiality and integrity of the information being accessed;
- Report any suspected information security violation or breach; and
- Complete all periodic mandatory security trainings in accordance with Mastercard's guidelines.

R-245986
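As an illustration of the "optimized queries for large data sets" requirement, a common Spark SQL pattern: filter on the partition column first so the engine prunes partitions, then aggregate (the table, columns, and partition scheme are hypothetical):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

top_merchants = spark.sql("""
    SELECT merchant_id,
           COUNT(*)    AS txn_count,
           SUM(amount) AS volume
    FROM payments.transactions
    WHERE txn_date >= DATE '2024-01-01'   -- partition column: enables pruning
    GROUP BY merchant_id
    ORDER BY volume DESC
    LIMIT 100
""")
top_merchants.show()
```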
Posted 3 days ago
7.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Experience: 7+ years
Salary: Confidential (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Hybrid (Ahmedabad)
Placement Type: Full-time permanent position
(*Note: This is a requirement for one of Uplers' clients - Inferenz.)

What do you need for this opportunity?
Must-have skills: Databricks, SQL, Python, ETL tools, data modelling, data warehousing

Inferenz is looking for:
Position: Lead Data Engineer (Databricks)
Location: Ahmedabad, Pune
Required Experience: 7 to 10 years
Immediate joiners preferred

Key Responsibilities:
- Lead the design, development, and optimization of data solutions using Databricks, ensuring they are scalable, efficient, and secure.
- Collaborate with cross-functional teams to gather and analyse data requirements, translating them into robust data architectures and solutions.
- Develop and maintain ETL pipelines, leveraging Databricks and integrating with Azure Data Factory as needed (a minimal upsert sketch follows this listing).
- Implement machine learning models and advanced analytics solutions, incorporating Generative AI to drive innovation.
- Ensure data quality, governance, and security practices are adhered to, maintaining the integrity and reliability of data solutions.
- Provide technical leadership and mentorship to junior engineers, fostering an environment of learning and growth.
- Stay updated on the latest trends and advancements in data engineering, Databricks, Generative AI, and Azure Data Factory to continually enhance team capabilities.

Required Skills & Qualifications:
- Bachelor's or master's degree in Computer Science, Information Technology, or a related field.
- 7 to 10 years of experience in data engineering, with a focus on Databricks.
- Proven expertise in building and optimizing data solutions using Databricks and integrating with Azure Data Factory/AWS Glue.
- Proficiency in SQL and programming languages such as Python or Scala.
- Strong understanding of data modelling, ETL processes, and Data Warehousing/Data Lakehouse concepts.
- Familiarity with cloud platforms, particularly Azure, and containerization technologies such as Docker.
- Excellent analytical, problem-solving, and communication skills.
- Demonstrated leadership ability with experience mentoring and guiding junior team members.

Preferred Qualifications:
- Experience with Generative AI technologies and their applications.
- Familiarity with other cloud platforms, such as AWS or GCP.
- Knowledge of data governance frameworks and tools.

How to apply for this opportunity (easy 3-step process):
1. Click on Apply and register or log in on our portal.
2. Complete the screening form and upload your updated resume.
3. Increase your chances of getting shortlisted and meet the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
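A minimal sketch of the incremental-upsert pattern at the heart of the Databricks ETL work this role leads (Delta MERGE; the table and key names are hypothetical):

```python
# Merge a daily staging extract into a silver table: update existing
# customers, insert new ones.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

updates = spark.table("staging.customers_daily")
target = DeltaTable.forName(spark, "silver.customers")

(target.alias("t")
 .merge(updates.alias("s"), "t.customer_id = s.customer_id")
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())
```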
Posted 3 days ago
5.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
We're hiring: Databricks Administrator (5+ years' experience) | Contractual | PAN India | Hybrid work model

What we're looking for:
- Minimum 5 years of experience in Databricks administration
- Strong background in cluster management, access control, and job monitoring
- Proficiency in scripting (Python, Bash, or PowerShell)
- Experience with AWS or Azure
- Prior work experience in a Tier 1 company is a must

This is a contract-to-hire opportunity offering flexibility, challenging projects, and a chance to work with some of the best minds in data.
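To make the admin scripting concrete, a hedged sketch using the Databricks REST API's clusters/list endpoint to flag long-running all-purpose clusters (the workspace URL, token handling, and 12-hour threshold are assumptions):

```python
import time
import requests

HOST = "https://<workspace>.cloud.databricks.com"   # placeholder workspace URL
TOKEN = "dapi..."                                    # would come from a secret store

resp = requests.get(f"{HOST}/api/2.0/clusters/list",
                    headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()

for c in resp.json().get("clusters", []):
    # start_time is epoch milliseconds; convert uptime to hours.
    uptime_h = (time.time() * 1000 - c.get("start_time", 0)) / 3.6e6
    if c.get("state") == "RUNNING" and uptime_h > 12:
        print(f"{c['cluster_name']} up {uptime_h:.0f}h - review auto-termination")
```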
Posted 3 days ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Flutter Entertainment
Flutter Entertainment is the world's largest sports betting and iGaming operator, with 13.9 million average monthly players worldwide and annual revenue of $14Bn in 2024. We have a portfolio of iconic brands, including Paddy Power, Betfair, FanDuel, PokerStars, Junglee Games and Sportsbet. Flutter Entertainment is listed on both the New York Stock Exchange (NYSE) and the London Stock Exchange (LSE). In 2024, we were recognized in TIME's 100 Most Influential Companies under the 'Pioneers' category, a testament to our innovation and impact. Our ambition is to transform global gaming and betting to deliver long-term growth and a positive, sustainable future for our sector. Together, we are Changing the Game! Working at Flutter is a chance to work with a growing portfolio of brands across a range of opportunities. We will support you every step of the way to help you grow. Just like our brands, we ensure our people have everything they need to succeed.

Flutter Entertainment India
Our Hyderabad office, located in one of India's premier technology parks, is the Global Capability Center for Flutter Entertainment. A center of expertise and innovation, this hub is now home to over 900 talented colleagues working across Customer Service Operations, Data and Technology, Finance Operations, HR Operations, Procurement Operations, and other key enabling functions. We are committed to crafting impactful solutions for all our brands and divisions to power Flutter's incredible growth and global impact. With the scale of a leader and the mindset of a challenger, we're dedicated to creating a brighter future for our customers, colleagues, and communities.

Overview of the Role
We are looking for a Data Engineer with 3 to 5 years of experience to help design, build, and maintain the next-generation data platform for our Sisal team. This role will leverage modern cloud technologies, infrastructure as code (IaC), and advanced data processing techniques to drive business value from our data assets. You will collaborate with cross-functional teams to ensure data availability, quality, and reliability while applying expertise in Databricks on AWS, Python, CI/CD, and Agile methodologies to deliver scalable and efficient solutions.

Key Responsibilities
- Design and implement scalable ETL processes and data pipelines that integrate with diverse data sources.
- Build streaming and batch data processing solutions using Databricks on AWS (see the streaming sketch below).
- Develop and optimize Lakehouse architectures; work with big data access patterns to process large-scale datasets efficiently.
- Drive automation and efficiency using CI/CD pipelines, IaC, and DevOps practices.
- Improve database performance, implement best practices for data governance, and enhance data security.

Required Skills
- 3 to 5 years of experience in data engineering and ETL pipeline development.
- Hands-on experience with Databricks on AWS.
- Proven experience designing and implementing scalable data warehousing solutions.
- Expertise in AWS data services, particularly DynamoDB, Glue, Athena, EMR, Redshift, Lambda, and Kinesis.
- Strong programming skills in Python (PySpark/Spark SQL experience preferred) and Java.

Desirable / Preferred Skills
- Knowledge of streaming data processing (e.g., Kafka, Kinesis, Spark Streaming).
- Experience with CI/CD tools and automation (Git, Jenkins, Ansible, shell scripting, unit/integration testing).
- Familiarity with Agile methodologies and DevOps best practices.
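For flavour, a minimal Structured Streaming sketch of the streaming-into-Delta pattern this role names (Kafka is listed among the desirable skills; the broker, topic, schema, and paths are placeholders, not Flutter's actual stack):

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Read a Kafka topic and parse the JSON payload into typed columns.
bets = (spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")
        .option("subscribe", "bet-events")
        .load()
        .select(F.from_json(F.col("value").cast("string"),
                            "bet_id STRING, stake DOUBLE, placed_at TIMESTAMP")
                 .alias("e"))
        .select("e.*"))

# Append to a bronze Delta table with exactly-once checkpointing.
(bets.writeStream
     .format("delta")
     .option("checkpointLocation", "s3://checkpoints/bet-events/")
     .trigger(processingTime="1 minute")
     .toTable("bronze.bet_events"))
```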
Benefits We Offer
- Access to Learnerbly, Udemy, and a Self-Development Fund for upskilling.
- Career growth through Internal Mobility Programs.
- Comprehensive health insurance for you and your dependents.
- Well-Being Fund and 24/7 Assistance Program for holistic wellness.
- Hybrid model: 2 office days/week with flexible leave policies, including maternity, paternity, and sabbaticals.
- Free meals, cab allowance, and a home-office setup allowance.
- Employer PF contribution, gratuity, and Personal Accident & Life Insurance.
- Sharesave Plan to purchase discounted company shares.
- Volunteering leave and team events to build connections.
- Recognition through the Kudos Platform and Referral Rewards.

Why Choose Us
Flutter is an equal-opportunity employer and values the unique perspectives and experiences that everyone brings. Our message to colleagues and stakeholders is clear: everyone is welcome, and every voice matters. We have ambitious growth plans and goals for the future. Here's an opportunity for you to play a pivotal role in shaping the future of Flutter Entertainment India.
Posted 3 days ago
40.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Amgen
Amgen harnesses the best of biology and technology to fight the world's toughest diseases and make people's lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what's known today.

About the Role
Role Description: We are seeking a seasoned Engineering Manager (Data Engineering) to lead the end-to-end management of enterprise data assets and operational data workflows. This role is critical in ensuring the availability, quality, consistency, and timeliness of data across platforms and functions, supporting analytics, reporting, compliance, and digital transformation initiatives. You will be responsible for day-to-day data operations, manage a team of data professionals, and drive process excellence in data intake, transformation, validation, and delivery. You will work closely with cross-functional teams including data engineering, analytics, IT, governance, and business stakeholders to align operational data capabilities with enterprise needs.

Roles & Responsibilities:
- Lead and manage the enterprise data operations team, responsible for data ingestion, processing, validation, quality control, and publishing to various downstream systems.
- Define and implement standard operating procedures for data lifecycle management, ensuring accuracy, completeness, and integrity of critical data assets.
- Oversee and continuously improve daily operational workflows, including scheduling, monitoring, and troubleshooting data jobs across cloud and on-premise environments.
- Establish and track key data operations metrics (SLAs, throughput, latency, data quality, incident resolution) and drive continuous improvements (a small monitoring sketch appears below).
- Partner with data engineering and platform teams to optimize pipelines, support new data integrations, and ensure scalability and resilience of operational data flows.
- Collaborate with data governance, compliance, and security teams to maintain regulatory compliance, data privacy, and access controls.
- Serve as the primary escalation point for data incidents and outages, ensuring rapid response and root cause analysis.
- Build strong relationships with business and analytics teams to understand data consumption patterns, prioritize operational needs, and align with business objectives.
- Drive adoption of best practices for documentation, metadata, lineage, and change management across data operations processes.
- Mentor and develop a high-performing team of data operations analysts and leads.

Functional Skills:
Must-Have Skills:
- Experience managing a team of data engineers in biotech/pharma domain companies.
- Experience designing and maintaining data pipelines and analytics solutions that extract, transform, and load data from multiple source systems.
- Demonstrated hands-on experience with cloud platforms (AWS) and the ability to architect cost-effective and scalable data solutions.
- Experience managing data workflows in cloud environments such as AWS, Azure, or GCP.
- Strong problem-solving skills with the ability to analyze complex data flow issues and implement sustainable solutions.
- Working knowledge of SQL, Python, or scripting languages for process monitoring and automation.
- Experience collaborating with data engineering, analytics, IT operations, and business teams in a matrixed organization.
- Familiarity with data governance, metadata management, access control, and regulatory requirements (e.g., GDPR, HIPAA, SOX).
- Excellent leadership, communication, and stakeholder engagement skills.
- Well versed in full-stack development and DataOps automation, logging frameworks, and pipeline orchestration tools.
- Strong analytical and problem-solving skills to address complex data challenges.
- Effective communication and interpersonal skills to collaborate with cross-functional teams.

Good-to-Have Skills:
- Data engineering management experience in biotech/life sciences/pharma.
- Experience using graph databases such as Stardog, MarkLogic, Neo4j, or AllegroGraph.

Education and Professional Certifications
- Any degree and 9-13 years of experience.
- AWS Certified Data Engineer preferred.
- Databricks certification preferred.
- Scaled Agile (SAFe) certification preferred.

Soft Skills:
- Excellent analytical and troubleshooting skills.
- Strong verbal and written communication skills.
- Ability to work effectively with global, virtual teams.
- High degree of initiative and self-motivation.
- Ability to manage multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
- Strong presentation and public speaking skills.

Equal Opportunity Statement
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
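To ground the "process monitoring and automation" requirement, a small sketch of an SLA check such a team might script (purely illustrative; the audit table, its columns, and the 90-minute threshold are assumptions):

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

runs = spark.table("ops.pipeline_runs")   # hypothetical run-audit table

# Flag yesterday's runs that exceeded the assumed 90-minute SLA.
late = runs.filter(
    (F.col("run_date") == F.date_sub(F.current_date(), 1)) &
    (F.col("duration_minutes") > 90)
)

if late.count() > 0:
    late.select("pipeline", "duration_minutes").show()
    # In production this would page the on-call channel rather than print.
```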
Posted 3 days ago
0 years
0 Lacs
India
On-site
Visionqual IT Services, a Hyderabad-based IT services company, is looking for resources with 4+ years of experience for an SAP Databricks Consultant role.
1. Proficiency in Databricks
2. Understanding of Medallion Architecture (see the sketch below)
3. Understanding of the AWS environment
4. Good to have: SAP Datasphere skills
5. Proficiency in SQL for data manipulation and optimization
6. Strong understanding of data warehouse concepts and dimensional modeling
7. Advanced knowledge of DAX, M language, and Power Query for sophisticated data modeling
8. Strong expertise in semantic modeling principles and best practices
9. Extensive experience with custom visualizations and complex dashboard design
10. Good to have: Python programming skills
Interested resources who can join immediately may share their profiles with sateesh.varma@visionqual.com and info@visionqual.com.
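Point 2's medallion architecture, sketched in PySpark on Databricks: bronze keeps the raw extract as landed, silver holds cleansed and typed data, gold serves the reporting layer (the SAP extract path, table names, and business rules are all placeholders):

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: persist the raw extract untouched, plus a lineage column.
raw = spark.read.parquet("s3://landing/sap/acdoca/")   # hypothetical GL extract
raw.withColumn("_loaded_at", F.current_timestamp()) \
   .write.format("delta").mode("append").saveAsTable("bronze.acdoca")

# Silver: typed, deduplicated, conformed.
silver = (spark.table("bronze.acdoca")
          .dropDuplicates(["document_no", "line_item"])
          .withColumn("amount", F.col("amount").cast("decimal(18,2)")))
silver.write.format("delta").mode("overwrite").saveAsTable("silver.gl_lines")

# Gold: aggregated for the semantic model Power BI consumes.
(silver.groupBy("company_code", "fiscal_period")
       .agg(F.sum("amount").alias("net_amount"))
       .write.format("delta").mode("overwrite").saveAsTable("gold.gl_summary"))
```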
Posted 3 days ago
15.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Description
- Lead the design, development, and implementation of scalable data pipelines and ELT processes using Databricks, DLT, dbt, Airflow, and other tools (a minimal orchestration sketch follows this listing).
- Collaborate with stakeholders to understand data requirements and deliver high-quality data solutions.
- Optimize and maintain existing data pipelines to ensure data quality, reliability, and performance.
- Develop and enforce data engineering best practices, including coding standards, testing, and documentation.
- Mentor junior data engineers, providing technical leadership and fostering a culture of continuous learning and improvement.
- Monitor and troubleshoot data pipeline issues, ensuring timely resolution and minimal disruption to business operations.
- Stay up to date with the latest industry trends and technologies, and proactively recommend improvements to our data engineering practices.

Qualifications
- Degree in Management Information Systems (MIS), Data Science, or a related field.
- 15 years of experience in data engineering and/or architecture, with a focus on big data technologies.
- Extensive production experience with Databricks, Apache Spark, and other related technologies.
- Familiarity with orchestration and ELT tools like Airflow, dbt, etc.
- Expert SQL knowledge.
- Proficiency in programming languages such as Python, Scala, or Java.
- Strong understanding of data warehousing concepts.
- Experience with cloud platforms such as Azure, AWS, or Google Cloud.
- Excellent problem-solving skills and the ability to work in a fast-paced, collaborative environment.
- Strong communication and leadership skills, with the ability to effectively mentor and guide junior engineers.
- Experience with machine learning and data science workflows.
- Knowledge of data governance and security best practices.
- Certification in Databricks, Azure, Google Cloud, or related technologies.

Job: Information Technology
Primary Location: India-Maharashtra-Mumbai
Schedule: Full-time
Travel: No
Req ID: 250903
Job Hire Type: Experienced
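A minimal sketch of the orchestration stack named above, with Airflow triggering a Databricks job (the operator comes from the apache-airflow-providers-databricks package; the job id, connection id, and schedule are placeholders):

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.databricks.operators.databricks import (
    DatabricksRunNowOperator,
)

with DAG(
    dag_id="nightly_elt",
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",   # 02:00 daily; older Airflow uses schedule_interval
    catchup=False,
) as dag:
    run_transform = DatabricksRunNowOperator(
        task_id="run_transformations",
        databricks_conn_id="databricks_default",
        job_id=1234,        # hypothetical Databricks job wrapping the ELT run
    )
```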
Posted 3 days ago
10.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Title: Databricks & AWS Lakehouse Engineer
Budget: Max 25 LPA
Location: Gurugram
Client: HCL

Strong hands-on skills in Python (primary), Spark SQL, pipeline engineering, CI/CD automation, observability, and platform governance.

- 7-10 years of data engineering experience, with 5+ years on Databricks and Apache Spark.
- Expert-level hands-on experience with Databricks and AWS (S3, Glue, EMR, Kinesis, Lambda, IAM, CloudWatch).
- Primary language: Python; strong skills in Spark SQL.
- Deep understanding of Lakehouse architecture, Delta Lake, Parquet, and Iceberg.
- Strong experience with Databricks Workflows, Unity Catalog, runtime upgrades, and cost optimization.
- Experience with Databricks-native monitoring tools and Datadog integration.
- Security and compliance expertise across data governance and infrastructure layers.
- Experience with CI/CD automation using Terraform, CloudFormation, and Git.
- Hands-on experience with disaster recovery and multi-region architecture.
- Strong problem-solving, debugging, and documentation skills.
Posted 3 days ago
7.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Lead Data Engineer (Databricks)

Experience: 7-10 years
Salary: Competitive
Preferred Notice Period: within 30 days
Opportunity Type: Hybrid (Ahmedabad)
Placement Type: Permanent
(*Note: This is a requirement for one of Uplers' clients.)

Must-have skills: Databricks, SQL or Python, ETL tools or data modelling or data warehousing

Inferenz (one of Uplers' clients) is looking for:

About Inferenz: At Inferenz, our team of innovative technologists and domain experts helps accelerate business growth through digital enablement, navigating industries with data, cloud, and AI services and solutions. We dedicate our resources to increasing efficiency and gaining a greater competitive advantage by leveraging various next-generation technologies. Our technology expertise has helped us deliver innovative solutions in key industries such as Healthcare & Life Sciences, Consumer & Retail, Financial Services, and emerging industries.

Our main capabilities and solutions: data strategy & architecture, data & cloud migration, data quality & governance, data engineering, predictive analytics, machine learning/artificial intelligence, and generative AI.

Specialties: Data and Cloud Strategy, Data Modernization, On-Premise to Cloud Migration, SQL to Snowflake Migration, Hadoop to Snowflake Migration, Cloud Data Platforms and Warehouses, Data Engineering and Pipelines, Data Virtualization, Business Intelligence, Data Democratization, Marketing Analytics, Attribution Modelling, Machine Learning, Computer Vision, Natural Language Processing, and Augmented Reality.

Job Description
Key Responsibilities:
- Lead the design, development, and optimization of data solutions using Databricks, ensuring they are scalable, efficient, and secure.
- Collaborate with cross-functional teams to gather and analyse data requirements, translating them into robust data architectures and solutions.
- Develop and maintain ETL pipelines, leveraging Databricks and integrating with Azure Data Factory as needed.
- Implement machine learning models and advanced analytics solutions, incorporating Generative AI to drive innovation.
- Ensure data quality, governance, and security practices are adhered to, maintaining the integrity and reliability of data solutions.
- Provide technical leadership and mentorship to junior engineers, fostering an environment of learning and growth.
- Stay updated on the latest trends and advancements in data engineering, Databricks, Generative AI, and Azure Data Factory to continually enhance team capabilities.

Required Skills & Qualifications:
- Bachelor's or master's degree in Computer Science, Information Technology, or a related field.
- 7 to 10 years of experience in data engineering, with a focus on Databricks.
- Proven expertise in building and optimizing data solutions using Databricks and integrating with Azure Data Factory/AWS Glue.
- Proficiency in SQL and programming languages such as Python or Scala.
- Strong understanding of data modelling, ETL processes, and Data Warehousing/Data Lakehouse concepts.
- Familiarity with cloud platforms, particularly Azure, and containerization technologies such as Docker.
- Excellent analytical, problem-solving, and communication skills.
- Demonstrated leadership ability with experience mentoring and guiding junior team members.

Preferred Qualifications:
- Experience with Generative AI technologies and their applications.
- Familiarity with other cloud platforms, such as AWS or GCP.
- Knowledge of data governance frameworks and tools.

How to apply for this opportunity (easy 3-step process):
1. Click on Apply and register or log in to our portal.
2. Upload your updated resume and complete the screening form.
3. Increase your chances of getting shortlisted and meet the client for the interview!

About Uplers:
Our goal is to make hiring and getting hired reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant product and engineering job opportunities and progress in their career. (Note: There are many more opportunities apart from this on the portal.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 3 days ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Location: India
Job Type: Full-time
Experience Level: Mid-Level/Senior

Must-have skills: strong proficiency in PySpark, Python, SQL, and Azure Data Factory.
Good-to-have skills: working knowledge of Azure Synapse Analytics, Azure Functions, Logic Apps workflows, Log Analytics, and Azure DevOps.

Job Summary
We are looking for a highly skilled Azure Data Engineer / Databricks Developer to join our data and analytics team. The ideal candidate will have deep expertise in building robust, scalable, and efficient data solutions using Azure cloud services and Apache Spark on Databricks. You will be instrumental in developing end-to-end data pipelines that support advanced analytics and business intelligence initiatives.

Key Responsibilities
- Design and implement scalable data pipelines using Databricks, Azure Data Factory, Azure SQL, and other Azure services.
- Write efficient PySpark / Spark SQL code for data transformation, cleansing, and enrichment (a small sketch follows this listing).
- Implement data ingestion from various sources, including structured, semi-structured, and unstructured data.
- Optimize data processing workflows for performance, cost, and reliability.
- Collaborate with data analysts and stakeholders to understand data needs and deliver high-quality datasets.
- Ensure data governance, security, and compliance using Azure-native tools.
- Participate in code reviews, documentation, and deployment of data solutions using DevOps practices.
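A small illustrative PySpark cleansing step of the kind described under "data transformation, cleansing, and enrichment", meant to run as a Databricks activity inside an ADF pipeline (the schema and rules are assumptions):

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

customers = spark.table("raw.customers")   # hypothetical source table

cleansed = (customers
            # Normalise emails and drop rows that fail a simple shape check.
            .withColumn("email", F.lower(F.trim(F.col("email"))))
            .filter(F.col("email").rlike(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"))
            # Enrich: default missing countries rather than dropping rows.
            .withColumn("country", F.coalesce(F.col("country"), F.lit("UNKNOWN")))
            .dropDuplicates(["customer_id"]))

cleansed.write.format("delta").mode("overwrite").saveAsTable("curated.customers")
```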
Posted 3 days ago
0 years
0 Lacs
Hyderabad, Telangana, India
Remote
We're looking for a Senior Data Engineer to design and build scalable data solutions using Azure Data Services, Power BI, and modern data engineering best practices. You'll work across teams to create efficient data pipelines, optimise databases, and enable smart, data-driven decisions. If you enjoy solving complex data challenges, collaborating globally, and making a real impact, we'd love to hear from you. Be a part of our Data Management & Reporting team and help us deliver innovative solutions that make a real impact.

At SkyCell, we're on a mission to change the world by revolutionizing the global supply chain. Our cutting-edge temperature-controlled container solutions are designed to ensure the safe and secure delivery of life-saving pharmaceuticals, with sustainability at the core of everything we do. We're a fast-growing, purpose-driven scale-up where you'll make an impact, feel empowered, and thrive in a diverse, innovative environment.

Why SkyCell?
🌱 Purpose-Driven Work: Make a real difference by contributing to a more sustainable future in global logistics and healthcare
🚀 Innovation at Heart: Work with cutting-edge technology and be at the forefront of supply chain innovation
🌎 Stronger Together: Join a supportive team of talented individuals from over 40 countries, where we work together every step of the way
💡 Growth Opportunities: We believe in investing in our people - continuous learning and development are key pillars of SkyCell
🏆 Award-Winning Culture: Join a workplace recognized for its commitment to excellence with a 'Great Place to Work' award, as well as a Platinum EcoVadis rating highlighting our sustainability and employee well-being

What You'll Do:
- Design, build, and maintain scalable data pipelines and databases using Azure Data Services for both structured and unstructured data
- Optimise and monitor database performance, ensuring efficient data retrieval and storage
- Develop, optimise, and maintain complex SQL queries, stored procedures, and data transformation logic
- Develop efficient workflows, automate data processes, and provide analytical support through data extraction, transformation, and interpretation
- Create and optimise Power BI dashboards and models, including DAX tuning and data modelling; explore additional reporting tools as needed
- Integrate data from multiple sources and ensure accuracy through quality checks, validation, and consistency measures
- Implement data security measures and ensure compliance with data governance policies
- Investigate and support the business in providing solutions for data issues
- Collaborate with cross-functional teams, contribute to code reviews, and uphold coding standards
- Continuously evaluate tools, document data flows and system architecture, and improve engineering practices
- Provide technical leadership, mentor junior engineers, and support hiring, onboarding, and training

Requirements
What You'll Bring:
- Bachelor's degree in Computer Science or a related field (a Master's degree is a plus)
- Proven expertise in designing and implementing data solutions using Azure Data Factory, Azure Databricks, and Azure SQL Database; certifications are a plus
- Strong proficiency in SQL development and optimization; good knowledge of NoSQL databases
- Extensive experience in developing complex dashboards and reports, including DAX
- Knowledge of at least one data analysis language (Python, R, etc.)
- Knowledge of SAP Analytics Cloud is advantageous
- Ability to design and implement data models to ensure efficient storage, retrieval, and analysis of structured and unstructured data

Benefits
What's In It For You?
⚡ Flexibility & Balance: Flexible working hours and work-life balance allow you to tailor work to fit your life
🌟 Recognition & Growth: Opportunities for career advancement in a company that values your contributions
💼 Hybrid Workplace: Modern workspaces (in Zurich, Zug and Hyderabad, as well as our Skyhub in Basel) and a remote-friendly culture to inspire collaboration amongst a globally diverse team
🎉 Company-wide Events: Join us for company events to celebrate successes, build teams, and share our vision. Plus, new joiners experience SkyWeek, our immersive onboarding program
👶 Generous Maternity & Paternity Leave: Support for new parents with competitive maternity and paternity leave
🏖️ Annual Leave & Bank Holidays: Enjoy a generous annual leave package, plus local bank holidays to recharge and unwind

Ready to Make an Impact?
We're not just offering a job; we're offering a chance to be part of something bigger. At SkyCell, you'll help build a future where pharmaceutical delivery is efficient, sustainable, and transformative.

Stay Connected with SkyCell
Visit http://www.skycell.ch and explore #WeAreSkyCell on LinkedIn.

How To Apply
Simply click 'apply for this job' below! We can't wait to meet you and discuss how you can contribute to our mission! Please note, we are unable to consider applications sent to us via email. If you have any questions, you can contact our Talent Team (talent@skycell.ch). SkyCell AG is an equal opportunity employer that values diversity and is committed to creating an inclusive environment for all. We do not discriminate based on race, religion, colour, national origin, gender, sexual orientation, gender identity, age, disability, or any other legally protected characteristic. For this position, if you are not located in, or able to relocate (without sponsorship) to, one of the above locations, your application cannot be considered.
Posted 3 days ago
0 years
0 Lacs
Thiruporur, Tamil Nadu, India
On-site
Job Description
The Finance Data Steward supports the Finance Data Services team (governance and operations) in measuring and reporting master data consumption, consolidated reports, volumes (BVI), and process performance and quality metrics (KPIs) such as consistency, accuracy, and validity for all the finance data objects covered by the team (e.g. cost centers, project WBS elements, GL accounts, internal business partners). This is achieved in close collaboration with the finance data team (data stewards, data SMEs, data maintainers, reporting teams, data curators, business analysts, and other stakeholders). The Finance Data Steward is responsible and accountable for gathering data requirements; setting up the data model design, architecture, and documentation; and developing scalable data models for finance data objects via Power BI dashboards, SQL programming, Power Automate, and other data analysis and processing tools.

How You Will Contribute And What You Will Learn
Connect and integrate various data sources (e.g. MDG, ERP systems, EDP, Redbox, Project Cube, ngOCt) to create a unified view of finance and business partner master data: consolidated reports, volumes (BVI), and process performance and quality metrics (KPIs)
Design and implement data models and transformations to prepare finance and business partner master data for performance and quality analysis, reporting, consolidation, and visualization
Build interactive and insightful dashboards using data visualization tools (Power BI dataflows and design, SQL, Power Automate, Databricks, Azure)
Manage and execute technical activities and projects for finance and business partner master data analytics use cases
Prepare schedules and technical activity plans for finance and business partner master data analytics use cases
Seek and communicate cost-efficient solutions for finance and business partner data analytics
Write functional and technical specifications and other guiding documentation for finance and business partner master data analytics
Proactively identify new data insights and opportunities for improvement based on master data analysis
Maintain and continuously improve the current finance and business partner master data dashboards, addressing business user feedback
Work closely with business stakeholders to understand their data needs and translate them into effective dashboards
Adhere to data governance policies and regulations, ensuring data security and privacy
Develop and implement data quality checks and validation processes to ensure the accuracy and completeness of master data (see the sketch below)

Key Skills And Experience
Deep understanding of master data management principles, processes, and tools, including data governance, data quality, data cleansing, and data integration
Programming and data visualization skills: Power BI (advanced) dashboards, dataflows, and design, SQL, Databricks, Power Automate, HTML, Python, SharePoint, etc.
Experience with data repositories such as EDP and Azure
Excellent written and oral communication in English
Hands-on experience with data analytics tools
Problem-solving aptitude and an analytic mindset
Effective networking capabilities; comfortable in multicultural environments and virtual teams
Team player
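As a hedged illustration of the quality metrics this role measures and reports (completeness, validity, uniqueness), the Python sketch below computes simple KPIs for one master-data object. The column names and the six-digit validity rule are assumptions for demonstration, not Nokia's actual MDG schema.

```python
# Illustrative master-data KPI computation; column names and the validity
# rule are assumed for the example, not the actual MDG schema.
import pandas as pd

def master_data_kpis(df: pd.DataFrame) -> dict:
    completeness = df["owner"].notna().mean()                         # share of records with an owner
    validity = df["cost_center_id"].str.fullmatch(r"\d{6}").mean()    # assumed 6-digit ID format
    uniqueness = 1 - df["cost_center_id"].duplicated().mean()
    return {"records": len(df),
            "completeness": round(float(completeness), 3),
            "validity": round(float(validity), 3),
            "uniqueness": round(float(uniqueness), 3)}

if __name__ == "__main__":
    sample = pd.DataFrame({
        "cost_center_id": ["100200", "100201", "100201", "ABC123"],
        "owner": ["J. Doe", None, "A. Smith", "B. Lee"],
    })
    print(master_data_kpis(sample))
```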
About Us
Come create the technology that helps the world act together. Nokia is committed to innovation and technology leadership across mobile, fixed and cloud networks. Your career here will have a positive impact on people's lives and will help us build the capabilities needed for a more productive, sustainable, and inclusive world. We challenge ourselves to create an inclusive way of working where we are open to new ideas, empowered to take risks, and fearless to bring our authentic selves to work.

What we offer
Nokia offers continuous learning opportunities, well-being programs to support you mentally and physically, opportunities to join and get supported by employee resource groups, mentoring programs, and highly diverse teams with an inclusive culture where people thrive and are empowered.

Nokia is committed to inclusion and is an equal opportunity employer. Nokia has received the following recognitions for its commitment to inclusion and equality:
One of the World's Most Ethical Companies by Ethisphere
Gender-Equality Index by Bloomberg
Workplace Pride Global Benchmark

At Nokia, we act inclusively and respect the uniqueness of people. Nokia's employment decisions are made regardless of race, color, national or ethnic origin, religion, gender, sexual orientation, gender identity or expression, age, marital status, disability, protected veteran status or other characteristics protected by law. We are committed to a culture of inclusion built upon our core value of respect. Join us and be part of a company where you will feel included and empowered to succeed.
Posted 3 days ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Key Responsibilities:
Develop, manage, and design SQL and PL/SQL solutions, with a focus on performance
Design, develop, and maintain robust Databricks pipelines to support data processing and analytics (see the sketch below)
Implement and optimize Spark engine concepts for efficient data processing
Collaborate with business stakeholders to understand and translate business requirements into technical solutions
Utilize Informatica for data integration and ETL processes
Write complex SQL queries to extract, manipulate, and analyze data
Perform data analysis to support business decision-making and identify trends and insights
Ensure data quality and integrity across various data sources and platforms
Communicate effectively with cross-functional teams to deliver data solutions that meet business needs
Stay updated with the latest industry trends and technologies in data engineering and analytics

Qualifications:
Bachelor's or Master's degree in Computer Science, Information Technology, or a related field
5+ years of experience in data engineering or a related role
Strong expertise in Databricks and Spark engine concepts
Proficiency in Informatica for ETL processes
Advanced SQL skills for data extraction and analysis
Excellent analytical skills with the ability to interpret complex data sets
Strong communication skills to effectively collaborate with business stakeholders and technical teams
Experience with cloud platforms (e.g., AWS, Azure, GCP) is a plus
Knowledge of data warehousing concepts and tools is desirable
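As a hedged illustration of the Databricks/Spark work described above, the PySpark sketch below reads raw records, applies a Spark SQL aggregation, and writes a curated output. The paths and column names (order_date, amount) are assumptions for the example, not any specific production pipeline.

```python
# A minimal PySpark sketch of a pipeline stage: read raw records, apply a
# Spark SQL aggregation, write a curated output. Paths and columns assumed.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("orders_daily_agg").getOrCreate()

# Assumed input location; on Databricks this would typically be cloud storage.
raw = spark.read.option("header", True).csv("/tmp/raw/orders.csv")
raw.createOrReplaceTempView("orders")

daily = spark.sql("""
    SELECT order_date,
           COUNT(*)                    AS order_count,
           SUM(CAST(amount AS DOUBLE)) AS total_amount
    FROM orders
    GROUP BY order_date
""")

# Assumed output location for the curated layer.
daily.write.mode("overwrite").parquet("/tmp/curated/orders_daily")
```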
Posted 3 days ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Company Description
NielsenIQ is a global measurement and data analytics company that provides the most complete and trusted view available of consumers and markets worldwide. We provide consumer packaged goods manufacturers/fast-moving consumer goods companies and retailers with accurate, actionable information and insights and a complete picture of the complex and changing marketplace that companies need to innovate and grow. Our approach marries proprietary NielsenIQ data with other data sources to help clients around the world understand what's happening now, what's happening next, and how to best act on this knowledge. We like to be in the middle of the action. That's why you can find us at work in over 90 countries, covering more than 90% of the world's population. For more information, visit www.niq.com.

Job Description
YOU'LL BUILD TECH THAT EMPOWERS GLOBAL BUSINESSES
Our Connect Technology teams are working on our new Connect platform, a unified, global, open data ecosystem powered by Microsoft Azure. Our clients around the world rely on Connect data and insights to innovate and grow. As a Senior Data Engineer, you'll be part of a team of smart, highly skilled technologists who are passionate about learning and supporting cutting-edge technologies such as Python, PySpark, SQL, Hive, Databricks, and Airflow. These technologies are deployed using DevOps pipelines leveraging Azure, Kubernetes, GitHub Actions, and GitHub.

WHAT YOU'LL DO:
Develop, troubleshoot, debug, and make application enhancements, creating code with Python and SQL as the core development languages
Develop new back-end functionality, working closely with the front-end team
Contribute to the expansion of NRPS scope

Qualifications
WE'RE LOOKING FOR PEOPLE WHO HAVE:
5-10 years of applicable software engineering experience (must have)
Strong experience in Python (must have)
Strong fundamentals with experience in big data, Python, PySpark, SQL, Hive, and Airflow (see the DAG sketch below)
SQL knowledge (must have)
Good to have: experience in Scala and Databricks
Good to have: experience in Linux and ksh
Good to have: experience with DevOps technologies such as GitHub, GitHub Actions, and Docker
Good to have: experience in the retail domain
Excellent English communication skills, with the ability to effectively interface across cross-functional technology teams and the business
Minimum B.S. degree in Computer Science, Computer Engineering, or a related field
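For illustration only, here is a minimal Airflow DAG sketch in the spirit of the stack listed above, scheduling a daily Spark job. The DAG id, schedule, and script path are assumptions, not NielsenIQ's actual pipelines; the `schedule` argument assumes the Airflow 2.4+ API.

```python
# A hedged sketch of an Airflow DAG wiring a daily Spark job; dag_id,
# schedule, and script path are illustrative assumptions (Airflow 2.4+).
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="connect_daily_load",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    run_spark_job = BashOperator(
        task_id="run_spark_job",
        bash_command="spark-submit /opt/jobs/daily_load.py",  # assumed job path
    )
```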
Additional Information
Our Benefits
Flexible working environment
Volunteer time off
LinkedIn Learning
Employee Assistance Program (EAP)

About NIQ
NIQ is the world's leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights, delivered with advanced analytics through state-of-the-art platforms, NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world's population. For more information, visit NIQ.com.

Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook

Our commitment to Diversity, Equity, and Inclusion
NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status, or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion
Posted 3 days ago
4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

The opportunity
You will help our clients navigate the complex world of modern data science and analytics. We'll look to you to provide advice to our clients on how best to design, implement, stabilise, and optimise internal controls utilising cutting-edge and scalable AI and big data technologies.

Your Key Responsibilities
As a Senior AI/ML Engineer, you will:
Build and leverage cutting-edge GenAI and big data platforms to deliver insights and build comprehensive risk and controls monitoring mechanisms
Create executive reporting, proposals, and marketing material, ensuring the highest-quality deliverables
Work within the team and leverage your knowledge and expertise to solve some of the most intriguing technical challenges in the design and development of a next-generation AI and data-led risk technology platform

Skills and Summary of Accountabilities:
Develop robust AI/ML models to drive valuable insights and enhance the decision-making process across multiple business lines
Strong technical knowledge of big data platforms, AI technologies around large language models, vector databases, comprehensive DevOps services, and full-stack application development
Lead the design and development of bespoke machine learning algorithms to achieve business objectives
Use predictive modeling to increase and optimize customer experiences, revenue generation, ad targeting, and other business outcomes
Good understanding of machine learning, NLP, LLM, GenAI, deep learning, and generative AI techniques
Apply advanced machine learning techniques (such as SVMs, decision trees, neural networks, random forests, and gradient boosting) to complex problems (see the sketch below)
Ensure adherence to ethical AI guidelines and data governance policies
Utilize expertise in prompt engineering to design and implement innovative solutions for workflow orchestration, leveraging tools such as Python, Airflow, and Terraform
Collaborate closely with cross-functional teams, including data scientists, analysts, and other engineers, to identify opportunities for AI- and analytics-driven improvements in the bank's operations and services
Stay up to date with the latest advancements in AI, analytics, and prompt engineering, ensuring the business remains at the forefront of innovation and maintains a competitive edge in the industry
Maintain and improve existing AI/ML architectures and ensure they continue to deliver significant results
Intellectual strength and flexibility to resolve complex problems and rationalise them into a workable solution that can then be delivered
Develop current and relevant client propositions, delivering timely, high-quality output against stated project objectives
Willingness to learn and apply new and cutting-edge AI technologies to deliver insight-driven business solutions

To qualify for the role you must have
4+ years of working experience in large-scale AI/ML models and data science
Deep understanding of statistics, AI/ML algorithms, and predictive modeling
Proficiency in AI/ML programming languages like Python, R, and SQL
Proficiency with a deep learning framework such as TensorFlow, PyTorch, or Keras
Expertise in machine learning algorithms and data mining techniques (such as SVMs, decision trees, and deep learning algorithms)
Experience implementing monitoring and logging tools to ensure AI model performance and reliability
Strong programming skills in Python, including machine learning libraries such as scikit-learn, Pandas, and NumPy
Experience using tools such as Docker, Kubernetes, and Git to build and manage AI pipelines
Ability to automate tasks through Python scripting, databases, and other advanced technologies such as Databricks, Synapse, ML and AI services, and ADF
Good understanding of Git, JIRA, change/release management, build/deploy processes, CI/CD with Azure DevOps, and SharePoint
Strong understanding of large enterprise applications such as SAP, Oracle ERP, and Microsoft Dynamics

Ideally, you'll also have
A Bachelor's degree or above in mathematics, information systems, statistics, computer science, data science, or a related discipline
Relevant certifications, which are considered a plus
A self-driven, creative problem-solving mindset, enjoying the fast-paced world of software development and performing well in a team

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
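As a hedged illustration of the classical modelling workflow this role describes (train/test split, gradient boosting, evaluation), the scikit-learn sketch below uses synthetic data rather than any real risk or controls dataset.

```python
# A small, self-contained modelling sketch on synthetic data; no real risk
# dataset is used or implied.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic binary-classification data standing in for a real feature set.
X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = GradientBoostingClassifier(random_state=42).fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Hold-out ROC AUC: {auc:.3f}")
```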
Posted 3 days ago