Jobs
Interviews

6431 Databricks Jobs - Page 34

Set up a job alert
JobPe aggregates job listings for easy access, but you apply directly on the original job portal.

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Ciklum is looking for an Expert Data Scientist to join our team full-time in India. We are a custom product engineering company that supports both multinational organizations and scaling startups to solve their most complex business challenges. With a global team of over 4,000 highly skilled developers, consultants, analysts and product owners, we engineer technology that redefines industries and shapes the way people live. About the role: As an Expert Data Scientist, become a part of a cross-functional development team engineering experiences of tomorrow. Responsibilities: Development of prototype solutions, mathematical models, algorithms, machine learning techniques, and robust analytics to support analytic insights and visualization of complex data sets Work on exploratory data analysis so you can navigate a dataset and come out with broad conclusions based on initial appraisals Provide optimization recommendations that drive KPIs established by product, marketing, operations, PR teams, and others Interact with engineering teams and ensure that solutions meet customer requirements in terms of functionality, performance, availability, scalability, and reliability Work directly with business analysts and data engineers to understand and support their use cases Work with stakeholders throughout the organization to identify opportunities for leveraging company data to drive business solutions Drive innovation by exploring new experimentation methods and statistical techniques that could sharpen or speed up our product decision-making processes Cross-train other team members on technologies being developed, while also continuously learning new technologies from other team members Contribute to the Unit activities and community building, participate in conferences, and share expertise and best practices Support marketing & sales activities, customer meetings and digital services through direct support for sales opportunities & providing thought leadership & content creation for the service Requirements: We know that sometimes, you can’t tick every box. We would still love to hear from you if you think you’re a good fit! General technical requirements: BSc, MSc, or PhD in Mathematics, Statistics, Computer Science, Engineering, Operations Research, Econometrics, or related fields Strong knowledge of Probability Theory, Statistics, and a deep understanding of the Mathematics behind Machine Learning Proficiency with CRISP-ML(Q) or TDSP methodologies for addressing commercial problems through data science solutions Hands-on experience with various machine learning techniques, including but not limited to: Regression Classification Clustering Dimensionality reduction Proficiency in Python for developing machine learning models and conducting statistical analyses Strong understanding of data visualization tools and techniques (e.g., Python libraries such as Matplotlib, Seaborn, Plotly, etc.)
and the ability to present data effectively Specific technical requirements: Proficiency in SQL for data processing, data manipulation, sampling, and reporting Experience working with imbalanced datasets and applying appropriate techniques Experience with time series data, including preprocessing, feature engineering, and forecasting Experience with outlier detection and anomaly detection Experience working with various data types: text, image, and video data Familiarity with AI/ML cloud implementations (AWS, Azure, GCP) and cloud-based AI/ML services (e.g., Amazon SageMaker, Azure ML) Domain experience: Experience with analyzing medical signals and images Expertise in building predictive models for patient outcomes, disease progression, readmissions, and population health risks Experience in extracting insights from clinical notes, medical literature, and patient-reported data using NLP and text mining techniques Familiarity with survival or time-to-event analysis Expertise in designing and analyzing data from clinical trials or research studies Experience in identifying causal relationships between treatments and outcomes, such as propensity score matching or instrumental variable techniques Understanding of healthcare regulations and standards like HIPAA, GDPR (for healthcare data), and FDA regulations for medical devices and AI in healthcare Expertise in handling sensitive healthcare data in a secure, compliant way, understanding the complexities of patient consent, de-identification, and data sharing Familiarity with decentralized data models such as federated learning to build models without transferring patient data across institutions Knowledge of interoperability standards such as HL7, SNOMED, FHIR, or DICOM Ability to work with clinicians, researchers, health administrators, and policy makers to understand problems and translate data into actionable healthcare insights Good to have skills: Experience with MLOps, including integration of machine learning pipelines into production environments, Docker, and containerization/orchestration (e.g., Kubernetes) Experience in deep learning development using TensorFlow or PyTorch libraries Experience with Large Language Models (LLMs) and Generative AI applications Advanced SQL proficiency, with experience in MS SQL Server or PostgreSQL Familiarity with platforms like Databricks and Snowflake for data engineering and analytics Experience working with Big Data technologies (e.g., Hadoop, Apache Spark) Familiarity with NoSQL databases (e.g., columnar or graph databases like Cassandra, Neo4j) Business-related requirements: Proven experience in developing data science solutions that drive measurable business impact, with a strong track record of end-to-end project execution Ability to effectively translate business problems into data science problems and create solutions from scratch using machine learning and statistical methods Excellent project management and time management skills, with the ability to manage complex, detailed work and effectively communicate progress and results to stakeholders at all levels Desirable: Research experience with peer-reviewed publications Recognized achievements in data science competitions, such as Kaggle Certifications in cloud-based machine learning services (AWS, Azure, GCP) What`s in it for you? Care: your mental and physical health is our priority. 
We ensure comprehensive company-paid medical insurance, as well as financial and legal consultation Tailored education path: boost your skills and knowledge with our regular internal events (meetups, conferences, workshops), Udemy licence, language courses and company-paid certifications Growth environment: share your experience and level up your expertise with a community of skilled professionals, locally and globally Flexibility: hybrid work mode at Chennai or Pune Opportunities: we value our specialists and always find the best options for them. Our Resourcing Team helps change a project if needed to help you grow, excel professionally and fulfil your potential Global impact: work on large-scale projects that redefine industries with international and fast-growing clients Welcoming environment: feel empowered with a friendly team, open-door policy, informal atmosphere within the company and regular team-building events About us: At Ciklum, we are always exploring innovations, empowering each other to achieve more, and engineering solutions that matter. With us, you’ll work with cutting-edge technologies, contribute to impactful projects, and be part of a One Team culture that values collaboration and progress. India is a strategic innovation hub for Ciklum, with growing teams in Chennai and Pune leading advancements in EdgeTech, AR/VR, IoT, and beyond. Join us to collaborate on game-changing solutions and take your career to the next level. Want to learn more about us? Follow us on Instagram , Facebook , LinkedIn . Explore, empower, engineer with Ciklum! Interested already? We would love to get to know you! Submit your application. We can’t wait to see you at Ciklum.
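The requirements above call out experience with imbalanced datasets; purely as an illustration (not part of the posting), a minimal scikit-learn sketch of one common approach, class weighting plus precision/recall-oriented evaluation on synthetic data, might look like this:

```python
# Minimal sketch: handling an imbalanced binary classification problem with
# class weighting and threshold-aware metrics (scikit-learn, synthetic data).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score, classification_report

# Synthetic data with a 95/5 class split to mimic an imbalanced problem.
X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.95, 0.05], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=42)

# class_weight="balanced" reweights the loss by inverse class frequency.
model = LogisticRegression(max_iter=1000, class_weight="balanced")
model.fit(X_train, y_train)

# With imbalanced data, PR-AUC and per-class recall are more informative than accuracy.
proba = model.predict_proba(X_test)[:, 1]
print("PR-AUC:", round(average_precision_score(y_test, proba), 3))
print(classification_report(y_test, model.predict(X_test), digits=3))
```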

Posted 1 week ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

Remote

At EY, we’re all in to shape your future with confidence. We’ll help you succeed in a globally connected powerhouse of diverse teams and take your career wherever you want it to go. Join EY and help to build a better working world. Security Consultant EY Technology: Technology has always been at the heart of what we do and deliver at EY. We need technology to keep an organization the size of ours working efficiently. We have 250,000 people in more than 140 countries, all of whom rely on secure technology to be able to do their job every single day. Everything from the laptops we use, to the ability to work remotely on our mobile devices and connecting our people and our clients, to enabling hundreds of internal tools and external solutions delivered to our clients. Technology solutions are integrated in the client services we deliver and are key to us being more innovative as an organization. EY Technology supports our technology needs through three business units: Client Technology (CT) - focuses on developing new technology services for our clients. It enables EY to identify new technology-based opportunities faster and pursue those opportunities more rapidly. Enterprise Workplace Technology (EWT) – EWT supports our Core Business Services functions and will deliver fit-for-purpose technology infrastructure at the cheapest possible cost for quality services. EWT will also support our internal technology needs by focusing on a better user experience. Information Security (Info Sec) - Info Sec prevents, detects, responds to and mitigates cyber-risk, protecting EY and client data, and our information management systems. The opportunity As a Security Consultant within EY’s internal Global Information Security team, the individual will be a trusted security advisor to the Client Technology Platforms Delivery organization within IT Services. The Client Technology Platforms delivery organization is responsible for end-to-end delivery of technology programs and projects supporting EY’s Client Technology service lines including delivery of a global managed services platform, big data and analytics solutions as well as individual line of business solutions and services. This role will directly engage in delivery on programs and projects, defining security architectures, providing security guidance, identifying and prioritizing security-related requirements, promoting secure-by-default designs and facilitating delivery of information security services throughout the system development life cycle (SDLC). The role will also direct consultants in developing appropriate risk treatment and mitigation options to address security vulnerabilities and translate these vulnerabilities into business risk terminology for communication to business stakeholders. Your Key Responsibilities Define security architectures and provide pragmatic security guidance that balances business benefit and risks. Design and develop cloud platform-specific security policies, standards, and procedures for management group and account/subscription management and configuration (e.g. Azure Policy, Azure Security Center, AWS Config), identity management and access control, firewall management, auditing and monitoring, security incident and event management, data protection, user and administrator account management, SSO, conditional access controls and password/secrets management.
Engage IT project teams throughout the SDLC to identify and prioritize applicable security controls and provide guidance on how to implement these controls Perform risk assessments of information systems and infrastructure Maintain and enhance the Information Security risk assessment methodology Define security configuration standards for platforms and technologies Develop appropriate risk treatment and mitigation options to address security risks identified during security review or audit Translate technical vulnerabilities into business risk terminology for business units and recommend corrective actions to customers and project stakeholders Provide knowledge sharing and technical assistance to other team members Act as Subject Matter Expert (SME) in responsible technologies and have deep technical understanding of responsible portfolios Skills And Attributes For Success Experience with Cloud Identity and Access management solutions (AAD, Federation services, SAML, Ping) in implementation and operations. Experience with Big Data and advanced analytics, AI/ML services (such as Azure SQL/Google Cloud SQL, Azure HDInsight, key management solutions, storage and backup, load balancing, security management, databases, EC2 or VM machine hosting, Databricks, Data Factory, Data Lake Storage/BigQuery, Azure Analysis Services, Synapse Analytics, Machine Learning, etc.) Experience in working with different Cloud platforms (Software as a Service (SaaS), Platform as a Service (PaaS) and Infrastructure as a Service (IaaS)) and environments (Public, Private, Hybrid) in a security role. Hands-on technical experience implementing security solutions for leading Cloud service providers e.g., Amazon AWS, Microsoft Azure, Google Cloud. Ability to appropriately balance firm security needs with business impact & benefit Ability to facilitate compromise to incrementally advance security strategy and objectives Ability to team well with others to facilitate and enhance the understanding & compliance with security policies Experience facilitating meetings with multiple customers and technical staff, including building consensus and mediating compromise Five or more years of working experience with the architecture, design and engineering of web-based multi-tier information systems or network infrastructures Experience conducting risk assessments, vulnerability assessments, vendor and third-party risk assessments and recommending risk remediation strategies Experience working with common information security standards, such as: ISO 27001/27002, NIST, PCI DSS, ITIL, COBIT To qualify for the role, you must have Five or more years of experience in the management of a significant Information Security risk management function 5 or more years of experience in an Information Security or Information Technology discipline Experience in managing the communication of security findings and recommendations to IT project teams and management Ideally, you’ll also have Exceptional judgment, tact, and decision-making ability Flexibility to adjust to multiple demands, shifting priorities, ambiguity, and rapid change Outstanding management, interpersonal, communication, organizational, and decision-making skills Strong English language skills are required Candidates are preferred to hold or be actively pursuing related professional certifications within the GIAC family of certifications or CISSP, CISM or Azure certifications (AZ500, AZ303, AZ304, AZ900) What Working At EY Offers We offer a competitive remuneration package where you’ll
be rewarded for your individual and team performance. Our comprehensive Total Rewards package includes support for flexible working and career development, and with FlexEY you can select benefits that suit your needs, covering holidays, health and well-being, insurance, savings and a wide range of discounts, offers and promotions. Plus, we offer: Support, coaching and feedback from some of the most engaging colleagues around Opportunities to develop new skills and progress your career The freedom and flexibility to handle your role in a way that’s right for you EY is committed to be an inclusive employer and we are happy to consider flexible working arrangements. We strive to achieve the right balance for our people, enabling us to deliver excellent client service whilst allowing you to build your career without sacrificing your personal priorities. While our client-facing professionals can be required to travel regularly, and at times be based at client sites, our flexible working arrangements can help you to achieve a lifestyle balance. EY | Building a better working world EY is building a better working world by creating new value for clients, people, society and the planet, while building trust in capital markets. Enabled by data, AI and advanced technology, EY teams help clients shape the future with confidence and develop answers for the most pressing issues of today and tomorrow. EY teams work across a full spectrum of services in assurance, consulting, tax, strategy and transactions. Fueled by sector insights, a globally connected, multi-disciplinary network and diverse ecosystem partners, EY teams can provide services in more than 150 countries and territories.
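Purely as an illustration of the "security configuration standards" work described above (not taken from the posting), a small, self-contained Python sketch that checks hypothetical storage-account settings against a baseline could look like this; the resource dictionaries and baseline keys are invented for the example:

```python
# Minimal sketch: validate hypothetical cloud resource configurations
# against a security baseline and report non-compliant settings.
BASELINE = {
    "https_only": True,              # enforce TLS for data in transit
    "public_network_access": False,  # block direct public exposure
    "min_tls_version": "1.2",
}

resources = [  # stand-ins for configs pulled from a cloud inventory export
    {"name": "stgprod01", "https_only": True, "public_network_access": False, "min_tls_version": "1.2"},
    {"name": "stgdev02", "https_only": False, "public_network_access": True, "min_tls_version": "1.0"},
]

def findings(resource):
    """Return a list of baseline violations for one resource."""
    issues = []
    for key, expected in BASELINE.items():
        actual = resource.get(key)
        if actual != expected:
            issues.append(f"{key}: expected {expected!r}, found {actual!r}")
    return issues

for res in resources:
    problems = findings(res)
    status = "compliant" if not problems else "; ".join(problems)
    print(f"{res['name']}: {status}")
```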

Posted 1 week ago

Apply

9.0 - 14.0 years

40 - 80 Lacs

Bengaluru

Work from Office

About the Role: We are seeking a highly skilled Data Solutions Architect - Business Intelligence & AI to lead the design and delivery of advanced data solutions. This role requires a seasoned professional with deep technical expertise, consulting experience, and leadership capabilities to drive data transformation initiatives. The ideal candidate will play a pivotal role in architecting scalable data platforms, enabling AI-driven automation, and mentoring a team of data engineers and analysts.

Posted 1 week ago

Apply

2.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Overview The Data Science Team works on developing Machine Learning (ML) and Artificial Intelligence (AI) projects. The specific scope of this role is to develop ML solutions in support of ML/AI projects using big analytics toolsets in a CI/CD environment. Analytics toolsets may include DS tools/Spark/Databricks, and other technologies offered by Microsoft Azure or open-source toolsets. This role will also help automate the end-to-end cycle with Azure Pipelines. You will be part of a collaborative interdisciplinary team around data, where you will be responsible for our continuous delivery of statistical/ML models. You will work closely with process owners, product owners and final business users. This will provide you with the right visibility and understanding of the criticality of your developments. Responsibilities Delivery of key Advanced Analytics/Data Science projects within time and budget, particularly around DevOps/MLOps and Machine Learning models in scope Active contributor to code & development in projects and services Partner with data engineers to ensure data access for discovery and proper data is prepared for model consumption. Partner with ML engineers working on industrialization. Communicate with business stakeholders in the process of service design, training and knowledge transfer. Support large-scale experimentation and build data-driven models. Refine requirements into modelling problems. Influence product teams through data-based recommendations. Research state-of-the-art methodologies. Create documentation for learnings and knowledge transfer. Create reusable packages or libraries. Ensure on time and on budget delivery which satisfies project requirements, while adhering to enterprise architecture standards Leverage big data technologies to help process data and build scaled data pipelines (batch to real time) Implement end-to-end ML lifecycle with Azure Databricks and Azure Pipelines Automate ML model deployments Qualifications BE/B.Tech in Computer Science, Maths, or related technical fields. Overall 2-4 years of experience working as a Data Scientist. 2+ years’ experience building solutions in the commercial or in the supply chain space. 2+ years working in a team to deliver production level analytic solutions. Fluent in git (version control). Understanding of Jenkins and Docker is a plus. Fluent in SQL syntax. 2+ years’ experience in Statistical/ML techniques to solve supervised (regression, classification) and unsupervised problems. 2+ years’ experience in developing business problem related statistical/ML modeling with industry tools with primary focus on Python or PySpark development. Data Science - Hands on experience and strong knowledge of building machine learning models - supervised and unsupervised models. Knowledge of Time series/Demand Forecast models is a plus Programming Skills - Hands-on experience in statistical programming languages like Python, PySpark and database query languages like SQL Statistics - Good applied statistical skills, including knowledge of statistical tests, distributions, regression, maximum likelihood estimators Cloud (Azure) - Experience in Databricks and ADF is desirable Familiarity with Spark, Hive, Pig is an added advantage Business storytelling and communicating data insights in business consumable format. Fluent in one Visualization tool. Strong communications and organizational skills with the ability to deal with ambiguity while juggling multiple priorities Experience with Agile methodology for teamwork and analytics ‘product’ creation.
Experience in Reinforcement Learning is a plus. Experience in Simulation and Optimization problems in any space is a plus. Experience with Bayesian methods is a plus. Experience with Causal inference is a plus. Experience with NLP is a plus. Experience with Responsible AI is a plus. Experience with distributed machine learning is a plus Experience in DevOps, hands-on experience with one or more cloud service providers AWS, GCP, Azure(preferred) Model deployment experience is a plus Experience with version control systems like GitHub and CI/CD tools Experience in Exploratory data Analysis Knowledge of ML Ops / DevOps and deploying ML models is preferred Experience using MLFlow, Kubeflow etc. will be preferred Experience executing and contributing to ML OPS automation infrastructure is good to have Exceptional analytical and problem-solving skills Stakeholder engagement-BU, Vendors. Experience building statistical models in the Retail or Supply chain space is a plus
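Since the role centres on the ML lifecycle with Azure Databricks and lists MLflow as a preferred tool, here is a minimal, illustrative sketch (not from the posting) of tracking a training run with MLflow; the experiment name, model and metric are placeholders:

```python
# Minimal sketch: track a model training run with MLflow so it can be
# compared, registered, and promoted later in an MLOps workflow.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

mlflow.set_experiment("demand-forecast-poc")  # placeholder experiment name

with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 6}
    model = RandomForestRegressor(**params, random_state=0).fit(X_train, y_train)

    mae = mean_absolute_error(y_test, model.predict(X_test))
    mlflow.log_params(params)        # hyperparameters, for reproducibility
    mlflow.log_metric("mae", mae)    # evaluation metric, for run comparison
    mlflow.sklearn.log_model(model, "model")  # serialized model artifact
```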

Posted 1 week ago

Apply

3.0 - 5.0 years

0 Lacs

India

On-site

Must Haves: 3-5 years of experience. Strong Analytical Skills: Ability to interpret data and derive meaningful insights Databricks: Experience with notebooks, data engineering, and Delta Lake SQL: Advanced querying, performance tuning, and data modeling Tableau: Tableau Desktop, Tableau Server, Tableau Online Azure Services: Azure SQL Database, Azure Data Factory, Azure Synapse Analytics Data Integration Tools: Tableau Prep, DAX Studio, Tabular Editor DevOps: CI/CD pipelines, GitHub QA & Optimization: Tableau Performance Recorder, Analyze in Excel Preferred Qualifications: Experience with Power BI is a plus Familiarity with Azure ADO CI/CD pipelines Knowledge of ARM Templates Exposure to other BI tools like SAP Day-To-Day: In this role, you will work closely with stakeholders to understand their business intelligence needs and translate them into actionable insights. Your day will involve analyzing large and complex datasets using SQL and Databricks to uncover trends and patterns that inform decision-making. You will design and develop interactive dashboards and reports using Tableau, ensuring they are both visually compelling and performance-optimized. Leveraging Azure services such as Data Factory and Synapse Analytics, you’ll build and maintain robust data pipelines. A key part of your responsibilities will be to validate data accuracy, perform QA checks, and document your processes thoroughly. Your strong analytical mindset will be essential in transforming raw data into meaningful narratives that drive business outcomes. This position pays between 8-10 LPA.
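As a purely illustrative sketch of the day-to-day analysis described above (the table, columns, and target schema are invented, not from the posting), a Databricks-style PySpark aggregation feeding a dashboard extract might look like this:

```python
# Minimal sketch: aggregate a hypothetical sales Delta table in Databricks
# into a monthly trend dataset that a Tableau dashboard could consume.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically in Databricks

sales = spark.table("analytics.sales_orders")  # hypothetical curated table

monthly_trend = (
    sales
    .withColumn("order_month", F.date_trunc("month", F.col("order_date")))
    .groupBy("order_month", "region")
    .agg(
        F.sum("net_amount").alias("revenue"),
        F.countDistinct("customer_id").alias("active_customers"),
    )
    .orderBy("order_month", "region")
)

# Persist the result as a small reporting table for the BI layer to query.
monthly_trend.write.mode("overwrite").saveAsTable("reporting.monthly_sales_trend")
```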

Posted 1 week ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Overview We are PepsiCo PepsiCo is one of the world's leading food and beverage companies with more than $79 Billion in Net Revenue and a global portfolio of diverse and beloved brands. We have a complementary food and beverage portfolio that includes 22 brands that each generate more than $1 Billion in annual retail sales. PepsiCo's products are sold in more than 200 countries and territories around the world. PepsiCo's strength is its people. We are over 250,000 game changers, mountain movers and history makers, located around the world, and united by a shared set of values and goals. We believe that acting ethically and responsibly is not only the right thing to do, but also the right thing to do for our business. At PepsiCo, we aim to deliver top-tier financial performance over the long term by integrating sustainability into our business strategy, leaving a positive imprint on society and the environment. We call this Winning with Pep+ Positive. For more information on PepsiCo and the opportunities it holds, visit www.pepsico.com. PepsiCo Data Analytics & AI Overview: With data deeply embedded in our DNA, PepsiCo Data, Analytics and AI (DA&AI) transforms data into consumer delight. We build and organize business-ready data that allows PepsiCo’s leaders to solve their problems with the highest degree of confidence. Our platform of data products and services ensures data is activated at scale. This enables new revenue streams, deeper partner relationships, new consumer experiences, and innovation across the enterprise. The Data Science Pillar in DA&AI will be the organization where Data Scientists and ML Engineers report in the broader D+A Organization. DS will also lead, facilitate and collaborate with the larger DS community in PepsiCo. DS will provide the talent for the development and support of the DS component and its life cycle within DA&AI Products, and will support “pre-engagement” activities as requested and validated by the prioritization framework of DA&AI. Data Scientist-Gurugram and Hyderabad The role will work in developing Machine Learning (ML) and Artificial Intelligence (AI) projects. The specific scope of this role is to develop ML solutions in support of ML/AI projects using big analytics toolsets in a CI/CD environment. Analytics toolsets may include DS tools/Spark/Databricks, and other technologies offered by Microsoft Azure or open-source toolsets. This role will also help automate the end-to-end cycle with Machine Learning Services and Pipelines. Responsibilities Delivery of key Advanced Analytics/Data Science projects within time and budget, particularly around DevOps/MLOps and Machine Learning models in scope Collaborate with data engineers and ML engineers to understand data and models and leverage various advanced analytics capabilities Ensure on time and on budget delivery which satisfies project requirements, while adhering to enterprise architecture standards Use big data technologies to help process data and build scaled data pipelines (batch to real time) Automate the end-to-end ML lifecycle with Azure Machine Learning and Azure/AWS/GCP Pipelines. Set up cloud alerts, monitors, dashboards, and logging and troubleshoot machine learning infrastructure Automate ML model deployments Qualifications Minimum 3 years of hands-on work experience in data science / Machine learning Minimum 3 years of SQL experience Experience in DevOps and Machine Learning (ML) with hands-on experience with one or more cloud service providers.
BE/BS in Computer Science, Math, Physics, or other technical fields. Data Science - Hands on experience and strong knowledge of building machine learning models - supervised and unsupervised models Programming Skills - Hands-on experience in statistical programming languages like Python and database query languages like SQL Statistics - Good applied statistical skills, including knowledge of statistical tests, distributions, regression, maximum likelihood estimators Any Cloud - Experience in Databricks and ADF is desirable Familiarity with Spark, Hive, Pig is an added advantage Model deployment experience will be a plus Experience with version control systems like GitHub and CI/CD tools Experience in Exploratory data Analysis Knowledge of ML Ops / DevOps and deploying ML models is required Experience using MLFlow, Kubeflow etc. will be preferred Experience executing and contributing to ML OPS automation infrastructure is good to have Exceptional analytical and problem-solving skills
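The responsibilities above include setting up monitors and alerts for ML infrastructure; as an illustrative, library-free sketch (the scores, threshold, and notification hook are placeholders), a simple scoring-drift check could look like this:

```python
# Minimal sketch: compare live prediction statistics against a training-time
# baseline and raise an alert when drift exceeds a chosen threshold.
import statistics

def drift_score(baseline, current):
    """Crude drift score: shift in mean relative to baseline spread (not a full PSI)."""
    mean_shift = abs(statistics.mean(current) - statistics.mean(baseline))
    spread = statistics.pstdev(baseline) or 1.0
    return mean_shift / spread

def notify(message):
    # Placeholder: in practice this would post to a monitoring/alerting channel.
    print(f"ALERT: {message}")

baseline_scores = [0.12, 0.18, 0.22, 0.15, 0.19, 0.21, 0.17]
live_scores = [0.41, 0.39, 0.44, 0.40, 0.38, 0.45, 0.42]  # example of drifted output

drift = drift_score(baseline_scores, live_scores)
if drift > 1.0:  # threshold chosen for illustration only
    notify(f"Model output drift detected (score={drift:.2f}); review the pipeline.")
```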

Posted 1 week ago

Apply

12.0 - 17.0 years

15 - 20 Lacs

Hyderabad

Work from Office

Overview As Data Modelling Assoc Manager, you will be the key technical expert overseeing data modeling and drive a strong vision for how data modelling can proactively create a positive impact on the business. You'll be empowered to create & lead a strong team of data modelers who create data models for deploying in Data Foundation layer and ingesting data from various source systems, rest data on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company. As a member of the data modelling team, you will create data models for very large and complex data applications in public cloud environments directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. You will independently be analyzing project data needs, identifying data storage and integration needs/issues, and driving opportunities for data model reuse, satisfying project requirements. Role will advocate Enterprise Architecture, Data Design, and D&A standards, and best practices. You will be a key technical expert performing all aspects of Data Modelling working closely with Data Governance, Data Engineering and Data Architects teams. You will provide technical guidance to junior members of the team as and when needed. The primary responsibilities of this role are to work with data product owners, data management owners, and data engineering teams to create physical and logical data models with an extensible philosophy to support future, unknown use cases with minimal rework. You'll be working in a hybrid environment with in-house, on-premises data sources as well as cloud and remote systems. You will establish data design patterns that will drive flexible, scalable, and efficient data models to maximize value and reuse. Responsibilities Independently complete conceptual, logical and physical data models for any supported platform, including SQL Data Warehouse, EMR, Spark, Data Bricks, Snowflake, Azure Synapse or other Cloud data warehousing technologies. Governs data design/modeling documentation of metadata (business definitions of entities and attributes) and constructions database objects, for baseline and investment funded projects, as assigned. Provides and/or supports data analysis, requirements gathering, solution development, and design reviews for enhancements to, or new, applications/reporting. Supports assigned project contractors (both on- & off-shore), orienting new contractors to standards, best practices, and tools. Advocates existing Enterprise Data Design standards; assists in establishing and documenting new standards. Contributes to project cost estimates, working with senior members of team to evaluate the size and complexity of the changes or new development. Ensure physical and logical data models are designed with an extensible philosophy to support future, unknown use cases with minimal rework. Develop a deep understanding of the business domain and enterprise technology inventory to craft a solution roadmap that achieves business objectives, maximizes reuse. Partner with IT, data engineering and other teams to ensure the enterprise data model incorporates key dimensions needed for the proper management: business and financial policies, security, local-market regulatory rules, consumer privacy by design principles (PII management) and all linked across fundamental identity foundations. 
Drive collaborative reviews of design, code, data, security features implementation performed by data engineers to drive data product development. Assist with data planning, sourcing, collection, profiling, and transformation. Create Source To Target Mappings for ETL and BI developers. Show expertise for data at all levels: low-latency, relational, and unstructured data stores; analytical and data lakes; data streaming (consumption/production), data in-transit. Develop reusable data models based on cloud-centric, code-first approaches to data management and cleansing. Partner with the data science team to standardize their classification of unstructured data into standard structures for data discovery and action by business customers and stakeholders. Support data lineage and mapping of source system data to canonical data stores for research, analysis and productization. Qualifications 12+ years of overall technology experience that includes at least 6+ years of data modelling and systems architecture. 6+ years of experience with Data Lake Infrastructure, Data Warehousing, and Data Analytics tools. 6+ years of experience developing enterprise data models. 6+ years in cloud data engineering experience in at least one cloud (Azure, AWS, GCP). 6+ years of experience with building solutions in the retail or in the supply chain space. Expertise in data modelling tools (ER/Studio, Erwin, IDM/ARDM models). Fluent with Azure cloud services. Azure Certification is a plus. Experience scaling and managing a team of 5+ data modelers Experience with integration of multi cloud services with on-premises technologies. Experience with data profiling and data quality tools like Apache Griffin, Deequ, and Great Expectations. Experience with at least one MPP database technology such as Redshift, Synapse, Teradata, or Snowflake. Experience with version control systems like GitHub and deployment & CI tools. Experience with Azure Data Factory, Databricks and Azure Machine learning is a plus. Experience of metadata management, data lineage, and data glossaries is a plus. Working knowledge of agile development, including DevOps and DataOps concepts. Familiarity with business intelligence tools (such as PowerBI). Excellent communication skills, both verbal and written, along with the ability to influence and demonstrate confidence in communications with senior level management. Proven track record of leading, mentoring, hiring and scaling data teams. Strong change manager. Comfortable with change, especially that which arises through company growth. Ability to understand and translate business requirements into data and technical requirements. High degree of organization and ability to manage multiple, competing projects and priorities simultaneously. Positive and flexible attitude to enable adjusting to different needs in an ever-changing environment. Strong leadership, organizational and interpersonal skills; comfortable managing trade-offs. Foster a team culture of accountability, communication, and self-management. Proactively drives impact and engagement while bringing others along. Consistently attain/exceed individual and team goals Ability to lead others without direct authority in a matrixed environment. 
Differentiating Competencies Required Ability to work with virtual teams (remote work locations); lead team of technical resources (employees and contractors) based in multiple locations across geographies Lead technical discussions, driving clarity of complex issues/requirements to build robust solutions Strong communication skills to meet with business, understand sometimes ambiguous, needs, and translate to clear, aligned requirements Able to work independently with business partners to understand requirements quickly, perform analysis and lead the design review sessions. Highly influential and having the ability to educate challenging stakeholders on the role of data and its purpose in the business. Places the user in the centre of decision making. Teams up and collaborates for speed, agility, and innovation. Experience with and embraces agile methodologies. Strong negotiation and decision-making skill. Experience managing and working with globally distributed teams
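Purely for illustration of the physical modelling work described above (the schema, tables, and columns are invented, not from the posting), a star-schema dimension/fact pair might be defined in a Databricks/Delta environment like this:

```python
# Minimal sketch: define a hypothetical dimension and fact table for a
# star schema on Delta Lake, keeping keys explicit for conformed reuse.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

spark.sql("CREATE SCHEMA IF NOT EXISTS edw")  # placeholder schema name

spark.sql("""
CREATE TABLE IF NOT EXISTS edw.dim_product (
    product_key    BIGINT COMMENT 'Surrogate key',
    product_code   STRING COMMENT 'Natural/business key from source',
    product_name   STRING,
    category       STRING,
    effective_from DATE,
    effective_to   DATE   COMMENT 'Supports slowly changing dimension history'
) USING DELTA
""")

spark.sql("""
CREATE TABLE IF NOT EXISTS edw.fact_shipments (
    shipment_id  BIGINT,
    product_key  BIGINT COMMENT 'Foreign key to edw.dim_product',
    ship_date    DATE,
    quantity     DECIMAL(18,2),
    net_revenue  DECIMAL(18,2)
) USING DELTA
PARTITIONED BY (ship_date)
""")
```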

Posted 1 week ago

Apply

1.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems. Focused on relationships, you are building meaningful client connections, and learning how to manage and inspire others. Navigating increasingly complex situations, you are growing your personal brand, deepening technical expertise and awareness of your strengths. You are expected to anticipate the needs of your teams and clients, and to deliver quality. Embracing increased ambiguity, you are comfortable when the path forward isn’t clear, you ask questions, and you use these moments as opportunities to grow. Skills Examples of the skills, knowledge, and experiences you need to lead and deliver value at this level include but are not limited to: Respond effectively to the diverse perspectives, needs, and feelings of others. Use a broad range of tools, methodologies and techniques to generate new ideas and solve problems. Use critical thinking to break down complex concepts. Understand the broader objectives of your project or role and how your work fits into the overall strategy. Develop a deeper understanding of the business context and how it is changing. Use reflection to develop self-awareness, enhance strengths and address development areas. Interpret data to inform insights and recommendations. Uphold and reinforce professional and technical standards (e.g. refer to specific PwC tax and audit guidance), the Firm's code of conduct, and independence requirements. Responsibilities Responsible for leading software application backend design, development, delivery, and maintenance. Evaluate and select alternative technical solutions for identified requirements with knowledge of backend and J2EE application development. Work with an onshore team to clarify business requirements into product features, acting as a liaison between business and technical teams. Resolve technical issues and provide technical support. Provide technical guidance and assistance to other software engineers. Prepare staffing plan and allotment of resources. Assist the project managers in resolving any issues and conflicts within their projects. Improve customer relations by effective communication, managing expectations, and meeting commitments. Keep abreast of technical and organizational developments in your own professional field. Required Qualifications Bachelor's degree in computer science, information technology, or related area (equivalent work experience will be considered). 1+ years' experience in developing business applications in a full software development life cycle using web technologies. 1+ years' experience in Software development analysis and design (UML). Advanced experience with Node.js, ReactJS, JavaScript, TypeScript, HTML5, CSS3, SASS, Python and web service integration. Have a solid technical background in J2EE, Struts, Spring, Hibernate, and MuleSoft. Experience in PostgreSQL, Microsoft SQL Server, Nginx, Docker, Redis, Spring Boot and Spring Cloud, Web Service, WebSphere/JBoss/WebLogic.
Experience using at least one of the following cloud platforms: Azure, AWS, GCP. Prefer deep understanding with Azure DevOps, Azure Synapse Analytics, Databricks, Delta Lake and Lakehouse. Experience in designing, developing, and optimizing data processing applications using Apache Spark in Databricks. Capable of writing efficient Spark jobs in languages such as Scala, Python, PySpark, Spark SQL. Familiarity with the application and integration of Generative AI, Prompt Engineering and Large Language Models (LLMs) in enterprise solutions. Demonstrate the ability to independently design and implement the backend of an entire business module. Demonstrate excellent interpersonal skills, particularly in balancing requirements, managing expectations, collaborating with team members, and driving effective results. Proactive attitude, ability to work independently, and a desire to continuously learn new skills and technology. Excellent written and verbal communication skills in English. Additional Or Preferred Qualifications Master’s degree in computer science, information technology, or related majors. Technical lead experience. 3+ years’ experience in developing business applications in a full software development life cycle using web technologies. Experience using Azure and either AWS or GCP. Experience with data visualization tools such as Power BI or Tableau.
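As an illustrative sketch of the "efficient Spark jobs in Databricks" mentioned above (the input path, columns, and target table are placeholders, not from the posting), a small PySpark transformation might look like this:

```python
# Minimal sketch: read raw events, clean and aggregate them with PySpark,
# and write the result to a Delta table for downstream use.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

raw = spark.read.json("/mnt/raw/events/")  # placeholder input path

daily_usage = (
    raw
    .filter(F.col("event_type").isNotNull())                 # drop malformed rows
    .withColumn("event_date", F.to_date("event_timestamp"))  # normalise timestamps
    .groupBy("event_date", "feature")
    .agg(F.count("*").alias("events"),
         F.countDistinct("user_id").alias("unique_users"))
)

(daily_usage.write
    .format("delta")
    .mode("overwrite")
    .saveAsTable("analytics.daily_feature_usage"))  # placeholder target table
```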

Posted 1 week ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About The Role We are seeking a motivated and detail-oriented Mid-Level Data Engineer with 2–3 years of experience in designing, developing, and optimizing data pipelines within the healthcare domain. The ideal candidate will have hands-on experience with Databricks , strong SQL skills, and a solid understanding of healthcare data standards (e.g., HL7, EDI X12 – 837/835, HCC, CPT/ICD codes). Key Responsibilities Design, develop, and maintain scalable ETL/ELT pipelines using Databricks, PySpark, and Delta Lake for large-scale healthcare datasets. Collaborate with data scientists, analysts, and product managers to understand data requirements and deliver clean, reliable data. Ingest, process, and transform healthcare-related data such as claims (837/835), EHR/EMR, provider/member, and clinical datasets. Implement data quality checks, validations, and transformations to ensure high data integrity and compliance with healthcare regulations. Optimize data pipeline performance, reliability, and cost in cloud environments (preferably Azure or AWS). Maintain documentation of data sources, data models, and transformations. Support analytics and reporting teams with curated datasets and data marts. Adhere to HIPAA and organizational standards for handling PHI and sensitive data. Assist in troubleshooting data issues and root cause analysis across systems. Required Qualifications 2–3 years of experience in a data engineering role, preferably in the healthcare or healthtech sector. Hands-on experience with Databricks, Apache Spark (PySpark), and SQL. Familiarity with Delta Lake, data lakes, and modern data architectures. Solid understanding of healthcare data standards: EDI 837/835, CPT, ICD-10, DRG, or HCC. Experience with version control (e.g., Git), CI/CD workflows, and task orchestration tools (e.g., Airflow, Azure Data Factory, dbt). Ability to work with both structured and semi-structured data (JSON, Parquet, Avro, etc.). Strong communication skills and ability to collaborate in cross-functional teams. Education Bachelor’s degree in Business Administration, Healthcare Informatics, Information Systems, or a related field.
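To illustrate the kind of pipeline and data-quality work this role describes (the claims schema, paths, table names, and rules below are invented for the example), a minimal PySpark/Delta sketch might look like this:

```python
# Minimal sketch: ingest hypothetical 837-style claim records, apply basic
# data-quality checks, and write valid rows to a curated Delta table.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

claims = spark.read.parquet("/mnt/raw/claims_837/")  # placeholder landing path

# Basic validations: required identifiers present, amounts non-negative,
# service date not in the future.
checked = claims.withColumn(
    "dq_valid",
    F.col("claim_id").isNotNull()
    & F.col("member_id").isNotNull()
    & (F.col("billed_amount") >= 0)
    & (F.col("service_date") <= F.current_date()),
)

valid = checked.filter("dq_valid")
rejected = checked.filter("NOT dq_valid")

valid.drop("dq_valid").write.format("delta").mode("append") \
    .saveAsTable("curated.claims_837")
rejected.write.format("delta").mode("append") \
    .saveAsTable("quarantine.claims_837_rejects")  # kept for root-cause analysis

print(f"loaded={valid.count()} rejected={rejected.count()}")
```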

Posted 1 week ago

Apply

14.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About the Company: We are a leading technology firm dedicated to providing innovative data solutions that empower businesses to harness the full potential of their data. Our mission is to drive success through cutting-edge technology and exceptional service, fostering a culture of collaboration, integrity, and continuous improvement. About the Role: We are hiring a Director of Databricks Engineering to lead multiple client engagements, drive project delivery, and manage high-performing engineering teams. Responsibilities: 14+ years in data engineering, including 3+ years in leadership/director-level roles Proven experience with Databricks, Delta Lake, and cloud data architecture Strong track record of project delivery, team management, and client success Excellent communication and leadership skills in fast-paced environments Oversee and deliver multiple Databricks-based data engineering projects Manage project budgets, costing, staffing, and client expectations Lead and mentor engineering teams across engagements Collaborate with clients on architecture, strategy, governance, and reporting Ensure high-quality delivery aligned with best practices and business value Qualifications: Databricks – Full-platform expertise for scalable data solutions: Strong hands-on experience with Databricks for building and managing ETL pipelines, Delta Lake, notebooks, and job orchestration. Skilled in cluster optimization, workspace management, and integrating Databricks with Azure services. Cloud – Azure (preferred), or similar cloud environments: Deep hands-on experience with Azure data services such as Azure Data Lake, Azure Synapse, Azure Data Factory, and integration with Databricks. Ability to design and deploy cloud-native data architectures. Data Engineering – Spark, PySpark, and Python for scalable data processing: Strong background in building scalable, high-performance ETL/ELT pipelines using Spark and PySpark. Ability to write optimized, production-grade Python code for data transformation, orchestration, and automation in distributed environments. Data Warehousing & SQL – Designing and querying enterprise data models: Proven experience in designing data warehouses or lakehouses, dimensional modeling, and writing complex SQL queries for analytics and reporting. Governance – Implementation and management of Unity Catalog: Hands-on experience implementing Unity Catalog for managing metadata, access control, and data lineage in Databricks. Reporting Tools – Power BI or similar (Tableau, Looker, etc.): Ability to work with business teams to build insightful dashboards and visualizations using Power BI. Required Skills: Strong hands-on experience with Databricks Deep hands-on experience with Azure data services Strong background in building scalable ETL/ELT pipelines Proven experience in designing data warehouses or lakehouses Hands-on experience implementing Unity Catalog Ability to work with business teams to build dashboards Preferred Skills: Experience with additional cloud environments Familiarity with other reporting tools Pay range and compensation package: Competitive salary based on experience and qualifications. Equal Opportunity Statement: We are committed to creating a diverse and inclusive workplace. We encourage applications from all qualified individuals, regardless of race, gender, age, sexual orientation, disability, or any other characteristic protected by law.
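As an illustrative sketch of the Unity Catalog governance work mentioned above (the catalog, schema, and group names are placeholders, and this assumes a Unity Catalog-enabled Databricks workspace with sufficient privileges), granting and reviewing access might look like this:

```python
# Minimal sketch: use SQL from a Databricks notebook to organise objects in
# Unity Catalog and grant read-only access to an analyst group.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

spark.sql("CREATE CATALOG IF NOT EXISTS finance")         # placeholder catalog
spark.sql("CREATE SCHEMA IF NOT EXISTS finance.curated")  # placeholder schema

# Grant least-privilege access: analysts can discover and read, nothing more.
spark.sql("GRANT USE CATALOG ON CATALOG finance TO `data-analysts`")
spark.sql("GRANT USE SCHEMA ON SCHEMA finance.curated TO `data-analysts`")
spark.sql("GRANT SELECT ON SCHEMA finance.curated TO `data-analysts`")

# Review effective grants as part of periodic access certification.
spark.sql("SHOW GRANTS ON SCHEMA finance.curated").show(truncate=False)
```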

Posted 1 week ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Role: Senior Data Engineer Experience: 8 to 12 Years Work Mode: Remote Shift timing: 12:30 pm to 9:30 pm IST Primary Skills: Advanced SQL Server, ETL development using SSIS, Azure Data Factory, and Databricks, Power BI, Data warehousing, Data quality and validation, General Microsoft technology stack Job Summary: We are seeking a highly skilled and experienced Senior Data Engineer to join our growing data team. The ideal candidate will have strong expertise in SQL Server, ETL development, and cloud-based data integration tools like Azure Data Factory and Databricks. This role requires a proactive mindset and a willingness to work with legacy SSIS solutions while actively contributing to modern data platform strategies. Key Responsibilities: Design, develop, and maintain robust ETL workflows using SSIS, Azure Data Factory, and Databricks. Write complex, optimized SQL Server queries for data transformation, analysis, and reporting. Perform data modeling, integration, and warehousing in support of enterprise analytics initiatives. Support and enhance existing SSIS packages; troubleshoot and resolve issues related to older ETL pipelines. Develop and publish dashboards and reports using Power BI. Implement data quality checks and validation routines to ensure the accuracy and consistency of data. Collaborate with cross-functional teams including BI developers, analysts, and business stakeholders. Contribute to the architecture, design, and implementation of scalable and secure data pipelines. Document technical designs, solutions, and processes as per organizational standards. Required Skills: 8–12 years of hands-on experience with Microsoft SQL Server and writing complex T-SQL queries. Strong ETL development experience using SSIS and Azure Data Factory. Experience with Azure Databricks for big data processing and integration. Willingness and capability to work with and improve legacy SSIS-based solutions. Strong understanding of Data Warehousing concepts, star/snowflake schema, and dimensional modeling. Proficiency in Power BI for data visualization and reporting. Strong background in data quality, cleansing, validation, and auditing. Experience with the Microsoft technology stack (Azure, SQL Server, SSRS, SSAS, etc.). Preferred Qualifications: Azure Data Engineer certification (e.g., DP-203) is a plus. Experience working in Agile/Scrum environments. Exposure to DevOps for DataOps pipelines (CI/CD for data). About IGT Solutions: IGT Solutions is a next-gen customer experience (CX) company, defining and delivering transformative experiences for the global and most innovative brands using digital technologies. With the combination of Digital and Human Intelligence, IGT becomes the preferred partner for managing end-to-end CX journeys across Travel and High Growth Tech industries. We have a global delivery footprint, spread across 30 delivery centers in China, Colombia, Egypt, India, Indonesia, Malaysia, Philippines, Romania, South Africa, Spain, UAE, the US, and Vietnam, with 25000+ CX and Technology experts from 35+ nationalities. IGT's Digital team collaborates closely with our customers’ business & technology teams to take solutions faster to market while sustaining quality, focusing on business value, and improving the overall end-customer experience. Our offerings include industry solutions as well as Digital services.
We work with leading global enterprise customers to improve synergies between business & technology by enabling rapid business value realization leveraging Digital Technologies. These include lifecycle transformation & rapid development / technology solution delivery services delivered leveraging traditional as well as Digital Technologies, deep functional understanding and software engineering expertise. IGT is ISO 27001:2013, CMMI SVC Level 5 and ISAE-3402 compliant for IT, and COPC® Certified v6.0, ISO 27001:2013 and PCI DSS 3.2 certified for BPO processes. The organization follows Six Sigma rigor for process improvements. It is our policy to provide equal employment opportunities to all individuals based on job-related qualifications and ability to perform a job, without regard to age, gender, gender identity, sexual orientation, race, color, religion, creed, national origin, disability, genetic information, veteran status, citizenship or marital status, and to maintain a non-discriminatory environment free from intimidation, harassment or bias based upon these grounds.

Posted 1 week ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

Remote

Description GPP Database Link (https://cummins365.sharepoint.com/sites/CS38534/) Job Summary Leads projects for design, development and maintenance of a data and analytics platform. Effectively and efficiently process, store and make data available to analysts and other consumers. Works with key business stakeholders, IT experts and subject-matter experts to plan, design and deliver optimal analytics and data science solutions. Works on one or many product teams at a time. Key Responsibilities Designs and automates deployment of our distributed system for ingesting and transforming data from various types of sources (relational, event-based, unstructured). Designs and implements framework to continuously monitor and troubleshoot data quality and data integrity issues. Implements data governance processes and methods for managing metadata, access, retention to data for internal and external users. Designs and provide guidance on building reliable, efficient, scalable and quality data pipelines with monitoring and alert mechanisms that combine a variety of sources using ETL/ELT tools or scripting languages. Designs and implements physical data models to define the database structure. Optimizing database performance through efficient indexing and table relationships. Participates in optimizing, testing, and troubleshooting of data pipelines. Designs, develops and operates large scale data storage and processing solutions using different distributed and cloud based platforms for storing data (e.g. Data Lakes, Hadoop, Hbase, Cassandra, MongoDB, Accumulo, DynamoDB, others). Uses innovative and modern tools, techniques and architectures to partially or completely automate the most-common, repeatable and tedious data preparation and integration tasks in order to minimize manual and error-prone processes and improve productivity. Assists with renovating the data management infrastructure to drive automation in data integration and management. Ensures the timeliness and success of critical analytics initiatives by using agile development technologies such as DevOps, Scrum, Kanban Coaches and develops less experienced team members. Responsibilities Competencies: System Requirements Engineering - Uses appropriate methods and tools to translate stakeholder needs into verifiable requirements to which designs are developed; establishes acceptance criteria for the system of interest through analysis, allocation and negotiation; tracks the status of requirements throughout the system lifecycle; assesses the impact of changes to system requirements on project scope, schedule, and resources; creates and maintains information linkages to related artifacts. Collaborates - Building partnerships and working collaboratively with others to meet shared objectives. Communicates effectively - Developing and delivering multi-mode communications that convey a clear understanding of the unique needs of different audiences. Customer focus - Building strong customer relationships and delivering customer-centric solutions. Decision quality - Making good and timely decisions that keep the organization moving forward. Data Extraction - Performs data extract-transform-load (ETL) activities from variety of sources and transforms them for consumption by various downstream applications and users using appropriate tools and technologies. 
Programming - Creates, writes and tests computer code, test scripts, and build scripts using algorithmic analysis and design, industry standards and tools, version control, and build and test automation to meet business, technical, security, governance and compliance requirements. Quality Assurance Metrics - Applies the science of measurement to assess whether a solution meets its intended outcomes using the IT Operating Model (ITOM), including the SDLC standards, tools, metrics and key performance indicators, to deliver a quality product. Solution Documentation - Documents information and solution based on knowledge gained as part of product development activities; communicates to stakeholders with the goal of enabling improved productivity and effective knowledge transfer to others who were not originally part of the initial learning. Solution Validation Testing - Validates a configuration item change or solution using the Function's defined best practices, including the Systems Development Life Cycle (SDLC) standards, tools and metrics, to ensure that it works as designed and meets customer requirements. Data Quality - Identifies, understands and corrects flaws in data that supports effective information governance across operational business processes and decision making. Problem Solving - Solves problems and may mentor others on effective problem solving by using a systematic analysis process by leveraging industry standard methodologies to create problem traceability and protect the customer; determines the assignable cause; implements robust, data-based solutions; identifies the systemic root causes and ensures actions to prevent problem reoccurrence are implemented. Values differences - Recognizing the value that different perspectives and cultures bring to an organization. Education, Licenses, Certifications College, university, or equivalent degree in relevant technical discipline, or relevant equivalent experience required. This position may require licensing for compliance with export controls or sanctions regulations. Experience Intermediate experience in a relevant discipline area is required. Knowledge of the latest technologies and trends in data engineering are highly preferred and includes: Familiarity analyzing complex business systems, industry requirements, and/or data regulations Background in processing and managing large data sets Design and development for a Big Data platform using open source and third-party tools SPARK, Scala/Java, Map-Reduce, Hive, Hbase, and Kafka or equivalent college coursework SQL query language Clustered compute cloud-based implementation experience Experience developing applications requiring large file movement for a Cloud-based environment and other data extraction tools and methods from a variety of sources Experience in building analytical solutions Intermediate Experiences In The Following Are Preferred Experience with IoT technology Experience in Agile software development Qualifications Strong programming skills in SQL, Python and PySpark for data processing and automation. Experience with Databricks and Snowflake (preferred) for building and maintaining data pipelines. Understanding of Machine Learning and AI techniques, especially for data quality and anomaly detection. Experience with cloud platforms such as Azure and AWS and familiarity with Azure Web Apps Knowledge of Data Quality and Data Governance concepts (Preferred) Nice to have: Power BI dashboard development experience. 
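The qualifications above mention applying ML and statistical techniques to data quality and anomaly detection; as a purely illustrative, self-contained sketch (the load history below is made up), a simple z-score check on daily load volumes could look like this:

```python
# Minimal sketch: flag anomalous daily row counts in a pipeline load history
# using a z-score, a lightweight first step toward automated data-quality alerts.
import statistics

daily_row_counts = [
    102_000, 98_500, 101_200, 99_800, 100_400,  # typical days (made-up history)
    100_900, 97_600, 101_800, 12_300,           # the last load looks suspicious
]

history, latest = daily_row_counts[:-1], daily_row_counts[-1]
mean = statistics.mean(history)
stdev = statistics.pstdev(history) or 1.0

z = (latest - mean) / stdev
if abs(z) > 3:  # conventional 3-sigma rule; tune per pipeline
    print(f"Anomaly: latest load {latest} rows, z-score {z:.1f} vs mean {mean:.0f}")
else:
    print(f"Load volume within expected range (z={z:.1f})")
```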
Job Systems/Information Technology Organization Cummins Inc. Role Category Remote Job Type Exempt - Experienced ReqID 2417177 Relocation Package No
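To make the data-quality and anomaly-detection responsibilities in the Cummins posting above more concrete, here is a minimal, hypothetical PySpark sketch of the kind of check such a pipeline might run; the table name, columns and failure rule are illustrative and not part of the posting.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()

# Hypothetical source table; in practice this would come from the ingestion layer.
orders = spark.read.table("raw.orders")

# Basic data-quality metrics: row count, null rate on a key column, duplicate keys.
metrics = orders.agg(
    F.count("*").alias("row_count"),
    F.sum(F.col("order_id").isNull().cast("int")).alias("null_order_ids"),
    (F.count("*") - F.countDistinct("order_id")).alias("duplicate_order_ids"),
).collect()[0]

# A simple alerting rule; a real pipeline would publish these metrics to a monitoring system.
if metrics["null_order_ids"] > 0 or metrics["duplicate_order_ids"] > 0:
    raise ValueError(f"Data-quality check failed: {metrics.asDict()}")
```

In practice the metrics would usually be written to a monitoring table or alerting service rather than raised as an exception, but the shape of the check is the same.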

Posted 1 week ago

Apply

5.0 years

0 Lacs

Bangalore Urban, Karnataka, India

Remote

About Airties At Airties we are on a mission to empower broadband operators to deliver a better-connected home experience for their subscribers. We have an exciting story to tell, and we want you to help us tell it! Airties offers broadband operators advanced Wi-Fi solutions and services to allow them to bring an improved user experience to their subscribers. The Airties portfolio includes Smart Wi-Fi software, a cloud-based experience management platform with its companion app and data engine. Our company also offers expert, bespoke engineering and testing services. Globally, Airties is the most widely deployed provider of Smart Wi-Fi solutions to network service providers, and our technologies are driving a better-connected user experience in more than 35 million homes. Introduction Airties is looking for a Field Application Engineer (FAE) in India. An FAE is the primary technical point of contact for Airties customers across APAC and Australia, accommodating different time zones. This is a multi-faceted role that supports product pre-sales, acceptance, launch and post-deployment phases, requiring communication internally and externally at all levels, providing fast and high-quality responses to customers and being the customer advocate to internal teams. What you will do: Provide onsite installations, product trials and deployments, and other professional services to Airties customers. Work with customers in the field, cooperating with Airties Sales, Project, Product and Engineering teams. Support sales efforts by explaining current products and solutions, sending samples, conducting trials and proof-of-concepts. Escalate product customization and localization needs of the customer to Product, Engineering, and Technical Support management. After product release, take responsibility for recording, tracking and handling defects and technical feedback from the customer. Conduct first-level root-cause analysis and issue replication, answer technical questions in real time on site, and escalate appropriately. Provide timely and effective resolution to support requests based on internal and external service level agreements (SLAs). Work closely with Engineering teams to investigate, assign, and resolve defects. Deploy software defect fixes at customer sites. Document each customer issue/request using the Airties ticket management system. Provide ongoing, regular updates to customers to keep them apprised of progress toward problem resolution. Respond to requests for technical information and assistance in a timely and professional manner. Provide regular reports on field services and/or tests performed. Travel to customer sites to rectify problems when/if necessary. Work with the customer's staff to train and develop operations capability on Airties' products. Support alpha and beta tests of new Airties products at customer sites. Provide feedback to Sales, Product, and Engineering teams to improve Airties products. Promote Airties products at customers and establish strong, lasting customer relationships. What you should ideally bring: Bachelor’s or higher degree in Network Engineering, EE or a similar technical field is required. 5+ years of professional experience, of which 2+ years of hands-on field experience in networking in a customer-facing role delivering professional services, is required. Excellent communication, presentation and reporting skills in English are mandatory.
This job requires extensive oral communication skills to deal with customers’ teams, and written communication skills to produce reports and technical notes for customers. Demonstration of oral and written English proficiency will be required during the application. Strong understanding of network protocols/standards: TCP, UDP, IP, Ethernet, Wi-Fi protocols, and IEEE 802.11 standards is mandatory. Knowledge of network tools like Wireshark, tcpdump, iperf, etc. Shell/Python scripting knowledge is a plus. General understanding of AWS and similar cloud technologies, along with Tableau, Databricks and Grafana. Proficient in Linux and very comfortable with the CLI in various environments. Expertise in remote access tools and applications: Telnet / SSH / SCP / TFTP / Serial Console. Experience with broadband, IPTV, and streaming video technologies is a big plus. Experience with Customer Premise Equipment devices, residential gateways, and set-top boxes is required. Familiarity with CPE management software solutions is a plus. Ability to travel at short notice is required. This position requires international travel up to 50% of work time. Airties has a zero-tolerance discrimination policy. In this regard, during the course of the evaluation of your job application and during your entire employment relationship, if any, all discriminatory factors such as race, sex, sexual orientation, social gender definitions/roles, color, national or social background, ethnicity, religion, age, disablement, political opinion or any status that is protected under law shall be totally disregarded. *By applying to this job opening, you agree, acknowledge and consent to the transfer of your personal data by Airties to outside of Turkey; in particular to its subsidiaries. *By applying to this job opening, you agree, acknowledge and consent to the transfer of your personal data by Airties to its headquarters established in Turkey.
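The Airties FAE role above expects shell/Python scripting around everyday network tooling. Purely as a rough illustration (hosts and counts are made up), a basic field connectivity check could be wrapped in a short Python script like this:

```python
import subprocess

HOSTS = ["192.168.1.1", "8.8.8.8"]  # hypothetical gateway and external reference

def ping(host: str, count: int = 4) -> bool:
    """Return True if the host answers ICMP echo requests (Linux ping syntax)."""
    result = subprocess.run(
        ["ping", "-c", str(count), "-W", "2", host],
        capture_output=True,
        text=True,
    )
    return result.returncode == 0

if __name__ == "__main__":
    for host in HOSTS:
        status = "reachable" if ping(host) else "UNREACHABLE"
        print(f"{host}: {status}")
```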

Posted 1 week ago

Apply

6.0 years

0 Lacs

Greater Delhi Area

Remote

About Tide At Tide, we are building a business management platform designed to save small businesses time and money. We provide our members with business accounts and related banking services, but also a comprehensive set of connected administrative solutions from invoicing to accounting. Launched in 2017, Tide is now used by over 1 million small businesses across the world and is available to UK, Indian and German SMEs. Headquartered in central London, with offices in Sofia, Hyderabad, Delhi, Berlin and Belgrade, Tide employs over 2,000 employees. Tide is rapidly growing, expanding into new products and markets and always looking for passionate and driven people. Join us in our mission to empower small businesses and help them save time and money. About The Role A talented Data Scientist (DS-3) with generative AI expertise to contribute to the development and implementation of advanced AI solutions. You'll work on generative AI projects, applying your skills in LangChain, LangGraph, SQL, Python, AWS Bedrock, and Databricks to solve business challenges as part of our innovative team. As a Lead Data Scientist You’ll Be Implementation & Development: Implement generative AI solutions leveraging AWS Bedrock and Databricks based on established architectural patterns Technical Execution: Apply your expertise in generative AI, LLMs, and related technologies to deliver high-quality solutions Cross-Functional Collaboration: Work with product, engineering, and business teams to understand requirements and implement effective AI solutions Statistical Analysis: Apply appropriate statistical methods to validate models and hypotheses Data Analysis: Conduct thorough exploratory data analysis and handle data quality issues effectively Feature Engineering: Implement feature engineering techniques for ML models, including dimensionality reduction and feature selection Model Development: Build and tune generative AI models using frameworks like LangChain and LangGraph with proper regularization techniques Intent Classification: Implement intent detection systems combining LLMs with data analysis techniques Response Generation: Develop components for generating structured responses by combining AI with business rules Continuous Learning: Stay current with generative AI techniques and apply new approaches to improve solutions Documentation & Knowledge Sharing: Create clear documentation and share knowledge with team members Testing & Validation: Thoroughly test generative AI applications to ensure they meet requirements and quality standards What Are We Looking For 6+ years of experience in data science, machine learning, or related field 1+ years of hands-on experience with generative AI, large language models (LLMs), or natural language processing Experience with traditional NLP models and techniques Strong proficiency in Python and SQL for data retrieval and processing Solid understanding of statistics, hypothesis testing, and probability distributions Experience with LangChain, LangGraph for implementing AI workflows Understanding of prompt engineering, RAG (Retrieval Augmented Generation), and fine-tuning techniques for LLMs Experience implementing intent detection systems using generative AI Familiarity with rule-based systems to augment AI capabilities Experience with cloud platforms, particularly AWS services Familiarity with Knowledge Graphs for LLM use cases Knowledge of evaluation approaches for generative AI outputs Experience with Databricks or similar data processing platforms Nice To Have 
Advanced degree in Computer Science, Data Science, Machine Learning, or a related field Experience in customer support AI applications Knowledge of vector databases and embedding techniques Familiarity with semantic search technologies Experience with data pipelines and ETL processes Knowledge of software development best practices including version control Contributions to data science or AI projects or communities What You Will Get In Return Make work, work for you! We are embracing new ways of working and support flexible working arrangements. With our Working Out of Office (WOO) policy, our colleagues can work remotely from home or anywhere in their assigned Indian state. Additionally, you can work from a different country or Indian state for 90 days of the year. Plus, you’ll get: Competitive salary Self & Family Health Insurance Term & Life Insurance OPD Benefits Mental wellbeing through Plumm Learning & Development Budget WFH Setup allowance 15 days of Privilege leaves 12 days of Casual leaves 12 days of Sick leaves 3 paid days off for volunteering or L&D activities Stock Options TIDEAN WAYS OF WORKING At Tide, we champion a flexible workplace model that supports both in-person and remote work to cater to the specific needs of our different teams. While remote work is supported, we believe in the power of face-to-face interactions to foster team spirit and collaboration. Our offices are designed as hubs for innovation and team-building, where we encourage regular in-person gatherings to foster a strong sense of community. TIDE IS A PLACE FOR EVERYONE At Tide, we believe that we can only succeed if we let our differences enrich our culture. Our Tideans come from a variety of backgrounds and experience levels. We consider everyone irrespective of their ethnicity, religion, sexual orientation, gender identity, family or parental status, national origin, veteran, neurodiversity or differently-abled status. We celebrate diversity in our workforce as a cornerstone of our success. Our commitment to a broad spectrum of ideas and backgrounds is what enables us to build products that resonate with our members’ diverse needs and lives. We are One Team and foster a transparent and inclusive environment, where everyone’s voice is heard. At Tide, we thrive on diversity, embracing various backgrounds and experiences. We welcome all individuals regardless of ethnicity, religion, sexual orientation, gender identity, or disability. Our inclusive culture is key to our success, helping us build products that meet our members' diverse needs. We are One Team, committed to transparency and ensuring everyone’s voice is heard. Your personal data will be processed by Tide for recruitment purposes and in accordance with Tide's Recruitment Privacy Notice.
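As an illustration of the retrieval-augmented generation (RAG) pattern with AWS Bedrock that the Tide posting above mentions, here is a minimal sketch using boto3. The model ID, region, retrieved context and request payload shape are assumptions made for the example and would differ in a real stack (which, per the posting, also uses LangChain/LangGraph).

```python
import json
import boto3

# Hypothetical retrieved context; in a real RAG setup this would come from a vector store.
retrieved_chunks = [
    "Tide business accounts support scheduled payments.",
    "Invoices can be exported to accounting software.",
]
question = "Can I schedule a payment from my business account?"

prompt = (
    "Answer the question using only the context below.\n\n"
    "Context:\n" + "\n".join(retrieved_chunks) + f"\n\nQuestion: {question}"
)

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Model ID and body format are illustrative; they depend on the model enabled in the account.
response = bedrock.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 300,
        "messages": [{"role": "user", "content": prompt}],
    }),
)
print(json.loads(response["body"].read())["content"][0]["text"])
```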

Posted 1 week ago

Apply

4.0 - 12.0 years

10 - 30 Lacs

Hyderabad, Pune, Delhi / NCR

Work from Office

Roles and Responsibilities : Design, develop, test, deploy and maintain large-scale data pipelines using PySpark on Microsoft Azure platform. Collaborate with cross-functional teams to gather requirements and deliver high-quality data solutions that meet business needs. Develop complex SQL queries to extract insights from Snowflake databases and integrate them into the pipeline. Troubleshoot issues related to data processing, storage, and retrieval in real-time. Job Requirements : 4-12 years of experience in designing and developing big data architectures on cloud platforms like Microsoft Azure. Strong expertise in working with PySpark for building scalable data pipelines. Proficiency in writing complex SQL queries for querying large datasets stored in Snowflake databases.
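For context, a stripped-down PySpark step of the kind this Azure/Snowflake role describes might look like the sketch below. The storage path, Snowflake connection options and table names are placeholders, and the "snowflake" format shorthand assumes the Snowflake Spark connector available on Databricks (the fully qualified connector class name may be needed elsewhere).

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-pipeline").getOrCreate()

# Hypothetical ADLS Gen2 path; authentication is assumed to be configured on the cluster.
raw = spark.read.json("abfss://landing@examplestorage.dfs.core.windows.net/orders/")

daily = (
    raw.withColumn("order_date", F.to_date("order_ts"))
       .groupBy("order_date", "country")
       .agg(F.sum("amount").alias("revenue"), F.count("*").alias("orders"))
)

# Writing through the Snowflake Spark connector; connection options are placeholders.
sf_options = {
    "sfURL": "example.snowflakecomputing.com",
    "sfUser": "etl_user",
    "sfPassword": "***",
    "sfDatabase": "ANALYTICS",
    "sfSchema": "PUBLIC",
    "sfWarehouse": "ETL_WH",
}
daily.write.format("snowflake").options(**sf_options) \
     .option("dbtable", "DAILY_ORDERS").mode("overwrite").save()
```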

Posted 1 week ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

About Tide At Tide, we are building a business management platform designed to save small businesses time and money. We provide our members with business accounts and related banking services, but also a comprehensive set of connected administrative solutions from invoicing to accounting. Launched in 2017, Tide is now used by over 1 million small businesses across the world and is available to UK, Indian and German SMEs. Headquartered in central London, with offices in Sofia, Hyderabad, Delhi, Berlin and Belgrade, Tide employs over 2,000 employees. Tide is rapidly growing, expanding into new products and markets and always looking for passionate and driven people. Join us in our mission to empower small businesses and help them save time and money. About The Role A talented Data Scientist (DS-3) with generative AI expertise to contribute to the development and implementation of advanced AI solutions. You'll work on generative AI projects, applying your skills in LangChain, LangGraph, SQL, Python, AWS Bedrock, and Databricks to solve business challenges as part of our innovative team. As a Lead Data Scientist You’ll Be Implementation & Development: Implement generative AI solutions leveraging AWS Bedrock and Databricks based on established architectural patterns Technical Execution: Apply your expertise in generative AI, LLMs, and related technologies to deliver high-quality solutions Cross-Functional Collaboration: Work with product, engineering, and business teams to understand requirements and implement effective AI solutions Statistical Analysis: Apply appropriate statistical methods to validate models and hypotheses Data Analysis: Conduct thorough exploratory data analysis and handle data quality issues effectively Feature Engineering: Implement feature engineering techniques for ML models, including dimensionality reduction and feature selection Model Development: Build and tune generative AI models using frameworks like LangChain and LangGraph with proper regularization techniques Intent Classification: Implement intent detection systems combining LLMs with data analysis techniques Response Generation: Develop components for generating structured responses by combining AI with business rules Continuous Learning: Stay current with generative AI techniques and apply new approaches to improve solutions Documentation & Knowledge Sharing: Create clear documentation and share knowledge with team members Testing & Validation: Thoroughly test generative AI applications to ensure they meet requirements and quality standards What Are We Looking For 6+ years of experience in data science, machine learning, or related field 1+ years of hands-on experience with generative AI, large language models (LLMs), or natural language processing Experience with traditional NLP models and techniques Strong proficiency in Python and SQL for data retrieval and processing Solid understanding of statistics, hypothesis testing, and probability distributions Experience with LangChain, LangGraph for implementing AI workflows Understanding of prompt engineering, RAG (Retrieval Augmented Generation), and fine-tuning techniques for LLMs Experience implementing intent detection systems using generative AI Familiarity with rule-based systems to augment AI capabilities Experience with cloud platforms, particularly AWS services Familiarity with Knowledge Graphs for LLM use cases Knowledge of evaluation approaches for generative AI outputs Experience with Databricks or similar data processing platforms Nice To Have 
Advanced degree in Computer Science, Data Science, Machine Learning, or a related field Experience in customer support AI applications Knowledge of vector databases and embedding techniques Familiarity with semantic search technologies Experience with data pipelines and ETL processes Knowledge of software development best practices including version control Contributions to data science or AI projects or communities What You Will Get In Return Make work, work for you! We are embracing new ways of working and support flexible working arrangements. With our Working Out of Office (WOO) policy, our colleagues can work remotely from home or anywhere in their assigned Indian state. Additionally, you can work from a different country or Indian state for 90 days of the year. Plus, you’ll get: Competitive salary Self & Family Health Insurance Term & Life Insurance OPD Benefits Mental wellbeing through Plumm Learning & Development Budget WFH Setup allowance 15 days of Privilege leaves 12 days of Casual leaves 12 days of Sick leaves 3 paid days off for volunteering or L&D activities Stock Options TIDEAN WAYS OF WORKING At Tide, we champion a flexible workplace model that supports both in-person and remote work to cater to the specific needs of our different teams. While remote work is supported, we believe in the power of face-to-face interactions to foster team spirit and collaboration. Our offices are designed as hubs for innovation and team-building, where we encourage regular in-person gatherings to foster a strong sense of community. TIDE IS A PLACE FOR EVERYONE At Tide, we believe that we can only succeed if we let our differences enrich our culture. Our Tideans come from a variety of backgrounds and experience levels. We consider everyone irrespective of their ethnicity, religion, sexual orientation, gender identity, family or parental status, national origin, veteran, neurodiversity or differently-abled status. We celebrate diversity in our workforce as a cornerstone of our success. Our commitment to a broad spectrum of ideas and backgrounds is what enables us to build products that resonate with our members’ diverse needs and lives. We are One Team and foster a transparent and inclusive environment, where everyone’s voice is heard. At Tide, we thrive on diversity, embracing various backgrounds and experiences. We welcome all individuals regardless of ethnicity, religion, sexual orientation, gender identity, or disability. Our inclusive culture is key to our success, helping us build products that meet our members' diverse needs. We are One Team, committed to transparency and ensuring everyone’s voice is heard. Your personal data will be processed by Tide for recruitment purposes and in accordance with Tide's Recruitment Privacy Notice.
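The Tide role also calls for intent detection systems that combine LLMs with rule-based logic. A hypothetical sketch of that pattern is below: unambiguous messages are matched by deterministic rules first, and only the remainder go to a model. The intent labels, regex rules and the call_llm stub are invented for illustration and are not Tide's implementation.

```python
import re

INTENTS = ["card_issue", "invoice_query", "account_balance", "other"]

# Deterministic rules handle unambiguous phrases before any model call.
RULES = {
    "card_issue": re.compile(r"\bcard (lost|stolen|blocked)\b", re.I),
    "account_balance": re.compile(r"\bbalance\b", re.I),
}

def call_llm(prompt: str) -> str:
    """Placeholder for an LLM call (e.g. via Bedrock or another provider)."""
    raise NotImplementedError

def classify_intent(message: str) -> str:
    for intent, pattern in RULES.items():
        if pattern.search(message):
            return intent
    prompt = (
        "Classify the customer message into one of: "
        + ", ".join(INTENTS)
        + f".\nMessage: {message}\nAnswer with the label only."
    )
    label = call_llm(prompt).strip().lower()
    return label if label in INTENTS else "other"
```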

Posted 1 week ago

Apply

2.0 - 4.0 years

0 Lacs

Greater Hyderabad Area

On-site

Expertise in AWS services like EC2, CloudFormation, S3, IAM, SNS, SQS, EMR, Athena, Glue, Lake Formation, etc. Expertise in Hadoop/EMR/Databricks with good debugging skills to resolve Hive- and Spark-related issues. Sound fundamentals of database concepts and experience with relational or non-relational database types such as SQL, Key-Value, Graphs, etc. Experience in infrastructure provisioning using CloudFormation, Terraform, Ansible, etc. Experience in programming languages such as Python/PySpark. Excellent written and verbal communication skills. Key Responsibilities Working closely with the Data Lake engineers to provide technical guidance, consultation and resolution of their queries. Assist in the development of simple and advanced analytics best practices, processes, technology & solution patterns and automation (including CI/CD). Working closely with various stakeholders in the US team with a collaborative approach. Develop data pipelines in Python/PySpark to be executed in the AWS cloud. Set up analytics infrastructure in AWS using CloudFormation templates. Develop mini/micro-batch and streaming ingestion patterns using Kinesis/Kafka. Seamlessly upgrade the application to higher versions, such as Spark/EMR upgrades. Participates in the code reviews of the developed modules and applications. Provides inputs for the formulation of best practices for ETL processes/jobs written in programming languages such as PySpark, and for BI processes. Working with column-oriented data storage formats such as Parquet, interactive query services such as Athena, and event-driven computing cloud services such as Lambda. Performing R&D with respect to the latest Big Data technologies in the market, performing comparative analysis and providing recommendations to choose the best tool as per the current and future needs of the enterprise. Required Qualifications Bachelor's or Master's degree in Computer Science or a similar field 2-4 years of strong experience in big data development Expertise in AWS services like EC2, CloudFormation, S3, IAM, SNS, SQS, EMR, Athena, Glue, Lake Formation, etc. Expertise in Hadoop/EMR/Databricks with good debugging skills to resolve Hive- and Spark-related issues. Sound fundamentals of database concepts and experience with relational or non-relational database types such as SQL, Key-Value, Graphs, etc. Experience in infrastructure provisioning using CloudFormation, Terraform, Ansible, etc. Experience in programming languages such as Python/PySpark. Excellent written and verbal communication skills. Preferred Qualifications Cloud certification (AWS, Azure or GCP) About Our Company Ameriprise India LLP has been providing client-based financial solutions to help clients plan and achieve their financial objectives for 125 years. We are a U.S.-based financial planning company headquartered in Minneapolis with a global presence. The firm’s focus areas include Asset Management and Advice, Retirement Planning and Insurance Protection. Be part of an inclusive, collaborative culture that rewards you for your contributions and work with other talented individuals who share your passion for doing great work. You’ll also have plenty of opportunities to make your mark at the office and a difference in your community. So if you're talented, driven and want to work for a strong ethical company that cares, take the next step and create a career at Ameriprise India LLP. Ameriprise India LLP is an equal opportunity employer.
We consider all qualified applicants without regard to race, color, religion, sex, genetic information, age, sexual orientation, gender identity, disability, veteran status, marital status, family status or any other basis prohibited by law. Full-Time/Part-Time Full time Timings (2:00p-10:30p) India Business Unit AWMPO AWMP&S President's Office Job Family Group Technology
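As a rough sketch of the Parquet/Athena workflow the Ameriprise posting above describes, the snippet below writes partitioned Parquet with PySpark and then starts an Athena query through boto3. Bucket names, database and table names are placeholders, and this is an illustration rather than the team's actual pipeline.

```python
import boto3
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("events-to-parquet").getOrCreate()

# Hypothetical source and destination; partitioning by date keeps Athena scans cheap.
events = spark.read.json("s3://example-raw-bucket/events/")
events.write.mode("overwrite").partitionBy("event_date") \
      .parquet("s3://example-curated-bucket/events/")

# Querying the curated data with Athena (database, table and output location are placeholders).
athena = boto3.client("athena", region_name="us-east-1")
run = athena.start_query_execution(
    QueryString="SELECT event_date, COUNT(*) AS events FROM analytics.events GROUP BY event_date",
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print("Athena query started:", run["QueryExecutionId"])
```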

Posted 1 week ago

Apply

3.0 - 7.0 years

7 - 7 Lacs

Cochin

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. The opportunity As a Senior BI Consultant, you will be responsible for supporting and enhancing Business Intelligence and Data Analytics platforms with a primary focus on Power BI and Databricks. You will work across global engagements, helping clients translate complex data into actionable insights. This role involves day-to-day application management, dashboard development, troubleshooting, and stakeholder collaboration to ensure high data quality, performance, and availability. Your key responsibilities BI Support & Monitoring: Provide daily application support for Power BI dashboards and Databricks pipelines, resolving incidents, fulfilling service requests, and implementing enhancements. Dashboard Development: Design, develop, and maintain Power BI reports and data models tailored to evolving business requirements. Root Cause Analysis: Investigate and resolve data/reporting issues, bugs, and performance bottlenecks through detailed root cause analysis. Requirement Gathering: Collaborate with business users and technical stakeholders to define BI requirements and translate them into scalable solutions. Documentation: Maintain technical documentation, including data flows, dashboard usage guides, and QA test scripts. On-Call & Shift Support: Participate in shift rotations and be available for on-call support for critical business scenarios. Integration & Data Modeling: Ensure effective data integration from diverse systems and maintain clean, performant data models within Power BI and Databricks. Skills and attributes for success Hands-on expertise in Power BI, including DAX, data modeling, and report optimization Working experience in Databricks, especially with Delta Lake, SQL, and PySpark for data transformation Familiarity with ETL/ELT design, especially within Azure data ecosystems Ability to troubleshoot BI performance issues and manage service tickets efficiently Strong communication skills to interact with global stakeholders and cross-functional teams Ability to manage and prioritize multiple support tasks in a fast-paced environment To qualify for the role, you must have 3–7 years of experience in Business Intelligence and Application Support Strong hands-on skills in Power BI and Databricks, preferably in a global delivery model Working knowledge of ETL processes, data validation, and performance tuning Familiarity with ITSM practices for service request, incident, and change management Willingness to work in rotational shifts and support on-call requirements Bachelor’s degree in Computer Science, Engineering, or equivalent work experience Willingness to work in a 24x7 rotational shift-based support environment. 
No location constraints Technologies and Tools Must haves Power BI : Expertise in report design, data modelling, and DAX Databricks : Experience with notebooks, Delta Lake, SQL, and PySpark Azure Ecosystem : Familiarity with Azure Data Lake and Azure Synapse (consumer layer) ETL & Data Modelling : Good understanding of data integration and modelling best practices ITSM Tools : Experience with ServiceNow or equivalent for ticketing and change management Good to have Data Integration : Experience integrating with ERP, CRM, or POS systems Python : For data transformation and automation scripting Monitoring : Awareness of Azure Monitor or Log Analytics for pipeline health Certifications : Microsoft Certified Data Analyst Associate or Databricks Certified Data Engineer Associate Industry Exposure : Experience in retail or consumer goods industries What we look for People with client orientation, experience and enthusiasm to learn new things in this fast-moving environment. An opportunity to be a part of a market-leading, multi-disciplinary team of hundreds of professionals. Opportunities to work with EY BI application maintenance, practices globally with leading businesses across a range of industries. What we offer EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career. Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next. Success as defined by you : We’ll provide the tools and flexibility, so you can make a meaningful impact, your way. Transformative leadership : We’ll give you the insights, coaching and confidence to be the leader the world needs. Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs. About EY EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today. If you can demonstrate that you meet the criteria above, please contact us as soon as possible. The exceptional EY experience. It’s yours to build. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. 
Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
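A minimal example of the Databricks-to-Power BI pattern referenced in the EY role above: read a Delta table, build a slim aggregate, and publish it as a table that a Power BI dataset can refresh. Paths, schema and table names are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical Delta table maintained by an upstream Databricks pipeline.
sales = spark.read.format("delta").load("/mnt/datalake/curated/sales")

# A slim aggregate table is often easier for Power BI to refresh than the full fact table.
summary = (
    sales.groupBy("region", F.date_trunc("month", "sale_date").alias("month"))
         .agg(F.sum("net_amount").alias("net_revenue"))
)

summary.write.format("delta").mode("overwrite") \
       .saveAsTable("reporting.monthly_sales_summary")
```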

Posted 1 week ago

Apply

7.0 years

1 - 4 Lacs

Hyderābād

On-site

We are seeking a senior data services engineer with experience in Databricks to join our innovative development team. The ideal candidate will have a robust background in Java development, distributed computing, and big data technologies, with a focus on Databricks. Proficiency in cloud platforms like Azure and data warehousing solutions such as Snowflake is essential. This role offers the opportunity to lead the design and implementation of cutting-edge data solutions that drive business intelligence and analytics. Key Responsibilities: Lead the design, development, and deployment of scalable applications integrated with Databricks for big data processing. Architect and optimize distributed computing solutions using Apache Spark within the Databricks environment to handle large-scale datasets efficiently. Implement and manage a data lakehouse architecture using Databricks, ensuring data integrity, security, and accessibility. Develop and maintain ETL pipelines using Databricks and Apache Spark, automating data workflows to improve processing efficiency. Provide technical leadership in system architecture design, making informed decisions to meet business requirements and scalability needs. Optimize cloud resource utilization on platforms like Azure, ensuring cost-effective and reliable data processing solutions. Must-Have Skills: Bachelor's degree in Computer Science, Information Technology, or a related field Minimum of 7 years of hands-on experience in Java development, with a deep understanding of object-oriented principles Extensive experience working with Databricks, including designing and implementing data pipelines, managing clusters, and utilizing Databricks notebooks Proficiency in distributed computing frameworks, particularly Apache Spark, within the Databricks environment Strong experience with cloud platforms, especially Microsoft Azure, including services like Azure Data Lake Storage (ADLS) and Azure Data Factory Solid understanding of data warehousing concepts and hands-on experience with Snowflake Experience with version control systems such as Git and familiarity with CI/CD pipelines Excellent problem-solving skills, attention to detail and the ability to work effectively in a collaborative team environment Good-to-Have Skills: Experience with additional programming languages such as Python or Scala Knowledge of containerization technologies like Docker and orchestration tools like Kubernetes Understanding of agile development methodologies and experience working in Agile teams Familiarity with monitoring and logging tools to ensure application reliability and performance Contributions to open-source projects or active participation in relevant technical communities
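Although this role is Java-centric, the Databricks lakehouse steps it describes are often easiest to sketch in PySpark. The bronze-to-silver example below is illustrative only, with placeholder paths and columns, and shows the kind of deduplication and cleansing step such a pipeline might contain.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Illustrative bronze-to-silver step in a lakehouse layout; paths are placeholders.
bronze = spark.read.format("delta").load(
    "abfss://lake@example.dfs.core.windows.net/bronze/transactions"
)

silver = (
    bronze.dropDuplicates(["transaction_id"])
          .filter(F.col("amount").isNotNull())
          .withColumn("ingest_date", F.current_date())
)

silver.write.format("delta").mode("append") \
      .save("abfss://lake@example.dfs.core.windows.net/silver/transactions")
```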

Posted 1 week ago

Apply

8.0 - 13.0 years

7 - 8 Lacs

Hyderābād

On-site

Join Amgen’s Mission of Serving Patients At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas – Oncology, Inflammation, General Medicine, and Rare Disease – we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career. What you will do Let’s do this. Let’s change the world. In this vital role you will be responsible for developing and maintaining the overall data architecture and integration with the Amgen Clinical Trial and Registry System (CTRS). This role involves defining the data integration vision, creating roadmaps, and ensuring that IT strategies align with business goals. The role will work closely with team members to understand requirements, develop data integration blueprints, and ensure that solutions are scalable, secure, and aligned with enterprise standards. Roles & Responsibilities: Collaborate with business and data team members to gather requirements, assess data needs, and ensure data quality, privacy, and compliance across ETL pipelines. Develop, monitor, and maintain CI/CD-enabled ETL pipelines using Databricks and AWS services (S3, Glue, Lambda, EMR, etc.). Optimize ETL jobs for performance, scalability, cost-efficiency, and data quality using standard methodologies for data ingestion, transformation, and loading. Identify and resolve pipeline bottlenecks or failures to maintain data pipeline health. Partner with MDM and Reference Data teams to enforce data standards and reusability across the ecosystem. Maintain user documentation for data definitions, flows, lineage, and quality rules to support audit readiness and data governance. Ensure alignment to data governance policies and contribute to improvement initiatives around data standardization and harmonization. Engage in Agile/SAFe ceremonies, provide story estimates, and deliver user stories related to data platform enhancements. Lead design discussions, sprint reviews, and retrospectives, contributing to continuous improvement of the ETL platform and framework. Conduct and share data profiling, quality assessments, and performance analysis using Databricks tools and AWS monitoring services. Automate deployment and migration of ETL jobs across development, QA, and production environments using version-controlled CI/CD practices. Research and implement new technologies or frameworks that improve the robustness, flexibility, or efficiency of the data pipelines. Partner with business teams to identify compliance requirements for data privacy, security, and regulatory policies for the assigned domains. Build strong relationships with key business leads and partners to ensure their needs are met. Be a key team member that assists in the design and development of the data pipeline for the Veeva Vault platform.
Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks. Identify and resolve complex data-related challenges. Explore new tools and technologies that will help to improve ETL platform performance. Participate in sprint planning meetings and provide estimations on technical implementation. Work with data engineers on data quality assessment, data cleansing and data analytics. Share and discuss findings with team members practicing the SAFe Agile delivery model. Automate and optimize the data pipeline and framework for an easier and more cost-effective development process. Advise and support project teams (project managers, architects, business analysts, and developers) on cloud platforms (AWS, Databricks preferred), tools, technology, and methodology related to the design and build of scalable, efficient and maintainable Data Lake and other Big Data solutions. Experience developing in an Agile development environment, and comfortable with Agile terminology and ceremonies. What we expect of you We are all different, yet we all use our unique contributions to serve patients. The professional we seek will have these qualifications. Basic Qualifications: Doctorate degree / Master's degree / Bachelor's degree and 8 to 13 years of experience in Computer Science, IT or a related field Preferred Qualifications: Functional Skills: Must-Have Skills Strong knowledge of Data Lake technologies like Databricks, etc. Experience in MuleSoft, Python scripting and REST/RESTful API development. Experience in clinical system integration. Extensive knowledge of enterprise architecture frameworks and technologies such as AWS, and methodologies such as SOA (Service-Oriented Architecture). Experience with system integration and IT infrastructure. Experience with data, change, and technology governance processes at the platform level. Experience working in agile methodology, including Product Teams and Product Development models, leading Scrum teams using Jira and Confluence, and adopting Agile methodologies like Kanban and Lean for product development. Proficiency in designing scalable, secure, and cost-effective solutions, including cost optimization using AWS Cost Explorer. Stakeholder and team management skills; able to lead and guide multiple teams to meet business needs and goals. Good-to-Have Skills: Good knowledge of the global pharmaceutical industry Understanding of GxP processes Strong solution design and problem-solving skills Strong understanding of technology, function, or platform Experience in developing differentiated and deliverable solutions Ability to analyze client requirements and translate them into solutions Willingness to work late hours Professional Certifications: SAFe – DevOps Practitioner (preferred) SAFe for Teams (preferred) Soft Skills: Excellent critical-thinking and problem-solving skills Strong communication and collaboration skills Demonstrated awareness of how to function in a team setting Demonstrated awareness of presentation skills Shift Information: This position requires you to work a later shift and may be assigned a second or third shift schedule. Candidates must be willing and able to work during evening or night shifts, as required based on business requirements.
Equal opportunity statement Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation. What you can expect of us As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com
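One common way to make the "CI/CD-enabled ETL pipelines using Databricks and AWS services" described in the Amgen posting event-driven is to have a small AWS Lambda function trigger a Databricks job when new data lands. The sketch below is an assumption-laden illustration, not Amgen's implementation: the environment variables, job ID and Jobs API version would depend on the workspace.

```python
import json
import os
import urllib.request

# Hypothetical environment variables configured on the Lambda function.
DATABRICKS_HOST = os.environ["DATABRICKS_HOST"]    # e.g. https://adb-123.azuredatabricks.net
DATABRICKS_TOKEN = os.environ["DATABRICKS_TOKEN"]
JOB_ID = int(os.environ["DATABRICKS_JOB_ID"])

def handler(event, context):
    """Trigger a Databricks job run when a new file lands in S3 (event-driven ingestion)."""
    payload = json.dumps({"job_id": JOB_ID}).encode("utf-8")
    req = urllib.request.Request(
        f"{DATABRICKS_HOST}/api/2.1/jobs/run-now",
        data=payload,
        headers={
            "Authorization": f"Bearer {DATABRICKS_TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```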

Posted 1 week ago

Apply

0 years

6 - 8 Lacs

Hyderābād

On-site

Ready to shape the future of work? At Genpact, we don’t just adapt to change—we drive it. AI and digital innovation are redefining industries, and we’re leading the charge. Genpact’s AI Gigafactory, our industry-first accelerator, is an example of how we’re scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies’ most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that’s shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook. Inviting applications for the role of Consultant - Data Engineer - Databricks! In this role, the Databricks Developer is responsible for solving real-world, cutting-edge problems to meet both functional and non-functional requirements. You will be part of the data integrity/analysis team in the banking and financial domain. You will be responsible for independently building data analysis around complex business problems from data available in client-owned or accessible systems. For these tasks, you will be expected to understand the business ask/problem; assess the scope, quantity and quality of the available data; prepare and build the code using PySpark/Databricks and Python; and load data into the DWH and Data Mart for downstream consumption teams. Responsibilities Extensive hands-on experience with Python (PySpark) and PySpark with SQL, including working with RDDs, StructTypes and more in PySpark. Exposure to working in Databricks notebooks for PySpark and PySpark-with-SQL coding. Good hands-on experience working with AWS services from Python. Experience with cloud technologies like AWS (S3, Redshift, SNS). Expertise in developing ETL and batch processes to support data movement. Candidates should have good communication skills and be self-driven; they may work independently on their own deliverables and discussion points with the onshore customer. Qualifications we seek in you! Minimum Qualifications / Skills Degree (BE, B.Sc.) Preferred Qualifications Candidates must have good communication and client-handling skills. Why join Genpact? Be a transformation leader – Work at the cutting edge of AI, automation, and digital innovation Make an impact – Drive change for global enterprises and solve business challenges that matter Accelerate your career – Get hands-on experience, mentorship, and continuous learning opportunities Work with the best – Join 140,000+ bold thinkers and problem-solvers who push boundaries every day Thrive in a values-driven culture – Our courage, curiosity, and incisiveness - built on a foundation of integrity and inclusion - allow your ideas to fuel progress Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let’s build tomorrow together.
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color , religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training. Job Consultant Primary Location India-Hyderabad Schedule Full-time Education Level Bachelor's / Graduation / Equivalent Job Posting Jul 21, 2025, 1:56:27 AM Unposting Date Ongoing Master Skills List Digital Job Category Full Time
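The Genpact role above emphasises RDDs and StructTypes in PySpark. A tiny, self-contained example of defining an explicit schema and building a DataFrame from an RDD is shown below; the columns and values are made up purely for illustration.

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("explicit-schema-demo").getOrCreate()

# An explicit schema avoids costly inference and catches malformed records early.
schema = StructType([
    StructField("account_id", StringType(), nullable=False),
    StructField("txn_type", StringType(), nullable=True),
    StructField("amount", DoubleType(), nullable=True),
])

# Building the DataFrame from an RDD, as the posting's mention of RDDs suggests.
rows = [("A-100", "credit", 2500.0), ("A-101", "debit", 120.5)]
df = spark.createDataFrame(spark.sparkContext.parallelize(rows), schema)

df.groupBy("txn_type").sum("amount").show()
```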

Posted 1 week ago

Apply

2.0 - 3.0 years

0 Lacs

Telangana

On-site

About Chubb Chubb is a world leader in insurance. With operations in 54 countries and territories, Chubb provides commercial and personal property and casualty insurance, personal accident and supplemental health insurance, reinsurance and life insurance to a diverse group of clients. The company is defined by its extensive product and service offerings, broad distribution capabilities, exceptional financial strength and local operations globally. Parent company Chubb Limited is listed on the New York Stock Exchange (NYSE: CB) and is a component of the S&P 500 index. Chubb employs approximately 40,000 people worldwide. Additional information can be found at: www.chubb.com . About Chubb India At Chubb India, we are on an exciting journey of digital transformation driven by a commitment to engineering excellence and analytics. We are proud to share that we have been officially certified as a Great Place to Work® for the third consecutive year, a reflection of the culture at Chubb where we believe in fostering an environment where everyone can thrive, innovate, and grow With a team of over 2500 talented professionals, we encourage a start-up mindset that promotes collaboration, diverse perspectives, and a solution-driven attitude. We are dedicated to building expertise in engineering, analytics, and automation, empowering our teams to excel in a dynamic digital landscape. We offer an environment where you will be part of an organization that is dedicated to solving real-world challenges in the insurance industry. Together, we will work to shape the future through innovation and continuous learning. Role : ML Engineer (Associate / Senior) Experience : 2-3 Years (Associate) 4-5 Years (Senior) Mandatory Skill: Python/MLOps/Docker and Kubernetes/FastAPI or Flask/CICD/Jenkins/Spark/SQL/RDB/Cosmos/Kafka/ADLS/API/Databricks Location: Bangalore Notice Period: less than 60 Days Job Description: Other Skills: Azure/LLMOps/ADF/ETL We are seeking a talented and passionate Machine Learning Engineer to join our team and play a pivotal role in developing and deploying cutting-edge machine learning solutions. You will work closely with other engineers and data scientists to bring machine learning models from proof-of-concept to production, ensuring they deliver real-world impact and solve critical business challenges. Collaborate with data scientists, model developers, software engineers, and other stakeholders to translate business needs into technical solutions. Experience of having deployed ML models to production Create high performance real-time inferencing APIs and batch inferencing pipelines to serve ML models to stakeholders. Integrate machine learning models seamlessly into existing production systems. Continuously monitor and evaluate model performance and retrain the models automatically or periodically Streamline existing ML pipelines to increase throughput. Identify and address security vulnerabilities in existing applications proactively. Design, develop, and implement machine learning models for preferably insurance related applications. Well versed with Azure ecosystem Knowledge of NLP and Generative AI techniques. Relevant experience will be a plus. Knowledge of machine learning algorithms and libraries (e.g., TensorFlow, PyTorch) will be a plus. Stay up-to-date on the latest advancements in machine learning and contribute to ongoing innovation within the team. Why Chubb? Join Chubb to be part of a leading global insurance company! 
Our constant focus on employee experience along with a start-up-like culture empowers you to achieve impactful results. Industry leader: Chubb is a world leader in the insurance industry, powered by underwriting and engineering excellence A Great Place to work: Chubb India has been recognized as a Great Place to Work® for the years 2023-2024, 2024-2025 and 2025-2026 Laser focus on excellence: At Chubb we pride ourselves on our culture of greatness where excellence is a mindset and a way of being. We constantly seek new and innovative ways to excel at work and deliver outstanding results Start-Up Culture: Embracing the spirit of a start-up, our focus on speed and agility enables us to respond swiftly to market requirements, while a culture of ownership empowers employees to drive results that matter Growth and success: As we continue to grow, we are steadfast in our commitment to provide our employees with the best work experience, enabling them to advance their careers in a conducive environment Employee Benefits Our company offers a comprehensive benefits package designed to support our employees’ health, well-being, and professional growth. Employees enjoy flexible work options, generous paid time off, and robust health coverage, including treatment for dental and vision related requirements. We invest in the future of our employees through continuous learning opportunities and career advancement programs, while fostering a supportive and inclusive work environment. Our benefits include: Savings and Investment plans: We provide specialized benefits like Corporate NPS (National Pension Scheme), Employee Stock Purchase Plan (ESPP), Long-Term Incentive Plan (LTIP), Retiral Benefits and Car Lease that help employees optimally plan their finances Upskilling and career growth opportunities: With a focus on continuous learning, we offer customized programs that support upskilling like Education Reimbursement Programs, Certification programs and access to global learning programs. Health and Welfare Benefits: We care about our employees’ well-being in and out of work and have benefits like Employee Assistance Program (EAP), Yearly Free Health campaigns and comprehensive Insurance benefits. Application Process Our recruitment process is designed to be transparent, and inclusive. Step 1: Submit your application via the Chubb Careers Portal. Step 2: Engage with our recruitment team for an initial discussion. Step 3: Participate in HackerRank assessments/technical/functional interviews and assessments (if applicable). Step 4: Final interaction with Chubb leadership. Join Us With you Chubb is better. Whether you are solving challenges on a global stage or creating innovative solutions for local markets, your contributions will help shape the future. If you value integrity, innovation, and inclusion, and are ready to make a difference, we invite you to be part of Chubb India’s journey. Apply Now: Chubb External Careers
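To illustrate the "real-time inferencing API" part of the Chubb ML Engineer role, here is a minimal FastAPI sketch. The model artifact, feature names and endpoint are hypothetical; a production version would add input validation, monitoring and containerised deployment (Docker/Kubernetes), as the posting notes.

```python
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="claims-risk-scorer")

# Hypothetical pre-trained scikit-learn model artifact, loaded once at startup.
model = joblib.load("model.pkl")

class Claim(BaseModel):
    claim_amount: float
    policy_age_years: float
    prior_claims: int

@app.post("/score")
def score(claim: Claim) -> dict:
    # Feature order must match how the hypothetical model was trained.
    features = [[claim.claim_amount, claim.policy_age_years, claim.prior_claims]]
    return {"risk_score": float(model.predict_proba(features)[0][1])}
```

Such a service would typically be run with an ASGI server such as uvicorn and packaged into a container image for Kubernetes.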

Posted 1 week ago

Apply

8.0 years

5 - 10 Lacs

Bengaluru

On-site

We help the world run better At SAP, we enable you to bring out your best. Our company culture is focused on collaboration and a shared passion to help the world run better. How? We focus every day on building the foundation for tomorrow and creating a workplace that embraces differences, values flexibility, and is aligned to our purpose-driven and future-focused work. We offer a highly collaborative, caring team environment with a strong focus on learning and development, recognition for your individual contributions, and a variety of benefit options for you to choose from. What you'll do: We are looking for a Senior Software Engineer – Java to join and strengthen the App2App Integration team within SAP Business Data Cloud. This role is designed to accelerate the integration of SAP’s application ecosystem with its unified data fabric, enabling low-latency, secure and scalable data exchange. You will take ownership of designing and building core integration frameworks that enable real-time, event-driven data flows between distributed SAP systems. As a senior contributor, you will work closely with architects to drive the evolution of SAP’s App2App integration capabilities, with hands-on involvement in Java, ETL and distributed data processing, Apache Kafka, DevOps, SAP BTP and Hyperscaler platforms. Responsibilities: Design and develop App2App integration components and services using Java, RESTful APIs and messaging frameworks such as Apache Kafka. Build and maintain scalable data processing and ETL pipelines that support real-time and batch data flows. Integrate data engineering workflows with tools such as Databricks, Spark or other cloud-based processing platforms (experience with Databricks is a strong advantage). Accelerate the App2App integration roadmap by identifying reusable patterns, driving platform automation and establishing best practices. Collaborate with cross-functional teams to enable secure, reliable and performant communication across SAP applications. Build and maintain distributed data processing pipelines, supporting large-scale data ingestion, transformation and routing. Work closely with DevOps to define and improve CI/CD pipelines, monitoring and deployment strategies using modern GitOps practices. Guide cloud-native secure deployment of services on SAP BTP and major Hyperscaler (AWS, Azure, GCP). Collaborate with SAP’s broader Data Platform efforts including Datasphere, SAP Analytics Cloud and BDC runtime architecture What you bring: Bachelor’s or Master’s degree in Computer Science, Software Engineering or a related field. 8+ years of hands-on experience in backend development using Java, with strong object-oriented design and integration patterns. Hands-on experience building ETL pipelines and working with large-scale data processing frameworks. Experience or experimentation with tools such as Databricks, Apache Spark or other cloud-native data platforms is highly advantageous. Familiarity with SAP Business Technology Platform (BTP), SAP Datasphere, SAP Analytics Cloud or HANA is highly desirable. Design CI/CD pipelines, containerization (Docker), Kubernetes and DevOps best practices. Working knowledge of Hyperscaler environments such as AWS, Azure or GCP. Passionate about clean code, automated testing, performance tuning and continuous improvement. 
Strong communication skills and ability to collaborate with global teams across time zones Meet your Team: SAP is the market leader in enterprise application software, helping companies of all sizes and industries run at their best. As part of the Business Data Cloud (BDC) organization, the Foundation Services team is pivotal to SAP’s Data & AI strategy, delivering next-generation data experiences that power intelligence across the enterprise. Located in Bangalore, India, our team drives cutting-edge engineering efforts in a collaborative, inclusive and high-impact environment, enabling innovation and integration across SAP’s data platforms #DevT3 Bring out your best SAP innovations help more than four hundred thousand customers worldwide work together more efficiently and use business insight more effectively. Originally known for leadership in enterprise resource planning (ERP) software, SAP has evolved to become a market leader in end-to-end business application software and related services for database, analytics, intelligent technologies, and experience management. As a cloud company with two hundred million users and more than one hundred thousand employees worldwide, we are purpose-driven and future-focused, with a highly collaborative team ethic and commitment to personal development. Whether connecting global industries, people, or platforms, we help ensure every challenge gets the solution it deserves. At SAP, you can bring out your best. We win with inclusion SAP’s culture of inclusion, focus on health and well-being, and flexible working models help ensure that everyone – regardless of background – feels included and can run at their best. At SAP, we believe we are made stronger by the unique capabilities and qualities that each person brings to our company, and we invest in our employees to inspire confidence and help everyone realize their full potential. We ultimately believe in unleashing all talent and creating a better and more equitable world. SAP is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to the values of Equal Employment Opportunity and provide accessibility accommodations to applicants with physical and/or mental disabilities. If you are interested in applying for employment with SAP and are in need of accommodation or special assistance to navigate our website or to complete your application, please send an e-mail with your request to Recruiting Operations Team: Careers@sap.com For SAP employees: Only permanent roles are eligible for the SAP Employee Referral Program, according to the eligibility rules set in the SAP Referral Policy. Specific conditions may apply for roles in Vocational Training. EOE AA M/F/Vet/Disability: Qualified applicants will receive consideration for employment without regard to their age, race, religion, national origin, ethnicity, age, gender (including pregnancy, childbirth, et al), sexual orientation, gender identity or expression, protected veteran status, or disability. Successful candidates might be required to undergo a background verification with an external vendor. Requisition ID: 426958 | Work Area: Software-Design and Development | Expected Travel: 0 - 10% | Career Status: Professional | Employment Type: Regular Full Time | Additional Locations: #LI-Hybrid.
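The SAP role above centres on Java and Apache Kafka. Purely as an illustration of the event-driven App2App pattern it describes, here is a small producer sketch using the Python kafka-python client; the broker address, topic and event payload are invented, and an SAP team would more likely use a Java client.

```python
import json
from kafka import KafkaProducer  # kafka-python client, used here only for illustration

# Broker address and topic are placeholders for an App2App integration event stream.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

event = {
    "event_type": "BusinessPartnerChanged",
    "source_system": "S4",
    "payload": {"partner_id": "BP-1001", "changed_fields": ["address"]},
}

# Publish the event and flush to ensure delivery before the script exits.
producer.send("app2app.business-partner.changed", value=event)
producer.flush()
```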

Posted 1 week ago

Apply

5.0 - 6.0 years

4 - 7 Lacs

Bengaluru

Remote

Senior Database Administrator

Req ID: 55552
Location: Bangalore, IN

Sapiens is on the lookout for a Senior Database Administrator to become a key player in our Bangalore team. If you're a seasoned DBA pro ready to take your career to new heights with an established, globally successful company, this role could be the perfect fit.

Location: Bangalore
Working Model: Our flexible work arrangement combines both remote and in-office work, optimizing flexibility and productivity.

This position will be part of Sapiens' Digital (Data Suite) division. For more information about it, click here: https://sapiens.com/solutions/digitalsuite-customer-experience-and-engagement-software-for-insurers/

Designation: Senior Database Administrator

Must-have skills: 5-6 years of experience as an MS SQL Server, Oracle and MongoDB DBA, along with Azure DevOps, Azure Kubernetes Services, SAP BO/BODS, Apache Tomcat, Apache Superset and Power BI tools.

Job Requirements

General Job Description:
A seasoned, experienced professional with a full understanding of the area of specialization who resolves a wide range of issues in creative ways. This is the fully qualified, career-oriented, journey-level position.

Prerequisites: Knowledge & Experience

Bachelor's degree in Engineering (B.E.) or an equivalent qualification.
Ability to adapt quickly to new tools and technologies to support evolving infrastructure requirements.
Experience supporting MS SQL, Oracle, MongoDB, or other NoSQL database environments.
Ensure implementation of database standards and best practices.
Test database solutions to ensure they meet functional and technical specifications.
Monitor database performance and conduct performance tuning as needed.
Collaborate with project management teams to implement and maintain configuration management for database environments.
Foundational knowledge of MongoDB Atlas.
Knowledge of DevOps activities, with experience in DevOps tool administration and troubleshooting.
Knowledge of Azure Kubernetes Services (AKS).
Administration skills for both Windows and Linux operating systems.
Understanding of SAP BusinessObjects tool administration and troubleshooting.
Knowledge of SAP Data Services tool administration and troubleshooting.
Basic familiarity with Apache Tomcat server administration.
Knowledge of SAP IQ database administration, including tablespace management, database replication, and schema refresh tasks for application teams.
Knowledge of Apache Superset administration on Linux platforms, including troubleshooting report issues.
Understanding of Tableau administration, including Tableau Bridge configuration and support.
Knowledge of Databricks workspace creation and access provisioning for project team members.
Administrative knowledge of Microsoft Power BI Desktop and Power BI Gateway.
Willingness to take ownership and provide support beyond DBA responsibilities, including tools like Talend Studio and DBeaver within ETL environments.

Required Product/Project Knowledge:

Ability to work in an agile development environment.
Hands-on experience in document preparation.
Proven experience in fine-tuning and identifying potential bottlenecks in applications and infrastructure.

Required Skills:

Ability to work on tasks (POCs, stories, installations) without much help.
Technical ability, including troubleshooting skills.
Ability to guide juniors in completing POCs, stories and installations.

Common Tasks

Database Administration & Support
Manage and support various database systems including MS SQL, Oracle, MongoDB, and SAP IQ.
Monitor database performance and conduct regular performance tuning (a small monitoring sketch follows this posting).
Implement and enforce database standards, security, and best practices.
Handle backup, recovery, and replication for disaster recovery scenarios.

DevOps and Infrastructure Support
Assist with basic DevOps tasks, including tool administration and troubleshooting.
Support and maintain environments running on Azure Kubernetes Services (AKS).
Perform basic system administration tasks on Windows and Linux servers.

Application & Tool Support
Provide administrative support for SAP BusinessObjects, SAP Data Services, and Apache Tomcat.
Maintain and troubleshoot data visualization and reporting tools such as Tableau, Apache Superset, Power BI, and Databricks.

Environment Setup & Access Control
Create and configure workspaces (e.g., Databricks) and manage user access provisioning.
Support configuration management and coordinate with project teams for database deployments.

Documentation & Knowledge Sharing
Create and maintain technical documentation for configuration management and knowledge bases.
Contribute to troubleshooting guides and SOPs for Tier 1 and Tier 2 support teams.

Cross-functional Tool Support
Provide L1/L2 support for tools like Talend Studio and DBeaver in ETL and data integration tasks.
Troubleshoot and support MongoDB Atlas and related NoSQL environments.

Required Soft Skills

Providing technical leadership.
Collaboration and teamwork skills.
Self-motivated, flexible team player with strong initiative and excellent communication skills.
Ability to grow into a technical activity leader.
Proactive, can-do approach.
Good understanding of the requirements.

Disclaimer: Sapiens India does not authorise any third parties to release employment offers or conduct recruitment drives on its behalf. Hence, beware of inauthentic and fraudulent job offers or recruitment drives from any individuals or websites purporting to represent Sapiens. Further, Sapiens does not charge any fee or other emoluments for any reason (including, without limitation, visa fees) or seek compensation from educational institutions to participate in recruitment events. Accordingly, please check the authenticity of any such offers before acting on them; where acted upon, you do so at your own risk. Sapiens shall neither be responsible for honouring or making good the promises made by fraudulent third parties, nor for any monetary or other loss incurred by the aggrieved individual or educational institution. If you come across any fraudulent activity in the name of Sapiens, please report the incident to sharedservices@sapiens.com.
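As a small illustration of the performance-monitoring duties listed above, here is a hedged sketch that lists currently executing SQL Server requests ordered by elapsed time, using Python with pyodbc and the standard sys.dm_exec_requests / sys.dm_exec_sql_text DMVs. The connection details are hypothetical placeholders; in practice a DBA would often run the underlying T-SQL directly in SSMS or through a monitoring tool instead.

```python
# Illustrative sketch: list currently running SQL Server requests by elapsed time.
# Connection details below are hypothetical placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=db-host;DATABASE=master;UID=monitor_user;PWD=secret"
)

query = """
SELECT r.session_id,
       r.status,
       r.total_elapsed_time / 1000.0 AS elapsed_seconds,
       t.text AS sql_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID
ORDER BY r.total_elapsed_time DESC;
"""

for row in conn.cursor().execute(query):
    # Slowest active requests first: candidates for tuning or blocking analysis.
    text = (row.sql_text or "").replace("\n", " ")[:80]
    print(f"session {row.session_id} ({row.status}): {row.elapsed_seconds:.1f}s - {text}")

conn.close()
```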

Posted 1 week ago

Apply

4.0 years

6 - 10 Lacs

Bengaluru

On-site

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Azure Data Engineer + Power BI Senior – Consulting

As part of our GDS Consulting team, you will be part of the NCLC team delivering specifically to the Microsoft account. You will be working on the latest Microsoft BI technologies and will collaborate with other teams within Consulting services.

The opportunity

We're looking for resources with expertise in Microsoft BI, Power BI, Azure Data Factory and Databricks to join our Data Insights team. This is a fantastic opportunity to be part of a leading firm while being instrumental in the growth of our service offering.

Your key responsibilities

Responsible for managing multiple client engagements.
Understand and analyse business requirements by working with various stakeholders and create the appropriate information architecture, taxonomy and solution approach.
Work independently to gather requirements and perform cleansing, extraction and loading of data.
Translate business and analyst requirements into technical code.
Create interactive and insightful dashboards and reports using Power BI, connecting to various data sources and implementing DAX calculations.
Design and build complete ETL/Azure Data Factory processes moving and transforming data for ODS, staging, and data warehousing.
Design and develop solutions in Databricks, Scala, Spark and SQL to process and analyse large datasets, perform data transformations, and build data models (an illustrative PySpark sketch follows this posting).
Design SQL schemas, database schemas, stored procedures, functions, and T-SQL queries.

Skills and attributes for success

Collaborating with other members of the engagement team to plan the engagement and develop work program timelines, risk assessments and other documents/templates.
Able to manage senior stakeholders.
Experience in leading teams to execute high-quality deliverables within stipulated timelines.
Skills in Power BI, Azure Data Factory, Databricks, Azure Synapse, data modelling, DAX, Power Query and Microsoft Fabric.
Strong proficiency in Power BI, including data modelling, DAX, and creating interactive visualizations.
Solid experience with Azure Databricks, including working with Spark, PySpark (or Scala), and optimizing big data processing.
Good understanding of various Azure services relevant to data engineering, such as Azure Blob Storage, ADLS Gen2 and Azure SQL Database/Synapse Analytics.
Strong SQL skills and experience with one of the following: Oracle, SQL, Azure SQL.
Good-to-have experience in SSAS or Azure SSAS, and Agile project management.
Basic knowledge of Azure Machine Learning services.
Excellent written and communication skills and the ability to deliver technical demonstrations.
Quick learner with a "can do" attitude.
Demonstrating and applying strong project management skills, inspiring teamwork and responsibility with engagement team members.

To qualify for the role, you must have

A bachelor's or master's degree.
A minimum of 4-7 years of experience, preferably with a background in a professional services firm.
Excellent communication skills, with consulting experience preferred.

Ideally, you'll also have

Analytical ability to manage multiple projects and prioritize tasks into manageable work products.
The ability to operate independently or with minimum supervision.

What working at EY offers

At EY, we're dedicated to helping our clients, from start-ups to Fortune 500 companies, and the work we do with them is as varied as they are. You get to work on inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer:

Support, coaching and feedback from some of the most engaging colleagues around
Opportunities to develop new skills and progress your career
The freedom and flexibility to handle your role in a way that's right for you

EY | Building a better working world

EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
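Since the posting above centres on Databricks, PySpark and ETL feeding Power BI reporting, here is a hedged, minimal PySpark sketch of the kind of extract-transform-load step it describes. The table, column and application names are hypothetical placeholders rather than anything defined by EY.

```python
# Illustrative PySpark sketch of a simple ETL step: read a staged table,
# cleanse and aggregate it, and write a curated Delta table for reporting.
# All table and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sales-etl-example").getOrCreate()

# Extract: read raw landed data (e.g. staged from ADLS Gen2 via Azure Data Factory).
raw = spark.read.table("staging.sales_orders")

# Transform: basic cleansing plus a derived month column.
cleaned = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("order_date").isNotNull())
       .withColumn("order_month", F.date_trunc("month", F.col("order_date")))
)

# Aggregate to the grain a Power BI dataset might consume.
monthly = cleaned.groupBy("order_month", "region").agg(
    F.sum("net_amount").alias("total_net_amount"),
    F.countDistinct("order_id").alias("order_count"),
)

# Load: persist to a curated Delta table that downstream reports can query.
monthly.write.format("delta").mode("overwrite").saveAsTable("curated.sales_monthly")
```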

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies