
6020 Databricks Jobs - Page 31

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

Job Description: You will be responsible for managing data effectively by collecting, analyzing, and interpreting large datasets to derive actionable insights. Additionally, you will develop and oversee Advanced Analytics solutions, including the design of BI dashboards, reports, and digital solutions. Collaboration with stakeholders to understand business requirements and translate them into technical specifications will be a crucial aspect of your role. As a Project Manager, you will lead Data and Advanced Analytics projects, ensuring their timely delivery and alignment with organizational objectives. You will also maintain documentation such as design specifications and user manuals, identify areas for process enhancement, and recommend digital solutions for continuous improvement.

Experience:
- Essential experience with Databricks for big data processing and analytics.
- Proficiency in SQL for database querying, data modeling, and data warehousing (DWH).
- Ability to create design documentation by translating business requirements into source-to-target mappings.
- Hands-on experience with Power BI and Qlik Sense; a development background is considered advantageous.
- Familiarity with Azure services for data storage, processing, and analytics.
- Understanding of data fabric architecture and its implementation.
- Expertise in Azure Data Factory (ADF) for data integration and orchestration (advantage).
- Proficiency with Power Platform tools such as Power Apps, Power Automate, and Power Virtual Agents for creating and managing digital solutions.
- Knowledge of AI tools and frameworks to develop predictive models and automate data analysis.

Key Requirements:
- AI proficiency rating: 4 out of 5.
- Data Warehouse (DWH) proficiency rating: 4 out of 5.
- Data Management proficiency rating: 3 out of 5.
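To make the Databricks and source-to-target expectations concrete, here is a minimal, hedged sketch of the kind of transformation this role describes. All table and column names are invented for illustration, not taken from the posting.

```python
# Hypothetical source-to-target mapping in a Databricks notebook:
# aggregate raw sales events into a curated table for BI dashboards.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically in Databricks

raw = spark.table("raw.sales_events")  # assumed source table

daily_sales = (
    raw.where(F.col("status") == "COMPLETED")
       .groupBy("region", F.to_date("event_ts").alias("sale_date"))
       .agg(F.sum("amount").alias("total_amount"),
            F.countDistinct("order_id").alias("order_count"))
)

# Write the curated target table that Power BI / Qlik Sense would consume.
daily_sales.write.mode("overwrite").saveAsTable("analytics.daily_sales")
```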

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

Hyderabad, Telangana

On-site

Are you passionate about the intersection of data, technology, and science, and excited by the potential of Real-World Data (RWD) and AI? Do you thrive in collaborative environments and aspire to contribute to the discovery of groundbreaking medical insights? If so, join the data42 team at Novartis!

At Novartis, we reimagine medicine by leveraging state-of-the-art analytics and our extensive internal and external data resources. Our data42 platform grants access to high-quality, multi-modal preclinical and clinical data, along with RWD, creating the optimal environment for developing advanced AI/ML models and generating health insights. Our global team of data scientists and engineers utilizes this platform to uncover novel insights and guide drug development decisions.

As an RWD SME / RWE Execution Data Scientist, you will focus on executing innovative methodologies and AI models to mine RWD on the data42 platform. You will be the go-to authority for leveraging diverse RWD modalities to surface patterns crucial to understanding patient populations, biomarkers, and drug targets, accelerating the development of life-changing medicines.

Duties and Responsibilities:
- Collaborate with R&D stakeholders to co-create and implement innovative, repeatable, scalable, and automated data and technology solutions in line with the data42 strategy.
- Act as a data Subject Matter Expert (SME): understand Real-World Data (RWD) of different modalities and vocabularies (LOINC, ICD, HCPCS, etc.) and non-traditional RWD (patient-reported outcomes, wearables, and mobile health data), and where and how they can be used, including in conjunction with clinical data, omics data, pre-clinical data, and commercial data.
- Contribute to data strategy implementation: Federated Learning, tokenization, data quality frameworks, regulatory requirements (conversion of submission data to HL7 FHIR formats, Sentinel initiative), conversion to common data models and standards (OMOP, FHIR, SEND, etc.), FAIR principles, and integration with the enterprise catalog.
- Define and execute advanced, integrated, and scalable analytical approaches and research methodologies (including industry trends) in support of exploratory and regulatory use of AI models for RWD analysis across the Research-Development-Commercial continuum by facilitating research questions.
- Stay current with emerging applications and trends, driving the development of advanced analytic capabilities for data42 across the real-world evidence generation lifecycle, from ideation to study design and execution.
- Work with high agility across cross-located and cross-functional associates across business domains (Commercial, Development, Biomedical Research) or therapeutic area divisions for our priority disease areas to execute complex and critical business problems with quantified business impact/ROI.
Ideal Candidate Profile:
- PhD or MSc in a quantitative discipline (e.g., but not restricted to, Computer Science, Physics, Statistics, Epidemiology) with proven expertise in Artificial Intelligence / Machine Learning.
- 8+ years of relevant experience in Data Science (or 4+ years post-qualification in the case of a PhD).
- Extensive experience in statistical and machine learning techniques: regression, classification, clustering, design of experiments, Monte Carlo simulations, statistical inference, feature engineering, time series forecasting, text mining, natural language processing, LLMs, and multi-modal generative AI.
- Good-to-have skills: stochastic models, Bayesian models, Markov chains, optimization techniques including dynamic programming, deep learning techniques on structured and unstructured data, and recommender systems.
- Proficiency in tools and packages: Python, R (optional), SQL; exposure to dashboard or web-app building using Power BI, R Shiny, Flask, or other open-source or proprietary software and packages is an advantage.
- Knowledge of data standards, e.g., OHDSI OMOP and other data standards, FHIR HL7 for regulatory use, and best practices.
- Good to have: Foundry, big data programming, and working knowledge of executing data science on AWS, Databricks, or Snowflake.
- Strong in matrix collaboration environments, with good communication and collaboration skills with country, regional, and global stakeholders in an individual contributor capacity.

Novartis is committed to building an outstanding, inclusive work environment and diverse teams representative of the patients and communities we serve.

Why Novartis: Helping people with disease and their families takes more than innovative science. It takes a community of smart, passionate people like you. Collaborating, supporting, and inspiring each other. Combining to achieve breakthroughs that change patients' lives. Ready to create a brighter future together?

Join our Novartis Network: Not the right Novartis role for you? Sign up to our talent community to stay connected and learn about suitable career opportunities as soon as they come up.

Benefits and Rewards: Read our handbook to learn about all the ways we'll help you thrive personally and professionally.
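As a small illustration of the OMOP-standard RWD work referenced above, here is a hedged PySpark sketch of a cohort count over the standard OMOP CDM tables (person, condition_occurrence). The schema name is an assumption; the concept ID shown is a commonly cited OMOP example for type 2 diabetes.

```python
# Illustrative only: counting patients with a given condition in an OMOP CDM
# schema. Schema name "omop" is assumed; adapt to your catalog layout.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

cohort = spark.sql("""
    SELECT p.gender_concept_id,
           COUNT(DISTINCT p.person_id) AS patients
    FROM omop.person p
    JOIN omop.condition_occurrence co
      ON co.person_id = p.person_id
    WHERE co.condition_concept_id = 201826  -- example: type 2 diabetes mellitus
    GROUP BY p.gender_concept_id
""")
cohort.show()
```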

Posted 1 week ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data, and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits, and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

We are looking for a versatile AI/ML Engineer to join our team, contributing to the design and deployment of scalable AI solutions across the full stack. This role blends machine learning engineering with frontend/backend development and cloud-native microservices. You'll work closely with data scientists, MLOps engineers, and product teams to bring generative AI capabilities like RAG and LLM-based systems into production.

Primary Responsibility:
Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary, or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
- Bachelor's or Master's in Computer Science, Engineering, or a related field
- 5+ years of experience in AI/ML engineering, full stack development, or MLOps
- Proven experience deploying AI models in production environments
- Solid understanding of microservices architecture and cloud-native development
- Familiarity with Agile/Scrum methodologies

Technical Skills:
- Languages & Frameworks: Python, JavaScript/TypeScript, SQL, Scala
- ML Tools: MLflow, TensorFlow, PyTorch, scikit-learn
- Frontend: React.js, Angular (preferred), HTML/CSS
- Backend: Node.js, Spring Boot, REST APIs
- Cloud: Azure (preferred), UAIS, AWS
- DevOps & MLOps: Git, Jenkins, Docker, Kubernetes, Azure DevOps
- Data Engineering: Apache Spark/Databricks, Kafka, ETL pipelines
- Monitoring: Prometheus, Grafana
- RAG/LLM: LangChain, LlamaIndex, embedding pipelines, prompt engineering

Preferred Qualifications:
- Experience with Spark, Hadoop
- Familiarity with Maven, Spring, XML, Tomcat
- Proficiency in Unix shell scripting and SQL Server

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
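To illustrate the retrieval step of the RAG pipelines mentioned above, here is a self-contained toy sketch. The hashing "embedder" and the document snippets are stand-ins invented for this example; a real system would use an embedding model (e.g., via LangChain or a hosted API) and a vector store.

```python
# Toy RAG retrieval: embed documents and a query, rank by cosine similarity.
import numpy as np

def embed(text: str, dim: int = 256) -> np.ndarray:
    """Toy hashing embedder so the sketch runs end to end; swap in a real model."""
    v = np.zeros(dim)
    for tok in text.lower().split():
        v[hash(tok) % dim] += 1.0
    norm = np.linalg.norm(v)
    return v / norm if norm else v

documents = [
    "Prior authorization rules for pharmacy benefits",
    "How to reset a member portal password",
    "Coverage criteria for telehealth visits",
]
doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(query: str, k: int = 2) -> list[str]:
    q = embed(query)
    sims = doc_vectors @ q  # vectors are unit-normalized, so this is cosine similarity
    return [documents[i] for i in np.argsort(sims)[::-1][:k]]

print(retrieve("pharmacy benefit rules"))
# The retrieved chunks would then be inserted into the LLM prompt as grounding context.
```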

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a skilled MLOps Support Engineer, you will be responsible for monitoring and managing ML model operational pipelines in AzureML and MLflow. Your primary focus will be on automation, integration validation, and CI/CD pipeline management to ensure stability and reliability in model deployment lifecycles.

Your objectives in this role include:
- Supporting and monitoring MLOps pipelines in AzureML and MLflow
- Managing CI/CD pipelines for model deployment and updates
- Handling model registry processes
- Performing testing and validation of integrated endpoints
- Automating monitoring and upkeep of ML pipelines
- Troubleshooting and resolving pipeline and integration-related issues

In your day-to-day responsibilities, you will support production ML pipelines using AzureML and MLflow, configure and manage model versioning and the registry lifecycle, automate alerts, monitoring tasks, and routine pipeline operations, validate REST API endpoints for ML models, implement CI/CD workflows for ML deployments, document and troubleshoot operational issues related to ML services, and collaborate with data scientists and platform teams to ensure delivery continuity.

To excel in this role, you should have proficiency in AzureML, MLflow, and Databricks, a strong command of Python, experience with Azure CLI and scripting, a good understanding of CI/CD practices in MLOps, knowledge of model registry management and deployment validation, and at least 3-5 years of relevant experience in MLOps environments. While not mandatory, it would be beneficial to have exposure to monitoring tools like Azure Monitor and Prometheus, experience with REST API testing tools such as Postman, and familiarity with Docker/Kubernetes in ML deployments.
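As a hedged sketch of two tasks this posting names, registry lifecycle management and endpoint validation, the snippet below promotes an MLflow model version and smoke-tests a serving endpoint. The model name, version, URL, token, and payload columns are placeholders to adapt, not real values.

```python
# Promote a registered model version, then smoke-test its REST endpoint.
import requests
from mlflow.tracking import MlflowClient

client = MlflowClient()
client.transition_model_version_stage(
    name="churn-model", version="3", stage="Staging"  # hypothetical model/version
)

# Validate the serving endpoint (payload shape follows the common
# "dataframe_split" convention for MLflow/Databricks model serving).
resp = requests.post(
    "https://<workspace>/serving-endpoints/churn-model/invocations",  # placeholder
    headers={"Authorization": "Bearer <token>"},
    json={"dataframe_split": {"columns": ["tenure", "plan"], "data": [[12, "basic"]]}},
    timeout=30,
)
assert resp.status_code == 200, resp.text
```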

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Haryana

On-site

As an AI/ML Specialist, you will be responsible for building intelligent systems utilizing OT sensor data and Azure ML tools. Your primary focus will be collaborating with data scientists, engineers, and operations teams to develop scalable AI solutions addressing critical manufacturing issues such as predictive maintenance, process optimization, and anomaly detection. This role bridges edge and cloud environments by deploying AI solutions that run effectively on either cloud platforms or industrial edge devices.

Your key functions will include designing and developing ML models using time-series sensor data from OT systems, working closely with engineering and data science teams to translate manufacturing challenges into AI use cases, implementing MLOps pipelines on Azure ML, and integrating with Databricks/Delta Lake. Additionally, you will be responsible for deploying and monitoring models at the edge using Azure IoT Edge, conducting model validation, retraining, and performance monitoring, and collaborating with plant operations to contextualize insights and integrate them into workflows.

To qualify for this role, you should have a minimum of 5 years of experience in machine learning and AI. Hands-on experience with Azure ML, MLflow, Databricks, and PyTorch/TensorFlow is essential. You should also possess a proven ability to work with OT sensor data such as temperature, vibration, and flow. A strong background in time-series modeling, edge inferencing, and MLOps is required, along with familiarity with manufacturing KPIs and predictive modeling use cases.
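As a toy illustration of the anomaly-detection use case above, here is a rolling z-score check on a vibration signal. The window size, threshold, and column name are assumptions for the sketch, not from the posting.

```python
# Flag sensor readings more than 3 sigma from a rolling baseline.
import pandas as pd

def flag_anomalies(df: pd.DataFrame, window: int = 60, z_thresh: float = 3.0) -> pd.DataFrame:
    rolling = df["vibration"].rolling(window, min_periods=window)
    zscore = (df["vibration"] - rolling.mean()) / rolling.std()
    return df.assign(anomaly=zscore.abs() > z_thresh)

# Usage (assumed timestamp-indexed readings):
# df = pd.read_parquet("sensor_readings.parquet")
# alerts = flag_anomalies(df).query("anomaly")
```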

Posted 1 week ago

Apply

7.0 - 14.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Business Analyst with 7-14 years of experience, you will be responsible for various tasks including creating Business Requirement Documents (BRDs) and Functional Requirement Documents (FRDs), stakeholder management, User Acceptance Testing (UAT), understanding data warehouse concepts, SQL queries and subqueries, and utilizing data visualization tools such as Power BI or MicroStrategy. It is essential that you have a deep understanding of the investment domain, specifically in areas like capital markets, asset management, and wealth management.

Your primary responsibilities will involve working closely with stakeholders to gather requirements, analyzing data, and testing systems to ensure they meet business needs. Additionally, you should have a strong background in investment management or financial services, with experience in areas like asset management, investment operations, and insurance. Your familiarity with concepts like Critical Data Elements (CDEs), data traps, and reconciliation workflows will be beneficial in this role.

Technical expertise in BI and analytics tools like Power BI, Tableau, and MicroStrategy is required, along with proficiency in SQL. You should also possess excellent communication skills, analytical thinking capabilities, and the ability to engage effectively with stakeholders. Experience working within Agile/Scrum environments with cross-functional teams is highly valued.

In terms of technical skills, you should demonstrate proven abilities in analytical problem-solving, with deep knowledge of investment data platforms such as GoldenSource, NeoXam, RIMES, and JPM Fusion. Expertise in cloud data technologies like Snowflake, Databricks, and AWS/GCP/Azure data services is essential. Understanding of data governance frameworks, metadata management, and data lineage is crucial, along with compliance standards in the investment management industry.

Hands-on experience with Investment Books of Record (IBORs) like BlackRock Aladdin, CRD, Eagle STAR (ABOR), Eagle PACE, and Eagle DataMart is preferred. Familiarity with investment data platforms including GoldenSource, FINBOURNE, NeoXam, RIMES, and JPM Fusion, as well as cloud data platforms like Snowflake and Databricks, will be advantageous. Your background in data governance, metadata management, and data lineage frameworks will be essential in ensuring data accuracy and compliance within the organization.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana

On-site

You are invited to join InfoBeans Technologies as a Data Engineer with a minimum of 5 years of experience in the field. This is a full-time position, and we are looking for individuals who are proficient in Snowflake, along with expertise in either Azure Data Factory (ADF) and Python, or Power BI and data modeling.

As a Data Engineer at InfoBeans Technologies, you will be required to have hands-on experience with tools such as WhereScape RED + 3D, Data Vault 2.0, SQL, and data transformation pipelines. A strong understanding of data management and analytics principles is essential for this role. Additionally, excellent communication skills and the ability to engage in requirements engineering are highly valued. The successful candidate will be responsible for delivering and supporting production-ready data systems at an expert level of proficiency. The primary skill areas required for this role are Data Engineering & Analytics.

If you are passionate about building robust data pipelines, modeling enterprise data, and visualizing meaningful insights, we would love to connect with you. Immediate availability, or joining within 15 days, is preferred for this position. To apply for this exciting opportunity, please send your resume to mradul.khandelwal@infobeans.com or reach out to us directly. Join us in shaping the future of data analytics and engineering at InfoBeans Technologies.

Posted 1 week ago

Apply

4.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

About Gartner IT: Join a world-class team of skilled engineers who build creative digital solutions to support our colleagues and clients. We make a broad organizational impact by delivering cutting-edge technology solutions that power Gartner. Gartner IT values its culture of nonstop innovation, an outcome-driven approach to success, and the notion that great ideas can come from anyone on the team.

About the role: Gartner is seeking an Advanced Data Engineer specializing in data modeling and reporting with Azure Analysis Services and Power BI. As a key member of the team, you will contribute to the development and support of Gartner's Enterprise Data Warehouse and a variety of data products. This role involves integrating data from both internal and external sources using diverse ingestion APIs. You will have the opportunity to work with a broad range of data technologies, focusing on building and optimizing data pipelines, as well as supporting, maintaining, and enhancing existing business intelligence solutions.

What you will do:
- Develop, manage, and optimize enterprise data models within Azure Analysis Services, including configuration, scaling, and security management
- Design and build tabular data models in Azure Analysis Services for seamless integration with Power BI
- Write efficient SQL queries and DAX (Data Analysis Expressions) to support robust data models, reports, and dashboards
- Tune and optimize data models and queries for maximum performance and efficient data retrieval
- Design, build, and automate data pipelines and applications to support data scientists and business users with their reporting and analytics needs
- Collaborate with a team of Data Engineers to support and enhance the Azure Synapse Enterprise Data Warehouse environment

What you will need:
- 2-4 years of hands-on experience developing enterprise data models in Azure Analysis Services
- Strong expertise in designing and developing tabular models using Power BI and SQL Server Data Tools (SSDT)
- Advanced proficiency in DAX for data analysis and SQL for data manipulation and querying
- Proven experience creating interactive Power BI dashboards and reports for business analytics
- Deep understanding of relational database systems and advanced SQL skills
- Experience with T-SQL, ETL processes, and Azure Data Factory is highly desirable
- Solid understanding of cloud computing concepts and experience with Azure services such as Azure Data Factory, Azure Blob Storage, and Azure Active Directory

Nice to have:
- Experience with version control systems (e.g., Git, Subversion)
- Familiarity with programming languages such as Python or Java
- Knowledge of various database technologies (NoSQL, document, graph databases, etc.)
- Experience with data intelligence platforms like Databricks

Who you are:
- Effective time management skills and ability to meet deadlines
- Excellent communication skills interacting with technical and business audiences
- Excellent organization, multitasking, and prioritization skills
- Willingness and aptitude to embrace new technologies/ideas and master concepts rapidly
- Intellectual curiosity, passion for technology, and drive to keep up with new trends
- Ability to deliver project work on time, within budget, and with high quality

Don't meet every single requirement? We encourage you to apply anyway. You might just be the right candidate for this, or other roles.

Who are we? At Gartner, Inc. (NYSE:IT), we guide the leaders who shape the world.
Our mission relies on expert analysis and bold ideas to deliver actionable, objective insight, helping enterprise leaders and their teams succeed with their mission-critical priorities. Since our founding in 1979, we've grown to more than 21,000 associates globally who support ~14,000 client enterprises in ~90 countries and territories. We do important, interesting and substantive work that matters. That's why we hire associates with the intellectual curiosity, energy and drive to want to make a difference. The bar is unapologetically high. So is the impact you can have here.

What makes Gartner a great place to work? Our sustained success creates limitless opportunities for you to grow professionally and flourish personally. We have a vast, virtually untapped market potential ahead of us, providing you with an exciting trajectory long into the future. How far you go is driven by your passion and performance. We hire remarkable people who collaborate and win as a team. Together, our singular, unifying goal is to deliver results for our clients. Our teams are inclusive and composed of individuals from different geographies, cultures, religions, ethnicities, races, genders, sexual orientations, abilities and generations. We invest in great leaders who bring out the best in you and the company, enabling us to multiply our impact and results. This is why, year after year, we are recognized worldwide as a great place to work.

What do we offer? Gartner offers world-class benefits, highly competitive compensation and disproportionate rewards for top performers. In our hybrid work environment, we provide the flexibility and support for you to thrive, working virtually when it's productive to do so and getting together with colleagues in a vibrant community that is purposeful, engaging and inspiring. Ready to grow your career with Gartner? Join us.

The policy of Gartner is to provide equal employment opportunities to all applicants and employees without regard to race, color, creed, religion, sex, sexual orientation, gender identity, marital status, citizenship status, age, national origin, ancestry, disability, veteran status, or any other legally protected status and to seek to advance the principles of equal employment opportunity. Gartner is committed to being an Equal Opportunity Employer and offers opportunities to all job seekers, including job seekers with disabilities. If you are a qualified individual with a disability or a disabled veteran, you may request a reasonable accommodation if you are unable or limited in your ability to use or access the Company's career webpage as a result of your disability. You may request reasonable accommodations by calling Human Resources at +1 (203) 964-0096 or by sending an email to ApplicantAccommodations@gartner.com.

Job Requisition ID: 101546

By submitting your information and application, you confirm that you have read and agree to the country or regional recruitment notice linked below applicable to your place of residence. Gartner Applicant Privacy Link: https://jobs.gartner.com/applicant-privacy-policy

For efficient navigation through the application, please only use the back button within the application, not the back arrow within your browser.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

As an integral part of our team at Proximity, you will be taking on the role of both a hands-on tech lead and product manager. Your primary responsibility will be to deliver data/ML platforms and pipelines within a Databricks-Azure environment. In this capacity, you will be leading a small delivery team and collaborating with enabling teams to drive product, architecture, and data science initiatives. Your ability to translate business requirements into product strategy and technical delivery with a platform-first mindset will be crucial to our success.

To excel in this role, you should possess technical proficiency in Python, SQL, Databricks, Delta Lake, MLflow, Terraform, medallion architecture, data mesh/fabric, and Azure. Additionally, expertise in Agile delivery, discovery cycles, outcome-focused planning, and trunk-based development will be advantageous. You should also be adept at collaborating with engineers, working across cross-functional teams, and fostering self-service platforms. Clear communication skills will be key in articulating decisions, roadmap, and priorities effectively.

Joining our team comes with a host of benefits. You will have the opportunity to engage in Proximity Talks, where you can interact with fellow designers, engineers, and product enthusiasts, and gain insights from industry experts. Working alongside our world-class team will provide you with continuous learning opportunities, allowing you to challenge yourself and acquire new knowledge on a daily basis.

Proximity is a leading technology, design, and consulting partner for prominent Sports, Media, and Entertainment companies globally. With headquarters in San Francisco and additional offices in Palo Alto, Dubai, Mumbai, and Bangalore, we have a track record of creating high-impact, scalable products used by 370 million daily users. The collective net worth of our client companies stands at $45.7 billion since our inception in 2019.

At Proximity, we are a diverse team of coders, designers, product managers, and experts dedicated to solving complex problems and developing cutting-edge technology at scale. As our team of Proxonauts continues to expand rapidly, your contributions will play a significant role in the company's success. You will have the opportunity to collaborate with experienced leaders who have spearheaded multiple tech, product, and design teams. To learn more about us, you can watch our CEO, Hardik Jagda, share insights about Proximity, explore our values and meet our team members, visit our website, blog, and design wing at Studio Proximity, and gain behind-the-scenes access through our Instagram accounts @ProxWrks and @H.Jagda.
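For readers unfamiliar with the medallion architecture this posting names, here is a hedged PySpark/Delta sketch of the bronze-silver-gold flow. Paths, columns, and the cleansing rules are illustrative assumptions.

```python
# Medallion flow on Databricks/Delta Lake: raw ingest (bronze),
# cleansing (silver), business aggregates (gold).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

bronze = spark.read.json("/mnt/raw/events/")            # land raw data as-is
bronze.write.format("delta").mode("append").save("/mnt/bronze/events")

silver = (spark.read.format("delta").load("/mnt/bronze/events")
          .dropDuplicates(["event_id"])
          .where(F.col("event_ts").isNotNull()))        # basic quality rules
silver.write.format("delta").mode("overwrite").save("/mnt/silver/events")

gold = silver.groupBy("customer_id").agg(F.count("*").alias("event_count"))
gold.write.format("delta").mode("overwrite").save("/mnt/gold/customer_activity")
```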

Posted 1 week ago

Apply

10.0 - 14.0 years

0 Lacs

Karnataka

On-site

A career at Conga is more than just a job - it's a complete package! At Conga, we have fostered a community where our colleagues can flourish. Here, you will have the chance to innovate, receive support for your growth through both individual and team development, and work in an environment where every voice is valued and heard.

Conga specializes in simplifying complexity within an ever-evolving world. Our revenue lifecycle management solution is designed to streamline order configuration, execution, fulfillment, and contract renewal processes by utilizing a single critical insights data model that adapts to changing business requirements and aligns the efforts of all teams. Our mission at Conga is to empower customers to achieve transformational revenue growth by harmonizing teams, processes, and technology to maximize customer lifetime value.

The Software Architect role at Conga is a pivotal position within the Platform and AI Services team. This team is dedicated to building and innovating foundational services, components, and frameworks essential for the SaaS Revenue Lifecycle Management platform powered by AI. As an AI Architect at Conga, you will play a key role in developing AI capabilities for the Conga Revenue Lifecycle Platform, catering to customer needs and expanding your own skill set. Your responsibilities will involve building and supporting high-scale production code in a multi-tenant SaaS environment.

One of the key aspects of this role is to contribute to the architecture and development of core AI services for the Revenue Lifecycle Platform. As an AI Architect, you will be involved in designing end-to-end AI solutions, developing data pipelines, models, and deployment strategies, and integrating with existing systems. Moreover, you will collaborate with data scientists and software engineers to implement robust, scalable, and efficient AI solutions. Your role will also encompass managing the technical architecture of the AI platform, ensuring scalability, performance, security, and cost efficiency. Additionally, you will actively participate in an agile team lifecycle, including design, development, testing, planning, backlog grooming, and support.

To excel in this role, you should possess a Bachelor's degree in Computer Science or a related field, with 10+ years of expertise in areas such as Machine Learning, Pattern Recognition, Natural Language Processing, Information Retrieval, Large-Scale Distributed Systems, and Cloud Computing. We are looking for a talented individual who can provide technical leadership, design reusable components and services, stay abreast of the latest advancements in AI, and possess strong communication and interpersonal skills. If you are self-driven, enjoy problem-solving, and are willing to work with a global team, then we encourage you to apply for this exciting opportunity at Conga.

If you believe you are the right candidate for this role or have a genuine interest in working in an inclusive and diverse workplace, we invite you to submit your application. Your resume can be in any format, but we recommend using PDF or plain text for easier review by our recruiters. At Conga, we value diversity and inclusivity, so even if you do not meet every qualification listed, we encourage you to apply as you may still be the ideal candidate for this or other roles.

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

Karnataka

On-site

You are an experienced BI Architect with a strong background in Power BI and the Microsoft Azure ecosystem. Your main responsibility will be to design, implement, and enhance business intelligence solutions that aid strategic decision-making within the organization. You will play a crucial role in leading the BI strategy, architecture, and governance processes, while also guiding a team of BI developers and data analysts.

Your key responsibilities will include designing and implementing scalable BI solutions using Power BI and Azure services, and defining BI architecture, data models, security models, and best practices for enterprise reporting. You will collaborate closely with business stakeholders to gather requirements and transform them into data-driven insights. Additionally, you will oversee data governance, metadata management, and Power BI workspace design, optimizing Power BI datasets, reports, and dashboards for performance and usability.

Furthermore, you will be expected to establish standards for data visualization, the development lifecycle, version control, and deployment. As a mentor to BI developers, you will ensure adherence to coding and architectural standards, integrate Power BI with other applications using APIs, Power Automate, or embedded analytics, and monitor and troubleshoot production BI systems to maintain high availability and data accuracy.

To qualify for this role, you should have a minimum of 12 years of overall experience, with at least 7 years of hands-on experience with Power BI, including expertise in data modeling, DAX, M/Power Query, custom visuals, and performance tuning. Strong familiarity with Azure services such as Azure SQL Database, Azure Data Lake, Azure Functions, and Azure DevOps is essential. You must also possess a solid understanding of data warehousing, ETL, and dimensional modeling concepts, along with proficiency in SQL, data transformation, and data governance principles.

Experience managing enterprise-level Power BI implementations with large user bases and complex security requirements, excellent communication and stakeholder management skills, and the ability to lead cross-functional teams and influence BI strategy across departments are also prerequisites for this role. Knowledge of Microsoft Fabric architecture and its components, a track record of managing BI teams of 6 or more, and the capability to provide technical leadership and team development are highly desirable. In addition, holding the Microsoft Fabric certification DP-600 and the PL-300 certification would be considered a bonus for this position.

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

Haryana

On-site

We are seeking an Analytics Developer with expertise in Databricks, Power BI, and ETL technologies to design, develop, and deploy advanced analytics solutions. Your focus will be on creating robust, scalable data pipelines, implementing actionable business intelligence frameworks, and delivering insightful dashboards and reports to drive strategic decision-making. This role involves close collaboration with technical teams and business stakeholders to ensure analytics initiatives align with organizational objectives. With 8+ years of experience in analytics, data integration, and reporting, you should possess 4+ years of hands-on experience with Databricks, including proficiency in Databricks Notebooks for development and testing.

Your key responsibilities will include:
- Leveraging Databricks to develop and optimize scalable data pipelines for real-time and batch data processing
- Designing and implementing Databricks Notebooks for exploratory data analysis, ETL workflows, and machine learning models
- Managing and optimizing Databricks clusters for performance, cost efficiency, and scalability
- Using Databricks SQL for advanced query development, data aggregation, and transformation
- Incorporating Python and/or Scala within Databricks workflows to automate and enhance data engineering processes
- Developing solutions to integrate Databricks with other platforms, such as Azure Data Factory, for seamless data orchestration
- Creating interactive and visually compelling Power BI dashboards and reports to enable self-service analytics
- Leveraging DAX for building calculated columns, measures, and complex aggregations
- Designing effective data models in Power BI using star schema and snowflake schema principles for optimal performance
- Configuring and managing Power BI workspaces, gateways, and permissions for secure data access
- Implementing row-level security and data masking strategies in Power BI to ensure compliance with governance policies
- Building real-time dashboards by integrating Power BI with Databricks, Azure Synapse, and other data sources
- Providing end-user training and support for Power BI adoption across the organization
- Developing and maintaining ETL/ELT workflows, ensuring high data quality and reliability
- Implementing data governance frameworks to maintain data lineage, security, and compliance with organizational policies
- Optimizing data flow across multiple environments, including data lakes, warehouses, and real-time processing systems
- Collaborating with data governance teams to enforce standards for metadata management and audit trails
- Working closely with IT teams to integrate analytics solutions with ERP, CRM, and other enterprise systems
- Troubleshooting and resolving technical challenges related to data integration, analytics performance, and reporting accuracy
- Staying updated on the latest advancements in Databricks, Power BI, and data analytics technologies
- Driving innovation by integrating AI/ML capabilities into analytics solutions using Databricks
- Contributing to the enhancement of organizational analytics maturity through scalable and reusable approaches

You should also bring strong self-management skills, the ability to think outside the box and learn new technologies, logical thinking, fluency in English, and strong communication skills. A Bachelor's degree in Computer Science, Data Science, or a related field (Master's preferred) and relevant certifications are expected, along with the ability to manage multiple priorities in a fast-paced environment with high customer expectations.
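To give a flavor of the pipeline automation this role involves, the sketch below creates a scheduled Databricks job via the Jobs REST API (2.1). The host, token, notebook path, cluster spec, and schedule are placeholders to adapt, not values from the posting.

```python
# Schedule a Databricks notebook as a nightly job via the Jobs API 2.1.
import requests

payload = {
    "name": "nightly-etl",
    "tasks": [{
        "task_key": "transform",
        "notebook_task": {"notebook_path": "/Repos/analytics/etl_notebook"},
        "new_cluster": {
            "spark_version": "13.3.x-scala2.12",
            "node_type_id": "Standard_DS3_v2",
            "num_workers": 2,
        },
    }],
    "schedule": {"quartz_cron_expression": "0 0 2 * * ?", "timezone_id": "UTC"},
}

resp = requests.post(
    "https://<databricks-host>/api/2.1/jobs/create",   # placeholder workspace URL
    headers={"Authorization": "Bearer <token>"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["job_id"])
```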

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana

On-site

As an AWS Data Engineer with a focus on Databricks, you will play a crucial role in designing, developing, and optimizing scalable data pipelines. Your expertise in Databricks, PySpark, and AWS development will be key in leading technical efforts and driving innovation across the stack.

Your responsibilities will include developing and optimizing data pipelines using Databricks (PySpark), implementing AWS AppSync and Lambda-based APIs for integration with Neptune and OpenSearch, collaborating with React developers and backend teams to enhance architecture, ensuring secure development practices (especially around IAM roles and AWS security), driving performance, scalability, and reliability improvements, and taking full ownership of assigned tasks and deliverables.

To excel in this role, you should have strong experience in Databricks and PySpark for building data pipelines, proficiency in AWS Neptune and OpenSearch, hands-on experience with AWS AppSync and Lambda functions, a solid grasp of IAM, CloudFront, and API development in AWS, familiarity with React.js front-end applications (a plus), strong problem-solving, debugging, and communication skills, and the ability to work independently and drive innovation.

Preferred qualifications include AWS certifications (Solutions Architect, Developer Associate, or Data Analytics Specialty) and production experience with graph databases and search platforms. This position offers a great opportunity to work with cutting-edge technologies, collaborate with talented teams, and make a significant impact on data engineering projects.
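As a minimal sketch of the AppSync-plus-Lambda pattern above, here is a Python Lambda handler behind a direct Lambda resolver. The Neptune query is stubbed out, and the GraphQL field and argument names are invented for illustration.

```python
# Illustrative AWS Lambda handler for an AppSync direct Lambda resolver.

def query_neptune(customer_id: str) -> list[dict]:
    """Placeholder for a gremlin-python traversal against the Neptune endpoint."""
    return [{"id": customer_id, "related": []}]

def handler(event, context):
    # AppSync direct Lambda resolvers pass GraphQL arguments in event["arguments"].
    customer_id = event["arguments"]["customerId"]  # hypothetical argument name
    results = query_neptune(customer_id)
    return results  # AppSync maps the return value onto the GraphQL type
```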

Posted 1 week ago

Apply

12.0 - 16.0 years

0 Lacs

Karnataka

On-site

As a Senior Data Modeller, you will be responsible for leading the design and development of conceptual, logical, and physical data models for enterprise and application-level databases. Your expertise in data modeling, data warehousing, and data governance, particularly in cloud environments, Databricks, and Unity Catalog, will be crucial for the role. You should have a deep understanding of business processes related to master data management in a B2B environment and experience with data governance and data quality concepts.

Your key responsibilities will include designing and developing data models, translating business requirements into structured data models, defining and maintaining data standards, collaborating with cross-functional teams to implement models, analyzing existing data systems for optimization, creating entity relationship diagrams and data flow diagrams, supporting data governance initiatives, and ensuring compliance with organizational data policies and security requirements.

To be successful in this role, you should have at least 12 years of experience in data modeling, data warehousing, and data governance. Strong familiarity with Databricks, Unity Catalog, and cloud environments (preferably Azure) is essential. Additionally, you should possess a background in data normalization, denormalization, dimensional modeling, and schema design, along with hands-on experience with data modeling tools like ERwin. Experience in Agile or Scrum environments, proficiency in integration, databases, data warehouses, and data processing, as well as a track record of successfully selling data and analytics software to enterprise customers, are key requirements.

Your technical expertise should cover Big Data, streaming platforms, Databricks, Snowflake, Redshift, Spark, Kafka, SQL Server, PostgreSQL, and modern BI tools. Your ability to design and scale data pipelines and architectures in complex environments, along with excellent soft skills including leadership, client communication, and stakeholder management, will be valuable assets in this role.
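For illustration of the dimensional-modeling work described above, here is a hedged sketch of a physical star-schema definition expressed as Spark SQL DDL on Databricks. All table and column names are invented for the example.

```python
# Sketch only: a dimension with an SCD2 validity window and a fact table.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

spark.sql("""
    CREATE TABLE IF NOT EXISTS dw.dim_customer (
        customer_key BIGINT,
        customer_id  STRING,
        segment      STRING,
        valid_from   DATE,
        valid_to     DATE      -- SCD2 validity window
    ) USING DELTA
""")

spark.sql("""
    CREATE TABLE IF NOT EXISTS dw.fact_orders (
        order_key    BIGINT,
        customer_key BIGINT,   -- foreign key to dim_customer
        order_date   DATE,
        amount       DECIMAL(18, 2)
    ) USING DELTA
    PARTITIONED BY (order_date)
""")
```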

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Senior Data Engineer at Ethoca, a Mastercard Company in Pune, India, you will play a crucial role in driving data enablement and exploring big data solutions within our technology landscape. Your responsibilities will include designing, developing, and optimizing batch and real-time data pipelines using tools such as Snowflake, Snowpark, Python, and PySpark. You will also be involved in building data transformation workflows, implementing CI/CD pipelines, and administering the Snowflake platform to ensure performance tuning, access management, and platform scalability.

Collaboration with stakeholders to understand data requirements and deliver reliable data solutions will be a key part of your role. Your expertise in cloud-based database infrastructure, SQL development, and building scalable data models using tools like Power BI will be essential in supporting business analytics and dashboarding. Additionally, you will be responsible for real-time data streaming pipelines, data observability practices, and planning and executing deployments, migrations, and upgrades across data platforms while minimizing service impacts.

To be successful in this role, you should have a strong background in computer science or software engineering, along with deep hands-on experience with Snowflake, Snowpark, Python, PySpark, and CI/CD tooling. Familiarity with Schema Change, Java JDK, the Spring & Spring Boot framework, Databricks, and real-time data processing is desirable. You should also possess excellent problem-solving and analytical skills, as well as effective written and verbal communication abilities for collaborating across technical and non-technical teams.

You will be part of a high-performing team that is committed to making systems resilient and easily maintainable on the cloud. If you are looking for a challenging role that allows you to leverage cutting-edge software and development skills while working with massive data volumes, this position at Ethoca may be the right fit for you.
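To make the Snowpark portion concrete, here is a hedged sketch of a transformation that executes inside Snowflake. Connection values and table names are placeholders, not real configuration.

```python
# Snowpark sketch: aggregate a source table and persist the result in Snowflake.
from snowflake.snowpark import Session
import snowflake.snowpark.functions as F

session = Session.builder.configs({
    "account": "<account>", "user": "<user>", "password": "<password>",
    "warehouse": "<warehouse>", "database": "<db>", "schema": "<schema>",
}).create()

daily = (session.table("TRANSACTIONS")          # assumed source table
         .group_by(F.col("MERCHANT_ID"))
         .agg(F.sum("AMOUNT").alias("TOTAL_AMOUNT")))

# The computation is pushed down to Snowflake; only the write triggers execution.
daily.write.save_as_table("DAILY_MERCHANT_TOTALS", mode="overwrite")
```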

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

You will be responsible for performing comprehensive testing of ETL pipelines to ensure data accuracy and completeness across different systems. This includes validating Data Warehouse objects such as fact and dimension tables, designing and executing test cases and test plans for data extraction, transformation, and loading processes, and conducting regression testing to validate enhancements without breaking existing data flows. You will also write complex SQL queries for data verification and backend testing, and test data processing workflows in Azure Data Factory and Databricks environments. Collaboration with developers, data engineers, and business analysts to understand requirements and proactively raise defects is a key part of this role. Additionally, you will be expected to perform root cause analysis for data-related issues and suggest improvements, as well as create clear and concise test documentation, logs, and reports.

The ideal candidate should possess strong knowledge of ETL testing methodologies and tools; excellent SQL skills including joins, aggregation, subqueries, and performance tuning; hands-on experience with data warehousing and data models (star/snowflake); and experience in test case creation, execution, defect logging, and closure. Proficiency in regression testing, data validation, and data reconciliation is also required, as well as working knowledge of Azure Data Factory (ADF), Azure Synapse, and Databricks. Experience with test management tools like JIRA, TestRail, or HP ALM is essential.

Nice-to-have qualifications include exposure to automation testing for data pipelines, scripting knowledge in Python or PySpark, an understanding of CI/CD in data testing, and experience with data masking, data governance, and privacy rules.

To qualify for this role, you should have a Bachelor's degree in Computer Science, Information Systems, or a related field, along with at least 3 years of hands-on experience in ETL/Data Warehouse testing. Excellent analytical and problem-solving skills, strong attention to detail, and good communication skills are also necessary for this position.
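As a small, self-contained sketch of the data reconciliation testing described above, the pytest example below compares row counts and an amount checksum between "source" and "target". In-memory SQLite databases stand in for the real staging and warehouse connections; everything here is an assumption for illustration.

```python
# Reconciliation test: row count and checksum must match after the ETL load.
import sqlite3
import pytest

@pytest.fixture
def conns():
    """Stand-in: two in-memory SQLite DBs play the roles of source and target."""
    src, tgt = sqlite3.connect(":memory:"), sqlite3.connect(":memory:")
    for c in (src, tgt):
        c.execute("CREATE TABLE sales (amount REAL)")
        c.executemany("INSERT INTO sales VALUES (?)", [(10.0,), (25.5,)])
    return src, tgt

def fetch_one(conn, sql):
    return conn.execute(sql).fetchone()[0]

def test_fact_sales_reconciliation(conns):
    src, tgt = conns
    # Row counts catch dropped or duplicated rows.
    assert fetch_one(src, "SELECT COUNT(*) FROM sales") == \
           fetch_one(tgt, "SELECT COUNT(*) FROM sales")
    # An amount checksum catches silently mutated values.
    assert fetch_one(src, "SELECT SUM(amount) FROM sales") == \
           pytest.approx(fetch_one(tgt, "SELECT SUM(amount) FROM sales"))
```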

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

Haryana

On-site

You will be responsible for leading the design and implementation of an Azure-based digital and AI platform that facilitates scalable and secure product delivery across IT and OT domains. In collaboration with the Enterprise Architect, you will shape the platform architecture to ensure alignment with the overall digital ecosystem. Your role will involve integrating OT sensor data from PLCs, SCADA, and IoT devices into a centralized and governed Lakehouse environment, bridging plant-floor operations with cloud innovation.

Key Responsibilities:
- Architect and implement the Azure digital platform utilizing IoT Hub, IoT Edge, Synapse, Databricks, and Purview.
- Work closely with the Enterprise Architect to ensure that platform capabilities align with the broader enterprise architecture and digital roadmap.
- Design data ingestion flows and edge-to-cloud integration from OT systems such as SCADA, PLC, MQTT, and OPC-UA.
- Establish platform standards for data ingestion, transformation (Bronze, Silver, Gold), and downstream AI/BI consumption.
- Ensure security, governance, and compliance in accordance with standards like ISA-95 and the Purdue Model.
- Lead the technical validation of platform components and provide guidance on platform scaling across global sites.
- Implement microservices architecture patterns using containers (Docker) and orchestration (Kubernetes) to enhance platform modularity and scalability.

Requirements:
- A minimum of 8 years of experience in architecture or platform engineering roles.
- Demonstrated hands-on expertise with Azure services including Data Lake, Synapse, Databricks, IoT Edge, and IoT Hub.
- Deep understanding of industrial data protocols such as OPC-UA, MQTT, and Modbus.
- Proven track record of designing IT/OT integration solutions in manufacturing environments.
- Familiarity with Medallion architecture, time-series data, and Azure security best practices.
- TOGAF or Azure Solutions Architect certification is mandatory for this role.
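As a hedged sketch of the edge-to-cloud ingestion described above, the snippet below forwards a plant sensor reading to Azure IoT Hub using the azure-iot-device SDK. The connection string and the read_sensor() stub are placeholders; a real edge agent would read from a PLC, OPC-UA server, or MQTT broker.

```python
# Edge-to-cloud sketch: read a sensor value and send it to Azure IoT Hub.
import json
import time
from azure.iot.device import IoTHubDeviceClient, Message

def read_sensor() -> dict:
    """Placeholder for a PLC/OPC-UA/MQTT read on the edge device."""
    return {"line": "A3", "vibration_mm_s": 2.7, "ts": time.time()}

client = IoTHubDeviceClient.create_from_connection_string("<device-connection-string>")
client.connect()

for _ in range(10):                      # demo loop; a real agent runs continuously
    client.send_message(Message(json.dumps(read_sensor())))
    time.sleep(5)

client.shutdown()
```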

Posted 1 week ago

Apply

4.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

The Role: The Data Engineer is accountable for developing high-quality data products to support the Bank's regulatory requirements and data-driven decision making. A Data Engineer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence, they will contribute to business outcomes on an agile team.

Responsibilities:
- Developing and supporting scalable, extensible, and highly available data solutions
- Delivering on critical business priorities while ensuring alignment with the wider architectural vision
- Identifying and helping address potential risks in the data supply chain
- Following and contributing to technical standards
- Designing and developing analytical data models

Required Qualifications & Work Experience:
- First Class Degree in Engineering/Technology (4-year graduate course)
- 5 to 8 years' experience implementing data-intensive solutions using agile methodologies
- Experience of relational databases and using SQL for data querying, transformation and manipulation
- Experience of modelling data for analytical consumers
- Ability to automate and streamline the build, test and deployment of data pipelines
- Experience in cloud-native technologies and patterns
- A passion for learning new technologies, and a desire for personal growth, through self-study, formal classes, or on-the-job training
- Excellent communication and problem-solving skills

Technical Skills (Must Have):
- ETL: Hands-on experience of building data pipelines; proficiency in two or more data integration platforms such as Ab Initio, Apache Spark, Talend and Informatica
- Big Data: Experience of 'big data' platforms such as Hadoop, Hive or Snowflake for data storage and processing
- Data Warehousing & Database Management: Understanding of Data Warehousing concepts; relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design
- Data Modeling & Design: Good exposure to data modeling techniques; design, optimization and maintenance of data models and data structures
- Languages: Proficient in one or more programming languages commonly used in data engineering such as Python, Java or Scala
- DevOps: Exposure to concepts and enablers - CI/CD platforms, version control, automated quality control management

Technical Skills (Valuable):
- Ab Initio: Experience developing Co>Op graphs; ability to tune for performance; demonstrable knowledge across the full suite of Ab Initio toolsets, e.g., GDE, Express>IT, Data Profiler, Conduct>IT, Control>Center, Continuous>Flows
- Cloud: Good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc.; demonstrable understanding of underlying architectures and trade-offs
- Data Quality & Controls: Exposure to data validation, cleansing, enrichment and data controls
- Containerization: Fair understanding of containerization platforms like Docker and Kubernetes
- File Formats: Exposure to working with event/file/table formats such as Avro, Parquet, Protobuf, Iceberg and Delta
- Others: Basics of job schedulers like Autosys; basics of entitlement management

Certification on any of the above topics would be an advantage.
Job Family Group: Technology
Job Family: Digital Software Engineering
Time Type: Full time
Most Relevant Skills: Please see the requirements listed above.
Other Relevant Skills: For complementary skills, please see above and/or contact the recruiter.

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.

Posted 1 week ago

Apply

10.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Are you insatiably curious, deeply passionate about the realm of databases and analytics, and ready to tackle complex challenges in a dynamic environment in the era of AI? If so, we invite you to join our team as a Cloud & AI Solution Engineer in Innovative Data Platform for commercial customers at Microsoft. Here, you'll be at the forefront of innovation, working on cutting-edge projects that leverage the latest technologies to drive meaningful impact. Join us and be part of a team that thrives on collaboration, creativity, and continuous learning.

Databases & Analytics is a growth opportunity for Microsoft Azure, as well as its partners and customers. It includes a rich portfolio of products including IaaS and PaaS services on the Azure Platform in the age of AI. These technologies empower customers to build, deploy, and manage database and analytics applications in a cloud-native way.

As an Innovative Data Platform Solution Engineer (SE), you will play a pivotal role in helping enterprises unlock the full potential of Microsoft's cloud database and analytics stack across every stage of deployment. You'll collaborate closely with engineering leaders and platform teams to accelerate the Fabric Data Platform, including Azure Databases and Analytics, through hands-on engagements like Proofs of Concept, hackathons, and architecture workshops. This opportunity will allow you to accelerate your career growth, develop deep business acumen, and hone your technical skills, becoming adept at solution design and deployment. As a trusted technical advisor, you'll guide customers through secure, scalable solution design, influence technical decisions, and accelerate database and analytics migration into their deployment workflows. In summary, you'll help customers modernize their data platform and realize the full value of Microsoft's platform, all while enjoying flexible work opportunities.

Responsibilities:
- Drive technical sales with decision makers using demos and PoCs to influence solution design and enable production deployments.
- Lead hands-on engagements (hackathons and architecture workshops) to accelerate adoption of Microsoft's cloud platforms.
- Build trusted relationships with platform leads, co-designing secure, scalable architectures and solutions.
- Resolve technical blockers and objections, collaborating with engineering to share insights and improve products.
- Maintain deep expertise in the Analytics Portfolio: Microsoft Fabric (OneLake, DW, real-time intelligence, BI, Copilot), Azure Databricks, Purview Data Governance, and Azure Databases: SQL DB, Cosmos DB, PostgreSQL.
- Maintain and grow expertise in on-prem EDW (Teradata, Netezza, Exadata), Hadoop, and BI solutions.
- Represent Microsoft through thought leadership in cloud Database & Analytics communities and customer forums.

Qualifications:
- 10+ years technical pre-sales or technical consulting experience; OR a Bachelor's Degree in Computer Science, Information Technology, or a related field AND 4+ years technical pre-sales or technical consulting experience; OR a Master's Degree in Computer Science, Information Technology, or a related field AND 3+ years technical pre-sales or technical consulting experience; OR equivalent experience.
- Expert on Azure Databases (SQL DB, Cosmos DB, PostgreSQL), from migration and modernization to creating new AI apps.
- Expert on Azure Analytics (Fabric, Azure Databricks, Purview) and competitors (BigQuery, Redshift, Snowflake) in data warehouse, data lake, big data, analytics, real-time intelligence, and reporting using integrated Data Security & Governance.
- Proven ability to lead technical engagements (e.g., hackathons, PoCs, MVPs) that drive production-scale outcomes.
- 6+ years technical pre-sales, technical consulting, technology delivery, or related experience; OR equivalent experience.
- 4+ years experience with cloud and hybrid or on-premises infrastructure, architecture designs, migrations, industry standards, and/or technology management.
- Proficient in data warehouse and big data migration, including on-prem appliances (Teradata, Netezza, Oracle), Hadoop (Cloudera, Hortonworks), and Azure Synapse Gen2.

Microsoft is an equal opportunity employer. Consistent with applicable law, all qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.

Posted 1 week ago

Apply

6.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Are you passionate about crafting engaging web applications, from front-end interfaces to robust back-end systems? Join our team and take ownership of the full development lifecycle. We are hiring! Bring your skills in front-end technologies and back-end development to our dynamic team!

In this role, we are looking for:
• 6+ years of back-end development experience, including:
• 5+ years in cloud-native development using AWS
• Strong proficiency in Node.js, TypeScript, and REST APIs
• Experience with AWS CloudFront, S3, and Aurora PostgreSQL
• Demonstrated experience leading small teams or engineering squads
• Deep understanding of microservice architectures
• Familiarity with React.js, Next.js, and Auth0 integration
• Experience working in agile environments using Jira and Confluence
• Strong communication skills and ability to influence cross-functional stakeholders

What you'll do:
• Develop and maintain REST APIs to support various applications and services
• Ensure secure and efficient access/login mechanisms using Auth0
• Collaborate with cross-functional teams to define, design, and ship new features
• Mentor and guide junior developers, fostering a culture of continuous learning and improvement
• Conduct code reviews and ensure adherence to best practices and coding standards
• Troubleshoot and resolve technical issues, ensuring high availability and performance of applications
• Stay updated with the latest industry trends and technologies to drive innovation within the team

Preferred Qualifications
• Experience with enterprise-scale, web-based applications
• Exposure to Databricks or other large-scale data platforms (no direct engineering required)
• Previous experience rotating between teams and adapting to different scopes and responsibilities

Want to know more about this position, or know someone in your network who would be great for this role? Apply now and let's shape the future together!

Posted 1 week ago

Apply

4.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Title: Databricks Engineer
Location: [NCR / Bengaluru]
Job Type: [Full-time]
Experience Level: 4+ years in data engineering with a strong focus on Databricks
Domain: [Healthcare]

Job Summary
We are seeking a highly skilled and motivated Databricks Engineer to join our data engineering team. The ideal candidate will have strong experience in designing, developing, and optimizing large-scale data pipelines and analytics solutions using the Databricks Unified Analytics Platform, Apache Spark, Delta Lake, Data Factory, and modern data lake/lakehouse architectures. You will work closely with data architects, data scientists, and business stakeholders to enable high-quality, scalable, and reliable data processing frameworks that support business intelligence, advanced analytics, and machine learning initiatives.

Key Responsibilities
• Design and implement batch and real-time ETL/ELT pipelines using Databricks and Apache Spark.
• Ingest, transform, and deliver structured and semi-structured data from diverse data sources (e.g., file systems, databases, APIs, event streams).
• Develop reusable Databricks notebooks, jobs, and libraries for repeatable data workflows.
• Implement and manage Delta Lake solutions to support ACID transactions, time travel, and schema evolution (see the sketch after this posting).
• Ensure data integrity through validation, profiling, and automated quality checks.
• Apply data governance principles, including access control, encryption, and data lineage, using available tools (e.g., Unity Catalog, external metadata catalogs).
• Work with data scientists and analysts to deliver clean, curated, and analysis-ready data.
• Profile and optimize Spark jobs for performance, scalability, and cost.
• Monitor, debug, and troubleshoot data pipelines and distributed processing issues.
• Set up alerting and monitoring for long-running or failed jobs.
• Participate in the CI/CD lifecycle using tools like Git, GitHub Actions, Jenkins, or Azure DevOps.

Required Skills & Experience
• 4+ years of experience in data engineering.
• Strong hands-on experience with Apache Spark (DataFrames, Spark SQL, RDDs, Structured Streaming).
• Proficiency in Python (PySpark) and SQL for data processing and transformation.
• Understanding of cloud environments (Azure & AWS).
• Solid understanding of Delta Lake, Data Factory, and lakehouse architecture.
• Experience working with various data formats such as Parquet, JSON, Avro, and CSV.
• Familiarity with DevOps practices, version control (Git), and CI/CD pipelines for data workflows.
• Experience with data modeling, dimensional modeling, and data warehouse concepts.
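For readers unfamiliar with the Delta Lake features this posting lists, here is a minimal PySpark sketch of an ACID upsert (MERGE) followed by a time-travel read. It assumes a Databricks runtime where spark and Delta Lake are preconfigured; the paths, table, and column names are hypothetical illustrations, not taken from the posting.

from delta.tables import DeltaTable

# Ingest a raw batch landed by an upstream process (hypothetical path);
# `spark` is the SparkSession that Databricks preconfigures.
updates = (spark.read
           .option("header", "true")
           .csv("/mnt/raw/patients/2024-06-01.csv"))

target = DeltaTable.forPath(spark, "/mnt/curated/patients")

# MERGE gives ACID upsert semantics on the curated lakehouse table
(target.alias("t")
 .merge(updates.alias("u"), "t.patient_id = u.patient_id")
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())

# Time travel: read the table as it existed at an earlier version
previous = (spark.read.format("delta")
            .option("versionAsOf", 0)
            .load("/mnt/curated/patients"))

Because the MERGE commits atomically, a failed batch leaves the curated table untouched, which is what makes the automated validation and quality checks mentioned above practical.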

Posted 1 week ago

Apply

4.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities / Key Attributes:
• Experience of implementing and delivering data solutions and pipelines on the AWS cloud platform (a representative PySpark step is sketched after this posting).
• Design, implement, and maintain the data architecture for all AWS data services.
• A strong understanding of data modelling, data structures, databases (Redshift), and ETL processes.
• Work with stakeholders to identify business needs and requirements for data-related projects.
• Strong SQL and/or Python or PySpark knowledge.
• Create data models that can be used to extract information from various sources and store it in a usable format.
• Optimize data models for performance and efficiency.
• Write SQL queries to support data analysis and reporting.
• Monitor and troubleshoot data pipelines.
• Collaborate with software engineers to design and implement data-driven features.
• Perform root cause analysis on data issues.
• Maintain documentation of the data architecture and ETL processes.
• Identify opportunities to improve performance by improving database structure or indexing methods.
• Maintain existing applications by updating existing code or adding new features to meet new requirements.
• Design and implement security measures to protect data from unauthorized access or misuse.
• Recommend infrastructure changes to improve capacity or performance.
• Experience in the process industry.

Mandatory skill sets: Data Modelling, AWS, ETL
Preferred skill sets: Data Modelling, AWS, ETL
Years of experience required: 4-8 years
Education qualification: BE, B.Tech, MCA, M.Tech
Degrees/Field of Study required: Bachelor of Engineering, Master of Engineering

Required Skills: ETL Development
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline {+ 27 more}
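As context for the AWS pipeline experience this posting asks for, here is a minimal PySpark sketch of a single S3-to-S3 transform step. The bucket names, paths, and columns are hypothetical, and it assumes a cluster already configured for S3 (s3a) access.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_etl").getOrCreate()

# Read a day of raw JSON events (hypothetical bucket and layout)
orders = spark.read.json("s3a://example-raw/orders/2024/06/01/")

# Aggregate to a reporting-friendly daily grain
daily = (orders
         .withColumn("order_date", F.to_date("order_ts"))
         .groupBy("order_date", "region")
         .agg(F.sum("amount").alias("revenue"),
              F.countDistinct("customer_id").alias("customers")))

# Write date-partitioned Parquet back to the curated zone
(daily.write
 .mode("overwrite")
 .partitionBy("order_date")
 .parquet("s3a://example-curated/orders_daily/"))

Writing the output as date-partitioned Parquet is a common design choice here because it keeps downstream Redshift Spectrum or Athena scans cheap.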

Posted 1 week ago

Apply

8.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Job Description
You are a strategic thinker passionate about driving solutions in Data Governance. You have found the right team.

As a Data Governance Associate in our Finance team, you will spend each day defining, refining, and delivering set goals for our firm. In your role as a Senior Associate in the CAO – Data Governance Team, you will execute data quality initiatives and contribute to data governance practices, including data lineage, contracts, and classification. Under the guidance of the VP, you will ensure data integrity and compliance, utilizing cloud platforms, data analytics tools, and SQL expertise. You will be part of a team that provides resources and support to manage data risks globally, lead strategic data projects, and promote data ownership within JPMC's Chief Administrative Office.

Job Responsibilities
• Collaborate with leadership and stakeholders to support the CAO Data Governance program by facilitating communication and ensuring alignment with organizational goals.
• Implement and maintain a data quality operating model, including standards, rules, and processes, to ensure prioritized data is fit for purpose and meets business needs (a minimal rule check is sketched after this posting).
• Manage the data quality issue management lifecycle, coordinating between CDO, application owners, data owners, information owners, and other stakeholders to ensure timely resolution and continuous improvement.
• Align with evolving firmwide CDAO Data Quality policies, standards, and best practices, incorporating requirements into the CAO CDAO data governance framework to ensure compliance and consistency.
• Implement data governance frameworks on CAO Data Lake structures to enhance data accessibility, usability, and integrity across the organization.

Required Qualifications, Capabilities, and Skills
• 8+ years of experience in data quality management or data governance within financial services.
• Experience with data management tools such as Talend, Alteryx, Soda, and Collibra.
• Experience with visualization tools like Tableau and Qlik Sense.
• Experience with Agile/Scrum methodologies and tools (Confluence, Jira).
• Familiarity with Microsoft desktop productivity tools (Excel, PowerPoint, Visio, Word, SharePoint, Teams).

Preferred Qualifications, Capabilities, and Skills
• Lean/Six Sigma experience is a plus.
• Proficiency in cloud platforms like GCP and AWS, with data lake implementation experience.
• Experience with Databricks or similar platforms for data processing and analytics.

ABOUT US
JPMorganChase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world's most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management. We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law.
We also make reasonable accommodations for applicants' and employees' religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation.

About The Team
Our professionals in our Corporate Functions cover a diverse range of areas from finance and risk to human resources and marketing. Our corporate teams are an essential part of our company, ensuring that we're setting our businesses, clients, customers and employees up for success.
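To make the data quality operating model above concrete, here is a minimal PySpark sketch of one completeness rule with a threshold. The table, column, and threshold are hypothetical, and in practice such rules would usually live in tooling like Soda or Collibra rather than in ad-hoc code.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq_completeness_check").getOrCreate()

df = spark.table("cao_data_lake.customer_profile")  # hypothetical table

total = df.count()
nulls = df.filter(F.col("customer_id").isNull()).count()
null_rate = nulls / total if total else 1.0

THRESHOLD = 0.01  # hypothetical standard: at most 1% missing IDs
if null_rate > THRESHOLD:
    # In practice this would open a data quality issue for triage
    print(f"FAIL completeness rule: null rate {null_rate:.2%} > {THRESHOLD:.0%}")
else:
    print(f"PASS completeness rule: null rate {null_rate:.2%}")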

Posted 1 week ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role and Responsibilities
As an Associate Software Developer at IBM you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.

In this role, your responsibilities may include:
• Implementing and validating predictive models, as well as creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques.
• Designing and implementing enterprise search applications such as Elasticsearch and Splunk for client requirements.
• Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviors.
• Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modeling results (a reusable cleansing step is sketched after this posting).

Preferred Education: Master's Degree

Required Technical and Professional Expertise
• Total experience: 6-7 years (relevant: 4-5 years).
• Mandatory skills: Azure Databricks, Python/PySpark, SQL, GitHub, Azure DevOps, Azure Blob Storage.
• Ability to use programming languages like Java, Python, Scala, etc., to build pipelines to extract and transform data from a repository to a data consumer.
• Ability to use Extract, Transform, and Load (ETL) tools and/or data integration or federation tools to prepare and transform data as needed.
• Ability to use leading-edge tools such as Linux, SQL, Python, Spark, Hadoop and Java.

Preferred Technical and Professional Experience
• You thrive on teamwork and have excellent verbal and written communication skills.
• Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions.
• Ability to communicate results to technical and non-technical audiences.
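As an illustration of the reusable cleansing work this posting describes, here is a minimal PySpark sketch of a cleanse-and-load step on Azure Databricks. The storage account, container, and target table are hypothetical, and it assumes the cluster already holds credentials for ABFS access (spark is the preconfigured Databricks session).

from pyspark.sql import DataFrame, functions as F

def cleanse_events(raw: DataFrame) -> DataFrame:
    """Drop duplicates, normalise timestamps, and filter malformed rows."""
    return (raw.dropDuplicates(["event_id"])
               .withColumn("event_ts", F.to_timestamp("event_ts"))
               .filter(F.col("event_id").isNotNull()))

# Hypothetical landing path on Azure Blob / ADLS Gen2
raw = (spark.read.option("header", "true")
       .csv("abfss://landing@exampleacct.dfs.core.windows.net/events/"))

# Append the cleansed batch to a hypothetical bronze Delta table
cleanse_events(raw).write.format("delta").mode("append").saveAsTable("bronze.events")

Packaging the transformation as a function keeps it testable and reusable across notebooks and jobs, which is the point of the "efficient and reusable manner" requirement above.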

Posted 1 week ago

Apply

7.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Who We Are
Boston Consulting Group partners with leaders in business and society to tackle their most important challenges and capture their greatest opportunities. BCG was the pioneer in business strategy when it was founded in 1963. Today, we help clients with total transformation: inspiring complex change, enabling organizations to grow, building competitive advantage, and driving bottom-line impact. To succeed, organizations must blend digital and human capabilities. Our diverse, global teams bring deep industry and functional expertise and a range of perspectives to spark change. BCG delivers solutions through leading-edge management consulting along with technology and design, corporate and digital ventures, and business purpose. We work in a uniquely collaborative model across the firm and throughout all levels of the client organization, generating results that allow our clients to thrive.

What You'll Do
The Global Information and AI Security Senior Manager provides internal BCG technical consulting around information security architecture and security design measures for new projects, ventures and systems. The architect defines the desired end state to meet solution security goals and overall business goals. The Security Architect ensures that digital applications, tools, and services protect our data, our clients' data, and our intellectual property; are resilient to cyber-attack; and meet BCG policy and standards, regulatory requirements, and industry best practices, while using a risk-based approach to meeting BCG business needs and objectives.

The Global Information and AI Security Senior Manager works with teams inside BCG to secure the building and maintenance of complex computing environments to train, deploy, and operate Artificial Intelligence/ML systems by determining security requirements; planning, implementing and testing security systems; participating in AI/ML/LLM projects as the Security Subject Matter Expert; preparing security standards, policies and procedures; and mentoring team members.

What You'll Bring
• Bachelor's degree (or equivalent experience) required.
• CSSLP certification required; additional certifications such as CISSP, CCSP, or CCSK strongly preferred.
• 7+ years of progressive experience in information security, specifically focused on secure architecture, secure development practices, and cloud-native security.
• Proven expertise supporting software engineering, data science, and AI/ML development teams, specifically with secure model lifecycle management, secure deployment practices, and secure data engineering.
• Expert understanding of the Secure Software Development Lifecycle (SSDLC), including secure architecture, threat modeling frameworks (e.g., MAESTRO, PASTA, STRIDE), penetration testing, secure coding practices, vulnerability management, and incident response.
• Demonstrated technical proficiency across multiple security technologies, platforms, and frameworks, with strong hands-on experience implementing secure cloud-native infrastructures (AWS, Azure, GCP).
• Familiarity with data warehouse and data lake environments such as Databricks, Azure Fabric, or Snowflake, including security best practices in managing and securing large-scale data ecosystems.
• In-depth knowledge and practical experience with AI and machine learning model security, ethical AI frameworks, secure handling of data, and a comprehensive understanding of CI/CD pipelines specifically tailored for data science workloads.
• Extensive experience conducting security assessments, vulnerability triage, intrusion detection and prevention, firewall management, network vulnerability analysis, cryptographic implementations, and incident response analysis.
• Exceptional communication skills (written and oral), influencing capabilities, and the ability to clearly articulate complex security concepts to stakeholders across various levels of the organization.
• Proactive professional development, continuous learning, active participation in industry forums and professional networks, and familiarity with current and emerging security trends and standards.

Additional Info
You're good at: the Senior Manager, Security and AI Architect excels at:
• Collaborating closely with software engineering, data science, data engineering, and cybersecurity teams to design, implement, and maintain secure solutions in agile environments leveraging cloud-native technologies and infrastructure.
• Defining security requirements by deeply understanding business objectives, evaluating strategies, and implementing robust security standards throughout the full Software Development Life Cycle (SDLC).
• Leading security risk assessments, threat modeling (utilizing frameworks such as MAESTRO, PASTA, STRIDE, etc.), security architecture reviews, and vulnerability analyses for client-facing digital products, particularly involving complex AI/ML-driven solutions.
• Advising development teams, including AI engineers and data scientists, on secure coding practices, secure data handling, secure AI/ML model deployment, and related infrastructure security considerations.
• Providing specialized guidance on the secure AI model development lifecycle, including secure data usage, ethical AI practices, and robust security controls in Generative AI and large language model deployments.
• Actively participating in the APAC Dex process for managing digital builds, ensuring alignment with regional requirements, standards, and best practices.
• Staying ahead of emerging security trends and technologies, conducting continuous research, evaluation, and advocacy of new security tools, frameworks, and architectures relevant to digital solutions.
• Ensuring robust compliance with regulatory frameworks and industry standards, including ISO 27001, SOC 2, NIST, and GDPR, particularly as they pertain to data privacy and AI-driven product development.
• Developing and delivering training programs on secure development, AI security considerations, and incident response practices.
• Partnering with internal stakeholders, articulating security risks clearly, influencing technical directions, and promoting comprehensive secure architecture roadmaps.
• Conducting vendor and market assessments, guiding tests, evaluations, and implementation of security products that address enterprise and client-specific information security requirements.
• Advising teams on compensating controls and alternative security measures to facilitate business agility without compromising security posture.
• Leading the implementation and continuous improvement of security tooling and practices within CI/CD pipelines, infrastructure-as-code (IaC), and model deployment automation.

Boston Consulting Group is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, age, religion, sex, sexual orientation, gender identity/expression, national origin, disability, protected veteran status, or any other characteristic protected under national, provincial, or local law, where applicable, and those with criminal histories will be considered in a manner consistent with applicable state and local laws. BCG is an E-Verify Employer. Click here for more information on E-Verify.

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies