
6216 Databricks Jobs - Page 24

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

8.0 - 12.0 years

0 Lacs

karnataka

On-site

As a Principal Data Engineer (Associate Director) at Fidelity in Bangalore, you will be an integral part of the ISS Data Platform Team, which builds and maintains the platform that supports the ISS business operations. You will lead a team of senior and junior developers, providing mentorship and guidance, while taking ownership of delivering a subsection of the wider data platform. Your role will involve designing, developing, and maintaining scalable data pipelines and architectures to facilitate data ingestion, integration, and analytics.

Collaboration will be a key aspect of your responsibilities as you work closely with enterprise architects, business analysts, and stakeholders to understand data requirements, validate designs, and communicate progress. Your innovative mindset will drive technical advancements within the department, focusing on enhancing code reusability, quality, and developer productivity. By challenging the status quo and incorporating the latest data engineering practices and techniques, you will contribute to the continuous improvement of the data platform.

Your expertise in leveraging cloud-based data platforms, particularly Snowflake and Databricks, will be essential in creating an enterprise lakehouse. Advanced proficiency in the AWS ecosystem and experience with core AWS data services such as Lambda, EMR, and S3 will be highly valuable. Experience in designing event-based or streaming data architectures using Kafka, along with strong skills in Python and SQL, will be crucial for success in this role. You will also implement data access controls to ensure data security and performance optimization in compliance with regulatory requirements. Proficiency in CI/CD pipelines for deploying infrastructure and pipelines, experience with RDBMS and NoSQL offerings, and familiarity with orchestration tools like Airflow will be beneficial.

Your soft skills, including problem-solving, strategic communication, and project management, will be key in leading problem-solving efforts, engaging with stakeholders, and overseeing project lifecycles. By joining our team at Fidelity, you will receive a comprehensive benefits package as well as support for your wellbeing and professional development. We are committed to creating a flexible work environment that prioritizes work-life balance and motivates you to contribute effectively to our team. To explore more about our work culture and opportunities for growth, visit careers.fidelityinternational.com.
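To give a flavour of the streaming architecture this role describes, here is a minimal PySpark Structured Streaming sketch that lands a Kafka topic into a Delta table on S3. It assumes the Kafka and Delta Lake Spark packages are on the classpath; the broker address, topic name, schema, and S3 paths are placeholders, not details from the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("kafka-to-delta").getOrCreate()

# Hypothetical event schema; the real contract would come from the business requirements.
schema = StructType([
    StructField("trade_id", StringType()),
    StructField("instrument", StringType()),
    StructField("price", DoubleType()),
    StructField("event_time", TimestampType()),
])

# Read the raw Kafka stream (placeholder broker and topic).
raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")
       .option("subscribe", "iss-trades")
       .option("startingOffsets", "latest")
       .load())

# Kafka values arrive as bytes; parse the JSON payload into typed columns.
events = (raw.selectExpr("CAST(value AS STRING) AS json")
          .select(from_json(col("json"), schema).alias("e"))
          .select("e.*"))

# Land the stream into a Delta table on S3, with a checkpoint for restartability.
query = (events.writeStream
         .format("delta")
         .option("checkpointLocation", "s3://example-bucket/checkpoints/iss_trades")
         .outputMode("append")
         .start("s3://example-bucket/lake/iss_trades"))

query.awaitTermination()
```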

Posted 6 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

We are seeking a seasoned Senior Data Analyst to support a customer in their data transformation journey. The customer is transitioning from an on-premises SQL Server/SSIS-based Enterprise Data Warehouse to a Databricks platform on AWS. This role involves close collaboration with business stakeholders to analyze legacy Excel-based reports and help modernize reporting capabilities within the new cloud-based data framework.

Key Responsibilities:
- Engage with business stakeholders to understand existing reporting needs, primarily driven by Excel-based legacy mechanisms.
- Perform detailed discovery and gap analysis to identify reporting logic, data sources, and transformation rules.
- Translate business requirements into functional and technical specifications for integration into the new reporting framework.
- Collaborate with data engineers and developers to enhance and extend the reporting platform using tools like SSIS and MS SQL Server while supporting migration to Databricks on AWS.
- Support the design and validation of enriched data pipelines and reporting outputs aligned with business expectations.
- Understand and interpret complex SQL queries and stored procedures to support data analysis and transformation.
- Write complex source-to-target mapping requirements and own the mapping document for the assigned modules.
- Participate in Agile/Scrum ceremonies and contribute to sprint planning, backlog grooming, and user story creation.
- Document data mappings, business rules, and reporting logic to ensure traceability and maintainability.

Required Skills and Qualifications:
- 10+ years of experience as a Data Analyst.
- Experience working with Excel-based legacy reports and translating them into scalable, automated reporting solutions.
- Proficiency in SSIS, MS SQL Server, and SQL for data analysis and ETL support.
- Strong ability to understand and work with complex SQL queries and stored procedures.
- Experience in writing and managing source-to-target mapping documents.
- Familiarity with Agile/Scrum methodologies and tools like Jira.
- Excellent communication and stakeholder engagement skills, especially in ambiguous or evolving requirement scenarios.

Posted 6 days ago

Apply

4.0 - 7.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Responsibilities:
- Hands-on experience in Azure data components like ADF / Databricks / Azure SQL
- Good programming logic sense in SQL
- Good PySpark knowledge for Azure Databricks
- Understanding of Data Lake and Data Warehouse concepts
- Understanding of unit and integration testing
- Good communication skills to express thoughts and interact with business users
- Understanding of data security and data compliance
- Understanding of the Agile model
- Understanding of project documentation
- Certification (good to have)
- Domain knowledge

Mandatory skill sets: Azure DE, ADB, ADF, ADL

Experience required: 4 to 7 years

Location: Ahmedabad

Posted 6 days ago

Apply

8.0 - 12.0 years

0 Lacs

kochi, kerala

On-site

You should have 8-12 years of experience in a Data Engineer role, with at least 3 years as an Azure data engineer. A bachelor's degree in Computer Science, Information Technology, Engineering, or a related field is required. You must be proficient in Python and SQL and have a deep understanding of PySpark, along with expertise in Databricks or similar big data solutions. Strong knowledge of ETL/ELT frameworks, data structures, and software architecture is expected, as is proven experience in designing and deploying high-performance data processing systems and extensive experience with Azure cloud data platforms.

As a Data Engineer, your responsibilities will include designing, constructing, installing, testing, and maintaining highly scalable and robust data management systems. You will apply data warehousing concepts to design and implement data warehouse tables in line with business requirements. Building complex ETL/ELT processes for large-scale data migration and transformation across platforms and enterprise systems such as Oracle ERP, ERP Fusion, and Salesforce is essential, and you must be able to extract data from various sources such as APIs, JSON files, and databases. You will use PySpark and Databricks within the Azure ecosystem to manipulate large datasets, improve performance, and enhance the scalability of data operations.

You will develop and implement Azure-based data architectures that are consistent across multiple projects while adhering to best practices and standards, and lead initiatives for data integrity and normalization within Azure data storage and processing environments. You will evaluate and optimize Azure-based database systems for performance efficiency, reusability, reliability, and scalability, troubleshoot complex data-related issues within Azure, and provide expert guidance and support to the team. Ensuring all data processes adhere to governance, data security, and privacy regulations is also a critical part of the role.
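As an illustration of the "extract from APIs and JSON, transform with PySpark, land in Azure" pattern this posting describes, here is a minimal sketch. The endpoint URL, response shape, column names, and ADLS path are assumptions for the example, not details from the posting.

```python
import requests
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, to_date

spark = SparkSession.builder.appName("api-to-delta").getOrCreate()

# Pull a page of records from a hypothetical REST endpoint (placeholder URL and payload shape).
resp = requests.get("https://api.example.com/v1/orders", params={"page": 1}, timeout=30)
resp.raise_for_status()
records = resp.json()["items"]

# Convert the JSON payload into a DataFrame and apply light cleansing.
df = spark.createDataFrame(records)
clean = (df.withColumn("order_date", to_date(col("order_date")))
           .filter(col("amount") > 0)
           .dropDuplicates(["order_id"]))

# Write to a partitioned Delta table in ADLS (the abfss path is illustrative).
(clean.write.format("delta")
      .mode("append")
      .partitionBy("order_date")
      .save("abfss://curated@examplelake.dfs.core.windows.net/orders"))
```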

Posted 6 days ago

Apply

12.0 - 16.0 years

0 Lacs

pune, maharashtra

On-site

As a Microsoft Fabric Professional at YASH Technologies, you will leverage your 12+ years of experience in Microsoft Azure data engineering to drive analytical projects. Your expertise will be crucial in designing, developing, and deploying high-volume ETL pipelines using Azure, Microsoft Fabric, and Databricks for complex models. Your hands-on experience with Azure Data Factory, Databricks, Azure Functions, Synapse Analytics, Data Lake, Delta Lake, and Azure SQL Database will be utilized for managing and processing large-scale data integrations.

In this role, you will be expected to optimize Databricks clusters and manage workflows to ensure cost-effective and high-performance data processing. Your knowledge of data modeling, governance, quality management, and modernization processes will be essential in developing architecture blueprints and technical design documentation for Azure-based data solutions. You will provide technical leadership on cloud architecture best practices, stay updated on emerging Azure technologies, and recommend enhancements to existing systems. Mandatory certifications are a prerequisite for this role.

At YASH Technologies, you will have the opportunity to work in an inclusive team environment where you can shape your career path. The company emphasizes continuous learning, unlearning, and relearning through career-oriented skilling models and technology-enabled collective intelligence. The workplace culture at YASH is built upon principles of flexible work arrangements, emotional positivity, self-determination, trust, transparency, and open collaboration, all aimed at supporting the realization of business goals in a stable employment environment with a great atmosphere and an ethical corporate culture.

Posted 6 days ago

Apply

0.0 - 8.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

Role: Assistant Manager - Data Engineering

Experience: 5 to 8 years

Location: Chennai, Tamil Nadu, India (CHN)

Job Description: An Azure Data Engineer designs, builds, and manages data solutions within the Azure cloud platform, focusing on creating scalable, secure, and efficient data pipelines for analysis, reporting, and decision-making. This involves designing data models, managing data storage, building ETL processes, and ensuring data quality and consistency.

Job Responsibilities: Data pipeline design and implementation, data storage management, data transformation and ETL, data modeling, data security and compliance.

Skills Required: PySpark (advanced), Python (basics), SQL, Databricks (advanced), Airflow.

Job Snapshot: Updated Date 22-07-2025 | Job ID J_3666 | Location Chennai, Tamil Nadu, India | Experience 5 - 8 Years | Employee Type Permanent
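Since Airflow orchestration is named alongside PySpark and Databricks here, a minimal Airflow 2.x DAG sketch may help illustrate the shape of such a pipeline. The DAG id, schedule, and task bodies are placeholders; a real deployment would typically replace the callables with Databricks or Spark submission operators.

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Placeholder: pull source data (e.g. from an API or database) and stage it.
    print("extracting source data")

def transform_and_load():
    # Placeholder: trigger the PySpark/Databricks transformation and load step.
    print("running transformation and load")

with DAG(
    dag_id="daily_sales_pipeline",   # hypothetical name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",               # Airflow 2.4+ style scheduling argument
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="transform_and_load", python_callable=transform_and_load)

    extract_task >> load_task
```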

Posted 1 week ago

Apply

0.0 - 3.0 years

0 Lacs

Vadodara, Gujarat

On-site

Analytics | Posted on Jul 22, 2025 | Vadodara, Gujarat | Minimum Required Experience: 3 years | Full Time

Skills: Machine Learning, TensorFlow, PyTorch, NLP

Designation/Role Name: Machine Learning Engineer
Org Structure: Data Science & AI Team
Work Schedule: According to the business needs

Job Description: With excellent analytical and problem-solving skills, you should understand customers' business problems and translate them into scope of work and technical specifications for Data Science projects. You will efficiently utilize cutting-edge technologies in AI areas (Machine Learning, NLP, Computer Vision) to develop solutions for business problems. Good exposure to technology platforms for Data Science, AI, Gen AI, and cloud, with implementation experience, is expected, along with the ability to understand data and requirements and to design and develop a Machine Learning model for those requirements.

This job requires the following:
- Designing, developing, and implementing end-to-end machine learning production pipelines (data exploration, sampling, training data generation, feature engineering, model building, and performance evaluation)
- Experience in predictive analytics and statistical modeling
- Experience in successfully making use of: logistic regression, multivariate regression, support vector machines, stochastic processes, decision trees, lifetime analysis, common clustering algorithms, optimization, CNNs

Essential Qualifications: B.Tech or BE (Computer/IT), MCA, or M.Sc. Computer Science, along with necessary certifications, is preferred.

Technical Qualifications (Essential):
- Hands-on programming experience
- Hands-on technical design experience
- Hands-on prompt engineering experience
- Design and development of at least 3 Data Science/AI projects, including design and development of Machine Learning models
- 1 Generative AI project designed, developed, and delivered to production is desirable

Primary Skills:
- Hands-on coding experience in Python, PyTorch, Spark/PySpark, SQL, TensorFlow, NLP frameworks, and similar tools/frameworks
- Good understanding of the business and domain of the applications
- Hands-on experience in design and development of Gen AI applications using open-source LLMs and cloud platforms
- Hands-on experience in design and development of API-based applications for AI and Data Science projects
- Understanding of GenAI concepts, RAG, and model fine-tuning techniques is desirable
- Understanding of the concepts of major AI models such as OpenAI, Llama, Hugging Face, Mistral AI, etc.
- Understanding of DevOps pipelines for deployment
- Good understanding of the Data Engineering lifecycle: data pipelines, data warehouse, data lake

Secondary Skills:
- Experience using Databricks and the Azure Data platform
- Knowledge of any configuration management tools is desirable
- Familiarity with containerization and container orchestration services like Docker and Kubernetes

Experience: 3+ years in Machine Learning model development in Data Science/AI projects. Awareness of LLM integrations/development is desirable.

Description of Responsibility:
- Understand customers' requirements (business, functional, non-functional, etc.), and design and develop Machine Learning models
- Design and implement Machine Learning models using major technology and computing platforms (open source and cloud)
- Possess excellent analytical and problem-solving skills and be able to understand various forms of data and patterns and derive insights
- Collaborate with internal and external stakeholders to derive solutions that require cross-functional teams and to ensure smoother execution of projects
- Knowledge of data modeling and understanding of different data structures
- Experience with design of AI/ML solutions, either standalone or integrated with other applications
- Experience in Generative AI solutions for business/automation requirements using open-source LLMs (OpenAI, Llama, Mistral, etc.) is desirable

Skills / Competencies: Research orientation, proactive and clear communication, collaboration, solution orientation, solution articulation, accountability, adaptability/flexibility, analytical skills, listening skills, customer service orientation
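The end-to-end pipeline steps listed above (feature engineering, model building, performance evaluation) can be sketched with scikit-learn. This uses synthetic data and logistic regression, one of the techniques the posting names; the dataset, features, and thresholds are illustrative only.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report, roc_auc_score

# Synthetic stand-in for the training data produced by the exploration/sampling steps.
X, y = make_classification(n_samples=5000, n_features=20, n_informative=8, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Feature scaling + logistic regression captured as one reproducible pipeline.
model = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X_train, y_train)

# Performance evaluation on the held-out split.
proba = model.predict_proba(X_test)[:, 1]
print(classification_report(y_test, model.predict(X_test)))
print("ROC AUC:", round(roc_auc_score(y_test, proba), 3))
```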

Posted 1 week ago

Apply

0.0 - 18.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Bengaluru, Karnataka | Job ID 30181669 | Job Category: Digital Technology

Job Title: Senior Product Manager
Preferred Location: Bangalore, India
Full Time/Part Time: Full Time

Build a career with confidence. Carrier Global Corporation, a global leader in intelligent climate and energy solutions, is committed to creating solutions that matter for people and our planet for generations to come. From the beginning, we've led in inventing new technologies and entirely new industries. Today, we continue to lead because we have a world-class, diverse workforce that puts the customer at the center of everything we do.

Role Responsibilities:

Data Strategy & Architecture
- Develop and execute the enterprise data roadmap in alignment with business and IT objectives.
- Define best practices for data governance, storage, processing, and analytics.
- Promote standardized data management patterns, including data lakes, data warehouses, and real-time processing architectures.
- Ensure data solutions support both legacy and cloud-native systems while maintaining security, scalability, and efficiency.

Product Leadership & Execution
- Define and prioritize data product features, ensuring alignment with business goals.
- Work with cross-functional teams, including data engineering, business intelligence, and application development, to deliver scalable data solutions.
- Oversee the full lifecycle of data products, from design and development to deployment and monitoring.
- Evaluate emerging technologies and tools that enhance data capabilities, including AI/ML and advanced analytics.

Governance & Operational Excellence
- Establish governance policies for data quality, security, and compliance in alignment with regulatory standards.
- Implement monitoring, logging, and alerting mechanisms to ensure data integrity and availability.
- Drive automation in data quality testing and deployment using CI/CD practices.
- Collaborate with security and compliance teams to ensure data protection and privacy compliance.

Role Purpose: The Data Product Manager will be responsible for defining and executing the strategy for enterprise data solutions. This role will oversee the development, implementation, and optimization of data pipelines, governance frameworks, and analytics capabilities to support seamless data access and utilization across enterprise applications. The successful candidate will collaborate closely with engineering, business stakeholders, and IT teams to ensure data interoperability, security, and scalability. This role will drive initiatives that enhance real-time analytics, data-driven decision-making, and overall digital transformation efforts.

Minimum Requirements:
- 12-18 years of overall experience and 7+ years in data management, product management, or enterprise architecture roles.
- Proven expertise in designing and managing data solutions, including data lakes, data warehouses, and ETL/ELT pipelines.
- Hands-on experience with data platforms such as Snowflake, Redshift, BigQuery, Databricks, or similar.
- Strong understanding of data governance, security, and compliance frameworks.
- Experience integrating SaaS and on-prem systems, including ERP, CRM, and HR platforms (SAP, Salesforce, Workday, etc.).
- Familiarity with DevOps and CI/CD tools like GitHub Actions, AWS CodePipeline, or Jenkins.
- Experience with real-time and batch data movement solutions, including AWS DMS, Qlik Replicate, or custom CDC frameworks.
- Exposure to data mesh architectures, advanced analytics, or AI/ML-driven data products.
- Hands-on experience with Python, SQL, JSON/XML transformations, and data testing frameworks.
- Certifications in AWS, data platforms, or enterprise data management.

Benefits: We are committed to offering competitive benefits programs for all of our employees and enhancing our programs when necessary. Have peace of mind and body with our health insurance. Make yourself a priority with flexible schedules and leave policy. Drive your career forward through professional development opportunities. Achieve your personal goals with our Employee Assistance Program.

Our commitment to you: Our greatest assets are the expertise, creativity and passion of our employees. We strive to provide a great place to work that attracts, develops and retains the best talent, promotes employee engagement, fosters teamwork and ultimately drives innovation for the benefit of our customers. We strive to create an environment where you feel that you belong, with diversity and inclusion as the engine to growth and innovation. We develop and deploy best-in-class programs and practices, providing enriching career opportunities, listening to employee feedback and always challenging ourselves to do better. This is The Carrier Way. Join us and make a difference. Now!

Carrier is an Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or veteran status, age or any other federally protected class.

Posted 1 week ago

Apply

0.0 - 3.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Bengaluru, Karnataka | Job ID 30181213 | Job Category: Digital Technology

Job Title: Data Lakehouse Platform Architect / Engineer
Preferred Location: Bangalore/Hyderabad, India
Full Time/Part Time: Full Time

Build a career with confidence. Carrier Global Corporation, a global leader in intelligent climate and energy solutions, is committed to creating solutions that matter for people and our planet for generations to come. From the beginning, we've led in inventing new technologies and entirely new industries. Today, we continue to lead because we have a world-class, diverse workforce that puts the customer at the center of everything we do.

Role Responsibilities:
- Lead the architecture and engineering of data lakehouse platforms using Apache Iceberg on AWS S3, enabling scalable storage and multi-engine querying.
- Design and build infrastructure-as-code solutions using AWS CDK or Terraform to support repeatable, automated deployments.
- Deliver and optimize ELT/ETL data pipelines for real-time and batch workloads with AWS Glue, Apache Spark, Kinesis, and Airflow.
- Enable compute engines (Athena, EMR, Redshift, Trino, Snowflake) through efficient schema design, partitioning, and metadata strategies.
- Champion observability and operational excellence across the platform by implementing robust monitoring, alerting, and logging practices.
- Drive automation through CI/CD pipelines using GitHub Actions, CircleCI, or AWS CodePipeline, improving deployment speed and reliability.
- Partner cross-functionally with data engineers, DevOps, security, and FinOps teams to align platform features to evolving business needs.
- Provide thought leadership on open standards, cost optimization, and scaling data platform capabilities to support AI/ML and analytics initiatives.

Role Purpose:
- 14+ years of experience in data engineering, cloud infrastructure, or platform engineering roles, with at least 3 years in a senior or lead capacity.
- Expert-level experience with AWS services (S3, Glue, Kinesis, IAM, CloudWatch, EMR).
- Strong working knowledge of Apache Iceberg or similar open table formats (e.g., Delta Lake, Hudi).
- Proficiency in Python, with the ability to build infrastructure, automation, and data workflows.
- Demonstrated experience designing data lakehouse architectures supporting large-scale analytics and ML use cases.
- Hands-on experience with CI/CD pipelines, infrastructure-as-code, and cloud-native automation tooling.
- Strong understanding of data governance principles, schema evolution, partitioning, and access controls.

Minimum Requirements:
- Familiarity with AWS Lake Formation, Snowflake, Databricks, or Trino.
- Experience optimizing cloud cost and performance through FinOps practices.
- Prior experience contributing to platform strategy or mentoring junior engineers.
- Understanding of security, compliance, and operational controls in regulated enterprise environments.

Benefits: We are committed to offering competitive benefits programs for all of our employees and enhancing our programs when necessary. Have peace of mind and body with our health insurance. Make yourself a priority with flexible schedules and leave policy. Drive your career forward through professional development opportunities. Achieve your personal goals with our Employee Assistance Program.

Our commitment to you: Our greatest assets are the expertise, creativity and passion of our employees. We strive to provide a great place to work that attracts, develops and retains the best talent, promotes employee engagement, fosters teamwork and ultimately drives innovation for the benefit of our customers. We strive to create an environment where you feel that you belong, with diversity and inclusion as the engine to growth and innovation. We develop and deploy best-in-class programs and practices, providing enriching career opportunities, listening to employee feedback and always challenging ourselves to do better. This is The Carrier Way. Join us and make a difference. Now!

Carrier is an Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or veteran status, age or any other federally protected class.
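To illustrate the Iceberg-on-S3, multi-engine idea the role above centers on, here is a small PySpark sketch. It assumes a Spark session already configured with an Iceberg catalog (for example via the iceberg-spark-runtime package) backed by S3; the catalog, schema, table, and column names are placeholders.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

# Assumes spark is launched with an Iceberg catalog named "lake" configured against S3.
spark = SparkSession.builder.appName("iceberg-demo").getOrCreate()

df = spark.createDataFrame(
    [("2025-01-01", "device-1", 21.4), ("2025-01-01", "device-2", 19.8)],
    ["reading_date", "device_id", "temperature"],
)

# DataFrameWriterV2: create (or replace) an Iceberg table partitioned by reading_date.
(df.writeTo("lake.telemetry.readings")
   .using("iceberg")
   .partitionedBy(col("reading_date"))
   .createOrReplace())

# Any Iceberg-aware engine (Athena, Trino, EMR Spark) could then query the same table.
spark.sql(
    "SELECT device_id, avg(temperature) AS avg_temp "
    "FROM lake.telemetry.readings GROUP BY device_id"
).show()
```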

Posted 1 week ago

Apply

2.0 - 9.0 years

0 Lacs

chennai, tamil nadu

On-site

Tiger Analytics is a global AI and analytics consulting firm with a team of over 2800 professionals focused on using data and technology to solve complex problems that impact millions of lives worldwide. Our culture is centered around expertise, respect, and a team-first mindset. Headquartered in Silicon Valley, we have delivery centers globally and offices in various cities across India, the US, UK, Canada, and Singapore, along with a significant remote workforce. Tiger Analytics is certified as a Great Place to Work. Joining our team means being at the forefront of the AI revolution, working with innovative teams that push boundaries and create inspiring solutions.

We are currently looking for an Azure Big Data Engineer to join our team in Chennai, Hyderabad, or Bangalore. As a Big Data Engineer (Azure), you will be responsible for building and implementing various analytics solutions and platforms on Microsoft Azure using a range of open source, big data, and cloud technologies. Your typical day might involve designing and building scalable data ingestion pipelines, processing structured and unstructured data, orchestrating pipelines, collaborating with teams and stakeholders, and making critical tech-related decisions.

To be successful in this role, we expect you to have 4 to 9 years of total IT experience with at least 2 years in big data engineering and Microsoft Azure. You should be proficient in technologies such as Azure Data Factory (ADF), PySpark, Databricks, ADLS, Azure SQL Database, Azure Synapse Analytics, Event Hub & Streaming Analytics, Cosmos DB, and Purview. Strong coding skills in SQL, Python, or Scala/Java are essential, as well as experience with big data technologies like Hadoop, Spark, Airflow, NiFi, Kafka, Hive, Neo4j, and Elasticsearch. Knowledge of file formats such as Delta Lake, Avro, Parquet, JSON, and CSV is also required. Ideally, you should have experience in building REST APIs, working on Data Lake or Lakehouse projects, supporting BI and Data Science teams, and following Agile and DevOps processes. Certifications like Data Engineering on Microsoft Azure (DP-203) or Databricks Certified Developer (DE) would be a valuable addition to your profile.

At Tiger Analytics, we value diversity and inclusivity, and we encourage individuals with different skills and qualities to apply, even if they do not meet all the criteria for the role. We are committed to providing equal opportunities and fostering a culture of listening, trust, respect, and growth. Please note that the job designation and compensation will be based on your expertise and experience, and our compensation packages are competitive within the industry. If you are passionate about leveraging data and technology to drive impactful solutions, we would love to stay connected with you.

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

karnataka

On-site

You are a strategic thinker passionate about driving solutions in valuation control. You have found the right team. As a Vice President in our Valuation Control Group (VCG), you will spend each day defining, refining, and delivering set goals for our firm. You will lead the Analytics book of work, which involves analyzing business requirements, designing, constructing, testing, and generating data insights and visualizations. Additionally, you will produce operational reports to aid in managerial decision-making and conduct ad-hoc analysis to cater to the needs of all internal partners, utilizing a range of data sources.

Our Valuation Control Group (VCG) is organized along business lines, including Corporate & Investment Bank (Macro Products, Credit, Equities, Securitized Products, IB Risk), CIO, Treasury & Corporate (CTC), Asset Management, Consumer & Community Banking (CCB), and Commercial Banking (CB). Clients of the group include senior management, business heads, regulators, and both internal and external audit.

Responsibilities:
- Ensure compliance by performing price verification and benchmarking of valuations, calculating valuation adjustments, and accurately reporting in financial statements, adhering to VCG standards, accounting, and regulatory standards.
- Analyze data by performing data mining and analytics to solve problems and deliver high-quality data insights and visualizations.
- Build sophisticated tools by utilizing advanced statistical tools and algorithms for data analysis to enhance the accuracy and depth of insights, supporting informed decision-making and strategic planning.
- Collaborate with Product Owners and Operations by analyzing business requirements to design, build, test, and implement new capabilities.
- Manage data by understanding and utilizing appropriate data sources while ensuring adherence to team standards.
- Mitigate risk through proactive engagement in continuous process improvement, root cause analysis, and collaboration with VCG Product Owners, working as part of an agile team.
- Engage in continuous learning of new tools and technologies, and contribute to value-added projects related to Business, Risk, and Finance initiatives.

Qualifications:
- Experience and Education: Over 8 years of experience in a similar role, with a graduate degree in finance, engineering, mathematics, statistics, or data science.
- Technical Proficiency: Experience in SQL, Python, Databricks, Cloud, or other enterprise technologies, along with advanced knowledge of MS Excel and MS PowerPoint.
- Communication and Analytical Skills: Strong verbal and written communication skills, coupled with excellent analytical and problem-solving abilities to provide sound recommendations to management.
- Complex Product Understanding and Valuation Knowledge: Ability to understand complex products, analyze transaction and process flows, and possess an understanding of valuation concepts related to financial products and derivatives, along with basic accounting knowledge.
- Task Management and Prioritization: Demonstrated ability to efficiently prioritize and manage multiple tasks simultaneously.
- Business Intelligence and Data Analysis: Expert knowledge of Business Intelligence tools such as Tableau, Databricks, and Python, enhancing data analysis and visualization capabilities.

Posted 1 week ago

Apply

7.0 - 11.0 years

0 Lacs

pune, maharashtra

On-site

As a Senior Data Engineer at UST in Pune, you will design, build, and optimize secure data pipelines for large-scale data processing. With over 7 years of experience, you will leverage your expertise in Databricks, PySpark, SQL, and ETL/ELT tools to transform complex data across enterprise platforms. Your role will involve collaborating with Data Architects, DQ Analysts, and Cyber SMEs in Agile POD teams to ensure the successful implementation of data quality rules and compliance reporting.

Your key responsibilities will include managing data modeling, performance tuning, and infrastructure cost optimization. You will support data governance initiatives and implement DQ controls such as BCBS 239, DUSE, and DMOVE. Additionally, you will document architecture and test strategies and maintain code quality and scalability standards.

To excel in this role, you must possess strong proficiency in Databricks, PySpark, and SQL, and have hands-on experience with ETL tools like Glue, DataProc, ADF, or Informatica. Cloud experience with AWS, Azure, or GCP is essential, along with a solid understanding of data security, encryption, and risk controls. Excellent communication and stakeholder collaboration skills are crucial for successful engagement with various teams.

Preferred qualifications for this position include a Bachelor's degree in Computer Science, Engineering, or a related field, as well as experience in banking, financial services, or cybersecurity domains. Familiarity with DUSE/DMOVE frameworks and cybersecurity metrics reporting will be advantageous, and certification in cloud or data engineering tools is considered a plus.

Join UST, a global digital transformation solutions provider with a track record of impactful collaborations with leading companies worldwide. With a workforce of over 30,000 employees in 30 countries, UST is committed to embedding innovation and agility into client organizations, creating boundless impact and touching billions of lives in the process.
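As a rough sketch of the data quality rules mentioned above (completeness, uniqueness, validity checks feeding compliance reporting), here is a minimal PySpark example. The table and column names are hypothetical; BCBS 239/DUSE/DMOVE are governance frameworks, and the code only shows generic rule evaluation, not any specific framework's controls.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()

# Hypothetical curated table; in practice this would be a governed Delta table.
df = spark.read.table("curated.customer_accounts")

total = df.count()

# Completeness: key attributes must not be null.
null_ids = df.filter(F.col("customer_id").isNull()).count()

# Uniqueness: one row per customer_id.
dupes = df.groupBy("customer_id").count().filter(F.col("count") > 1).count()

# Validity: balances should never be negative for this hypothetical product.
negative = df.filter(F.col("balance") < 0).count()

results = {
    "row_count": total,
    "null_customer_id": null_ids,
    "duplicate_customer_id": dupes,
    "negative_balance": negative,
}
print(results)

# A simple pass/fail gate that a compliance report or alert could consume.
if any(v > 0 for k, v in results.items() if k != "row_count"):
    raise ValueError(f"Data quality rules failed: {results}")
```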

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

karnataka

On-site

As a Senior DevOps Engineer on our Life Sciences & Healthcare DevOps team, you will have the opportunity to work on cutting-edge Life Sciences and Healthcare products in a DevOps environment. If you are passionate about coding in Python or any scripting language, experienced with Linux, and have worked in a cloud environment, we are excited to hear from you! Our team specializes in container orchestration, Terraform, Datadog, Jenkins, Databricks, and various AWS services; if you have expertise in these areas, we would love to connect with you.

You should have at least 7 years of professional software development experience and 5+ years as a DevOps Engineer or in a similar role, with proficiency in various CI/CD and configuration management tools such as Jenkins, Maven, Gradle, Spinnaker, Docker, Ansible, CloudFormation, and Terraform. Additionally, you should possess at least 3 years of AWS experience managing resources in services like S3, ECS, RDS, EC2, IAM, OpenSearch Service, Route53, VPC, CloudFront, Glue, and Lambda. A minimum of 5 years of experience in Bash/Python scripting and broad knowledge of operating system administration, programming languages, cloud platform deployment, and networking protocols is required. You will be on call for critical production issues and should have a good understanding of the SDLC, patching, releases, and basic systems administration activities. AWS Solution Architect certifications and Python programming experience would be beneficial.

In this role, your responsibilities will include designing, developing, and maintaining the product's cloud infrastructure architecture; collaborating with different teams to provide end-to-end infrastructure setup; designing and deploying secure infrastructure as code; staying updated with industry best practices, trends, and standards; owning the performance, availability, security, and reliability of the products running across public cloud and multiple regions worldwide; and documenting solutions and maintaining technical specifications.

The products you will be working on rely on container orchestration, Jenkins, various AWS services, Databricks, Datadog, Terraform, and more, and you will support the development team in building them. You will be part of the Life Sciences & Healthcare Content DevOps team, focusing on DevOps operations for the production infrastructure behind Life Sciences & Healthcare Content products. The team consists of five members, reports to the DevOps Manager, and provides support for various application products internal to Clarivate. The team also handles the change process on the production environment, incident management, monitoring, and customer service requests.

The shift timing for this role is 12 PM to 9 PM, and you must provide on-call support during non-business hours based on team bandwidth. At Clarivate, we are dedicated to offering equal employment opportunities and comply with applicable laws and regulations governing non-discrimination in all locations.

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

karnataka

On-site

Loyalytics is a rapidly growing analytics consulting and product organization located in Bangalore. We specialize in working with large retail clients globally to help them maximize the value of their data assets through consulting projects and product accelerators. Our team consists of over 100 dynamic analytics practitioners who are dedicated to leveraging cutting-edge tools and technologies. Our technical team comprises data scientists, data engineers, and business analysts who handle vast amounts of data points daily. We operate in a massive multi-billion-dollar global market opportunity and are led by individuals with a combined industry experience of over 40 years. Loyalytics has earned a reputation for acquiring customers through word-of-mouth and referrals, including prominent retail brands in GCC regions such as Lulu and GMG, showcasing a strong product-market fit. Despite being an 8-year-old bootstrapped company with over 100 employees, we are continuously expanding our team.

We are currently seeking a Customer Success Manager (CSM) with the following key requirements:
- 3-5 years of experience in campaign analysis, marketing analytics, or a similar CRM/consulting role, with a preference for client-facing experience.
- Strong attention to detail, excellent organizational skills, and proficiency in project management.
- Hands-on experience in data analysis, segmentation, and customer profiling.
- Proficiency in SQL, Databricks, PySpark, and Python.
- Familiarity with A/B testing, personalization, and CRM performance measurement.
- Ability to effectively communicate insights to both technical and non-technical stakeholders.
- Experience with Power BI (preferred), Tableau, or similar visualization tools.

Preferred qualifications include:
- Experience in CRM, loyalty, or customer engagement environments.
- Exposure to retail or e-commerce data.

This position is based in Bangalore, Karnataka, and requires on-site presence.
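For the campaign analysis and A/B testing skills listed above, a minimal sketch of measuring campaign uplift with a chi-square test is shown below. The group sizes and conversion counts are synthetic, and the metric and test choice are illustrative rather than a prescribed methodology.

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Synthetic campaign results: exposed (test) vs. holdout (control) customers.
data = pd.DataFrame({
    "group": ["test", "control"],
    "customers": [20000, 20000],
    "converted": [1240, 1050],
})
data["conversion_rate"] = data["converted"] / data["customers"]
print(data)

# 2x2 contingency table: converted vs. not converted per group.
table = [
    [data.loc[0, "converted"], data.loc[0, "customers"] - data.loc[0, "converted"]],
    [data.loc[1, "converted"], data.loc[1, "customers"] - data.loc[1, "converted"]],
]
chi2, p_value, _, _ = chi2_contingency(table)

uplift = data.loc[0, "conversion_rate"] - data.loc[1, "conversion_rate"]
print(f"Absolute uplift: {uplift:.2%}, p-value: {p_value:.4f}")
```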

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

maharashtra

On-site

As an experienced Data Architect with a focus on advanced analytics and Generative AI solutions, you will architect and deliver cutting-edge analytics and visualization solutions utilizing Databricks, Generative AI frameworks, and modern BI tools. You will be responsible for designing and implementing Generative AI solutions, integrating frameworks like Microsoft Copilot, and developing reference architectures for leveraging Databricks Agent capabilities.

In this position, you will lead pre-sales engagements, conduct technical discovery sessions, and provide solution demos. You will collaborate with various stakeholders to align analytics solutions with business objectives and promote best practices for AI/BI Genie and Generative AI-driven visualization platforms. Additionally, you will guide the deployment of modern data architectures that integrate AI-driven decision support with popular BI tools such as Power BI, Tableau, or ThoughtSpot.

Your role will also involve serving as a trusted advisor to clients, helping them transform their analytics and visualization strategies with Generative AI innovation. You will mentor and lead teams of consultants, ensuring high-quality solution delivery, reusable assets, and continuous skill development. Staying current on Databricks platform evolution, GenAI frameworks, and next-generation BI trends will be crucial for proactively advising clients on emerging innovations.

To be successful in this role, you should have at least 8 years of experience in data analytics, data engineering, or BI architecture roles, with a minimum of 3 years delivering advanced analytics and Generative AI solutions. Hands-on expertise with the Databricks platform, familiarity with Generative AI frameworks, and strong skills in visualization platforms are essential. Pre-sales experience, consulting skills, and knowledge of data governance and responsible AI principles are also required.

Preferred qualifications include Databricks certifications, certifications in major cloud platforms, experience with GenAI prompt engineering, exposure to knowledge graphs and semantic search frameworks, industry experience in financial services, healthcare, or manufacturing, and familiarity with MLOps and end-to-end AI/ML pipelines. Your primary skill should be Data Architecture, with additional expertise in Power BI, AI/ML architecture, analytics architecture, and BI & visualization development.

Joining Infogain, a human-centered digital platform and software engineering company, will provide you with opportunities to work on cutting-edge projects for Fortune 500 companies and digital natives across various industries, utilizing technologies such as cloud, microservices, automation, IoT, and artificial intelligence.

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

karnataka

On-site

You should have 4.5-6 years of experience in SDET roles, with strong core skills in Data Warehouse, SQL, Databricks, FDL, and ETL concepts. Your primary skill should be SDET, with additional expertise in Selenium, Databricks, SQL, and ETL testing.

Infogain is a human-centered digital platform and software engineering company headquartered in Silicon Valley. They specialize in engineering business outcomes for Fortune 500 companies and digital natives across various industries using cutting-edge technologies like cloud, microservices, automation, IoT, and artificial intelligence. As a Microsoft Gold Partner and Azure Expert Managed Services Provider, Infogain accelerates experience-led transformation in the delivery of digital platforms. With multiple offices and delivery centers worldwide, Infogain offers a dynamic and innovative work environment for professionals in the technology sector.
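To give a feel for the ETL testing skills this role asks for, here is a minimal pytest-style reconciliation sketch using PySpark: it compares a staging extract against a warehouse target. The table and column names are hypothetical, and a real test suite would typically parameterize the checks and run them inside a CI pipeline.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-reconciliation").getOrCreate()

# Hypothetical staging (source) and warehouse (target) tables.
source = spark.read.table("staging.orders")
target = spark.read.table("dw.fact_orders")

def test_row_counts_match():
    assert source.count() == target.count(), "Row counts differ between source and target"

def test_amount_totals_match():
    src_total = source.agg(F.sum("order_amount")).collect()[0][0]
    tgt_total = target.agg(F.sum("order_amount")).collect()[0][0]
    assert src_total == tgt_total, f"Totals differ: {src_total} vs {tgt_total}"

def test_no_orphan_keys():
    # Every target order key must exist in the source extract.
    orphans = target.join(source, on="order_id", how="left_anti").count()
    assert orphans == 0, f"{orphans} target rows have no matching source row"
```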

Posted 1 week ago

Apply

14.0 - 18.0 years

0 Lacs

maharashtra

On-site

You will lead the architectural design for a migration project, utilizing Azure services, SQL, Databricks, and PySpark to develop scalable, efficient, and reliable solutions. Your responsibilities will include designing and implementing advanced data transformation and processing tasks using Databricks, PySpark, and ADF, and you should have a strong understanding of data integration, ETL, and data warehousing concepts. It will be essential to design, deploy, and manage Databricks clusters for data processing, ensuring performance and cost efficiency, and to troubleshoot cluster performance issues when necessary.

You will mentor and guide developers on using PySpark for data transformation and analysis, sharing best practices and reusable code patterns. Experience in end-to-end architecture for SAS to PySpark migration will be beneficial. Documenting architectural designs, migration plans, and best practices to ensure alignment and reusability within the team and across the organization is a key aspect of this position. You should be experienced in delivering end-to-end solutions and effectively managing project execution. Collaborating with stakeholders to translate business requirements into technical specifications, designing robust data pipelines, storage solutions, and transformation workflows, and supporting UAT and production deployment planning are also part of your responsibilities. Strong communication and collaboration skills are essential for this role.

Experience: 14-16 years

Skills: Primary skill: Data Architecture. Sub skill(s): Data Architecture. Additional skill(s): ETL, Data Architecture, Databricks, PySpark.

About the Company: Infogain is a human-centered digital platform and software engineering company based out of Silicon Valley. They engineer business outcomes for Fortune 500 companies and digital natives in various industries using technologies such as cloud, microservices, automation, IoT, and artificial intelligence. Infogain accelerates experience-led transformation in the delivery of digital platforms. The company is a Microsoft Gold Partner and Azure Expert Managed Services Provider. Infogain, an Apax Funds portfolio company, has offices in multiple locations worldwide.
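As a small illustration of the SAS-to-PySpark migration work described above, the sketch below shows the kind of logic a SAS DATA step plus PROC MEANS might become in PySpark: a filter, a derived column, and a grouped summary. The dataset path and column names are hypothetical, not taken from the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sas-migration-demo").getOrCreate()

# Placeholder source: a curated Delta dataset on the lake.
claims = spark.read.format("delta").load("/mnt/curated/claims")

# Equivalent of a SAS DATA step: filter rows and derive a new column.
open_claims = (claims
               .filter(F.col("status") == "OPEN")
               .withColumn("age_days", F.datediff(F.current_date(), F.col("opened_date"))))

# Equivalent of PROC MEANS / PROC SUMMARY: aggregate by a class variable.
summary = (open_claims
           .groupBy("region")
           .agg(F.count("*").alias("open_claims"),
                F.avg("age_days").alias("avg_age_days")))

summary.show()
```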

Posted 1 week ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Department: Technology / AI Innovation
Reports To: AI/ML Lead or Head of Data Science
Location: Pune

Role Summary: We are looking for an experienced AI/ML & Generative AI Developer to join our growing AI innovation team. You will play a critical role in building advanced machine learning models, Generative AI applications, and LLM-powered solutions. This role demands deep technical expertise, creative problem-solving, and a strong understanding of AI workflows and scalable cloud-based deployments.

Key Responsibilities:
- Design, develop, and deploy AI/ML models and Generative AI applications for diverse enterprise use cases.
- Implement, fine-tune, and integrate Large Language Models (LLMs) using frameworks like LangChain, LlamaIndex, and RAG pipelines.
- Build Agentic AI systems with multi-step reasoning and autonomous decision-making capabilities.
- Create secure and scalable data ingestion pipelines for structured and unstructured data, enabling indexing, vector search, and advanced retrieval.
- Collaborate with cross-functional teams (Data Engineers, Product Managers, Architects) to operationalize AI solutions.
- Build CI/CD pipelines for ML/GenAI workflows and support end-to-end MLOps practices.
- Leverage Azure and Databricks for training, serving, and monitoring AI models at scale.

Required Qualifications & Skills (Mandatory):
- 4+ years of hands-on experience in AI/ML development, including Generative AI applications.
- Expertise in RAG, LLMs, and Agentic AI implementations.
- Strong knowledge of LangChain, LlamaIndex, or similar LLM orchestration frameworks.
- Proficiency in Python and key ML/DL libraries: TensorFlow, PyTorch, Scikit-learn.
- Solid foundation in Deep Learning, Natural Language Processing (NLP), and Transformer-based architectures.
- Experience in building data ingestion, indexing, and retrieval pipelines for real-world enterprise scenarios.
- Hands-on experience with Azure cloud services and Databricks.
- Proven experience designing CI/CD pipelines and working with MLOps tools like MLflow, DVC, or Kubeflow.

Soft Skills:
- Strong problem-solving and critical thinking ability.
- Excellent communication skills, with the ability to explain complex AI concepts to non-technical stakeholders.
- Strong collaboration and teamwork in agile, cross-functional environments.
- Growth mindset with curiosity to explore and learn emerging technologies.

Preferred Qualifications:
- Familiarity with vector databases: FAISS, Pinecone, Weaviate.
- Experience with AutoGPT, CrewAI, or similar agent frameworks.
- Exposure to Azure OpenAI, Cognitive Search, or Databricks ML tools.
- Understanding of AI security, responsible AI, and model governance.

Key Relationships:
- Internal: Data Scientists, Data Engineers, DevOps Engineers, Product Managers, Solution Architects.
- External: AI/ML platform vendors, cloud service providers (Microsoft Azure), third-party data providers.

Role Dimensions:
- Contribute to AI strategy, architecture, and reusable AI components.
- Support multiple projects simultaneously in a fast-paced agile environment.
- Mentor junior engineers and contribute to best practices and standards.

Success Measures (KPIs):
- % reduction in model development time using reusable pipelines.
- Successful deployment of GenAI/LLM features in production.
- Accuracy, latency, and relevance improvements in AI search and retrieval.
- Uptime and scalability of deployed AI models.
- Integration of responsible AI and compliance practices.

Competency Framework Alignment: Technical Excellence in AI/ML/GenAI, Cloud Engineering & DevOps Enablement, Innovation & Continuous Improvement, Business Value Orientation, Agile Execution & Ownership, Cross-functional Collaboration

(ref:hirist.tech)
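The RAG pipelines this role describes can be sketched in a framework-agnostic way: embed documents, retrieve the most similar ones for a question, and assemble a grounded prompt. In the sketch below the embedder is a deterministic stand-in (random unit vectors) only so the flow runs end to end; its similarity scores are not semantically meaningful, and a real build would use an embedding model plus a vector store via LangChain or LlamaIndex, as the posting names.

```python
import numpy as np

# Stand-in embedder: deterministic random vectors keyed on the text. A real pipeline
# would call an embedding model (e.g. an Azure OpenAI or Databricks-served endpoint).
def embed(text: str) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=384)
    return v / np.linalg.norm(v)

documents = [
    "Invoices are approved by the finance controller within 5 business days.",
    "Employees can claim travel expenses through the HR portal.",
    "Databricks jobs are deployed through the CI/CD pipeline on merge to main.",
]
doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(question: str, k: int = 2) -> list[str]:
    q = embed(question)
    scores = doc_vectors @ q          # cosine similarity (vectors are unit-normalised)
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]

question = "How do we release Databricks jobs?"
context = "\n".join(retrieve(question))

# The assembled prompt would then be sent to the LLM of choice.
prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"
print(prompt)
```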

Posted 1 week ago

Apply

3.0 years

0 Lacs

Greater Chennai Area

On-site

Responsibilities:
- Participate in requirements definition, analysis, and the design of logical and physical data models for Dimensional Data Models, NoSQL, or Graph Data Models.
- Lead data discovery discussions with Business in JAD sessions and map the business requirements to logical and physical data modeling solutions.
- Conduct data model reviews with project team members.
- Capture technical metadata through data modeling tools.
- Ensure database designs efficiently support BI and end-user requirements.
- Drive continual improvement and enhancement of existing systems.
- Collaborate with ETL/Data Engineering teams to create data process pipelines for data ingestion and transformation.
- Collaborate with Data Architects for data model management, documentation, and version control.
- Maintain expertise and proficiency in the various application areas.
- Maintain current knowledge of industry trends and standards.

Required Skills:
- Strong data analysis and data profiling skills.
- Strong conceptual, logical, and physical data modeling for VLDB Data Warehouse and Graph DB.
- Hands-on experience with modeling tools such as ERWIN or another industry-standard tool.
- Fluent in both normalized and dimensional model disciplines and techniques.
- Minimum of 3 years' experience in Oracle Database.
- Hands-on experience with Oracle SQL, PL/SQL, or Cypher.
- Exposure to Databricks Spark, Delta technologies, Informatica ETL, or other industry-leading tools.
- Good knowledge of or experience with AWS Redshift and Graph DB design and management.
- Working knowledge of AWS Cloud technologies, mainly the VPC, EC2, S3, DMS, and Glue services.
- Bachelor's degree in Software Engineering, Computer Science, or Information Systems (or equivalent experience).
- Excellent verbal and written communication skills, including the ability to describe complex technical concepts in relatable terms.
- Ability to manage and prioritize multiple workstreams with confidence in making decisions about prioritization.
- Data-driven mentality; self-motivated, responsible, conscientious, and detail-oriented.
- Effective oral and written communication skills.
- Ability to learn and maintain knowledge of multiple application areas.
- Understanding of industry best practices pertaining to Quality Assurance concepts.

Education and Experience:
- Bachelor's degree in Computer Science, Engineering, or relevant fields with 3+ years of experience as a Data and Solution Architect supporting Enterprise Data and Integration Applications, or a similar role for large-scale enterprise solutions.
- 3+ years of experience in Big Data infrastructure and tuning experience in the Lakehouse Data Ecosystem, including Data Lake, Data Warehouses, and Graph DB.
- AWS Solutions Architect Professional level certification.
- Extensive experience in data analysis on critical enterprise systems like SAP, E1, Mainframe ERP, SFDC, Adobe Platform, and eCommerce systems.

Skill Set Required: GCP, Data Modelling (OLTP, OLAP), indexing, DBSchema, CloudSQL, BigQuery
- Data Modeller: hands-on data modelling for OLTP and OLAP systems.
- In-depth knowledge of conceptual, logical, and physical data modelling.
- Strong understanding of indexing, partitioning, and data sharding, with practical experience of having applied them.
- Strong understanding of the variables impacting database performance for near-real-time reporting and application interaction.
- Working experience with at least one data modelling tool, preferably DBSchema.
- Functional knowledge of the mutual fund industry will be a plus.
- Good understanding of GCP databases like AlloyDB, CloudSQL, and BigQuery.

(ref:hirist.tech)

Posted 1 week ago

Apply

0 years

0 Lacs

Greater Kolkata Area

On-site

Job Title: Azure Databricks Administrator

Job Summary: We are seeking a skilled and proactive Azure Databricks Administrator to manage, monitor, and support our Databricks environment on Microsoft Azure. The ideal candidate will be responsible for system integrations, access control, user support, and CI/CD pipeline administration, ensuring a secure, efficient, and scalable data platform.

Key Responsibilities:
- System Integration & Monitoring: Build, monitor, and support integrations between Databricks and enterprise systems such as LogRhythm, ServiceNow, and AppDynamics. Ensure seamless data flow and alerting mechanisms across integrated platforms.
- Security & Access Management: Administer user and group access to the Databricks environment. Implement and enforce security policies and role-based access controls (RBAC).
- User Support & Enablement: Provide initial system support and act as a point of contact (POC) for Databricks users. Assist users with onboarding, workspace setup, and troubleshooting.
- Vendor Coordination: Engage with Databricks vendor support for issue resolution and platform optimization.
- Platform Monitoring & Maintenance: Monitor Databricks usage, performance, and cost. Ensure the platform is up to date with the latest patches and features.
- Database & CI/CD Administration: Manage Databricks database configurations and performance tuning. Administer and maintain CI/CD pipelines for Databricks notebooks and jobs.

Required Skills & Qualifications:
- Proven experience administering Azure Databricks in a production environment.
- Strong understanding of Azure services, data engineering workflows, and DevOps practices.
- Experience with integration tools and platforms like LogRhythm, ServiceNow, and AppDynamics.
- Proficiency in CI/CD tools (e.g., Azure DevOps, GitHub Actions).
- Familiarity with Databricks REST APIs, Terraform, or ARM templates is a plus.
- Excellent problem-solving, communication, and documentation skills.

Preferred Certifications:
- Microsoft Certified: Azure Administrator Associate
- Databricks Certification
- Azure Data Engineer Associate

(ref:hirist.tech)
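As a small illustration of the scripted monitoring mentioned above, the sketch below lists clusters through the Databricks Clusters REST API (the 2.0 list endpoint) using plain requests. The workspace URL is a placeholder, the token is expected from the environment rather than source control, and the exact endpoint and response fields should be checked against current Databricks documentation before relying on them.

```python
import os
import requests

# Workspace URL and token are placeholders; in practice they come from a secret scope
# or environment configuration, never from source control.
HOST = os.environ.get("DATABRICKS_HOST", "https://adb-1234567890123456.7.azuredatabricks.net")
TOKEN = os.environ["DATABRICKS_TOKEN"]

headers = {"Authorization": f"Bearer {TOKEN}"}

# Clusters API 2.0: list clusters and report their current state for a simple usage check.
resp = requests.get(f"{HOST}/api/2.0/clusters/list", headers=headers, timeout=30)
resp.raise_for_status()

for cluster in resp.json().get("clusters", []):
    print(cluster["cluster_name"], cluster["state"], cluster.get("node_type_id", ""))
```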

Posted 1 week ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Join us as a Data Engineer at Barclays, where you'll take part in the evolution of our digital landscape, driving innovation and excellence. You'll harness cutting-edge technology to revolutionize our digital offerings, ensuring unparalleled customer experiences. As part of the team, you will deliver the technology stack, using strong analytical and problem-solving skills to understand the business requirements and deliver quality solutions. You'll be working on complex technical problems that require detailed analysis, in conjunction with fellow engineers, business analysts, and business stakeholders.

To be successful as a Data Engineer you should have experience with:
- Strong experience with ETL tools such as Ab Initio, Glue, PySpark, Python, DBT, and Databricks, and various required AWS services/products.
- Advanced SQL knowledge across multiple database platforms (Teradata, Hadoop, SQL, etc.).
- Experience with data warehousing concepts and dimensional modeling.
- Proficiency in scripting languages (Python, Perl, shell scripting) for automation.
- Knowledge of big data technologies (Hadoop, Spark, Hive) is highly desirable.
- Bachelor's degree in Computer Science, Information Systems, or a related field.
- Experience in ETL development and data integration.
- Proven track record of implementing complex ETL solutions in enterprise environments.
- Experience with data quality monitoring and implementing data governance practices.
- Knowledge of cloud data platforms (AWS, Azure, GCP) and their ETL services.

Some other highly valued skills include:
- Strong analytical and problem-solving skills.
- Ability to work with large and complex datasets.
- Excellent documentation skills.
- Attention to detail and commitment to data quality.
- Ability to work independently and as part of a team.
- Strong communication skills to explain technical concepts to non-technical stakeholders.

You may be assessed on key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking, and digital and technology, as well as job-specific technical skills. This role is based in Pune.

Purpose of the role: To build and maintain the systems that collect, store, process, and analyse data, such as data pipelines, data warehouses, and data lakes, ensuring that all data is accurate, accessible, and secure.

Accountabilities:
- Build and maintain data architecture pipelines that enable the transfer and processing of durable, complete, and consistent data.
- Design and implement data warehouses and data lakes that manage the appropriate data volumes and velocity and adhere to the required security measures.
- Develop processing and analysis algorithms fit for the intended data complexity and volumes.
- Collaborate with data scientists to build and deploy machine learning models.

Analyst Expectations: To perform prescribed activities in a timely manner and to a high standard, consistently driving continuous improvement. Requires in-depth technical knowledge and experience in the assigned area of expertise and a thorough understanding of the underlying principles and concepts within that area. They lead and supervise a team, guiding and supporting professional development, allocating work requirements and coordinating team resources.
If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: L – Listen and be authentic, E – Energise and inspire, A – Align across the enterprise, D – Develop others. OR for an individual contributor, they develop technical expertise in work area, acting as an advisor where appropriate. Will have an impact on the work of related teams within the area. Partner with other functions and business areas. Takes responsibility for end results of a team’s operational processing and activities. Escalate breaches of policies / procedure appropriately. Take responsibility for embedding new policies/ procedures adopted due to risk mitigation. Advise and influence decision making within own area of expertise. Take ownership for managing risk and strengthening controls in relation to the work you own or contribute to. Deliver your work and areas of responsibility in line with relevant rules, regulation and codes of conduct. Maintain and continually build an understanding of how own sub-function integrates with function, alongside knowledge of the organisations products, services and processes within the function. Demonstrate understanding of how areas coordinate and contribute to the achievement of the objectives of the organisation sub-function. Make evaluative judgements based on the analysis of factual information, paying attention to detail. Resolve problems by identifying and selecting solutions through the application of acquired technical experience and will be guided by precedents. Guide and persuade team members and communicate complex / sensitive information. Act as contact point for stakeholders outside of the immediate function, while building a network of contacts outside team and external to the organisation. All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship – our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset – to Empower, Challenge and Drive – the operating manual for how we behave.
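
As an illustration of the ETL and data-quality work described above, here is a minimal PySpark sketch, not Barclays' actual pipeline; the bucket paths, column names, and the 5% threshold are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_trades_etl").getOrCreate()

# Hypothetical raw landing zone in Parquet format.
raw = spark.read.parquet("s3://example-bucket/raw/trades/")

# Basic transformation: standardise column names and derive a trade_date partition column.
cleaned = (
    raw.withColumnRenamed("TradeID", "trade_id")
       .withColumn("trade_date", F.to_date("trade_timestamp"))
       .filter(F.col("notional").isNotNull())
)

# Simple data-quality gate: fail the run if too many rows were dropped (5% threshold is an assumption).
dropped_ratio = 1 - cleaned.count() / max(raw.count(), 1)
if dropped_ratio > 0.05:
    raise ValueError(f"Data quality check failed: {dropped_ratio:.1%} of rows dropped")

# Write the curated, partitioned output to a hypothetical target path.
cleaned.write.mode("overwrite").partitionBy("trade_date").parquet("s3://example-bucket/curated/trades/")
```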

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

pune, maharashtra

On-site

As a talented Big Data Engineer, you will be responsible for developing and managing our company's Big Data solutions. Your role will involve designing and implementing Big Data tools and frameworks, implementing ELT processes, collaborating with development teams, building cloud platforms, and maintaining the production system. To excel in this position, you should possess in-depth knowledge of Hadoop technologies, exceptional project management skills, and advanced problem-solving abilities. A successful Big Data Engineer understands the company's needs and establishes scalable data solutions to meet current and future requirements effectively. Your responsibilities will include meeting with managers to assess the company's Big Data requirements and developing solutions on AWS utilizing tools like Apache Spark, Databricks, Delta Tables, EMR, Athena, Glue, and Hadoop. You will also be involved in loading disparate data sets, conducting pre-processing using services such as Athena, Glue, and Spark, collaborating with software research and development teams, building cloud platforms for application development, and ensuring the maintenance of production systems. The requirements for this role include a minimum of 5 years of experience as a Big Data Engineer, proficiency in Python and PySpark, and expertise in Hadoop, Apache Spark, Databricks, Delta Tables, and AWS data analytics services. Additionally, you should have extensive experience with Delta Tables and the JSON and Parquet file formats, familiarity with AWS data analytics services such as Athena, Glue, Redshift, and EMR, and knowledge of data warehousing, NoSQL, and RDBMS databases. Good communication skills and the ability to solve complex data processing and transformation-related problems are essential for success in this role.
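
For illustration, a minimal sketch of the kind of Delta Lake upsert such a role involves, using PySpark with the delta package; the S3 paths and the customer_id merge key are hypothetical.

```python
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

# On Databricks, `spark` is provided; locally you would configure the delta-spark extensions.
spark = SparkSession.builder.appName("customer_upsert").getOrCreate()

# Hypothetical incremental batch read from a JSON landing area.
updates = spark.read.json("s3://example-bucket/landing/customers/")

# Assumption: an existing Delta table at this path, keyed on customer_id.
target = DeltaTable.forPath(spark, "s3://example-bucket/delta/customers")

# Upsert (MERGE) the incoming batch into the Delta table.
(
    target.alias("t")
    .merge(updates.alias("u"), "t.customer_id = u.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```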

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

karnataka

On-site

You will be joining the newly formed AI, Data & Analytics team as a Software Engineer. The team's primary focus is to drive increased value from the data InvestCloud captures for a smarter financial future, with a particular emphasis on enhanced intelligence. Your role will involve working on various projects within the AI Enablement team, ensuring the development of fit-for-purpose modern capabilities to meet the team's key goals. As a Software Engineer with a keen interest in Data Science, Machine Learning, and Generative AI models, you are expected to have a proven track record of delivering business impact and client satisfaction. Your responsibilities will include building efficient and scalable platforms for ML and AI models in production, integrating AI and ML solutions into the InvestCloud product suite, and collaborating with both local and global teams. You may also engage in building products as needed.

Key Responsibilities:
- Developing and maintaining robust APIs, microservices, and data pipelines supporting data science and AI workloads.
- Designing and implementing efficient database schemas and data storage solutions.
- Building and optimizing ETL processes for data ingestion, transformation, and delivery.
- Creating scalable infrastructure for model training, evaluation, and deployment.
- Collaborating with data scientists to implement and productionize machine learning models.
- Ensuring high performance, reliability, and security of backend systems.
- Participating in code reviews, contributing to engineering best practices, and troubleshooting complex technical issues.
- Writing clean, maintainable, and well-documented code.

Required Skills:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 5+ years of experience in backend development.
- Strong proficiency in Python and Java; working proficiency in JavaScript.
- Experience with RESTful API design and implementation, modern API frameworks, and database systems (both SQL and NoSQL).
- Experience with containerization using Docker, cloud platforms (AWS, Azure, or GCP), version control systems (Git), CI/CD pipelines, and DevOps practices.
- Experience coding with an AI assistant and mentoring junior engineers.

Preferred Skills:
- Working experience with Jakarta EE, FastAPI, and Angular.
- Experience working with Snowflake and/or Databricks.

What Do We Offer: Join a diverse and international cross-functional team including data scientists, product managers, business analysts, and software engineers. As a key member, you will implement cutting-edge technology to enhance the advisor and client experience.

Location and Travel: The ideal candidate will be expected to work from the office.

Compensation: The salary range will be determined based on experience, skills, and geographic location.

Equal Opportunity Employer: InvestCloud is committed to fostering an inclusive workplace and welcomes applicants from all backgrounds.
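
By way of example, a minimal FastAPI sketch of a model-scoring endpoint of the kind this role would productionize; the feature names and the binary classifier trained on synthetic data are placeholders rather than an actual InvestCloud service.

```python
from fastapi import FastAPI
from pydantic import BaseModel
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

app = FastAPI(title="scoring-service")

# Stand-in model trained on synthetic data; a real service would load a persisted model artifact.
X, y = make_classification(n_samples=1_000, n_features=3, n_informative=3, n_redundant=0, random_state=0)
model = LogisticRegression().fit(X, y)

class Features(BaseModel):
    age: float
    balance: float
    tenure_years: float

@app.post("/predict")
def predict(features: Features) -> dict:
    # Feature order must match the training pipeline; this ordering is an assumption.
    row = [[features.age, features.balance, features.tenure_years]]
    return {"score": float(model.predict_proba(row)[0][1])}

# Run locally with: uvicorn app:app --reload   (assumes this file is saved as app.py)
```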

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

ahmedabad, gujarat

On-site

You are a highly skilled and experienced Solution Architect specializing in Data & AI, with over 8 years of experience. In this role, you will lead and drive data-driven transformation within the organization. Your main responsibility is to design and implement cutting-edge AI and data solutions that align with business objectives. Collaborating closely with cross-functional teams, you will create scalable, high-performance architectures utilizing modern technologies in data engineering, machine learning, and cloud computing.

Your key responsibilities include architecting and designing end-to-end data and AI solutions to address business challenges and optimize decision-making. You will define and implement best practices for data architecture, data governance, and AI model deployment. Collaborating with data engineers, data scientists, and business stakeholders, you will deliver scalable and high-impact AI-driven applications. Additionally, you will lead the integration of AI models with enterprise applications, ensuring seamless deployment and operational efficiency. It is also part of your role to evaluate and recommend the latest technologies in data platforms, AI frameworks, and cloud-based analytics solutions while ensuring data security, compliance, and ethical AI implementation. Guiding teams in adopting advanced analytics, AI, and machine learning models for predictive insights and automation is also a crucial aspect. Your role requires driving innovation by identifying new opportunities for AI and data-driven improvements within the organization.

To excel in this position, you must possess over 8 years of experience in designing and implementing data and AI solutions. Strong expertise in cloud platforms such as AWS, Azure, or Google Cloud is essential. Hands-on experience with big data technologies such as Spark, Databricks, and Snowflake is required, along with proficiency in ML frameworks such as TensorFlow, PyTorch, and scikit-learn. A deep understanding of data modeling, ETL processes, and data governance frameworks is necessary, as is experience in MLOps, model deployment, and automation. Proficiency in generative AI frameworks and strong programming skills in Python, SQL, and Java/Scala (preferred) are essential. Familiarity with containerization and orchestration (Docker, Kubernetes) is a plus. Excellent problem-solving skills and the ability to work in a fast-paced environment are crucial, as are strong communication and leadership skills and the ability to drive technical conversations.

Preferred qualifications for this role include certifications in cloud architecture, data engineering, or AI/ML; experience with generative AI; a background in developing AI-driven analytics solutions for enterprises; experience with Graph RAG, building AI agents, and multi-agent systems; and additional certifications in AI/GenAI. Proven leadership skills are expected in this position.

This role offers perks such as flexible timings, a five-day working week, a healthy environment, celebrations, opportunities for learning and growth, community building, and medical insurance benefits.
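
To ground the MLOps responsibilities above, here is a minimal MLflow tracking sketch, not the organization's actual workflow; the synthetic dataset, run name, and metric choice are illustrative.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic data stands in for a real feature table.
X, y = make_classification(n_samples=5_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

with mlflow.start_run(run_name="churn_rf_baseline"):  # run name is illustrative
    model = RandomForestClassifier(n_estimators=200, random_state=42)
    model.fit(X_train, y_train)

    # Track parameters, metrics, and the fitted model for later deployment or registry promotion.
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("test_auc", auc)
    mlflow.sklearn.log_model(model, artifact_path="model")
```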

Posted 1 week ago

Apply

13.0 - 17.0 years

0 Lacs

pune, maharashtra

On-site

YASH Technologies is a leading technology integrator specializing in helping clients reimagine operating models, enhance competitiveness, optimize costs, foster exceptional stakeholder experiences, and drive business transformation. We are a cluster of the brightest stars working with cutting-edge technologies, with a purpose anchored in bringing real positive changes in an increasingly virtual world. We are looking to hire Azure Professionals with 13-16 years of experience in the following areas:
- Producing and delivering architecture-related solutions including applications and tools, configurations, security, and setup procedures.
- Designing and implementing end-to-end data pipelines using Azure and Databricks.
- Designing and optimizing clusters and workflows.
- Implementing data governance and data quality management.
- Collaborating with Enterprise Architects and other stakeholders to prepare architecture proposals addressing business demands and requirements.
- Developing solution prototypes and proofs of concept, and conducting pilot projects to support business initiatives.
- Staying abreast of data-related technology trends and recommending technologies to advance business software application capabilities.
- Participating in architecture design reviews and optimization to progress the development of the integrated model.
- Defining and driving adoption of architecture patterns and techniques for scalable and reusable architectures.
- Assessing and reusing architecture patterns in the solution architecture and developing scalable multi-use architecture.

The ideal candidate should have expertise in Solution Design, Emerging Technologies, Architecture Tools and Frameworks, Architecture Concepts and Principles, Technology/Product Knowledge, Proposal Design, Requirement Gathering and Analysis, and Proposal Defence.

At YASH, we empower you to create a career in an inclusive team environment with career-oriented skilling models and collective intelligence aided by technology for continuous learning, unlearning, and relearning at a rapid pace and scale. Our Hyperlearning workplace is grounded in flexible work arrangements, free spirit, emotional positivity, agile self-determination, trust, transparency, open collaboration, support for business goals realization, stable employment, a great atmosphere, and an ethical corporate culture.
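
As a sketch of the pipeline and workflow design work described above, the following Python snippet creates a scheduled Databricks job through the Jobs 2.1 REST API; the notebook path, cluster specification, runtime version, and cron schedule are assumptions to adapt to the target workspace.

```python
import os
import requests

HOST = os.environ["DATABRICKS_HOST"]    # e.g. https://adb-1234567890.12.azuredatabricks.net
TOKEN = os.environ["DATABRICKS_TOKEN"]

# Hypothetical single-task workflow running a notebook on a fresh job cluster every night at 02:00 UTC.
job_spec = {
    "name": "nightly_sales_pipeline",
    "tasks": [
        {
            "task_key": "bronze_to_silver",
            "notebook_task": {"notebook_path": "/Repos/data/notebooks/bronze_to_silver"},
            "new_cluster": {
                "spark_version": "14.3.x-scala2.12",   # assumption: verify available runtime versions
                "node_type_id": "Standard_DS3_v2",
                "num_workers": 2,
            },
        }
    ],
    "schedule": {"quartz_cron_expression": "0 0 2 * * ?", "timezone_id": "UTC"},
}

resp = requests.post(
    f"{HOST}/api/2.1/jobs/create",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=job_spec,
    timeout=30,
)
resp.raise_for_status()
print("Created job id:", resp.json()["job_id"])
```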

Posted 1 week ago

Apply

