
5719 Databricks Jobs - Page 45

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About the company: We are on a mission to make it viable to extract value from all the data in the world, so humanity can capture every insight, cure, invention, and opportunity. Traditional processing solutions based on CPUs and today's software architectures cannot handle the complexity and volume of data, which doubles every two years, with unstructured data now accounting for 90% of all data created. The surge of GenAI and its dependence on huge volumes of unstructured data is compounding the processing challenge. We are creating a new data processing standard for the accelerated computing era to overcome these performance, cost, and scalability limitations.

About the role: You will play a critical role in ensuring our customers achieve meaningful outcomes using our advanced analytics engine built for Lakehouse platforms. You will partner closely with clients to understand their goals, drive adoption of our product, and deliver long-term value through ongoing engagement, technical enablement, and strategic guidance. This is a hands-on role that requires a deep understanding of data engineering, analytics, Lakehouse architectures (e.g., Delta Lake, Apache Iceberg), and related cloud technologies. You will serve as a trusted advisor to your customers, helping them integrate, optimize, and expand their use of our product within their data ecosystem.

Key Responsibilities:
● Serve as the primary post-sales point of contact and advocate for customers, ensuring a seamless onboarding and implementation process.
● Build deep relationships with technical and business stakeholders to align product capabilities with customer goals.
● Drive product adoption and usage, delivering measurable business outcomes that demonstrate the value of our product.
● Proactively identify risks to customer success and create strategies to mitigate churn while maximizing growth opportunities.
● Partner with sales, product, and engineering teams to communicate customer feedback and drive continuous product improvement.
● Guide clients in running and integrating our product alongside other analytics engines (e.g., Databricks, Snowflake, Dremio) within Lakehouse environments.
● Conduct product walkthroughs, knowledge-transfer sessions, and enablement workshops for technical and non-technical teams.
● Support the development of customer reference architectures and case studies based on successful deployments.
● Collaborate with system integrators, consultants, and other partners to ensure joint success in complex enterprise environments.
● Mentor junior team members and contribute to the overall growth of the Customer Success team.
● Handle escalations and production issues.
● Funnel improvements from bugs encountered in customer environments into bug fixes, features, and supportability enhancements in the product.

Qualifications:
● Proven experience in customer-facing roles such as Customer Success, Solutions Engineering, Technical Account Management, or Post-Sales Consulting.
● Strong technical acumen in Lakehouse platforms, data engineering, analytics, SQL, and AI/ML.
● Hands-on expertise in public cloud platforms (AWS, Azure, GCP) and common data tools (Spark, Python, Scala, Java).
● Ability to clearly communicate complex technical concepts to both technical and business audiences.
● Experience with onboarding, driving adoption, and demonstrating ROI in enterprise software environments.
● Excellent collaboration and stakeholder management skills.
● Bachelor's degree in Computer Science, Engineering, or a related field; Master's preferred.

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum experience: 3 years
Educational qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and contribute to the overall data strategy, ensuring that the data architecture aligns with business objectives and supports analytical needs.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation and contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Develop and optimize data pipelines to enhance data processing efficiency.
- Collaborate with stakeholders to gather requirements and translate them into technical specifications.

Professional & Technical Skills:
- Must-have: proficiency in Databricks Unified Data Analytics Platform.
- Strong understanding of data modeling and database design principles.
- Experience with ETL tools and data integration techniques.
- Familiarity with cloud platforms and services related to data storage and processing.
- Knowledge of programming languages such as Python or Scala for data manipulation.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Chennai office.
- 15 years of full-time education is required.

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: Business Agility
Minimum experience: 3 years
Educational qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs. Additionally, you will monitor and optimize data workflows to enhance performance and reliability, ensuring that data is accessible and actionable for stakeholders.

Roles & Responsibilities:
- A Databricks resource with Azure cloud experience is needed.
- Expected to perform independently and become an SME.
- Active participation and contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Collaborate with data architects and analysts to design scalable data solutions.
- Implement best practices for data governance and security throughout the data lifecycle.

Professional & Technical Skills:
- Must-have: proficiency in Databricks Unified Data Analytics Platform.
- Good-to-have: experience with Business Agility.
- Strong understanding of data modeling and database design principles.
- Experience with data integration tools and ETL processes.
- Familiarity with cloud platforms and services related to data storage and processing.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Pune office.
- 15 years of full-time education is required.

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Bhubaneswar, Odisha, India

On-site

Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum experience: 3 years
Educational qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs, while also troubleshooting any issues that arise in the data flow and processing stages. Your role will be pivotal in ensuring that data is accessible, reliable, and ready for analysis, contributing to informed decision-making within the organization.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation and contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Assist in the design and implementation of data architecture and data models.
- Monitor and optimize data pipelines for performance and reliability.

Professional & Technical Skills:
- Must-have: proficiency in Databricks Unified Data Analytics Platform.
- Good-to-have: experience with Apache Spark and data lake architectures.
- Strong understanding of ETL processes and data integration techniques.
- Familiarity with data quality frameworks and data governance practices.
- Experience with cloud platforms such as AWS or Azure.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Bhubaneswar office.
- 15 years of full-time education is required.

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Bhubaneswar, Odisha, India

On-site

Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: Business Agility
Minimum experience: 3 years
Educational qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs. Additionally, you will monitor and optimize data workflows to enhance performance and reliability, ensuring that data is accessible and actionable for stakeholders.

Roles & Responsibilities:
- A Databricks resource with Azure cloud experience is needed.
- Expected to perform independently and become an SME.
- Active participation and contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Collaborate with data architects and analysts to design scalable data solutions.
- Implement best practices for data governance and security throughout the data lifecycle.

Professional & Technical Skills:
- Must-have: proficiency in Databricks Unified Data Analytics Platform.
- Good-to-have: experience with Business Agility.
- Strong understanding of data modeling and database design principles.
- Experience with data integration tools and ETL processes.
- Familiarity with cloud platforms and services related to data storage and processing.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Pune office.
- 15 years of full-time education is required.

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Project Role: Software Development Engineer
Project Role Description: Analyze, design, code, and test multiple components of application code across one or more clients. Perform maintenance, enhancements, and/or development work.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum experience: 3 years
Educational qualification: 15 years of full-time education

Summary: As a Software Development Engineer, you will engage in a dynamic work environment where you will analyze, design, code, and test various components of application code for multiple clients. Your day will involve collaborating with team members to ensure the successful implementation of software solutions, performing maintenance and enhancements, and contributing to the overall development process. You will be responsible for delivering high-quality code while adhering to best practices and project timelines, ensuring that the applications effectively meet the needs of the clients.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation and contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Collaborate with cross-functional teams to gather requirements and translate them into technical specifications.
- Conduct code reviews to ensure adherence to coding standards and best practices.

Professional & Technical Skills:
- Must-have: proficiency in Databricks Unified Data Analytics Platform.
- Strong understanding of data processing and analytics workflows.
- Experience with cloud-based data solutions and architectures.
- Familiarity with programming languages such as Python or Scala.
- Knowledge of data integration techniques and ETL processes.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform.
- This position is based in Hyderabad.
- 15 years of full-time education is required.

Posted 2 weeks ago

Apply

0 years

0 Lacs

India

On-site

Job Description: We are seeking a highly skilled Data Engineer with expertise in Azure data engineering platforms to join our team. The ideal candidate should have strong technical skills in data modelling, SQL, and Azure technologies, with a deep understanding of end-to-end data flows from ingestion to exploitation.

Key Responsibilities:
● Build and optimize complex SQL queries for data pipelines and analytics.
● Design and implement efficient data models using robust data structures and algorithms.
● Manage and optimize data storage using Azure Databricks and Azure Data Lake Storage.
● Orchestrate complex pipelines using Azure Data Factory.
● Demonstrate a thorough understanding of the Azure data engineering platform, including platform architecture and workflows.
● Collaborate with cross-functional teams to ensure seamless data flow and processing.
● Use Python for advanced data engineering tasks and automation.

Requirements:
● SQL expertise: proficiency in building and optimizing queries.
● Python and PySpark: hands-on experience with Python programming and PySpark.
● Data modeling: experience with data structures, algorithms, and end-to-end data flows.
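As a hedged illustration of the pipeline work this posting describes, reading raw files from Azure Data Lake Storage, applying basic quality rules, and writing Delta output, here is a minimal PySpark sketch. The storage paths, container names, and column names are invented for the example and are not from the posting.

```python
# Illustrative sketch only: a minimal Databricks/ADLS pipeline of the kind this
# role describes. All paths and column names below are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-ingest").getOrCreate()

# Read raw CSV files landed in ADLS (abfss path is a placeholder).
raw = (spark.read
       .option("header", True)
       .option("inferSchema", True)
       .csv("abfss://raw@examplelake.dfs.core.windows.net/orders/"))

# Basic quality rules and a derived column.
clean = (raw
         .dropDuplicates(["order_id"])
         .filter(F.col("amount") > 0)
         .withColumn("ingest_date", F.current_date()))

# Write as Delta, partitioned for downstream analytics.
(clean.write
 .format("delta")
 .mode("overwrite")
 .partitionBy("ingest_date")
 .save("abfss://curated@examplelake.dfs.core.windows.net/orders/"))
```

In practice a pipeline like this would be wrapped in a Databricks job or notebook and orchestrated by an Azure Data Factory pipeline, as the responsibilities above describe.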

Posted 2 weeks ago

Apply

0 years

0 Lacs

India

On-site

Description: Senior Full Stack Developer

Position Overview: We are seeking a highly skilled Full Stack Developer to join our dynamic team. The ideal candidate will possess a robust understanding of both front-end and back-end development, with a strong emphasis on creating and maintaining scalable, high-performance applications. This role requires a professional who can seamlessly integrate into our team, contributing to the development of innovative solutions that drive our trading operations.

To be eligible for this role, you must be able to demonstrate:
• Strong communication and interpersonal skills
• Ability to collaborate effectively with internal and external customers
• Innovative and analytical thinking
• Ability to manage workload under time pressure and changing priorities
• Adaptability and willingness to learn new technologies and methodologies

Required Skills and Qualifications:
Technical proficiency:
• Expert front-end React framework and back-end Python experience
• Proficiency in front-end technologies such as HTML and CSS, plus strong back-end development skills in Python or similar languages
• Proficiency with Git and CI/CD
• Develop and maintain web applications using modern frameworks and technologies
• Help maintain code quality, organization, and automation
• Experience with relational database management systems
• Familiarity with cloud services (AWS, Azure, or Google Cloud; primarily Azure)
Industry knowledge:
• Experience in the oil and gas industry, particularly within trading operations, is highly desirable
• Understanding of market data, trading systems, and financial instruments related to oil and gas

Preferred Qualifications:
• Certifications in relevant technologies or methodologies
• Proven experience in building, operating, and supporting robust and performant databases and data pipelines
• Experience with Databricks and Snowflake
• Solid understanding of web performance optimization, security, and best practices

Posted 2 weeks ago

Apply

2.0 years

0 Lacs

Kochi, Kerala, India

On-site

Location: Kochi, Coimbatore, Trivandrum
Must-have skills: Python/Scala, PySpark/PyTorch
Good-to-have skills: Redshift

Job Summary: You'll capture user requirements and translate them into business and digitally enabled solutions across a range of industries.

Roles and Responsibilities:
- Designing, developing, optimizing, and maintaining data pipelines that adhere to ETL principles and business goals.
- Solving complex data problems to deliver insights that help our business achieve its goals.
- Sourcing data (structured and unstructured) from various touchpoints, and formatting and organizing it into an analyzable format.
- Creating data products for analytics team members to improve productivity.
- Calling AI services such as vision and translation to generate outcomes for use in further steps along the pipeline.
- Fostering a culture of sharing, re-use, design, and operational efficiency of data and analytical solutions.
- Preparing data to create a unified database, and building tracking solutions that ensure data quality.
- Creating production-grade analytical assets deployed using CI/CD principles.

Professional and Technical Skills:
- Expert in at least two of Python, Scala, PySpark, PyTorch, and JavaScript.
- Extensive experience in data analysis in big-data (Apache Spark) environments, data libraries (e.g., Pandas, SciPy, TensorFlow, Keras), and SQL, with 2-3 years of hands-on experience in these technologies.
- Experience with one of the many BI tools, such as Tableau, Power BI, or Looker.
- Good working knowledge of key concepts in data analytics, such as dimensional modeling, ETL, reporting/dashboarding, data governance, dealing with structured and unstructured data, and corresponding infrastructure needs.
- Extensive experience with Microsoft Azure (ADF, Function Apps, ADLS, Azure SQL), AWS (Lambda, Glue, S3), Databricks analytical platforms/tools, and Snowflake Cloud Data Warehouse.

Additional Information:
- Experience working with cloud data warehouses such as Redshift or Synapse.
- Certification in one of the following or equivalent: AWS Certified Data Analytics - Specialty; Microsoft Certified Azure Data Scientist Associate; SnowPro Core; Databricks Data Engineering.

Experience: 3.5-5 years of experience is required.
Educational Qualification: Graduation.
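The responsibilities above include calling AI services (vision, translation, etc.) mid-pipeline. The sketch below shows one common shape for that in PySpark, with a stubbed translate_text() standing in for a real service client; the function, column names, and data are invented for illustration only.

```python
# Hedged sketch: enriching a Spark DataFrame via an external AI service call.
# translate_text() is a placeholder for a real vision/translation SDK call.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StringType

spark = SparkSession.builder.appName("ai-enrichment").getOrCreate()

def translate_text(text: str) -> str:
    # Placeholder for a managed translation-service request.
    return text.upper()  # stand-in transformation

translate_udf = F.udf(translate_text, StringType())

reviews = spark.createDataFrame(
    [("r1", "great product"), ("r2", "llego tarde")],
    ["review_id", "review_text"],
)

# Apply the (stubbed) AI service row by row; a production pipeline would batch
# requests and handle rate limits and retries.
enriched = reviews.withColumn("review_en", translate_udf("review_text"))
enriched.show()
```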

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Indore, Madhya Pradesh, India

On-site

Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum experience: 3 years
Educational qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and contribute to the overall data strategy, ensuring that the data architecture aligns with business objectives and supports analytical needs.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation and contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Assist in the design and implementation of data architecture to support data initiatives.
- Monitor and optimize data pipelines for performance and reliability.

Professional & Technical Skills:
- Must-have: proficiency in Databricks Unified Data Analytics Platform.
- Strong understanding of data modeling and database design principles.
- Experience with ETL tools and processes.
- Familiarity with cloud platforms and services related to data storage and processing.
- Knowledge of programming languages such as Python or Scala for data manipulation.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Indore office.
- 15 years of full-time education is required.

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Indore, Madhya Pradesh, India

On-site

Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: Business Agility
Minimum experience: 3 years
Educational qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs. Additionally, you will monitor and optimize data workflows to enhance performance and reliability, ensuring that data is accessible and actionable for stakeholders.

Roles & Responsibilities:
- A Databricks resource with Azure cloud experience is needed.
- Expected to perform independently and become an SME.
- Active participation and contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Collaborate with data architects and analysts to design scalable data solutions.
- Implement best practices for data governance and security throughout the data lifecycle.

Professional & Technical Skills:
- Must-have: proficiency in Databricks Unified Data Analytics Platform.
- Good-to-have: experience with Business Agility.
- Strong understanding of data modeling and database design principles.
- Experience with data integration tools and ETL processes.
- Familiarity with cloud platforms and services related to data storage and processing.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Pune office.
- 15 years of full-time education is required.

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: Business Agility
Minimum experience: 3 years
Educational qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs, while also troubleshooting any issues that arise in the data flow and processing stages. Your role will be pivotal in enhancing the overall data infrastructure and supporting data-driven decision-making within the organization.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation and contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Collaborate with stakeholders to gather and analyze data requirements.
- Design and implement robust data pipelines to ensure efficient data flow.

Professional & Technical Skills:
- Must-have: proficiency in Databricks Unified Data Analytics Platform.
- Good-to-have: experience with Business Agility.
- Strong understanding of data modeling and database design principles.
- Experience with ETL tools and data integration techniques.
- Familiarity with cloud platforms and data storage solutions.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Noida office.
- 15 years of full-time education is required.

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum experience: 3 years
Educational qualification: 15 years of full-time education

Summary: As a Data Platform Engineer, you will assist with the data platform blueprint and design, collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models. You will play a crucial role in shaping the data platform components.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation and contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Collaborate with cross-functional teams to design and implement data platform solutions.
- Develop and maintain data pipelines for efficient data processing.
- Implement data security and privacy measures to protect sensitive information.
- Optimize data storage and retrieval processes for improved performance.
- Conduct regular data platform performance monitoring and troubleshooting.

Professional & Technical Skills:
- Must-have: proficiency in Databricks Unified Data Analytics Platform.
- Strong understanding of cloud-based data platforms.
- Experience with data modeling and database design.
- Hands-on experience with ETL processes and tools.
- Knowledge of data governance and compliance standards.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Ahmedabad office.
- 15 years of full-time education is required.

Posted 2 weeks ago

Apply

0.0 - 2.0 years

5 - 12 Lacs

Pune, Maharashtra

On-site

Company name: PibyThree Consulting Services Pvt Ltd
Location: Baner, Pune
Start date: ASAP

Job Description: We are seeking an experienced Data Engineer to join our team. The ideal candidate will have hands-on experience with Azure Data Factory (ADF), Snowflake, and data warehousing concepts. The Data Engineer will be responsible for designing, developing, and maintaining large-scale data pipelines and architectures.

Key Responsibilities:
- Design, develop, and deploy data pipelines using Azure Data Factory (ADF).
- Work with Snowflake to design and implement data warehousing solutions.
- Collaborate with cross-functional teams to identify and prioritize data requirements.
- Develop and maintain data architectures, data models, and data governance policies.
- Ensure data quality, security, and compliance with regulatory requirements.
- Optimize data pipelines for performance, scalability, and reliability.
- Troubleshoot data pipeline issues and implement fixes.
- Stay up to date with industry trends and emerging technologies in data engineering.

Requirements:
- 4+ years of experience in data engineering, with a focus on cloud-based data platforms (Azure preferred).
- 2+ years of hands-on experience with Azure Data Factory (ADF).
- 1+ year of experience working with Snowflake.
- Strong understanding of data warehousing concepts, data modeling, and data governance.
- Experience with data pipeline orchestration tools such as Apache Airflow or Azure Databricks.
- Proficiency in programming languages such as Python, Java, or C#.
- Experience with cloud-based data storage solutions such as Azure Blob Storage or Amazon S3.
- Strong problem-solving skills and attention to detail.

Job Type: Full-time
Pay: ₹500,000.00 - ₹1,200,000.00 per year
Schedule: Day shift
Ability to commute/relocate: Pune, Maharashtra: reliably commute or plan to relocate before starting work (Preferred)
Education: Bachelor's (Preferred)
Experience: total work: 4 years (Preferred); PySpark: 2 years (Required); Azure Data Factory: 2 years (Required); Databricks: 2 years (Required)
Work Location: In person
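As a minimal sketch of the Snowflake loading step such a pipeline might include, the snippet below uses the official snowflake-connector-python package. The account, credentials, stage, and table names are hypothetical; in practice credentials would come from a secrets manager, and orchestration would typically sit in ADF or Airflow as the posting describes.

```python
# Hedged sketch: bulk-loading staged files into Snowflake. All identifiers
# (account, warehouse, database, stage, table) are made up for illustration.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",   # hypothetical
    user="ETL_USER",             # hypothetical
    password="***",              # fetch from a secret store, never hard-code
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="STAGING",
)

try:
    cur = conn.cursor()
    # COPY INTO bulk-loads files from a named stage into a target table.
    cur.execute(
        "COPY INTO STAGING.ORDERS FROM @ORDERS_STAGE "
        "FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)"
    )
    print(cur.fetchall())  # one result row per loaded file
finally:
    conn.close()
```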

Posted 2 weeks ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Role: Data Engineer
Skillsets (must have):
- Azure Databricks, Azure Data Factory
- Programming: Python, PySpark, T-SQL, PL/SQL
- Agile software development life cycle and Scrum methodology
- Healthcare domain knowledge

Posted 2 weeks ago

Apply

5.0 - 8.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Senior Software Engineer

About the Role
Grade Level (for internal use): 10

The Team: We seek a highly motivated, enthusiastic, and skilled engineer for our Industry Data Solutions Team. We strive to deliver sector-specific, data-rich, and hyper-targeted solutions for evolving business needs. You will be expected to participate in the design review process, write high-quality code, and work with a dedicated team of QA Analysts and Infrastructure Teams.

The Impact: The Enterprise Data Organization is seeking a Software Developer for software design, development, and maintenance of data processing applications. This person will be part of a development team that manages and supports the internal and external applications supporting the business portfolio. The role expects a candidate to handle data processing and big data application development. Our teams are made up of people who learn how to work effectively together while working with the larger group of developers on our platform.

What's In It For You:
- Opportunity to contribute to the development of a world-class Platform Engineering team.
- Engage in a highly technical, hands-on role designed to elevate team capabilities and foster continuous skill enhancement.
- Be part of a fast-paced, agile environment that processes massive volumes of data, ideal for advancing your software development and data engineering expertise while working with a modern tech stack.
- Contribute to the development and support of Tier-1, business-critical applications that are central to operations.
- Gain exposure to and work with cutting-edge technologies, including AWS Cloud and Databricks.
- Grow your career within a globally distributed team, with clear opportunities for advancement and skill development.

Responsibilities:
- Design and develop applications, components, and common services based on development models, languages, and tools, including unit testing, performance testing, monitoring, and implementation.
- Support business and technology teams as necessary during design, development, and delivery to ensure scalable and robust solutions.
- Build data-intensive applications and services to support and enhance fundamental financials in appropriate technologies (C#, .NET Core, Databricks, Python, Scala, NiFi, SQL).
- Build data models, achieve performance tuning, and apply data architecture concepts.
- Develop applications adhering to secure coding practices and industry-standard coding guidelines, ensuring compliance with security best practices (e.g., OWASP) and internal governance policies.
- Implement and maintain CI/CD pipelines to streamline build, test, and deployment processes; develop comprehensive unit test cases and ensure code quality.
- Provide operations support to resolve issues proactively and with utmost urgency.
- Effectively manage time and multiple tasks.
- Communicate effectively, especially in writing, with the business and other technical groups.

Basic Qualifications:
- Bachelor's/Master's degree in Computer Science, Information Systems, or equivalent.
- Minimum 5 to 8 years of strong hands-on development experience in C#, .NET Core, cloud-native development, and MS SQL Server backend development.
- Proficiency with object-oriented programming.
- Advanced SQL programming skills.
- A skill set in Databricks and Scala technologies is highly recommended.
- Nice to have: knowledge of Grafana, Kibana, big data, GitHub, EMR, Terraform, and AI/ML.
- Understanding of database performance tuning in large datasets.
- Ability to manage multiple priorities efficiently and effectively within specific timeframes.
- Excellent logical, analytical, and communication skills, with strong verbal and writing proficiencies.
- Knowledge of fundamentals data or the financial industry is highly preferred.
- Experience in conducting application design and code reviews.
- Proficiency with the following technologies: object-oriented programming; programming languages (C#, .NET Core); cloud computing; database systems (SQL, MS SQL). Nice to have: NoSQL (Databricks, Scala, Python) and scripting (Bash, Scala, Perl, PowerShell).

Preferred Qualifications:
- Hands-on experience with cloud computing platforms, including AWS, Azure, or Google Cloud Platform (GCP).
- Proficient in working with Snowflake and Databricks for cloud-based data analytics and processing.

Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments, and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all: from finding new ways to measure sustainability, to analyzing energy transition across the supply chain, to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people; that's why we provide everything you, and your career, need to thrive at S&P Global. Our benefits include:
- Health & Wellness: health care coverage designed for the mind and body.
- Flexible Downtime: generous time off helps keep you energized for your time on.
- Continuous Learning: access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: it's not just about you; S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: from retail discounts to referral incentive awards, small perks can make a big difference.

For more information on benefits by country, visit: https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domain, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, "pre-employment training," or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity.

Equal Opportunity Employer: S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.1 - Middle Professional Tier I (EEO Job Group), SWP Priority – Ratings (Strategic Workforce Planning)

Job ID: 317835
Posted On: 2025-07-09
Location: Ahmedabad, Gujarat, India
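The responsibilities above call for comprehensive unit test cases alongside CI/CD. As a hedged illustration in Python (one of the languages the posting lists), here is the shape such tests might take with pytest; normalize_ticker() is a hypothetical helper invented for the example, not part of the role's actual codebase.

```python
# Illustrative pytest sketch of unit-testing a small data-processing helper.
import pytest

def normalize_ticker(raw: str) -> str:
    """Uppercase a ticker symbol and strip whitespace; reject empty input."""
    cleaned = raw.strip().upper()
    if not cleaned:
        raise ValueError("empty ticker")
    return cleaned

@pytest.mark.parametrize("raw,expected", [
    (" ibm ", "IBM"),
    ("spgi", "SPGI"),
])
def test_normalize_ticker(raw, expected):
    assert normalize_ticker(raw) == expected

def test_normalize_ticker_rejects_empty():
    with pytest.raises(ValueError):
        normalize_ticker("   ")
```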

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

India

Remote

Job Description: As an Azure Databricks Data Engineer, your responsibilities include:
- Technical requirements gathering and development of functional specifications.
- Design, develop, and maintain scalable data pipelines and ETL processes using Azure Databricks, Data Factory, and other Azure services.
- Implement and optimize Spark jobs, data transformations, and data processing workflows in Databricks.
- Develop and integrate custom machine learning models using Azure Machine Learning, MLflow, and other relevant libraries.
- Leverage Azure DevOps and CI/CD best practices to automate the deployment and management of data pipelines and infrastructure.
- Conduct troubleshooting on data models.
- Work with Agile multicultural teams in Asia, the EU, Canada, and the USA.

Profile Requirements: For this position of Azure Databricks Data Engineer, we are looking for someone with:
- (Required) At least 4 years of experience in developing and maintaining data pipelines using Azure Databricks, Azure Data Factory, and Spark.
- (Required) Fluent English communication and soft skills.
- (Required) Knowledge of and experience in CI/CD tooling such as Terraform, ARM, and Bicep scripts.
- (Required) Solid technical skills in Python and SQL.
- (Required) Familiarity with machine learning concepts, tools, and libraries (e.g., TensorFlow, PyTorch, Scikit-learn, MLflow).
- (Required) Strong problem-solving, communication, and analytical skills.
- Willingness to learn and expand technical skills in other areas.

Adastra Culture Manifesto:

Servant Leadership: Managers are servants to employees. Managers are elected to make sure that employees have all the processes, resources, and information they need to provide services to clients in an efficient manner. Any manager, up to the CEO, is visible and reachable for a chat regardless of title. Decisions are taken with consent in an agile manner and executed efficiently with no overdue time. We accept that wrong decisions happen, and we appreciate the learning before we adjust the process for continuous improvement. Employees serve clients. Employees listen attentively to client needs and collaborate internally as a team to cater to them. Managers and employees work together to get things done and are accountable to each other. Corporate KPIs are transparently reviewed at monthly company events with all employees.

Performance-Driven Compensation: We recognize and accept that some of us are more ambitious, more gifted, or more hard-working. We also recognize that some of us look for a stable income and less hassle at a different stage of their careers. There is a place for everyone; we embrace and need this diversity. Grades in our company are not based on the number of years of experience; they are value-driven, based on everyone's ability to deliver their work to clients independently and/or lead others. There is no anniversary or annual bonus; we distribute bonuses on a monthly recurring basis as instant gratification for performance, and this bonus is practically unlimited. There is no annual indexation of salaries; you may be upgraded several times within the year, or not at all, based on your own pace of progress, ambitions, relevant skill set, and recognition by clients.

Work-Life Integration: We challenge the notion of work-life balance; we embrace the notion of work-life integration instead. This philosophy looks at our lives as a single whole in which we serve ourselves, our families, and our clients in an integrated manner. We encourage 100% flexible working hours where you arrange your day. This means you are free when you have little work, but it also means extra effort if you are behind schedule. Working on a Western project also means nobody bothers you during the whole day, but you may have to jump on a scrum call in the evening to talk to your team overseas. We appreciate time and we minimize time spent on Adastra meetings. We are also a remote-first company. While we have our collaboration offices and social events, we encourage people to work 100% remotely from home whenever possible. This means saving time and money on the commute, staying home with the elderly and little ones, and not missing the special moments in life. It also means you can work from any of our other offices in Europe, North America, or Australia, or move to a place with a lower cost of living without impacting your income. We trust you by default until you fail our trust.

Global Diversity: Adastra Thailand is an international organization. We hire globally and our biggest partners and clients are in Europe, North America, and Australia. We work on teams with individuals from different cultures, ethnicities, sexual preferences, political views, and religions. We have zero tolerance for anyone who does not pay respect to others or is abusive in any way. We speak different languages to one another, but we speak English when we are together or with clients. Our company is a safe space where communication is encouraged but boundaries regarding sensitive topics are respected. We accept and converge together to serve our teams and clients and ultimately have a good time at work.

Lifelong Learning: On annual average we invest 25% of our working hours in personal development and upskilling outside project work, regardless of seniority or role. We feature more than 400 courses on our Training Repo, and we continue to actively purchase or tailor hands-on content. We certify people at our expense. We like to say we are technology agnostic; we learn the principles of data management and apply them to different use cases and technology stacks. We believe that the juniors today are the seniors tomorrow; we treat everyone with respect and mentor them into the roles they deserve. We encourage seniors to give back to the IT community through leadership and mentorship. On your last day with us we may give you an open-dated job offer so that you feel welcome to return home, as others did before you.

More about Adastra: visit adastracorp.com or contact us at HRIN@adastragrp.com.
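The responsibilities above mention integrating machine learning models with MLflow. The following is a minimal, illustrative sketch of that tracking workflow; the experiment name, model, and metric are assumptions made for the example, not details from the posting.

```python
# Hedged MLflow sketch: train a toy model and record it in a tracking run.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=5, random_state=42)

mlflow.set_experiment("churn-model-demo")  # hypothetical experiment name
with mlflow.start_run():
    model = LogisticRegression(max_iter=200).fit(X, y)
    mlflow.log_param("max_iter", 200)                       # hyperparameter
    mlflow.log_metric("train_accuracy", model.score(X, y))  # metric
    mlflow.sklearn.log_model(model, "model")                # serialized model
```

On Databricks, a run like this lands in the workspace's managed MLflow tracking server, which is one common way the Databricks-plus-Azure-ML workflow described above is wired together.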

Posted 2 weeks ago

Apply

0.0 - 8.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

About the Role: Grade Level (for internal use): 10 The Team: We seek a highly motivated, enthusiastic, and skilled engineer for our Industry Data Solutions Team. We strive to deliver sector-specific, data-rich, and hyper-targeted solutions for evolving business needs. You will be expected to participate in the design review process, write high-quality code, and work with a dedicated team of QA Analysts and Infrastructure Teams. The Impact: Enterprise Data Organization is seeking a Software Developer to create software design, development, and maintenance for data processing applications. This person would be part of a development team that manages and supports the internal & external applications that is supporting the business portfolio. This role expects a candidate to handle any data processing, big data application development. We have teams made up of people that learn how to work effectively together while working with the larger group of developers on our platform. What’s in it for you: Opportunity to contribute to the development of a world-class Platform Engineering team . Engage in a highly technical, hands-on role designed to elevate team capabilities and foster continuous skill enhancement. Be part of a fast-paced, agile environment that processes massive volumes of data—ideal for advancing your software development and data engineering expertise while working with a modern tech stack. Contribute to the development and support of Tier-1, business-critical applications that are central to operations. Gain exposure to and work with cutting-edge technologies, including AWS Cloud and Databricks . Grow your career within a globally distributed team , with clear opportunities for advancement and skill development. Responsibilities: Design and develop applications, components, and common services based on development models, languages, and tools, including unit testing, performance testing, and monitoring, and implementation Support business and technology teams as necessary during design, development, and delivery to ensure scalable and robust solutions Build data-intensive applications and services to support and enhance fundamental financials in appropriate technologies.( C#, .Net Core, Databricsk ,Python, Scala, NIFI , SQL) Build data modeling, achieve performance tuning and apply data architecture concepts Develop applications adhering to secure coding practices and industry-standard coding guidelines, ensuring compliance with security best practices (e.g., OWASP) and internal governance policies. Implement and maintain CI/CD pipelines to streamline build, test, and deployment processes; develop comprehensive unit test cases and ensure code quality Provide operations support to resolve issues proactively and with utmost urgency Effectively manage time and multiple tasks Communicate effectively, especially in writing, with the business and other technical groups Basic Qualifications: Bachelor's/Master’s Degree in Computer Science, Information Systems or equivalent. Minimum 5 to 8 years of strong hand-development experience in C#, .Net Core, Cloud Native, MS SQL Server backend development. Proficiency with Object Oriented Programming. Nice to have knowledge in Grafana, Kibana, Big data, Git Hub, EMR, Terraforms, AI-ML Advanced SQL programming skills Highly recommended skillset in Databricks , Scala technologies. 
Understanding of database performance tuning in large datasets Ability to manage multiple priorities efficiently and effectively within specific timeframes Excellent logical, analytical and communication skills are essential, with strong verbal and writing proficiencies Knowledge of Fundamentals, or financial industry highly preferred. Experience in conducting application design and code reviews Proficiency with following technologies: Object-oriented programming Programing Languages (C#, .Net Core) Cloud Computing Database systems (SQL, MS SQL) Nice to have: No-SQL (Databricks, Scala, python), Scripting (Bash, Scala, Perl, Powershell) Preferred Qualifications: Hands-on experience with cloud computing platforms including AWS , Azure , or Google Cloud Platform (GCP) . Proficient in working with Snowflake and Databricks for cloud-based data analytics and processing. What’s In It For You? Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values: Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include: Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. 
Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com . S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here . ----------------------------------------------------------- Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf ----------------------------------------------------------- 20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.1 - Middle Professional Tier I (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning) Job ID: 317835 Posted On: 2025-07-09 Location: Ahmedabad, Gujarat, India

Posted 2 weeks ago

Apply

0.0 - 8.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

Senior Software Engineer Ahmedabad, India; Hyderabad, India; Islamabad, Pakistan Information Technology 317835 Job Description About The Role: Grade Level (for internal use): 10 The Team: We seek a highly motivated, enthusiastic, and skilled engineer for our Industry Data Solutions Team. We strive to deliver sector-specific, data-rich, and hyper-targeted solutions for evolving business needs. You will be expected to participate in the design review process, write high-quality code, and work with a dedicated team of QA Analysts and Infrastructure Teams. The Impact: Enterprise Data Organization is seeking a Software Developer to create software design, development, and maintenance for data processing applications. This person would be part of a development team that manages and supports the internal & external applications that is supporting the business portfolio. This role expects a candidate to handle any data processing, big data application development. We have teams made up of people that learn how to work effectively together while working with the larger group of developers on our platform. What’s in it for you: Opportunity to contribute to the development of a world-class Platform Engineering team . Engage in a highly technical, hands-on role designed to elevate team capabilities and foster continuous skill enhancement. Be part of a fast-paced, agile environment that processes massive volumes of data—ideal for advancing your software development and data engineering expertise while working with a modern tech stack. Contribute to the development and support of Tier-1, business-critical applications that are central to operations. Gain exposure to and work with cutting-edge technologies, including AWS Cloud and Databricks . Grow your career within a globally distributed team , with clear opportunities for advancement and skill development. Responsibilities: Design and develop applications, components, and common services based on development models, languages, and tools, including unit testing, performance testing, and monitoring, and implementation Support business and technology teams as necessary during design, development, and delivery to ensure scalable and robust solutions Build data-intensive applications and services to support and enhance fundamental financials in appropriate technologies.( C#, .Net Core, Databricsk ,Python, Scala, NIFI , SQL) Build data modeling, achieve performance tuning and apply data architecture concepts Develop applications adhering to secure coding practices and industry-standard coding guidelines, ensuring compliance with security best practices (e.g., OWASP) and internal governance policies. Implement and maintain CI/CD pipelines to streamline build, test, and deployment processes; develop comprehensive unit test cases and ensure code quality Provide operations support to resolve issues proactively and with utmost urgency Effectively manage time and multiple tasks Communicate effectively, especially in writing, with the business and other technical groups Basic Qualifications: Bachelor's/Master’s Degree in Computer Science, Information Systems or equivalent. Minimum 5 to 8 years of strong hand-development experience in C#, .Net Core, Cloud Native, MS SQL Server backend development. Proficiency with Object Oriented Programming. Nice to have knowledge in Grafana, Kibana, Big data, Git Hub, EMR, Terraforms, AI-ML Advanced SQL programming skills Highly recommended skillset in Databricks , Scala technologies. 
- Understanding of database performance tuning for large datasets.
- Ability to manage multiple priorities efficiently and effectively within specific timeframes.
- Excellent logical, analytical, and communication skills, with strong verbal and written proficiency.
- Knowledge of Fundamentals or the financial industry is highly preferred.
- Experience in conducting application design and code reviews.
- Proficiency with the following technologies: object-oriented programming, programming languages (C#, .NET Core), cloud computing, database systems (SQL, MS SQL).
- Nice to have: NoSQL (Databricks, Scala, Python), scripting (Bash, Scala, Perl, PowerShell).

Preferred Qualifications:
- Hands-on experience with cloud computing platforms, including AWS, Azure, or Google Cloud Platform (GCP).
- Proficient in working with Snowflake and Databricks for cloud-based data analytics and processing.

What’s In It For You?

Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments, and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all: from finding new ways to measure sustainability, to analyzing the energy transition across the supply chain, to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you, and your career, need to thrive at S&P Global. Our benefits include:
- Health & Wellness: health care coverage designed for the mind and body.
- Flexible Downtime: generous time off helps keep you energized for your time on.
- Continuous Learning: access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: it's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: from retail discounts to referral incentive awards, small perks can make a big difference.

For more information on benefits by country, visit: https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domain, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training”, or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here.

Equal Opportunity Employer: S&P Global is an equal opportunity employer, and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

20 - Professional (EEO-2 Job Categories - United States of America), IFTECH202.1 - Middle Professional Tier I (EEO Job Group), SWP Priority – Ratings (Strategic Workforce Planning)

Job ID: 317835 | Posted On: 2025-07-09 | Location: Ahmedabad, Gujarat, India

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka

On-site

At Takeda, we are guided by our purpose of creating better health for people and a brighter future for the world. Every corporate function plays a role in making sure we, as a Takeda team, can discover and deliver life-transforming treatments, guided by our commitment to patients, our people, and the planet. People join Takeda because they share in our purpose. And they stay because we're committed to an inclusive, safe, and empowering work environment that offers exceptional experiences and opportunities for everyone to pursue their own ambitions.

Job ID: R0157404 | Date posted: 07/09/2025 | Location: Bengaluru, Karnataka

I understand that my employment application process with Takeda will commence and that the information I provide in my application will be processed in line with Takeda’s Privacy Notice and Terms of Use. I further attest that all information I submit in my employment application is true to the best of my knowledge.

Job Description

The Future Begins Here. At Takeda, we are leading digital evolution and global transformation. By building innovative solutions and future-ready capabilities, we are meeting the needs of patients, our people, and the planet. Bengaluru, India’s epicenter of innovation, has been selected to be home to Takeda’s recently launched Innovation Capability Center. We invite you to join our digital transformation journey. In this role, you will have the opportunity to boost your skills and become the heart of an innovative engine that is contributing to global impact and improvement.

At Takeda’s ICC We Unite in Diversity: Takeda is committed to creating an inclusive and collaborative workplace, where individuals are recognized for their backgrounds and the abilities they bring to our company. We are continuously improving our collaborators’ journey at Takeda, and we welcome applications from all qualified candidates. Here, you will feel welcomed, respected, and valued as an important contributor to our diverse team.

The Opportunity: As a Salesforce Data Cloud, Einstein, and Agentforce Software Engineer, you'll combine technical skills in Salesforce Data Cloud and Agentforce (generative AI) with business acumen and development qualities to build solutions that enable transformative capabilities within Takeda. You will support the Platform Owner in onboarding and facilitating the effective utilization of these platforms. You will play a key role in developing and building GenAI use cases leveraging LLMs (OpenAI, ChatGPT-4.0, etc.), Salesforce Data Cloud, Einstein, and Agentforce.

Responsibilities:
- Act as the technical developer for GenAI commercial and medical use cases.
- Clarify technical details in backlog items; own the technical design and build of backlog items.
- Set standards and best practices within the capability, and encourage adherence to them.
- Proactively address risks and issues together with the team, and raise them to program/portfolio oversight as required.
- Ensure an appropriate level of technical documentation.
- Cross-pollinate applicable standards and practices between teams.
- Discuss business requirements with stakeholders.
- Proactively suggest new features or tools.
- Align and communicate with other service lines on overarching practices and approaches.
- Ensure re-usability of features within business functions.
- Educate service managers on new upcoming features and capabilities.
Skills and Qualifications

Required:
- Bachelor's degree in computer science or a related field, or equivalent experience.
- 5+ years of relevant professional experience in using and managing machine learning models, large language models, Salesforce Data Cloud, and Agentforce.
- Strong understanding of large language models, Databricks, Salesforce Data Cloud, and Agentforce.
- Strong understanding of prompt engineering, NLP techniques, and deep learning models for accurate responses to user queries.
- Experience in code deployment to AWS and Salesforce Agentforce, using AWS services such as Lambda, API Gateway, EC2, S3, DynamoDB, and SageMaker (a minimal handler sketch follows below).
- Experience in Agile development using methods like Scrum and/or Kanban.
- Excellent oral and written communication skills, business acumen, and enterprise knowledge.
- Strong experience in designing or implementing solutions or products, preferably with experience in quality improvement.
- Understanding of design thinking, with the ability to explain it and convince stakeholders.
- Ability to work with virtual/agile teams in different locations, aligning and adapting to different work, culture, and communication styles.
- Proficiency in English, in both verbal and written communication, is a must.

What Takeda Can Offer You: Takeda is certified as a Top Employer, not only in India but also globally. No investment we make pays greater dividends than taking good care of our people. At Takeda, you take the lead in building and shaping your own career. Joining the ICC in Bengaluru will give you access to high-end technology, continuous training, and a diverse and inclusive network of colleagues who will support your career growth.

Benefits: It is our priority to provide competitive compensation and a benefits package that bridges your personal life with your professional career. Among our benefits are:
- Competitive salary plus performance annual bonus
- Flexible work environment, including hybrid working
- Comprehensive healthcare insurance plans for self, spouse, and children
- Group term life insurance and group accident insurance programs
- Health & wellness programs
- Employee Assistance Program
- 3 days of leave every year for voluntary service, in addition to humanitarian leave
- Broad variety of learning platforms
- Diversity, equity, and inclusion programs
- Reimbursements: home internet and mobile phone
- Employee referral program
- Leave: paternity leave (4 weeks), maternity leave (up to 26 weeks), bereavement leave (5 days)

About the ICC at Takeda: Takeda is leading a digital revolution. We're not just transforming our company; we're improving the lives of millions of patients who rely on our medicines every day. As an organization, we are committed to our cloud-driven business transformation and believe the ICCs are the catalysts of change for our global organization.

#Li-Hybrid
Locations: IND - Bengaluru | Worker Type: Employee | Worker Sub-Type: Regular | Time Type: Full time
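To illustrate the AWS Lambda deployment skill listed above, here is a minimal sketch of a Python handler behind an API Gateway proxy integration. The event shape, response format, and the placeholder for the downstream LLM/Data Cloud call are assumptions for illustration, not details from the posting.

```python
# Minimal AWS Lambda handler sketch for an LLM-backed endpoint.
# Assumes an API Gateway proxy integration; the downstream call is stubbed out.
import json


def lambda_handler(event, context):
    """Entry point invoked by API Gateway (assumed proxy event shape)."""
    body = json.loads(event.get("body") or "{}")
    question = body.get("question", "")

    # In a real deployment the LLM / Salesforce Data Cloud call would go here;
    # the sketch echoes a canned response to stay self-contained.
    answer = f"Received question: {question!r}"

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"answer": answer}),
    }
```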

Posted 2 weeks ago

Apply

3.0 - 5.0 years

0 Lacs

Faridabad, Haryana, India

On-site

Position: Senior AI/ML Engineer - NLP/Python
Experience: 3 to 5 years
Location: Mohan Corporate Office (Work from Office Only)
Job Type: Full-Time
Salary: To be discussed during the interview

Key Responsibilities:
- Design, develop, and deploy AI/ML models for real-world applications.
- Work with NLP, deep learning, and traditional ML algorithms to solve complex business problems.
- Develop end-to-end ML pipelines, including data preprocessing, feature engineering, model training, and deployment.
- Optimize model performance using hyperparameter tuning and model evaluation techniques.
- Implement AI-driven solutions using TensorFlow, PyTorch, Scikit-learn, OpenAI APIs, Hugging Face, and similar frameworks.
- Work with structured and unstructured data, performing data wrangling, transformation, and feature extraction.
- Deploy models in cloud environments (AWS, Azure, or GCP) using SageMaker, Vertex AI, or Azure ML.
- Collaborate with cross-functional teams to integrate AI models into production systems.
- Ensure scalability, performance, and efficiency of AI/ML solutions.
- Stay updated with emerging AI trends and technologies to drive innovation.

Required Skills:
- Strong experience in machine learning, deep learning, NLP, and AI model development.
- Ability to implement Retrieval-Augmented Generation (RAG) using vector databases (a minimal sketch of the retrieval step follows below).
- Proficiency in Python, TensorFlow, PyTorch, Scikit-learn, and OpenAI GPT models.
- Expertise in NLP techniques (Word2Vec, BERT, transformers, LLMs, text classification).
- Hands-on experience with computer vision (CNNs, OpenCV, YOLO, custom object detection models).
- Solid understanding of ML model deployment and MLOps (Docker, Kubernetes, CI/CD for ML models).
- Experience working with cloud platforms (AWS, Azure, GCP) for AI/ML model deployment.
- Strong knowledge of SQL, NoSQL databases, and big data processing tools (PySpark, Databricks, Hadoop, Kafka, etc.).
- Familiarity with API development using Django, Flask, or FastAPI for AI solutions.
- Strong problem-solving, analytical, and communication skills.

Preferred Skills:
- Experience with AI-powered chatbots and OpenAI API integration.
- Exposure to LLMs (GPT, LLaMA, Falcon, etc.) for real-world applications.
- Hands-on experience with generative AI models.
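For context on the RAG requirement above, here is a minimal sketch of the retrieval step, assuming the sentence-transformers and faiss-cpu libraries are installed. The embedding model name, documents, and query are illustrative placeholders, not part of the posting.

```python
# Minimal retrieval step of a RAG pipeline (illustrative sketch).
# Assumes `pip install sentence-transformers faiss-cpu`.
import faiss
from sentence_transformers import SentenceTransformer

documents = [
    "Databricks unifies data engineering and ML on a lakehouse.",
    "FAISS performs fast nearest-neighbour search over dense vectors.",
    "RAG grounds LLM answers in retrieved context documents.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")          # embedding model (assumed)
embeddings = model.encode(documents).astype("float32")   # shape: (n_docs, dim)

index = faiss.IndexFlatL2(embeddings.shape[1])           # exact L2 vector index
index.add(embeddings)

query = "How does retrieval-augmented generation work?"
query_vec = model.encode([query]).astype("float32")
_, ids = index.search(query_vec, 2)                      # top-2 nearest documents

context = "\n".join(documents[i] for i in ids[0])
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)  # this grounded prompt would then be sent to the LLM of choice
```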

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Key Responsibilities
- Develop, maintain, and optimize robust data ingestion and ETL/ELT pipelines using Python on the Databricks platform (a minimal ingestion sketch follows below).
- Work collaboratively with the architecture team to align implementation with long-term data strategies.
- Leverage Delta Lake for data storage optimization and performance tuning.
- Utilize Prometheus for monitoring the health and performance of data pipelines and services.
- Ensure quality, reliability, and scalability of the data platform across multiple domains and use cases.
- Drive automation and reusability in data processing pipelines to reduce manual overhead.
- Participate in code reviews, peer programming, and continuous improvement efforts.
- Ensure adherence to best practices in coding, security, and documentation.
- Engage in team standups and collaborative sessions in the office 3 days a week.

Required Skills:
- 5+ years of professional experience in Python development, primarily in data engineering use cases.
- Strong experience working with Databricks and Delta Lake.
- Proficient in SQL, data manipulation, and working with large-scale structured and semi-structured datasets.
- Hands-on experience with monitoring tools like Prometheus.
- Solid understanding of modern data architecture, especially around data lakes and distributed processing.
- Deep understanding of scalable system design, performance optimization, and best coding practices.
- Familiarity with CI/CD workflows and Git-based version control.

Preferred Skills:
- Experience with Spark (especially PySpark).
- Exposure to other monitoring tools like Grafana, the ELK stack, etc.
- Knowledge of cloud platforms such as AWS, Azure, or GCP.
- Understanding of data governance and security practices in enterprise environments.

Soft Skills:
- Strong analytical and problem-solving skills.
- Excellent verbal and written communication.
- Ability to collaborate effectively across globally distributed teams.
- Proactive, self-motivated, and innovative mindset.

(ref:hirist.tech)
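As a concrete picture of the pipeline work described above, here is a minimal sketch of a batch ingestion job writing to Delta Lake with PySpark. The landing and bronze paths, schema, and quality rule are illustrative assumptions; on Databricks, Delta support and the `spark` session come preconfigured.

```python
# Minimal batch ingestion into a Delta table (illustrative sketch).
# Paths and column names are placeholders; assumes a Databricks runtime
# (or a local Spark session configured with the delta-spark package).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("ingest-demo").getOrCreate()

raw = (
    spark.read
    .option("header", "true")
    .csv("/mnt/landing/orders/")           # hypothetical landing path
)

cleaned = (
    raw.dropDuplicates(["order_id"])        # basic data-quality rule (assumed key)
       .withColumn("ingested_at", F.current_timestamp())
)

(
    cleaned.write
    .format("delta")                        # Delta Lake storage format
    .mode("append")
    .save("/mnt/bronze/orders")             # hypothetical bronze path
)
```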

Posted 2 weeks ago

Apply

3.0 - 5.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Key Responsibilities

Data Engineering & Development:
- Develop and maintain cloud-native data pipelines and analytics solutions on Azure and Databricks.
- Implement ETL/ELT processes using the Databricks Delta Lakehouse architecture.
- Build real-time structured streaming pipelines using PySpark (a minimal sketch follows below).

Database & Modelling:
- Perform data modeling and schema design for relational databases, preferably PostgreSQL.
- Write optimized and reusable SQL queries for data extraction, transformation, and analysis.

Automation & CI/CD Integration:
- Design and deploy CI/CD pipelines to support automated builds, testing, and deployment of data pipelines.

Security & Compliance:
- Ensure all solutions adhere to security-first development principles and organizational data governance standards.

Qualifications:
- Education: Bachelor's degree in Computer Science, Information Technology, or a related field.
- Experience: 3 to 5 years of experience in data engineering with a focus on Azure and Databricks.
- Core skills: Databricks, Python, PySpark, SQL.

Preferred Skills:
- Experience in data modeling with PostgreSQL.
- Understanding of Delta Lake architecture.
- Exposure to real-time processing with Structured Streaming.
- Familiarity with secure coding practices and compliance in data environments.
- Strong problem-solving and analytical abilities.
- Ability to work in a fast-paced, collaborative environment.

(ref:hirist.tech)
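To make the streaming responsibility above concrete, here is a minimal PySpark Structured Streaming sketch reading from Kafka and appending to Delta. The broker address, topic, and paths are illustrative assumptions; the Kafka source additionally requires the spark-sql-kafka connector package on the cluster.

```python
# Minimal Structured Streaming job: Kafka -> Delta (illustrative sketch).
# Broker, topic, and paths are placeholders; requires the spark-sql-kafka package.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("stream-demo").getOrCreate()

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # hypothetical broker
    .option("subscribe", "events")                      # hypothetical topic
    .load()
)

parsed = events.select(
    F.col("key").cast("string"),
    F.col("value").cast("string"),
    F.col("timestamp"),
)

query = (
    parsed.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/chk/events")    # required for fault tolerance
    .outputMode("append")
    .start("/mnt/silver/events")                        # hypothetical sink path
)
query.awaitTermination()
```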

Posted 2 weeks ago

Apply

6.0 - 8.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Role: Juniper AI SME
Experience: 6 - 8 years
Start Date: Immediate to 15 days
Location:
Job Type: Full-time

Job Summary: We are seeking a highly skilled and innovative AI Solutions Specialist to lead the design, development, and deployment of AI-driven solutions tailored to complex business challenges. The ideal candidate will be responsible for building scalable, ethical, and cutting-edge AI/ML systems, collaborating with cross-functional teams to implement intelligent solutions, and leveraging industry best practices in data science, machine learning, and cloud-native architectures.

Key Responsibilities:
- Lead the development of data engineering, machine learning (ML), and AI capabilities across the full solution lifecycle.
- Collaborate with project, data science, and development teams to define and implement AI/ML technical roadmaps.
- Engage with stakeholders to identify AI/ML opportunities aligned with business needs.
- Architect and implement scalable AI solutions integrated with existing systems and infrastructure.
- Design, train, test, and optimize machine learning models and AI algorithms.
- Evaluate third-party AI tools, APIs, and platforms for potential integration.
- Develop technical documentation, solution architecture, and proof-of-concept prototypes.
- Partner with data engineering teams to ensure data availability, quality, and integrity.
- Monitor AI models in production, ensuring continuous learning, tuning, and performance improvement.
- Uphold responsible AI practices, including bias mitigation, explainability, and data privacy.
- Communicate AI strategy and progress to both technical and business stakeholders.
- Continuously research and apply the latest AI trends, techniques, and tools.
- Work within MLOps frameworks and deployment pipelines, including Docker, MLflow, and CI/CD (a minimal tracking sketch follows below).

Technical Skills & Tools:
- Expertise in machine learning libraries and frameworks: TensorFlow, PyTorch, Scikit-learn.
- Proficient in programming: Python.
- Familiarity with AI/ML platforms: AWS SageMaker, Azure ML, Google Vertex AI, Databricks, Snowflake.
- Infrastructure and deployment tools: Docker, Kubernetes, Terraform, Ansible, Prometheus, Grafana, ELK.
- Data technologies: Hadoop, Spark, Kafka, SQL, NoSQL, Postgres, Cassandra, Elasticsearch.
- CRM and enterprise systems: Salesforce.
- Experience in data preprocessing, feature engineering, and model evaluation.
- Exposure to generative AI, LLMs (e.g., GPT, diffusion models), and deep learning techniques.
- Experience with natural language processing (NLP), computer vision, or reinforcement learning.
- Knowledge of Junos OS architecture, including its operational and feature-specific nuances.

Preferred Qualifications:
- Publications, patents, or open-source contributions in AI/ML.
- Proven leadership in building AI systems at scale.
- Excellent analytical, communication, and stakeholder engagement skills.

Experience:
- Minimum 6-8 years in data and analytics, with a strong focus on AI/ML, data platforms, and data engineering.
- Experience in leading architecture and infrastructure for end-to-end AI/ML lifecycle management.
- Deep understanding of technology trends, architectures, and integration strategies in AI.
- Hands-on expertise in predictive modeling, NLP, deep learning, and information retrieval.

(ref:hirist.tech)
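As an illustration of the MLflow-based MLOps loop named above, here is a minimal experiment-tracking sketch, assuming mlflow and scikit-learn are installed. The model choice, hyperparameters, and synthetic dataset are placeholders for illustration only.

```python
# Minimal MLflow experiment-tracking sketch (illustrative only).
# Assumes `pip install mlflow scikit-learn`; data and model are toy examples.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

with mlflow.start_run():
    params = {"n_estimators": 100, "max_depth": 5}          # hyperparameters (assumed)
    model = RandomForestClassifier(**params, random_state=42).fit(X_tr, y_tr)

    mlflow.log_params(params)                               # record hyperparameters
    mlflow.log_metric("accuracy", accuracy_score(y_te, model.predict(X_te)))
    mlflow.sklearn.log_model(model, "model")                # version the artifact
```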

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

We are looking for a seasoned Data Engineer with extensive experience in designing and implementing data pipelines using the Medallion Architecture, Azure Databricks, and Snowflake. The ideal candidate will be responsible for building scalable ETL pipelines, optimizing data flows, and ensuring data quality for large-scale data platforms.

Key Responsibilities:
- Design, develop, and optimize data pipelines following the Medallion Architecture (Bronze, Silver, and Gold layers); a minimal sketch of the pattern follows below.
- Implement and maintain ETL pipelines using Databricks and Python (multi-threading and multi-processing).
- Leverage Snowflake for data warehousing, including schema design, data loading, and performance tuning. This also includes experience with Linux, Docker, Anaconda, Pandas, PySpark, Apache Hive, Iceberg, Trino, and Prefect.
- Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver robust data solutions.
- Develop data models and manage data lakes for structured and unstructured data.
- Implement data governance and security best practices.
- Monitor and troubleshoot data pipelines for performance and reliability.
- Stay up to date with industry trends and best practices in data engineering and cloud technologies.

Qualification: B.Tech/B.E. (Computer Science), MCA, or a computer diploma in development, with 3+ years of experience (compulsory).

(ref:hirist.tech)
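To ground the Medallion responsibility above, here is a minimal sketch of a Bronze-to-Silver promotion in PySpark on Delta Lake. The table names, business key, and validation rule are illustrative assumptions; it presumes Delta tables registered in the metastore, as on Databricks.

```python
# Minimal Bronze -> Silver promotion under the Medallion pattern (illustrative sketch).
# Table names and columns are placeholders; assumes a Databricks-style metastore.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-demo").getOrCreate()

bronze = spark.read.table("bronze.orders")          # raw, append-only layer

silver = (
    bronze.dropDuplicates(["order_id"])             # de-dupe on the business key (assumed)
          .filter(F.col("amount") > 0)              # basic validation rule (assumed)
          .withColumn("processed_at", F.current_timestamp())
)

(
    silver.write
    .format("delta")
    .mode("overwrite")
    .saveAsTable("silver.orders")                   # cleansed, conformed layer
)
```

The Gold layer would follow the same pattern, aggregating Silver tables into business-level marts.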

Posted 2 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies