
902 Dataset Jobs - Page 12

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

0.0 - 3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description

About Us
At Bank of America, we are guided by a common purpose to help make financial lives better through the power of every connection. Responsible Growth is how we run our company and how we deliver for our clients, teammates, communities, and shareholders every day. One of the keys to driving Responsible Growth is being a great place to work for our teammates around the world. We’re devoted to being a diverse and inclusive workplace for everyone. We hire individuals with a broad range of backgrounds and experiences and invest heavily in our teammates and their families by offering competitive benefits to support their physical, emotional, and financial well-being. Bank of America believes both in the importance of working together and offering flexibility to our employees. We use a multi-faceted approach for flexibility, depending on the various roles in our organization. Working at Bank of America will give you a great career with opportunities to learn, grow and make an impact, along with the power to make a difference. Join us!

Global Business Services
Global Business Services delivers Technology and Operations capabilities to Lines of Business and Staff Support Functions of Bank of America through a centrally managed, globally integrated delivery model and globally resilient operations. Global Business Services is recognized for flawless execution, sound risk management, operational resiliency, operational excellence, and innovation. In India, we are present in five locations and operate as BA Continuum India Private Limited (BACI), a non-banking subsidiary of Bank of America Corporation and the operating company for India operations of Global Business Services.

Process Overview
The Data Infrastructure & Strategic Initiatives team is responsible for all automation aspects of testing processes, ensuring data quality and independent testing of corporate and business-level process and regulatory controls by providing seamless access to the appropriate data and platforms required to execute the associated portfolio of tests. A test is defined in the Independent Testing Enterprise Policy as “An independent point-in-time examination of one or more processes, controls, policies and procedures or data sources utilized for managing risk to assess the effectiveness of the control environment. A test is focused on answering a specific objective and has a pre-defined pass/fail criteria.” Compliance testing may include activities such as automated surveillance and transaction-level testing and may be performed onsite. Please note: this is not an application/software testing or application/software development role.

Job Description
Drive sustainable automation in EIT (Enterprise Independent Testing) test design, development, and implementation using a modern infrastructure and thoughtfully designed solutions.

Responsibilities:
- Develop testing automation that provides timely, useful, and actionable independent insight on the operational health of Bank of America’s processes.
- Work closely with process owners in the Front-Line Units and Control Functions (FLUs/CFs) to understand their processes, including underlying data, flows, and controls, and to identify risks to successful execution, so that appropriate testing and monitoring can be developed. The processes to be assessed span multiple Products, Regulations, and Enterprise Processes. The outputs of the methodologies will be used to drive process improvements and timely detection and reporting of errors.
- Document and verbally explain the intuition and details behind the methodologies in a manner that is clear, concise, and consumable for a broad set of audiences, including key stakeholders across the Bank as well as auditors and regulators.
- Convert test requirements into automated tests via Python and SQL, enabling automatic test document generation from code.
- Leverage SDLC/Agile development; understand coding standards and best practices.
- Perform debugging and code reviews to ensure code quality.
- Ensure accuracy and quality of development, and hold validation sessions with stakeholders to obtain sign-off.
- Ensure adherence to SLAs/metrics: productivity, turnaround time, and accuracy.
- Communicate regularly with management and other support colleagues.
- Manage stakeholders with respect to business deliverables.
- Drive projects independently with minimal oversight and ensure timely deliverables.

Requirements:
- Education: MBA
- Experience range: 0-3 years
- Foundational skills: prior experience/knowledge of coding in Python and SQL; working knowledge of relational databases and familiarity with data analysis/mining tools; prior exposure to working with large datasets beneficial.
- Technical experience in object-oriented or functional software development; experienced in writing effective, scalable code.
- Good understanding of software testing methodologies.
- Has worked on varied data problems: structured/unstructured and small/large.
- Applies critical thinking and connects the dots on how processes relate to one another; demonstrates understanding of and passion for the “why”; looks around the corner and explores uncharted territories with an “outside-in” perspective.
- Life-long learner who not only assertively educates self but encourages others to learn and grow.
- Feels ownership and accountability for delivering high-quality work; able to prioritize effectively, adapt, and meet strict deadlines.
- Ability to recommend and implement process control improvements.
- Strong written, verbal, and presentation creation and delivery skills. Communications are timely, concise, easy to follow, and tailored to topic, audience, and competing priorities. Exercises excellent judgment, discerning appropriate moments to challenge or insert a point of view. Presentations tell a compelling story and influence action. Asks the next level of questions and applies context to determine direction.
- Flexible to shift changes at short notice.
- Ability to work cross-functionally and across borders to effectively execute business goals.

Desired skills:
- Any experience in the Operational Risk, Audit, or Compliance domain
- Exposure to the Trifacta platform
- Automation acumen
- Experience using large-data tooling including Hadoop and S3, as well as Spark and Trino
- Experience building unit and integration tests with Pytest

Location: Students should be willing to work in any of the following locations as per company requirement: Mumbai, Chennai, Gurugram, Gandhinagar (GIFT), Hyderabad.

Mandatory Eligibility Requirements:
- Graduates from the Class of 2025 ONLY
- Must have a specialization in Finance or as specified
- Must have scored 60% in the last semester OR a CGPA of 6 on a scale of 10 in the last semester
- No active backlogs in any of the semesters
- Students should be willing to join any of the roles/skills/segments as per company requirement
- Students should be willing to work in any shifts, including night shifts
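The core technical task the posting describes, converting a documented control-test requirement into an automated Python/SQL check with a pre-defined pass/fail criterion, might look like this minimal sketch. The table name, rule, and threshold are hypothetical, not taken from the posting:

```python
import sqlite3

def check_no_stale_approvals(conn, max_days=30):
    """Point-in-time control test: no OPEN approval may be older than max_days.
    Returns (passed, offending_row_count) -- a pre-defined pass/fail criterion."""
    row = conn.execute(
        "SELECT COUNT(*) FROM approvals "
        "WHERE status = 'OPEN' AND age_days > ?", (max_days,)
    ).fetchone()
    return row[0] == 0, row[0]

# Hypothetical sample data standing in for a production extract.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE approvals (id INTEGER, status TEXT, age_days INTEGER)")
conn.executemany("INSERT INTO approvals VALUES (?, ?, ?)",
                 [(1, "OPEN", 5), (2, "CLOSED", 90), (3, "OPEN", 45)])

passed, failures = check_no_stale_approvals(conn)
print(passed, failures)  # row 3 is an OPEN approval older than 30 days
```

A check like this slots naturally into a Pytest suite, and its docstring is the kind of text automatic test-document generation can harvest from code.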

Posted 2 weeks ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

What impact will you make?
Every day, your work will make an impact that matters, while you thrive in a dynamic culture of inclusion, collaboration and high performance. As the undisputed leader in professional services, Deloitte is where you’ll find unrivaled opportunities to succeed and realize your full potential.

The team
Assurance is about much more than just the numbers. It’s about attesting to accomplishments and challenges and helping to assure strong foundations for future aspirations. Deloitte exemplifies the what, how, and why of change so you’re always ready to act ahead. Learn more about Risk Advisory R & LS.

Your work profile
Roles and Responsibilities:
- Experience in Capital Markets on FX/IR/Equity/CDO products with strong product knowledge.
- Good understanding of the product/trade life cycle, front to back, along with end-to-end trade operations.
- Experience with project management methodologies (Agile, Waterfall, hybrid approaches).
- Proficiency in project management tools (JIRA, MS Project, etc.).
- Ability to drive an agenda and consensus across multi-functional and cross-regional/cross-geography teams.
- Communicates clearly and concisely, ensuring issues are escalated and understood.
- BA: skilled in developing Business and Functional Requirement Documents for new initiatives, collaborating with traders, risk, finance, analytics, operations, and technology teams.
- PM: lead and participate in working groups and workshops with stakeholders to understand business requirements, define project plans, manage timelines, and ensure successful delivery of the project.

Skill sets
- Functional: working knowledge and experience in Risk, Stress testing, Traded risk, Counterparty credit risk, Basel 3.1, etc.
- Technical: Advanced Excel, SAS, Python. Should be able to handle, manage, and analyze large datasets (data massaging, data transformation, etc.).

Your role as a leader
At Deloitte India, we believe in the importance of leadership at all levels. We expect our people to embrace and live our purpose by challenging themselves to identify issues that are most important for our clients, our people, and for society, and make an impact that matters. In addition to living our purpose, a Consultant or Assistant Manager across our organization:
- Builds own understanding of our purpose and values; explores opportunities for impact
- Demonstrates strong commitment to personal learning and development; acts as a brand ambassador to help attract top talent
- Understands expectations and demonstrates personal accountability for keeping performance on track
- Actively focuses on developing effective communication and relationship-building skills
- Understands how their daily work contributes to the priorities of the team and the business

How you’ll grow
At Deloitte, our professional development plan focuses on helping people at every level of their career to identify and use their strengths to do their best work every day. From entry-level employees to senior leaders, we believe there’s always room to learn. We offer opportunities to help build world-class skills in addition to hands-on experience in the global, fast-changing business world. From on-the-job learning experiences to formal development programs at Deloitte University, our professionals have a variety of opportunities to continue to grow throughout their career. Explore Deloitte University, The Leadership Centre.

Benefits
At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.

Our purpose
Deloitte is led by a purpose: to make an impact that matters. Every day, Deloitte people are making a real impact in the places they live and work. We pride ourselves on doing not only what is good for clients, but also what is good for our people and the communities in which we live and work, always striving to be an organization that is held up as a role model of quality, integrity, and positive change. Learn more about Deloitte's impact on the world.

Recruiter tips
We want job seekers exploring opportunities at Deloitte to feel prepared and confident. To help you with your interview, we suggest that you do your research: know some background about the organization and the business area you’re applying to. Check out recruiting tips from Deloitte professionals.

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Indore, Madhya Pradesh, India

On-site

Backdrop
AVIZVA is a Healthcare Technology Organization that harnesses technology to simplify, accelerate, and optimize the way healthcare enterprises deliver care. Established in 2011, we have served as strategic enablers for healthcare enterprises, helping them enhance their overall care delivery. With over 14 years of expertise, we have engineered more than 150 tailored products for leading Medical Health Plans, Dental and Vision Plan Providers, PBMs, Medicare Plan Providers, TPAs, and more.

Overview of the Role
As a System Analyst within a product development team at AVIZVA, you will be one of the front-liners of the team, spearheading your product’s solution design activities alongside the product owners, system architect, and lead developers while collaborating with all business and technology stakeholders.

Job Responsibilities
- Gather and analyze business, functional, and data requirements with the PO and relevant stakeholders, and derive system requirements from them.
- Work with the system architect to develop an understanding of the product's architecture, components, interactions, and flows, and build clarity around the technological nuances and constructs involved.
- Develop an understanding of the various datasets relevant to the industry, their business significance, and their logical structuring from a data modeling perspective.
- Conduct in-depth industry research around datasets pertinent to the underlying problem statements.
- Identify, (data) model, and document the various entities, relationships, and attributes, along with appropriate cardinality and normalization.
- Apply ETL principles to formulate and document data dictionaries, business rules, and transformation and enrichment logic for the various datasets in question, pertaining to the various source and target systems in context.
- Define the data flows, validations, and business rules driving the interchange of data between components of a system or multiple systems.
- Define requirements around system integrations and the exchange of data, such as the systems involved, services (APIs) involved, nature of integration, and handshake details (data involved, authentication, etc.).
- Identify use-cases for exposure of data within an entity/dataset via APIs, define detailed API signatures, and create API documentation.
- Provide clarifications to the development team around requirements, system design, integrations, data flows, and scenarios.
- Support other product teams dependent on the APIs and integrations defined by your product team in understanding the endpoints, logic, business context, entity structure, etc.
- Provide backlog grooming support to the Product Owner through activities such as functional analysis and data analysis.

Skills & Qualifications
- Bachelor’s or Master’s degree in Computer Science or any other analytically inclined field of study.
- At least 4 years of relevant experience in roles such as Business Analyst, Systems Analyst, or Business Systems Analyst.
- Experience in analysing and defining systems involving varying levels of complexity in terms of underlying components, data, integrations, flows, etc.
- Experience working with data (structured, semi-structured), data modeling, and writing database queries with hands-on SQL, plus working knowledge of Elasticsearch indexes. Experience with unstructured data will be a huge plus.
- Experience identifying and defining entities and APIs, and writing API specifications and API consumer specifications.
- Ability to map data from various sources to various consumer endpoints such as a system, a service, UI, process, sub-process, workflow, etc.
- Experience with data management products based on ETL principles, involving multitudes of datasets, disparate data sources, and target systems.
- A strong analytical mindset with a proven ability to understand a variety of business problems through stakeholder interactions and other methods, in order to ideate the most aligned and appropriate technology solutions.
- Exposure to diagrammatic analysis and elicitation of business processes, data, and system flows using BPMN and UML diagrams such as activity flows, use-cases, sequence diagrams, DFDs, etc.
- Exposure to writing requirements documentation such as BRDs, FRDs, SRSs, Use-Cases, User-Stories, etc.
- An appreciation for systems’ technological and architectural concepts, with an ability to speak about the components of an IT system, inter-component interactions, databases, external and internal data sources, data flows, and system flows.
- Experience (at least familiarity) with the Atlassian suite (Jira and Confluence).
- Experience in product implementations and customisations through system configurations will be an added plus.
- Experience driving UX design activities in collaboration with graphic and UI design teams, by means of enabler tools such as wireframes, sketches, flow diagrams, information architecture, etc., will be an added plus.
- Exposure to UX design and collaboration tools such as Figma, Zeplin, etc. will be an added plus.
- Awareness of or prior exposure to Healthcare & Insurance business and data will be a huge advantage.
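The entity-modeling work this role describes (attributes, relationships, and the business rules that drive validations) can be sketched in a few lines. The "Member" entity, its fields, and the coverage-window rule below are purely hypothetical illustrations, not AVIZVA's actual data model:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Hypothetical "Member" entity as a system analyst might document it:
# attributes, a foreign key implying cardinality (one Plan, many Members),
# and a business rule expressed as a validation check.
@dataclass
class Member:
    member_id: str                     # primary key in the source eligibility feed
    plan_code: str                     # FK -> Plan entity
    effective_date: date
    termination_date: Optional[date]   # None = open-ended coverage

def is_active(member: Member, as_of: date) -> bool:
    """Business rule: a member is active if as_of falls inside the
    [effective_date, termination_date) coverage window."""
    if as_of < member.effective_date:
        return False
    return member.termination_date is None or as_of < member.termination_date

m = Member("M001", "PLN-A", date(2024, 1, 1), date(2024, 7, 1))
print(is_active(m, date(2024, 3, 15)))  # inside the coverage window
print(is_active(m, date(2024, 8, 1)))   # after termination
```

A data dictionary entry for this entity would record each attribute's type, source system, and the rule above as the validation driving data interchange between systems.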

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

We’re Lear For You
Lear, a global automotive technology leader in Seating and E-Systems, is Making every drive better by delivering intelligent in-vehicle experiences for customers around the world. With over 100 years of experience, Lear has earned a legacy of operational excellence while building its future on innovation. Our talented team is committed to creating products that ensure the comfort, well-being, convenience, and safety of consumers. Working together, we are Making every drive better™. To know more about Lear, please visit our career site: www.lear.com

🚀 We're Hiring: Lead - Data Engineering (Palantir Foundry) at Lear Corporation!
Are you a data engineering expert ready to drive impact and support crucial decision-making? Join Lear Corporation as a **Senior Data Engineer** to take the lead on designing, building, and maintaining scalable data pipelines within Palantir Foundry.

🔍 About the Role:
In this role, you’ll collaborate with cross-functional teams to ensure data solutions are robust, reliable, and aligned with our organizational needs. Your expertise will shape the future of data-driven insights and support mission-critical projects that drive decision-making. You’ll also mentor junior team members, helping foster a collaborative and innovative environment.

✨ Key Responsibilities:
1. Manage execution of data-focused projects: As a senior member of the Lear Foundry team, support designing, building, and maintaining data-focused projects using Lear’s data analytics and application platforms. Participate in projects from conception to root-cause analytics and solution deployment. Understand program and product delivery phases, contributing expert analysis across the lifecycle. Ensure project deliverables are met per the agreed timeline.
2. Tools and technologies: Utilize key tools within Palantir Foundry, including Pipeline Builder (author data pipelines using a visual interface), Code Repositories (manage code for data pipeline development), and Data Lineage (visualize end-to-end data flows). Leverage programmatic health checks to ensure pipeline durability. Work with both new and legacy technologies to integrate separate data feeds and transform them into new scalable datasets. Mentor junior data engineers on best practices.
3. Data pipeline architecture and development: Lead the design and implementation of complex data pipelines. Collaborate with cross-functional teams to ensure scalability, reliability, and efficiency, and utilize Git concepts for version control and collaborative development. Optimize data ingestion, transformation, and enrichment processes.
4. Big data, dataset creation, and maintenance: Use Pipeline Builder or Code Repositories to transform big data into manageable datasets and produce high-quality datasets that meet the organization’s needs. Implement optimum build times to ensure effective utilization of resources.
5. High-quality dataset production: Produce and maintain datasets that meet organizational needs. Optimize the size and build schedule of datasets to reflect the latest information. Implement data quality health checks and validation.
6. Collaboration and leadership: Work closely with data scientists, analysts, and operational teams. Provide technical guidance and foster a collaborative environment. Champion transparency and effective decision-making.
7. Continuous improvement: Stay abreast of industry trends and emerging technologies. Enhance pipeline performance, reliability, and maintainability. Contribute to the evolution of Foundry’s data engineering capabilities.
8. Compliance and data security: Ensure documentation and procedures align with internal practices (ITPM) and Sarbanes-Oxley requirements, continuously improving them.
9. Team development and collaboration: Mentor junior team members and contribute to their growth. Foster collaboration within cross-functional teams. Share best practices and encourage knowledge sharing.
10. Quality assurance and optimization: Optimize data pipelines and their impact on the resource utilization of downstream processes. Continuously test and improve data pipeline performance and reliability. Optimize system performance for all deployed resources.

🎯 Qualifications:
- Bachelor’s or master’s degree in Computer Science, Engineering, or a related field.
- Minimum 5 years of experience in data engineering, ETL, and data integration.
- Proficiency in Python and libraries like PySpark, Pandas, and NumPy.
- Strong understanding of Palantir Foundry and its capabilities.
- Familiarity with big data technologies (e.g., Hadoop, Spark, Kafka).
- Excellent problem-solving skills and attention to detail.
- Effective communication and leadership abilities.

**Why Lear?** Be part of a team that values innovation, excellence, and growth. Join us and make a real impact in a role that blends hands-on technical expertise with leadership in a collaborative, cutting-edge environment. Ready to advance your data engineering career with Lear? **Apply now!**
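The "programmatic health checks to ensure pipeline durability" responsibility can be sketched outside Foundry in plain Python. The column names and thresholds below are hypothetical; inside Foundry such rules would typically be expressed as data expectations attached to a transform:

```python
# Minimal sketch of a post-build dataset health check: schema conformance,
# primary-key uniqueness, and per-column null-rate thresholds.
# Column names and thresholds are illustrative, not Lear's actual checks.

EXPECTED_COLUMNS = {"vin", "plant_code", "build_date"}

def health_check(rows, key="vin", max_null_rate=0.05):
    """Return a dict of check name -> bool for a list-of-dicts dataset."""
    results = {}
    # 1. Schema check: every row carries exactly the expected columns.
    results["schema"] = all(set(r) == EXPECTED_COLUMNS for r in rows)
    # 2. Primary-key check: the key column is unique and never null.
    keys = [r.get(key) for r in rows]
    results["unique_key"] = None not in keys and len(keys) == len(set(keys))
    # 3. Null-rate check: no column exceeds the allowed null rate.
    null_rate = lambda col: sum(r.get(col) is None for r in rows) / len(rows)
    results["null_rate"] = all(null_rate(c) <= max_null_rate
                               for c in EXPECTED_COLUMNS)
    return results

rows = [
    {"vin": "V1", "plant_code": "P7", "build_date": "2024-05-01"},
    {"vin": "V2", "plant_code": None, "build_date": "2024-05-01"},
]
print(health_check(rows))  # the null plant_code trips the null-rate check
```

Failing checks would typically block the build or alert the pipeline owner, which is what makes the pipeline "durable" in the sense the posting describes.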

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Gurgaon

On-site

DESCRIPTION
Alexa Shopping Operations strives to become the most reliable source for dataset generation and annotations. We work in collaboration with Shopping feature teams to enhance customer experience (CX) quality across shopping features, devices, and locales. Our primary focus lies in handling annotations for training, measuring, and improving Artificial Intelligence (AI) and Large Language Models (LLMs), enabling Amazon to deliver a superior shopping experience to customers worldwide. Our mission is to empower Amazon's LLMs through Reinforcement Learning from Human Feedback (RLHF) across various categories at high speed. We aspire to provide an end-to-end data solution for the LLM lifecycle, leveraging cutting-edge technology alongside our operational excellence. By joining us, you will play a pivotal role in shaping the future of the shopping experience for customers worldwide.

Key job responsibilities
The candidate actively seeks to understand Amazon’s core business values and initiatives, and translates those into everyday practices. Some of the key result areas include, but are not limited to:
- Experience in managing process and operational escalations
- Driving appropriate data-oriented analysis, adoption of technology solutions, and process improvement projects to achieve operational and business goals
- Managing stakeholder communication across multiple lines of business on operational milestones, process changes, and escalations
- Communicating and taking the lead role in identifying gaps in process areas, and working with all stakeholders to resolve them
- Being an SME for the process and a referral point for peers and junior team members
- Ability to drive business/operational metrics through quantitative decision-making and adoption of different tools and resources
- Ability to meet deadlines in a fast-paced work environment driven by complex software systems and processes
- Ability to perform deep dives into the process and come up with process improvement solutions
- Collaborating effectively with other teams and subject matter experts (SMEs) and Language Engineers (LaEs) to support launches of new processes and services

BASIC QUALIFICATIONS
- A Bachelor’s Degree and relevant work experience of 3+ years.
- Excellent level of English and either of Spanish/French/Italian/Portuguese, C1 level.
- Demonstrated ability to analyze and interpret complex SOPs.
- Excellent problem-solving skills with a proactive approach to identifying and implementing process improvements.
- Strong communication and interpersonal skills to effectively guide and mentor associates.
- Ability to work collaboratively with cross-functional teams.
- Thorough understanding of multiple SOPs and adherence to established processes.
- Ability to identify areas for process improvement and SOP enhancement, and develop actionable plans for implementation.
- Leading and participating in process improvement initiatives.
- Comfortable working in a fast-paced, highly collaborative, dynamic work environment.
- Willingness to support several projects at one time, and to accept re-prioritization as necessary.
- Adaptive to change and able to work in a fast-paced environment.

PREFERRED QUALIFICATIONS
- Experience with Artificial Intelligence interaction, such as prompt generation.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Job details: IND, HR, Gurugram | Editorial, Writing, & Content Management

Posted 2 weeks ago

Apply

3.0 years

3 - 10 Lacs

Chennai

On-site

BASIC QUALIFICATIONS
- Bachelor's degree or equivalent
- Experience in Excel (macros, index, conditional lists, arrays, pivots, lookups)
- Bachelor's degree in a business or analytical discipline
- Knowledge of SQL
- Knowledge of Python, VBA, macros, Selenium scripts

Alexa Shopping Operations strives to become the most reliable source for dataset generation and annotations. We work in collaboration with Shopping feature teams to enhance customer experience (CX) quality across shopping features, devices, and locales. Our primary focus lies in handling annotations for training, measuring, and improving Artificial Intelligence (AI) and Large Language Models (LLMs), enabling Amazon to deliver a superior shopping experience to customers worldwide. Our mission is to empower Amazon's LLMs through Reinforcement Learning from Human Feedback (RLHF) across various categories at high speed. We aspire to provide an end-to-end data solution for the LLM lifecycle, leveraging technology alongside our operational excellence. By joining us, you will play a pivotal role in shaping the future of the shopping experience for customers worldwide.

The candidate will leverage data analysis tools like SQL, Excel, and other data management systems to extract and examine data, driving informed decision-making. By monitoring existing metrics and developing new ones in collaboration with internal teams, they will identify opportunities to enhance processes and systems, and design and implement reporting solutions that empower stakeholders to effectively manage the business. Additionally, they will support cross-functional teams in the day-to-day execution of program implementation and lead small to medium operational enhancement projects, driving continuous improvement and optimizing outcomes.

Key job responsibilities
- Retrieve and analyze data using SQL, Excel, and other data management systems.
- Monitor existing metrics and create/implement new metrics where needed, partnering with internal teams to identify process and system improvement opportunities.
- Design and implement reporting solutions to enable stakeholders to manage the business and make effective decisions.
- Support cross-functional teams on the day-to-day execution of the existing program implementation.
- Drive small to medium operational enhancement projects.

PREFERRED QUALIFICATIONS
- 3+ years of complex Excel VBA macro-writing experience
- Knowledge of Microsoft Excel at an advanced level, including pivot tables, macros, index/match, vlookup, VBA, data links, etc.
- Enrolled in or completed a Bachelor's degree in a business or analytical discipline

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
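The "retrieve and analyze data using SQL" responsibility might look like the sketch below, using Python's built-in sqlite3 as a stand-in database. The annotation table, columns, and error-rate metric are hypothetical, not Amazon's actual schema:

```python
import sqlite3

# Hypothetical metric: per-locale annotation error rate, the kind of figure
# a reporting solution for this team might surface to stakeholders.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE annotations (
    locale TEXT, total INTEGER, errors INTEGER)""")
conn.executemany("INSERT INTO annotations VALUES (?, ?, ?)", [
    ("en_US", 200, 4),
    ("fr_FR", 100, 9),
])

# Rank locales by error percentage so the worst process gaps surface first.
rows = conn.execute("""
    SELECT locale, ROUND(100.0 * errors / total, 1) AS error_pct
    FROM annotations
    ORDER BY error_pct DESC
""").fetchall()
print(rows)  # locales ranked by error rate, highest first
```

The same query could feed a recurring report, with thresholds on `error_pct` flagging locales that need a process deep dive.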

Posted 2 weeks ago

Apply

3.0 years

3 - 10 Lacs

Chennai

On-site

DESCRIPTION
Alexa Shopping Operations strives to become the most reliable source for dataset generation and annotations. We work in collaboration with Shopping feature teams to enhance customer experience (CX) quality across shopping features, devices, and locales. Our primary focus lies in handling annotations for training, measuring, and improving Artificial Intelligence (AI) and Large Language Models (LLMs), enabling Amazon to deliver a superior shopping experience to customers worldwide. Our mission is to empower Amazon's LLMs through Reinforcement Learning from Human Feedback (RLHF) across various categories at high speed. We aspire to provide an end-to-end data solution for the LLM lifecycle, leveraging technology alongside our operational excellence. By joining us, you will play a pivotal role in shaping the future of the shopping experience for customers worldwide.

The candidate will leverage data analysis tools like SQL, Excel, and other data management systems to extract and examine data, driving informed decision-making. By monitoring existing metrics and developing new ones in collaboration with internal teams, they will identify opportunities to enhance processes and systems, and design and implement reporting solutions that empower stakeholders to effectively manage the business. Additionally, they will support cross-functional teams in the day-to-day execution of program implementation and lead small to medium operational enhancement projects, driving continuous improvement and optimizing outcomes.

Key job responsibilities
- Retrieve and analyze data using SQL, Excel, and other data management systems.
- Monitor existing metrics and create/implement new metrics where needed, partnering with internal teams to identify process and system improvement opportunities.
- Design and implement reporting solutions to enable stakeholders to manage the business and make effective decisions.
- Support cross-functional teams on the day-to-day execution of the existing program implementation.
- Drive small to medium operational enhancement projects.

BASIC QUALIFICATIONS
- Bachelor's degree or equivalent
- Experience in Excel (macros, index, conditional lists, arrays, pivots, lookups)
- Bachelor's degree in a business or analytical discipline
- Knowledge of SQL
- Knowledge of Python, VBA, macros, Selenium scripts

PREFERRED QUALIFICATIONS
- 3+ years of complex Excel VBA macro-writing experience
- Knowledge of Microsoft Excel at an advanced level, including pivot tables, macros, index/match, vlookup, VBA, data links, etc.
- Enrolled in or completed a Bachelor's degree in a business or analytical discipline

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Job details: IND, TN, Chennai | Business Intelligence

Posted 2 weeks ago

Apply

4.0 - 8.0 years

0 Lacs

Ahmedabad, Gujarat, India

Remote

Role Title: Programming Team Leader
Reporting To: Associate Director, Statistical Programming
Function: Data Management & Statistics
Location: Remote
Experience: 4–8 Years

Purpose of the Role
To ensure the accurate and timely execution of all statistical programming tasks by providing subject matter expertise and guiding a team of programmers. The role is responsible for generating statistical analyses from clinical databases, external data sources, and other relevant inputs in compliance with study protocols, statistical plans, regulatory guidelines, and internal processes to drive stakeholder satisfaction.

Key Responsibilities & KPIs
1. Subject Matter Leadership
Provide expert guidance on SDTM and ADaM development, ensuring alignment with the latest regulatory and industry standards (e.g., CDISC).
Resolve complex issues related to dataset creation, transformation, and validation.
Establish and maintain best practices for programming efficiency, reproducibility, and standardization.
KPIs:
Quality of programming deliverables (%)
Stakeholder satisfaction (%)
On-time task completion (%)
Regulatory compliance (audit cases)
Process improvements (#)
Resolution of complex issues without escalation (#)
Team training mandays (#)
Voluntary attrition rate (%)
360° feedback results
Training sessions conducted/attended (#)
2. Project Delivery
Lead complex or high-priority programming tasks.
Develop SAS programs for clinical trial outputs, including TLFs, as per the SAP.
Conduct peer code reviews and optimize existing programs/macros for performance and efficiency.
3. Reporting
Prepare comprehensive and timely management/statistical reports.
Monitor data transfers during trials and address any issues proactively.
Identify and escalate risks related to programming timelines and implement mitigation strategies.
4. Quality Assurance
Establish and lead regular audits to ensure programming output and process quality.
Execute data validation checks throughout the study lifecycle.
Ensure accurate archiving of datasets, programs, and outputs post-study.
5. Policies, Processes & Procedures
Maintain clear documentation of programming activities and dataset specifications.
Ensure SOP adherence and correct any identified non-conformances.
Implement new projects in line with department policies.
Ensure compliance with Quality and Information Security Management Systems and applicable legal standards.
6. People Management
Ensure technical and procedural training for all team members.
Lead recruitment and foster team engagement and retention.
Set team performance objectives, conduct appraisals, and provide feedback.
Mentor team members for career growth and development.
Recommend and support relevant training programs.

Operating Network
Internal: Department Heads
External: None

Role Requirements
Education: Bachelor’s degree in Statistics, Mathematics, Biostatistics, Data Analysis, Data Science, or a related field. A Master’s degree in the above fields is preferred.
Experience: Minimum 8 years of SAS programming experience in clinical trials (CDISC standards), with at least 1–2 years of people management experience within a CRO, pharmaceutical, or related industry.
Technical / Functional Competencies:
SAS programming and statistical software
SDTM/ADaM development
Regulatory compliance (e.g., FDA, EMA)
Project management
Quality assurance
Documentation and data interpretation
Behavioral Competencies:
Collaboration
Communication
Decision-making
Problem-solving
Coaching
People management
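The data validation checks described under Quality Assurance would be implemented in SAS in this role; as a language-neutral illustration of the pattern (field presence, cross-field consistency), here is a minimal Python sketch. The records and the CDISC-style field names (`USUBJID`, `ASTDT`, `AENDT`) are used purely as examples.

```python
from datetime import date

# Illustrative only: the posting's tooling is SAS. Records below are
# invented ADaM-style rows; the checks mirror common edit checks
# (missing subject ID, analysis end date before start date).
records = [
    {"USUBJID": "S-001", "ASTDT": date(2024, 3, 1), "AENDT": date(2024, 3, 9)},
    {"USUBJID": "S-002", "ASTDT": date(2024, 3, 5), "AENDT": date(2024, 3, 2)},
    {"USUBJID": "",      "ASTDT": date(2024, 3, 7), "AENDT": date(2024, 3, 8)},
]

def validate(rec):
    """Return a list of issue codes for one record."""
    issues = []
    if not rec["USUBJID"]:
        issues.append("MISSING_USUBJID")
    if rec["AENDT"] < rec["ASTDT"]:
        issues.append("END_BEFORE_START")
    return issues

report = {r["USUBJID"] or "<blank>": validate(r) for r in records}
print(report)
```

Running checks like these across the study lifecycle, rather than only at database lock, is what lets issues be caught and escalated early.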

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Lead Computer Vision & AI Engineer (LLM, Robotics, Industrial AI)

About the Role
We’re looking for a hands-on and passionate Computer Vision & AI Engineer who can lead the development of intelligent systems combining AI, LLMs, robotics, and industrial automation. You will play a key role in building cutting-edge solutions like:
AI-powered EOSH detection
LLM-integrated robot assistant
Image matching & inspection
Vision analytics for factories
You will work closely with the founder and product teams to take PoCs to production across Jetson, edge devices, and AI models.

Responsibilities
Build and optimize real-time Computer Vision pipelines (YOLO, Detectron2, etc.)
Train and fine-tune models for object detection, similarity matching, and defect analysis
Integrate LLMs (LangChain, LLaMA, etc.) for robotics applications
Work on Edge AI deployment (Jetson Nano/Xavier, Coral, Raspberry Pi)
Guide image dataset creation, labeling, and model evaluation
Contribute to architecture, design reviews, and PoC demos
Mentor and support junior developers as a technical anchor

Requirements
4–8 years of hands-on experience in Computer Vision / Deep Learning
Proficiency in Python, PyTorch, OpenCV, and ONNX/TensorRT
Experience with image matching, Siamese/Triplet networks, or retrieval models
Comfortable deploying on Jetson, Edge TPUs, or embedded Linux systems
Experience integrating LLMs or agents (LangChain, Transformers, etc.)
Strong problem-solving mindset and startup adaptability
Bonus: ROS basics, Docker, GStreamer, or GPU optimization

What You Get
Work on high-impact real-world AI products
Lead the development of India’s next-gen industrial vision platforms
Freedom to explore, experiment, and build IP
Exposure to clients across manufacturing, IT/ITES, and security sectors
Competitive salary
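At inference time, Siamese/Triplet-based image matching reduces to nearest-neighbour search over embeddings: the network maps each image to a vector, and matching picks the gallery item with the highest similarity. A dependency-free sketch of that retrieval step, using stand-in 4-dimensional vectors (real embeddings would typically be 128-512 dims from a PyTorch model) and invented gallery names:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical reference gallery: name -> embedding produced offline.
gallery = {
    "part_ok_01":    [0.9, 0.1, 0.0, 0.2],
    "part_defect_7": [0.1, 0.8, 0.3, 0.0],
}

# Embedding of the image being inspected (stand-in values).
query = [0.85, 0.15, 0.05, 0.1]

best = max(gallery, key=lambda name: cosine(query, gallery[name]))
print(best)
```

On an edge device, the only change is scale: the gallery embeddings are precomputed, and the search is replaced by an approximate nearest-neighbour index once the gallery grows large.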

Posted 2 weeks ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Position: BI Developer and Data Analyst
Skills: Power BI, Databricks, SQL, Python, ETL, Redshift or Athena, AWS services (beyond QuickSight)
Experience: 4+ years

Responsibilities
Design, develop, and maintain interactive and insightful dashboards using QuickSight.
Conduct advanced data analysis to identify trends, patterns, and anomalies, providing meaningful interpretations and recommendations.
Collaborate with stakeholders across different departments to understand their data needs and translate them into effective analytical solutions.
Write and optimize SQL queries to extract, transform, and load data from various data sources.
Utilize Python for data manipulation, automation of tasks, and statistical analysis.
Ensure data accuracy, integrity, and consistency across all dashboards and analyses.
Document dashboard specifications, data sources, and analytical methodologies.
Stay up to date with the latest trends and best practices in data visualization and analytics.

Qualifications
Bachelor's degree in a quantitative field such as Data Science, Statistics, Mathematics, Computer Science, or a related discipline.

Required Skills
Data visualization best practices: proven experience in developing advanced dashboards and performing data analysis, with the ability to create clear, intuitive, and impactful visualizations (charts, graphs, tables, KPIs) that effectively communicate insights.
Extensive experience with AWS QuickSight (or a similar BI tool): hands-on experience in building, publishing, and maintaining interactive dashboards and reports.
QuickSight data sources: experience connecting QuickSight to various data sources, especially those common in AWS environments (e.g., S3, Redshift, Athena, RDS, Glue).
QuickSight dataset creation and management: proficiency in creating, transforming, and optimizing datasets within QuickSight, including calculated fields, parameters, and filters.
Performance optimization: knowledge of how to optimize QuickSight dashboards and data for speed and scalability.

Preferred Skills
Experience with other data visualization tools.
Familiarity with machine learning concepts.
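One common tactic behind the performance-optimization requirement above is pre-aggregating data upstream of the BI layer, so the dashboard queries a small summary instead of raw rows. A minimal Python sketch of that step, with invented rows and column names:

```python
from collections import defaultdict

# Hypothetical raw fact rows; in practice these would come from
# Athena/Redshift via an ETL job rather than an in-memory list.
orders = [
    {"region": "APAC", "revenue": 120.0},
    {"region": "APAC", "revenue": 80.0},
    {"region": "EMEA", "revenue": 200.0},
]

# Pre-aggregate to one row per region: this is the dataset the BI tool
# would load, keeping refreshes and visuals fast.
summary = defaultdict(lambda: {"revenue": 0.0, "orders": 0})
for row in orders:
    agg = summary[row["region"]]
    agg["revenue"] += row["revenue"]
    agg["orders"] += 1

print(dict(summary))
```

The same shaping can be done with a SQL GROUP BY in the source query; the point is that aggregation happens once, upstream, rather than per dashboard interaction.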

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Description
Alexa Shopping Operations strives to become the most reliable source for dataset generation and annotations. We work in collaboration with Shopping feature teams to enhance customer experience (CX) quality across shopping features, devices, and locales. Our primary focus lies in handling annotations for training, measuring, and improving Artificial Intelligence (AI) and Large Language Models (LLMs), enabling Amazon to deliver a superior shopping experience to customers worldwide. Our mission is to empower Amazon's LLMs through Reinforcement Learning from Human Feedback (RLHF) across various categories at high speed. We aspire to provide an end-to-end data solution for the LLM lifecycle, leveraging technology alongside our operational excellence. By joining us, you will play a pivotal role in shaping the future of the shopping experience for customers worldwide.

The candidate will leverage data analysis tools like SQL, Excel, and other data management systems to extract and examine data, driving informed decision-making. By monitoring existing metrics and developing new ones in collaboration with internal teams, the candidate will identify opportunities to enhance processes and systems, and will design and implement reporting solutions that empower stakeholders to effectively manage the business. Additionally, the candidate will support cross-functional teams in the day-to-day execution of program implementation and lead small to medium operational enhancement projects, driving continuous improvement and optimizing outcomes.

Key job responsibilities
Retrieve and analyze data using SQL, Excel, and other data management systems.
Monitor existing metrics and create/implement new metrics where needed, partnering with internal teams to identify process and system improvement opportunities.
Design and implement reporting solutions to enable stakeholders to manage the business and make effective decisions.
Support cross-functional teams on the day-to-day execution of the existing program implementation.
Drive small to medium operational enhancement projects.

Basic Qualifications
Bachelor's degree or equivalent
Experience in Excel (macros, index, conditional list, arrays, pivots, lookups)
Bachelor's degree in business or analytical discipline
Knowledge of SQL
Knowledge of Python, VBA, Macros, Selenium scripts

Preferred Qualifications
3+ years of complex Excel VBA macro-writing experience
Knowledge of Microsoft Excel at an advanced level, including pivot tables, macros, index/match, vlookup, VBA, data links, etc.
Are enrolled in or have completed a Bachelor's degree in business or analytical discipline

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - ADCI MAA 12 SEZ
Job ID: A3030459

Posted 2 weeks ago

Apply

1.5 - 2.0 years

1 - 2 Lacs

India

On-site

We are looking for a skilled RDLC & SQL Developer to join our team at eDominer. In this role, you will be responsible for designing and developing reports using RDLC and HTML, ensuring they meet business requirements and functional designs. Your work will directly support our EXPAND smERP on the Cloud platform, enhancing the reporting capabilities for our clients.

Job Duties and Responsibilities:
Design and develop dynamic and visually appealing reports using RDLC and HTML.
Understand business and functional requirements, translate them into technical specifications, and develop reports accordingly.
Perform thorough testing and QA of reports to ensure accuracy and functionality.
Create and optimize SQL queries, views, indexes, functions, and stored procedures to support report development.
Design and develop sub-reports and charts in Crystal Reports or RDLC for enhanced data visualization.
Utilize ASP.NET and VB coding with the DataSet object to manage data before integrating it into RDLC or Crystal reports.
Collaborate with the support team to resolve tickets related to reporting issues, problems, and queries.
Optimize database objects and report templates to ensure high performance and efficient report generation.
Stay updated and be willing to learn additional reporting tools to enhance the capabilities of EXPAND smERP on the Cloud.

Requirements:
1.5 to 2 years of experience in report development using RDLC and Crystal Reports.
Must have a BE, B.Tech, B.Sc. IT, M.Sc. IT, or MCA background.
Strong knowledge of T-SQL query writing and optimization.
Proficient in creating SQL queries, views, indexes, functions, and stored procedures; experienced in ASP.NET and VB coding using the DataSet object.
Experience in designing and developing sub-reports and charts in Crystal Reports or RDLC.
Excellent communication and troubleshooting abilities.
Willingness to learn additional reporting tools and adapt to new technologies.
Ability to work under tight deadlines and deliver high-quality reports on time.

Job Location: Kolkata

Perks and Benefits:
Competitive salary structure with performance-based incentives.
Opportunities for professional growth and career advancement.
A collaborative and innovative work environment.

Contact Us to Apply:
If you are passionate about report development and eager to contribute to a dynamic team, we invite you to apply for this position. Please send your updated CV to hr@edominer.com for further processing.

About eDominer:
eDominer has been a pioneer in business software development since 1995, focusing on business automation. Our flagship product, EXPAND smERP, is a cost-effective, reliable, and user-friendly ERP solution catering to various industries, including manufacturing and export businesses.

Job Types: Full-time, Permanent, Fresher
Pay: ₹10,000.00 - ₹20,000.00 per month
Benefits:
Leave encashment
Paid sick time
Paid time off
Provident Fund
Shift: Fixed shift
Work Location: In person
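The database side of this role centres on shaping data for reports with views and keeping the underlying queries fast with indexes. A small sketch of that pattern using Python's built-in sqlite3 in place of the role's SQL Server/T-SQL environment; the table, view, and column names are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Hypothetical sales table with an index on the column reports filter by.
CREATE TABLE sales (invoice_no TEXT, customer TEXT, amount REAL);
CREATE INDEX idx_sales_customer ON sales (customer);

-- A view that shapes the data for the report layer: the RDLC dataset
-- (or Crystal sub-report) would bind to this instead of the raw table.
CREATE VIEW v_customer_totals AS
  SELECT customer, SUM(amount) AS total, COUNT(*) AS invoices
  FROM sales GROUP BY customer;

INSERT INTO sales VALUES ('INV-1', 'Acme', 500.0), ('INV-2', 'Acme', 250.0),
                         ('INV-3', 'Globex', 900.0);
""")

rows = conn.execute(
    "SELECT customer, total, invoices FROM v_customer_totals ORDER BY customer"
).fetchall()
print(rows)
```

Keeping the aggregation in a view means the report definition stays simple, and query tuning (indexes, query plans) can happen in the database without touching the report.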

Posted 2 weeks ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Title: AI Engineer Intern (Generative AI)
Location: Viman Nagar, Pune
Type: Internship
Duration: 6 months
Stipend level: 7-10K

Role Overview
As a Generative AI Engineer Intern, you will work on projects involving text, audio, image, and video generation. You will explore and train generative AI models, design creative workflows, and contribute to the development of innovative tools and solutions. This internship is an excellent opportunity to gain hands-on experience in one of the most exciting areas of AI and work on impactful projects.

Key Responsibilities:
AI Model Development & Training:
Train and fine-tune generative models such as GPT, Stable Diffusion, DALL·E, and similar architectures for various creative outputs.
Experiment with cutting-edge techniques in model training and optimization for specific use cases in text, audio, image, and video generation.
Content Generation & Enhancement:
Develop tools and pipelines for generating high-quality text, images, videos, and audio assets.
Assist in creating datasets for model training and testing, ensuring data diversity and quality.
Implement AI-driven techniques for enhancing and editing generated outputs.
Research & Development:
Explore state-of-the-art research in generative AI, keeping up with trends and advancements.
Contribute to internal discussions and brainstorming sessions for new product ideas and features.
Integration & Deployment:
Collaborate with the engineering team to integrate generative AI models into user-facing applications.
Test, debug, and improve generative AI systems to ensure smooth deployment and usability.
Collaboration:
Work closely with designers, developers, and product managers to understand requirements and deliver tailored generative AI solutions.
Provide technical insights and documentation for projects.

Qualifications:
Technical Skills:
Strong programming skills in Python and familiarity with libraries such as TensorFlow, PyTorch, or Keras.
Knowledge of NLP, CV, and generative model architectures (e.g., GANs, Transformers, Diffusion Models).
Experience with text, audio, image, or video generation tools (e.g., GPT, LLaMA, DALL·E, Stable Diffusion, etc.).
Familiarity with dataset preprocessing and augmentation techniques.
Basic knowledge of API development in either Node.js or Python.
Basic knowledge of version control (Git, SVN).
Soft Skills:
Analytical mindset with a passion for problem-solving and innovation.
Excellent communication skills and a team-oriented approach.
Enthusiasm for generative AI and a creative spirit to push the limits of technology.
Education:
Bachelor of Technology, Bachelor of Engineering, Bachelor of Science, Master of Science, or Master of Engineering in Data Science, Artificial Intelligence, or Machine Learning.

Preferred Qualifications:
Experience with model fine-tuning or custom model training.
Familiarity with existing generation frameworks (e.g., Midjourney, RunwayML, Luma, ElevenLabs, etc.).
Knowledge of API development and integration for deploying AI models.

What We Offer
Hands-on experience working on exciting AI projects.
Mentorship from experienced professionals in AI and technology.
Opportunity to contribute to innovative solutions and gain industry exposure.
A creative and collaborative work environment.
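One small but concrete example of the dataset preprocessing mentioned above is exact-duplicate removal before training data is assembled: duplicated samples skew model training, and hashing makes the check cheap at scale. A minimal stdlib sketch with made-up samples:

```python
import hashlib

# Hypothetical training samples; note the third is an exact duplicate
# of the first, while the second differs only in capitalization.
samples = [
    "add milk to the shopping list",
    "Add milk to the shopping list",
    "add milk to the shopping list",
    "play my workout playlist",
]

seen, deduped = set(), []
for text in samples:
    # Hash each sample; identical text yields an identical digest.
    digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
    if digest not in seen:
        seen.add(digest)
        deduped.append(text)

print(len(deduped))
```

Near-duplicate detection (normalizing case, whitespace, or using shingling/MinHash) is the natural next step; exact hashing only catches byte-identical repeats, which is why the case-variant sample survives here.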

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Thane, Maharashtra, India

On-site

Shape the Future of Data Analytics at FabricAir
At FabricAir, we design and deliver innovative textile-based air distribution solutions to customers in over 100 countries. As we accelerate our digital transformation, we are seeking a Business Intelligence Developer skilled in Power BI to elevate our data analytics capabilities and support data-driven decision-making across our international operations. Join our expanding Business & IT Solutions team and work closely with cross-functional departments worldwide to unlock the value of our business data. You will play an essential role in guiding the company through data visualisation, dashboard creation, and advanced analytics projects that have real business impact.

Key Responsibilities – Power BI Developer for Global Manufacturing Data
Design, build and maintain Power BI datasets, dashboards and paginated reports that serve Sales, Finance, Operations and Marketing.
Own the Power BI architecture roadmap: drive workspace topology, shared/certified datasets, deployment pipelines and capacity planning so dashboards scale cleanly across teams.
Partner with Data Engineering to validate pipeline outputs, define data contracts and ensure trusted, well-documented sources.
Optimise dataset refresh schedules, incremental loading, partitioning, and advanced RLS/OLS to keep enterprise dashboards fast, secure and consistent.
Gather requirements, prototype quickly, iterate to production with Git-based CI/CD, and maintain clear documentation.
Monitor usage & performance metrics; apply forecasting, anomaly detection and AI-augmented insights where they add value.
Champion BI standards, governance and self-service analytics across the organisation, running demos, brown-bags, and code reviews.

Who You Are – Data Enthusiast with an Analytical Mindset
We are looking for a detail-oriented, proactive professional with a passion for business intelligence and data analytics. You have a collaborative approach and are eager to drive business value through data-driven solutions. Your curiosity, problem-solving skills, and structured thinking help you deliver impactful reports and actionable insights.
5+ years building enterprise-grade Power BI solutions, including:
end-to-end development in Power BI Desktop & Service
advanced DAX / Tabular Editor optimisation
deployment pipelines, workspaces, gateways & capacity management
dataflows / Fabric Lakehouse connections, incremental refresh, RLS/OLS
Architect-level vision for BI platform design: shared semantic models, certified datasets, workspace strategy, version control (Git), and governance that scales to multiple departments.
Strong SQL for slicing, optimisation and troubleshooting across cloud/on-prem sources.
Proven ability to translate business questions into scalable BI architecture, automated datasets, and actionable dashboards.
Excellent communication in a global, cross-functional setting; proactive, detail-oriented, and comfortable leading best-practice workshops.

Desirable Skills – Added Advantage in Power BI and Analytics
Experience with dbt, Great Expectations or similar frameworks for data-quality validation.
Familiarity with Microsoft Fabric (OneLake, Real-Time Intelligence) or other lakehouse architectures.
Prior work in industrial / manufacturing domains.
Experience with AI tools used for data analysis.

What We Offer – Professional Growth at FabricAir
Become part of a global company with a mission to innovate and deliver best-in-class solutions. At FabricAir, you’ll thrive in an environment that champions teamwork, continuous learning, and the use of modern technologies. We offer responsibility, flexibility, and a unique chance to influence our digital and data landscape while working with friendly, skilled colleagues from around the world.

About FabricAir
With headquarters in Denmark and operations in 16 countries, FabricAir develops leading textile-based air distribution systems for diverse industries, from food production and retail to cleanrooms and sports facilities. We’re committed to performance, energy efficiency, and empowering our people to make a difference. For more on our solutions and culture: www.fabricair.com

Ready to Apply?
If this sounds like your next career move, please apply via the “Apply for this job” button. For data privacy reasons, this is our preferred application method. If you have any questions, you are welcome to reach out to our Software Development Team Lead, Avinash Pawar, at apa@fabricair.com. We are reviewing applications on a rolling basis and look forward to connecting with you!

FabricAir is a global fabric-based air distribution solution manufacturer that originated in Denmark in 1973. Our company has been evolving ever since, opening 16 subsidiaries worldwide with a vast network of distributors and reaching clients in over 120 countries. We value our employees because they are the reason we excel in our industry and contribute to the growth of the company.

Posted 3 weeks ago

Apply

4.0 years

12 - 20 Lacs

Pune, Maharashtra, India

On-site

About Improzo
At Improzo (Improve + Zoe; meaning Life in Greek), we believe in improving life by empowering our customers. Founded by seasoned industry leaders, we are laser-focused on delivering quality-led commercial analytical solutions to our clients. Our dedicated team of experts in commercial data, technology, and operations has been evolving and learning together since our inception. Here, you won't find yourself confined to a cubicle; instead, you'll be navigating open waters, collaborating with brilliant minds to shape the future. You will work with leading Life Sciences clients, seasoned leaders and carefully chosen peers like you!

People are at the heart of our success, so we have defined our CARE values framework with a lot of effort, and we use it as our guiding light in everything we do. We CARE!
Customer-Centric: Client success is our success. Prioritize customer needs and outcomes in every action.
Adaptive: Agile and innovative, with a growth mindset. Pursue bold and disruptive avenues that push the boundaries of possibilities.
Respect: Deep respect for our clients & colleagues. Foster a culture of collaboration and act with honesty, transparency, and ethical responsibility.
Execution: Laser-focused on quality-led execution; we deliver! Strive for the highest quality in our services, solutions, and customer experiences.

About The Role
We're looking for a Data Scientist in Pune to drive insights for pharma clients using advanced ML, Gen AI, and LLMs on complex healthcare data. You'll optimize pharma commercial strategies (forecasting, marketing, SFE) and improve patient outcomes (journey mapping, adherence, RWE).

Key Responsibilities
Data Exploration & Problem Framing
Proactively engage with client/business stakeholders (e.g., Sales, Marketing, Market Access, Commercial Operations, Medical Affairs, Patient Advocacy teams) to deeply understand their challenges and strategic objectives.
Explore, clean, and prepare large, complex, and sometimes messy datasets from various sources, including but not limited to: sales data, prescription data, claims data, Electronic Health Records (EHRs), patient support program data, CRM data, and real-world evidence (RWE) datasets.
Translate ambiguous business problems into well-defined data science questions and develop appropriate analytical frameworks.
Advanced Analytics & Model Development
Design, develop, validate, and deploy robust statistical models and machine learning algorithms (e.g., predictive models, classification, clustering, time series analysis, causal inference, natural language processing).
Develop models for sales forecasting, marketing mix optimization, customer segmentation (HCPs, payers, pharmacies), sales force effectiveness (SFE) analysis, incentive compensation modelling, and market access analytics (e.g., payer landscape, formulary impact).
Analyze promotional effectiveness and patient persistency/adherence.
Build models for patient journey mapping, patient segmentation for personalized interventions, treatment adherence prediction, disease progression modelling, and identifying drivers of patient outcomes from RWE.
Contribute to understanding patient behavior, unmet needs, and the impact of interventions on patient health.
Generative AI & LLM Solutions
Extract insights from unstructured text data (e.g., clinical notes, scientific literature, sales call transcripts, patient forum discussions).
Summarize complex medical or commercial documents.
Automate content generation for internal use (e.g., draft reports, competitive intelligence summaries).
Enhance data augmentation or synthetic data generation for model training.
Develop intelligent search or Q&A systems for commercial or medical inquiries.
Apply techniques like prompt engineering, fine-tuning of LLMs, and retrieval-augmented generation (RAG).
Insight Generation & Storytelling
Transform complex analytical findings into clear, concise, and compelling narratives and actionable recommendations for both technical and non-technical audiences.
Create impactful data visualizations, dashboards, and presentations using tools like Tableau, Power BI, or Python/R/Alteryx visualization libraries.
Collaboration & Project Lifecycle Management
Collaborate effectively with cross-functional teams including product managers, data engineers, software developers, and other data scientists.
Support the entire data science lifecycle, from conceptualization and data acquisition to model development, deployment (MLOps), and ongoing monitoring in production environments.

Qualifications
Master's or Ph.D. in Data Science, Statistics, Computer Science, Applied Mathematics, Economics, Bioinformatics, Epidemiology, or a related quantitative field.
4+ years of progressive experience as a Data Scientist, with demonstrated success in applying advanced analytics to solve business problems, preferably within the healthcare, pharmaceutical, or life sciences industry, using pharma datasets extensively (e.g., sales data from IQVIA, Symphony, Komodo, etc.; CRM data from Veeva, OCE, etc.).
Must-have: Solid understanding of pharmaceutical commercial operations (e.g., sales force effectiveness, marketing, market access, CRM).
Must-have: Experience working with real-world patient data (e.g., claims, EHR, pharmacy data, patient registries) and understanding of patient journeys.
Strong programming skills in Python (e.g., Pandas, NumPy, Scikit-learn, TensorFlow/PyTorch) and/or R for data manipulation, statistical analysis, and machine learning.
Expertise in SQL for data extraction, manipulation, and analysis from relational databases.
Experience with machine learning frameworks and libraries.
Proficiency in data visualization tools (e.g., Tableau, Power BI) and/or visualization libraries (e.g., Matplotlib, Seaborn, Plotly).
Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and big data technologies (e.g., Spark, Hadoop) is a significant advantage.
Specific experience with Natural Language Processing (NLP) techniques, Generative AI models (e.g., Transformers, diffusion models), Large Language Models (LLMs), and prompt engineering is highly desirable.
Experience with fine-tuning LLMs, working with models from Hugging Face, or utilizing major LLM APIs (e.g., OpenAI, Anthropic, Google).
Experience with MLOps practices and tools (e.g., MLflow, Kubeflow, Docker, Kubernetes).
Knowledge of pharmaceutical or biotech industry regulations and compliance requirements such as HIPAA, CCPA, SOC, etc.
Excellent communication, presentation, and interpersonal skills, with the ability to effectively interact with both technical and non-technical stakeholders at all levels.
Attention to detail, with a bias for quality and client centricity.
Ability to work independently and as part of a cross-functional team.
Strong leadership, mentoring, and coaching skills.

Benefits
Competitive salary and benefits package.
Opportunity to work on cutting-edge analytics projects transforming the life sciences industry.
Collaborative and supportive work environment.
Opportunities for professional development and growth.

Skills: data manipulation, analytics, LLM, generative AI, commercial pharma, MLOps, SQL, Python, natural language processing, data visualization, models, R, machine learning, statistical analysis, GenAI, data, patient outcomes
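The posting lists patient persistency/adherence analysis among the modelling tasks. One standard adherence metric derived from pharmacy claims is proportion of days covered (PDC): the share of days in an observation window on which the patient had medication on hand. A minimal stdlib sketch with made-up fill records:

```python
from datetime import date, timedelta

# Hypothetical pharmacy fills: (fill date, days of medication supplied).
fills = [(date(2024, 1, 1), 30), (date(2024, 2, 5), 30)]
window_start, window_end = date(2024, 1, 1), date(2024, 3, 1)

# Mark each calendar day covered by at least one fill, clipped to the window.
covered = set()
for fill_date, days in fills:
    for offset in range(days):
        day = fill_date + timedelta(days=offset)
        if window_start <= day <= window_end:
            covered.add(day)

window_days = (window_end - window_start).days + 1
pdc = len(covered) / window_days
print(round(pdc, 3))
```

Using a set of covered days (rather than summing days supplied) correctly handles overlapping fills, which is why PDC is usually preferred over the simpler medication possession ratio.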

Posted 3 weeks ago

Apply

0.0 - 2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About The Role
Grade Level (for internal use): 07

Department overview
S&P Global EBS is a specialist provider of managed and installed data services, delivering world-class data, technology and service solutions focused on complex and evolving Index and ETF data needs. Our services are used in the front, middle and back office by the world’s leading Investment Banks, Asset Managers, Fund Administrators, Prime Brokers and Hedge Funds.

Position summary
The successful candidate will play a key part in maintaining the smooth running of the day-to-day operations of the EBS data offering and will work with cross-functional teams to identify solutions in problem areas and remove operational inefficiencies. On the data enhancement side, the candidate will use advanced Excel, VBA and SQL skills to translate operational requirements into technical solutions and tools. The team operates 24/7, so interested candidates will be required to work in all shifts, including US hours.

Duties & accountabilities
The new hire needs to be well-versed in index concepts and their calculations.
Validate the accuracy of data received from various sources. Ensure that this information is stored in databases and is accurately reflected on products by creating or running data quality checks and standards.
Ensure the quality and time-efficient production of financial information for the respective products.
Respond to data queries from both internal and external clients and provide support to stakeholders.
Monitor and research market events in order to anticipate changes. Ensure a deep understanding of the markets and business events.
Work with and involve cross-functional teams to provide Root Cause Analysis and identify solutions in problem areas.
Consolidate information around the dataset, leading to the establishment of best practices.
Perform automated/semi-automated checks to ensure production of high-quality content.
Ensure MOWs are documented and maintained.
Coordinate and delegate work as per team requirements.
Identify data quality improvement projects and good design practices.
Intermediate Excel and SQL skills, including being able to write basic SQL queries.
Proven ability to utilize the data and systems tools available.
Good verbal, written, and presentation skills.

Education And Experience
MBA (Finance) / Post Graduate or equivalent, ideally in Finance. The candidate should have a good understanding of equities & capital markets. Specific knowledge of Index/ETF and Corporate Actions is highly preferred.
0-2 years of business operations experience; must be flexible in addressing dynamic business needs.

Commercial Awareness
Must have a strong interest in finance and be up to date with current global financial market news.

Management requirements: N/A

Personal competencies
Personal Impact: The candidate must be a self-starter, able to take on multiple tasks at a time, hardworking, and efficient.
Communication: Must demonstrate superior communication skills and is expected to interact professionally across business units within the company.
Teamwork: Being a team player is a vital aspect of the position, and it is expected that the candidate will work well individually as well as in a global team environment.

About S&P Global Market Intelligence
At S&P Global Market Intelligence, a division of S&P Global, we understand the importance of accurate, deep and insightful information. Our team of experts delivers unrivaled insights and leading data and technology solutions, partnering with customers to expand their perspective, operate with confidence, and make decisions with conviction. For more information, visit www.spglobal.com/marketintelligence.

What’s In It For You?
Our Purpose
Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People
We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values
Integrity, Discovery, Partnership
At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits
We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you and your career need to thrive at S&P Global.

Our Benefits Include
Health & Wellness: Health care coverage designed for the mind and body.
Flexible Downtime: Generous time off helps keep you energized for your time on.
Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.
For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries

Global Hiring And Opportunity At S&P Global
At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Recruitment Fraud Alert
If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here.
Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf OPRTON203 - Entry Professional (EEO Job Group) Job ID: 316385 Posted On: 2025-07-09 Location: Bangalore, Karnataka, India

Posted 3 weeks ago


0 years

0 Lacs

Gurugram, Haryana, India

On-site

Key Responsibilities
Dataset Creation and Management (with Sales & Conversational Focus): Lead the design, development, and curation of high-quality datasets specifically tailored for building robust AI models. This includes data collection strategies, data cleaning, feature engineering, and ensuring data integrity and suitability for various modeling tasks. Crucially, this will involve creating and annotating conversational datasets for sales interactions, including intent recognition, entity extraction, dialogue states, and effective sales responses.
Model Development & Deployment: Architect, develop, train, and deploy advanced AI/Machine Learning models for various applications. This will involve selecting appropriate model architectures (e.g., deep learning, traditional ML), optimizing model performance, and ensuring scalability and reliability in production environments.
Conversational Sales AI Development (Core Focus): Design, build, and optimize a sophisticated conversational AI agent specifically for selling health insurance products. This includes:
Developing dialogue management systems that guide users through the sales funnel.
Implementing intent recognition and entity extraction models tailored to health insurance queries and sales interactions.
Crafting compelling and compliant sales responses that address customer needs, handle objections, and provide accurate product information.
Ensuring the AI can effectively qualify leads, explain complex policy details, and guide users towards conversion.
Prompt Engineering & Dialogue Flow Optimization: Design and optimize prompts for large language models (LLMs) to achieve desired outcomes for specific business use cases, with a strong emphasis on natural, persuasive, and effective sales conversations. This includes developing robust strategies for managing complex conversation flows, ensuring seamless transitions, and maintaining context throughout the interaction. 
Evaluation Criteria & Validation: Define and implement rigorous evaluation criteria across the entire AI pipeline, from dataset creation to model validation. This includes establishing metrics for data quality, model performance, bias detection, and ensuring robust validation strategies to build trustworthy and reliable AI systems. For conversational AI, this will also include metrics for dialogue success, user satisfaction, conversion rates, and adherence to sales scripts/compliance.
Data Science & Engineering Excellence: Apply strong data science principles to analyze complex data, extract insights, and formulate data-driven solutions. Demonstrate proficiency in data engineering practices, including building scalable data pipelines, managing data infrastructure, and ensuring efficient data access for AI development.
Research & Innovation: Stay abreast of the latest advancements in AI, machine learning, and fintech/insurance domains, actively exploring and integrating new technologies and methodologies to enhance our AI capabilities, particularly in the realm of conversational sales.
(ref:hirist.tech)
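The intent-recognition step described above can be illustrated with a deliberately tiny, stdlib-only sketch. A production system would use a fine-tuned transformer or LLM as the posting implies; the intents and keywords below are invented for illustration:

```python
# Hypothetical intents and trigger phrases for an insurance sales assistant.
INTENT_KEYWORDS = {
    "get_quote": {"quote", "price", "premium", "cost"},
    "coverage_question": {"cover", "coverage", "include", "benefit"},
    "objection": {"expensive", "cheaper", "not sure", "think about"},
}

def classify_intent(utterance: str) -> str:
    """Score each intent by keyword hits; fall back when nothing matches."""
    text = utterance.lower()
    scores = {intent: sum(kw in text for kw in kws)
              for intent, kws in INTENT_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "fallback"

print(classify_intent("How much would the premium cost for my family?"))
```

The annotated conversational datasets the role calls for would supply (utterance, intent, entities) triples to train a real classifier; the keyword table here only stands in for that model.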

Posted 3 weeks ago


0 years

15 - 35 Lacs

Gurugram, Haryana, India

On-site

We seek a forward-thinking Senior AI Engineer with strong domain knowledge of model development and hands-on experience transforming and fine-tuning existing LLMs. The candidate will work directly with the Founder and the CTO in developing a new-age product in the Insurance space, which has the potential to generate $10B in revenues at scale (a first-of-its-kind, but much-needed, innovation).
Key Responsibilities
Dataset Creation and Management (with Sales & Conversational Focus): Lead the design, development, and curation of high-quality datasets specifically tailored for building robust AI models. This includes data collection strategies, data cleaning, feature engineering, and ensuring data integrity and suitability for various modeling tasks. Crucially, this will involve creating and annotating conversational datasets for sales interactions, including intent recognition, entity extraction, dialogue states, and effective sales responses.
Model Development & Deployment: Architect, develop, train, and deploy advanced AI/Machine Learning models for various applications. This will involve selecting appropriate model architectures (e.g., deep learning, traditional ML), optimizing model performance, and ensuring scalability and reliability in production environments.
Conversational Sales AI Development (Core Focus): Design, build, and optimize a sophisticated conversational AI agent specifically for selling health insurance products. This includes:
Developing dialogue management systems that guide users through the sales funnel.
Implementing intent recognition and entity extraction models tailored to health insurance queries and sales interactions.
Crafting compelling and compliant sales responses that address customer needs, handle objections, and provide accurate product information.
Ensuring the AI can effectively qualify leads, explain complex policy details, and guide users towards conversion. 
Prompt Engineering & Dialogue Flow Optimization: Design and optimize prompts for large language models (LLMs) to achieve desired outcomes for specific business use cases, with a strong emphasis on natural, persuasive, and effective sales conversations. This includes developing robust strategies for managing complex conversation flows, ensuring seamless transitions, and maintaining context throughout the interaction.
Evaluation Criteria & Validation: Define and implement rigorous evaluation criteria across the entire AI pipeline, from dataset creation to model validation. This includes establishing metrics for data quality, model performance, bias detection, and ensuring robust validation strategies to build trustworthy and reliable AI systems. For conversational AI, this will also include metrics for dialogue success, user satisfaction, conversion rates, and adherence to sales scripts/compliance.
Data Science & Engineering Excellence: Apply strong data science principles to analyze complex data, extract insights, and formulate data-driven solutions. Demonstrate proficiency in data engineering practices, including building scalable data pipelines, managing data infrastructure, and ensuring efficient data access for AI development.
Research & Innovation: Stay abreast of the latest advancements in AI, machine learning, and fintech/insurance domains, actively exploring and integrating new technologies and methodologies to enhance our AI capabilities, particularly in the realm of conversational sales.

Posted 3 weeks ago


3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Summary: We are seeking a highly skilled Computer Vision & AI Engineer to join our dynamic team. The ideal candidate will have expertise in advanced computer vision and AI development, along with a passion for creating innovative and scalable software products and systems. In this role, you will be responsible for designing, developing, and deploying algorithms within software products specifically tailored for the energy sector.
How You’ll Make an Impact (key responsibilities of role)
Design, implement and deploy computer vision and AI solutions within software products and solutions.
Expand both personal and team expertise in 2D and 3D data analytics.
Keep up to date with the latest trends in computer vision, machine learning, and data science, while identifying practical applications of these technologies in the energy sector.
Develop and maintain the necessary data pipelines and infrastructure for AI applications.
Perform model evaluation, tuning, and validation to improve accuracy and efficiency.
Write clean, maintainable, and efficient code in accordance with industry best practices.
What You Bring (required qualifications and skills)
Bachelor’s or master’s degree in Computer Science, Artificial Intelligence, Data Science, or a related field.
3+ years of experience in computer vision / AI solution development.
Proficiency in Python, OpenCV and machine learning frameworks such as TensorFlow or PyTorch.
Strong analytical skills with extensive expertise in modern computer vision techniques, including object detection, semantic and instance segmentation, feature extraction and 3D reconstruction.
Proficiency in the design, implementation, and evaluation of algorithms from a specific problem description.
Familiarity with computer vision and machine learning for large datasets.
Knowledge of DevOps tools and CI/CD pipelines.
Excellent problem-solving abilities, capable of working both independently and collaboratively within a team. 
Strong communication skills in English.
Demonstrates enthusiasm, creativity in problem-solving, critical thinking, and effective communication in a distributed team environment.
Preferred Qualifications:
Experience with cloud platforms such as AWS, Azure, or Google Cloud.
Experience with MLOps and model deployment in production environments.
Experience with large datasets, 3D data processing and deep learning on 3D data.
Familiarity with containerization technologies like Docker and Kubernetes.
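The model evaluation duty above typically involves box-overlap metrics for object detection. Intersection-over-union (IoU) is the standard one; this stdlib-only sketch uses (x1, y1, x2, y2) corner boxes with made-up coordinates:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle: clamp to zero when the boxes do not overlap.
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Two 10x10 boxes overlapping in a 5x5 corner: IoU = 25 / 175.
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))
```

Detection benchmarks typically count a prediction as correct when IoU with a ground-truth box exceeds a threshold such as 0.5.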

Posted 3 weeks ago


0.0 - 2.0 years

2 - 5 Lacs

Pune, Maharashtra

Remote

AI/ML Engineer – Junior
Location: Pune, Maharashtra
Experience: 1–2 years
Employment: Full-time
Role Overview: Join our AI/ML team to build and deploy generative and traditional ML models—from ideation and data preparation to production pipelines and performance optimization. You’ll solve real problems, handle data end-to-end, navigate the AI development lifecycle, and contribute to both model innovation and operational excellence.
Key Responsibilities:
● Full AI/ML Lifecycle: Engage from problem scoping through data collection, modeling, deployment, monitoring, and iteration.
● Generative & ML Models: Build and fine-tune transformer-based LLMs (such as GPT, BERT), both commercial and local, as well as GANs and diffusion models; also develop traditional ML models for classification, regression, etc. Experience with DL models for computer vision such as CNN, R-CNN, etc., is a plus.
● Data Engineering: Clean, label, preprocess, augment, and version datasets. Build ETL pipelines and features for model training. Experience with libraries like pandas, numpy, nltk, etc.
● Model Deployment & MLOps: Containerize models (Docker), deploy APIs/microservices, implement CI/CD for ML, monitor performance and drift.
● Troubleshooting & Optimization: Analyze errors; handle overfitting/underfitting, hallucinations, class imbalance, and latency concerns; tune model performance.
● Collaboration: Partner with project managers, DevOps, backend engineers, and senior ML staff to integrate AI features.
● Innovation & Research: Stay current with GenAI (prompt techniques, RAG, LangChain, LLM models), test new architectures, contribute insights.
● Documentation: Maintain reproducible experiments, write clear docs, follow best practices.
Required Skills:
● Bachelor’s in CS, AI, Data Science, or related field.
● 1–2 years in ML/AI roles; hands-on with both generative and traditional models.
● Proficient in Python and ML frameworks (PyTorch, TensorFlow, Hugging Face, scikit-learn). 
● Strong understanding of the AI project lifecycle and MLOps principles.
● Experience in data workflows: preprocessing, feature engineering, dataset management.
● Familiarity with Docker, REST APIs, Git, and cloud platforms (AWS/GCP/Azure).
● Sharp analytical and problem-solving skills, with the ability to debug and iterate models.
● Excellent communication and teamwork abilities.
Preferred Skills:
● Projects involving ChatGPT, LLaMA, Stable Diffusion or similar models.
● Experience with prompt engineering, RAG pipelines, vector DBs (FAISS, Pinecone, Weaviate).
● Exposure to CI/CD pipelines and ML metadata/versioning.
● GitHub portfolio or publications in generative AI.
● Awareness of ethics, bias mitigation, privacy, and compliance in AI.
Job Type: Full-time
Pay: ₹200,000.00 - ₹500,000.00 per year
Benefits: Provident Fund
Work Location: Hybrid remote in Pune, Maharashtra
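One of the troubleshooting tasks listed above is handling class imbalance. A common first step is inverse-frequency class weights (the heuristic scikit-learn calls "balanced"); this stdlib-only sketch uses invented labels:

```python
from collections import Counter

def balanced_class_weights(labels):
    """weight[c] = n_samples / (n_classes * count[c]) -- rare classes weigh more."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * cnt) for c, cnt in counts.items()}

# Hypothetical imbalanced dataset: 10 positives vs 90 negatives.
labels = ["fraud"] * 10 + ["ok"] * 90
print(balanced_class_weights(labels))  # the rare class gets the larger weight
```

These weights would be passed to a loss function (e.g., a weighted cross-entropy) so that errors on the minority class cost proportionally more during training.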

Posted 3 weeks ago


5.0 years

0 Lacs

New Delhi, Delhi, India

On-site

Commercial Finance Analyst
· Location: Delhi, India (Full-Time On-Site)
· Reporting to: FP&A Manager (UK)
· Salary: Competitive/Negotiable DOE
About us
In line with Surya Foods' current growth trajectory and the vision to be a USD 500m company by 2030, we are actively seeking to strengthen our 20-25 strong finance function in the UK and India. To support the business needs we are looking for innovative and ambitious accounting and finance professionals with experience in the FMCG sector. Our finance function is responsible for accounting, tax compliance, reporting, financial planning & analysis, and treasury management in the UK, Romania and India. The ERP systems we use are off the shelf and integrated with other bespoke systems across the rest of the business. The business is seeking to set up a back office in India to support its growth as well as improve processes, with the aim of efficiently closing our month-end accounts within the month-end deadline. As part of our ambitious growth strategy, Surya Foods is looking for a Commercial Finance Analyst to join our expanding team. This role is key to delivering financial insight and analysis to support strategic decision-making in a fast-paced, international FMCG environment.
About the Role
This is a pivotal opportunity for an analytically strong, commercially aware finance professional to support all aspects of financial planning, forecasting, and business performance improvement. You’ll work closely with the FP&A team and commercial stakeholders to deliver clear, actionable insights and develop financial models that support our strategic goals. 
Key Responsibilities
· Support month-end close through reporting, variance analysis, and management commentary
· Assist in developing forecasting, budgeting, and strategic planning models (including 5-year business plans)
· Partner with FP&A and Commercial teams to reconcile retailer investments and track commercial performance
· Produce accurate, timely reports to drive decision-making across business units
· Engage with stakeholders to understand financial drivers and align support with business priorities
· Deliver ad hoc analysis and participate in cross-functional projects
· Contribute to the standardisation and improvement of financial tools and processes
· Maintain high accuracy in planning/reporting systems
· Develop robust financial models for scenario planning and forecasting accuracy
· Champion best practices in financial analysis and promote a performance-driven culture
Skills & Competencies
· 1–3 years' experience in a Finance, FP&A or Commercial Finance role
· Advanced Excel skills and solid understanding of financial systems
· Strong analytical mindset, with attention to detail and data integrity
· Ability to communicate complex financial insights clearly to non-financial teams
· Proactive, organised, and able to manage competing deadlines in a high-growth environment
· Collaborative, with strong communication skills across teams and cultures
· Familiarity with accounting principles and large dataset analysis is a plus
Qualifications
· Postgraduate degree in Finance, Accounting, Economics, or a quantitative field
· CIMA/ACCA/CA part-qualified or working toward qualification preferred
If you’re ready to contribute to a growing international business where your input will shape the financial performance of a high-energy FMCG brand, we’d love to hear from you.

Posted 3 weeks ago


12.0 years

0 Lacs

Chennai

On-site

The Data Engineering Technology Lead is a senior-level position responsible for establishing and implementing new or revised data platform ecosystems and programs in coordination with the Technology team. The overall objective of this role is to lead the data engineering team in implementing the business requirements.
Responsibilities:
Design, build and maintain batch or real-time data pipelines in the data platform.
Maintain and optimize the data infrastructure required for accurate extraction, transformation, and loading of data from a wide variety of data sources.
Develop ETL (extract, transform, load) processes to help extract and manipulate data from multiple sources.
Automate data workflows such as data ingestion, aggregation, and ETL processing.
Prepare raw data in Data Warehouses into a consumable dataset for both technical and non-technical stakeholders.
Partner with data scientists and functional leaders in sales, marketing, and product to deploy machine learning models in production.
Build, maintain, and deploy data products for analytics and data science teams on the data platform.
Ensure data accuracy, integrity, privacy, security, and compliance through quality control procedures.
Monitor data systems performance and implement optimization strategies.
Leverage data controls to maintain data privacy, security, compliance, and quality for allocated areas of ownership. 
Partner with multiple management teams to ensure appropriate integration of functions to meet goals, and identify and define necessary system enhancements to deploy new products and process improvements.
Resolve a variety of high-impact problems/projects through in-depth evaluation of complex business processes, system processes, and industry standards.
Provide expertise in the area and advanced knowledge of applications programming, and ensure application design adheres to the overall architecture blueprint.
Utilize advanced knowledge of system flow and develop standards for coding, testing, debugging, and implementation.
Develop comprehensive knowledge of how areas of business, such as architecture and infrastructure, integrate to accomplish business goals.
Provide in-depth analysis with interpretive thinking to define issues and develop innovative solutions.
Serve as advisor or coach to mid-level developers and analysts, allocating work as necessary.
Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency.
Qualifications:
12+ years of total experience and 7+ years of relevant experience in a data engineering role.
Advanced SQL skills and experience with relational databases and database design.
Strong proficiency in object-oriented languages: Python/PySpark is a must.
Experience working with data ingestion tools such as Talend and Ab Initio.
Experience working with data lakehouse architectures such as Iceberg/Starburst.
Strong proficiency in scripting languages like Bash.
Strong proficiency in data pipeline and workflow management tools. 
Strong project management and organizational skills. Excellent problem-solving, communication, and organizational skills. Proven ability to work independently and with a team. Experience in managing and implementing successful projects Ability to adjust priorities quickly as circumstances dictate Demonstrated leadership and project management skills Consistently demonstrates clear and concise written and verbal communication Education: Bachelor’s degree/University degree or equivalent experience Master’s degree preferred This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required. - Job Family Group: Technology - Job Family: Applications Development - Time Type: Full time - Most Relevant Skills Please see the requirements listed above. - Other Relevant Skills For complementary skills, please see above and/or contact the recruiter. - Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi . View Citi’s EEO Policy Statement and the Know Your Rights poster.
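The extract-transform-load duties described in this role can be sketched in miniature with the standard library alone. The source rows, column names, and target table below are invented for illustration; a real pipeline would read from upstream systems and load into a warehouse, not an in-memory SQLite database:

```python
import sqlite3

# "Extract": pretend these rows arrived from an upstream source system.
raw_rows = [
    {"id": 1, "amount": "100.5", "region": " emea "},
    {"id": 2, "amount": "bad", "region": "APAC"},
]

def transform(row):
    """Cast types and normalise text; return None for rows failing checks."""
    try:
        return (row["id"], float(row["amount"]), row["region"].strip().upper())
    except ValueError:
        return None  # quarantine unparseable rows instead of loading them

# "Load": insert only the rows that passed the quality gate.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (id INTEGER, amount REAL, region TEXT)")
clean = [t for t in (transform(r) for r in raw_rows) if t]
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", clean)

print(conn.execute("SELECT id, amount, region FROM sales").fetchall())
```

At scale the same extract/transform/load split is what tools like PySpark, Talend, or Ab Initio orchestrate, with the quarantined rows feeding the quality-control procedures the posting mentions.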

Posted 3 weeks ago


7.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Senior Data Engineer
You will have the following responsibilities:
Design
- Analyse relevant internally and externally sourced data (raw data) to generate BI and Advanced Analytics datasets based on your stakeholders' requirements
- Design data pipelines to curate sourced data into the in-house data warehouse
- Design data marts to facilitate dataset consumption out of the in-house data warehouse by business and IT internal stakeholders
- Design data model changes that align with the in-house data warehouse standards
- Define migration execution activities to move data from existing database solutions to the in-house data warehouse
Engineer
- Regular housekeeping of raw data and data stored in the in-house data warehouse
- Build and maintenance of data pipelines and data platforms
- Build data solution prototypes
- Explore ways to enhance data quality and reliability
- Identify and realize opportunities to acquire better data (raw data)
- Develop analytical tooling to better support BI and Advanced Data Analytics activities
- Execute data migration from existing databases to the in-house data warehouse
- Promote and champion data engineering standards and best-in-class methodology
You will have the following qualifications:
- Bachelor's or master's degree in Computer Science, Information Technology, Engineering or a related quantitative discipline from a top-tier university
- Certified in AWS Data Engineer Specialty or AWS Solution Architect Associate
- Snowflake SnowPro Core Certification
- 7+ years of experience in data engineering or relevant working experience in a similar role, preferably in the financial industry
- Strong understanding or practical experience of at least one common Enterprise Agile Framework, e.g., Kanban, SAFe, SCRUM, etc.
- Strong understanding of ETL, data warehouse, BI (Qlik) and Advanced Data Analytics concepts
- Deep knowledge of cloud-enabled technologies – AWS RDS and AWS Fargate, etc. 
- Experience with databases and data warehouses - Snowflake, PostgreSQL, MS SQL
- Strong programming skills with advanced knowledge of Java and/or Python
- Practical experience with ETL tools such as AWS Glue, etc.
- Strong critical-thinking, analytical and problem-solving skills
- Excellent communicator with a team-oriented approach
Location - Mumbai
Mode - Hybrid (Mon, Wed and Fri - WFO)
Experience - 7 to 10 years
Notice period - Immediate to 30 days
Timing - Mid shift (12 - 9 PM IST)

Posted 3 weeks ago


4.0 - 8.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

About the Role
We are looking for a Senior AI and Machine Learning Engineer who wants to help shape the future of Financial Services clients and our company. As part of the team, you will get to:
· Work directly with our founding team and be a core member.
· Apply the latest AI techniques to solve real problems faced by Financial Services clients.
· Design, build, and refine datasets to evaluate and continuously improve our solutions.
· Participate in strategy and product ideation sessions, influencing our product and solution roadmap.
Key Responsibilities
· Agentic AI Development: Work on building scalable multi-modal Large Language Model (LLM) based AI agents, leveraging frameworks such as LangGraph, Microsoft AutoGen, or CrewAI.
· AI Research and Innovation: Research and build innovative solutions to relevant AI problems, including Retrieval-Augmented Generation (RAG), semantic search, knowledge representation, tool usage, fine-tuning, and reasoning in LLMs.
· Technical Expertise: Proficiency in a technology stack that includes Python, LlamaIndex / LangChain, PyTorch, HuggingFace, FastAPI, Postgres, SQLAlchemy, Alembic, OpenAI, Docker, Azure, TypeScript, and React.
· LLM and NLP Experience: Hands-on experience working with LLMs, RAG architectures, Natural Language Processing (NLP), or applying Machine Learning to solve real-world problems.
· Dataset Development: Strong track record of building datasets for training and/or evaluating machine learning models.
· Customer Focus: Enjoy diving deep into the domain, understanding the problem, and focusing on delivering value to the customer.
· Adaptability: Thrive in a fast-paced environment and are excited about joining an early-stage venture.
· Model Deployment and Management: Automate model deployment, monitoring, and retraining processes.
· Collaboration and Optimization: Collaborate with data scientists to review, refactor, and optimize machine learning code. 
· Version Control and Governance: Implement version control and governance for models and data.
Required Qualifications:
· Bachelor's degree in Computer Science, Software Engineering, or a related field
· 4-8 years of experience in MLOps, DevOps, or related roles
· Strong programming experience and familiarity with Python-based deep learning frameworks like PyTorch, JAX, TensorFlow
· Strong familiarity with and knowledge of machine learning concepts
· Proficiency in cloud platforms (AWS, Azure, or GCP) and infrastructure-as-code tools like Terraform
Desired Skills:
· Experience with experiment tracking and model versioning tools
· Experience with the technology stack: Python, LlamaIndex / LangChain, PyTorch, HuggingFace, FastAPI, Postgres, SQLAlchemy, Alembic, OpenAI, Docker, Azure, TypeScript, React
· Knowledge of data pipeline orchestration tools like Apache Airflow or Prefect
· Familiarity with software testing and test automation practices
· Understanding of ethical considerations in machine learning deployments
· Strong problem-solving skills and ability to work in a fast-paced environment
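As a toy illustration of the retrieval half of the Retrieval-Augmented Generation work mentioned above, the sketch below ranks documents by bag-of-words cosine similarity. Real systems use embedding models and a vector database (as the posting's stack implies); the documents and query here are invented:

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)  # Counter returns 0 for missing terms
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    """Return the k documents most similar to the query."""
    q = Counter(query.lower().split())
    ranked = sorted(docs, key=lambda d: cosine(q, Counter(d.lower().split())),
                    reverse=True)
    return ranked[:k]

docs = ["premiums are paid monthly", "claims are settled within 30 days"]
print(retrieve("how fast are claims settled", docs))
```

In a full RAG pipeline the retrieved passages would then be placed into the LLM prompt so the model can ground its answer in them.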

Posted 3 weeks ago


2.0 - 3.0 years

0 Lacs

Delhi, India

On-site

Overview
The Quality Control (QC) Specialist will play a critical role in ensuring the accuracy, consistency and quality of annotated legal documents. This role involves reviewing annotations created by annotators, validating metadata, and ensuring that each document meets the defined standards for tagging and categorisation. The QC Specialist will collaborate closely with annotators, the tech team and project leads to maintain high standards across annotations for judgments, legal provisions, and opinions.
Key Expectations from Role
Quality Assurance and Accuracy Control
Develop and implement quality metrics to ensure high standards for each annotated document, measuring accuracy rates, consistency, and adherence to guidelines.
Identify and document any deviations from annotation standards, ensuring that each error is flagged, documented, and tracked to facilitate corrective actions.
Perform random sampling and focused audits to verify annotation quality and reduce error rates across the project, providing a continuous check on data integrity.
Annotation Review and Verification
Conduct thorough reviews of annotations of judgments, legal provisions, and opinions to verify that annotations adhere to established standards.
Cross-check each annotation for core legal elements, including case name, date, court name, statutory references, legal principles, and issues to ensure completeness.
Ensure that metadata such as case identifiers, statutes cited, and legal doctrines are accurately and consistently tagged according to project guidelines.
Validate that complex legal relationships are accurately identified in each document and recorded in a standardized manner for ease of cross-linking (e.g., judgments overruled, followed, referred, distinguished).
Validate that documents are timestamped correctly by annotators (e.g., date of pronouncement, date of publication of a notification in the official gazette, date when provisions from the document came into effect, etc.) 
Holistic Metadata Validation and Standardization
- Validate the accuracy and relevance of metadata extracted by annotators or automated tools, ensuring all extracted information meets required standards.
- Ensure uniformity across metadata fields such as case numbers, legal provisions, key legal outcomes, and cross-references.
- Confirm that all metadata follows the standardized structure and is formatted correctly for integration into the AI tool.

Segmentation and Categorization Review
- Verify that judgments and legal provisions are segmented into appropriate categories, aligning with project requirements for legal classification (e.g., civil, criminal, appeal, petition).
- Ensure accurate tagging for legal relationships, including overruled, upheld, distinguished, or followed cases, to improve search functionality within the AI tool.
- Assess each document for proper categorization, ensuring uniform application of legal classifications across the dataset to enhance the AI tool’s contextual understanding.

Feedback and Annotator Support
- Provide timely, constructive feedback to annotators on identified errors, suggesting corrective actions and reinforcing annotation standards.
- Document recurring errors or misunderstandings to guide annotator training, working with project leads to enhance guidelines and training materials.

Quality Metrics Tracking and Reporting
- Maintain detailed records of quality metrics, documenting error rates, accuracy scores, and the frequency of specific annotation issues.
- Prepare regular quality reports summarizing review outcomes, highlighting any recurring issues and proposing corrective actions.
- Develop insights from quality control data to recommend improvements in annotation processes and guidelines, enhancing overall project quality.

Collaboration and Process Improvement
- Collaborate with the Project Manager, Annotation Team Lead, and Technical Support teams to continually improve annotation standards and quality control processes.
- Share findings with cross-functional teams, contributing to the refinement of annotation guidelines, project workflows, and validation protocols.
- Actively participate in project meetings, providing insights on annotation quality, challenges, and recommendations for improvement.

Documentation and Compliance
- Maintain meticulous records of all quality checks, ensuring that each document review is traceable and that feedback is consistently documented.
- Adhere to data handling and confidentiality protocols, ensuring that all judgments, legal provisions, and opinions are reviewed in compliance with the firm’s standards for data security and privacy.
- Contribute to the development and refinement of QC documentation, including quality checklists, review protocols, and feedback guidelines.

Recommended Qualifications

Education: Bachelor’s degree in law or a related field. Additional certifications in legal research or legal analytics are advantageous.

Experience: Minimum of 2-3 years in a quality control, legal research, or data management role. Preferred: experience in a legal knowledge management team involving document tagging, at a legal publishing house, or as a legal editor. Proven familiarity with legal document management, annotation standards, and legal taxonomy.

Knowledge: Strong understanding of legal terminology and principles. Familiarity with annotation standards for legal texts. Knowledge of metadata standards and legal research databases is a plus.

Skills: Exceptional attention to detail and analytical skills. Familiarity with legal research tools and annotation software. Holistic understanding of how legal data points interconnect and form part of larger datasets. Strong communication skills for providing feedback and training to annotators. Ability to work independently and collaboratively in a fast-paced project environment.

Attributes: High degree of accuracy, ability to meet deadlines, and a commitment to maintaining the confidentiality of legal documents.
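The metadata validation and standardization duties above amount to checking each record against a fixed schema. A minimal sketch under assumed conventions follows; the required fields, the ISO date format, and the example values are all hypothetical, since the real schema would come from the project's annotation guidelines:

```python
import re

# Assumed required fields and format rule -- placeholders, not the project schema.
REQUIRED_FIELDS = ["case_number", "court_name", "date_of_pronouncement"]
DATE_PATTERN = re.compile(r"^\d{4}-\d{2}-\d{2}$")  # assumed YYYY-MM-DD convention

def validate_metadata(record):
    """Return a list of human-readable issues found in one metadata record."""
    issues = []
    for field in REQUIRED_FIELDS:
        if not record.get(field):
            issues.append(f"missing field: {field}")
    date = record.get("date_of_pronouncement", "")
    if date and not DATE_PATTERN.match(date):
        issues.append("date_of_pronouncement not in YYYY-MM-DD format")
    return issues

good = {"case_number": "CA 123/2020", "court_name": "Supreme Court of India",
        "date_of_pronouncement": "2020-07-15"}
bad = {"case_number": "", "court_name": "Delhi High Court",
       "date_of_pronouncement": "15/07/2020"}
```

Running `validate_metadata` over every annotated record, and aggregating the issue lists, would give the kind of uniformity and error-rate evidence the QC reports described above rely on.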

Posted 3 weeks ago

Apply
