
38 Textract Jobs

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

7.0 years

0 Lacs

India

On-site

Source: LinkedIn

Welcome to Radin Health, a premier healthcare IT Software-as-a-Service (SaaS) provider specializing in revolutionizing radiology workflow. Our cloud-based solutions encompass Radiology Information Systems (RIS), Picture Archiving and Communication Systems (PACS), Voice Dictation (Dictation AI), and Radiologist Workflow Management (RADIN Select), all powered by artificial intelligence. We are an innovative, forward-thinking company with AI-powered solutions. Join our team!

We are seeking a highly skilled AI Engineer with proven experience in healthcare document intelligence. You will lead the development and optimization of machine learning models for document classification and OCR-based data extraction, helping us extract structured data from prescriptions, insurance cards, consent forms, orders, and other medical records. You will be part of a fast-paced, cross-functional team working to integrate AI seamlessly into healthcare operations while maintaining the highest standards of accuracy, security, and compliance.

Key Responsibilities:
Model Development: Design, train, and deploy ML/DL models for classifying healthcare documents and extracting structured data (e.g., patient info, insurance details, physician names, procedures).
OCR Integration & Tuning: Work with OCR engines such as Tesseract, AWS Textract, or Google Vision to extract text from scanned images and PDFs, enhancing accuracy via pre- and post-processing techniques.
Document Classification: Build and refine document classification models using supervised learning and NLP techniques on real-world noisy healthcare data.
Data Labeling & Annotation: Create tools and workflows for large-scale labeling; collaborate with clinical experts and data annotators to improve model precision.
Model Evaluation & Improvement: Measure model performance using precision, recall, and F1 scores, and deploy improvements based on real-world production feedback.
Pipeline Development: Build scalable ML pipelines for training, validation, inference, and monitoring using frameworks like PyTorch, TensorFlow, and MLflow.
Collaboration: Work closely with backend engineers, product managers, and QA teams to integrate models into healthcare products and workflows.

Required Skills & Qualifications:
Bachelor's or Master's in Computer Science, AI, Data Science, or a related field.
7+ years of experience in machine learning, with at least 3 years in healthcare AI applications.
Strong experience with OCR technologies (Tesseract, AWS Textract, Azure Form Recognizer, Google Vision API).
Proven track record in training and deploying classification models for healthcare documents.
Experience with Python (NumPy, Pandas, scikit-learn), deep learning frameworks (PyTorch, TensorFlow), and NLP libraries (spaCy, Hugging Face, etc.).
Understanding of HIPAA-compliant data handling and healthcare terminology.
Familiarity with real-world document types such as referrals, AOBs, insurance cards, and physician notes.

Preferred Qualifications:
Experience working with noisy scanned documents and handwritten text.
Exposure to EHR/EMR systems and HL7/FHIR integration.
Knowledge of labeling tools like Label Studio or Prodigy.
Experience with active learning or human-in-the-loop systems.
Contributions to healthcare AI research or open-source projects.
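For illustration, here is a minimal sketch of the kind of Textract-based OCR step this role describes. The boto3 client call is real; the region, bucket, and file names are hypothetical placeholders.

```python
import boto3

# Textract client; the region is an assumption for this sketch
textract = boto3.client("textract", region_name="us-east-1")

# Synchronous text detection on a scanned document stored in S3
# (bucket and key are hypothetical placeholders)
response = textract.detect_document_text(
    Document={"S3Object": {"Bucket": "example-docs", "Name": "scans/intake-form.png"}}
)

# Collect LINE blocks into raw text for downstream classification
lines = [b["Text"] for b in response["Blocks"] if b["BlockType"] == "LINE"]
raw_text = "\n".join(lines)
print(raw_text)
```

In a production pipeline the raw text would then feed the classification and entity-extraction models the listing mentions.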

Posted 23 hours ago

Apply

3.0 years

6 - 8 Lacs

Noida

On-site

Source: Glassdoor

Position: Software Developer
Experience: 3+ years
No. of Positions: 1
Qualification: B.E / B.Tech / MCA / BCA or equivalent
Joining: Immediate or within 15 days
CTC: 6 LPA to 8 LPA
Job Location: Noida Sec-63, A Block, Uttar Pradesh

Technical Skills Required:
Core PHP (v5–8): strong hands-on experience required.
Frameworks: Laravel and CodeIgniter (both mandatory).
Databases: MySQL (strongly preferred), PostgreSQL, SQLite, MongoDB.
Frontend & Scripting: JavaScript, jQuery, AngularJS, NodeJS, HTML, CSS, XML, AJAX.
Web Services: SOAP and REST API integration.
Shell scripting in PHP and experience with tools like Ghostscript and ImageMagick.
Third-party integration: AWS S3, Textract, JavaScript plugins, etc.
CMS knowledge: Drupal or Magento (at least one).
3–5 years of experience in NodeJS development.
Frameworks: Express.js, NestJS, Koa.js.
API Development: RESTful APIs, GraphQL, WebSockets.
Authentication & Security: JWT, OAuth, Passport.js, Bcrypt.
Messaging Systems: RabbitMQ, Kafka.
Version Control: Git, GitHub, GitLab.
Cloud & Deployment: Docker, Kubernetes.
Testing & Documentation: Swagger (OpenAPI), Mocha/Chai, Jest.

Job Type: Full-time
Pay: ₹600,000.00 - ₹800,000.00 per year
Schedule: Day shift
Application Question(s): Are you an immediate joiner? (1st July 2025)
Work Location: In person
Expected Start Date: 01/07/2025

Posted 2 days ago

Apply

3.0 years

6 - 10 Lacs

Pune

On-site

Source: Glassdoor

Description: We are seeking a highly skilled UiPath Engineer with deep expertise in OCR (Optical Character Recognition) and document processing using UiPath. The ideal candidate will have a strong background in designing, developing, and deploying end-to-end automation solutions focused on intelligent document understanding and processing. This role requires a solid understanding of RPA frameworks, best practices, and integration with OCR engines to deliver scalable, high-quality automation.

Key Responsibilities:
Design, develop, test, and deploy RPA workflows leveraging UiPath to automate document processing and OCR tasks.
Implement intelligent document processing solutions, including data extraction, classification, validation, and exception handling.
Collaborate with business analysts and stakeholders to gather and analyze requirements for document automation projects.
Integrate UiPath workflows with OCR technologies (e.g., UiPath Document Understanding, ABBYY, Google Vision) and other third-party tools.
Optimize automation processes for efficiency, accuracy, and scalability.
Troubleshoot, debug, and resolve issues in RPA bots and OCR pipelines.
Develop reusable components, libraries, and frameworks to support rapid development and deployment.
Maintain documentation of design, development, and operational procedures.
Stay current with the latest trends and advancements in RPA, OCR, and AI-based document processing technologies.

Qualifications:
Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field.
Proven experience (3+ years) in RPA development using UiPath with a focus on OCR and document processing.
Strong understanding of OCR technologies and intelligent document processing frameworks.
Hands-on experience with UiPath Document Understanding or equivalent OCR tools.
Proficient in designing workflows, activities, and components in UiPath Studio and Orchestrator.
Experience with scripting languages such as Python, VB.NET, or C# is a plus.
Familiarity with AI/ML concepts applied to document classification and data extraction.
Strong problem-solving skills and attention to detail.
Excellent communication and collaboration skills.
Ability to work independently and in a team environment.

Preferred Skills:
Experience integrating UiPath with cloud OCR services (Azure Cognitive Services, AWS Textract, Google Cloud Vision).
Knowledge of business process management and process optimization.
Understanding of enterprise IT environments, security, and compliance standards.
Exposure to agentic AI, autopilots, and intelligent document processing enhancements.

Posted 3 days ago

Apply

5.0 years

5 - 9 Lacs

Bengaluru

On-site

Source: Glassdoor

Company Description:
Version 1 is a true global leader in business transformation. For nearly three decades, we have been strategically partnering with customers to go beyond expectations through the power of cutting-edge technology and expert teams. Our deep expertise in cloud, data and AI, application modernisation, and service delivery management has redefined businesses globally, helping shape the future for large public sector organisations and major global, private brands. We put users and user-centric design at the heart of everything we do, enabling our customers to exceed expectations for their customers.

Our approach is underpinned by the Version 1 Strength in Balance model: a balanced focus across our customers, our people, and a strong organisation, guided by core values that are embedded in every aspect of what we do. Our customers' need for transformation is our driving force. We enable them to accelerate their journey to their digital future with our deep expertise and innovative approach. Our global technology partners – Oracle, Microsoft, AWS, Red Hat, and Snowflake – help us tackle any challenge by leveraging a technology-driven approach.

Our people unlock our potential. They immerse themselves in the world of our customers to truly understand the unique challenges they face. Our teams, made up of highly skilled, passionate individuals, act with agility and integrity. We continually invest in their development and foster a culture that encourages collaboration and innovation.

Through our comprehensive range of Managed Service offerings, we take ownership of the tasks that distract customers from what really matters: driving their business objectives and strategic initiatives. We enable them to save time and reduce costs and risk by continually improving their technology estates, ensuring those estates drive value for their business. Go beyond simply 'keeping the lights on' and embrace the potential of our ASPIRE Managed Services, which place AI, continuous improvement, and business innovation at the heart of everything we do. From operational maintenance through to optimisation, we are trusted managed service experts with a sustainable, value-led approach and a wealth of industry sector expertise and experience.

Job Description:
Onsite role, India Delivery Centre / Belfast / Dublin
Full-time position, 3-5 days per week in office (not shift)
Department: ASPIRE Managed Services
Practice: Services Reliability Group
Vetting Requirements: N/A

Role Summary: Our ASPIRE Global Service Centre is the central hub of our Service Management operations. Beyond a traditional Service Desk, it stands as the central authority and shared service delivery hub, orchestrating all operational workflows, processes, procedures, and tooling. It is a core delivery component of the Version 1 ASPIRE Managed Services offering, which places AI, continuous improvement, and business innovation at the heart of everything Version 1 does. With a focus on supporting self-service and automation, we utilise the best digital capabilities of the ServiceNow ITSM tooling product to provide the very best experience to our customers.

We are seeking an experienced and results-driven AI and Automation Lead who will be responsible for driving the strategic implementation and operational excellence of automation and artificial intelligence initiatives for ASPIRE Managed Services. This role leads the identification, design, and deployment of intelligent automation solutions to improve operational efficiency and productivity, enhance decision-making, scale operations, and deliver a competitive advantage in the market.

Key Responsibilities:
Develop and execute the ASPIRE Managed Services automation and AI strategy aligned with SRG and EA Practice goals.
Identify opportunities for AI and automation across all Managed Service functions, tooling, and processes.
Champion a culture of innovation and continuous improvement through emerging technologies.
Lead end-to-end delivery of automation and AI projects, including planning, development, testing, deployment, and monitoring.
Establish governance frameworks and best practices for AI and automation initiatives.
Oversee the design and implementation of AI models, RPA (Robotic Process Automation), and intelligent workflows.
Ensure solutions are scalable, secure, and compliant with data privacy and ethical standards.
Evaluate and select appropriate tools, platforms, and vendors.
Collaborate with business units to understand pain points and co-create solutions.
Communicate complex technical concepts to non-technical stakeholders.
Monitor performance and continuously optimise solutions.
Deliver measurable business value through automation and AI.
Develop internal capabilities and share knowledge across teams.

Skills, Education & Qualifications:
Proven experience (5+ years) leading automation and AI projects in a complex, multi-client or enterprise-scale managed services environment, with demonstrable delivery of measurable business outcomes.
Strong technical expertise in Artificial Intelligence and Machine Learning, including: supervised/unsupervised learning, deep learning, and natural language processing (NLP); model development using frameworks such as TensorFlow, PyTorch, or scikit-learn; and experience deploying AI models in production environments using MLOps principles (e.g., MLflow, Azure ML, SageMaker).
Hands-on experience with automation and orchestration technologies, such as: Robotic Process Automation (RPA) platforms (UiPath, Blue Prism, Automation Anywhere); IT process automation (ITPA) tools (ServiceNow Workflow/Orchestration, Microsoft Power Automate, Ansible, Terraform); and integration using APIs and event-driven architectures (e.g., Kafka, Azure Event Grid).
Proficiency in cloud-native AI and automation services in one or more public cloud platforms: Azure (Cognitive Services, Synapse, Logic Apps, Azure OpenAI), AWS (SageMaker, Lambda, Textract, Step Functions), GCP (Vertex AI, AutoML, Cloud Functions).
Strong project delivery experience using modern methodologies: Agile/Scrum and DevOps for iterative development and deployment; CI/CD pipeline integration for automation and ML model lifecycle management (see the MLflow sketch after this listing); and use of tools like Git, Jenkins, and Azure DevOps.
In-depth knowledge of data architecture, governance, and AI ethics, including: data privacy and security principles (e.g., GDPR, ISO 27001) and responsible AI practices such as bias detection, explainability (e.g., SHAP, LIME), and model drift monitoring.
Excellent stakeholder engagement and communication skills, with the ability to translate complex AI and automation concepts into business value, influence cross-functional teams and executive leadership, and promote a culture of innovation, experimentation, and continuous learning.
Excellent leadership and team management skills.
Strong communication, interpersonal, and problem-solving abilities.
Strategic thinking and decision-making.
Adaptability to evolving technologies and processes.
Bachelor's degree in Computer Science, Information Technology, or a related field, or equivalent experience.

Additional Information:
At Version 1, we believe in providing our employees with a comprehensive benefits package that prioritises their well-being, professional growth, and financial stability. One of our standout advantages is the ability to work with a hybrid schedule along with business travel, allowing our employees to strike a balance between work and life. We prioritise the health and safety of our employees, providing private medical and life insurance coverage, as well as free eye tests and contributions towards glasses. Our team members can also stay ahead of the curve with incentivised certifications and accreditations, including AWS, Microsoft, Oracle, and Red Hat. Our employee-designed Profit Share scheme divides a portion of our company's profits each quarter amongst employees. We are dedicated to helping our employees reach their full potential, offering Pathways Career Development Quarterly, a programme designed to support professional growth.
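As a small illustration of the MLOps tooling named above, here is a minimal MLflow experiment-tracking sketch; the experiment name, parameter, and metric values are purely illustrative.

```python
import mlflow

# Group runs under a hypothetical experiment name
mlflow.set_experiment("doc-classifier")

with mlflow.start_run():
    mlflow.log_param("model", "distilbert-base-uncased")  # illustrative parameter
    mlflow.log_metric("f1", 0.91)                          # illustrative metric
    # mlflow.sklearn.log_model(clf, "model")  # would persist a trained model artifact
```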

Posted 3 days ago

Apply

2.0 years

5 - 9 Lacs

Bengaluru

On-site

Source: Glassdoor

Company Description:
Version 1 is a true global leader in business transformation. For nearly three decades, we have been strategically partnering with customers to go beyond expectations through the power of cutting-edge technology and expert teams. Our deep expertise in cloud, data and AI, application modernisation, and service delivery management has redefined businesses globally, helping shape the future for large public sector organisations and major global, private brands. We put users and user-centric design at the heart of everything we do, enabling our customers to exceed expectations for their customers.

Our approach is underpinned by the Version 1 Strength in Balance model: a balanced focus across our customers, our people, and a strong organisation, guided by core values that are embedded in every aspect of what we do. Our customers' need for transformation is our driving force. We enable them to accelerate their journey to their digital future with our deep expertise and innovative approach. Our global technology partners – Oracle, Microsoft, AWS, Red Hat, and Snowflake – help us tackle any challenge by leveraging a technology-driven approach.

Our people unlock our potential. They immerse themselves in the world of our customers to truly understand the unique challenges they face. Our teams, made up of highly skilled, passionate individuals, act with agility and integrity. We continually invest in their development and foster a culture that encourages collaboration and innovation.

Through our comprehensive range of Managed Service offerings, we take ownership of the tasks that distract customers from what really matters: driving their business objectives and strategic initiatives. We enable them to save time and reduce costs and risk by continually improving their technology estates, ensuring those estates drive value for their business. Go beyond simply 'keeping the lights on' and embrace the potential of our ASPIRE Managed Services, which place AI, continuous improvement, and business innovation at the heart of everything we do. From operational maintenance through to optimisation, we are trusted managed service experts with a sustainable, value-led approach and a wealth of industry sector expertise and experience.

Job Description:
Onsite role, India Delivery Centre / Belfast / Dublin
Full-time position, 3-5 days per week in office (not shift)
Department: ASPIRE Managed Services
Practice: Services Reliability Group
Vetting Requirements: N/A

Role Summary: Our ASPIRE Global Service Centre is the central hub of our Service Management operations. Beyond a traditional Service Desk, it stands as the central authority and shared service delivery hub, orchestrating all operational workflows, processes, procedures, and tooling. It is a core delivery component of the Version 1 ASPIRE Managed Services offering, which places AI, continuous improvement, and business innovation at the heart of everything Version 1 does. With a focus on supporting self-service and automation, we utilise the best digital capabilities of the ServiceNow ITSM tooling product to provide the very best experience to our customers.

We are seeking an experienced and results-driven AI and Automation Technician who will be responsible for the delivery and ongoing management of automation and artificial intelligence initiatives for ASPIRE Managed Services. This role will primarily be responsible for the design and deployment of intelligent automation solutions to improve operational efficiency and productivity, enhance decision-making, scale operations, and deliver a competitive advantage in the market.

Key Responsibilities:
Identify opportunities for AI and automation across all Managed Service functions, tooling, and processes.
Deliver and technically implement automation and AI projects, including development, testing, deployment, and monitoring.
Ensure solutions are scalable, secure, and compliant with data privacy and ethical standards.
Evaluate and select appropriate tools, platforms, and vendors.
Collaborate with business units to understand pain points and co-create solutions.
Monitor performance and continuously optimise solutions.
Develop internal capabilities and share knowledge across teams.

Skills, Education & Qualifications:
Proven experience (2+ years) delivering automation and AI projects in a complex, multi-client or enterprise-scale managed services environment.
Technical expertise in Artificial Intelligence and Machine Learning, including: supervised/unsupervised learning, deep learning, and natural language processing (NLP); model development using frameworks such as TensorFlow, PyTorch, or scikit-learn; and experience deploying AI models in production environments using MLOps principles (e.g., MLflow, Azure ML, SageMaker).
Hands-on experience with automation and orchestration technologies, such as: Robotic Process Automation (RPA) platforms (UiPath, Blue Prism, Automation Anywhere); IT process automation (ITPA) tools (ServiceNow Workflow/Orchestration, Microsoft Power Automate, Ansible, Terraform); and integration using APIs and event-driven architectures (e.g., Kafka, Azure Event Grid).
Proficiency in cloud-native AI and automation services in one or more public cloud platforms: Azure (Cognitive Services, Synapse, Logic Apps, Azure OpenAI), AWS (SageMaker, Lambda, Textract, Step Functions), GCP (Vertex AI, AutoML, Cloud Functions).
Good knowledge of data architecture, governance, and AI ethics.
Excellent stakeholder engagement and communication skills, with the ability to translate complex AI and automation concepts into business value and promote a culture of innovation, experimentation, and continuous learning.
Strong communication, interpersonal, and problem-solving abilities.
Adaptability to evolving technologies and processes.
Bachelor's degree in Computer Science, Information Technology, or a related field, or equivalent experience.

Additional Information:
At Version 1, we believe in providing our employees with a comprehensive benefits package that prioritises their well-being, professional growth, and financial stability. One of our standout advantages is the ability to work with a hybrid schedule along with business travel, allowing our employees to strike a balance between work and life. We prioritise the health and safety of our employees, providing private medical and life insurance coverage, as well as free eye tests and contributions towards glasses. Our team members can also stay ahead of the curve with incentivised certifications and accreditations, including AWS, Microsoft, Oracle, and Red Hat. Our employee-designed Profit Share scheme divides a portion of our company's profits each quarter amongst employees. We are dedicated to helping our employees reach their full potential, offering Pathways Career Development Quarterly, a programme designed to support professional growth.

Posted 3 days ago

Apply

4.0 years

0 Lacs

India

Remote

Source: LinkedIn

About Us: We're an early-stage startup building LLM-native products that turn unstructured documents into intelligent, usable insights. We work with RAG pipelines, multi-cloud LLMs, and fast data processing, and we're looking for someone who can build, deploy, and own these systems end-to-end.

Key Responsibilities:
RAG Application Development: Design and build end-to-end Retrieval-Augmented Generation (RAG) pipelines using LLMs deployed on Vertex AI and AWS Bedrock, integrated with Qdrant for vector search (see the retrieval sketch after this listing).
OCR & Multimodal Data Extraction: Use OCR tools (e.g., Textract) and vision-language models (VLMs) to extract structured and unstructured data from PDFs, images, and multimodal content.
LLM Orchestration & Agent Design: Build and optimize workflows using LangChain, LlamaIndex, and custom agent frameworks. Implement autonomous task execution using agent strategies like ReAct, function calling, and tool-use APIs.
API & Streaming Interfaces: Build and expose production-ready APIs (e.g., with FastAPI) for LLM services, and implement streaming outputs for real-time response generation and latency optimization.
Data Pipelines & Retrieval: Develop pipelines for ingestion, chunking, embedding, and storage using Qdrant and PostgreSQL, applying hybrid retrieval techniques (dense + keyword search), rerankers, and GraphRAG.
Serverless AI Workflows: Deploy serverless ML components (e.g., AWS Lambda, GCP Cloud Functions) for scalable inference and data processing.
MLOps & Model Evaluation: Deploy, monitor, and iterate on AI systems with lightweight MLOps workflows (Docker, MLflow, CI/CD). Benchmark and evaluate embeddings, retrieval strategies, and model performance.

Qualifications:
Strong Python development skills (must-have).
LLMs: Claude and Gemini models.
Experience building AI agents and LLM-powered reasoning pipelines.
Deep understanding of embeddings, vector search, and hybrid retrieval techniques.
Experience with Qdrant.
Experience designing multi-step task automation and execution chains.
Streaming: ability to implement and debug LLM streaming and async flows.
Knowledge of memory and context management strategies for LLM agents (e.g., vector memory, scratchpad memory, episodic memory).
Experience with AWS Lambda for serverless AI workflows and API integrations.
Bonus: LLM fine-tuning, multimodal data processing, knowledge graph integration, or advanced AI planning techniques.
Prior experience at startups only (not IT services or enterprises) and a short notice period.

Who You Are:
2–4 years of real-world AI/ML experience, ideally with production LLM apps.
Startup-ready: fast, hands-on, comfortable with ambiguity.
A clear communicator who can take ownership and push features end-to-end.
Available to join immediately.

Why Join Us?
Founding-level role with high ownership.
Build systems from scratch using the latest AI stack.
Fully remote, async-friendly, fast-paced team.
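A minimal sketch of the dense-retrieval half of such a RAG pipeline, assuming a local Qdrant instance, a hypothetical "docs" collection already populated with sentence-transformer embeddings, and a payload field named "text":

```python
from qdrant_client import QdrantClient
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")   # embedding model (an assumption)
client = QdrantClient(url="http://localhost:6333")  # local Qdrant (an assumption)

def retrieve(question: str, top_k: int = 5) -> list[str]:
    """Embed the question and pull the top-k most similar chunks."""
    vector = encoder.encode(question).tolist()
    hits = client.search(collection_name="docs", query_vector=vector, limit=top_k)
    return [hit.payload["text"] for hit in hits]

# The retrieved chunks would then be stitched into the LLM prompt;
# the Bedrock/Vertex AI generation call is omitted here.
context = "\n\n".join(retrieve("What is the claim submission deadline?"))
print(context)
```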

Posted 5 days ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems.

Years of Experience: Candidates with 4+ years of hands-on experience.

PwC US - Acceleration Center is seeking a highly skilled and experienced GenAI Data Scientist to join our team at the Senior Associate level. As a GenAI Data Scientist, you will play a critical role in developing and implementing machine learning models and algorithms for our GenAI projects. The ideal candidate should have a strong background in data science, with a focus on GenAI technologies, and possess a solid understanding of statistical analysis, machine learning, data visualization, and application programming.

Responsibilities:
Collaborate with cross-functional teams to understand business requirements and identify opportunities for applying GenAI technologies.
Develop and implement machine learning models and algorithms for GenAI projects.
Perform data cleaning, preprocessing, and feature engineering to prepare data for analysis.
Collaborate with data engineers to ensure efficient data processing and integration into machine learning pipelines.
Validate and evaluate model performance using appropriate metrics and techniques (see the sketch after this listing).
Develop and deploy production-ready machine learning applications and solutions.
Utilize object-oriented programming skills to build robust and scalable software components.
Utilize Kubernetes for container orchestration and deployment.
Design and build chatbots using GenAI technologies.
Communicate findings and insights to stakeholders through data visualizations, reports, and presentations.
Stay up-to-date with the latest advancements in GenAI technologies and recommend innovative solutions to enhance data science processes.

Requirements:
Bachelor's or Master's degree in Data Science, Computer Science, Statistics, or a related field.
3-5 years of relevant technical/technology experience, with a focus on GenAI projects.
Strong programming skills in languages such as Python, R, or Scala.
Proficiency in machine learning libraries and frameworks such as TensorFlow, PyTorch, or scikit-learn.
Experience with data preprocessing, feature engineering, and data wrangling techniques.
Solid understanding of statistical analysis, hypothesis testing, and experimental design.
Familiarity with cloud computing platforms such as AWS, Azure, or Google Cloud.
Knowledge of data visualization tools and techniques.
Strong problem-solving and analytical skills.
Excellent communication and collaboration abilities.
Ability to work in a fast-paced and dynamic environment.

Preferred Qualifications:
Experience with object-oriented programming languages such as Java, C++, or C#.
Experience with developing and deploying machine learning applications in production environments.
Understanding of data privacy and compliance regulations.
Relevant certifications in data science or GenAI technologies.

Nice-to-Have Skills:
Experience with Azure AI Search, Azure Doc Intelligence, Azure OpenAI, AWS Textract, AWS OpenSearch, AWS Bedrock.
Familiarity with LLM-backed agent frameworks such as AutoGen, LangChain, Semantic Kernel, etc.
Experience in chatbot design and development.

Professional and Educational Background: BE / B.Tech / MCA / M.Sc / M.E / M.Tech / MBA
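As a small illustration of the model-validation step mentioned above, here is a minimal scikit-learn metrics sketch; the label arrays are illustrative only.

```python
from sklearn.metrics import precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 1]   # illustrative ground-truth labels
y_pred = [1, 0, 1, 0, 0, 1]   # illustrative model predictions

print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
print("f1:       ", f1_score(y_true, y_pred))
```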

Posted 6 days ago

Apply

2.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Source: LinkedIn

About Our Company: Aerocraft Engineering India Pvt Ltd, based in Ahmedabad, provides services to a US-based Architecture, Engineering and Construction group of companies: Russell and Dawson – an architecture/engineering/construction firm (www.rdaep.com); United-BIM – a BIM modeling services firm (www.united-bim.com); and AORBIS – a procurement-as-a-service provider (www.aorbis.com).

For the AORBIS business, we are seeking a passionate and skilled AI/ML Engineer with hands-on experience in computer vision, document processing automation (PDFs), and LLMs. The ideal candidate will contribute to designing and deploying scalable AI solutions that extract, interpret, and act on unstructured and semi-structured data from documents using cutting-edge ML models. Familiarity with development tools like Python, GitHub, Jira, and Azure cloud is essential.

Position: Senior Python & AI/ML Developer
Timings: 12pm to 9pm, Monday to Friday
Experience: Minimum 2-5 years
Job Location: Ahmedabad (Siddhivinayak Towers, Makarba)

Key Responsibilities:
1. Computer Vision & Image Processing: Develop and optimize computer vision algorithms for document image processing (e.g., skew correction, OCR, layout detection). Implement models for object detection, segmentation, and keypoint detection in scanned or photographed documents. Apply pre-trained models and fine-tune them for use cases like table extraction or form understanding.
2. Machine Learning for PDF Automation: Design and train models to extract structured data from unstructured PDFs (invoices, contracts, etc.). Use techniques like NLP, layout analysis, and supervised learning for content classification and entity extraction. Integrate tools such as Tesseract, PaddleOCR, Amazon Textract, or Azure Form Recognizer.
3. LLM (Large Language Model) Integration: Fine-tune or prompt-engineer LLMs (e.g., OpenAI GPT, LLaMA, Mistral, Claude) for document Q&A, summarization, or data enrichment. Build pipelines that blend OCR + LLM to automate document understanding and decision-making workflows (see the sketch after this listing). Evaluate and deploy open-source or commercial LLMs in a secure, scalable manner.
4. DevOps & Tooling: Use GitHub for code version control, CI/CD pipelines, and collaborative development. Track and manage tasks and sprints using Jira in an Agile development setup. Use Docker and Kubernetes for smooth orchestration between server and client. Deploy and monitor ML models and APIs in the Azure cloud environment, leveraging Azure ML, Functions, or Containers.

Required Skills and Experience:
Bachelor's or Master's degree in Computer Science, AI/ML, Data Science, or a related field.
2+ years of hands-on experience with Python, OpenCV, TensorFlow/PyTorch, and ML frameworks.
Strong grasp of NLP, OCR, and computer vision workflows.
Experience working with PDF processing libraries (PDFMiner, PyMuPDF, PDFPlumber, etc.).
Proficient in using and deploying models on Azure, with knowledge of Azure AI services.
Understanding of version control with GitHub and task management via Jira.
Exposure to prompt engineering and fine-tuning LLMs for domain-specific applications.

Preferred Skills:
Experience with open-source LLMs like LLaMA, Mistral, Falcon, or commercial APIs like OpenAI GPT-4.
Familiarity with vector databases (e.g., Qdrant, FAISS, Weaviate, ChromaDB) and RAG-based systems.
Knowledge of document standards like PDF/A, XFA, etc.
Comfortable working in a fast-paced, research-oriented environment.

🎯 Key Responsibilities (summary):
Build computer vision models for document layout analysis, object detection, and OCR enhancement.
Automate data extraction from complex PDF documents (invoices, contracts, forms).
Work with LLMs (e.g., GPT, LLaMA) for summarization, Q&A, and document intelligence tasks.
Integrate AI pipelines with tools like Tesseract, Azure Form Recognizer, PDFMiner, PaddleOCR, etc.
Deploy and manage ML solutions using Azure cloud services.
Collaborate using GitHub (version control), Jira (Agile task tracking), and CI/CD workflows.

✅ Requirements (summary):
2–5 years of hands-on experience in AI/ML, preferably in document intelligence or vision-based systems.
Strong Python skills with frameworks like TensorFlow, PyTorch, and OpenCV.
Proven experience in OCR, NLP, and automating document workflows.
Experience with LLMs and prompt engineering or fine-tuning.
Comfortable with DevOps tools: Azure, GitHub, Docker, Kubernetes, and Jira.

Benefits:
Exposure to US projects/design/standards.
Company provides dinner/snacks/tea/coffee.
Reimbursable health insurance.
15 paid leaves annually and 10 public holidays.
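A minimal sketch of the PDF text-extraction-with-OCR-fallback pattern this listing describes, assuming PyMuPDF (fitz), pytesseract, and Pillow are installed; the file path is a hypothetical placeholder.

```python
import io

import fitz  # PyMuPDF
import pytesseract
from PIL import Image

def extract_text(pdf_path: str) -> str:
    """Use the embedded text layer when present; fall back to OCR per page."""
    doc = fitz.open(pdf_path)
    pages = []
    for page in doc:
        text = page.get_text().strip()
        if not text:  # scanned page: rasterize and OCR it
            pix = page.get_pixmap(dpi=300)
            img = Image.open(io.BytesIO(pix.tobytes("png")))
            text = pytesseract.image_to_string(img)
        pages.append(text)
    return "\n".join(pages)

print(extract_text("invoice-sample.pdf"))  # hypothetical file
```

The extracted text would then be handed to the OCR + LLM pipeline described in responsibility 3.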

Posted 6 days ago

Apply

6.0 - 9.0 years

14 - 22 Lacs

Pune, Chennai

Work from Office

Source: Naukri

Hiring for a top IT company.
Designation: Python Developer
Skills: AWS SDK + AI services integration
Location: Pune/Chennai
Experience: 6-8 years
CTC: Best in industry
Contact: Surbhi: 9887580624, Anchal: 9772061749, Gitika: 8696868124, Shivani: 7375861257
Team Converse

Posted 1 week ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Key Responsibilities:
Build robust document data extraction pipelines using NLP and OCR techniques.
Develop and optimize end-to-end workflows for parsing scanned/image-based documents (PDFs, JPGs, TIFFs) and structured files (MS Excel, MS Word).
Leverage LLM models (OpenAI GPT, Claude, Gemini, etc.) for advanced entity extraction, summarization, and classification tasks.
Design and implement Python-based scripts for parsing, cleaning, and transforming data (see the sketch after this listing).
Integrate with Azure services for document storage, compute, and secured API hosting (e.g., Azure Blob, Azure Functions, Key Vault, Azure Cognitive Services).
Deploy and orchestrate workflows in Azure Databricks (including Spark and ML pipelines).
Build and manage API calls for model integration, rate limiting, and token control using AI gateways.
Automate export of results into SQL/Oracle databases and enable downstream access for analytics/reporting.
Handle diverse metadata requirements and create reusable, modular code for different document types.
Optionally visualize and report data using Power BI and export data into Excel for stakeholder review.

Required Skills & Qualifications:
Strong programming skills in Python (Pandas, regex, pytesseract, spaCy, LangChain, Transformers, etc.).
Experience with Azure cloud (Blob Storage, Function Apps, Key Vaults, Logic Apps).
Hands-on with Azure Databricks (PySpark, Delta Lake, MLflow).
Familiarity with OCR tools like Tesseract, Azure OCR, AWS Textract, or Google Vision API.
Proficient in SQL, with experience in Oracle Database integration (using cx_Oracle, SQLAlchemy, etc.).
Experience working with LLM APIs (OpenAI, Anthropic, Google, or Hugging Face models).
Knowledge of API development and integration (REST, JSON, API rate limits, authentication handling).
Excel data manipulation using Python (e.g., openpyxl, pandas, xlrd).
Understanding of Power BI dashboards and integration with structured data sources.

Nice to Have:
Experience with LangChain, LlamaIndex, or similar frameworks for document Q&A and retrieval-augmented generation (RAG).
Background in data science or machine learning.
CI/CD and version control (Git, Azure DevOps).
Familiarity with data governance and PII handling in document processing.

Soft Skills:
Strong problem-solving skills and an analytical mindset.
Attention to detail and the ability to work with messy/unstructured data.
Excellent communication skills to interact with technical and non-technical stakeholders.
Ability to work independently and manage priorities in a fast-paced environment.
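A minimal sketch of the parse-clean-export flow described above, assuming pytesseract, Pillow, pandas, and openpyxl are installed; the file names and the regex field are hypothetical placeholders, with the naive regex standing in for LLM-based entity extraction.

```python
import re

import pandas as pd
import pytesseract
from PIL import Image

# OCR a scanned page (hypothetical file)
raw = pytesseract.image_to_string(Image.open("contract-page1.jpg"))

# Clean common OCR noise before entity extraction
clean = re.sub(r"[ \t]+", " ", raw).strip()

# Naive field pull as a stand-in for an LLM entity-extraction call
match = re.search(r"Agreement Date[:\s]+(\d{2}/\d{2}/\d{4})", clean)
records = [{"field": "agreement_date", "value": match.group(1) if match else None}]

# Export results for a downstream SQL load or stakeholder review
pd.DataFrame(records).to_excel("extracted_fields.xlsx", index=False)
```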

Posted 1 week ago

Apply

4.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Source: LinkedIn

At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems.

Years of Experience: Candidates with 4+ years of hands-on experience.

PwC US - Acceleration Center is seeking a highly skilled and experienced GenAI Data Scientist to join our team at the Senior Associate level. As a GenAI Data Scientist, you will play a critical role in developing and implementing machine learning models and algorithms for our GenAI projects. The ideal candidate should have a strong background in data science, with a focus on GenAI technologies, and possess a solid understanding of statistical analysis, machine learning, data visualization, and application programming.

Responsibilities:
Collaborate with cross-functional teams to understand business requirements and identify opportunities for applying GenAI technologies.
Develop and implement machine learning models and algorithms for GenAI projects.
Perform data cleaning, preprocessing, and feature engineering to prepare data for analysis.
Collaborate with data engineers to ensure efficient data processing and integration into machine learning pipelines.
Validate and evaluate model performance using appropriate metrics and techniques.
Develop and deploy production-ready machine learning applications and solutions.
Utilize object-oriented programming skills to build robust and scalable software components.
Utilize Kubernetes for container orchestration and deployment.
Design and build chatbots using GenAI technologies.
Communicate findings and insights to stakeholders through data visualizations, reports, and presentations.
Stay up-to-date with the latest advancements in GenAI technologies and recommend innovative solutions to enhance data science processes.

Requirements:
Bachelor's or Master's degree in Data Science, Computer Science, Statistics, or a related field.
3-5 years of relevant technical/technology experience, with a focus on GenAI projects.
Strong programming skills in languages such as Python, R, or Scala.
Proficiency in machine learning libraries and frameworks such as TensorFlow, PyTorch, or scikit-learn.
Experience with data preprocessing, feature engineering, and data wrangling techniques.
Solid understanding of statistical analysis, hypothesis testing, and experimental design.
Familiarity with cloud computing platforms such as AWS, Azure, or Google Cloud.
Knowledge of data visualization tools and techniques.
Strong problem-solving and analytical skills.
Excellent communication and collaboration abilities.
Ability to work in a fast-paced and dynamic environment.

Preferred Qualifications:
Experience with object-oriented programming languages such as Java, C++, or C#.
Experience with developing and deploying machine learning applications in production environments.
Understanding of data privacy and compliance regulations.
Relevant certifications in data science or GenAI technologies.

Nice-to-Have Skills:
Experience with Azure AI Search, Azure Doc Intelligence, Azure OpenAI, AWS Textract, AWS OpenSearch, AWS Bedrock.
Familiarity with LLM-backed agent frameworks such as AutoGen, LangChain, Semantic Kernel, etc.
Experience in chatbot design and development.

Professional and Educational Background: BE / B.Tech / MCA / M.Sc / M.E / M.Tech / MBA

Posted 1 week ago

Apply

0 years

5 - 15 Lacs

Ahmedabad

On-site

Source: Glassdoor

Requirements:
Proficient in Python, Node.js (or Java), and React (preferred).
Experience with AWS services: S3, Lambda, DynamoDB, Bedrock, Textract, RDS, Fargate.
Experience in LLM-based application development (LangChain, Bedrock, or OpenAI APIs).
Strong in NLP and embeddings (via SageMaker or third-party APIs like Cohere, Hugging Face).
Knowledge of vector databases (Pinecone, ChromaDB, OpenSearch, etc.).
Familiar with containerization (Docker, ECS/Fargate).
Excellent understanding of REST API design and security.
Experience handling PDF/image-based document classification.
Good SQL and NoSQL skills (MS SQL, MongoDB).

Preferred Qualifications:
AWS Certified – especially in AI/ML or Developer Associate.

Job Types: Full-time, Fresher, Internship
Pay: ₹554,144.65 - ₹1,500,000.00 per year
Schedule: Day shift, Morning shift
Supplemental Pay: Performance bonus
Ability to commute/relocate: Ahmedabad, Gujarat: Reliably commute or plan to relocate before starting work (Preferred)
Work Location: In person
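For illustration, a minimal sketch of an LLM call on AWS Bedrock of the kind this role mentions, using boto3's bedrock-runtime client; the region, model ID, and prompt are assumptions for this sketch.

```python
import json

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")  # region assumed

# Claude messages payload in Bedrock's expected format
body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Classify this document: ..."}],
})

response = bedrock.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
    body=body,
)

# The response body is a stream; read and decode the JSON result
result = json.loads(response["body"].read())
print(result["content"][0]["text"])
```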

Posted 1 week ago

Apply

0.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

Source: Indeed

Requirements:
Proficient in Python, Node.js (or Java), and React (preferred).
Experience with AWS services: S3, Lambda, DynamoDB, Bedrock, Textract, RDS, Fargate.
Experience in LLM-based application development (LangChain, Bedrock, or OpenAI APIs).
Strong in NLP and embeddings (via SageMaker or third-party APIs like Cohere, Hugging Face).
Knowledge of vector databases (Pinecone, ChromaDB, OpenSearch, etc.).
Familiar with containerization (Docker, ECS/Fargate).
Excellent understanding of REST API design and security.
Experience handling PDF/image-based document classification.
Good SQL and NoSQL skills (MS SQL, MongoDB).

Preferred Qualifications:
AWS Certified – especially in AI/ML or Developer Associate.

Job Types: Full-time, Fresher, Internship
Pay: ₹554,144.65 - ₹1,500,000.00 per year
Schedule: Day shift, Morning shift
Supplemental Pay: Performance bonus
Ability to commute/relocate: Ahmedabad, Gujarat: Reliably commute or plan to relocate before starting work (Preferred)
Work Location: In person

Posted 1 week ago

Apply

3.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Source: LinkedIn

About Ness Digital Engineering: Ness Digital Engineering is a global provider of software engineering and digital transformation services. We help enterprises accelerate innovation and drive business outcomes using cutting-edge technologies and agile methodologies. Join our dynamic team and be part of shaping the future of automation and intelligent solutions.

Position Overview: We are seeking a highly skilled UiPath Engineer with deep expertise in OCR (Optical Character Recognition) and document processing using UiPath. The ideal candidate will have a strong background in designing, developing, and deploying end-to-end automation solutions focused on intelligent document understanding and processing. This role requires a solid understanding of RPA frameworks, best practices, and integration with OCR engines to deliver scalable, high-quality automation.

Key Responsibilities:
Design, develop, test, and deploy RPA workflows leveraging UiPath to automate document processing and OCR tasks.
Implement intelligent document processing solutions, including data extraction, classification, validation, and exception handling.
Collaborate with business analysts and stakeholders to gather and analyze requirements for document automation projects.
Integrate UiPath workflows with OCR technologies (e.g., UiPath Document Understanding, ABBYY, Google Vision) and other third-party tools.
Optimize automation processes for efficiency, accuracy, and scalability.
Troubleshoot, debug, and resolve issues in RPA bots and OCR pipelines.
Develop reusable components, libraries, and frameworks to support rapid development and deployment.
Maintain documentation of design, development, and operational procedures.
Stay current with the latest trends and advancements in RPA, OCR, and AI-based document processing technologies.

Qualifications:
Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field.
Proven experience (3+ years) in RPA development using UiPath with a focus on OCR and document processing.
Strong understanding of OCR technologies and intelligent document processing frameworks.
Hands-on experience with UiPath Document Understanding or equivalent OCR tools.
Proficient in designing workflows, activities, and components in UiPath Studio and Orchestrator.
Experience with scripting languages such as Python, VB.NET, or C# is a plus.
Familiarity with AI/ML concepts applied to document classification and data extraction.
Strong problem-solving skills and attention to detail.
Excellent communication and collaboration skills.
Ability to work independently and in a team environment.

Preferred Skills:
Experience integrating UiPath with cloud OCR services (Azure Cognitive Services, AWS Textract, Google Cloud Vision).
Knowledge of business process management and process optimization.
Understanding of enterprise IT environments, security, and compliance standards.
Exposure to agentic AI, autopilots, and intelligent document processing enhancements.

What We Offer:
Competitive compensation and benefits package.
Opportunity to work with cutting-edge automation technologies.
Collaborative and innovative work culture.
Professional development and career growth opportunities.

Posted 1 week ago

Apply

3.0 years

7 - 10 Lacs

Hyderabad

On-site

Source: Glassdoor

About Ness Digital Engineering: Ness Digital Engineering is a global provider of software engineering and digital transformation services. We help enterprises accelerate innovation and drive business outcomes using cutting-edge technologies and agile methodologies. Join our dynamic team and be part of shaping the future of automation and intelligent solutions.

Position Overview: We are seeking a highly skilled UiPath Engineer with deep expertise in OCR (Optical Character Recognition) and document processing using UiPath. The ideal candidate will have a strong background in designing, developing, and deploying end-to-end automation solutions focused on intelligent document understanding and processing. This role requires a solid understanding of RPA frameworks, best practices, and integration with OCR engines to deliver scalable, high-quality automation.

Key Responsibilities:
Design, develop, test, and deploy RPA workflows leveraging UiPath to automate document processing and OCR tasks.
Implement intelligent document processing solutions, including data extraction, classification, validation, and exception handling.
Collaborate with business analysts and stakeholders to gather and analyze requirements for document automation projects.
Integrate UiPath workflows with OCR technologies (e.g., UiPath Document Understanding, ABBYY, Google Vision) and other third-party tools.
Optimize automation processes for efficiency, accuracy, and scalability.
Troubleshoot, debug, and resolve issues in RPA bots and OCR pipelines.
Develop reusable components, libraries, and frameworks to support rapid development and deployment.
Maintain documentation of design, development, and operational procedures.
Stay current with the latest trends and advancements in RPA, OCR, and AI-based document processing technologies.

Qualifications:
Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field.
Proven experience (3+ years) in RPA development using UiPath with a focus on OCR and document processing.
Strong understanding of OCR technologies and intelligent document processing frameworks.
Hands-on experience with UiPath Document Understanding or equivalent OCR tools.
Proficient in designing workflows, activities, and components in UiPath Studio and Orchestrator.
Experience with scripting languages such as Python, VB.NET, or C# is a plus.
Familiarity with AI/ML concepts applied to document classification and data extraction.
Strong problem-solving skills and attention to detail.
Excellent communication and collaboration skills.
Ability to work independently and in a team environment.

Preferred Skills:
Experience integrating UiPath with cloud OCR services (Azure Cognitive Services, AWS Textract, Google Cloud Vision).
Knowledge of business process management and process optimization.
Understanding of enterprise IT environments, security, and compliance standards.
Exposure to agentic AI, autopilots, and intelligent document processing enhancements.

What We Offer:
Competitive compensation and benefits package.
Opportunity to work with cutting-edge automation technologies.
Collaborative and innovative work culture.
Professional development and career growth opportunities.

Posted 1 week ago

Apply

3.0 years

0 Lacs

India

Remote

Source: LinkedIn

AWS Data Engineer
Location: Remote (India)
Experience: 3+ years
Employment Type: Full-time

About the Role: We are seeking a talented AWS Data Engineer with at least 3 years of hands-on experience in building and managing data pipelines using AWS services. This role involves working with large-scale data, integrating multiple data sources (including sensor/IoT data), and enabling efficient, secure, and analytics-ready solutions. Experience in the energy industry or working with time-series/sensor data is a strong plus.

Key Responsibilities:
Build and maintain scalable ETL/ELT data pipelines using AWS Glue, Redshift, Lambda, EMR, S3, and Athena (see the sketch after this listing).
Process and integrate structured and unstructured data, including sensor/IoT and real-time streams.
Optimize pipeline performance and ensure reliability and fault tolerance.
Collaborate with cross-functional teams, including data scientists and analysts.
Perform data transformations using Python, Pandas, and SQL.
Maintain data integrity, quality, and security across the platform.
Use Terraform and CI/CD tools (e.g., Azure DevOps) for infrastructure and deployment automation.
Support and monitor pipeline workflows, troubleshoot issues, and implement fixes.
Contribute to the adoption of emerging tools like AWS Bedrock, Textract, Rekognition, and GenAI solutions.

Required Skills and Qualifications:
Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
3+ years of experience in data engineering using AWS.
Strong skills in AWS Glue, Redshift, S3, Lambda, EMR, and Athena; Python, Pandas, and SQL; and RDS, Postgres, and SAP HANA.
Solid understanding of data modeling, warehousing, and pipeline orchestration.
Experience with version control (Git) and infrastructure as code (Terraform).

Preferred Skills:
Experience working with energy-sector data or IoT/sensor-based data.
Exposure to machine learning tools and frameworks (e.g., SageMaker, TensorFlow, scikit-learn).
Familiarity with big data technologies like Apache Spark and Kafka.
Experience with data visualization tools (Tableau, Power BI, AWS QuickSight).
Awareness of data governance and catalog tools such as AWS Data Quality, Collibra, and AWS DataBrew.
AWS certifications (Data Analytics, Solutions Architect).
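A minimal sketch of a small S3-to-S3 transform step of the kind described above, using boto3 and pandas (with pyarrow for Parquet); the bucket names, keys, and column names are hypothetical placeholders.

```python
import io

import boto3
import pandas as pd

s3 = boto3.client("s3")

# Extract: read a raw sensor CSV from the landing bucket (hypothetical names)
obj = s3.get_object(Bucket="raw-landing", Key="sensors/2024-06-01.csv")
df = pd.read_csv(io.BytesIO(obj["Body"].read()))

# Transform: parse timestamps and drop obviously bad readings
df["ts"] = pd.to_datetime(df["ts"], errors="coerce")
df = df.dropna(subset=["ts"]).query("value >= 0")

# Load: write analytics-ready Parquet to the curated bucket
out = io.BytesIO()
df.to_parquet(out, index=False)
s3.put_object(Bucket="curated", Key="sensors/2024-06-01.parquet", Body=out.getvalue())
```

In practice a step like this would run inside Glue or Lambda and be scheduled by an orchestrator rather than run ad hoc.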

Posted 1 week ago

Apply

5.0 years

0 Lacs

New Delhi, Delhi, India

On-site

Source: LinkedIn

About the Role: We are looking for a hands-on Data Engineer to join our team and take full ownership of scraping pipelines and data quality. You'll work on data from 60+ websites involving PDFs, processed via OCR and stored in MySQL/PostgreSQL. You'll build robust, self-healing pipelines and fix common data issues (missing fields, duplication, formatting errors).

Responsibilities:
Own and optimize Airflow scraping DAGs for 60+ sites (see the sketch after this listing).
Implement validation checks, retry logic, and error alerts.
Build pre-processing routines to clean OCR'd text.
Create data normalization and deduplication workflows.
Maintain data integrity across MySQL and PostgreSQL.
Collaborate with the ML team on downstream AI use cases.

Requirements:
2–5 years of experience in Python-based data engineering.
Experience with Airflow, Pandas, and OCR (Tesseract or AWS Textract).
Solid SQL and schema design skills (MySQL/PostgreSQL).
Familiarity with CSV processing and data pipelines.
Bonus: experience with scraping using Scrapy or Selenium.

Location: Delhi (in-office only)
Salary Range: ₹50-80k/month
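A minimal sketch of a scraping DAG with the retry/validation pattern this role describes, assuming Airflow 2.4+; the site list and the fetch_and_parse helper are hypothetical stand-ins for the real scraper.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

def fetch_and_parse(site: str) -> list[dict]:
    """Hypothetical stub standing in for the real scraper."""
    return [{"site": site}]

def scrape_site(site: str, **context):
    rows = fetch_and_parse(site)
    if not rows:
        # Raising makes Airflow retry the task and fire failure alerts
        raise ValueError(f"No rows scraped from {site}")

with DAG(
    dag_id="scrape_60_sites",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args={"retries": 3, "retry_delay": timedelta(minutes=10)},
) as dag:
    for site in ["site_a", "site_b"]:  # stand-in for the 60+ site list
        PythonOperator(
            task_id=f"scrape_{site}",
            python_callable=scrape_site,
            op_kwargs={"site": site},
        )
```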

Posted 2 weeks ago

Apply

15.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

Overview

We don’t simply hire employees. We invest in them. When you work at Chatham, we empower you — offering professional development opportunities to help you grow in your career, no matter if you've been here for five months or 15 years. Chatham has worked hard to create a distinct work environment that values people, teamwork, integrity, and client service. You will have immediate opportunities to partner with talented subject matter experts, work on complex projects, and contribute to the value Chatham delivers every day.

As a Manager of the Loan Data Extraction team specializing in institutional real estate clients, your primary responsibility will be to manage the team that reviews and extracts data from various types of real estate source documents, such as loan agreements, promissory notes, and guarantees, as a pivotal process in modeling debt portfolios for our clients. You will use your expertise to ensure data is complete, accurate, and timely. You should have a background in real estate investment or data management, exceptional attention to detail with the ability to identify and resolve discrepancies or errors in data, and strong analytical skills for reviewing and extracting data from various types of real estate source documents. You will report to the Managing Director – India.

In This Role You Will
Lead the Loan Data Extraction team in reviewing and extracting information from various types of real estate source documents, such as loan agreements and promissory notes, to model loan cashflows, extension details, and prepayment optionality.
Collaborate with internal team members and other stakeholders to ensure that projects and deliverables are completed on time and to the satisfaction of clients.
Communicate effectively with internal team members and other stakeholders, using strong verbal and written communication skills to convey complex ideas and information associated with the data extraction and quality assurance process.
Complete internal training modules to gain the critical skills and knowledge needed to carry out extraction responsibilities efficiently and effectively.
Create and monitor quality metrics and ensure employee feedback is objective and based on SMART goals.
Create and maintain up-to-date documentation: Standard Operating Procedures, process maps, defect definitions, and training materials.
Focus on process improvement and automation initiatives.

Your Impact
As Manager, you will oversee the Loan Data Extraction process for one or more clients, ensuring that institutional real estate investors receive high-quality, accurate, and timely data solutions. Your leadership will be critical in managing the team's performance, driving improvements in processes, and ensuring that all deliverables meet the high standards expected by our clients.

Contributors To Your Success
Post Graduate degree in Commerce, Accounting, Finance, or related fields.
10+ years of experience in financial document processing, credit analysis, loan operations, or a similar field.
Proven experience leading a team and managing extraction or operations projects.
Strong understanding of loan structures, credit agreements, and key financial covenants.
Familiarity with AI/ML tools used for data extraction (e.g., AWS Textract, Google Document AI, Kira, Hyperscience) is a strong advantage.
Leadership – Ability to lead and mentor a team while ensuring quality and adherence to processes.
Attention to Detail – Precision is critical when extracting loan terms, interest rates, borrower details, and covenants to avoid costly errors.
Understanding of Loan Documents – Familiarity with credit agreements, promissory notes, and term sheets helps in accurately identifying and interpreting relevant data.
Data Entry Speed and Accuracy – Efficiently inputting data into systems without mistakes ensures smooth downstream processing and compliance.
Critical Thinking & Pattern Recognition – Spotting inconsistencies, missing information, or potential red flags requires an analytical mindset.
Effective Communication Skills – Ability to convey complex ideas and information (verbally or in writing) to internal team members and other stakeholders.
Real Estate Familiarity – Experience working with institutional real estate data or clients is a plus.

About Chatham Financial
Chatham Financial is the largest independent financial risk management advisory and technology firm. A leader in debt and derivative solutions, Chatham provides clients with access to in-depth knowledge, innovative tools, and an incomparable team of over 750 employees to help mitigate risks associated with interest rate, foreign currency, and commodity exposures. Founded in 1991, Chatham serves more than 3,500 companies across a wide range of industries — handling over $1.5 trillion in transaction volume annually and helping businesses maximize their value in the capital markets, every day. To learn more, visit chathamfinancial.com.

Chatham Financial is an equal opportunity employer.

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Kochi, Kerala, India

Remote

Linkedin logo

AWS Data Engineer

Location: Remote (India)
Experience: 3+ Years
Employment Type: Full-Time

About the Role: We are seeking a talented AWS Data Engineer with at least 3 years of hands-on experience in building and managing data pipelines using AWS services. This role involves working with large-scale data, integrating multiple data sources (including sensor/IoT data), and enabling efficient, secure, and analytics-ready solutions. Experience in the energy industry or working with time-series/sensor data is a strong plus.

Key Responsibilities:
Build and maintain scalable ETL/ELT data pipelines using AWS Glue, Redshift, Lambda, EMR, S3, and Athena
Process and integrate structured and unstructured data, including sensor/IoT and real-time streams
Optimize pipeline performance and ensure reliability and fault tolerance
Collaborate with cross-functional teams including data scientists and analysts
Perform data transformations using Python, Pandas, and SQL
Maintain data integrity, quality, and security across the platform
Use Terraform and CI/CD tools (e.g., Azure DevOps) for infrastructure and deployment automation
Support and monitor pipeline workflows, troubleshoot issues, and implement fixes
Contribute to the adoption of emerging tools like AWS Bedrock, Textract, Rekognition, and GenAI solutions

Required Skills and Qualifications:
Bachelor's or Master's degree in Computer Science, Information Technology, or a related field
3+ years of experience in data engineering using AWS
Strong skills in: AWS Glue, Redshift, S3, Lambda, EMR, Athena; Python, Pandas, SQL; RDS, Postgres, SAP HANA
Solid understanding of data modeling, warehousing, and pipeline orchestration
Experience with version control (Git) and infrastructure as code (Terraform)

Preferred Skills:
Experience working with energy sector data or IoT/sensor-based data
Exposure to machine learning tools and frameworks (e.g., SageMaker, TensorFlow, Scikit-learn)
Familiarity with big data technologies like Apache Spark, Kafka
Experience with data visualization tools (Tableau, Power BI, AWS QuickSight)
Awareness of data governance and catalog tools such as AWS Data Quality, Collibra, and AWS DataBrew
AWS Certifications (Data Analytics, Solutions Architect)
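As a rough illustration of the Glue work described above, here is a minimal PySpark job sketch that reads raw sensor CSVs from S3, normalizes them, and writes partitioned Parquet to a curated zone. The bucket paths and column names are hypothetical, and the awsglue imports resolve only inside a Glue job run.

```python
# A minimal sketch of a Glue PySpark ETL job: read raw sensor CSVs from
# S3, normalize types and timestamps, deduplicate, and write partitioned
# Parquet to a curated zone. Bucket paths and column names are
# hypothetical; the awsglue imports resolve only inside a Glue job run.
from awsglue.context import GlueContext
from pyspark.context import SparkContext
from pyspark.sql import functions as F

glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session

raw = spark.read.option("header", "true").csv("s3://example-raw-zone/sensors/")

curated = (
    raw.withColumn("reading", F.col("reading").cast("double"))
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .dropDuplicates(["sensor_id", "event_ts"])  # keeps re-runs idempotent
)

(
    curated.write.mode("overwrite")
    .partitionBy("sensor_id")
    .parquet("s3://example-curated-zone/sensors/")
)
```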

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

POSITION / TITLE: Data Science Lead

Location: Offshore – Hyderabad/Bangalore/Pune

Who are we looking for?
We are looking for individuals with 6+ years of experience implementing and managing data science projects, with working knowledge of machine and deep learning based client projects, MVPs, and POCs. You should have expert-level experience with machine learning frameworks like scikit-learn, TensorFlow, and Keras and deep learning architectures like RNNs and LSTMs; have worked with cognitive services from major cloud platforms like AWS (Textract, Comprehend) or Azure Cognitive Services; and have a working knowledge of SQL and NoSQL databases and microservices. You should be adept at Python scripting. Experience in NLP and text analytics is preferred.

Responsibilities

Technical Skills – Must Have
Knowledge of Natural Language Processing (NLP) techniques and frameworks like spaCy, NLTK, etc., and good knowledge of text analytics
Strong understanding of and hands-on experience with machine learning frameworks like scikit-learn, TensorFlow, and Keras and deep learning architectures like RNNs, LSTMs, and BERT
Experience with cognitive services from major cloud platforms like AWS, and working knowledge of SQL and NoSQL databases
Ability to create data and ML pipelines for more efficient and repeatable data science projects using MLOps principles
Keeps abreast of new tools, algorithms, and techniques in machine learning and works to implement them in the organization
Strong understanding of evaluation and monitoring metrics for machine learning projects
Strong understanding of containerization using Docker and Kubernetes to get models into production
Ability to translate complex machine learning problem statements into specific deliverables and requirements
Adept at Python scripting

Technical Skills – Good To Have
Knowledge of distributed computing frameworks and cloud ML frameworks, including AWS
Experience in natural language processing, computer vision, or deep learning
Certifications or courses in data science, analytics, or related fields
Diligence and meticulousness in working with data

Other Skills We'd Appreciate
4+ years of experience with Data Science and Machine Learning techniques
Proven track record of getting ML models into production
Hands-on experience writing ML models in Python
Prior experience with ML platforms and tools such as Dataiku, Databricks, etc. would be a plus

Education Qualification
Bachelor's degree in Computer Science, Information Technology, or related field (Master's degree preferred)

Process Skills
General SDLC processes
Understanding of Agile and Scrum software development methodologies
Skill in gathering and documenting user requirements and writing technical specifications

Behavioral Skills
Good attitude and quick learner
Well-developed design, analytical, and problem-solving skills
Strong oral and written communication skills
Excellent team player, able to work with virtual teams
Self-motivated and capable of working independently with minimal management supervision

Certification
Machine Learning or AI certifications would be an added advantage
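For a concrete flavor of the text-classification work this role covers, a minimal scikit-learn baseline might look like the sketch below; the tiny inline dataset is purely illustrative, and a TF-IDF-plus-linear-model baseline is the usual first step before reaching for RNNs, LSTMs, or BERT.

```python
# A minimal text-classification baseline with scikit-learn; the tiny
# inline dataset is purely illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "invoice total due in 30 days",
    "patient referred for MRI scan",
    "payment overdue, final notice",
    "clinical notes attached for review",
]
labels = ["finance", "medical", "finance", "medical"]

# TF-IDF features feeding a linear classifier: a strong first baseline.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["please settle the outstanding invoice"]))
```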

Posted 2 weeks ago

Apply

12.0 years

5 - 6 Lacs

Indore

On-site

GlassDoor logo

Indore, Madhya Pradesh, India

Qualification:
BTech degree in computer science, engineering, or a related field of study, or 12+ years of related work experience
7+ years of design and implementation experience with large-scale, data-centric distributed applications
Professional experience architecting and operating cloud-based solutions, with a good understanding of core disciplines like compute, networking, storage, security, and databases
Good understanding of data engineering concepts like storage, governance, cataloging, data quality, and data modeling
Good understanding of architecture patterns like data lake, data lakehouse, and data mesh
Good understanding of data warehousing concepts, with hands-on experience working with tools like Hive, Redshift, Snowflake, and Teradata
Experience migrating or transforming legacy customer solutions to the cloud
Experience working with services like AWS EMR, Glue, DMS, Kinesis, RDS, Redshift, DynamoDB, DocumentDB, SNS, SQS, Lambda, EKS, and DataZone
Thorough understanding of Big Data ecosystem technologies like Hadoop, Spark, Hive, and HBase, and other competent tools and technologies
Understanding of designing analytical solutions leveraging AWS cognitive services like Textract, Comprehend, and Rekognition in combination with SageMaker is good to have
Experience with modern development workflows, such as Git, continuous integration/continuous deployment pipelines, static code analysis tooling, and infrastructure-as-code
Experience with a programming or scripting language – Python/Java/Scala
AWS Professional/Specialty certification or relevant cloud expertise

Skills Required: AWS, Big Data, Spark, Technical Architecture

Role:
Drive innovation within the Data Engineering domain by designing reusable and reliable accelerators, blueprints, and libraries
Lead a technology team, inculcating an innovative mindset and enabling fast-paced deliveries
Adapt to new technologies, learn quickly, and manage high ambiguity
Work with business stakeholders and attend/drive architectural, design, and status calls with multiple stakeholders
Exhibit good presentation skills, with a high degree of comfort speaking with executives, IT management, and developers
Drive technology/software sales or pre-sales consulting discussions
Ensure end-to-end ownership of all assigned tasks
Ensure high-quality software development with complete documentation and traceability
Fulfil organizational responsibilities (sharing knowledge and experience with other teams/groups)
Conduct technical trainings/sessions, write whitepapers, case studies, blogs, etc.

Experience: 10 to 18 years
Job Reference Number: 12895
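The Textract-plus-Comprehend pattern mentioned in the qualifications can be illustrated with a short boto3 sketch: OCR a scanned page, then run entity detection over the recovered text. "contract_page.png" is a hypothetical file name and AWS credentials are assumed to be configured.

```python
# A minimal sketch combining Textract OCR with Comprehend entity
# detection; "contract_page.png" is a hypothetical file and AWS
# credentials are assumed to be configured.
import boto3

textract = boto3.client("textract")
comprehend = boto3.client("comprehend")

with open("contract_page.png", "rb") as f:
    ocr = textract.detect_document_text(Document={"Bytes": f.read()})

# Stitch the detected LINE blocks back into plain text.
text = " ".join(b["Text"] for b in ocr["Blocks"] if b["BlockType"] == "LINE")

# Comprehend's synchronous API caps input size, so truncate defensively.
entities = comprehend.detect_entities(Text=text[:90000], LanguageCode="en")
for entity in entities["Entities"]:
    print(entity["Type"], entity["Text"], round(entity["Score"], 2))
```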

Posted 2 weeks ago

Apply

2.0 - 3.0 years

0 Lacs

Noida, Uttar Pradesh, India

Remote

Linkedin logo

Job Description – Digital Transformation and Automation Lead

About the Role
- Drive the digital backbone of a growing commercial real-estate group.
- You'll prototype, test, and ship automations that save our teams >10 hours/week in the first 90 days.

Total Experience: 2–3 years
Availability: ~40 hrs/week, 4 days on-site, 1 day remote

Core Responsibilities
1. Systems Audit & Consolidation – unify Google Workspace tenants, rationalise shared drives.
2. Database & CRM Build-out – design, deploy, and maintain an occupant tracker and a lightweight CRM; migrate legacy data.
3. Automation & Integration – link CRM, Google Sheets, and Tally using Apps Script/Zoho Flow/Zapier.
4. Process Documentation – own the internal wiki; keep SOPs and RACI charts current.
5. Dashboards & Reporting – craft Looker Studio boards for collections, projects, and facility KPIs.
6. User Training & Support – deliver monthly clinics; teach teams how to use G Suite and ChatGPT to improve productivity.
7. Security & Compliance – enforce 2FA, backup policies, and basic network hygiene.
8. Vendor Co-ordination – liaise with Zoho, Tally consultants, and ISP/MSP vendors; manage small capex items.

Required Skills & Experience

Workspace & Security:
- LAN/Wi-Fi basics & device hardening (Core)

Automation & Low-Code:
- Apps Script or Zoho Creator/Flow; REST APIs & webhooks (Core)
- Workflow bridges (Zapier / Make / n8n) (Core)
- Cursor, Loveable, or similar AI-driven low-code tools (Bonus)

Data Extraction & Integrations:
- Document AI / OCR stack for PDF leases (Google DocAI, Textract, etc.) (Core)
- Tally Prime ODBC/API (Core)

CRM & Customer-360:
- End-to-end rollout of a CRM (Zoho/Freshsales), including migration and custom modules (Core)
- Help-desk tooling (Zoho Desk, Freshdesk) (Bonus)

Analytics & Reporting:
- Advanced Google Sheets (ARRAYFORMULA, QUERY, IMPORTRANGE) and Looker Studio dashboards (Core)
- Data-warehouse concepts (BigQuery/Redshift) for a unified customer view (Bonus)

Programming & Scripting:
- Python or Node.js for lightweight cloud functions / ETL (Core)
- Prompt engineering & Gen-AI APIs (OpenAI, Claude) for copilots (Core)

Project & Knowledge Management:
- Trello (or equivalent Kanban) (Bonus)
- Notion / Google Sites for wiki & SOPs (Core)

Soft Skills:
- Clear documentation and bilingual (English/Hindi) training; stakeholder comms (Core)

Compensation: ₹40–50k p.m.
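As one hedged illustration of the Document AI / OCR requirement, extracting text from a PDF lease with the google-cloud-documentai client might look like the sketch below; the project, location, and processor IDs are hypothetical placeholders for a provisioned Document AI OCR processor.

```python
# A hedged sketch of OCR on a PDF lease with the google-cloud-documentai
# client; the project, location, and processor IDs are hypothetical
# placeholders for a provisioned Document AI OCR processor.
from google.cloud import documentai

def ocr_lease(pdf_path: str) -> str:
    """Run a PDF through a Document AI processor and return its full text."""
    client = documentai.DocumentProcessorServiceClient()
    name = client.processor_path("my-project", "us", "my-ocr-processor-id")
    with open(pdf_path, "rb") as f:
        raw = documentai.RawDocument(content=f.read(), mime_type="application/pdf")
    result = client.process_document(
        request=documentai.ProcessRequest(name=name, raw_document=raw)
    )
    return result.document.text

if __name__ == "__main__":
    print(ocr_lease("lease_agreement.pdf")[:500])
```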

Posted 2 weeks ago

Apply


3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

About Us

Yubi stands for ubiquitous. But Yubi will also stand for transparency, collaboration, and the power of possibility. From being a disruptor in India's debt market to marching towards global corporate markets, from one product to one holistic product suite with seven products, Yubi is the place to unleash potential. Freedom, not fear. Avenues, not roadblocks. Opportunity, not obstacles.

About Yubi

Yubi, formerly known as CredAvenue, is re-defining global debt markets by freeing the flow of finance between borrowers, lenders, and investors. We are the world's possibility platform for the discovery, investment, fulfillment, and collection of any debt solution. At Yubi, opportunities are plenty and we equip you with the tools to seize them.

In March 2022, we became India's fastest fintech and most impactful startup to join the unicorn club with a Series B fundraising round of $137 million. In 2020, we began our journey with a vision of transforming and deepening the global institutional debt market through technology. Our two-sided debt marketplace helps institutional and HNI investors find the widest network of corporate borrowers and debt products on one side, and helps corporates discover investors and access debt capital efficiently on the other. Switching between platforms is easy, which means investors can lend, invest, and trade bonds - all in one place. Our platforms shake up the traditional debt ecosystem and offer new ways of digital finance.

Yubi Credit Marketplace – With the largest selection of lenders on one platform, our credit marketplace helps enterprises partner with lenders of their choice for any capital requirements.
Yubi Invest – Fixed-income securities platform for wealth managers and financial advisors to channel client investments in fixed income.
Financial Services Platform – Designed for financial institutions to manage co-lending partnerships and asset-based securitization.
Spocto – Debt recovery and risk mitigation platform.
Accumn – Dedicated SaaS solutions platform powered by decision-grade data, analytics, pattern identification, early warning signals, and predictions for lenders, investors, and business enterprises.

So far, we have onboarded more than 17,000 enterprises and 6,200 investors and lenders, and facilitated debt volumes of over INR 1,40,000 crore. Backed by marquee investors like Insight Partners, B Capital Group, Dragoneer, Sequoia Capital, LightSpeed, and Lightrock, we are the only platform of its kind globally, revolutionizing the segment.

At Yubi, people are at the core of the business and our most valuable assets. Yubi is constantly growing, with 1000+ like-minded individuals today who are changing the way people perceive debt. We are a fun bunch who are highly motivated and driven to create a purposeful impact. Come join the club to be a part of our epic growth story.

Requirements

Key Responsibilities:
Lead and mentor a dynamic Data Science team in developing scalable, reusable tools and capabilities to advance machine learning models, specializing in computer vision, natural language processing, API development, and product building.
Drive innovative solutions for complex CV-NLP challenges, including tasks like image classification, data extraction, text classification, and summarization, leveraging a diverse set of data inputs such as images, documents, and text.
Collaborate with cross-functional teams, including DevOps and Data Engineering, to design and implement efficient ML pipelines that facilitate seamless model integration and deployment in production environments.
Spearhead the optimization of the model development lifecycle, focusing on scalability for training and production scoring to manage significant data volumes and user traffic.
Implement cutting-edge technologies and techniques to enhance model training throughput and response times.

Required Experience & Expertise:
3+ years of experience in developing computer vision models and applications.
Extensive knowledge and experience in Data Science and Machine Learning techniques, with a proven track record of leading and executing complex projects.
Deep understanding of the entire ML model development lifecycle, including design, development, training, testing/evaluation, and deployment, with the ability to guide best practices.
Expertise in writing high-quality, reusable code for various stages of model development, including training, testing, and deployment.
Advanced proficiency in Python programming, with extensive experience in ML frameworks such as Scikit-learn, TensorFlow, and Keras, and API development frameworks such as Django and FastAPI.
Demonstrated success in overcoming OCR challenges using advanced methodologies and libraries like Tesseract, Keras-OCR, EasyOCR, etc.
Proven experience in architecting reusable APIs to integrate OCR capabilities across diverse applications and use cases.
Proficiency with public cloud OCR services like AWS Textract, GCP Vision, and Document AI.
History of integrating OCR solutions into production systems for efficient text extraction from various media, including images and PDFs.
Comprehensive understanding of convolutional neural networks (CNNs) and hands-on experience with deep learning models such as YOLO.
Strong capability to prototype, evaluate, and implement state-of-the-art ML advancements, particularly in OCR and CV-NLP.
Extensive experience in NLP tasks such as Named Entity Recognition (NER) and text classification, and in fine-tuning Large Language Models (LLMs).

This senior role is tailored for visionary professionals eager to push the boundaries of CV-NLP and drive impactful data-driven innovations using both well-established methods and the latest technological advancements.

Benefits

We are committed to creating a diverse environment and are proud to be an equal-opportunity employer. All qualified applicants receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, or age.
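To make the "reusable OCR API" requirement concrete, a minimal FastAPI endpoint wrapping Tesseract might look like the sketch below; a production service would add authentication, input validation, and async job handling.

```python
# A minimal sketch of a reusable OCR endpoint wrapping Tesseract; a
# production service would add auth, validation, and async job handling.
import io

import pytesseract
from fastapi import FastAPI, File, UploadFile
from PIL import Image

app = FastAPI(title="ocr-service")

@app.post("/v1/ocr")
async def ocr_image(file: UploadFile = File(...)) -> dict:
    """Accept an uploaded image and return the extracted text."""
    image = Image.open(io.BytesIO(await file.read()))
    return {"filename": file.filename, "text": pytesseract.image_to_string(image)}

# Run locally with: uvicorn ocr_service:app --reload
```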

Posted 2 weeks ago

Apply