7.0 - 12.0 years
22 - 25 Lacs
India
On-site
TECHNICAL ARCHITECT

Key Responsibilities
1. Designing technology systems: Plan and design the structure of technology solutions, and work with design and development teams to assist with the process.
2. Communicating: Communicate system requirements to software development teams, explain plans to developers and designers, and convey the value of a solution to stakeholders and clients.
3. Managing stakeholders: Work with clients and stakeholders to understand their vision for the systems, and manage stakeholder expectations.
4. Architectural oversight: Develop and implement robust architectures for AI/ML and data science solutions, ensuring scalability, security, and performance. Oversee architecture for data-driven web applications and data science projects, providing guidance on best practices in data processing, model deployment, and end-to-end workflows.
5. Problem solving: Identify and troubleshoot technical problems in existing or new systems, and assist with resolving them when they arise.
6. Ensuring quality: Ensure that systems meet security and quality standards, and monitor them to confirm they meet both user needs and business goals.
7. Project management: Break down project requirements into manageable pieces of work, and organise the workloads of technical teams.
8. Tool & framework expertise: Utilise relevant tools and technologies, including but not limited to LLMs, TensorFlow, PyTorch, Apache Spark, cloud platforms (AWS, Azure, GCP), web app development frameworks, and DevOps practices.
9. Continuous improvement: Stay current on emerging technologies and methods in AI, ML, data science, and web applications, bringing insights back to the team to foster continuous improvement.

Technical Skills
1. Proficiency in AI/ML frameworks such as TensorFlow, PyTorch, Keras, and scikit-learn for developing machine learning and deep learning models.
2. Knowledge of or experience working with self-hosted or managed LLMs.
3. Knowledge of or experience with NLP tools and libraries (e.g., SpaCy, NLTK, Hugging Face Transformers), and familiarity with computer vision frameworks such as OpenCV and related libraries for image processing and object recognition.
4. Experience or knowledge in back-end frameworks (e.g., Django, Spring Boot, Node.js, Express) and building RESTful and GraphQL APIs.
5. Familiarity with microservices, serverless, and event-driven architectures. Strong understanding of design patterns (e.g., Factory, Singleton, Observer) to ensure code scalability and reusability.
6. Proficiency in modern front-end frameworks such as React, Angular, or Vue.js, with an understanding of responsive design, UX/UI principles, and state management (e.g., Redux).
7. In-depth knowledge of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB, Cassandra), as well as caching solutions (e.g., Redis, Memcached).
8. Expertise in tools such as Apache Spark, Hadoop, Pandas, and Dask for large-scale data processing.
9. Understanding of data warehouses and ETL tools (e.g., Snowflake, BigQuery, Redshift, Airflow) to manage large datasets.
10. Familiarity with visualisation tools (e.g., Tableau, Power BI, Plotly) for building dashboards and conveying insights.
11. Knowledge of deploying models with TensorFlow Serving, Flask, FastAPI, or cloud-native services (e.g., AWS SageMaker, Google AI Platform) (a serving sketch follows this posting).
12. Familiarity with MLOps tools and practices for versioning, monitoring, and scaling models (e.g., MLflow, Kubeflow, TFX).
13. Knowledge of or experience in CI/CD, IaC, and cloud-native toolchains.
14. Understanding of security principles, including firewalls, VPC, IAM, and TLS/SSL for secure communication.
15. Knowledge of API Gateway, service mesh (e.g., Istio), and NGINX for API security, rate limiting, and traffic management.

Experience Required: Technical Architect with 7-12 years of experience
Salary: 22-25 LPA
Job Types: Full-time, Permanent
Pay: ₹2,200,000.00 - ₹2,500,000.00 per year
Location Type: In-person
Work Location: In person
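For illustration of the deployment option named in skill item 11, here is a minimal sketch of serving a model behind FastAPI. The toy model, feature names, and route are assumptions made for the sketch, not details from the posting.

```python
# Illustrative only: a minimal FastAPI prediction service of the kind skill
# item 11 describes. The toy model, feature names, and route are assumptions
# for the sketch, not details taken from the posting.
from fastapi import FastAPI
from pydantic import BaseModel
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

app = FastAPI(title="demo-model-service")

# Train a toy model at import time; a real service would load a versioned artifact.
iris = load_iris()
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(iris.data, iris.target)

class Features(BaseModel):
    sepal_length: float
    sepal_width: float
    petal_length: float
    petal_width: float

@app.post("/predict")
def predict(f: Features):
    row = [[f.sepal_length, f.sepal_width, f.petal_length, f.petal_width]]
    pred = int(model.predict(row)[0])
    return {"class_index": pred, "class_name": str(iris.target_names[pred])}
```

Run with `uvicorn main:app`; in practice the same endpoint would sit behind the API gateway and rate-limiting concerns that skill items 14-15 mention.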
Posted 9 hours ago
2.0 - 3.0 years
2 - 8 Lacs
Bengaluru
On-site
The Risk division is responsible for credit, market and operational risk, model risk, independent liquidity risk, and insurance throughout the firm.

The Goldman Sachs Group, Inc. is a leading global investment banking, securities and investment management firm that provides a wide range of financial services to a substantial and diversified client base that includes corporations, financial institutions, governments, and individuals. Founded in 1869, the firm is headquartered in New York and maintains offices in all major financial centers around the world. We commit people, capital and ideas to help our clients, shareholders and the communities we serve to grow. Our people are our greatest asset – we say it often and with good reason. It is only with the determination and dedication of our people that we can serve our clients, generate long-term value for our shareholders and contribute to the broader public. We take pride in supporting each colleague both professionally and personally. From collaborative workspaces and ergonomic services to wellbeing and resilience offerings, we offer our people the flexibility and support they need to reach their goals in and outside the office.

RISK BUSINESS
The Risk Business identifies, monitors, evaluates, and manages the firm's financial and non-financial risks in support of the firm's Risk Appetite Statement and the firm's strategic plan. Operating in a fast-paced and dynamic environment and utilizing best-in-class risk tools and frameworks, Risk teams are analytically curious, have an aptitude to challenge, and hold an unwavering commitment to excellence.

Overview
To ensure uncompromising accuracy and timeliness in the delivery of risk metrics, our platform is continuously growing and evolving. Risk Engineering combines the principles of computer science, mathematics and finance to produce large-scale, computationally intensive calculations of the risk Goldman Sachs faces with each transaction we engage in. Market Risk Engineering has an opportunity for an Associate-level Software Engineer to work across a broad range of applications and an extremely diverse set of technologies to keep the suite operating at peak efficiency. As an Engineer in the Risk Engineering organization, you will have the opportunity to impact one or more aspects of risk management. You will work with a team of talented engineers to drive the build and adoption of common tools, platforms, and applications. The team builds solutions that are offered as a software product or as a hosted service. We are a dynamic team of talented developers and architects who partner with business areas and other technology teams to deliver high-profile projects using a raft of technologies that are fit for purpose (Java, cloud computing, HDFS, Spark, S3, ReactJS, and Sybase IQ, among many others). A glimpse of the interesting problems we engineer solutions for includes acquiring high-quality data, storing it, performing risk computations in a limited amount of time using distributed computing, and making data available to enable actionable risk insights through analytical and responsive user interfaces.

WHAT WE LOOK FOR
Senior Developer in large projects across a global team of developers and risk managers.
Performance tune applications to improve memory and CPU utilization.
Perform statistical analyses to identify trends and exceptions related to Market Risk metrics.
Build internal and external reporting for the output of risk metric calculations using data extraction tools, such as SQL, and data visualization tools, such as Tableau.
Utilize web development technologies to facilitate application development for front-end UIs used for risk management actions.
Develop software for calculations using databases like Snowflake, Sybase IQ and distributed HDFS systems.
Interact with business users to resolve issues with applications.
Design and support batch processes using scheduling infrastructure for calculating and distributing data to other systems.
Oversee junior technical team members in all aspects of the Software Development Life Cycle (SDLC), including design, code review and production migrations.

Skills And Experience
Bachelor's degree in Computer Science, Mathematics, Electrical Engineering or a related technical discipline.
2-3 years' experience working in a risk technology team at another bank or financial institution; experience in market risk technology is a plus.
Experience with one or more major relational/object databases.
Experience in software development, including a clear understanding of data structures, algorithms, software design and core programming concepts.
Comfortable multi-tasking, managing multiple stakeholders and working as part of a team.
Comfortable working with multiple languages.
Technologies: Scala, Java, Python, Spark, Linux and shell scripting, TDD (JUnit), build tools (Maven/Gradle/Ant).
Experience working with process scheduling platforms like Apache Airflow (see the DAG sketch below).
Should be ready to work with GS proprietary technology like Slang/SECDB.
An understanding of compute resources and the ability to interpret performance metrics (e.g., CPU, memory, threads, file handles).
Knowledge and experience in distributed computing: parallel computation on a single machine (e.g., Dask) and distributed processing on public cloud.
Knowledge of the SDLC and experience working through the entire life cycle of a project from start to end.
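The posting mentions batch processes built on scheduling infrastructure such as Apache Airflow. As a hedged illustration, the DAG id, schedule, and task bodies below are invented placeholders, not Goldman Sachs code:

```python
# Hypothetical sketch of a scheduled risk batch; the DAG id, schedule, and
# task bodies are invented placeholders, not Goldman Sachs code.
# Requires Apache Airflow 2.4+ (for the `schedule` argument).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def compute_risk_metrics(**context):
    # Placeholder: pull positions, run the risk calculation, stage results.
    print("computing risk metrics for", context["ds"])

def distribute_results(**context):
    # Placeholder: push staged metrics to downstream reporting systems.
    print("distributing results for", context["ds"])

with DAG(
    dag_id="daily_market_risk_batch",
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",  # nightly at 02:00
    catchup=False,
) as dag:
    compute = PythonOperator(task_id="compute_metrics", python_callable=compute_risk_metrics)
    publish = PythonOperator(task_id="distribute_results", python_callable=distribute_results)
    compute >> publish  # distribute only after the calculation succeeds
```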
Posted 9 hours ago
0.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Position Summary
We are seeking a highly motivated and analytical Quant Analyst to join Futures First. The role involves supporting the development and execution of quantitative strategies across financial markets.

Job Profile
Statistical Arbitrage & Strategy Development
Design and implement pairs, mean-reversion, and relative-value strategies in fixed income (govvies, corporate bonds, IRS).
Apply cointegration tests (Engle-Granger, Johansen), Kalman filters, and machine learning techniques for signal generation (see the sketch below).
Optimize execution using transaction cost analysis (TCA).

Correlation & Volatility Analysis
Model dynamic correlations between bonds, rates, and macro variables using PCA, copulas, and rolling regressions.
Forecast yield curve volatility using GARCH, stochastic volatility models, and implied-vol surfaces for swaptions.
Identify regime shifts (e.g., monetary policy impacts) and adjust strategies accordingly.

Seasonality & Pattern Recognition
Analyse calendar effects (quarter-end rebalancing, liquidity patterns) in sovereign bond futures and repo markets.
Develop time-series models (SARIMA, Fourier transforms) to detect cyclical trends.

Backtesting & Automation
Build Python-based backtesting frameworks (Backtrader, Qlib) to validate strategies.
Automate Excel-based reporting (VBA, xlwings) for P&L attribution and risk dashboards.
Integrate Bloomberg/Refinitiv APIs for real-time data feeds.

Requirements
Education Qualifications: B.Tech
Work Experience: 0-3 years

Skill Set
Must have: Strong grasp of probability theory, stochastic calculus (Ito's Lemma, SDEs), and time-series econometrics (ARIMA, VAR, GARCH).
Must have: Expertise in linear algebra (PCA, eigenvalue decomposition), numerical methods (Monte Carlo, PDE solvers), and optimization techniques.
Preferred: Knowledge of Bayesian statistics, Markov Chain Monte Carlo (MCMC), and machine learning (supervised/unsupervised learning).
Libraries: NumPy, Pandas, statsmodels, scikit-learn, arch (GARCH models).
Backtesting: Backtrader, Zipline, or custom event-driven frameworks.
Data handling: SQL, Dask (for large datasets); Power Query, pivot tables, Bloomberg Excel functions (BDP, BDH); VBA scripting for various tools and automation.
Experience with C++/Java (low-latency systems), QuantLib (fixed income pricing), or R (statistical analysis).
Yield curve modelling (Nelson-Siegel, Svensson), duration/convexity, OIS pricing.
Credit spreads, CDS pricing, and bond-CDS basis arbitrage.
Familiarity with VaR, CVaR, stress testing, and liquidity risk metrics.
Understanding of CCIL and NDS-OM (Indian market infrastructure).
Ability to translate intuition and patterns into quant models.
Strong problem-solving and communication skills (must be able to explain complex models to non-quants).
Comfortable working in a fast-paced environment.

Location: Gurugram. Work hours will be aligned to APAC markets.
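To make two of the named techniques concrete, here is an illustrative sketch on simulated data (not a trading strategy): an Engle-Granger cointegration test via statsmodels and a GARCH(1,1) fit via the arch package.

```python
# Illustrative sketch on simulated data (not a trading strategy): an
# Engle-Granger cointegration test via statsmodels and a GARCH(1,1) fit via
# the arch package, two of the techniques named in the profile.
import numpy as np
from arch import arch_model
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(0)

# Simulate a cointegrated pair: y tracks x plus stationary noise.
x = np.cumsum(rng.normal(size=1000))
y = 0.8 * x + rng.normal(scale=2.0, size=1000)
t_stat, p_value, _ = coint(y, x)  # Engle-Granger test
print(f"cointegration p-value: {p_value:.4f}")  # small p suggests a tradeable pair

# GARCH(1,1) on simulated daily returns, scaled to percent for numerical stability.
returns = 100 * rng.normal(scale=0.01, size=1000)
res = arch_model(returns, vol="Garch", p=1, q=1).fit(disp="off")
print(res.params)  # omega, alpha[1], beta[1] drive the volatility forecast
print(res.forecast(horizon=5).variance.iloc[-1])  # 5-step-ahead variance forecast
```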
Posted 3 days ago
7.0 - 12.0 years
25 - 30 Lacs
Pune
Work from Office
At BNY, our culture empowers you to grow and succeed. As a leading global financial services company at the center of the world's financial system, we touch nearly 20% of the world's investible assets. Every day around the globe, our 50,000+ employees bring the power of their perspective to the table to create solutions with our clients that benefit businesses, communities and people everywhere. We continue to be a leader in the industry, awarded as a top home for innovators and for creating an inclusive workplace. Through our unique ideas and talents, together we help make money work for the world. This is what BNY is all about.

We're seeking a future team member for the role of Vice President I to join our Data Management & Quantitative Analysis team. This role is located in Pune, MH or Chennai, TN (Hybrid).

In this role, you'll make an impact in the following ways:
BNY Data Analytics Reporting and Transformation ("DART") has grown rapidly, and today it represents a highly motivated and engaged team of skilled professionals with expertise in financial industry practices, reporting, analytics, and regulation. The team works closely with various groups across BNY to support the firm's Capital Adequacy, Counterparty Credit and Enterprise Risk modelling and data analytics, alongside support for the annual Comprehensive Capital Analysis and Review (CCAR) Stress Test. The Counterparty Credit Risk Data Analytics Team within DART designs and develops data-driven solutions aimed at strengthening the control framework around our risk metrics and reporting. For this team, we are looking for a Counterparty Risk Analytics Developer to support our Counterparty Credit Risk control framework.

Develop analytical tools using SQL & Python to drive business insights.
Utilize outlier detection methodologies to identify data anomalies in the financial risk space, ensuring proactive risk management (see the sketch below).
Analyze business requirements and translate them into practical solutions, developing data-driven controls to mitigate potential risks.
Plan and execute projects from concept to final implementation, demonstrating strong project management skills.
Present solutions to senior stakeholders, effectively communicating technical concepts and results.
Collaborate with internal and external auditors and regulators to ensure compliance with prescribed standards, maintaining the highest level of integrity and transparency.

To be successful in this role, we're seeking the following:
A Bachelor's degree in Engineering, Computer Science, Data Science, or a related discipline (Master's degree preferred).
At least 3 years of experience in a similar role or in Python development/data analytics.
Strong proficiency in Python (including data analytics and data visualization libraries) and SQL, with basic knowledge of HTML and Flask.
Ability to partner with technology and other stakeholders to ensure effective functional requirements, design, construction, and testing.
Knowledge of financial risk concepts and financial markets is strongly preferred.
Familiarity with outlier detection techniques (including autoencoder methods, random forest, etc.), clustering (k-means, etc.), and time series analysis (ARIMA, EWMA, GARCH, etc.) is a plus.
Practical experience working with Python (Pandas, NumPy, Matplotlib, Plotly, Dash, Scikit-learn, TensorFlow, Torch, Dask, CUDA).
Intermediate SQL skills (including querying data, joins, table creation, and basic performance optimization techniques).
Knowledge of financial risk concepts and financial markets.
Knowledge of outlier detection techniques, clustering, and time series analysis.
Strong project management skills.
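As a purely illustrative sketch of the data-driven controls described above (simulated exposure data; IsolationForest is one reasonable choice among the outlier-detection techniques the posting lists):

```python
# Sketch of one such data-quality control, on simulated data: flag anomalous
# exposure records with scikit-learn's IsolationForest. Column names, values,
# and the contamination threshold are invented for illustration.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
df = pd.DataFrame({
    "exposure": rng.normal(1_000_000, 50_000, 500),
    "collateral": rng.normal(900_000, 40_000, 500),
})
df.loc[::97, "exposure"] *= 5  # inject a few bad records

iso = IsolationForest(contamination=0.01, random_state=0)
df["flag"] = iso.fit_predict(df[["exposure", "collateral"]])  # -1 marks an outlier
print(df[df["flag"] == -1])  # records routed to a review/control queue
```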
Posted 4 days ago
12.0 - 18.0 years
0 Lacs
Tamil Nadu, India
Remote
Join us as we work to create a thriving ecosystem that delivers accessible, high-quality, and sustainable healthcare for all.

This position requires expertise in designing, developing, debugging, and maintaining AI-powered applications and data engineering workflows for both local and cloud environments. The role involves working on large-scale projects, optimizing AI/ML pipelines, and ensuring scalable data infrastructure. As a PMTS, you will be responsible for integrating Generative AI (GenAI) capabilities, building data pipelines for AI model training, and deploying scalable AI-powered microservices. You will collaborate with AI/ML, Data Engineering, DevOps, and Product teams to deliver impactful solutions that enhance our products and services. Additionally, it would be desirable if the candidate has experience in retrieval-augmented generation (RAG), fine-tuning pre-trained LLMs, AI model evaluation, data pipeline automation, and optimizing cloud-based AI deployments.

Responsibilities
AI-Powered Software Development & API Integration: Develop AI-driven applications, microservices, and automation workflows using FastAPI, Flask, or Django, ensuring cloud-native deployment and performance optimization. Integrate OpenAI APIs (GPT models, Embeddings, Function Calling) and Retrieval-Augmented Generation (RAG) techniques to enhance AI-powered document retrieval, classification, and decision-making.
Data Engineering & AI Model Performance Optimization: Design, build, and optimize scalable data pipelines for AI/ML workflows using Pandas, PySpark, and Dask, integrating data sources such as Kafka, AWS S3, Azure Data Lake, and Snowflake. Enhance AI model inference efficiency by implementing vector retrieval using FAISS, Pinecone, or ChromaDB (see the retrieval sketch after this posting), and optimize API latency with tuning techniques (temperature, top-k sampling, max tokens settings).
Microservices, APIs & Security: Develop scalable RESTful APIs for AI models and data services, ensuring integration with internal and external systems while securing API endpoints using OAuth, JWT, and API Key Authentication. Implement AI-powered logging, observability, and monitoring to track data pipelines, model drift, and inference accuracy, ensuring compliance with AI governance and security best practices.
AI & Data Engineering Collaboration: Work with AI/ML, Data Engineering, and DevOps teams to optimize AI model deployments, data pipelines, and real-time/batch processing for AI-driven solutions. Engage in Agile ceremonies, backlog refinement, and collaborative problem-solving to scale AI-powered workflows in areas like fraud detection, claims processing, and intelligent automation.
Cross-Functional Coordination and Communication: Collaborate with Product, UX, and Compliance teams to align AI-powered features with user needs, security policies, and regulatory frameworks (HIPAA, GDPR, SOC2). Ensure seamless integration of structured and unstructured data sources (SQL, NoSQL, vector databases) to improve AI model accuracy and retrieval efficiency.
Mentorship & Knowledge Sharing: Mentor junior engineers on AI model integration, API development, and scalable data engineering best practices, and conduct knowledge-sharing sessions.

Education & Experience Required
12-18 years of experience in software engineering or AI/ML development, preferably in AI-driven solutions.
Hands-on experience with Agile development, SDLC, CI/CD pipelines, and AI model deployment lifecycles.
Bachelor's Degree or equivalent in Computer Science, Engineering, Data Science, or a related field.
Proficiency in full-stack development with expertise in Python (preferred for AI) and Java.
Experience with structured & unstructured data: SQL (PostgreSQL, MySQL, SQL Server), NoSQL (OpenSearch, Redis, Elasticsearch), vector databases (FAISS, Pinecone, ChromaDB).
Cloud & AI infrastructure: AWS (Lambda, SageMaker, ECS, S3); Azure (Azure OpenAI, ML Studio).
GenAI frameworks & tools: OpenAI API, Hugging Face Transformers, LangChain, LlamaIndex, AutoGPT, CrewAI.
Experience in LLM deployment, retrieval-augmented generation (RAG), and AI search optimization.
Proficiency in AI model evaluation (BLEU, ROUGE, BERTScore, cosine similarity) and responsible AI deployment.
Strong problem-solving skills, AI ethics awareness, and the ability to collaborate across AI, DevOps, and data engineering teams.
Curiosity and eagerness to explore new AI models, tools, and best practices for scalable GenAI adoption.

About athenahealth
Here's our vision: To create a thriving ecosystem that delivers accessible, high-quality, and sustainable healthcare for all.

What's unique about our locations? From an historic, 19th-century arsenal to a converted, landmark power plant, all of athenahealth's offices were carefully chosen to represent our innovative spirit and promote the most positive and productive work environment for our teams. Our 10 offices across the United States and India — plus numerous remote employees — all work to modernize the healthcare experience, together.

Our Company Culture Might Be Our Best Feature. We don't take ourselves too seriously. But our work? That's another story. athenahealth develops and implements products and services that support US healthcare: It's our chance to create healthier futures for ourselves, for our family and friends, for everyone. Our vibrant and talented employees — or athenistas, as we call ourselves — spark the innovation and passion needed to accomplish our goal. We continue to expand our workforce with amazing people who bring diverse backgrounds, experiences, and perspectives at every level, and foster an environment where every athenista feels comfortable bringing their best selves to work. Our size makes a difference, too: We are small enough that your individual contributions will stand out — but large enough to grow your career with our resources and established business stability.

Giving back is integral to our culture. Our athenaGives platform strives to support food security, expand access to high-quality healthcare for all, and support STEM education to develop providers and technologists who will provide access to high-quality healthcare for all in the future. As part of the evolution of athenahealth's Corporate Social Responsibility (CSR) program, we've selected nonprofit partners that align with our purpose and let us foster long-term partnerships for charitable giving, employee volunteerism, insight sharing, collaboration, and cross-team engagement.

What can we do for you? Along with health and financial benefits, athenistas enjoy perks specific to each location, including commuter support, employee assistance programs, tuition assistance, employee resource groups, and collaborative workspaces — some offices even welcome dogs. In addition to our traditional benefits and perks, we sponsor events throughout the year, including book clubs, external speakers, and hackathons.
And we provide athenistas with a company culture based on learning, the support of an engaged team, and an inclusive environment where all employees are valued. We also encourage a better work-life balance for athenistas with our flexibility. While we know in-office collaboration is critical to our vision, we recognize that not all work needs to be done within an office environment, full-time. With consistent communication and digital collaboration tools, athenahealth enables employees to find a balance that feels fulfilling and productive for each individual situation.
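To ground the RAG responsibility described in this posting, here is a hedged sketch of the retrieval step: documents are embedded (with TF-IDF here purely for self-containment; a production system would use a learned embedding model such as an embeddings API) and searched with FAISS, which the posting names for vector retrieval. All documents are invented.

```python
# Hedged sketch of the retrieval step in a RAG workflow. TF-IDF stands in for
# a learned embedding model so the example is self-contained; documents are
# invented.
import faiss  # pip install faiss-cpu
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "Claim denied: missing prior authorization for imaging.",
    "Patient eligibility verified for plan year 2024.",
    "Remittance advice posted; payer adjusted the claim amount.",
]
vec = TfidfVectorizer().fit(docs)
doc_matrix = vec.transform(docs).toarray().astype("float32")

index = faiss.IndexFlatL2(doc_matrix.shape[1])  # exact L2 nearest-neighbor search
index.add(doc_matrix)

query = vec.transform(["why was the claim denied?"]).toarray().astype("float32")
_, ids = index.search(query, 1)  # top-1 document
print(docs[ids[0][0]])  # context that would be passed into the LLM prompt
```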
Posted 4 days ago
5.0 - 7.0 years
10 - 14 Lacs
Bengaluru
Work from Office
As an Applied AI Engineer, you will work at the intersection of AI research and practical implementation. You will develop machine learning (ML) and deep learning (DL) models, integrate them into our SaaS platform, and optimize them for scalability, performance, and business impact.

Key Responsibilities:
Data Engineering & Feature Engineering: Work with structured and unstructured data to build high-quality datasets. Develop robust feature engineering pipelines to improve model accuracy. Implement data preprocessing and augmentation techniques.
MLOps & AI Infrastructure: Build and maintain ML pipelines for continuous integration and deployment (CI/CD). Implement model monitoring, retraining, and performance tracking frameworks. Work with cloud platforms (AWS, GCP, Azure) for AI model deployment and scaling.
AI Integration in SaaS Applications: Collaborate with software engineers to integrate AI models into customer-facing SaaS products. Develop APIs and microservices for seamless AI-powered functionalities. Optimize inference performance for real-time and batch processing scenarios.
Collaboration & Research: Stay updated with the latest AI research and bring innovative solutions to production. Work closely with product managers, designers, and engineers to align AI capabilities with business goals. Participate in code reviews, knowledge sharing, and AI/ML best practices.
Prompt Engineering: Design, develop, and refine AI-generated text prompts for various applications to ensure accuracy, engagement, and relevance. Craft and optimize prompts that guide our AI systems to generate accurate, informative, and creative outputs. Build accessible libraries of prompts, keywords, and syntax guidelines for optimal query results. Develop robust evaluation frameworks to assess AI model performance and prompt effectiveness (see the sketch below). Test and analyze outputs by experimenting with different prompts and measuring against defined metrics. Create dashboards and reporting tools to track AI performance across multiple dimensions. Apply human judgment to identify gaps in AI-generated output and refine prompts accordingly. Implement continuous improvement processes based on evaluation insights.

What We're Looking For:
Experience: 5-6 years in AI/ML engineering, with hands-on experience in Generative AI, NLP, and deploying AI solutions at scale.
Technical Expertise: Strong background in machine learning, deep learning, and NLP. Proficiency in Python. Help assess performance metrics, develop novel agent frameworks, create and oversee data workflows, and conduct extensive testing to deploy innovative capabilities. Ship with high intent and work with the team to improve your ability to iterate and ship AI-powered features over time. Experience with data processing tools (Pandas, NumPy, Spark, Dask). Familiarity with MLOps, CI/CD, and cloud platforms (AWS SageMaker, GCP Vertex AI, Azure ML).
AI Deployment & Optimization: Experience optimizing models for performance, interpretability, and real-time applications.
Strong Problem-Solving Skills: Ability to translate business problems into AI-driven solutions.
SaaS Experience (Preferred): Understanding of how AI enhances SaaS applications and workflows.
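As an illustrative toy of the prompt-evaluation loop described above (the `call_model` stub, prompts, and eval set are all invented; a real harness would call the team's actual LLM API and larger labeled sets):

```python
# Toy prompt-evaluation harness; `call_model` is a stub standing in for a
# real LLM API call, and the prompts/eval set are invented. The point is the
# loop: score each prompt variant against labeled examples.
def call_model(prompt: str, text: str) -> str:
    # Stub model: replace with a real LLM call; it ignores the prompt here.
    return "positive" if "great" in text.lower() else "negative"

PROMPTS = {
    "v1": "Classify the sentiment of this review as positive or negative:\n{text}",
    "v2": "You are a strict sentiment rater. Answer only 'positive' or 'negative'.\nReview: {text}",
}
EVAL_SET = [
    ("The product was great and arrived early", "positive"),
    ("Terrible support, would not recommend", "negative"),
]

for name, template in PROMPTS.items():
    correct = sum(call_model(template.format(text=t), t) == label for t, label in EVAL_SET)
    print(f"prompt {name}: accuracy {correct / len(EVAL_SET):.0%}")
```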
Posted 4 days ago
0 years
0 Lacs
Andhra Pradesh
On-site
Combine interface design concepts with digital design and establish milestones to encourage cooperation and teamwork. Develop overall concepts for improving the user experience within a business webpage or product, ensuring all interactions are intuitive and convenient for customers. Collaborate with back-end web developers and programmers to improve usability. Conduct thorough testing of user interfaces on multiple platforms to ensure all designs render correctly and systems function properly.

Convert jobs from Talend ETL to Python and convert Lead SQLs to Snowflake. We are looking for developers with Python and SQL skills: developers should be proficient in Python (especially Pandas, PySpark, or Dask) for ETL scripting, with strong SQL skills to translate complex queries. They need expertise in Snowflake SQL for migrating and optimizing queries, as well as experience with data pipeline orchestration (e.g., Airflow) and cloud integration for automation and data loading (a migration sketch follows below). Familiarity with data transformation, error handling, and logging is also essential.

About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence.

Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
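As a hedged sketch of the Talend-to-Python migration pattern the posting describes, re-implement a transform in pandas and load the result into Snowflake with `write_pandas` from the snowflake-connector-python package. The source file, column names, table name, and connection parameters are all placeholders.

```python
# Hedged sketch: re-implement a Talend transform in pandas, then load to
# Snowflake via write_pandas (snowflake-connector-python). Source file,
# columns, table name, and credentials are placeholders.
import pandas as pd
import snowflake.connector
from snowflake.connector.pandas_tools import write_pandas

# Extract + transform (previously a Talend job): clean and aggregate leads.
df = pd.read_csv("leads.csv")  # placeholder source
df["email"] = df["email"].str.strip().str.lower()
daily = df.groupby("signup_date", as_index=False).agg(lead_count=("email", "count"))

conn = snowflake.connector.connect(
    account="<account>", user="<user>", password="<password>",
    warehouse="<wh>", database="<db>", schema="<schema>",  # placeholders
)
try:
    write_pandas(conn, daily, "DAILY_LEADS", auto_create_table=True)
finally:
    conn.close()
```

In practice a job like this would be wrapped in an Airflow task, with error handling and logging around the extract and load steps, as the posting notes.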
Posted 5 days ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Senior Data Scientist — Gen AI/ML Expert
Location: Hybrid — Gurugram
Company: Mechademy – Industrial Reliability & Predictive Analytics

About Mechademy
At Mechademy, we are redefining the future of reliability in rotating machinery with our flagship product, Turbomechanica. Built at the intersection of physics-based models, AI, and machine learning, Turbomechanica delivers prescriptive analytics that detect potential equipment issues before they escalate, maximizing uptime, extending asset life, and reducing operational risks for our industrial clients.

The Role
We are seeking a talented and driven Senior Data Scientist (AI/ML) with 3+ years of experience to join our AI team. You will play a critical role in building scalable ML pipelines, integrating cutting-edge language models, and developing autonomous agent-based systems that transform how predictive maintenance is done for industrial equipment. This is a highly technical and hands-on role, with a strong emphasis on real-world AI deployments — working directly with sensor data, time-series analytics, anomaly detection, distributed ML, and LLM-powered agentic workflows.

What Makes This Role Unique
Work on real-world industrial AI problems, combining physics-based models with modern ML/LLM systems.
Collaborate with domain experts, engineers, and product leaders to directly impact critical industrial operations.
Freedom to experiment with new tools, models, and techniques — with full ownership of your work.
Help shape our technical roadmap as we scale our AI-first predictive analytics platform.
Flexible hybrid work culture with high-impact visibility.

Key Responsibilities
Design & Develop ML Pipelines: Build scalable, production-grade ML pipelines for predictive maintenance, anomaly detection, and time-series analysis.
Distributed Model Training: Leverage distributed computing frameworks (e.g., Ray, Dask, Spark, Horovod) for large-scale model training (see the Ray sketch below).
LLM Integration & Optimization: Fine-tune, optimize, and deploy large language models (Llama, GPT, Mistral, Falcon, etc.) for applications like summarization, RAG (Retrieval-Augmented Generation), and knowledge extraction.
Agent-Based AI Pipelines: Build intelligent multi-agent systems capable of reasoning, planning, and executing complex tasks via tool usage, memory, and coordination.
End-to-End MLOps: Own the full ML lifecycle — from research, experimentation, and deployment to monitoring and production optimization.
Algorithm Development: Research, evaluate, and implement state-of-the-art ML/DL/statistical algorithms for real-world sensor data.
Collaborative Development: Work closely with cross-functional teams including software engineers, domain experts, product managers, and leadership.

Core Requirements
3+ years of professional experience in AI/ML, data science, or applied ML engineering.
Strong hands-on experience with modern LLMs (Llama, GPT series, Mistral, Falcon, etc.), fine-tuning, prompt engineering, and RAG techniques.
Familiarity with frameworks like LangChain, LlamaIndex, or equivalent for LLM application development.
Practical experience in agentic AI pipelines: tool use, sequential reasoning, and multi-agent orchestration.
Strong proficiency in Python (Pandas, NumPy, Scikit-learn) and at least one deep learning framework (TensorFlow, PyTorch, or JAX).
Exposure to distributed ML frameworks (Ray, Dask, Horovod, Spark ML, etc.).
Experience with containerization and orchestration (Docker, Kubernetes).
Strong problem-solving ability, an ownership mindset, and the ability to work in fast-paced startup environments.
Excellent written and verbal communication skills.

Bonus / Good to Have
Experience with time-series data, sensor data processing, and anomaly detection.
Familiarity with CI/CD pipelines and MLOps best practices.
Knowledge of cloud deployment, real-time system optimization, and industrial data security standards.
Prior open-source contributions or active GitHub projects.

What We Offer
Opportunity to work on cutting-edge technology transforming industrial AI.
Direct ownership, autonomy, and visibility into product impact.
Flexible hybrid work culture.
Professional development budget and continuous learning opportunities.
Collaborative, fast-moving, and growth-oriented team culture.
Health benefits and performance-linked rewards.
Potential for equity participation for high-impact contributors.

Note: Title and compensation will be aligned with the candidate's experience and potential impact.
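As a minimal hedged sketch of the distributed-compute requirement, using Ray (one of the frameworks the posting lists) to fan anomaly scoring out across workers; the anomaly score and simulated sensor windows are toy assumptions:

```python
# Minimal Ray sketch: parallel anomaly scoring over simulated sensor windows.
# The scoring function and data are toy assumptions for illustration.
import numpy as np
import ray

ray.init(ignore_reinit_error=True)

@ray.remote
def score_window(window: np.ndarray) -> float:
    # Toy anomaly score: peak deviation relative to the window's spread.
    return float((window.max() - window.mean()) / (window.std() + 1e-9))

rng = np.random.default_rng(7)
windows = [rng.normal(size=2048) for _ in range(64)]  # simulated sensor windows
scores = ray.get([score_window.remote(w) for w in windows])  # parallel fan-out
print(f"max anomaly score: {max(scores):.2f}")
```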
Posted 5 days ago
3.0 - 5.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Data Science @Dream Sports:
Data Science at Dream Sports comprises seasoned data scientists striving to drive value with data across all our initiatives. The team has developed state-of-the-art solutions for forecasting and optimization, data-driven risk prevention systems, causal inference, and recommender systems to enhance product and user experience. We are a team of Machine Learning Scientists and Research Scientists with a portfolio of projects ranging from production ML systems that we conceptualize, build, support and innovate upon, to longer-term research projects with potential game-changing impact for Dream Sports. This is a unique opportunity for highly motivated candidates to work on real-world applications of machine learning in the sports industry, with access to state-of-the-art resources, infrastructure, and data from multiple sources streaming from 250 million users, and to contribute to our collaboration with the Columbia Dream Sports AI Innovation Center.

Your Role:
Executing clean experiments rigorously against pertinent performance guardrails and analysing performance metrics to infer actionable findings.
Developing and maintaining services with proactive monitoring, incorporating best industry practices for optimal service quality and risk mitigation.
Breaking down complex projects into actionable tasks that adhere to set management practices and ensure stakeholder visibility.
Managing the end-to-end lifecycle of large-scale ML projects, from data preparation and model training to deployment, monitoring, and upgrading of experiments.
Leveraging a strong foundation in ML, statistics, and deep learning to adeptly implement research-backed techniques for model development.
Staying abreast of the best ML practices and developments in the industry to mentor and guide team members.

Qualifiers:
3-5 years of experience in building, deploying and maintaining ML solutions.
Extensive experience with Python, SQL, TensorFlow/PyTorch, and at least one distributed data framework (Spark/Ray/Dask).
Working knowledge of machine learning, probability & statistics, and deep learning fundamentals.
Experience in designing end-to-end machine learning systems that work at scale.

About Dream Sports:
Dream Sports is India's leading sports technology company with 250 million users, housing brands such as Dream11, the world's largest fantasy sports platform, FanCode, a premier sports content & commerce platform, and DreamSetGo, a sports experiences platform. Dream Sports is based in Mumbai and has a workforce of close to 1,000 'Sportans'. Founded in 2008 by Harsh Jain and Bhavit Sheth, Dream Sports' vision is to 'Make Sports Better' for fans through the confluence of sports and technology. For more information: https://dreamsports.group/

Dream11 is the world's largest fantasy sports platform with 230 million users playing fantasy cricket, football, basketball & hockey on it. Dream11 is the flagship brand of Dream Sports, India's leading sports technology company, and has partnerships with several national & international sports bodies and cricketers.
Posted 5 days ago
6.0 years
0 Lacs
Erode, Tamil Nadu, India
Remote
Job Title: Senior Data Scientist (Advanced Modeling & Machine Learning)
Location: Remote
Job Type: Full-time

About the role
We are seeking a highly motivated and experienced Senior Data Scientist with a strong background in statistical modeling, machine learning, and natural language processing (NLP). This individual will work on advanced attribution models and predictive algorithms that power strategic decision-making across the business. The ideal candidate will have a Master's degree in a quantitative field, 4-6 years of hands-on experience, and demonstrated expertise in building models ranging from linear regression to cutting-edge deep learning and large language models (LLMs). A Ph.D. is strongly preferred.

Responsibilities
Analyze the data, identify patterns, and perform detailed exploratory data analysis (EDA).
Build and refine predictive models using techniques such as linear/logistic regression, XGBoost, and neural networks (see the sketch below).
Leverage machine learning and NLP methods to analyze large-scale structured and unstructured datasets.
Apply LLMs and transformers to develop solutions in content understanding, summarization, classification, and retrieval.
Collaborate with data engineers and product teams to deploy scalable data pipelines and model production systems.
Interpret model results, generate actionable insights, and present findings to technical and non-technical stakeholders.
Stay abreast of the latest research and integrate cutting-edge techniques into ongoing projects.

Required Qualifications
Master's degree in Computer Science, Statistics, Applied Mathematics, or a related field.
4-6 years of industry experience in data science or machine learning roles.
Strong statistical foundation, with practical experience in regression modeling, hypothesis testing, and A/B testing.
Hands-on knowledge of:
Programming languages: Python (primary), SQL, R (optional)
Libraries: pandas, NumPy, scikit-learn, TensorFlow, PyTorch, XGBoost, LightGBM, spaCy, Hugging Face Transformers
Distributed computing: PySpark, Dask
Big data and cloud platforms: Databricks, AWS SageMaker, Google Vertex AI, Azure ML
Data engineering tools: Apache Spark, Delta Lake, Airflow
ML workflow & visualization: MLflow, Weights & Biases, Plotly, Seaborn, Matplotlib
Version control and collaboration: Git, GitHub, Jupyter, VSCode

Preferred Qualifications
Master's or Ph.D. in a quantitative or technical field.
Experience deploying machine learning pipelines in production using CI/CD tools.
Familiarity with containerization (Docker) and orchestration (Kubernetes) in ML workloads.
Understanding of MLOps and model lifecycle management best practices.
Experience in real-time data processing (Kafka, Flink) and high-throughput ML systems.

What We Offer
Competitive salary and performance bonuses
Flexible working hours and remote options
Opportunities for continued learning and research
Collaborative, high-impact team environment
Access to cutting-edge technology and compute resources

To apply, send your resume to jobs@megovation.io to be part of a team pushing the boundaries of data-driven innovation.
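As a brief illustrative sketch of the modeling range the posting mentions (synthetic data via scikit-learn; hyperparameters are arbitrary, not a recommendation):

```python
# Illustrative sketch: a linear-regression baseline versus XGBoost on
# synthetic data. All values are invented; hyperparameters are arbitrary.
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

X, y = make_regression(n_samples=2000, n_features=10, noise=25.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

linear = LinearRegression().fit(X_tr, y_tr)
boosted = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05).fit(X_tr, y_tr)

# Compare out-of-sample fit before choosing the production candidate.
print(f"linear  R^2: {r2_score(y_te, linear.predict(X_te)):.3f}")
print(f"xgboost R^2: {r2_score(y_te, boosted.predict(X_te)):.3f}")
```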
Posted 5 days ago
5.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Factspan Overview:
Factspan is a pure-play data and analytics services organization. We partner with Fortune 500 enterprises to build an analytics center of excellence, generating insights and solutions from raw data to solve business challenges, make strategic recommendations, and implement new processes that help them succeed. With offices in Seattle, Washington, and Bengaluru, India, we use a global delivery model to service our customers. Our customers include industry leaders from the retail, financial services, hospitality, and technology sectors.

Job Overview (Primary Skills: GCP, Kubeflow, Python, Vertex AI)
As a Machine Learning Engineer, you will oversee the entire lifecycle of machine learning models. Your role involves collaborating with cross-functional teams, including data scientists, data engineers, software engineers, and DevOps specialists, to bridge the gap between experimental model development and reliable production systems. You will be responsible for automating ML pipelines, optimizing model training and serving, ensuring model governance, and maintaining the stability of deployed systems. This position requires a blend of experience in software engineering, data engineering, and machine learning systems, along with a strong understanding of DevOps practices, to enable faster experimentation, consistent performance, and scalable ML operations.

What You Will Do
Work with data science leadership and stakeholders to understand business objectives, map the scope of work, and support colleagues in achieving technical deliverables. Invest in strong relationships with colleagues and build a successful followership around a common goal.
Build and optimize ML pipelines for feature engineering, model training, and inference (see the pipeline sketch below).
Develop low-latency, high-throughput model endpoints for distributed environments.
Maintain cloud infrastructure for ML workloads, including GPUs/TPUs, across platforms like GCP, AWS, or Azure.
Troubleshoot, debug, and validate ML systems for performance and reliability.
Write and maintain automated tests (unit and integration).
Support discussions with data engineers on data collection, storage, and retrieval processes.
Collaborate with Data Governance to identify data issues and propose data cleansing or enhancement solutions.
Drive continuous improvement efforts to enhance performance and provide increased functionality, including developing processes for automation.

Skills You Will Need
Group work lead: Ability to lead portions of pod iteratives; can clearly communicate priorities and play an effective technical support role for colleagues.
Communication: Maintain timely communication with management and stakeholders on project progress, issues, and concerns, and develop effective communication plans tailored to diverse audiences.
Consultative mindset: Go beyond just providing analytics and actively engage stakeholders to understand their challenges and goals; ability to take a business-first viewpoint when developing solutions.
Cloud & MLOps: Expertise in managing cloud-based ML infrastructure (GCP, AWS, or Azure), coupled with DevOps practices, to ensure seamless model deployment, scalability, and system reliability. This includes containerization, CI/CD pipelines, and infrastructure-as-code tools.
Proficiency in programming languages such as Python, SQL, and Java.

Who You Are
5+ years of industry experience working with machine learning tools and technologies.
Familiar with agile development frameworks and collaboration tools (e.g., JIRA, Confluence).
Experienced with TensorFlow, PyTorch, scikit-learn, Kubeflow, pandas and NumPy; experience with frameworks like Ray and Dask preferred.
Strong in data engineering and object-oriented programming, and familiar with microservices and cloud technologies.
An ongoing learner who seeks out emerging technology and can influence others to think innovatively.
Energized by fast-paced environments and capable of supporting multiple projects; can identify primary and secondary objectives, prioritize time, and communicate timelines to team members.
Dedicated to fulfilling the ideals of diversity, inclusion, and respect that the client aspires to achieve every day in every way.
Regularly required to sit, talk, hear; use hands/fingers to touch, handle, and feel. Occasionally required to move about the workplace and reach with hands and arms. Requires close vision.

If you are passionate about leveraging technology to drive business innovation, possess excellent problem-solving skills, and thrive in a dynamic environment, we encourage you to apply for this exciting opportunity. Join us in shaping the future of data analytics and making a meaningful impact in the industry.

Why Should You Apply?
People: Join hands with a talented, warm, collaborative team and highly accomplished leadership.
Buoyant Culture: Embark on an exciting journey with a team that innovates solutions, tackles challenges head-on, and crafts a vibrant work environment.
Grow with Us: Be part of a hyper-growth startup with great opportunities to learn and innovate.
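Given the primary skills called out above (GCP, Kubeflow, Python, Vertex AI), here is a minimal hedged sketch of a pipeline definition using the KFP v2 SDK; component bodies are stubs and all names and URIs are invented. On GCP, the compiled YAML can be submitted to Vertex AI Pipelines.

```python
# Hedged sketch of a Kubeflow pipeline using the KFP v2 SDK (pip install kfp).
# Component bodies are stubs; the pipeline name and artifact URI are invented.
from kfp import compiler, dsl

@dsl.component
def prepare_features(rows: int) -> int:
    # Stub: build the training dataset; returns a row count for illustration.
    return rows

@dsl.component
def train_model(rows: int) -> str:
    # Stub: train and register a model, returning a made-up artifact URI.
    return f"gs://example-bucket/models/trained-on-{rows}-rows"

@dsl.pipeline(name="demo-training-pipeline")
def training_pipeline(rows: int = 1000):
    features = prepare_features(rows=rows)
    train_model(rows=features.output)

if __name__ == "__main__":
    compiler.Compiler().compile(training_pipeline, "pipeline.yaml")
```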
Posted 5 days ago
5.0 years
0 Lacs
Vadodara, Gujarat, India
On-site
About Loti AI, Inc
Loti AI specializes in protecting major celebrities, public figures, and corporate IP from online threats, focusing on deepfake and impersonation detection. Founded in 2022, Loti offers likeness protection, content location and removal, and contract enforcement across various online platforms, including social media and adult sites. The company's mission is to empower individuals to control their digital identities and privacy effectively.

We are seeking a highly skilled and experienced Senior Deep Learning Engineer to join our team. This individual will lead the design, development, and deployment of cutting-edge deep learning models and systems. The ideal candidate is passionate about leveraging state-of-the-art machine learning techniques to solve complex real-world problems, thrives in a collaborative environment, and has a proven track record of delivering impactful AI solutions.

Key Responsibilities
- Model Development and Optimization: Design, train, and deploy advanced deep learning models for applications such as computer vision, natural language processing, speech recognition, and recommendation systems. Optimize models for performance, scalability, and efficiency on various hardware platforms (e.g., GPUs, TPUs).
- Research and Innovation: Stay updated with the latest advancements in deep learning, AI, and related technologies. Develop novel architectures and techniques to push the boundaries of what's possible in AI applications.
- System Design and Deployment: Architect and implement scalable and reliable machine learning pipelines for training and inference. Collaborate with software and DevOps engineers to deploy models into production environments.
- Collaboration and Leadership: Work closely with cross-functional teams, including data scientists, product managers, and software engineers, to define project goals and deliverables. Provide mentorship and technical guidance to junior team members and peers.
- Data Management: Collaborate with data engineering teams to preprocess, clean, and augment large datasets. Develop tools and processes for efficient data handling and annotation.
- Performance Evaluation: Define and monitor key performance indicators (KPIs) to evaluate model performance and impact. Conduct rigorous A/B testing and error analysis to continuously improve model outputs.

Qualifications and Skills
- Education: Bachelor's or Master's degree in Computer Science, Electrical Engineering, or a related field. PhD preferred.
- Experience: 5+ years of experience in developing and deploying deep learning models, with a proven track record of delivering AI-driven products or research with measurable impact.
- Technical Skills: Proficiency in deep learning frameworks such as TensorFlow, PyTorch, or JAX. Strong programming skills in Python, with experience in libraries like NumPy, Pandas, and scikit-learn. Familiarity with distributed computing frameworks such as Spark or Dask. Hands-on experience with cloud platforms (AWS or GCP) and containerization tools (Docker, Kubernetes).
- Domain Expertise: Experience with at least one specialized domain, such as computer vision, NLP, or time-series analysis. Familiarity with reinforcement learning, generative models, or other advanced AI techniques is a plus.
- Soft Skills: Strong problem-solving skills and the ability to work independently. Excellent communication and collaboration abilities. Commitment to fostering a culture of innovation and excellence.
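As a rough illustration of the day-to-day model work such a role involves, here is a minimal PyTorch training-loop sketch; the architecture, data, and hyperparameters are synthetic placeholders:

```python
# Hypothetical PyTorch training loop; a real deepfake-detection model
# would use a far larger architecture and a real DataLoader.
import torch
from torch import nn

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Synthetic batch standing in for real features and labels.
x = torch.randn(32, 128)
y = torch.randint(0, 2, (32,))

device = "cuda" if torch.cuda.is_available() else "cpu"
model, x, y = model.to(device), x.to(device), y.to(device)

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)  # forward pass + loss
    loss.backward()              # backpropagation
    optimizer.step()             # parameter update
```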
Posted 6 days ago
0 years
0 Lacs
Andhra Pradesh, India
On-site
Combine interface design concepts with digital design and establish milestones to encourage cooperation and teamwork. Develop overall concepts for improving the user experience within a business webpage or product, ensuring all interactions are intuitive and convenient for customers. Collaborate with back-end web developers and programmers to improve usability. Conduct thorough testing of user interfaces on multiple platforms to ensure all designs render correctly and systems function properly.

Convert jobs from Talend ETL to Python and convert lead SQLs to Snowflake. Developers should be proficient in Python (especially Pandas, PySpark, or Dask) for ETL scripting, with strong SQL skills to translate complex queries. They need expertise in Snowflake SQL for migrating and optimizing queries, as well as experience with data pipeline orchestration (e.g., Airflow) and cloud integration for automation and data loading. Familiarity with data transformation, error handling, and logging is also essential.
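For context, a hedged sketch of what a Talend-style job rewritten in Python might look like: extract with Pandas, transform, and load into Snowflake. The connection parameters, file, and table names are placeholders, not details from this posting:

```python
# Hypothetical extract-transform-load step using pandas and the
# Snowflake Python connector; all identifiers are illustrative.
import pandas as pd
import snowflake.connector
from snowflake.connector.pandas_tools import write_pandas

# Extract: a real job might pull from a source database instead.
df = pd.read_csv("source_extract.csv")

# Transform: the kind of cleansing Talend components would have done.
df["amount"] = pd.to_numeric(df["amount"], errors="coerce")
df = df.dropna(subset=["amount"])

# Load: bulk-insert the frame into a Snowflake table.
conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",
    warehouse="ETL_WH", database="ANALYTICS", schema="PUBLIC",
)
write_pandas(conn, df, table_name="ORDERS_CLEAN", auto_create_table=True)
conn.close()
```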
Posted 6 days ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Req ID: 327890

NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Python Developer - Digital Engineering Sr. Engineer to join our team in Hyderabad, Telangana (IN-TG), India (IN).

Python Data Engineer
- Exposure to retrieval-augmented generation (RAG) systems and vector databases.
- Strong programming skills in Python (and optionally Scala or Java).
- Hands-on experience with data storage solutions (e.g., Delta Lake, Parquet, S3, BigQuery).
- Experience with data preparation for transformer-based models or LLMs.
- Expertise in working with large-scale data frameworks (e.g., Spark, Kafka, Dask).
- Familiarity with MLOps tools (e.g., MLflow, Weights & Biases, SageMaker Pipelines).

About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com.

NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status.
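To make the RAG/vector-database requirement concrete, here is a minimal retrieval sketch with FAISS; the embeddings are random placeholders standing in for a real embedding model, and the dimensions and corpus are illustrative:

```python
# Hypothetical vector-retrieval half of a RAG system using FAISS.
import numpy as np
import faiss

dim = 384                       # typical sentence-embedding width
doc_vectors = np.random.rand(1000, dim).astype("float32")

index = faiss.IndexFlatL2(dim)  # exact L2 search over document vectors
index.add(doc_vectors)

query = np.random.rand(1, dim).astype("float32")
distances, ids = index.search(query, 5)  # top-5 nearest documents
# In a full RAG system, the retrieved documents would be passed to an
# LLM as grounding context for generation.
print(ids[0])
```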
Posted 6 days ago
7.0 - 10.0 years
45 - 50 Lacs
Noida, Kolkata, Chennai
Work from Office
Dear Candidate,

We are hiring a Python Developer to build scalable backend systems, data pipelines, and automation tools. This role requires strong expertise in Python frameworks and a deep understanding of software engineering principles.

Key Responsibilities:
- Develop backend services, APIs, and automation scripts using Python.
- Work with frameworks like Django, Flask, or FastAPI.
- Collaborate with DevOps and data teams for end-to-end solution delivery.
- Write clean, testable, and efficient code.
- Troubleshoot and debug applications in production environments.

Required Skills & Qualifications:
- Proficient in Python 3.x, OOP, and design patterns.
- Experience with Django, Flask, FastAPI, Celery.
- Knowledge of REST APIs and SQL/NoSQL databases (PostgreSQL, MongoDB).
- Familiar with Docker, Git, CI/CD, and cloud platforms (AWS/GCP/Azure).
- Experience in data processing, scripting, or automation is a plus.

Soft Skills:
- Strong troubleshooting and problem-solving skills.
- Ability to work independently and in a team.
- Excellent communication and documentation skills.

Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you.

Kandi Srinivasa
Delivery Manager, Integra Technologies
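As a small illustration of the backend-service work described, a minimal FastAPI sketch follows; the resource name, model, and in-memory store are hypothetical stand-ins for a real PostgreSQL/MongoDB-backed service:

```python
# Hypothetical FastAPI service with one resource; run locally with:
#   uvicorn app:app --reload
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Job(BaseModel):
    title: str
    priority: int = 0

JOBS: list[Job] = []  # stand-in for a real database

@app.post("/jobs")
def create_job(job: Job) -> Job:
    # Validate via the pydantic model, then persist.
    JOBS.append(job)
    return job

@app.get("/jobs")
def list_jobs() -> list[Job]:
    return JOBS
```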
Posted 6 days ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Proficiency in AI tools used to prepare and automate data pipelines and ingestion:
- Apache Spark, especially with MLlib
- PySpark and Dask for distributed data processing
- Pandas and NumPy for local data wrangling
- Apache Airflow to schedule and orchestrate ETL/ELT jobs
- Google Cloud (BigQuery, Vertex AI)
- Python (the most popular language for AI and data tasks)
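To show how these tools fit together, here is a hedged Airflow sketch of a daily DAG orchestrating an extract step and a transform step; the DAG id, schedule, and task bodies are placeholders:

```python
# Hypothetical Airflow DAG (Airflow 2.x); task bodies are stubs for
# the real extract and Pandas/PySpark transform logic.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data from the source system")

def transform():
    print("run the Pandas/PySpark transformation")

with DAG(
    dag_id="daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t1 >> t2  # transform runs only after extract succeeds
```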
Posted 6 days ago
7.0 years
0 Lacs
Greater Kolkata Area
Remote
Omni's team is passionate about Commerce and Digital Transformation. We've been successfully delivering Commerce solutions for clients across North America, Europe, Asia, and Australia. The team has experience executing and delivering projects in B2B and B2C solutions.

Job Description
This is a remote position. We are seeking a Senior Data Engineer to architect and build robust, scalable, and efficient data systems that power AI and analytics solutions. You will design end-to-end data pipelines, optimize data storage, and ensure seamless data availability for machine learning and business analytics use cases. This role demands deep engineering excellence, balancing performance, reliability, security, and cost to support real-world AI applications.

Key Responsibilities
- Architect, design, and implement high-throughput ETL/ELT pipelines for batch and real-time data processing.
- Build cloud-native data platforms: data lakes, data warehouses, feature stores.
- Work with structured, semi-structured, and unstructured data at petabyte scale.
- Optimize data pipelines for latency, throughput, cost-efficiency, and fault tolerance.
- Implement data governance, lineage, quality checks, and metadata management.
- Collaborate closely with Data Scientists and ML Engineers to prepare data pipelines for model training and inference.
- Implement streaming data architectures using Kafka, Spark Streaming, or AWS Kinesis.
- Automate infrastructure deployment using Terraform, CloudFormation, or Kubernetes operators.

Requirements
- 7+ years in Data Engineering, Big Data, or Cloud Data Platform roles.
- Strong proficiency in Python and SQL.
- Deep expertise in distributed data systems (Spark, Hive, Presto, Dask).
- Cloud-native engineering experience (AWS, GCP, Azure): BigQuery, Redshift, EMR, Databricks, etc.
- Experience designing event-driven architectures and streaming systems (Kafka, Pub/Sub, Flink).
- Strong background in data modeling (star schema, OLAP cubes, graph databases).
- Proven experience with data security, encryption, and compliance standards (e.g., GDPR, HIPAA).

Preferred Skills
- Experience in MLOps enablement: creating feature stores and versioned datasets.
- Familiarity with real-time analytics platforms (ClickHouse, Apache Pinot).
- Exposure to data observability tools like Monte Carlo, Databand, or similar.
- Passionate about building high-scale, resilient, and secure data systems.
- Excited to support AI/ML innovation with state-of-the-art data infrastructure.
- Obsessed with automation, scalability, and best engineering practices.
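For illustration of the streaming architecture this role mentions, a minimal Spark Structured Streaming sketch that consumes Kafka events and lands them in a data lake; the broker address, topic, and S3 paths are placeholders:

```python
# Hypothetical Kafka-to-data-lake ingestion with Spark Structured
# Streaming; all endpoints and paths are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("events-ingest").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
    .select(col("key").cast("string"), col("value").cast("string"))
)

query = (
    events.writeStream.format("parquet")
    .option("path", "s3a://data-lake/events/")
    # Checkpointing gives exactly-once sink semantics on restart.
    .option("checkpointLocation", "s3a://data-lake/_checkpoints/events/")
    .start()
)
query.awaitTermination()
```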
Posted 6 days ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Position Title: AI/ML Engineer
Company: Cyfuture India Pvt. Ltd.
Industry: IT Services and IT Consulting
Location: Sector 81, NSEZ, Noida (5 Days Work From Office)

About Cyfuture
Cyfuture is a trusted name in IT services and cloud infrastructure, offering state-of-the-art data center solutions and managed services across platforms like AWS, Azure, and VMware. We are expanding rapidly in system integration and managed services, building strong alliances with global OEMs like VMware, AWS, Azure, HP, Dell, Lenovo, and Palo Alto.

Position Overview
We are hiring an experienced AI/ML Engineer to lead and shape our AI/ML initiatives. The ideal candidate will have hands-on experience in machine learning and artificial intelligence, strong leadership capabilities, and a passion for delivering production-ready solutions. This role involves end-to-end ownership of AI/ML projects, from strategy development to deployment and optimization of large-scale systems.

Key Responsibilities
- Lead and mentor a high-performing AI/ML team.
- Design and execute AI/ML strategies aligned with business goals.
- Collaborate with product and engineering teams to identify impactful AI opportunities.
- Build, train, fine-tune, and deploy ML models in production environments.
- Manage operations of LLMs and other AI models using modern cloud and MLOps tools.
- Implement scalable and automated ML pipelines (e.g., with Kubeflow or MLRun).
- Handle containerization and orchestration using Docker and Kubernetes.
- Optimize GPU/TPU resources for training and inference tasks.
- Develop efficient RAG pipelines with low latency and high retrieval accuracy.
- Automate CI/CD workflows for continuous integration and delivery of ML systems.

Key Skills & Expertise
- Cloud Computing & Deployment: Proficiency in AWS, Google Cloud, or Azure for scalable model deployment. Familiarity with cloud-native services like AWS SageMaker, Google Vertex AI, or Azure ML. Expertise in Docker and Kubernetes for containerized deployments. Experience with Infrastructure as Code (IaC) using tools like Terraform or CloudFormation.
- Machine Learning & Deep Learning: Strong command of frameworks such as TensorFlow, PyTorch, scikit-learn, and XGBoost. Experience with MLOps tools for integration, monitoring, and automation. Expertise in pre-trained models, transfer learning, and designing custom architectures.
- Programming & Software Engineering: Strong skills in Python (NumPy, Pandas, Matplotlib, SciPy) for ML development. Backend/API development with FastAPI, Flask, or Django. Database handling with SQL and NoSQL (PostgreSQL, MongoDB, BigQuery). Familiarity with CI/CD pipelines (GitHub Actions, Jenkins).
- Scalable AI Systems: Proven ability to build AI-driven applications at scale, handling large datasets, high-throughput requests, and real-time inference. Knowledge of distributed computing: Apache Spark, Dask, Ray.
- Model Monitoring & Optimization: Hands-on experience with model compression, quantization, and pruning. A/B testing and performance tracking in production. Knowledge of model retraining pipelines for continuous learning.
- Resource Optimization: Efficient use of compute resources (GPUs, TPUs, CPUs). Experience with serverless architectures to reduce cost. Auto-scaling and load balancing for high-traffic systems.
- Problem-Solving & Collaboration: Translate complex ML models into user-friendly applications. Work effectively with data scientists, engineers, and product teams. Write clear technical documentation and architecture reports.
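As an example of the MLOps side of this role, a minimal MLflow experiment-tracking sketch follows; the model, run name, and metrics are illustrative, not part of the posting:

```python
# Hypothetical MLflow run: train a small model, log its parameters
# and metrics, and store the artifact for later deployment.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

with mlflow.start_run(run_name="rf-baseline"):
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X, y)
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    # Logging the model makes it loadable (and deployable) from the
    # MLflow tracking server or model registry.
    mlflow.sklearn.log_model(model, "model")
```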
Posted 1 week ago
12.0 - 18.0 years
0 Lacs
Tamil Nadu, India
Remote
Join us as we work to create a thriving ecosystem that delivers accessible, high-quality, and sustainable healthcare for all.

This position requires expertise in designing, developing, debugging, and maintaining AI-powered applications and data engineering workflows for both local and cloud environments. The role involves working on large-scale projects, optimizing AI/ML pipelines, and ensuring scalable data infrastructure. As a PMTS, you will be responsible for integrating Generative AI (GenAI) capabilities, building data pipelines for AI model training, and deploying scalable AI-powered microservices. You will collaborate with AI/ML, Data Engineering, DevOps, and Product teams to deliver impactful solutions that enhance our products and services. Experience in retrieval-augmented generation (RAG), fine-tuning pre-trained LLMs, AI model evaluation, data pipeline automation, and optimizing cloud-based AI deployments is desirable.

Responsibilities
- AI-Powered Software Development & API Integration: Develop AI-driven applications, microservices, and automation workflows using FastAPI, Flask, or Django, ensuring cloud-native deployment and performance optimization. Integrate OpenAI APIs (GPT models, Embeddings, Function Calling) and retrieval-augmented generation (RAG) techniques to enhance AI-powered document retrieval, classification, and decision-making.
- Data Engineering & AI Model Performance Optimization: Design, build, and optimize scalable data pipelines for AI/ML workflows using Pandas, PySpark, and Dask, integrating data sources such as Kafka, AWS S3, Azure Data Lake, and Snowflake. Enhance AI model inference efficiency by implementing vector retrieval using FAISS, Pinecone, or ChromaDB, and optimize API latency with tuning techniques (temperature, top-k sampling, max-tokens settings).
- Microservices, APIs & Security: Develop scalable RESTful APIs for AI models and data services, ensuring integration with internal and external systems while securing API endpoints using OAuth, JWT, and API key authentication. Implement AI-powered logging, observability, and monitoring to track data pipelines, model drift, and inference accuracy, ensuring compliance with AI governance and security best practices.
- AI & Data Engineering Collaboration: Work with AI/ML, Data Engineering, and DevOps teams to optimize AI model deployments, data pipelines, and real-time/batch processing for AI-driven solutions. Engage in Agile ceremonies, backlog refinement, and collaborative problem-solving to scale AI-powered workflows in areas like fraud detection, claims processing, and intelligent automation.
- Cross-Functional Coordination and Communication: Collaborate with Product, UX, and Compliance teams to align AI-powered features with user needs, security policies, and regulatory frameworks (HIPAA, GDPR, SOC 2). Ensure seamless integration of structured and unstructured data sources (SQL, NoSQL, vector databases) to improve AI model accuracy and retrieval efficiency.
- Mentorship & Knowledge Sharing: Mentor junior engineers on AI model integration, API development, and scalable data engineering best practices, and conduct knowledge-sharing sessions.

Education & Experience Required
- 12-18 years of experience in software engineering or AI/ML development, preferably in AI-driven solutions.
- Hands-on experience with Agile development, SDLC, CI/CD pipelines, and AI model deployment lifecycles.
- Bachelor's degree or equivalent in Computer Science, Engineering, Data Science, or a related field.
- Proficiency in full-stack development, with expertise in Python (preferred for AI) and Java.
- Experience with structured and unstructured data: SQL (PostgreSQL, MySQL, SQL Server); NoSQL (OpenSearch, Redis, Elasticsearch); vector databases (FAISS, Pinecone, ChromaDB).
- Cloud & AI infrastructure: AWS (Lambda, SageMaker, ECS, S3); Azure (Azure OpenAI, ML Studio).
- GenAI frameworks and tools: OpenAI API, Hugging Face Transformers, LangChain, LlamaIndex, AutoGPT, CrewAI.
- Experience in LLM deployment, retrieval-augmented generation (RAG), and AI search optimization.
- Proficiency in AI model evaluation (BLEU, ROUGE, BERTScore, cosine similarity) and responsible AI deployment.
- Strong problem-solving skills, AI ethics awareness, and the ability to collaborate across AI, DevOps, and data engineering teams.
- Curiosity and eagerness to explore new AI models, tools, and best practices for scalable GenAI adoption.

About athenahealth
Here's our vision: To create a thriving ecosystem that delivers accessible, high-quality, and sustainable healthcare for all.

What's unique about our locations? From a historic, 19th-century arsenal to a converted, landmark power plant, all of athenahealth's offices were carefully chosen to represent our innovative spirit and promote the most positive and productive work environment for our teams. Our 10 offices across the United States and India, plus numerous remote employees, all work to modernize the healthcare experience, together.

Our company culture might be our best feature. We don't take ourselves too seriously. But our work? That's another story. athenahealth develops and implements products and services that support US healthcare: it's our chance to create healthier futures for ourselves, for our family and friends, for everyone. Our vibrant and talented employees, or athenistas, as we call ourselves, spark the innovation and passion needed to accomplish our goal. We continue to expand our workforce with amazing people who bring diverse backgrounds, experiences, and perspectives at every level, and foster an environment where every athenista feels comfortable bringing their best selves to work. Our size makes a difference, too: we are small enough that your individual contributions will stand out, but large enough to grow your career with our resources and established business stability.

Giving back is integral to our culture. Our athenaGives platform strives to support food security, expand access to high-quality healthcare for all, and support STEM education to develop providers and technologists who will provide access to high-quality healthcare for all in the future. As part of the evolution of athenahealth's Corporate Social Responsibility (CSR) program, we've selected nonprofit partners that align with our purpose and let us foster long-term partnerships for charitable giving, employee volunteerism, insight sharing, collaboration, and cross-team engagement.

What can we do for you? Along with health and financial benefits, athenistas enjoy perks specific to each location, including commuter support, employee assistance programs, tuition assistance, employee resource groups, and collaborative workspaces; some offices even welcome dogs. In addition to our traditional benefits and perks, we sponsor events throughout the year, including book clubs, external speakers, and hackathons.
And we provide athenistas with a company culture based on learning, the support of an engaged team, and an inclusive environment where all employees are valued. We also encourage a better work-life balance for athenistas with our flexibility. While we know in-office collaboration is critical to our vision, we recognize that not all work needs to be done within an office environment, full-time. With consistent communication and digital collaboration tools, athenahealth enables employees to find a balance that feels fulfilling and productive for each individual situation.
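To illustrate the OpenAI Embeddings plus retrieval pattern the PMTS role above calls for, here is a hedged sketch; the model name and documents are placeholders, and a production system would use FAISS, Pinecone, or ChromaDB rather than the brute-force cosine search shown here:

```python
# Hypothetical embedding-based retrieval with the OpenAI Python SDK
# (v1 API); expects OPENAI_API_KEY in the environment.
import numpy as np
from openai import OpenAI

client = OpenAI()

docs = ["claims workflow overview", "fraud detection policy", "billing FAQ"]
resp = client.embeddings.create(model="text-embedding-3-small", input=docs)
doc_vecs = np.array([d.embedding for d in resp.data])

q = client.embeddings.create(
    model="text-embedding-3-small", input=["How are claims processed?"]
)
q_vec = np.array(q.data[0].embedding)

# Cosine similarity of the query against each document vector.
scores = doc_vecs @ q_vec / (
    np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec)
)
print(docs[int(scores.argmax())])  # best-matching grounding document
```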
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Built systems that power B2B SaaS products? Want to scale them for real-world impact?

Our client is solving some of the toughest data problems in India, powering fintech intelligence, risk engines, and decision-making platforms where structured data is often missing. Their systems are used by leading institutions to make sense of complex, high-velocity datasets in real time. We're looking for a Senior Data Engineer who has helped scale B2B SaaS platforms, built pipelines from scratch, and wants to take complete ownership of data architecture and infrastructure decisions.

What You'll Do:
- Design, build, and maintain scalable ETL pipelines using Python, PySpark, and Airflow.
- Architect ingestion and transformation workflows using AWS services like S3, Lambda, Glue, and EMR.
- Handle large volumes of structured and unstructured data with a focus on performance and reliability.
- Lead data warehouse and schema design across Postgres, MongoDB, DynamoDB, and Elasticsearch.
- Collaborate cross-functionally to ensure data infrastructure aligns with product and analytics goals.
- Build systems from the ground up and contribute to key architectural decisions.
- Mentor junior team members and guide implementation best practices.

You're a Great Fit If You Have:
- 3 to 7 years of experience in data engineering, preferably within B2B SaaS/AI environments (mandatory).
- Strong programming skills in Python and experience with PySpark and Airflow.
- Strong expertise in designing, building, and deploying data pipelines in product environments.
- Mandatory experience with NoSQL databases.
- Hands-on experience with AWS data services and distributed data processing tools like Spark or Dask.
- Understanding of data modeling, performance tuning, and database design.
- Experience working in fast-paced, product-driven teams that have seen the 0-to-1 journey.
- Awareness of async programming and how it applies in real-world risk/fraud use cases.
- Experience mentoring or guiding junior engineers (preferred).

Role Details:
- Location: Mumbai (on-site)
- Experience: 3 to 7 years
- Budget: 20 to 30 LPA (max)
- Notice Period: 30 days or less

If you're from a B2B SaaS background and looking to solve meaningful, large-scale data problems, we'd love to talk. Apply now or reach out directly to learn more.
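For a sense of the batch-ETL work described, a minimal PySpark sketch follows: read raw JSON from S3, normalize it, and write partitioned Parquet. The bucket paths and columns are hypothetical:

```python
# Hypothetical PySpark batch ETL job; all paths and column names are
# illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("risk-etl").getOrCreate()

raw = spark.read.json("s3a://raw-bucket/transactions/")

clean = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount").isNotNull())        # drop bad records
       .withColumn("txn_date", F.to_date("created_at"))
)

# Partitioning by date keeps downstream scans cheap.
(clean.write.mode("overwrite")
      .partitionBy("txn_date")
      .parquet("s3a://curated-bucket/transactions/"))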
Posted 1 week ago
1.0 - 4.0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
Role: Data Scientist
Experience: 1 to 4 years
Work Mode: WFO / Hybrid / Remote (if applicable)
Immediate joiners preferred.

We are building an AI-powered workforce intelligence platform that helps businesses optimize talent strategies, enhance decision-making, and drive operational efficiency. Our software leverages cutting-edge AI, NLP, and data science to extract meaningful insights from vast amounts of structured and unstructured workforce data. As part of our new AI team, you will have the opportunity to work on real-world AI applications, contribute to innovative NLP solutions, and gain hands-on experience in building AI-driven products from the ground up.

Required Skills & Qualifications
- Strong experience in Python programming.
- 1-3 years of experience in Data Science/NLP (freshers with strong NLP projects are welcome).
- Proficiency in Python, PyTorch, scikit-learn, and NLP libraries (NLTK, SpaCy, Hugging Face).
- Basic knowledge of cloud platforms (AWS, GCP, or Azure).
- Experience with SQL for data manipulation and analysis.
- Assist in designing, training, and optimizing ML/NLP models using PyTorch, NLTK, scikit-learn, and Transformer models (BERT, GPT, etc.).
- Familiarity with MLOps tools like Airflow, MLflow, or similar.
- Experience with big data processing (Spark, Pandas, or Dask).
- Help deploy AI/ML solutions on AWS, GCP, or Azure, and collaborate with engineers to integrate AI models into production systems.
- Expertise in using SQL and Python to clean, preprocess, and analyze large datasets.
- Stay updated with the latest advancements in NLP, AI, and ML frameworks.
- Strong analytical and problem-solving skills.
- Willingness to learn, experiment, and take ownership in a fast-paced startup environment.

Nice to Have
- Desire to grow within the company.
- Team player and quick learner.
- Performance-driven, with strong networking and outreach skills.
- Exploring aptitude and killer attitude.
- Ability to communicate and collaborate with the team at ease.
- Drive to get results and not let anything get in your way.
- Critical and analytical thinking skills, with keen attention to detail.
- Demonstrated ownership and a drive for excellence in everything you do.
- A high level of curiosity; keeps abreast of the latest technologies and tools.
- Ability to pick up new software easily, represent yourself among peers, and coordinate during meetings with customers.

What We Offer
We offer a market-leading salary along with a comprehensive benefits package to support your well-being. Enjoy a hybrid or remote work setup that prioritizes work-life balance and personal well-being. We invest in your career through continuous learning and internal growth opportunities. Be part of a dynamic, inclusive, and vibrant workplace where your contributions are recognized and rewarded. We believe in straightforward policies, open communication, and a supportive work environment where everyone thrives.
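As a small taste of the NLP work this role involves, here is a Hugging Face Transformers sketch using zero-shot classification; the checkpoint, input text, and labels are illustrative:

```python
# Hypothetical zero-shot classification of workforce text with the
# transformers pipeline API; downloads the model on first run.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification", model="facebook/bart-large-mnli"
)

result = classifier(
    "Led a team of five data engineers on a migration to Snowflake.",
    candidate_labels=["leadership", "data engineering", "sales"],
)
# Labels come back sorted by score; print the best match.
print(result["labels"][0], result["scores"][0])
```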
Posted 1 week ago
0.0 - 5.0 years
5 - 9 Lacs
Noida, Gurugram, Delhi / NCR
Hybrid
Responsibilities:
- Write effective, scalable code.
- Develop back-end components to improve responsiveness and overall performance.
- Integrate user-facing elements into applications.
- Test and debug programs.
- Improve functionality of existing systems.

Required Candidate Profile:
- Expertise in at least one popular Python framework (like Django, Flask, or Pyramid).
- Familiarity with front-end technologies (like JavaScript and HTML5).
- Team spirit.
- Good problem-solving skills.

Perks and Benefits: Free meals and snacks. Bonus. Vision insurance.
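Since the role names Flask among its frameworks, here is a minimal Flask back-end sketch; the route and in-memory store are hypothetical stand-ins for a real application:

```python
# Hypothetical Flask service with one JSON resource; run with:
#   python app.py
from flask import Flask, jsonify, request

app = Flask(__name__)
TASKS = []  # stand-in for a real database

@app.post("/tasks")
def create_task():
    task = request.get_json()  # parse the JSON request body
    TASKS.append(task)
    return jsonify(task), 201

@app.get("/tasks")
def list_tasks():
    return jsonify(TASKS)

if __name__ == "__main__":
    app.run(debug=True)
```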
Posted 1 week ago