3.0 - 5.0 years
10 - 13 Lacs
pune, mumbai (all areas)
Work from Office
JD: Data Scientist (AI/ML & Analytics)
Responsibilities/Duties
• Product collaboration: Translate product needs into clear problem statements, evaluation plans, and straightforward solution flows; contribute to UI/UX considerations for model inputs/outputs and edge cases.
• Applied ML: Build and iterate on models for logistics use cases (e.g., chatbot, forecasting, ETA, anomaly detection, recommendations, document parsing), favoring simple, reliable approaches first.
• Data preparation: Explore and prepare data from MongoDB Atlas, Kafka/MSK streams, and S3; implement reproducible feature pipelines and basic data quality checks.
• Evaluation & rollout: Define metrics and baselines; run offline evaluations and support phased/A-B rollouts with clear success criteria.
• Integration: Package models as small services or batch jobs; collaborate with Backend/Frontend to design APIs/GraphQL contracts, payloads, and error handling that fit our core platform.
• DevOps collaboration: Request/provision suitable compute (EC2; occasional GPU), containerize jobs (Docker), set up light scheduling (EventBridge/Step Functions or Airflow if used), and add basic monitoring/alerts.
• Analytics & insights: Perform focused analyses to support product decisions and present findings with clear visuals and concise summaries.
• Quality & governance: Follow coding standards; version datasets/models where practical; respect access controls for customer data; keep documentation current (problem briefs, data lineage, model cards, runbooks).
Criteria for the Role
• 3–5 years in applied machine learning with at least one end-to-end production delivery.
• Strong Python (pandas, NumPy, scikit-learn; plus XGBoost/LightGBM).
• Comfortable with SQL and working with MongoDB/event data (Kafka/MSK).
• Experience deploying models as APIs or batch jobs (Docker; Git-based workflows); familiarity with CI/CD concepts.
• Understanding of evaluation design for classification/regression/time-series; pragmatic feature engineering.
• Clear communication and a collaborative, documentation-friendly approach.
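As a hedged illustration of the "define metrics and baselines; run offline evaluations" duty above, the sketch below compares a candidate ETA model against a naive mean-predicting baseline. The Parquet file, feature columns, and model choice are assumptions for demonstration, not this team's actual pipeline.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Hypothetical shipment-level extract (e.g. exported from S3); columns are assumptions.
df = pd.read_parquet("shipments.parquet")
features = ["distance_km", "pickup_hour", "num_stops"]

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["actual_transit_hours"], test_size=0.2, random_state=42
)

# Naive baseline: always predict the mean transit time seen in training.
baseline_mae = mean_absolute_error(y_test, [y_train.mean()] * len(y_test))

model = GradientBoostingRegressor(random_state=42).fit(X_train, y_train)
model_mae = mean_absolute_error(y_test, model.predict(X_test))

print(f"baseline MAE: {baseline_mae:.2f} h, model MAE: {model_mae:.2f} h")
```

A candidate model would typically only move toward a phased rollout if it clearly beats such a baseline on the agreed metric.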
Posted 15 hours ago
0 years
0 Lacs
india
Remote
Job Title: Full-Stack Data Scientist – BizVidya
Type: Equity Only (No fixed compensation until funding)
Commitment: 20–25 hours/week, Remote & Flexible
About BizVidya
BizVidya is an education technology startup building end-to-end IT solutions for colleges and universities. We aim to transform higher education by capturing and analyzing behavioral, academic, and operational data using apps, IoT devices, and advanced analytics. Our vision is to provide deep insights to institutions and create strong industry–academia collaborations, aligning MSMEs with colleges for innovation and employment. We are building a data-first company, and your role will be central in turning raw institutional and student data into actionable intelligence.
Role Overview
We are looking for a Full-Stack Data Scientist who can design, build, and deploy end-to-end data pipelines, models, and insights. Unlike a traditional data science role that stops at modeling, this role demands ownership across the full stack — from data engineering to analytics to ML model deployment. You will work closely with the founder, product, and tech teams to create intelligent dashboards, student success prediction models, and institutional performance insights.
Key Responsibilities
Data Engineering & Management: Design and implement data pipelines for ingestion from apps, IoT devices, and institutional ERP systems. Build and maintain ETL workflows for cleaning, transforming, and storing large education datasets. Architect and manage data warehouses (BigQuery, Snowflake, Redshift, etc.).
Analytics & Insights: Develop dashboards and reporting systems for institutions (faculty, admin, management). Provide behavioral, academic, and operational insights to drive decision-making. Work with business teams to translate raw data into meaningful KPIs.
Machine Learning & AI: Build predictive models for student performance, dropout risks, placement readiness, and engagement levels. Experiment with AI-based personalization engines for student learning journeys. Deploy models into production-ready APIs or applications.
Collaboration & Strategy: Collaborate with product, tech, and operations teams to integrate insights into BizVidya’s solutions. Provide technical support in fundraising discussions (showcasing data capability to investors). Mentor interns/junior analysts as the team scales.
What We’re Looking For
Strong experience in data science + data engineering + ML deployment. Hands-on with:
Programming: Python, R, SQL, PySpark
Data Engineering: Airflow, Kafka, DBT, ETL frameworks
ML/AI: Scikit-learn, TensorFlow, PyTorch
Data Viz: Power BI, Tableau, or Plotly/Dash
Cloud: AWS/GCP/Azure (data pipelines, ML deployment)
Ability to take ownership from raw data → models → deployed insights. Prior experience in EdTech, IoT data, or MSME analytics is a strong plus. Entrepreneurial mindset — comfortable with ambiguity and working purely on equity.
What You’ll Gain
Equity ownership in BizVidya, a data-driven EdTech startup. Opportunity to build the data backbone from ground zero. Hands-on work at the intersection of education + IoT + analytics. Long-term growth into Head of Data / Chief Data Officer (CDO) as BizVidya scales. Direct collaboration with the founder and early team on strategy, product, and impact.
📌 This role is for someone who doesn’t just want to “do data science” but wants to own the entire data lifecycle — engineering, analytics, and deployment — and shape the future of education with insights.
Skills: data engineering,data,data science,iot,models,aiml
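For illustration of the "dropout risks" modelling mentioned above, here is a minimal, hedged sketch of a scikit-learn pipeline; the CSV extract, feature names, and label column are assumed placeholders rather than BizVidya's real schema.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Hypothetical per-student, per-term extract from the warehouse.
df = pd.read_csv("student_term_features.csv")
numeric = ["attendance_rate", "avg_grade", "lms_logins_per_week"]
categorical = ["programme", "year_of_study"]

pre = ColumnTransformer([
    ("num", StandardScaler(), numeric),
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
])
clf = Pipeline([("pre", pre), ("model", LogisticRegression(max_iter=1000))])

X_train, X_test, y_train, y_test = train_test_split(
    df[numeric + categorical], df["dropped_out"],
    stratify=df["dropped_out"], random_state=0
)
clf.fit(X_train, y_train)
print("ROC AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```

A pipeline like this is easy to wrap behind an API later, which matches the "deploy models into production-ready APIs" responsibility.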
Posted 1 day ago
0 years
0 Lacs
mumbai, maharashtra, india
On-site
Position Overview
Job Title: RPM Portfolio Manager, AVP
Location: Mumbai, India
Role Description
Risk and Portfolio Management (RPM) is looking for extremely bright candidates with a Finance/Risk and coding background to work in a new 1st Line of Defense Distressed Asset Management team. The role is categorized as Risk & Portfolio Manager and would suit a well-organized and collaborative individual looking to further develop their credit risk and portfolio management skills in a challenging, fast-paced environment, where the team and individual can make a significant contribution to the global Corporate Bank - Trade Finance and Lending Business.
What We’ll Offer You
As part of our flexible scheme, here are just some of the benefits that you’ll enjoy:
Best in class leave policy
Gender neutral parental leaves
100% reimbursement under childcare assistance benefit (gender neutral)
Sponsorship for industry-relevant certifications and education
Employee Assistance Program for you and your family members
Comprehensive Hospitalization Insurance for you and your dependents
Accident and Term Life Insurance
Complimentary health screening for 35 yrs. and above
Your Key Responsibilities
Develop strategic analytical tools, data-driven applications and reporting in direct alignment with requirements from various revenue-generating desks within TF&L
Research & modelling: Lead quantitative research into the portfolio to steer proactive portfolio management of the book
Automation & innovation: Translate ideas into machine-assisted solutions and discover automation potential that improves efficiency and control within the bank’s internal environment
Creating a centralized database for distressed assets, among others, that consolidates fragmented data sources into a reliable database with many automated data pipelines
Facilitate effective use of capital and resources via the establishment of new working tools (platforms) across TF&L portfolios
Monitoring key financial and regulatory metrics such as Total Capital Demand, RWA, CRD4, Return on Equity, SVA to ensure alignment with defined targets and strategic objectives for TF&L
Increase transparency and “real time” capital impact simulation at sector, country, client or even transaction level for non-performing and sub-performing exposures
Ensure compliance with relevant and applicable local and global regulatory and policy requirements
Your Skills And Experience
Technical Skills:
Advanced degree (or equivalent experience) in a quantitative field - finance, math, physics, computer science, econometrics, statistics or engineering
Strong programming skills with experience in Python & SQL, demonstrably gained in the financial services industry
Solid grasp of statistical/econometric concepts including machine learning (standard regression, classification models and time-series techniques)
Proficiency in code management via Git and standard IDEs/editors, and modern development best practices
Data engineering: willingness to interact with APIs and large databases (SQL/ClickHouse)
Web & visualisation: Experience building lightweight UIs and analytics dashboards using frameworks such as Flask/FastAPI and Plotly-Dash or comparable packages (e.g. ReportLab); experience in HTML, as well as experience with networking in Flask or with packages like Plotly/Dash/ReportLab
Behavioral Skills (e.g. communication skills):
Team spirit and willingness to work in a dynamic environment
Openness to adopt new technologies and to find a way to add value with further progressing automation levels
Ability to handle multiple and often competing tasks under tight deadlines with a focus on detail
Ability to explain complex ideas clearly to both technical and non-technical stakeholders
Able to think and work independently while supporting team goals and objectives
Decisiveness and a performance-oriented approach
Demonstrated flexibility and willingness to work for a team which is based in Frankfurt
How We’ll Support You
Training and development to help you excel in your career
Coaching and support from experts in your team
A culture of continuous learning to aid progression
A range of flexible benefits that you can tailor to suit your needs
About Us And Our Teams
Please visit our company website for further information: https://www.db.com/company/company.htm
We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
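The "lightweight UIs / analytics dashboards" requirement above can be pictured with the minimal Plotly Dash sketch below; the exposure figures, column names, and sector breakdown are illustrative assumptions only, not the desk's actual data or application.

```python
import pandas as pd
import plotly.express as px
from dash import Dash, dcc, html

# Hypothetical, made-up exposure figures for illustration only.
exposures = pd.DataFrame({
    "sector": ["Shipping", "Energy", "Retail", "Metals"],
    "rwa_eur_m": [120.0, 310.5, 85.2, 190.0],
})

app = Dash(__name__)
app.layout = html.Div([
    html.H3("Distressed portfolio - RWA by sector (illustrative)"),
    dcc.Graph(figure=px.bar(exposures, x="sector", y="rwa_eur_m")),
])

if __name__ == "__main__":
    app.run(debug=True)  # Dash >= 2.7; older versions use app.run_server(debug=True)
```

In practice, a similar layout could be fed by automated pipelines from the centralized distressed-asset database rather than an in-memory DataFrame.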
Posted 1 day ago
3.0 years
0 Lacs
india
On-site
Machine Learning Engineer About Astreya: Astreya offers comprehensive IT support and managed services. These services include Data Center and Network Management, Digital Workplace Services (like Service Desk, Audio Visual, and IT Asset Management), as well as Next-Gen Digital Engineering services encompassing Software Engineering, Data Engineering, and cybersecurity solutions. Astreya's expertise lies in creating seamless interactions between people and technology to help organizations achieve operational excellence and growth. Responsibilities: Design, develop, and deploy machine learning models and algorithms to solve complex business problems and drive data-driven decision making Collaborate with cross-functional teams including data engineers, software engineers, and business stakeholders to understand requirements and translate business objectives into scalable ML solutions Build and maintain robust ML pipelines for data preprocessing, feature engineering, model training, validation, and deployment Optimize model performance through hyperparameter tuning, feature selection, and algorithmic improvements Implement MLOps best practices including model versioning, monitoring, and automated retraining workflows Ensure data quality, integrity, and security throughout the ML lifecycle Scale ML systems to handle large datasets and high-throughput inference requirements Conduct A/B testing and experimentation to validate model performance and business impact Monitor deployed models for drift, performance degradation, and bias detection Create comprehensive documentation including technical specifications, model cards, and deployment guides for stakeholders Professional & Technical Skills: Must have strong proficiency in Python and experience with core ML libraries such as scikit-learn, XGBoost, LightGBM, and CatBoost Must have hands-on experience with deep learning frameworks including TensorFlow, PyTorch Must have solid understanding of machine learning algorithms, statistical methods, and model evaluation techniques Must have experience with cloud platforms (AWS, GCP, or Azure) and containerization technologies Docker or Kubernetes Need to have proficiency in SQL and experience working with both structured and unstructured datasets Need to have knowledge of big data technologies such as Spark, Hadoop, or distributed computing frameworks Must be familiar with MLOps tools and platforms such as MLflow, Kubeflow, or similar model lifecycle management systems Must have experience with version control systems (Git) and CI/CD pipelines for ML workflows Need to have knowledge of data visualization tools and libraries such as Matplotlib, Seaborn, Plotly, or Tableau Must be familiar with software engineering best practices, including code testing, debugging, and performance optimization Need to have understanding of software development methodologies such as Agile or Scrum Possess excellent analytical and problem-solving skills with ability to work independently and in team environments Additional Information: Must have Bachelor's or Master's degree in Computer Science, Machine Learning, Statistics, Mathematics, or related quantitative field Preferred 3+ years of experience in machine learning engineering or related roles Preferred experience with real-time inference systems and model serving architectures Preferred knowledge of specialized domains such as computer vision, natural language processing, or recommendation systems
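One hedged example of the "model versioning" and MLOps practices listed above: logging a training run with MLflow's tracking API. The experiment name, hyperparameters, and synthetic dataset are assumptions for demonstration.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; a real run would use the business dataset.
X, y = make_classification(n_samples=2000, n_features=20, random_state=7)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=7)

mlflow.set_experiment("demo-classifier")  # assumed experiment name
with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 8}
    model = RandomForestClassifier(**params, random_state=7).fit(X_tr, y_tr)
    mlflow.log_params(params)
    mlflow.log_metric("f1", f1_score(y_te, model.predict(X_te)))
    mlflow.sklearn.log_model(model, "model")  # versioned artifact for later serving
```

Runs logged this way can then be compared in the MLflow UI and promoted through a registry as part of automated retraining workflows.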
Posted 1 day ago
0 years
3 - 5 Lacs
vadodara
On-site
Job Information
Date Opened: 08/20/2025
Job Type: Full time
Relevant Work Experience: 6+ months (Python)
Timings: Rotational Shifts Starting 9:00 AM / 4:00 PM
Work Place: Work From Office; Off-hours Communication From Home
Engagement Type: Full Time
Industry: IT Services
City: Vadodara
State/Province: Gujarat
Country: India
Zip/Postal Code: 391101
About Us
We elevate businesses with Technology, Services and Industry-Specific Solutions. www.rigelnetworks.com
Job Description
We’re hiring fast learners to build and operate our backtesting, paper, and live execution stack in Python. You’ll turn strategy specs into code, run rigorous backtests, route orders to brokers in paper/live, and enforce risk guardrails. You’ll work from a clear architecture, use AI tools to accelerate delivery, and ship end-to-end features under senior review. We value clean Python, quantitative problem-solving, and practical market awareness (order types, futures/options basics). Exposure to NumPy/Pandas, APIs, and Excel/CSV reporting is useful.
Key Responsibilities
Backtesting engine: Implement strategy interfaces, signal/order flow, fills/slippage/fees, P&L and risk metrics; avoid look-ahead/survivorship bias.
Data pipelines: Ingest/normalize historical datasets (futures/options), calendars & timezones, contract rolls; cache & validate data.
Paper & live execution: Build/extend broker adapters (REST/WebSocket), place/modify/cancel orders with idempotency, retries, and reconciliation (positions, cash, fills).
Risk controls & audit: Max loss, quantity caps, circuit breakers; full audit trails and run artifacts.
Config-driven runs: JSON/YAML strategy configs; .env for environments; clean debug logs.
Analytics & reporting: Use NumPy/Pandas for metrics; export CSV/Excel summaries when needed.
Quality: Tests with pytest, reproducible runs, deterministic seeds; structured logging and basic metrics.
Dev workflow: Git branches + PRs, meaningful commits; Docker for local runs; AI-assisted development documented in PRs.
Training Program
Week 1: Environment setup; run a sample backtest; add one rule; write 2–3 pytest cases; mock broker adapter; PR with AI prompt notes.
Week 2: Deliver a feature slice: config → backtest → metrics → paper-trade path (mock/sandbox), plus a risk guardrail and a reproducibility checklist.
Selection process (please read before applying):
Take-home assignment (mandatory): Estimated effort 12–18 hours, with a 72-hour calendar window to submit. The task will align with the key responsibilities of this role.
Review & presentation: 15–20 minute demo of your solution, code walkthrough, and a small live change.
Team interview: Discussion of testing, debugging approach, risk/edge cases, collaboration, and trade-offs.
AI usage: Allowed and encouraged (ChatGPT/Copilot/etc.), but you must cite key prompts and verify all outputs. Keep commits clean and ensure the project runs from the README.
Apply only if you can: complete a 12–18 hour assignment within 3 days, present your own code confidently (demo + brief walkthrough), and use Git and run a Docker/WSL/venv setup (Linux users may skip Docker if a native setup works reliably). If you can’t commit to the assignment and presentation, please do not apply.
Requirements
Python 3.x proficiency (OOP, typing), with NumPy/Pandas basics.
API skills: Build/consume REST; WebSocket fundamentals; requests/httpx familiarity.
Testing & debugging: pytest + fixtures; log-driven troubleshooting.
Data & SQL: Joins, indices; comfort with Postgres/MySQL (basic).
Time handling: Timezones, trading calendars, intraday timestamps.
Git & Docker (basics): Branch/PR workflow; run services with Docker Compose.
AI fluency: Use ChatGPT/Copilot to scaffold code/tests; explain what was AI-generated vs. hand-written.
Market basics: Order types, futures/options terminology, margins/fees (we’ll deepen this in Week 1).
Mindset: Self-motivated, fast learner, follows patterns, writes clear README/notes.
Market Knowledge: Read and understand Zerodha Varsity: Intro to Stock Market, Technical Analysis, Futures Trading, Options Theory (Modules 1, 2, 4, 5).
Good-to-Have
Broker APIs (any): Schwab / IBKR / Zerodha, etc.
Task runners/queues (Celery/Redis or APScheduler); basic asyncio.
Plotting/reporting (Matplotlib/Plotly); Excel automation.
Tooling: black/ruff/isort, mypy/pyright; Linux basics.
Technical analysis familiarity (for strategy prototyping).
Benefits
Please refer to www.rigelnetworks.com/careers for benefits.
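To make the "avoid look-ahead bias" and "fills/slippage/fees" responsibilities concrete, here is a minimal vectorised backtest sketch; the synthetic price series, moving-average rule, and cost figure are illustrative assumptions, not the firm's actual engine.

```python
import numpy as np
import pandas as pd

# Synthetic price series stands in for normalized historical data.
prices = pd.Series(np.random.default_rng(1).lognormal(0, 0.01, 500).cumprod() * 100)
fast, slow = prices.rolling(10).mean(), prices.rolling(30).mean()

signal = (fast > slow).astype(int)     # 1 = long, 0 = flat
position = signal.shift(1).fillna(0)   # trade on the NEXT bar -> no look-ahead bias

returns = prices.pct_change().fillna(0)
cost_per_turn = 0.0005                 # assumed combined fees + slippage per position change
costs = position.diff().abs().fillna(0) * cost_per_turn

pnl = position * returns - costs
equity = (1 + pnl).cumprod()
print(f"total return: {equity.iloc[-1] - 1:.2%}, "
      f"max drawdown: {(equity / equity.cummax() - 1).min():.2%}")
```

A production engine would replace the shifted-signal shortcut with explicit order/fill objects, but the bias-avoidance and cost-charging ideas are the same.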
Posted 1 day ago
3.0 - 6.0 years
0 Lacs
hyderabad, telangana, india
Remote
Role: Data Scientist
Experience: 3–6 years
Location: HYD (WFO)
Mandatory Skills: Python, Machine Learning Frameworks (e.g., TensorFlow, PyTorch is a plus), SQL, data visualization tools.
JD: Role Overview:
We are looking for a skilled Data Scientist with 3–6 years of experience who can turn data into actionable insights using machine learning, statistical analysis, and data modeling techniques. The candidate will work closely with cross-functional teams to solve business problems using data and analytics. This is an onsite role in Hyderabad with working hours from 2:00 PM to 11:00 PM IST.
Key Responsibilities:
Analyze large datasets to extract insights, build predictive models, and support data-driven decisions.
Design and implement machine learning algorithms for classification, regression, clustering, and recommendation systems.
Work with structured and unstructured data, performing data cleaning, preprocessing, and feature engineering.
Present findings clearly using data visualizations and dashboards (e.g., Power BI, Tableau, or Matplotlib/Seaborn).
Collaborate with engineering and product teams to deploy models into production.
Continuously improve models based on performance metrics and feedback.
Required Skills & Qualifications:
3–6 years of experience in data science, analytics, or applied machine learning.
Strong programming skills in Python (pandas, scikit-learn, NumPy, etc.).
Experience with machine learning frameworks (e.g., TensorFlow, PyTorch is a plus).
Proficient in working with SQL and relational databases.
Experience in data visualization tools like Tableau, Power BI, or libraries like Plotly/Matplotlib.
Understanding of statistical techniques and hypothesis testing.
Experience with cloud platforms (AWS/GCP/Azure) is a plus.
Bachelor’s or Master’s degree in Data Science, Statistics, Computer Science, or related field.
Other Details:
Work Location: Hyderabad (Onsite only)
Working Hours: 2:00 PM – 11:00 PM IST
Remote Option: Not available
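As a small, hedged illustration of the clustering work mentioned above, the sketch below segments synthetic usage data with scikit-learn's KMeans; the columns and cluster count are assumptions for demonstration only.

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Made-up customer usage data; a real analysis would pull this from SQL.
usage = pd.DataFrame({
    "monthly_orders": [2, 15, 30, 1, 22, 40, 3, 18],
    "avg_basket_value": [20.0, 55.0, 80.0, 15.0, 60.0, 95.0, 25.0, 50.0],
})

X = StandardScaler().fit_transform(usage)               # scale before distance-based clustering
usage["segment"] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(usage.groupby("segment").mean())                  # per-segment profile for stakeholders
```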
Posted 1 day ago
0 years
0 Lacs
pune, maharashtra, india
On-site
Position Overview
Job Title: Senior Engineer Full Stack
Corporate Title: Vice President
Location: Pune, India
Role Description
Deutsche Bank is looking to expand its internal Technology capability in Pune to provide best in class technology solutions for the Banking industry. You will work as part of a cross-functional agile delivery team, including analysts, developers and testers. You will bring an innovative approach to software development, focusing on using the latest technologies and practices, as part of a relentless focus on business value. You will be someone who sees engineering as a team activity, with a predisposition to open code, open discussion and creating a supportive, collaborative environment. You will be ready to take a leading role in all stages of software delivery, from initial analysis right through to production support. We will ask a lot of you, but we will offer a lot in return. You will have an opportunity to work in an environment that provides continuous growth and learning, with an emphasis on excellence and mutual respect.
This will require the Technology Lead to help execute the following transformations with our global teams:
Technology Transformation: Our move to our target technology stack & architectural blueprint, i.e. microservices, Kubernetes, PostgreSQL, Big Data, AI/ML on GCP. One copy of the truth, automated workflow, reduce h/w, decommission systems and build out the strategic platform around the tech stack listed above
Operating Model Transformation: SAFe Agile, DevOps, automated testing, cycle times approaching 1 day! Drive Agile collaboration with the Business and the broader Risk Technology team globally
Workforce Transformation: Build capability around the tech stack, operating model, and risk transformation with employees while reducing vendor sprawl and footprint
What We’ll Offer You
As part of our flexible scheme, here are just some of the benefits that you’ll enjoy:
Best in class leave policy
Gender neutral parental leaves
100% reimbursement under childcare assistance benefit (gender neutral)
Sponsorship for industry-relevant certifications and education
Employee Assistance Program for you and your family members
Comprehensive Hospitalization Insurance for you and your dependents
Accident and Term Life Insurance
Complimentary health screening for 35 yrs. and above
Your Key Responsibilities
Lead the feature team, collaborating with others to understand requirements, analyse and refine stories, design solutions, implement them, test them and support them in production
Use BDD techniques, collaborating closely with users, analysts, developers and other testers. Make sure we are building the right thing.
Write code and write it well. Be proud to call yourself a programmer. Use test-driven development, write clean code and refactor constantly. Make sure we are building the thing right.
Be ready to work on a range of technologies and components, including user interfaces, services and databases. Act as a generalizing specialist.
Define and evolve the architecture of the components you are working on and contribute to architectural decisions at a department and bank-wide level.
Ensure that the software you build is reliable and easy to support in production. Be prepared to take your turn on call providing 3rd line support when it’s needed.
Help your team to build, test and release software within short lead times and with a minimum of waste. Work to develop and maintain a highly automated Continuous Delivery pipeline.
Help create a culture of learning and continuous improvement within your team and beyond
People Management
As a Vice President, your role will include management and leadership responsibilities, such as:
Leading and collaborating across teams
Team management
Mentoring and teaching
Discovering new techniques and helping others to adopt them
Leading by example
Your Skills And Experience
You will need:
Deep knowledge of Java, databases & Angular; understanding of both object-oriented and functional programming languages. Knowledge of angular-plotly is an added advantage.
Hands-on experience working with Google Cloud Platform. Exposure to configuring Google Kubernetes Engine, service mesh, writing Terraform scripts for infrastructure creation, access management using IAM, and Google Operations Suite for application monitoring.
Practical experience of test-driven development and constant refactoring in a continuous integration environment.
An understanding of web technologies, frameworks and tools, for example: HTML, CSS, JavaScript, Angular, Bootstrap, Node.js
Experience or exposure to Big Data Hadoop technologies / BI tools will be an added advantage
Knowledge of SQL and relational databases. Experience in PostgreSQL PL/SQL programming
Experience working in an agile team, practicing Scrum, Kanban or XP
Experience of performing Functional Analysis is highly desirable
Experience of Automated Testing is highly desirable
Some form of Agile certification would be highly desirable
The ideal candidate will also have:
Behaviour Driven Development, particularly experience of how it can be used to define requirements in a collaborative manner to ensure the team builds the right thing and create a system of living documentation
Experience with a range of technologies that store, transport and manipulate data, for example: relational databases, NoSQL, document databases, graph databases, Hadoop/HDFS, streaming and messaging
Architecture and design approaches that support rapid, incremental and iterative delivery, such as Domain Driven Design, CQRS, Event Sourcing and microservices
We are looking for great Technologists first. Useful but not essential would be knowledge gained in Financial Services environments, for example products, instruments, trade lifecycles, regulation, risk, financial reporting or accounting.
Education/Qualifications
Degree from an accredited college or university with a concentration in Engineering or Computer Science
How We’ll Support You
Training and development to help you excel in your career.
Coaching and support from experts in your team.
A culture of continuous learning to aid progression.
A range of flexible benefits that you can tailor to suit your needs.
About Us And Our Teams
Please visit our company website for further information: https://www.db.com/company/company.htm
We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
Posted 2 days ago
6.0 - 8.0 years
0 Lacs
gurgaon, haryana, india
On-site
Who we are
The next step of your career starts here, where you can bring your own unique mix of skills and perspectives to a fast-growing team. Metyis is a global and forward-thinking firm operating across a wide range of industries, developing and delivering AI & Data, Digital Commerce, Marketing & Design solutions and Advisory services. At Metyis, our long-term partnership model brings long-lasting impact and growth to our business partners and clients through extensive execution capabilities. With our team, you can experience a collaborative environment with highly skilled multidisciplinary experts, where everyone has room to build bigger and bolder ideas. Being part of Metyis means you can speak your mind and be creative with your knowledge. Imagine the things you can achieve with a team that encourages you to be the best version of yourself. We are Metyis. Partners for Impact.
What we offer
Interact with C-level at our clients on a regular basis to drive their business towards impactful change
Lead your team in creating new business solutions
Seize opportunities at the client and at Metyis in our entrepreneurial environment
Become part of a fast-growing international and diverse team
What you will do
Lead and manage the delivery of complex data science projects, ensuring quality and timelines.
Engage with clients and business stakeholders to understand business challenges and translate them into analytical solutions.
Design solution architectures and guide the technical approach across projects.
Align technical deliverables with business goals, ensuring data products create measurable business value.
Communicate insights clearly through presentations, visualizations, and storytelling for both technical and non-technical audiences.
Promote best practices in coding, model validation, documentation, and reproducibility across the data science lifecycle.
Collaborate with cross-functional teams to ensure smooth integration and deployment of solutions.
Drive experimentation and innovation in AI/ML techniques, including newer fields such as Generative AI.
What you'll bring
6+ years of experience in delivering full-lifecycle data science projects.
Proven ability to lead cross-functional teams and manage client interactions independently.
Strong business understanding with the ability to connect data science outputs to strategic business outcomes.
Experience with stakeholder management, translating business questions into data science solutions.
Track record of mentoring junior team members and creating a collaborative learning environment.
Familiarity with data productization and ML systems in production, including pipelines, monitoring, and scalability.
Experience managing project roadmaps, resourcing, and client communication.
Tools & Technologies:
o Strong hands-on experience in Python/R and SQL.
o Good understanding of and experience with cloud platforms such as Azure, AWS, or GCP.
o Experience with data visualization tools in Python like Seaborn and Plotly.
o Good understanding of Git concepts.
o Good experience with data manipulation tools in Python like Pandas and NumPy.
o Must have worked with scikit-learn, NLTK, spaCy, transformers.
o Experience with dashboarding tools such as Power BI and Tableau to create interactive and insightful visualizations.
o Proficient in using deployment and containerization tools like Docker and Kubernetes for building and managing scalable applications.
Core Competencies:
o Strong foundation in machine learning algorithms, predictive modeling, and statistical analysis.
o Good understanding of deep learning concepts, especially in NLP and Computer Vision applications.
o Proficiency in time-series forecasting and business analytics for functions like marketing, sales, operations, and CRM.
o Exposure to tools like MLflow, model deployment, API integration, and CI/CD pipelines.
o Hands-on experience with MLOps and model governance best practices in production environments.
o Experience in developing optimization and recommendation system solutions to enhance decision-making, user personalization, and operational efficiency across business functions.
Good to have:
Generative AI experience with text and image data.
Familiarity with LLM frameworks such as LangChain and hubs like Hugging Face.
Exposure to vector databases (e.g., FAISS, Pinecone, Weaviate) for semantic search or retrieval-augmented generation (RAG).
In a changing world, diversity and inclusion are core values for team well-being and performance. At Metyis, we want to welcome and retain all talents, regardless of gender, age, origin or sexual orientation, and irrespective of whether or not they are living with a disability, as each of them has their own experience and identity.
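For the retrieval-augmented generation (RAG) exposure mentioned under "Good to have", here is a minimal, hedged sketch of the retrieval step with FAISS; the embedding model (sentence-transformers) and the documents are assumptions for illustration, not Metyis tooling.

```python
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed to be available

docs = [
    "Quarterly revenue grew 12% driven by digital commerce.",
    "Churn increased in the retail segment last quarter.",
    "The marketing mix model suggests shifting spend to search ads.",
]
encoder = SentenceTransformer("all-MiniLM-L6-v2")        # example public embedding model
doc_vecs = np.asarray(encoder.encode(docs), dtype="float32")

index = faiss.IndexFlatL2(doc_vecs.shape[1])             # exact L2 index; fine at this scale
index.add(doc_vecs)

query_vec = np.asarray(encoder.encode(["why did churn go up?"]), dtype="float32")
_, ids = index.search(query_vec, 2)                      # top-2 passages to feed the LLM prompt
print([docs[i] for i in ids[0]])
```

At production scale, the same pattern typically moves to a managed vector database such as Pinecone or Weaviate, as the posting notes.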
Posted 2 days ago
6.0 - 8.0 years
0 Lacs
bengaluru, karnataka, india
On-site
Who we are
The next step of your career starts here, where you can bring your own unique mix of skills and perspectives to a fast-growing team. Metyis is a global and forward-thinking firm operating across a wide range of industries, developing and delivering AI & Data, Digital Commerce, Marketing & Design solutions and Advisory services. At Metyis, our long-term partnership model brings long-lasting impact and growth to our business partners and clients through extensive execution capabilities. With our team, you can experience a collaborative environment with highly skilled multidisciplinary experts, where everyone has room to build bigger and bolder ideas. Being part of Metyis means you can speak your mind and be creative with your knowledge. Imagine the things you can achieve with a team that encourages you to be the best version of yourself. We are Metyis. Partners for Impact.
What we offer
Interact with C-level at our clients on a regular basis to drive their business towards impactful change
Lead your team in creating new business solutions
Seize opportunities at the client and at Metyis in our entrepreneurial environment
Become part of a fast-growing international and diverse team
What you will do
Lead and manage the delivery of complex data science projects, ensuring quality and timelines.
Engage with clients and business stakeholders to understand business challenges and translate them into analytical solutions.
Design solution architectures and guide the technical approach across projects.
Align technical deliverables with business goals, ensuring data products create measurable business value.
Communicate insights clearly through presentations, visualizations, and storytelling for both technical and non-technical audiences.
Promote best practices in coding, model validation, documentation, and reproducibility across the data science lifecycle.
Collaborate with cross-functional teams to ensure smooth integration and deployment of solutions.
Drive experimentation and innovation in AI/ML techniques, including newer fields such as Generative AI.
What you'll bring
6+ years of experience in delivering full-lifecycle data science projects.
Proven ability to lead cross-functional teams and manage client interactions independently.
Strong business understanding with the ability to connect data science outputs to strategic business outcomes.
Experience with stakeholder management, translating business questions into data science solutions.
Track record of mentoring junior team members and creating a collaborative learning environment.
Familiarity with data productization and ML systems in production, including pipelines, monitoring, and scalability.
Experience managing project roadmaps, resourcing, and client communication.
Tools & Technologies:
o Strong hands-on experience in Python/R and SQL.
o Good understanding of and experience with cloud platforms such as Azure, AWS, or GCP.
o Experience with data visualization tools in Python like Seaborn and Plotly.
o Good understanding of Git concepts.
o Good experience with data manipulation tools in Python like Pandas and NumPy.
o Must have worked with scikit-learn, NLTK, spaCy, transformers.
o Experience with dashboarding tools such as Power BI and Tableau to create interactive and insightful visualizations.
o Proficient in using deployment and containerization tools like Docker and Kubernetes for building and managing scalable applications.
Core Competencies:
o Strong foundation in machine learning algorithms, predictive modeling, and statistical analysis.
o Good understanding of deep learning concepts, especially in NLP and Computer Vision applications.
o Proficiency in time-series forecasting and business analytics for functions like marketing, sales, operations, and CRM.
o Exposure to tools like MLflow, model deployment, API integration, and CI/CD pipelines.
o Hands-on experience with MLOps and model governance best practices in production environments.
o Experience in developing optimization and recommendation system solutions to enhance decision-making, user personalization, and operational efficiency across business functions.
Good to have:
Generative AI experience with text and image data.
Familiarity with LLM frameworks such as LangChain and hubs like Hugging Face.
Exposure to vector databases (e.g., FAISS, Pinecone, Weaviate) for semantic search or retrieval-augmented generation (RAG).
In a changing world, diversity and inclusion are core values for team well-being and performance. At Metyis, we want to welcome and retain all talents, regardless of gender, age, origin or sexual orientation, and irrespective of whether or not they are living with a disability, as each of them has their own experience and identity.
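As a hedged illustration of the time-series forecasting competency above, the sketch below builds simple lag features on a synthetic weekly sales series and scores a hold-out window; the data, lags, and model are assumptions only.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_percentage_error

# Synthetic two-year weekly sales series with trend and seasonality.
rng = np.random.default_rng(0)
t = np.arange(104)
sales = pd.Series(100 + 0.5 * t + 10 * np.sin(2 * np.pi * t / 52) + rng.normal(0, 2, 104))

df = pd.DataFrame({"y": sales})
for lag in (1, 2, 52):                        # short-term and seasonal lags (assumed weekly data)
    df[f"lag_{lag}"] = df["y"].shift(lag)
df = df.dropna()

train, test = df.iloc[:-12], df.iloc[-12:]    # hold out the last 12 weeks
model = RandomForestRegressor(random_state=0).fit(train.drop(columns="y"), train["y"])
pred = model.predict(test.drop(columns="y"))
print("MAPE:", mean_absolute_percentage_error(test["y"], pred))
```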
Posted 2 days ago
5.0 years
0 Lacs
india
Remote
Freelancer Data Scientist / Developer – Battery Cell Technology Job Title: Data Scientist / Developer – Battery Cell Technology (Automotive Industry) Location: Remote Employment Type: Freelancer Experience: 5+ years About the Role: We are seeking an experienced Data Scientist / Developer with a strong background in battery cell technology and automotive applications (electric mobility, charging infrastructure, energy storage). The ideal candidate will bring hands-on expertise in data science, analytics, and programming while leveraging industry experience in the automotive sector. You will work closely with cross-functional teams to analyze complex datasets, design models, and support decision-making in areas critical to the future of electric mobility. Key Responsibilities: Analyze and interpret complex datasets related to battery cells, charging infrastructure, and electric mobility applications. Develop and implement data models, statistical methods, and visualization dashboards to support R&D and business objectives. Work with tools such as MATLAB, Python (numpy, pandas, scipy), PySpark, Databricks, SQL, PowerBI, Plotly, Bokeh for data analysis and visualization. Support optimization of battery performance, charging patterns, and related automotive applications through advanced analytics. Collaborate with engineering, R&D, and product teams to translate data-driven insights into actionable outcomes. Organize, clean, and manage large datasets, ensuring data integrity and structure for analysis. Drive projects independently while coordinating with multiple stakeholders across domains. Required Skills & Qualifications: Mandatory experience in the automotive industry (battery cell technology, electric mobility, or charging infrastructure). Several years of professional experience in data science, analytics, or software development. Strong programming skills: Python, MATLAB, and experience with data processing, modeling, and visualization tools. Proficiency in SQL (queries, data structuring); hands-on experience with PySpark and/or Databricks. Strong analytical and problem-solving skills with the ability to work on multiple projects simultaneously. Excellent communication, organizational, and project management skills. High level of initiative, adaptability, and ability to grasp new concepts quickly. Fluent in English (written and spoken). Nice-to-Have: Experience with energy storage systems or EV charging networks. Exposure to AI/ML applications in automotive battery research. To Apply: Send your updated resume, along with your expected hourly rate to: Sridevi.k@hexad.in Additionally, if you happen to know of anyone in your network who may be interested in this opportunity, please feel free to refer them. They can reach out directly to the above provided email.
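One hedged example of the battery-cell analytics described above: estimating capacity retention across charge/discharge cycles with pandas. The CSV schema (cell_id, cycle, discharge_capacity_ah) is an assumed placeholder, not the client's actual data model.

```python
import pandas as pd

# Hypothetical cycling log: one row per cell per cycle.
cycles = pd.read_csv("cell_cycles.csv")

summary = (
    cycles.sort_values(["cell_id", "cycle"])
          .groupby("cell_id")
          .agg(first_capacity=("discharge_capacity_ah", "first"),
               last_capacity=("discharge_capacity_ah", "last"),
               n_cycles=("cycle", "max"))
)
summary["capacity_retention_pct"] = 100 * summary["last_capacity"] / summary["first_capacity"]
print(summary.sort_values("capacity_retention_pct").head())   # worst-fading cells first
```

At fleet scale this aggregation would more likely run in PySpark or Databricks, but the logic is the same.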
Posted 2 days ago
0 years
0 Lacs
chennai, tamil nadu, india
On-site
AI Intern – Natural Language Reporting & Insights
Project Duration: 2 Months
Location: Chennai – On Site
About the Project:
We are enhancing our artwork management platform, used by FMCG and pharma clients for workflow management, copy management, and proofing. The project aims to enable customized report generation from natural language queries, automatically converting them into SQL queries, fetching relevant data, and generating actionable insights. These insights will help clients pinpoint workflow bottlenecks, delays, and rework patterns, while mapping them directly to their KPIs.
Key Responsibilities:
Design and implement a Natural Language to SQL query engine for dynamic, user-driven reporting
Build pipelines to fetch, process, and visualize database results based on user queries
Leverage Large Language Models (LLMs) for domain-specific query understanding and optimization
Apply data analytics and machine learning techniques to identify workflow bottlenecks and trends
Integrate AI models into Flask/Django-based APIs for seamless platform integration
Develop KPI-focused dashboards and visualizations for business insights
Experiment with prompt engineering for accuracy in query interpretation
Document the architecture, workflows, and findings for technical and client-facing use
Must-Have Skills:
Strong programming skills in Python
Machine learning, Deep Learning and Natural Language Processing (NLP)
Solid understanding of SQL and relational databases
Experience with LLMs (e.g., OpenAI, Hugging Face, Azure OpenAI)
Prompt engineering and fine-tuning for LLMs
Familiarity with NL-to-SQL frameworks or semantic search methods
Knowledge of data visualization (e.g., Plotly, Matplotlib, Dash, Streamlit)
Strong debugging and rapid prototyping skills
Text-to-Structure transformations (Natural Language → SQL, JSON, etc.)
Outcome:
A functional AI-powered natural language reporting system integrated into our platform
Real-time dashboards mapping performance metrics to client KPIs
Technical proof-of-concepts for advanced reporting and insights generation
Recommendations for scaling and productizing the solution
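A minimal, hedged sketch of the NL-to-SQL flow described above follows. The llm_complete helper is a placeholder for whichever LLM client is chosen (OpenAI, Azure OpenAI, Hugging Face, etc.), and the schema snippet and read-only guardrail are illustrative, not the platform's actual design.

```python
import re
import sqlite3

# Hypothetical, simplified schema description included in the prompt.
SCHEMA = """tasks(id, workflow_id, status, assigned_to, started_at, completed_at)
workflows(id, client_id, kpi_target_days)"""

def llm_complete(prompt: str) -> str:
    """Placeholder: call the chosen LLM here and return its text response."""
    raise NotImplementedError

def question_to_sql(question: str) -> str:
    prompt = (
        "You translate questions into a single read-only SQLite SELECT statement.\n"
        f"Schema:\n{SCHEMA}\n\nQuestion: {question}\nSQL:"
    )
    sql = llm_complete(prompt).strip().rstrip(";")
    # Guardrail: only allow a plain SELECT statement before touching the database.
    if not re.match(r"(?is)^\s*select\b", sql) or ";" in sql:
        raise ValueError(f"Refusing to run generated SQL: {sql!r}")
    return sql

def run_report(question: str, db_path: str = "artwork.db"):
    """Fetch rows for a natural-language question against a hypothetical local database."""
    with sqlite3.connect(db_path) as conn:
        return conn.execute(question_to_sql(question)).fetchall()
```

The returned rows would then feed the dashboarding layer (Plotly/Dash/Streamlit) mentioned in the skills list.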
Posted 2 days ago
5.0 years
0 Lacs
bengaluru, karnataka, india
On-site
Transport is at the core of modern society. Imagine using your expertise to shape sustainable transport and infrastructure solutions for the future? If you seek to make a difference on a global scale, working with next-gen technologies and the sharpest collaborative teams, then we could be a perfect match.
About Excelher program:
Are you looking for an opportunity to restart your career? Do you want to work with an organization that would value your experience no matter when you gained it? How about working with the best minds in the transportation industry where we need more women power? We are pleased to launch the ExcelHer program – the career returnship program at Volvo Group in India. The program is for women who have been on a career break for a year or more. This is our step towards empowering women to relaunch their professional journey after their absence from the workplace due to personal commitments. Exciting work assignments have been identified which you can refer to in the list below. The assignments are for a tenure of 9 months. The participant of this program would have access to professional development programs, mentoring assistance by a business leader, apart from the experience of working with people from different functions/technologies/culture. Go ahead and apply if you find the opportunities in line with your experience and career interest.
Data Scientist: Digital Operations BLR
About The Team
We, at GTT Digital Operations, enable data availability for vehicle development across all domains by fostering close collaboration between various engineering teams. We are a driving force for developing the way we use and handle our data. We are seeking a Data Engineer with experience in Python and SQL, and advanced data handling techniques. In this role, you will design, build, and optimize data pipelines, manage large datasets efficiently, and create interactive data applications and visualizations to support data-driven decision-making.
Skills Required
Proficiency in Python for data processing, automation, and visualization.
Strong knowledge of SQL for database design, querying, and optimization.
Familiarity with Apache Airflow will be an added advantage
Education & Experience Required
Bachelor’s/Master’s degree in electronics, software, computer engineering, or related field.
5+ years of experience in working with Data Engineering Projects using Python
Good understanding and hands-on experience in working with GitHub or similar
Excellent leadership, communication, and interpersonal skills, with the ability to collaborate effectively with cross-functional teams and stakeholders.
Strong analytical and problem-solving abilities, with a proactive approach to identifying and addressing product risks and challenges.
Key Responsibilities
Manage and process large datasets efficiently using data chunking and memory-optimized data manipulation techniques in Python with libraries such as Pandas, Polars, Dask, PyArrow, PySpark, etc.
Work with PostgreSQL databases and write efficient and scalable queries for data extraction, manipulation, and analysis.
Automate data workflows and orchestrate processes using Apache Airflow.
Handle various file formats including Parquet, JSON, CSV, etc., for efficient data storage, processing, and sharing.
Utilize Amazon S3 Buckets for scalable data storage and efficient data retrieval.
Develop and maintain interactive, user-friendly data applications for internal stakeholders.
Create insightful visualizations and plots using Python libraries such as Matplotlib, Plotly, etc., to communicate data trends and insights.
Collaborate with Data Analysts, Data Scientists, and Business Stakeholders to gather and fulfil analytical requirements.
We value your data privacy and therefore do not accept applications via mail.
Who We Are And What We Believe In
We are committed to shaping the future landscape of efficient, safe, and sustainable transport solutions. Fulfilling our mission creates countless career opportunities for talents across the group’s leading brands and entities. Applying to this job offers you the opportunity to join Volvo Group. Every day, you will be working with some of the sharpest and most creative brains in our field to be able to leave our society in better shape for the next generation. We are passionate about what we do, and we thrive on teamwork. We are almost 100,000 people united around the world by a culture of care, inclusiveness, and empowerment.
Group Trucks Technology are seeking talents to help design sustainable transportation solutions for the future. As part of our team, you’ll help us by engineering exciting next-gen technologies and contribute to projects that determine new, sustainable solutions. Bring your love of developing systems, working collaboratively, and your advanced skills to a place where you can make an impact. Join our design shift that leaves society in good shape for the next generation.
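To illustrate the chunked, memory-aware processing and Parquet handling listed above, here is a hedged sketch that converts a large CSV to Parquet in chunks; file paths and column names are assumptions, and in practice such a job would typically read from S3 and be orchestrated with Airflow.

```python
import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq

writer = None
# Stream the file in chunks instead of loading it all into memory at once.
for chunk in pd.read_csv("vehicle_signals.csv", chunksize=500_000):
    chunk["timestamp"] = pd.to_datetime(chunk["timestamp"], utc=True)  # assumed column
    table = pa.Table.from_pandas(chunk, preserve_index=False)
    if writer is None:
        writer = pq.ParquetWriter("vehicle_signals.parquet", table.schema)
    writer.write_table(table)

if writer is not None:
    writer.close()
```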
Posted 2 days ago
5.0 - 9.0 years
25 - 35 Lacs
bengaluru
Hybrid
• Total experience 5+ years
• Working experience in Python
• Experience with data visualization
• Working experience of DS stack packages
• Experience working with GCP/AWS cloud services
• Hands-on experience deploying ML solutions
Posted 3 days ago
10.0 years
0 Lacs
pune, maharashtra, india
On-site
Role: API Tester
Location: Pune/Nagpur, Maharashtra (Office Location)
Experience: 10+ years
Duration: Full Time
Working Hours: UK time shift
Notice Period: Immediate or 30 Days
Must Have Skills: Load Testing/Performance Testing, LoadRunner, FastAPI, and Python.
Job Description:
Skill Set (Must Have)
Python programming
Experience with FastAPI
Caching mechanism (Redis and in-memory)
Swagger UI (to check API endpoints and facilitate exchange with the frontend)
Experience with developing and testing high-performing APIs that meet business SLAs
Azure experience (hosting and deployment)
Familiarity with CI/CD implementation, Git, Bitbucket
Experience optimizing APIs for improved performance
Technically familiar with the stack to recommend infrastructure improvements
Desirable Skills
Experience with the Plotly library structure (for pictorial presentation of the data)
Experience with Databricks/Database and schema knowledge
Soft Skills
Solid communication skills
Negotiation skills with the highly technical SMEs
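A minimal sketch of the FastAPI-plus-Redis caching pattern this role tests and optimizes is shown below; the endpoint, cache key scheme, and TTL are illustrative assumptions rather than the project's real API.

```python
import json

import redis
from fastapi import FastAPI

app = FastAPI()
cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

def load_report_from_db(report_id: int) -> dict:
    """Placeholder for the expensive query the cache is meant to protect."""
    return {"report_id": report_id, "rows": 1234}

@app.get("/reports/{report_id}")
def get_report(report_id: int):
    key = f"report:{report_id}"
    if (cached := cache.get(key)) is not None:
        return json.loads(cached)                    # cache hit: skip the slow path
    payload = load_report_from_db(report_id)
    cache.set(key, json.dumps(payload), ex=300)      # cache for 5 minutes
    return payload
```

Load tests would then compare cold (cache-miss) and warm (cache-hit) latencies against the business SLAs.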
Posted 3 days ago
0 years
0 Lacs
chennai, tamil nadu, india
On-site
Artificial Intelligence Advancement Center is looking for professionals experienced in NLP/LLM/GenAI who are hands-on and can employ a range of NLP and prompt-engineering techniques, from traditional statistical/ML NLP to DL-based sequence models and transformers, in their day-to-day work.
Description: You'll be working alongside leading technical experts from around the world on a variety of products involving sequence/token classification, QA/chatbots, translation, semantic search, and summarization, among others.
Responsibilities:
• Design NLP/LLM/GenAI applications/products following robust coding practices.
• Explore SoTA models/techniques so that they can be applied to automotive industry use cases.
• Conduct ML experiments to train/infer models; if need be, build models that abide by memory and latency restrictions.
• Deploy REST APIs or a minimalistic UI for NLP applications using Docker and Kubernetes tools.
• Showcase NLP/LLM/GenAI applications to users through web frameworks (Dash, Plotly, Streamlit, etc.).
• Converge multiple bots into super apps using LLMs with multimodalities.
• Develop agentic workflows using AutoGen, Agent Builder, LangGraph.
• Build modular AI/ML products that can be consumed at scale.
Qualifications:
• Education: Bachelor's or master's degree in Computer Science, Engineering, Maths, or Science. Completion of modern NLP/LLM courses or participation in open competitions is also welcomed.
Technical Requirements:
• Soft skills: Strong communication skills and excellent teamwork through Git/Slack/email/calls with multiple team members across geographies.
• GenAI skills: Experience with LLM models such as PaLM, GPT-4, Mistral (open-source models). Work through the complete lifecycle of GenAI model development, from training and testing to deployment and performance monitoring. Develop and maintain AI pipelines with multiple modalities such as text, image, and audio. Have implemented real-world chatbots or conversational agents at scale handling different data sources. Experience developing image generation/translation tools using latent diffusion models such as Stable Diffusion or InstructPix2Pix. Expertise in handling large-scale structured and unstructured data. Efficiently handled large-scale generative AI datasets and outputs.
• ML/DL skills: High familiarity with DL theory and practice in NLP applications. Comfortable coding in Hugging Face, LangChain, Chainlit, TensorFlow and/or PyTorch, scikit-learn, NumPy, and pandas. Comfortable using two or more open-source NLP modules such as spaCy, TorchText, fastai.text, farm-haystack, and others.
• NLP skills: Knowledge of fundamental text data processing (use of regex, token/word analysis, spelling correction/noise reduction in text, segmenting noisy unfamiliar sentences/phrases at the right places, deriving insights from clustering, etc.). Have implemented real-world BERT or other transformer fine-tuned models (sequence classification, NER, or QA) from data preparation and model creation through inference and deployment.
• Python project management skills: Familiarity with Docker tools and pipenv/conda/poetry environments. Comfortable following Python project management best practices (use of setup.py, logging, pytest, relative module imports, Sphinx docs, etc.). Familiarity with GitHub (clone, fetch, pull/push, raising issues and PRs, etc.).
• Cloud skills and computing: Use of GCP services such as BigQuery, Cloud Functions, Cloud Run, Cloud Build, Vertex AI. Good working knowledge of other open-source packages to benchmark and derive summaries. Experience using GPU/CPU on cloud and on-prem infrastructure. Skill set to leverage cloud platforms for data engineering, big data, and ML needs.
• Deployment skills: Use of Docker (experience with experimental Docker features, docker-compose, etc.). Familiarity with orchestration tools such as Airflow and Kubeflow. Experience with CI/CD and infrastructure-as-code tools such as Terraform. Kubernetes or any other containerization tool, with experience in Helm, Argo Workflows, etc. Ability to develop APIs with compliant, ethical, secure, and safe AI tooling.
• UI: Good UI skills to visualize and build better applications using Gradio, Dash, Streamlit, React, Django, etc. A deeper understanding of JavaScript, CSS, Angular, HTML, etc., is a plus.
• Miscellaneous skills (data engineering): Skill set to perform distributed computing (specifically parallelism and scalability in data processing, modeling, and inference through Spark, Dask, RAPIDS AI, or RAPIDS cuDF). Ability to build Python-based APIs (e.g., FastAPI/Flask/Django). Experience with Elasticsearch, Apache Solr, and vector databases is a plus.
If interested, please share an updated CV with compensation details, notice period, and current location, along with your experience in the skills below:
• NLP/LLM/GenAI applications/products
• SoTA models/techniques
• Experience with LLM models such as PaLM, GPT-4, Mistral (open-source models)
• ML/DL skills
• NLP skills
• GCP services such as BigQuery, Cloud Functions, Cloud Run, Cloud Build, Vertex AI
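As a small illustration of showcasing an NLP model through a web framework, the kind of work this posting lists, here is a minimal sketch assuming Streamlit and a Hugging Face sentiment model; the model name and UI layout are assumptions, not part of the listing.

```python
# Minimal sketch: showcasing an NLP model behind a small Streamlit UI.
# The model name and labels are illustrative assumptions, not from the posting.
import streamlit as st
from transformers import pipeline

@st.cache_resource
def load_model():
    # Cache the pipeline so it is only loaded once per session.
    return pipeline("sentiment-analysis",
                    model="distilbert-base-uncased-finetuned-sst-2-english")

st.title("Text sentiment demo")
text = st.text_area("Enter text to analyze")

if st.button("Analyze") and text.strip():
    result = load_model()(text)[0]
    st.write(f"Label: {result['label']}, score: {result['score']:.3f}")

# Run locally (assumption: streamlit and transformers installed):
#   streamlit run app.py
```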
Posted 4 days ago
1.5 - 6.5 years
0 Lacs
hyderabad, telangana, india
On-site
POSITION SUMMARY
Zoetis, Inc. is the world's largest producer of medicine and vaccinations for pets and livestock. Join us at Zoetis India Capability Center (ZICC) in Hyderabad, where innovation meets excellence. As part of the world's leading animal healthcare company, ZICC is at the forefront of driving transformative advancements and applying technology to solve the most complex problems. Our mission is to ensure sustainable growth and maintain a competitive edge for Zoetis globally by leveraging the exceptional talent in India.
At ZICC, you'll be part of a dynamic team that partners with colleagues worldwide, embodying the true spirit of One Zoetis. Together, we ensure seamless integration and collaboration, fostering an environment where your contributions can make a real impact. Be a part of our journey to pioneer innovation and drive the future of animal healthcare.
As a Full Stack Engineer, you will design, build, and optimize AI-driven applications using Large Language Models (LLMs), retrieval-augmented generation (RAG), and integration with enterprise systems. You will work closely with data scientists, product managers, and other engineers to transform ideas into scalable solutions. Your contributions will shape how Zoetis leverages AI to improve animal healthcare worldwide.
Responsibilities:
• Front-end and back-end development:
- Design and implement user interfaces using React and other front-end technologies.
- Develop server-side logic using Node.js, Python, and other languages.
- Ensure responsive design.
- Optimize applications for maximum speed and scalability.
- Manage SQL interactions.
- Integrate components seamlessly.
• Problem-solving and attention to detail:
- Demonstrate excellent problem-solving abilities and attention to detail.
- Identify and resolve issues promptly within SLA guidelines, providing ongoing support to business users.
• Deployment and maintenance:
- Deploy applications to production environments.
- Monitor and maintain applications post-deployment.
- Implement updates and improvements based on user feedback.
• Collaboration:
- Work closely with designers, product managers, and other developers.
- Participate in code reviews and provide constructive feedback.
- Communicate effectively to ensure project alignment and progress.
• Continuous learning:
- Stay updated with the latest technologies and industry trends.
- Continuously improve skills and knowledge through training and practice.
- Experiment with new tools and frameworks to enhance development processes.
POSITION RESPONSIBILITIES (percent of time):
• Design, develop, and deploy AI-powered custom agents, integrating front-end UI and back-end AI pipelines: 50%
• Implement, fine-tune, and maintain LLM-based solutions using APIs and custom architectures: 20%
• Collaborate across teams, conduct code reviews, and continuously explore new AI tools, frameworks, and methods: 20%
• Monitor and optimize AI agent performance, ensuring security, compliance, and scalability: 10%
ORGANIZATIONAL RELATIONSHIPS
• Interact with business stakeholders to gather integration requirements, understand business processes, and ensure that integration solutions align with organizational goals and objectives.
• Work with implementation partners who may be responsible for deploying, configuring, or maintaining integrated solutions within the Zoetis IT landscape.
• Coordinate with developers and other members of the team to implement integration solutions, share knowledge, and address technical challenges.
• Work with data scientists to integrate AI models into production-ready services.
• Partner with cloud engineers and DevOps teams for deployment and scaling of AI agents.
• Coordinate with UX/UI designers to ensure AI tools are intuitive and impactful for end users.
EDUCATION AND EXPERIENCE
Education: Master's degree in Computer Science, Artificial Intelligence, or a related field.
Experience:
• 1.5–6.5 years of experience in full-stack engineering with exposure to AI/ML projects.
• Strong proficiency in React, Node.js, Python, and back-end frameworks.
• Experience working with LLMs (e.g., OpenAI, Anthropic, Hugging Face) and RAG pipelines.
• Hands-on experience with vector databases (e.g., Pinecone, Weaviate, FAISS, ChromaDB).
• API development and integration skills for AI inference and data services.
• CI/CD pipelines and version control with Git.
• Familiarity with cloud platforms (AWS, Azure, or GCP) and AI services.
TECHNICAL SKILLS REQUIREMENTS
• Front-end: React, Next.js, TypeScript.
• Back-end: Node.js, Python (FastAPI, Flask), REST/GraphQL APIs.
• AI/ML: LLM APIs, LangChain, RAG, prompt engineering, vector databases.
• Databases: PostgreSQL, SQL, NoSQL.
• DevOps: CI/CD, Docker, Kubernetes.
• Data visualization: D3.js, Plotly, or similar.
PHYSICAL POSITION REQUIREMENTS
Regular working hours are from 11:00 AM to 8:00 PM IST. Occasionally, more overlap with the EST time zone is required during production go-lives.
About Zoetis
At Zoetis, our purpose is to nurture the world and humankind by advancing care for animals. As a Fortune 500 company and the world leader in animal health, we discover, develop, manufacture and commercialize vaccines, medicines, diagnostics and other technologies for companion animals and livestock. We know our people drive our success. Our award-winning culture, built around our Core Beliefs, focuses on our colleagues' careers, connection and support. We offer competitive healthcare and retirement savings benefits, along with an array of benefits, policies and programs to support employee well-being in every sense, from health and financial wellness to family and lifestyle resources.
Global Job Applicant Privacy Notice
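Purely as an illustration of the RAG pattern this posting mentions, here is a minimal retrieval sketch assuming ChromaDB's in-memory client; the documents, collection name, and prompt template are invented, and the final LLM call is deliberately left out.

```python
# Minimal RAG retrieval sketch using ChromaDB's in-memory client.
# Documents, collection name, and prompt template are illustrative assumptions;
# the retrieved context would normally be passed to an LLM API for generation.
import chromadb

client = chromadb.Client()
collection = client.create_collection(name="vet_notes")

# Index a few toy documents (ChromaDB embeds them with its default model).
collection.add(
    ids=["doc1", "doc2", "doc3"],
    documents=[
        "Vaccination schedules for livestock vary by species and region.",
        "Quarterly parasite screening is recommended for grazing herds.",
        "Companion animal checkups should include dental assessments.",
    ],
)

question = "How often should companion animals get dental checks?"
results = collection.query(query_texts=[question], n_results=2)

context = "\n".join(results["documents"][0])
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # In a real pipeline, send this prompt to the chosen LLM.
```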
Posted 4 days ago
3.0 years
0 Lacs
bangalore urban, karnataka, india
On-site
Note: We are hiring for Y25. Maximum budget: 15–20 LPA. Total experience: 3+ years.
Role Overview
We are looking for a Senior AI/Data Scientist with 3–5 years of experience who is passionate about building AI and machine learning solutions for real-world business problems. As part of our AI team, you will design, develop, and deploy advanced machine learning models, Generative AI applications, and AI-powered decision systems. You will work with structured and unstructured data, and develop predictive models, AI-driven insights, and business-aware Generative AI agents that enhance productivity and decision-making.
Key Responsibilities
• Build GenAI-enabled solutions using online and offline LLMs, SLMs, and TLMs tailored to domain-specific problems.
• Deploy agentic AI workflows and use cases using frameworks like LangGraph, CrewAI, etc.
• Apply NLP, predictive modelling, and optimization techniques to develop scalable machine learning solutions.
• Integrate enterprise knowledge bases using vector databases and Retrieval Augmented Generation (RAG).
• Apply advanced analytics to address complex challenges in the Healthcare, BFSI, and Manufacturing domains.
• Deliver embedded analytics within business systems to drive real-time operational insights.
Required Skills & Experience
• 3–5 years of experience in applied Data Science or AI roles.
• Experience working in one of the following domains: BFSI, Healthcare/Health Sciences, Manufacturing, or Utilities.
• Proficiency in Python, with hands-on experience in libraries such as scikit-learn and TensorFlow.
• Practical experience with GenAI (LLMs, RAG, vector databases), NLP, and building scalable ML solutions.
• Experience with time series forecasting, A/B testing, Bayesian methods, and hypothesis testing.
• Strong skills in working with structured and unstructured data, including advanced feature engineering.
• Familiarity with analytics maturity models and the development of Analytics Centres of Excellence (CoEs).
• Exposure to cloud-based ML platforms like Azure ML, AWS SageMaker, or Google Vertex AI.
• Data visualization using Matplotlib, Seaborn, Plotly; experience with Power BI is a plus.
What We Look for (Values & Behaviours)
• AI-First Thinking: passion for leveraging AI to solve business problems.
• Data-Driven Mindset: ability to extract meaningful insights from complex data.
• Collaboration & Agility: comfortable working in cross-functional teams with a fast-paced mindset.
• Problem-Solving: think beyond the obvious to unlock AI-driven opportunities.
• Business Impact: focus on measurable outcomes and real-world adoption of AI.
• Continuous Learning: stay updated with the latest AI trends, research, and best practices.
Why Join Us?
• Work on cutting-edge AI & GenAI projects.
• Be part of a high-calibre AI team solving complex business challenges.
• Exposure to global enterprises and AI-driven decision-making.
• Competitive compensation and fast-track career growth in AI.
• Get mentored by best-in-class AI leaders who will help shape you into a top AI professional.
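As a small illustration of the A/B testing and hypothesis-testing skills listed above, here is a minimal sketch assuming statsmodels is available; the conversion counts and significance threshold are invented.

```python
# Minimal A/B test sketch: two-proportion z-test on invented conversion counts.
# Numbers and significance threshold are illustrative assumptions.
from statsmodels.stats.proportion import proportions_ztest

conversions = [320, 285]   # conversions in variant A and variant B
visitors = [5000, 5000]    # visitors exposed to each variant

z_stat, p_value = proportions_ztest(count=conversions, nobs=visitors)
print(f"z = {z_stat:.3f}, p = {p_value:.4f}")

# Decide at a 5% significance level (assumption).
if p_value < 0.05:
    print("Difference in conversion rates is statistically significant.")
else:
    print("No statistically significant difference detected.")
```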
Posted 4 days ago
8.0 - 12.0 years
0 Lacs
karnataka
On-site
Enphase Energy is a global energy technology company and a leading provider of solar, battery, and electric vehicle charging products. Founded in 2006, its innovative microinverter technology revolutionized solar power, making it a safer, more reliable, and scalable energy source. The Enphase Energy System enables users to make, use, save, and sell their own power. With over 80 million products shipped across 160 countries, Enphase is recognized as one of the most successful and innovative clean energy companies worldwide. Join the dynamic teams at Enphase Energy that are dedicated to designing and developing next-generation energy technologies to help drive a sustainable future.
The Staff Data Analyst role at Enphase Energy involves analyzing and interpreting product quality data throughout the entire product lifecycle. The role entails providing essential support for data management activities within the Quality organization and collaborating with Engineering/CS teams and Information Technology.
Responsibilities:
• Function as a resource to departments and process improvement teams in quality data management
• Identify opportunities for leveraging company data to drive product/process improvements
• Collaborate with cross-functional teams to understand their needs and run queries to support their analyses
• Develop a deep understanding of the dataset and perform detailed analysis
• Run SQL/R/Python queries on various databases to identify patterns and diagnose problems
• Identify opportunities for automation, monitoring, and visualization using automated tools
Qualifications:
• BE/B.Tech degree or higher in Engineering or Computer Science
• Minimum of 8 years of experience in data science
• Proficiency in programming/query languages such as Python (Matplotlib, Plotly libraries), R, and SQL
• Ability to design, develop, and implement data-driven strategies and machine learning models
• Ability to understand various data structures and common data transformation methods
• Expertise in delivering analysis results using customized visualizations
• Strong verbal and written communication skills to interpret and summarize presentations for executives
Join Enphase Energy's team of talented individuals and contribute to shaping the future of sustainable energy solutions.
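As a small illustration of the product-quality analysis and visualization work described above, here is a minimal sketch assuming pandas and Plotly; the shipment and return figures are invented.

```python
# Minimal sketch: trend a monthly defect rate and plot it with Plotly.
# The shipments/returns figures are invented for illustration only.
import pandas as pd
import plotly.express as px

df = pd.DataFrame({
    "month": pd.date_range("2024-01-01", periods=6, freq="MS"),
    "units_shipped": [12000, 13500, 12800, 14100, 15000, 15600],
    "units_returned": [96, 94, 115, 99, 105, 101],
})

# Defect rate in returned parts per million shipped.
df["defect_ppm"] = df["units_returned"] / df["units_shipped"] * 1_000_000

fig = px.line(df, x="month", y="defect_ppm",
              title="Monthly defect rate (ppm)", markers=True)
fig.show()  # Renders in a browser or notebook.
```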
Posted 4 days ago
3.0 years
0 Lacs
hyderabad, telangana, india
On-site
Project Role: AI/ML Engineer
Project Role Description: Develops applications and systems that utilize AI tools and Cloud AI services, with a proper cloud or on-prem application pipeline of production-ready quality. Be able to apply GenAI models as part of the solution. Could also include, but is not limited to, deep learning, neural networks, chatbots, and image processing.
Must-have skills: Machine Learning
Good-to-have skills: NA
Minimum 3 years of experience is required.
Educational Qualification: 15 years of full-time education
Summary: These roles have many overlapping skills with GenAI engineers and architects. The description may scale up or down based on expected seniority.
Roles & Responsibilities:
• Implement generative AI models and identify insights that can be used to drive business decisions. Work closely with multi-functional teams to understand business problems, develop hypotheses, and test those hypotheses with data, collaborating with cross-functional teams to define AI project requirements and objectives and ensure alignment with overall business goals.
• Conduct research to stay up to date with the latest advancements in generative AI, machine learning, and deep learning techniques, and identify opportunities to integrate them into our products and services.
• Optimize existing generative AI models for improved performance, scalability, and efficiency.
• Ensure data quality and accuracy.
• Lead the design and development of prompt engineering strategies and techniques to optimize the performance and output of our GenAI models.
• Implement cutting-edge NLP techniques and prompt engineering methodologies to enhance the capabilities and efficiency of our GenAI models.
• Determine the most effective prompt generation processes and approaches to drive innovation and excellence in AI technology, collaborating with AI researchers and developers.
• Experience working with cloud-based platforms (for example, AWS, Azure, or related).
• Strong problem-solving and analytical skills.
• Proficiency in handling various data formats and sources through omni-channel speech and voice applications, as part of conversational AI.
• Prior statistical modelling experience.
• Demonstrable experience with deep learning algorithms and neural networks.
• Develop clear and concise documentation, including technical specifications, user guides, and presentations, to communicate complex AI concepts to both technical and non-technical stakeholders.
• Contribute to the establishment of best practices and standards for generative AI development within the organization.
Professional & Technical Skills:
• Must have solid experience developing and implementing generative AI models, with a strong understanding of deep learning techniques such as GPT, VAEs, and GANs.
• Must be proficient in Python and have experience with machine learning libraries and frameworks such as TensorFlow, PyTorch, or Keras.
• Must have strong knowledge of data structures, algorithms, and software engineering principles.
• Must be familiar with cloud-based platforms and services, such as AWS, GCP, or Azure.
• Experience with natural language processing (NLP) techniques and tools, such as spaCy, NLTK, or Hugging Face.
• Must be familiar with data visualization tools and libraries, such as Matplotlib, Seaborn, or Plotly.
• Knowledge of software development methodologies, such as Agile or Scrum.
• Possess excellent problem-solving skills, with the ability to think critically and creatively to develop innovative AI solutions.
Additional Information:
• Must have a degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field. A Ph.D. is highly desirable.
• Strong communication skills, with the ability to effectively convey complex technical concepts to a diverse audience.
• A proactive mindset, with the ability to work independently and collaboratively in a fast-paced, dynamic environment.
• 15 years of full-time education.
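A minimal sketch of the kind of prompt-engineering work mentioned above: assembling a reusable few-shot prompt in plain Python. The task, examples, and template wording are invented, and the call to an actual LLM API is intentionally omitted.

```python
# Minimal sketch: assembling a few-shot classification prompt in plain Python.
# The task, examples, and template wording are illustrative assumptions;
# the resulting string would be sent to whichever LLM API the project uses.
FEW_SHOT_EXAMPLES = [
    ("The agent resolved my issue in minutes.", "positive"),
    ("I waited two hours and nobody answered.", "negative"),
    ("The chatbot gave me the tracking link I asked for.", "positive"),
]

def build_prompt(query: str) -> str:
    # Start with the task instruction, then append labeled examples.
    lines = ["Classify the customer message as positive or negative.", ""]
    for text, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Message: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Message: {query}")
    lines.append("Sentiment:")
    return "\n".join(lines)

if __name__ == "__main__":
    print(build_prompt("The voice bot kept misunderstanding my account number."))
```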
Posted 4 days ago
5.0 - 7.0 years
0 Lacs
gurgaon, haryana, india
On-site
BhaiFi Networks Private Limited is a leading provider of AI-powered cybersecurity solutions, designed to safeguard businesses, especially small and medium-sized enterprises (SMEs), against evolving digital threats. Founded in 2017, our mission is to democratize cybersecurity by making enterprise-grade protection accessible and easy to use for businesses of all sizes, even those with lean or non-technical teams.
We offer two flagship products:
• BhaiFi – AI-Powered Guest WiFi: https://bhaifi.ai
• FirewallX – An AI-First Network Security and Management Platform: https://firewallx.ai
With features like advanced firewall protection, intrusion detection, secure VPN, real-time threat intelligence, and more, we help our customers secure their networks with ease. We're a lean, high-impact team redefining how modern cybersecurity is built.
Requirements
Key Responsibilities:
• Lead the development and implementation of machine learning models for network security applications
• Oversee data analysis processes and create advanced visualizations to communicate insights
• Guide model selection, training, and validation procedures
• Develop algorithms for real-time forecasting and predictions
• Manage project timelines, resources, and deliverables
• Collaborate with and mentor the Data Engineer and ML/LLM Developer
• Communicate project progress and results to stakeholders
Required Skills:
• Bachelor's or master's degree in Computer Science or Software Engineering, or an advanced degree in Computer Science, Statistics, or a related field
• 5–7 years of experience in data science, with at least 2 years in a leadership role
• Experience in network security or cybersecurity
• Knowledge of time series analysis and anomaly detection techniques
• Familiarity with graph analytics and network analysis
• Expertise in Python and R for data analysis and modelling
• Proficiency in machine learning libraries (scikit-learn, TensorFlow, PyTorch)
• Strong knowledge of statistical analysis and probability theory
• Experience with big data technologies (Hadoop ecosystem, Spark)
• Proficiency in SQL and NoSQL databases
• Strong data visualization skills (Matplotlib, Seaborn, Plotly, Tableau)
• Experience with version control systems (Git)
• Knowledge of data privacy and security best practices
• Experience with deep learning models for sequence data
• Understanding of DevOps practices and CI/CD pipelines
• Familiarity with containerization technologies (Docker, Kubernetes)
• Project management experience (Agile methodologies preferred)
Benefits
• Daily meditation and weekly gratitude practice
• Comprehensive health and wellness support
• Business travel reimbursement
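As a small illustration of the anomaly detection techniques this role calls for, here is a minimal sketch using scikit-learn's IsolationForest on invented network flow features; the feature names, values, and contamination rate are assumptions.

```python
# Minimal anomaly detection sketch on invented network flow features,
# using scikit-learn's IsolationForest. Feature names and data are assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Invented features: bytes transferred and connection duration (seconds).
normal_traffic = rng.normal(loc=[5_000, 2.0], scale=[1_000, 0.5], size=(500, 2))
suspicious = np.array([[60_000, 0.1], [55_000, 0.2]])  # large, very short bursts
flows = np.vstack([normal_traffic, suspicious])

model = IsolationForest(contamination=0.01, random_state=0).fit(flows)
labels = model.predict(flows)  # -1 flags anomalies, 1 flags normal flows

print(f"Flagged {np.sum(labels == -1)} of {len(flows)} flows as anomalous")
```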
Posted 4 days ago
0 years
0 Lacs
navi mumbai, maharashtra, india
On-site
Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we're helping build a sustainable economy where everyone can prosper. We support a wide range of digital payment choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.
Title and Summary: Senior Data Scientist (London)
Who is Mastercard?
As a global technology company, our mission at Mastercard is to connect and power an inclusive, digital economy that benefits everyone, everywhere by making transactions safe, simple, smart, and accessible. Using secure data and networks, partnerships and passion, our innovations and solutions help individuals, financial institutions, governments, and businesses realize their greatest potential. Our decency quotient, or DQ, drives our culture and everything we do inside and outside of our company. With connections across more than 210 countries and territories, we are building a sustainable world that unlocks priceless possibilities for all.
Overview
Our International, Open Banking, Product Value Proposition team is looking for a Senior Data Scientist to develop and drive forward Mastercard's ambitious, data-driven open banking solutions through the skillful application of data science and a highly customer-centric focus. The ideal candidate is motivated, intellectually curious, technically excellent, a great communicator, and someone who will enjoy helping us build out our data science team and capabilities.
Role
As an individual contributor working within a growing data science team, you will take responsibility for developing market-leading, innovative, analytical open banking solutions. Focused in the first instance on affordability/credit decisioning and identity/income verification use cases, you will help empower consumers and drive value creation across our client base.
Specifically, in this position, you will:
• In pursuit of highly valued, market-leading solutions and insights, apply a range of problem-appropriate data science techniques to large data sets, from development to deployment support.
• Work closely with data engineers and developers to build and deploy interactive dashboards, providing the best, most engaging insights and UX for our clients.
• Communicate effectively with clients and stakeholders, ensuring their requirements are fully understood and met.
• Conduct effective customer trials to grow our open banking impact, supporting data specification, data processing/analysis, and result generation/presentation.
• Be highly proactive in pursuit of product excellence, for example by investigating and proposing new data sources, encouraging cross-team working, managing projects to agreed schedules, and looking to utilise new tools and techniques.
• Help to develop, implement, and honour effective, engaging team methods to support rapid prototyping, reproducibility, productivity, automation, and appropriate data governance.
• Engage with the wider Mastercard data science community, sharing best practice, knowledge, and insights in support of collaborative, fulfilling work and value creation.
All About You
To succeed in this role, you will have:
• An undergraduate degree or higher in Computer Science, Data Science, Econometrics, Mathematics, Statistics, or a similar field of study.
• Multi-project, hands-on experience of the end-to-end data science process in relation to large, complex data. From problem framing to results communication and solution deployment, you will be able to demonstrate having played a key part in a range of successfully delivered projects.
• Real-world experience of developing and deploying interactive dashboards based on Plotly's Dash framework.
• Workplace Python coding experience, including a good knowledge of the principal Python data science / machine learning (ML) library ecosystem.
• Excellent written and oral communication skills for both technical and non-technical audiences.
To succeed in this role, you will be:
• A highly engaged individual, evidenced through specific examples of collaboration, effective teamwork, successful independent work, and continued professional development.
Additionally, the ideal candidate can demonstrate:
• Commercial experience of successfully utilizing time series and natural language processing (NLP) methods, especially in relation to topic modelling and named entity recognition. A good working knowledge of supervised and unsupervised techniques is presumed.
• Practical knowledge/experience of solution deployment (data science pipelines, MLOps frameworks and libraries, etc.).
• Experience of working in financial services with respect to consumer and/or business lending.
Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization, and therefore every person working for, or on behalf of, Mastercard is responsible for information security and must:
• Abide by Mastercard's security policies and practices;
• Ensure the confidentiality and integrity of the information being accessed;
• Report any suspected information security violation or breach; and
• Complete all periodic mandatory security trainings in accordance with Mastercard's guidelines.
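Since the role specifically calls for dashboards built on Plotly's Dash framework, here is a minimal sketch of a Dash app with one interactive control; the dataset, column names, and layout are invented for illustration.

```python
# Minimal Plotly Dash sketch: one dropdown driving one chart.
# Dataset, column names, and app layout are illustrative assumptions.
import pandas as pd
import plotly.express as px
from dash import Dash, dcc, html, Input, Output

df = pd.DataFrame({
    "month": ["Jan", "Feb", "Mar", "Jan", "Feb", "Mar"],
    "segment": ["Consumer", "Consumer", "Consumer",
                "Business", "Business", "Business"],
    "approvals": [120, 135, 150, 80, 95, 110],
})

app = Dash(__name__)
app.layout = html.Div([
    html.H3("Approvals by month"),
    dcc.Dropdown(options=sorted(df["segment"].unique()),
                 value="Consumer", id="segment-dropdown"),
    dcc.Graph(id="approvals-chart"),
])

@app.callback(Output("approvals-chart", "figure"),
              Input("segment-dropdown", "value"))
def update_chart(segment):
    # Filter to the selected segment and redraw the bar chart.
    subset = df[df["segment"] == segment]
    return px.bar(subset, x="month", y="approvals", title=f"{segment} approvals")

if __name__ == "__main__":
    app.run(debug=True)
```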
Posted 4 days ago