
27245 GCP Jobs - Page 37

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

3.0 - 7.0 years

0 Lacs

Madurai, Tamil Nadu

On-site

We are seeking an experienced MERN Stack Developer to join our team. Your primary responsibility will be the development and maintenance of web applications using MongoDB, Express.js, React, and Node.js. We are looking for someone with a robust grasp of current industry trends who excels at full-stack web development.

Your qualifications should include a strong command of JavaScript and familiarity with ES6+ features, along with hands-on experience building RESTful APIs with Express.js and Node.js. Proficiency in crafting front-end applications with React, including state management through Redux or the Context API, is essential. You should also know MongoDB, with experience using Mongoose for database schema design and interactions. An understanding of contemporary front-end build pipelines and tools such as Webpack, Babel, and NPM is crucial, as is fluency with code versioning tools such as Git and a background in implementing authentication and authorization mechanisms like JWT and OAuth.

The ability to produce clean, efficient, and maintainable code is paramount, together with a solid grasp of web security best practices and common vulnerabilities. Experience deploying and managing applications on cloud platforms such as AWS, Azure, or GCP is highly desirable, and familiarity with containerization and orchestration tools like Docker and Kubernetes is beneficial. Your skill set should also cover front-end technologies like HTML, CSS, and responsive design principles.

Strong problem-solving abilities and efficient debugging skills are required, along with good time management, task prioritization, and precise, professional communication. We value individuals who can swiftly adapt to new technologies and frameworks, understand the full software development lifecycle, and have experience with Agile/Scrum methodologies. Being a collaborative team player who works effectively with cross-functional teams is essential, as is proficiency in continuous integration and deployment (CI/CD). Familiarity with GraphQL is an added advantage, and knowledge of server-side rendering (SSR) with frameworks such as Next.js is a plus. The preferred location for this position is Madurai.

Posted 2 days ago

Apply

5.0 years

0 Lacs

Andhra Pradesh, India

On-site

At PwC, our people in infrastructure focus on designing and implementing robust, secure IT systems that support business operations. They enable the smooth functioning of networks, servers, and data centres to optimise performance and minimise downtime. In infrastructure engineering at PwC, you will focus on designing and implementing robust and scalable technology infrastructure solutions for clients. Your work will involve network architecture, server management, and cloud computing.

Data Modeler Job Description: We are looking for candidates with a strong background in data modeling, metadata management, and data system optimization. You will be responsible for analyzing business needs, developing long-term data models, and ensuring the efficiency and consistency of our data systems.

Key responsibilities and areas of expertise:
- Analyze and translate business needs into long-term solution data models.
- Evaluate existing data systems and recommend improvements.
- Define rules to translate and transform data across data models.
- Work with the development team to create conceptual data models and data flows.
- Develop best practices for data coding to ensure consistency within the system.
- Review modifications of existing systems for cross-compatibility.
- Implement data strategies and develop physical data models.
- Update and optimize local and metadata models.
- Utilize canonical data modeling techniques to enhance data system efficiency.
- Evaluate implemented data systems for variances, discrepancies, and efficiency, and troubleshoot and optimize them for performance.
- Strong expertise in relational and dimensional modeling (OLTP, OLAP).
- Experience with data modeling tools (Erwin, ER/Studio, Visio, PowerDesigner).
- Proficiency in SQL and database management systems (Oracle, SQL Server, MySQL, PostgreSQL).
- Knowledge of NoSQL databases (MongoDB, Cassandra) and their data structures.
- Experience working with data warehouses and BI tools (Snowflake, Redshift, BigQuery, Tableau, Power BI).
- Familiarity with ETL processes, data integration, and data governance frameworks.
- Strong analytical, problem-solving, and communication skills.

Qualifications:
- Bachelor's degree in Engineering or a related field.
- 5 to 9 years of experience in data modeling or a related field, including 4+ years of hands-on experience with dimensional and relational data modeling.
- Expert knowledge of metadata management and related tools.
- Proficiency with data modeling tools such as Erwin, PowerDesigner, or Lucid.
- Knowledge of transactional databases and data warehouses.

Preferred Skills:
- Experience in cloud-based data solutions (AWS, Azure, GCP).
- Knowledge of big data technologies (Hadoop, Spark, Kafka).
- Understanding of graph databases and real-time data processing.
- Certifications in data management, modeling, or cloud data engineering.
- Excellent communication and presentation skills, with strong interpersonal skills to collaborate effectively with various teams.

Posted 2 days ago

Apply

5.0 - 7.0 years

0 Lacs

Andhra Pradesh, India

On-site

Summary about the Organization

A career in our Advisory Acceleration Center is the natural extension of PwC's leading global delivery capabilities. The team consists of highly skilled resources who assist clients in transforming their business by adopting technology through bespoke strategy, operating models, processes, and planning. You will be at the forefront of helping organizations adopt innovative technology solutions that optimize business processes or enable scalable technology. Our team helps organizations transform their IT infrastructure and modernize applications and data management to help shape the future of business. An essential and strategic part of Advisory's multi-sourced, multi-geography Global Delivery Model, the Acceleration Centers are a dynamic, rapidly growing component of our business. The teams in these Centers have achieved remarkable results in process quality and delivery capability, resulting in a loyal customer base and a reputation for excellence.

Job Description & Summary

PwC's Hybrid Cloud & Technical Resilience capability helps clients transform their business with innovative technology solutions. It enables organizations to optimize applications and services across various cloud models (e.g., public, private, edge), achieving greater value through innovation while enhancing customer and employee experiences.

Responsibilities

As a Manager, you'll join a team solving complex business issues, focusing on hybrid cloud solutions and IT system resilience from strategy through execution. This role requires technical knowledge, strong client engagement skills, and the ability to lead small teams through the delivery lifecycle of projects and programs. PwC Professional responsibilities at this level include, but are not limited to:
- Serve as a trusted advisor to client executives, providing strategic guidance on IT resilience, Disaster Recovery (DR), and Business Continuity (BC).
- Lead teams in the design and delivery of comprehensive hybrid/multi-cloud and resilience programs that align with clients' business objectives and risk appetites.
- Drive innovation by identifying and integrating emerging technologies and practices into client solutions.
- Foster a collaborative environment where people and technology excel together.
- Contribute to open discussions with teams, clients, and stakeholders to build trust.
- Understand core infrastructure technologies and be eager to learn more.
- Adhere to the firm's code of ethics and business conduct.

Basic Qualifications

Minimum degree required: Bachelor's degree in Information Technology, Computer Science, Risk Management, or a related field.
Minimum years of experience: 5-7 years of relevant experience designing and delivering public, private, hybrid, or multi-cloud solutions and migrating applications and services to these hosting environments, with a focus on modernization, disaster recovery, and resilience.

Preferred Qualifications

Certification(s) preferred:
- Certification(s) from a leading cloud service provider (AWS, Azure, GCP)
- Certification(s) from a leading on-premises infrastructure provider (VMware, Nutanix, Microsoft, RedHat, NetApp, EMC, Cisco, Arista)
- Certified Business Continuity Professional (CBCP)
- ITIL Certification
- Certified Information Systems Security Professional (CISSP)
- Certified Information Systems Auditor (CISA)
- AWS or Azure certifications related to resilience or infrastructure

Preferred knowledge/skills — demonstrates thought-leader-level abilities with, and/or a proven record of success directing efforts in, the following areas:
- Public, private, hybrid, and multi-cloud infrastructure (network, server, storage, and database) discovery, design, build, and migration;
- Public, private, and/or hybrid cloud architectures, including application and infrastructure migration and modernization;
- IT resilience, disaster recovery, or technical risk consulting, preferably in a professional services environment;
- Collaborating with clients to identify critical business functions and their dependencies on IT systems;
- Providing expert advice on developing IT resilience strategies tailored to client-specific environments and challenges;
- Leading workshops and training sessions to educate client teams on resilience best practices;
- Developing and refining Business Continuity Plans (BCPs) that integrate technology resilience considerations;
- Recommending and configuring tools and processes to enhance client resilience capabilities, including backup and recovery solutions;
- Excellent communication and presentation skills, with the ability to translate technical details into business value for clients; and
- Strong organizational and project management skills in a fast-paced environment.

Demonstrates abilities and/or success in one or more of the following areas:
- Architectural and/or engineering exposure to Windows, Linux, UNIX, VMware ESXi, Hyper-V, XenServer, Oracle DB, SQL Server, IIS Server, SAN, NAS, and other on-premises hosting technologies;
- Workload migration and automation toolsets (CloudEndure, Azure, Turbonomic, Python, Terraform, etc.);
- Strong knowledge of IT infrastructure (e.g., cloud systems, networks, and cybersecurity);
- Experience with resilience tools such as disaster recovery as a service (DRaaS), backup platforms, or monitoring solutions; and
- Familiarity with risk management frameworks (e.g., ISO 22301, ISO 27001, NIST, ITIL).

Travel Requirements: 50%

Posted 2 days ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

We are seeking an experienced and highly motivated Senior Django Developer to join our engineering team. Your expertise in Django and Python will be pivotal in building scalable, secure, and high-performance backend systems.

Your key responsibilities will include designing, developing, and maintaining robust web applications using Django and Python. You will create and integrate RESTful APIs for frontend and third-party services, ensuring reusable, testable, and efficient code that follows best practices. Collaborating closely with frontend developers, product managers, and QA teams, you will deliver high-quality features. Optimizing application performance, troubleshooting issues, and upholding security and data protection are also integral to the role. You will handle database design, schema migrations, and queries using PostgreSQL/MySQL; engage in code reviews and design discussions; contribute to architectural decisions; implement and maintain CI/CD pipelines and deployment scripts; and keep documentation and technical specifications up to date.

Required skills include 5+ years of hands-on experience with Python and the Django framework, a strong understanding of the Django ORM, middleware, forms, and templates, and experience with REST APIs, specifically Django REST Framework (DRF). A good grasp of relational databases (PostgreSQL/MySQL), familiarity with Docker, Git, and CI/CD pipelines, and an understanding of security principles and best practices in web development are essential. Proficiency with asynchronous programming and Celery for task queues, experience with cloud platforms like AWS/GCP/Azure, and familiarity with frontend technologies (HTML, CSS, JavaScript) are advantageous.
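To illustrate the Celery task-queue pattern this listing calls out, here is a minimal, self-contained sketch — not this employer's code; the broker URL and task are hypothetical:

```python
from celery import Celery

# Hypothetical broker URL; in production this would point at Redis or RabbitMQ.
app = Celery("tasks", broker="redis://localhost:6379/0")

@app.task(bind=True, max_retries=3)
def generate_invoice(self, order_id: int) -> str:
    """Example background task: long-running work moved off the request cycle."""
    try:
        # Render the PDF, email the customer, etc. (omitted in this sketch).
        return f"invoice-{order_id}.pdf"
    except Exception as exc:
        # Retry with exponential backoff rather than failing the web request.
        raise self.retry(exc=exc, countdown=2 ** self.request.retries)
```

A Django view would enqueue this with `generate_invoice.delay(order_id)` rather than doing the work inside the request/response cycle.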

Posted 2 days ago

Apply

5.0 - 9.0 years

0 Lacs

Nagpur, Maharashtra

On-site

The Data & Analytics Team is looking for a Data Engineer with a hybrid skill set spanning data integration and application development. In this role, you will play a crucial part in designing, engineering, governing, and enhancing our entire Data Platform, which serves customers, partners, and employees through self-service access. You will apply your expertise in data and metadata management, data integration, data warehousing, data quality, machine learning, and core engineering principles.

To be successful in this role, you should have at least 5 years of experience in system/data integration, development, or implementation of enterprise and/or cloud software. You must have strong experience with Web APIs (RESTful and SOAP) and be proficient in setting up data warehousing solutions and associated pipelines, including ETL tools (preferably Informatica Cloud). Demonstrated proficiency in Python, data wrangling, and query authoring in SQL and NoSQL environments is essential. Experience in a cloud-based computing environment, specifically GCP, is preferred. You should also excel at preparing Business Requirement, Functional, and Technical documentation; writing unit and functional test cases, test scripts, and run books; and working with incident management systems like Jira, ServiceNow, etc. Working knowledge of Agile software development methodology is required.

As a Data Engineer, you will lead system/data integration, development, and implementation efforts for enterprise and/or cloud software. You will design and implement data warehousing solutions and associated pipelines for internal and external data sources, including ETL processes. Extensive data wrangling and authoring complex queries in both SQL and NoSQL environments for structured and unstructured data will be part of your daily tasks. You will develop and integrate applications, leveraging strong proficiency in Python and Web APIs (RESTful and SOAP), and provide operational support for the data platform and applications, including incident management.

At GlobalLogic, we prioritize a culture of caring. We put people first, offering an inclusive culture where you can build meaningful connections with collaborative teammates, supportive managers, and compassionate leaders. We are committed to your continuous learning and development, providing opportunities to grow personally and professionally and the chance to work on interesting, meaningful projects that make an impact. We believe in balance and flexibility, offering various career areas, roles, and work arrangements to help you achieve a good work-life balance. Join us in a high-trust organization where integrity is key and trust is a cornerstone of our values to employees and clients.

GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner to the world's largest companies, helping create innovative digital products and experiences. You'll have the opportunity to collaborate with clients in transforming businesses and redefining industries through intelligent products, platforms, and services.
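As a rough illustration of the REST-to-warehouse pipeline work described above, a minimal extract-and-load sketch might look like the following. The endpoint and schema are hypothetical, and a production pipeline would use an ETL tool such as Informatica Cloud and a real warehouse rather than SQLite:

```python
import sqlite3
import requests

# Hypothetical endpoint; real pipelines would page through a source system's REST API.
API_URL = "https://api.example.com/v1/orders"

def extract(url: str) -> list[dict]:
    # Pull one page of records from the source API.
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    return resp.json()["results"]  # assumed response shape

def load(rows: list[dict], conn: sqlite3.Connection) -> None:
    # Idempotent upsert into a local staging table.
    conn.execute("CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, total REAL)")
    conn.executemany(
        "INSERT OR REPLACE INTO orders (id, total) VALUES (:id, :total)", rows
    )
    conn.commit()

if __name__ == "__main__":
    with sqlite3.connect("warehouse.db") as conn:
        load(extract(API_URL), conn)
```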

Posted 2 days ago

Apply

10.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Role: Senior Cloud Architect (AWS & GCP)
Location: Mumbai, India
Work Type: Full-Time, Onsite
Cloud Platforms: AWS & GCP (primary), Azure (secondary)
Core Focus: Cloud Architecture, Operational Excellence, Security Hardening, AI-Driven Infrastructure Automation

About The Role

We're looking for a forward-thinking Senior Cloud Architect who is not only an expert in multi-cloud infrastructure but also excited to lead the charge in managing infrastructure via bots and AI agents. Based in Mumbai, this role combines deep architectural expertise across AWS, GCP, and modern app platforms with a strong focus on operational efficiency, security hardening, and next-gen automation. You'll help drive the adoption of AI-driven approaches, embedding intelligent agents into cloud operations to automate detection, remediation, monitoring, and decision-making.

In This Role You Will
- Architect and implement secure, scalable, and cost-effective infrastructure on AWS and GCP, supporting hybrid and multi-cloud workloads.
- Design platforms to support varied enterprise stacks: .NET + MSSQL, Ruby on Rails, Java, PostgreSQL, MySQL, IIS, and Kubernetes.
- Lead initiatives around Operations as Code, Monitoring as Code, Security as Code, and self-healing infrastructure.
- Champion the integration and scaling of AI agents and intelligent bots to support tasks like alert triage, defect remediation, ticket handling, and observability.
- Use Terraform, Ansible, Puppet, or Chef for environment automation and configuration consistency.
- Collaborate with Security to embed policy enforcement, conduct audits, and lead remediation across infrastructure layers.
- Define and implement monitoring/alerting systems using Prometheus, Grafana, the ELK Stack, CloudWatch, and others.
- Drive cost efficiency, availability, and performance across infrastructure with automation-first thinking.
- Provide architectural guidance and mentorship to DevOps, CloudOps, and SRE teams.
- Actively contribute to innovation by exploring how LLMs, AIOps, and AI-driven tooling can transform traditional infrastructure operations.

You've Got What It Takes If You Have
- 10+ years in cloud architecture, infrastructure engineering, or related roles.
- Proven expertise in AWS (certification preferred) and strong working knowledge of GCP and Azure.
- Demonstrated experience supporting applications in .NET/IIS, Java/RoR, and containerized environments on Kubernetes.
- Strong experience with RDBMS and NoSQL: MSSQL, PostgreSQL, MySQL.
- Scripting and automation with Python, Go, PowerShell, and YAML.
- Familiarity with datacenter operations, CI/CD, and IaC (e.g., Terraform, CloudFormation).
- Hands-on experience or experimentation with AI agents, LLMs, or intelligent automation frameworks in an ops context.
- A strong bias toward automation and a passion for managing infrastructure through bots and intelligent systems.
- Experience integrating security tools such as GuardDuty, SCC, and AWS Config into infrastructure.

A Dose of Awesome If You Have
- AWS Solutions Architect Professional or GCP Professional Cloud Architect certification.
- Experience with modern observability platforms and security automation.
- Familiarity with concepts like auto-healing systems, anomaly detection, and AI-based log analysis.

(ref:hirist.tech)
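For a flavor of the automation-first, bot-driven operations this role describes, here is a small hedged sketch of a scheduled cost-governance check. The tag key and region are assumptions; it uses standard boto3 calls only:

```python
import boto3

# Illustrative only: flag EC2 instances missing a cost-allocation tag, the kind
# of check an ops bot might run on a schedule. The tag key is hypothetical.
REQUIRED_TAG = "cost-center"

def untagged_instances(region: str = "ap-south-1") -> list[str]:
    ec2 = boto3.client("ec2", region_name=region)
    flagged = []
    # Paginate through all reservations in the region.
    for page in ec2.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            for inst in reservation["Instances"]:
                tags = {t["Key"] for t in inst.get("Tags", [])}
                if REQUIRED_TAG not in tags:
                    flagged.append(inst["InstanceId"])
    return flagged

if __name__ == "__main__":
    print(untagged_instances())
```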

Posted 2 days ago

Apply

8.0 - 14.0 years

0 Lacs

Pune, Maharashtra

On-site

Job Description: As a Data Scientist at Hitachi Solutions India Pvt Ltd in Pune, India, you will be a valuable member of our dynamic team. Your primary responsibility will be to extract valuable insights from complex datasets, develop advanced analytical models, and drive data-driven decision-making across the organization. With 8-14 years of experience, your primary skill should be Data Science, with secondary skills in Data Engineering/Data Analytics.

You will work on cutting-edge AI applications with a focus on Natural Language Processing (NLP) and Time Series Forecasting, along with working knowledge of Computer Vision (CV) techniques, collaborating with a diverse team of engineers, analysts, and domain experts to build holistic, multi-modal solutions. Expertise in Python and libraries like Pandas, NumPy, Scikit-learn, Hugging Face Transformers, and Prophet/ARIMA is essential. You should also have a strong understanding of the model development lifecycle, from data ingestion to deployment, and hands-on experience with SQL and data visualization tools like Seaborn, Matplotlib, and Tableau.

Experience handling retail-specific data, familiarity with cloud platforms like AWS, GCP, or Azure, and exposure to API development (FastAPI, Flask) for ML model deployment are beneficial. Knowledge of MLOps practices, previous experience fine-tuning language models, and expertise in data engineering using Azure technologies are desirable.

Key responsibilities include applying NLP techniques to extract insights from text data, analyzing historical demand data for time series forecasting, and potentially contributing to computer vision projects. Collaboration with cross-functional teams and developing scalable ML components for production environments will be crucial aspects of the role.

Qualifications include a Master's degree in Computer Science, Data Science, Statistics, or a related field; proven experience in data science or machine learning; strong proficiency in Python and SQL; and familiarity with cloud technologies like Azure Databricks and MLflow. Excellent problem-solving skills, strong communication abilities, and the capability to work independently and collaboratively in a fast-paced environment are essential.

Please be cautious of potential scams during the recruitment process; all official communication regarding your application and interview requests will come from our @hitachisolutions.com domain email address.
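As an illustration of the Prophet-based time-series forecasting this listing mentions, a minimal sketch might look like the following. The CSV file is hypothetical; Prophet expects a `ds` (date) column and a `y` (value) column:

```python
import pandas as pd
from prophet import Prophet

# Hypothetical daily demand data with the two columns Prophet expects: ds, y.
df = pd.read_csv("daily_demand.csv")

model = Prophet(weekly_seasonality=True, yearly_seasonality=True)
model.fit(df)

# Forecast 30 days beyond the training window.
future = model.make_future_dataframe(periods=30)
forecast = model.predict(future)
print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail())
```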

Posted 2 days ago

Apply

1.0 - 5.0 years

0 - 0 Lacs

Karnataka

On-site

You are a Backend Developer with at least 2 years of experience in product building. Your primary responsibility will be to design, develop, and maintain robust backend systems and APIs using NodeJS. You will collaborate with cross-functional teams to ensure seamless integration between frontend and backend components, and architect scalable, secure, high-performance backend solutions while promoting a culture of collaboration, knowledge sharing, and continuous improvement.

Your key responsibilities include designing and optimizing data storage solutions using relational databases (e.g., MySQL) and NoSQL databases (e.g., MongoDB, Redis). You will implement best practices for code quality, security, and performance optimization, and develop and maintain CI/CD pipelines to automate build, test, and deployment processes. Comprehensive test coverage, including unit testing, and utilizing cloud services (e.g., AWS, Azure, GCP) for infrastructure deployment are crucial aspects of the role. You will stay updated with industry trends and emerging technologies to drive innovation within the team; secure authentication, authorization mechanisms, and data encryption for sensitive information are integral to your tasks. Additionally, you will design and develop event-driven applications using serverless computing principles to enhance scalability and efficiency.

To excel in this role, you must have a strong portfolio of product-building projects and extensive experience with JavaScript backend frameworks (e.g., Express, Socket.IO). Proficiency in SQL and NoSQL databases (MySQL and MongoDB), RESTful API design and development, and asynchronous programming is essential. Practical experience with Redis, caching mechanisms, web server optimization techniques, and containerization technologies (e.g., Docker, Kubernetes) will be highly beneficial. Strong problem-solving, analytical, and communication skills are necessary to work collaboratively in a fast-paced, agile environment and lead projects to successful completion.

Posted 2 days ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Role: DevOps Automation
Location: Chennai, Tamil Nadu, India
Experience: 5+ years

Roles & Responsibilities:
- Build, maintain, and optimize automated CI/CD pipelines for enterprise-scale software delivery.
- Deploy, monitor, and troubleshoot software across hybrid and public cloud environments using orchestration tools.
- Collaborate in agile, cross-functional teams to uphold quality, performance, scalability, security, and resilience.
- Develop scripts and tools (Python, Shell, PowerShell, etc.) for automation, cloud deployment, and code quality.
- Design and enforce DevOps best practices: version control, automated testing, configuration management.
- Define and validate software infrastructure solutions; ensure alignment with stakeholder requirements and standards.
- Produce and maintain design documentation, requirement specs, and operational guides.
- Communicate technical insights and guidance to engineering teams and business users.

Job Skills & Requirements:
- Degree in Engineering, Computer Science, or a related discipline.
- 5+ years in software development lifecycle roles.
- Proficiency in Windows and Linux environments.
- Experience with CI/CD tools (Jenkins, Azure DevOps, others).
- Scripting skills: Python, Perl, Shell, PowerShell.
- Container platforms: Docker, Kubernetes.
- Cloud deployment: AWS, Azure, GCP, IBM Cloud (hybrid setups).
- Infrastructure-as-code/automation: Ansible or equivalent.
- Version control systems: Git, GitHub workflows.
- Familiarity with code analysis, test coverage, and database technologies.
- Strong troubleshooting across the full infrastructure stack.
- Ability to enhance monitoring and reliability and reduce deployment failures.
- Maintain documentation and adhere to infrastructure standards.

(ref:hirist.tech)

Posted 2 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Role: Java Full-Stack Developer

Job Summary

We are seeking a skilled and passionate Java Full-Stack Developer to join our team. The ideal candidate will possess a strong foundation in Java programming, a solid understanding of relational databases, and hands-on experience with front-end technologies like React. You will be instrumental in developing and maintaining robust, scalable, high-quality applications across the full software development lifecycle, utilizing modern Java frameworks and cloud platforms.

Key Responsibilities
- Software Development: Design, develop, and maintain high-quality, scalable applications primarily using Java, Spring Boot, and related frameworks.
- Front-end Development: Develop interactive and responsive user interfaces using React to ensure a seamless user experience.
- Database Management: Design, optimize, and interact with relational databases such as MySQL, ensuring efficient data storage and retrieval.
- API Development: Build and consume RESTful APIs for seamless communication between front-end and back-end systems.
- Testing & Quality: Write comprehensive unit tests using the JUnit testing framework to ensure code quality and reliability.
- SDLC & Agile: Actively participate in all phases of the Software Development Life Cycle (SDLC), strictly adhering to Agile methodologies.
- Problem-Solving: Utilize excellent problem-solving and debugging skills to identify and resolve complex technical issues efficiently.
- Cloud Integration: Collaborate on deploying and managing applications on cloud platforms such as AWS, Azure, or GCP.
- Code Review & Best Practices: Participate in code reviews, contribute to architectural discussions, and ensure adherence to OOP concepts and coding best practices.

Required Skills & Qualifications
- Proven experience as a Java Developer with a strong command of the Java programming language.
- Strong proficiency in Java 8 (or higher) and experience with the Spring Boot framework and Hibernate.
- Hands-on experience with React.
- Solid understanding of relational databases, with practical experience in MySQL and SQL.
- Practical experience with the JUnit testing framework.
- Solid understanding of the Software Development Life Cycle (SDLC) and Agile methodologies.
- Strong understanding and application of OOP concepts.
- Excellent problem-solving and debugging skills.
- Familiarity with cloud platforms (e.g., AWS, Azure, GCP).
- Knowledge of data modeling concepts.
- Good verbal and written communication skills.

Preferred Technical Skills (Good to Have)
- Direct experience with AWS, Azure, or GCP for deployment and management.
- Experience with ORM tools in addition to Hibernate.
- Knowledge of microservices architecture.

(ref:hirist.tech)

Posted 2 days ago

Apply

5.0 - 8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Skills: AI/ML, Machine Learning, TensorFlow, CI/CD, AWS, DevOps, Azure

Job Description
- Minimum 5-8 years of experience in Data Science and Machine Learning.
- In-depth knowledge of machine learning, deep learning in Computer Vision (CV), and generative AI techniques.
- Proficiency in programming languages such as Python and frameworks like TensorFlow or PyTorch.
- Strong understanding of NLP techniques and frameworks such as BERT, GPT, or Transformer models.
- Experience with cloud platforms such as Azure, AWS, or GCP and deploying AI solutions in a cloud environment.
- Expertise in data engineering, including data curation, cleaning, and preprocessing.
- Knowledge of trusted AI practices, such as fairness, transparency, and accountability in AI models and systems.
- Strong collaboration with engineering teams to ensure seamless integration and deployment of AI models.
- Excellent problem-solving and analytical skills, with the ability to translate business requirements into technical solutions.
- Strong communication and interpersonal skills, with the ability to collaborate effectively with stakeholders at various levels.
- Understanding of data privacy, security, and ethical considerations in AI applications.
- Track record of driving innovation and staying updated with the latest developments.

Mandatory Skills
- Generative AI techniques; NLP techniques; BERT, GPT, or Transformer models
- Azure OpenAI GPT models, Hugging Face Transformers, prompt engineering
- Python, knowledge of frameworks like TensorFlow or PyTorch, LangChain, R
- Deploying AI solutions in Azure, AWS, or GCP
- Deep learning in Computer Vision (CV), Large Language Models (LLMs)

Good-to-Have Skills
- Knowledge of DevOps and MLOps practices
- Implementing CI/CD pipelines
- Tools such as Docker, Kubernetes, and Git to build and manage AI pipelines
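For orientation, here is a minimal sketch of the Hugging Face Transformers tooling named above. The model checkpoint is an illustrative choice, not one specified by the employer:

```python
from transformers import pipeline

# Load a small summarization model; downloads the checkpoint on first run.
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

text = (
    "Large language models can be adapted to summarization, extraction, "
    "and question answering with little or no task-specific training."
)
result = summarizer(text, max_length=40, min_length=10, do_sample=False)
print(result[0]["summary_text"])
```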

Posted 2 days ago

Apply

3.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

You will be responsible for owning the full ML stack that is capable of transforming raw dielines, PDFs, and e-commerce images into a self-learning system that can read, reason about, and design packaging artwork. This includes building data-ingestion & annotation pipelines for SVG/PDF to JSON conversion, designing and modifying model heads using technologies such as LayoutLM-v3, CLIP, GNNs, and diffusion LoRAs, training & fine-tuning on GPUs, as well as shipping inference APIs and evaluation dashboards. Your daily tasks will involve close collaboration with packaging designers and a product manager, establishing you as the technical authority on all aspects of deep learning within this domain. Your key responsibilities will be divided into three main areas:

**Area Tasks:**
- Data & Pre-processing (40%): Writing robust Python scripts for parsing PDF, AI, and SVG files, extracting text, colour separations, images, and panel polygons. Implementing tools like Ghostscript, Tesseract, YOLO, and CLIP pipelines. Automating synthetic-copy generation for ECMA dielines and maintaining vocabulary YAMLs & JSON schemas.
- Model R&D (40%): Modifying LayoutLM-v3 heads, building panel-encoder pre-train models, adding Graph-Transformer & CLIP-retrieval heads, and running experiments, hyper-param sweeps, and ablations to track KPIs such as IoU, panel-F1, and colour recall.
- MLOps & Deployment (20%): Packaging training & inference into Docker/SageMaker or GCP Vertex jobs, maintaining CI/CD and experiment tracking, serving REST/GraphQL endpoints, and implementing an active-learning loop for designer corrections.

**Must-Have Qualifications:**
- 5+ years of Python experience and 3+ years of deep-learning experience with PyTorch and Hugging Face.
- Hands-on experience with Transformer-based vision-language models and object-detection pipelines.
- Proficiency in working with PDF/SVG tool-chains, designing custom heads/loss functions, and fine-tuning pre-trained models on limited data.
- Strong knowledge of Linux, GPUs, graph neural networks, and relational transformers.
- Proficient in Git, code review discipline, and writing reproducible experiments.

**Nice-to-Have:**
- Knowledge of colour science, multimodal retrieval, diffusion fine-tuning, or packaging/CPG industry exposure.
- Experience with vector search tools, AWS/GCP ML tooling, and front-end technologies like TypeScript/React.

You will own a tool stack including DL frameworks like PyTorch, Hugging Face Transformers, torch-geometric, parsing/CV tools, OCR/detectors, retrieval tools like CLIP/ImageBind, and MLOps tools such as Docker, GitHub Actions, and W&B or MLflow. In the first 6 months, you are expected to deliver a data pipeline for converting ECMA dielines and PDFs, a panel-encoder checkpoint, an MVP copy-placement model, and a REST inference service with a designer preview UI. You will report to the Head of AI or CTO and collaborate with a front-end engineer, a product manager, and two packaging-design SMEs.
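As a hedged sketch of the CLIP-retrieval step mentioned in the tool stack, scoring candidate panel captions against an image might look like this. The image path and captions are hypothetical:

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Public CLIP checkpoint; the panel image and candidate captions are made up.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("dieline_panel.png")
texts = ["front panel with logo", "nutrition facts panel", "barcode panel"]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # image-text similarity scores
print(dict(zip(texts, logits.softmax(dim=1).squeeze().tolist())))
```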

Posted 2 days ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

We are looking for a Google Cloud Professional with a strong technical background in Google Cloud Platform (GCP) services. The ideal candidate will hold multiple Google Cloud certifications and have hands-on experience in delivering GCP-based projects across various domains. Key Responsibilities Design, implement, and manage scalable, secure, and cost-efficient solutions on Google Cloud Platform. Optimize cloud costs through FinOps practices, including budgeting, forecasting, and cost allocation. Develop and maintain data pipelines and analytics solutions using BigQuery for large-scale data processing. Implement Cloud DevSecOps practices to ensure secure, automated, and efficient CI/CD pipelines. Leverage Vertex AI for building, deploying, and managing machine learning models. Collaborate with cross-functional teams to architect and deploy GCP solutions tailored to business needs. Monitor, troubleshoot, and enhance GCP infrastructure to ensure high availability and performance. Stay updated on GCP services and best practices to drive continuous improvement. Required Qualifications Google Cloud Certifications: Minimum of two Google Cloud certifications (e.g., Professional Cloud Architect, Professional Data Engineer, Professional Machine Learning Engineer, etc.). Experience: Demonstrated hands-on experience working on Google Cloud projects, including: FinOps: Cost optimization and cloud financial management. BigQuery: Data warehousing and analytics. Cloud DevSecOps: Secure development and operations pipelines. Vertex AI: Machine learning model development and deployment. Other GCP services such as Cloud Storage, Cloud Functions, Pub/Sub, or Cloud Run. Strong understanding of cloud architecture, security, and networking principles. Proficiency in scripting or programming languages (e.g., Python, Go, or Java) for automation and integration. Excellent problem-solving skills and the ability to work in a fast-paced, collaborative environment. Preferred Qualifications Additional Google Cloud certifications or other relevant cloud certifications (e.g., AWS, Azure). Experience with hybrid or multi-cloud environments. Familiarity with infrastructure-as-code tools (e.g., Terraform, Deployment Manager). Knowledge of Kubernetes and Google Kubernetes Engine (GKE) for containerized workloads. (ref:hirist.tech)
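To illustrate the kind of BigQuery work listed above, a minimal Python sketch follows. The project, dataset, and table are hypothetical, and credentials are assumed to come from the environment (GOOGLE_APPLICATION_CREDENTIALS):

```python
from google.cloud import bigquery

# Client picks up project and credentials from the environment.
client = bigquery.Client()

# Hypothetical table: a daily order count over the last week.
query = """
    SELECT DATE(created_at) AS day, COUNT(*) AS orders
    FROM `my_project.sales.orders`
    GROUP BY day
    ORDER BY day DESC
    LIMIT 7
"""
for row in client.query(query).result():
    print(row.day, row.orders)
```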

Posted 2 days ago

Apply

3.0 - 7.0 years

0 Lacs

Maharashtra

On-site

You have an exciting opportunity to join Ripplehire as a Senior DevOps Team Lead - GCP Specialist. In this role, you will play a crucial part in shaping and executing the cloud infrastructure strategy on Google Cloud Platform (GCP), with a particular focus on GKE, networking, and optimization strategies.

As the Senior DevOps Team Lead, your responsibilities will include designing, implementing, and managing GCP-based infrastructure; optimizing GKE clusters for performance and cost efficiency; establishing secure VPC architectures and firewall rules; setting up logging and monitoring systems; driving cost-optimization initiatives; mentoring team members on GCP best practices; and collaborating with development teams on CI/CD pipelines.

To excel in this role, you must possess extensive experience with GCP, including GKE, networking, logging, monitoring, and cost optimization. You should also have a strong background in Infrastructure as Code, CI/CD pipeline design, container orchestration, troubleshooting, incident management, and performance optimization. Qualifications include at least 5 years of DevOps experience with a focus on GCP environments, GCP Professional certifications (Cloud Architect, DevOps Engineer preferred), experience leading technical teams, cloud security expertise, and a track record of scaling infrastructure for high-traffic applications.

If you are ready for a new challenge and an opportunity to advance your career in a supportive work environment, click on Apply, complete the screening form, and upload your resume to increase your chances of being shortlisted for an interview. Uplers is committed to making the hiring process reliable, simple, and fast, and we are here to support you throughout your engagement. Apply today and take the next step in your career journey with us!

Posted 2 days ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Title: AI/ML Engineer / Junior Data Scientist
Location: Bangalore / Pune
Experience: 0-5 years
Employment Type: Full-Time
Salary: 5-15 LPA (based on experience and skill set)

About The Role

We are looking for a passionate and driven AI/ML Engineer or Junior Data Scientist to join our growing analytics and product team. You'll work closely with senior data scientists, engineers, and business stakeholders to build scalable AI/ML solutions, extract insights from complex datasets, and develop models that improve real-world decision-making. Whether you're a fresher with solid projects or a professional with up to 5 years of experience, if you're enthusiastic about AI/ML and data science, we want to hear from you!

Key Responsibilities
- Collect, clean, preprocess, and analyze structured and unstructured data from multiple sources.
- Design, implement, and evaluate machine learning models for classification, regression, clustering, NLP, or recommendation systems.
- Collaborate with data engineers to deploy models in production (using Python, APIs, or cloud services like AWS/GCP).
- Visualize results and present actionable insights through dashboards, reports, and presentations.
- Conduct experiments, hypothesis testing, and A/B tests to optimize models and business outcomes.
- Develop scripts and reusable tools for automation and scalability of ML pipelines.
- Stay updated with the latest research papers, open-source tools, and trends in AI/ML.

Required Skills & Qualifications
- Bachelor's/Master's degree in Computer Science, Data Science, Mathematics, Statistics, or related fields.
- Strong Python programming skills with experience in libraries like NumPy, Pandas, Scikit-learn, TensorFlow, or PyTorch.
- Proficiency in data analysis and visualization (using tools like Matplotlib, Seaborn, Plotly, or Power BI/Tableau).
- Solid understanding of ML algorithms (linear regression, decision trees, random forests, SVMs, neural networks).
- Experience with SQL and working with large datasets.
- Exposure to cloud platforms (AWS, GCP, or Azure) and APIs is a plus.
- Knowledge of NLP, computer vision, or generative AI models is desirable.
- Strong problem-solving skills, attention to detail, and the ability to work in agile teams.

Good To Have (Bonus Points)
- Experience with the end-to-end ML model lifecycle (development to deployment).
- Experience with MLOps tools like MLflow, Docker, or CI/CD.
- Participation in Kaggle competitions or open-source contributions.
- Certifications in Data Science, AI/ML, or cloud platforms.

What We Offer
- A dynamic and collaborative work environment.
- Opportunities to work on cutting-edge AI projects.
- Competitive salary and growth path.
- Training, mentorship, and access to tools and resources.
- Flexible work culture and supportive teams.

(ref:hirist.tech)
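For a sense of the expected baseline, here is a minimal scikit-learn train-and-evaluate sketch of the kind this role involves; it uses a bundled dataset so it runs as-is:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Load a bundled binary-classification dataset and hold out a test split.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Fit a random forest and report per-class precision/recall/F1.
clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```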

Posted 2 days ago

Apply

6.0 years

0 Lacs

Greater Kolkata Area

Remote

Company Name: AdMedia Digital Labs Pvt. Ltd.
Website: www.admedia.com
Headquarters: California, USA
Job Location: Anywhere in India (WFH)

About The Company

Join the team that's redefining digital advertising. Founded in 1998 and headquartered in Los Angeles, AdMedia is a trailblazer in cross-channel digital advertising. Our global team of 250+ experts across the USA, UK, India, and Dubai brings experience in ad operations, campaign management, and optimization to deliver cutting-edge, performance-driven strategies. We provide advertisers with targeted reach, serving over 500 million impressions daily across search, native, mobile, video, and remarketing channels. Our solutions are trusted by global brands for delivering measurable impact, maximizing ROI, and engaging high-intent consumers with precision. At AdMedia, we're committed to innovation, transparency, and excellence. Our team is passionate about solving challenges, driving growth, and building long-lasting relationships with clients. We're looking for bold, driven individuals to join us and help shape the future of digital advertising.

Job Description

At AdMedia, we believe your job description is just the starting line. Our fun, highly motivated team has pioneered the largest search marketplace outside of the major engines! We have an award-winning ad tech platform, and we compete head-to-head with Google. We're enjoying unrivaled success as a formidable disruptor in paid online search advertising. AdMedia is currently hiring a highly experienced and visionary Senior Architect with strong expertise in PHP-based fullstack development and experience with Python or Node.js. This role demands a strategic thinker who can drive system architecture, ensure scalability, and lead cross-functional teams. You will be responsible for architecting robust web platforms and overseeing backend integrations using modern technologies.

Timing: 9 AM to 6 PM IST

Key Responsibilities
- Design, architect, and develop scalable, secure, and high-performance applications using PHP, Python, Node.js, MySQL, API integration, OOP, etc.
- Lead end-to-end architecture and integration of systems across PHP, Python, or Node.js platforms.
- Conduct code and architecture reviews, ensuring adherence to industry best practices.
- Own architectural decisions and translate business goals into technical solutions.
- Define technical roadmaps and system blueprints aligned with product and engineering strategies.
- Mentor and guide engineering teams on modern PHP and Python/Node.js backend development practices.
- Evaluate, recommend, and use tools, frameworks, and platforms for PHP, Python, and Node.js.
- Ensure code quality, scalability, security, and maintainability in all deliverables.
- Stay current with industry trends in both software architecture and backend development.

Experience and Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 6+ years of software development experience, with at least 5+ years in PHP development.
- 3+ years of hands-on experience with Python or Node.js backend systems.
- Experience with PHP and Python frameworks (Laravel, CodeIgniter, Django), MySQL, and RESTful APIs.
- Exposure to containerization (Docker, Kubernetes) and cloud platforms (AWS, GCP, Azure).
- Proven experience in leading technical teams and large-scale system design.
- Excellent communication, leadership, and stakeholder management skills.

Preferred Skills
- Contributions to open-source projects in PHP with Python or Node.js.
- Experience with CI/CD pipelines, backend optimization, and DevOps practices.
- Certifications in AWS/Azure or relevant backend/web development.

Benefits & Perks
- Competitive salary
- 12 paid company holidays & 24 days of paid time off
- PF, gratuity & medical insurance
- 5-day work week - good work/life balance!
- Training & certifications
- A friendly & supportive culture!

(ref:hirist.tech)

Posted 2 days ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Summary

We are looking for a skilled and proactive Security Engineer with a strong understanding of cybersecurity principles and hands-on experience implementing security measures in a financial services or NBFC environment. The ideal candidate will work closely with IT, compliance, and risk teams to ensure robust security across systems, networks, and applications.

Responsibilities
- Design, implement, and manage security tools, technologies, and controls across the IT infrastructure.
- Monitor security events and logs, investigate incidents, and respond to threats in real time (SIEM/SOC operations).
- Ensure compliance with RBI guidelines, ISO 27001, PCI DSS, and other applicable regulatory frameworks.
- Conduct vulnerability assessments and penetration testing for web, mobile, and infrastructure layers.
- Develop and enforce security policies, standards, and procedures tailored to NBFC operations.
- Support data protection initiatives, including DLP, encryption, secure key management, and endpoint protection.
- Collaborate with product and engineering teams to embed security best practices into the SDLC and DevSecOps.
- Prepare reports and documentation for audits, inspections, and regulatory reviews.
- Provide regular training and awareness programs for employees on cybersecurity.

Required Skills
- Hands-on experience with firewalls, IDS/IPS, antivirus, DLP, and SIEM tools (e.g., Splunk, ELK, QRadar).
- Strong understanding of security protocols, cryptography, authentication, and authorization.
- Experience in cloud security (AWS/Azure/GCP), endpoint security, and network hardening.
- Familiarity with RBI regulations, cyber resilience guidelines, and NBFC-specific security controls.
- Knowledge of application security, the OWASP Top 10, and secure coding practices.

Good to Have
- Relevant certifications such as CEH, CISSP, CISA, OSCP, or CCSP.
- Prior experience working in an NBFC, fintech, or regulated financial environment.
- Experience with automation/scripting tools (Python, Bash, PowerShell) for security operations.
- Exposure to risk management and business continuity planning.

Qualification
- Bachelor's degree in Computer Science, Information Security, or a related field.

(ref:hirist.tech)

Posted 2 days ago

Apply

3.0 years

0 Lacs

Greater Kolkata Area

On-site

Job Description

What You'll Do:
- Develop and maintain scalable backend systems using Python, with a focus on the Django or Flask frameworks.
- Integrate with OAuth and various social networking APIs (Facebook, Twitter, LinkedIn, Google+).
- Collaborate closely with mobile development teams to integrate with mobile applications.
- Implement and manage Django/Flask permissions, caching strategies, and asynchronous mechanisms.
- Optimize database interactions and ensure high-performance solutions with scalability in mind.
- Contribute to all phases of the development lifecycle, from design to deployment.

What Makes You a Great Fit:
- Strong problem-solving skills and a solid understanding of data structures and algorithms.
- A strong desire for more responsibility and continuous learning.
- Passion for building robust systems that are engineered to handle failure scenarios.
- An unwavering commitment to maintaining high coding standards.
- A strong advocate for producing quality software, actively raising and resolving issues.
- Experience with at least one major cloud platform such as AWS, GCP, Azure, or DigitalOcean (familiarity with Docker, Kubernetes, and microservices is a plus).

Required Skill-Sets: Python with Django or Flask; experience with APIs.

Experience Required: 3-5 years

Perks of Joining Aubergine Solutions
- 5-day work week
- Flexible shift timings
- Company-sponsored certifications
- Team-friendly culture
- Flat hierarchy
- Recreational activities: carrom, table tennis, cricket tournament participation
- Snack-filled pantry
- Group Medical Insurance

(ref:hirist.tech)
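For illustration, here is a minimal Flask endpoint of the sort this role centers on; the in-memory store is a stand-in for a real database:

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical in-memory store standing in for a real database.
USERS = {1: {"id": 1, "name": "Asha"}}

@app.route("/api/users/<int:user_id>")
def get_user(user_id: int):
    # Return the user record, or a JSON 404 if it does not exist.
    user = USERS.get(user_id)
    if user is None:
        return jsonify(error="not found"), 404
    return jsonify(user)

if __name__ == "__main__":
    app.run(debug=True)
```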

Posted 2 days ago

Apply

7.0 - 11.0 years

0 Lacs

Karnataka

On-site

FICO is a leading global analytics software company that assists businesses in over 100 countries in making informed decisions. By joining the world-class team at FICO, you will have the opportunity to realize your career potential. As part of the product development team, you will provide thought leadership and drive innovation, collaborating closely with product management to architect, design, and develop a highly feature-rich product as the VP, Software Engineering.

Your responsibilities will include designing, developing, testing, deploying, and supporting the capabilities of a large enterprise-level platform. You will create scalable microservices with a focus on high performance, availability, interoperability, and reliability; contribute to technical designs; participate in defining technical acceptance criteria; and mentor junior engineers to uphold quality standards.

To be successful in this role, you should hold a Bachelor's or Master's degree in computer science or a related field and possess a minimum of 7 years of experience in software architecture, design, development, and testing. Expertise in Java, Spring, Spring Boot, Maven/Gradle, Docker, Git, and GitHub, as well as experience with data structures, algorithms, and system design, is essential. You should also have a strong understanding of microservices architecture, RESTful and gRPC APIs, cloud engineering technologies such as Kubernetes and AWS/Azure/GCP, and databases like MySQL, PostgreSQL, MongoDB, and Cassandra. Experience with Agile software development, data engineering services, and software design principles is highly desirable.

At FICO, you will work in an inclusive culture built on core principles like acting like an owner, delighting customers, and earning respect. You will benefit from competitive compensation, benefits, and rewards programs while enjoying a people-first work environment that promotes work/life balance and professional development. Join FICO and be part of a leading organization at the forefront of Big Data analytics, where you can help businesses leverage data to enhance decision-making. Your role will make a significant impact on global businesses, and you will be part of a diverse, collaborative, and innovative environment.

Posted 2 days ago

Apply

7.0 - 10.0 years

0 Lacs

Greater Kolkata Area

Remote

Job Title: Senior Data Scientist
Location: Remote
Department: Data Science / Analytics / AI & ML
Experience: 7-10 years
Employment Type:

Summary

We are seeking an experienced and highly motivated Senior Data Scientist with 7-10 years of industry experience to lead advanced analytics initiatives and drive data-driven decision-making across the organization. The ideal candidate will be skilled in statistical modeling, machine learning, and data engineering, with strong business sense and the ability to mentor junior team members.

Responsibilities
- Lead end-to-end data science projects from problem definition through model deployment.
- Build, evaluate, and deploy machine learning models and statistical algorithms to solve complex business problems.
- Collaborate with cross-functional teams, including Product, Engineering, and business stakeholders, to integrate data science solutions.
- Work with large, complex datasets using modern data tools (e.g., Spark, SQL, Airflow).
- Translate complex analytical results into actionable insights and present them to non-technical audiences.
- Mentor junior data scientists and provide technical guidance.
- Stay current with the latest trends in AI/ML, data science, and data engineering.
- Ensure reproducibility, scalability, and performance of machine learning systems in production.

Qualifications
- Bachelor's or Master's degree in Computer Science, Statistics, Mathematics, Engineering, or a related field; a PhD is a plus.
- 7-10 years of experience in data science, machine learning, or applied statistics roles.
- Strong programming skills in Python and/or R; proficiency in SQL.
- Deep understanding of statistical techniques, hypothesis testing, and predictive modeling.
- Hands-on experience with ML libraries such as scikit-learn, TensorFlow, PyTorch, XGBoost, etc.
- Familiarity with data processing tools like Spark, Hadoop, or equivalents.
- Experience deploying models into production environments (APIs, MLOps, CI/CD pipelines).
- Excellent communication skills and the ability to convey technical insights to business audiences.
- Experience working in cloud environments such as AWS, GCP, or Azure.

(ref:hirist.tech)

Posted 2 days ago

Apply

4.0 - 8.0 years

0 Lacs

Haryana

On-site

Omniful is a rapidly expanding B2B SaaS company that is transforming the way businesses enhance their operations. We are seeking a skilled Golang Developer to join our dynamic team and contribute to the development of high-performance, scalable applications.

As a Golang Developer at Omniful, your key responsibilities will include developing, testing, and maintaining high-performance backend services using Golang. You will work on scalable, distributed systems and APIs, optimizing the performance and efficiency of existing applications. Collaboration with cross-functional teams such as frontend developers, DevOps, and product managers will be essential, as will implementing secure and efficient coding practices and ensuring clean, maintainable, well-documented code.

To be successful in this position, you should have at least 4 years of experience in backend development with Golang; a strong understanding of concurrency, multithreading, and microservices architecture; and experience with RESTful APIs, gRPC, and WebSockets, along with proficiency in SQL/NoSQL databases like PostgreSQL, MongoDB, or Redis. Hands-on experience with Docker, Kubernetes, and cloud platforms such as AWS/GCP is necessary, and familiarity with CI/CD pipelines and DevOps practices, along with experience with message queues like Kafka and RabbitMQ, will be advantageous. Excellent problem-solving skills and the ability to thrive in a fast-paced environment are also key requirements.

Joining Omniful means being part of a high-growth, innovative SaaS company where you will work on cutting-edge technologies in a collaborative environment. You will enjoy a dynamic office culture in Gurugram, Sector 16, and benefit from a competitive salary and benefits package. If you are ready to elevate your Golang expertise, apply now and seize this opportunity to grow with us.

Posted 2 days ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Description

We are seeking a Java Developer with expertise in prompt engineering to join our AI-driven development team. The ideal candidate will combine robust Java backend development capabilities with hands-on experience in integrating and fine-tuning LLMs (e.g., OpenAI, Cohere, Mistral, or Anthropic), designing effective prompts, and embedding AI functionality into enterprise applications. This role is ideal for candidates passionate about merging traditional enterprise development with cutting-edge AI technologies.

Key Responsibilities
- Design, develop, and maintain scalable backend systems using Java (Spring Boot) and integrate AI/LLM services.
- Collaborate with AI/ML engineers and product teams to design prompt templates, test prompt effectiveness, and iterate for accuracy, performance, and safety.
- Build and manage RESTful APIs that interface with LLM services and microservices in production-grade environments.
- Fine-tune prompt formats for various AI tasks (e.g., summarization, extraction, Q&A, chatbots) and optimize for performance and cost.
- Apply RAG (Retrieval-Augmented Generation) patterns to retrieve relevant context from data stores for LLM input.
- Ensure secure, efficient, and scalable communication between LLM APIs (OpenAI, Google Gemini, Azure OpenAI, etc.) and internal systems.
- Develop reusable tools and frameworks to support prompt evaluation, logging, and improvement cycles.
- Write high-quality unit tests, conduct code reviews, and maintain CI/CD pipelines using tools like Jenkins, GitHub Actions, or GitLab.
- Work in Agile/Scrum teams and contribute to sprint planning, estimation, and retrospectives.

Must-Have Technical Skills

Java & Backend Development:
- Core Java 8/11/17
- Spring Boot, Spring MVC, Spring Data JPA
- RESTful APIs, JSON, Swagger/OpenAPI
- Hibernate or other ORM tools
- Microservices architecture

Prompt Engineering / LLM Integration:
- Experience working with OpenAI (GPT-4, GPT-3.5), Claude, Llama, Gemini, or Mistral models
- Designing effective prompts for various tasks (classification, summarization, Q&A, etc.)
- Familiarity with prompt chaining and zero-shot/few-shot learning
- Understanding of token limits, temperature, top_p, and stop sequences
- Prompt evaluation methods and frameworks (e.g., LangChain, LlamaIndex, Guidance, PromptLayer)

AI Integration Tools:
- LangChain or LlamaIndex for building LLM applications
- API integration with AI platforms (OpenAI, Azure AI, Hugging Face, etc.)
- Vector databases (e.g., Pinecone, FAISS, Weaviate, ChromaDB)

DevOps / Deployment:
- Docker, Kubernetes (preferred)
- CI/CD tools (Jenkins, GitHub Actions)
- AWS/GCP/Azure cloud environments
- Monitoring: Prometheus, Grafana, ELK Stack

Good-to-Have Skills
- Python for prototyping AI workflows
- Chatbot development using LLMs
- Experience with RAG pipelines and semantic search
- Hands-on with GitOps, IaC (Terraform), or serverless functions
- Experience integrating LLMs into enterprise SaaS products
- Knowledge of Responsible AI and bias-mitigation strategies

Soft Skills
- Strong problem-solving and analytical thinking
- Excellent written and verbal communication skills
- Willingness to learn and adapt in a fast-paced, AI-evolving environment
- Ability to mentor junior developers and contribute to tech strategy

Education
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field

Preferred Certifications (Not Mandatory)
- OpenAI Developer or Azure AI certification
- Oracle Certified Java Professional
- AWS/GCP cloud certifications

(ref:hirist.tech)
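Although this is a Java role, the sampling controls it names (temperature, top_p, stop sequences) are easiest to show compactly in Python with the OpenAI SDK. The model name is an assumption, and the client reads OPENAI_API_KEY from the environment:

```python
from openai import OpenAI

client = OpenAI()  # API key taken from the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": "You are a concise summarizer."},
        {"role": "user", "content": "Summarize: invoice #123 was paid late."},
    ],
    temperature=0.2,  # low randomness suits extraction-style tasks
    top_p=0.9,        # nucleus-sampling cutoff
    stop=["\n\n"],    # stop sequence to bound the completion
)
print(response.choices[0].message.content)
```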

Posted 2 days ago

Apply

5.0 - 8.0 years

0 Lacs

Bhubaneswar, Odisha, India

On-site

Skills: AI/ML, Machine Learning, TensorFlow, CI/CD, AWS, DevOps, Azure

Job Description

Minimum 5 - 8 years of experience in Data Science and Machine Learning. In-depth knowledge of machine learning, deep learning in Computer Vision (CV), and generative AI techniques. Proficiency in programming languages such as Python and frameworks like TensorFlow or PyTorch. Strong understanding of NLP techniques and frameworks such as BERT, GPT, or Transformer models. Experience with cloud platforms such as Azure, AWS, or GCP and deploying AI solutions in a cloud environment. Expertise in data engineering, including data curation, cleaning, and preprocessing. Knowledge of trusted AI practices, such as fairness, transparency, and accountability in AI models and systems. Strong collaboration with engineering teams to ensure seamless integration and deployment of AI models. Excellent problem-solving and analytical skills, with the ability to translate business requirements into technical solutions. Strong communication and interpersonal skills, with the ability to collaborate effectively with stakeholders at various levels. Understanding of data privacy, security, and ethical considerations in AI applications. Track record of driving innovation and staying updated with the latest advancements in the field.

Mandatory Skills

Generative AI techniques; NLP techniques; BERT, GPT, or Transformer models. Azure OpenAI GPT models, Hugging Face Transformers, prompt engineering. Python; knowledge of frameworks like TensorFlow or PyTorch, LangChain, R. Deploying AI solutions in Azure, AWS, or GCP. Deep learning in Computer Vision (CV), Large Language Models (LLM).

Good To Have Skills

Knowledge of DevOps and MLOps practices. Implement CI/CD pipelines. Utilize tools such as Docker, Kubernetes, and Git to build and manage AI pipelines.
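
To illustrate the BERT- and GPT-style Transformer work this posting references, a minimal sketch using Hugging Face pipelines, assuming the transformers library and a PyTorch backend are installed; the model names are common illustrative defaults, not a prescription from the posting:

    # Two common Transformer tasks via Hugging Face pipelines (illustrative sketch).
    # Assumes: transformers + torch installed; models download on first use.
    from transformers import pipeline

    # BERT-style masked-language understanding
    fill = pipeline("fill-mask", model="bert-base-uncased")
    print(fill("Machine learning models require [MASK] data.")[0]["token_str"])

    # GPT-style text generation
    gen = pipeline("text-generation", model="gpt2")
    print(gen("Generative AI can", max_new_tokens=20)[0]["generated_text"])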

Posted 2 days ago

Apply

4.0 - 8.0 years

0 Lacs

chennai, tamil nadu

On-site

As a Python Developer within our Information Technology department, your primary responsibility will be to leverage your expertise in Artificial Intelligence (AI), Machine Learning (ML), and Generative AI. We are seeking a candidate with hands-on experience in GPT-4, transformer models, and deep learning frameworks, along with a deep understanding of model fine-tuning, deployment, and inference.

Your key responsibilities will include designing, developing, and maintaining Python applications tailored to AI/ML and generative AI. You will also build and refine transformer-based models such as GPT, BERT, and T5 for various NLP and generative tasks. Working with extensive datasets for training and evaluation will be a crucial aspect of your role.

Moreover, you will implement model inference pipelines and scalable APIs using FastAPI, Flask, or similar technologies. Collaborating closely with data scientists and ML engineers will be essential to creating end-to-end AI solutions. Staying current with the latest research and advancements in generative AI and ML is imperative for this position.

From a technical standpoint, you should demonstrate strong proficiency in Python and relevant libraries such as NumPy, Pandas, and Scikit-learn. At least 7 years of experience in AI/ML development is required, along with hands-on familiarity with transformer-based models, particularly GPT-4, LLMs, or diffusion models. Experience with frameworks like Hugging Face Transformers, OpenAI API, TensorFlow, PyTorch, or JAX is highly desirable. Additionally, expertise in deploying models using Docker, Kubernetes, or cloud platforms like AWS, GCP, or Azure will be advantageous.

A knack for problem-solving and algorithmic thinking is crucial for this role. Familiarity with prompt engineering, fine-tuning, and reinforcement learning with human feedback (RLHF) would be a valuable asset. Contributions to open-source AI/ML projects, experience with vector databases, building AI chatbots, copilots, or creative content generators, and knowledge of MLOps and model monitoring will be considered added advantages.

In terms of educational qualifications, a Bachelor's degree in Science (B.Sc), Technology (B.Tech), or Computer Applications (BCA) is required. A Master's degree in Science (M.Sc), Technology (M.Tech), or Computer Applications (MCA) would be an added benefit.
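
As a rough picture of the "model inference pipelines and scalable APIs" requirement, here is a minimal sketch assuming fastapi, uvicorn, pydantic, and transformers are installed; gpt2 is used purely as a small stand-in model, and the endpoint name is illustrative:

    # Minimal inference API: a text-generation model behind a FastAPI endpoint.
    # Assumes: fastapi, uvicorn, pydantic, transformers installed; gpt2 is a stand-in.
    from fastapi import FastAPI
    from pydantic import BaseModel
    from transformers import pipeline

    app = FastAPI()
    generator = pipeline("text-generation", model="gpt2")  # load once at startup

    class GenerateRequest(BaseModel):
        prompt: str
        max_new_tokens: int = 64

    @app.post("/generate")
    def generate(req: GenerateRequest):
        result = generator(req.prompt, max_new_tokens=req.max_new_tokens)
        return {"completion": result[0]["generated_text"]}

    # Run with: uvicorn main:app --host 0.0.0.0 --port 8000

Loading the model once at module import, rather than per request, is the key scalability choice here; containerizing this service with Docker is the usual next step toward the Kubernetes deployments the posting mentions.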

Posted 2 days ago

Apply

10.0 - 14.0 years

0 Lacs

andhra pradesh

On-site

We are seeking a highly skilled Technical Architect with expertise in Java Spring Boot, React.js, and IoT system architecture, and a strong foundation in DevOps practices. As the ideal candidate, you will play a pivotal role in designing scalable, secure, and high-performance IoT solutions, leading full-stack teams, and collaborating across product, infrastructure, and data teams.

Your key responsibilities will include designing and implementing scalable and secure IoT platform architecture, defining data flow and event processing pipelines, architecting microservices-based solutions, and integrating them with React-based front-ends. You will also be responsible for defining CI/CD pipelines, managing containerization and orchestration, driving infrastructure automation, ensuring platform monitoring and observability, and enabling auto-scaling and zero-downtime deployments.

In addition, you will collaborate with product managers and business stakeholders to translate requirements into technical specs, mentor and lead a team of developers and engineers, conduct code and architecture reviews, set goals and targets, and provide coaching and professional development to team members. Your role will also involve conducting unit testing, identifying risks, applying coding standards and best practices to ensure quality, and maintaining a long-term outlook on the product roadmap and its enabling technologies.

To be successful in this role, you must have hands-on IoT project experience, experience designing and deploying multi-tenant SaaS platforms, strong knowledge of security best practices in IoT and cloud, and excellent problem-solving, communication, and team leadership skills. It would be beneficial if you have experience with Edge Computing frameworks, AI/ML model integration into IoT pipelines, exposure to industrial protocols, experience with digital twin concepts, and certifications in relevant technologies.

Ideally, you should have a Bachelor's or Master's degree in Computer Science, Engineering, or a related field. By joining us, you will have the opportunity to lead architecture for cutting-edge industrial IoT platforms, work with a passionate team in a fast-paced and innovative environment, and gain exposure to cross-disciplinary challenges in IoT, AI, and cloud-native technologies.
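
To make the "data flow and event processing pipelines" responsibility a little more concrete, here is a minimal sketch of a device-telemetry subscriber, assuming the paho-mqtt 1.x client library; the broker host and topic layout are hypothetical and the print call stands in for real downstream processing:

    # Minimal IoT telemetry subscriber: consumes device events from an MQTT broker.
    # Assumes: paho-mqtt 1.x installed; broker host and topic are hypothetical.
    import json
    import paho.mqtt.client as mqtt

    def on_message(client, userdata, msg):
        # Each message is one device event; parse it and hand off to the pipeline.
        event = json.loads(msg.payload)
        print(f"{msg.topic}: {event}")  # stand-in for real event processing

    client = mqtt.Client()
    client.on_message = on_message
    client.connect("broker.example.com", 1883)   # hypothetical broker
    client.subscribe("factory/+/telemetry")      # '+' matches any device id
    client.loop_forever()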

Posted 2 days ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
