
17074 Tuning Jobs - Page 7

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

7.0 years

0 Lacs

India

On-site

Required Skills & Experience:
- Minimum 7+ years of professional experience in ColdFusion development (CFML).
- Proven expertise with ColdFusion versions 10, 11, 2016, 2018, or newer.
- Strong understanding of MVC frameworks (e.g., FW/1, ColdBox) and object-oriented programming (OOP) principles in ColdFusion.
- Proficiency in SQL and experience working with relational databases (e.g., MS SQL Server, MySQL); ability to write complex queries, stored procedures, and optimize database performance.
- Experience with front-end technologies: HTML, CSS, JavaScript, jQuery.
- Familiarity with web servers (IIS, Apache) and their integration with ColdFusion.
- Experience with version control systems (e.g., Git).
- Strong analytical and problem-solving skills, with the ability to diagnose and resolve complex technical issues quickly and effectively.
- Excellent written and verbal communication skills, with the ability to articulate technical concepts clearly to non-technical stakeholders.
- Ability to work independently with minimal supervision and as part of a distributed team.
- Proactive attitude with a strong sense of ownership and responsibility for application stability.

Job Description: We are seeking a highly skilled and experienced ColdFusion Developer to join our team on a contract basis. The successful candidate will be instrumental in enhancing, maintaining, and troubleshooting our customer’s critical ColdFusion applications, ensuring their stability and performance during our strategic migration. This role requires a strong understanding of ColdFusion best practices, excellent problem-solving skills, and the ability to work collaboratively with the customer’s internal development and operations teams.

Key Responsibilities & Duties:
- Enhancements & Refinements: Implement enhancements and feature requests for ColdFusion applications as required. Optimize existing ColdFusion code for efficiency, scalability, and security. Participate in code reviews to ensure quality and adherence to established standards.
- Application Maintenance & Support: Perform regular maintenance, bug fixes, and performance tuning on existing ColdFusion applications. Monitor application health, identify potential issues, and implement proactive solutions to prevent downtime. Respond to and resolve production incidents and user-reported issues in a timely and efficient manner. Collaborate with internal support teams to diagnose and resolve complex technical problems.
- Documentation: Create and update technical documentation for ColdFusion applications, including system architecture, configurations, and troubleshooting guides. Document solutions to recurring issues and best practices for future reference.
- Collaboration & Communication: Work closely with existing development teams (including PHP developers) to understand application dependencies and ensure smooth operations during the transition phase. Communicate effectively with project managers, stakeholders, and other team members regarding progress, challenges, and solutions. Provide technical guidance and knowledge transfer to internal teams as needed, particularly regarding the intricacies of the ColdFusion codebase.
- Database Interaction: Develop and optimize complex SQL queries for various database systems (e.g., MS SQL Server, MySQL) used by ColdFusion applications. Ensure data integrity and performance of database interactions.
- Security: Adhere to security best practices and implement necessary measures to protect sensitive data within ColdFusion applications.

Other nice-to-have skills: Experience with PHP or other modern web technologies (Node.js, Python, Java), demonstrating an understanding of different development paradigms. Familiarity with AWS or Azure cloud environments. Experience in the FinTech or Healthcare sector. Knowledge of Direct Debit systems or recurring payment platforms.

Posted 1 day ago

Apply

8.0 years

0 Lacs

India

Remote

Role Description: This is a full-time remote role for an AI Research Engineer specializing in Reinforcement Learning (RL). The AI Research Engineer will be responsible for developing and implementing state-of-the-art RL algorithms, collaborating on research projects, and integrating these algorithms into existing systems.

Qualifications:
- Full-stack engineering, from data engineering to model architecture, RL, and deployment.
- Experience with performance engineering and identifying bottlenecks in RL training.
- Experience with tuning reward functions, hyperparameters, and exploration strategies to solve complex tasks with deep RL.
- 8+ years of Python programming experience.

Nice to have:
- Advanced degree (MS or PhD) in Computer Science or a related field.
- Experience with Kubernetes (k8s), Docker, GPU performance/systems engineering, and model inference.
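The "tuning exploration strategies" qualification can be made concrete with a toy sketch (the task, arm probabilities, and step count below are invented for illustration): an epsilon-greedy policy on a two-armed bandit, where the epsilon hyperparameter controls the exploration/exploitation trade-off.

```python
import random

def run_bandit(epsilon, true_means=(0.3, 0.7), steps=5000, seed=0):
    """Epsilon-greedy on a two-armed Bernoulli bandit.

    With probability epsilon we explore (random arm); otherwise we
    exploit the arm with the highest running value estimate.
    """
    rng = random.Random(seed)
    counts = [0, 0]
    values = [0.0, 0.0]  # incremental mean reward estimate per arm
    total_reward = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(2)           # explore
        else:
            arm = values.index(max(values))  # exploit
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
        total_reward += reward
    return total_reward / steps

# A small epsilon (mostly exploit) should beat pure exploration here.
print(run_bandit(epsilon=0.1), run_bandit(epsilon=1.0))
```

Tuning in practice means sweeping epsilon (or a decay schedule) and comparing average reward, exactly the kind of loop the posting alludes to for reward functions and hyperparameters.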

Posted 1 day ago

Apply

5.0 years

0 Lacs

India

Remote

About Company: They balance innovation with an open, friendly culture and the backing of a long-established parent company, known for its ethical reputation. We guide customers from what’s now to what’s next by unlocking the value of their data and applications to solve their digital challenges, achieving outcomes that benefit both business and society.

Job Description:
Job Title: LLM CUDA/C++ and Python Developer
Location: Pan India
Experience: 6+ yrs.
Employment Type: Contract to hire
Work Mode: Remote
Notice Period: Immediate joiners

Role Overview: This role is part of a project supporting leading LLM companies. The primary objective is to help these foundational LLM companies improve their Large Language Models. We support companies in enhancing their models by offering high-quality proprietary data. This data can be used as a basis for fine-tuning models or as an evaluation set to benchmark performance. In an SFT data generation workflow, you might have to put together a prompt that contains code and questions, then elaborate model responses, and translate the provided CUDA/C++ code into equivalent Python code using PyTorch and NumPy to replicate the algorithm's behavior. For RLHF data generation, you may need to create a prompt or use one provided by the customer, ask the model questions, and evaluate the outputs generated by different versions of the LLM, comparing them and providing feedback, which is then used in fine-tuning processes. Please note that this role does not involve building or fine-tuning LLMs.

What does day-to-day look like:
● Translate CUDA/C++ code into equivalent Python implementations using PyTorch and NumPy, ensuring logical and performance parity.
● Analyze CUDA kernels and GPU-accelerated code for structure, efficiency, and function before translation.
● Evaluate LLM-generated translations of CUDA/C++ code to Python, providing technical feedback and corrections.
● Collaborate with prompt engineers and researchers to develop test prompts that reflect real-world CUDA/PyTorch tasks.
● Participate in RLHF workflows, ranking LLM responses and justifying ranking decisions clearly.
● Debug and review translated Python code for correctness, readability, and consistency with industry standards.
● Maintain technical documentation to support reproducibility and code clarity.
● Propose enhancements to prompt structure or conversion approaches based on common LLM failure patterns.

Requirements:
● 5+ years of overall work experience, with at least 3 years of relevant experience in Python and 2+ years in CUDA/C++.
● Strong hands-on experience with Python, especially in scientific computing using PyTorch and NumPy.
● Solid understanding of CUDA programming concepts and C++ fundamentals.
● Demonstrated ability to analyze CUDA kernels and accurately reproduce them in Python.
● Familiarity with GPU computation, parallelism, and performance-aware coding practices.
● Strong debugging skills and attention to numerical consistency when porting logic across languages.
● Experience evaluating AI-generated code or participating in LLM tuning is a plus.
● Ability to communicate technical feedback clearly and constructively.
● Fluent in conversational and written English.
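The translation workflow this role describes can be illustrated with a small sketch (the kernel, names, and data below are invented for illustration, not taken from the role): a SAXPY-style CUDA kernel ported to Python twice, once as a literal per-thread loop and once as idiomatic NumPy, with a numerical-parity check between the two.

```python
# Hypothetical kernel being translated:
#   __global__ void saxpy(int n, float a, float *x, float *y) {
#       int i = blockIdx.x * blockDim.x + threadIdx.x;
#       if (i < n) y[i] = a * x[i] + y[i];
#   }
import numpy as np

def saxpy_reference(n, a, x, y):
    """Literal port of the kernel: one loop iteration per CUDA 'thread'."""
    out = y.copy()
    for i in range(n):
        out[i] = a * x[i] + out[i]
    return out

def saxpy_vectorized(a, x, y):
    """Idiomatic NumPy translation: the per-thread indexing collapses
    into a single vectorized expression."""
    return a * x + y

rng = np.random.default_rng(0)
x = rng.standard_normal(1024, dtype=np.float32)
y = rng.standard_normal(1024, dtype=np.float32)

# Numerical-parity check between the literal port and the vectorized form,
# the kind of consistency check the JD's "numerical consistency" item implies.
assert np.allclose(saxpy_reference(x.size, 2.0, x, y), saxpy_vectorized(2.0, x, y))
```

In an SFT workflow, both versions plus the parity check could form part of the "elaborated model response" for a translation prompt.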

Posted 1 day ago

Apply

0 years

0 Lacs

India

On-site

The System Engineer is responsible for designing, implementing, maintaining, and supporting IT infrastructure systems, including servers, networks, and cloud environments, ensuring systems are optimized for performance, security, and reliability. The role involves both hands-on technical work and collaboration with other IT and business units.

Responsibilities:
- Design, configure, and manage server and network infrastructure (physical and virtual environments).
- Install, upgrade, and maintain operating systems (Windows, Linux, etc.) and system software.
- Monitor system performance, identify issues, and implement solutions to ensure high availability and performance.
- Manage system backups, disaster recovery plans, and failover procedures.
- Implement and manage security protocols, access controls, and compliance measures.
- Coordinate with development, IT, and support teams to ensure system compatibility and efficiency.
- Automate system tasks using scripting tools (PowerShell, Bash, etc.).
- Participate in capacity planning, performance tuning, and future system upgrades.
- Create and maintain detailed documentation of system configurations and procedures.
- Stay updated with the latest industry trends and technologies.

Requirements:
- Proficiency in managing Windows and/or Linux server environments.
- Experience with virtualization technologies (VMware, Hyper-V).
- Familiarity with cloud platforms (AWS, Azure, Google Cloud).
- Solid understanding of networking concepts (DNS, DHCP, TCP/IP, VPN).
- Knowledge of cybersecurity principles and system hardening.
- Experience with monitoring tools (Nagios, Zabbix, SolarWinds).
- Strong analytical, problem-solving, and troubleshooting skills.
- Excellent communication and documentation skills.
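The task-automation duty ("Automate system tasks using scripting tools") might look like the following sketch, written in Python as a stand-in for PowerShell or Bash; the mount point and 90% threshold are illustrative choices, not values from the posting.

```python
import shutil

def check_disk_usage(path="/", warn_at=0.90):
    """Return (used_fraction, alert) for a mount point.

    A scheduled monitoring job might run this periodically and raise
    an alert when the volume crosses the threshold. The 90% default
    is an illustrative value, not a universal standard.
    """
    usage = shutil.disk_usage(path)
    used_fraction = usage.used / usage.total
    return used_fraction, used_fraction >= warn_at

frac, alert = check_disk_usage("/")
print(f"root volume {frac:.0%} full, alert={alert}")
```

The same shape (measure, compare to threshold, alert) generalizes to the memory, service-health, and backup checks the role lists.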

Posted 1 day ago

Apply

0.0 - 3.0 years

0 - 0 Lacs

Utran, Surat, Gujarat

On-site

We are seeking a Senior Backend Developer with deep expertise in Node.js and AWS Cloud Services to architect, develop, and scale backend systems for modern web applications. The ideal candidate will have 3–5 years of hands-on experience and a strong foundation in building secure, scalable APIs and microservices using cloud-native architecture. If you’re a problem solver who loves backend logic, API performance, and clean infrastructure, we’d love to meet you!

Key Responsibilities:
- Backend Development: Design, develop, and maintain scalable backend systems using Node.js and related technologies.
- API Development: Create secure, efficient RESTful and GraphQL APIs for web and mobile applications.
- Cloud Infrastructure: Leverage AWS services (Lambda, EC2, S3, RDS, DynamoDB, API Gateway, etc.) for deployment, storage, and scalability.
- Database Management: Design and optimize schemas using MySQL, MongoDB, or PostgreSQL.
- Performance Tuning: Analyze and improve backend performance and reliability.
- DevOps & CI/CD: Participate in building automated deployment pipelines and monitoring services.
- Security: Implement authentication, authorization, and secure data practices (OAuth, JWT, etc.).
- Team Collaboration: Work closely with frontend developers, DevOps engineers, and product managers in an agile environment.
- Mentorship: Support junior developers through code reviews and technical guidance.

Must-Have Skills:
- 3–5 years of experience in backend development with Node.js
- Strong experience with AWS services (Lambda, S3, RDS, EC2, API Gateway, etc.)
- Proficiency in building RESTful APIs and handling asynchronous operations
- Experience with databases: MySQL, MongoDB, or PostgreSQL
- Familiarity with Git, CI/CD tools, and containerization (Docker)
- Solid understanding of security best practices and API authentication
- Experience working in Agile/Scrum teams
- Excellent debugging and performance tuning skills

Nice-to-Haves:
- Experience with GraphQL
- Knowledge of serverless architecture
- Exposure to Redis, Kafka, or message queues
- Familiarity with Infrastructure-as-Code tools like Terraform or AWS CloudFormation
- Basic understanding of frontend integration with React/Angular

What We Offer:
- 5-Day Work Week – Prioritizing work-life balance
- Challenging Projects – Solve real-world problems with scalable solutions
- Learning Culture – Training programs and certifications to keep you growing
- Collaborative Team – Supportive environment focused on innovation and ownership
- Attractive Compensation – Competitive salary and performance-based growth

Build the future of scalable web technology with us. Join now!

Job Type: Full-time
Pay: ₹38,000.00 - ₹62,000.00 per month
Benefits: Flexible schedule, Health insurance, Paid time off
Education: Bachelor's (Required)
Experience: Node.js: 3 years (Required)
Location: Utran, Surat, Gujarat (Required)
Work Location: In person
Speak with the employer: +91 9904361666

Posted 1 day ago

Apply

3.0 years

10 - 12 Lacs

Sewri, Mumbai, Maharashtra

On-site

JD for ABAP-HR

Job Overview: We are seeking a skilled and motivated SAP ABAP-HR Developer to join our team. The successful candidate will play a critical role in the design, development, testing, and support of SAP ABAP applications, specifically within the SAP HR (Human Resources) modules. This position requires a deep understanding of ABAP development tools and methodologies, especially in HR domains like Personnel Administration, Payroll, Time Management, and Organizational Management.

Key Responsibilities:
- Develop and support ABAP RICEF objects (Reports, Interfaces, Conversions, Enhancements, Forms) with a strong focus on HR modules.
- Design and implement Module Pool Programming, RFCs, BDC, Enhancements, and Dynamic Programming.
- Build and maintain ABAP Dictionary objects, including tables, views, data elements, and domains.
- Develop applications using ABAP OOP, Web Dynpro, and Adobe Forms.
- Implement OSS notes, perform code optimization, and ensure high performance of solutions.
- Work closely with functional teams to translate business requirements into technical specifications.
- Create and manage HR InfoTypes, and work with function modules and tables related to PMS, Leave, PA, OM, Payroll, and Time modules.
- Debug and resolve ABAP issues, including performance tuning and enhancements.
- Prepare technical documentation in accordance with provided templates and standards.
- Participate in project discussions, code reviews, and team collaborations for medium to large-scale SAP initiatives.

Mandatory Skills:
- Minimum 3 years of hands-on experience in ABAP-HR development
- Strong experience with RFC, BDC, Enhancements, Module Pool Programming, ABAP OOP, SAP HR InfoTypes and associated function modules, and the Appraisals, Leave, PA, OM, Payroll, and Time Management modules
- Proficiency in Adobe Forms development, Web Dynpro ABAP, and ABAP Dictionary objects
- Sound debugging, performance optimization, and documentation skills

Desirable Skills:
- Experience with SAP Advanced Claim Framework
- Exposure to HR processes like PMS, Leave Management, Payroll, etc.
- Familiarity with SAP upgrades and code redesign projects
- Knowledge of SAP HANA, SAP Fiori, and UI design tools
- Understanding of Ad Hoc Query and custom reporting features in SAP HR

Qualifications:
- Bachelor’s degree in Computer Science, Information Technology, or a related field
- Minimum of 3 years of ABAP development experience within SAP HR modules
- Excellent analytical, debugging, and communication skills
- Ability to work collaboratively with cross-functional teams and stakeholders

Job Type: Contractual / Temporary
Contract length: 12 months
Pay: ₹1,000,000.00 - ₹1,200,000.00 per year
Benefits: Health insurance
Application Question(s): Are you an immediate joiner? How many total years of experience do you have with SAP ABAP development? Do you have experience with HR modules?
Location: Sewri, Mumbai, Maharashtra (Required)
Work Location: In person

Posted 1 day ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

We are seeking a skilled and experienced Technical Lead – Data Observability to spearhead the development and enhancement of a large-scale data observability platform on AWS. This platform plays a mission-critical role in delivering real-time monitoring, reporting, and actionable insights across the client’s data ecosystem. The ideal candidate will have strong technical acumen in AWS data services, proven leadership experience in engineering teams, and a passion for building scalable, high-performance cloud-based data pipelines. You will work closely with the Programme Technical Lead / Architect to define the platform vision, technical priorities, and key success metrics.

Key Responsibilities:
- Lead the design, development, and deployment of features for the data observability platform.
- Mentor and guide junior engineers, fostering a culture of technical excellence and collaboration.
- Collaborate with architects to align on roadmap, architecture, and KPIs for platform evolution.
- Ensure code quality, performance, and scalability across data engineering solutions.
- Conduct and participate in code reviews, architecture design discussions, and sprint planning.
- Support operational readiness, including performance tuning, alerting, and incident response.

Must-Have Skills & Experience (Non-Negotiable):
- 5+ years of hands-on experience in Data Engineering or Software Engineering.
- 3+ years in a technical lead/squad lead capacity.
- Expertise in AWS data services: AWS Glue, AWS EMR, Amazon Kinesis, AWS Lambda, Amazon Athena, Amazon S3.
- Strong programming skills in PySpark, Python, and SQL.
- Proven experience in building and maintaining scalable, production-grade data pipelines on cloud platforms.

Preferred / Nice-to-Have Skills:
- Familiarity with data observability tools (e.g., Monte Carlo, Databand, Bigeye) is a plus.
- Understanding of DevOps/CI-CD practices using Git, Jenkins, etc.
- Knowledge of data quality frameworks, metadata management, and data lineage concepts.
- Exposure to agile methodologies and tools like Jira, Confluence.
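One concrete form a data observability check can take (a per-column null-rate metric with an alert threshold) is sketched below in plain Python. On a platform like the one described, this would run inside a PySpark/Glue job; the column names, sample batch, and 10% threshold here are invented for illustration.

```python
def null_rates(rows):
    """Compute the per-column null rate for a batch of records (dicts).

    Plain Python is used only to keep the sketch self-contained; the
    same metric is one line of aggregation in PySpark.
    """
    counts, nulls = {}, {}
    for row in rows:
        for col, val in row.items():
            counts[col] = counts.get(col, 0) + 1
            if val is None:
                nulls[col] = nulls.get(col, 0) + 1
    return {col: nulls.get(col, 0) / counts[col] for col in counts}

def breaches(rates, threshold=0.1):
    """Columns whose null rate exceeds the (illustrative) alert threshold."""
    return sorted(col for col, r in rates.items() if r > threshold)

batch = [
    {"id": 1, "amount": 10.0}, {"id": 2, "amount": None},
    {"id": 3, "amount": 12.5}, {"id": 4, "amount": None},
]
print(breaches(null_rates(batch)))  # "amount" has a 50% null rate
```

Emitting such metrics per pipeline run, and alerting on threshold breaches, is the basic loop that tools like Monte Carlo or Databand automate at scale.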

Posted 1 day ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Our technology services client is seeking multiple Senior Data Security Analysts to join their team on a contract basis. These positions offer strong potential for conversion to full-time employment upon completion of the initial contract period. Below are further details about the role:

Role: Senior Data Security Analyst
Experience: 3-8 Years
Location: Pune, Chennai, Bangalore
Notice Period: Immediate to 15 Days
Mandatory Skills: Data Security, BigID, Palo Alto, Purview, SaaS

Job Description:
- Maintain and improve data security controls (data discovery and classification).
- Manage security policies using security tools.
- Manage and monitor auditing and logging on all cloud databases.
- Manage the day-to-day governance of the enterprise deployment of BigID, Purview, and Palo Alto. This includes configuration tuning and policy management, as well as defining and executing escalation criteria.
- Work with security teams to tune control systems to best meet the needs of the business.
- Work on integrations using APIs, connectors, etc.
- Work on daily activities to support security controls.
- Support configuration, rules, and policies across the enterprise.
- Support security incident response and database security controls with enterprise CSIRT & SOC teams.

The Skills You Bring:
- 3+ years of working knowledge of BigID, Palo Alto, and Purview.
- 3+ years of working knowledge of SaaS technology & services.
- Strong understanding of data classification concepts and best practices.
- Expertise in at least one major cloud provider (AWS, Azure, GCP).
- Ability to document security governance processes and procedures in a team run book.
- Ability to interact with personnel at all levels across the organization and to comprehend business imperatives.
- A strong customer/client focus, with the ability to manage expectations appropriately, provide a superior customer/client experience, and build long-term relationships.
- Strong communication and collaboration skills; ability to work effectively across multiple teams.
- Ability to think strategically, use sound judgement, and balance short- and long-term risk decisions.
- Comfortable with appropriate challenge and escalation.
- Must be self-motivated, willing to take initiative, and capable of working independently.

If you are interested, share your updated resume with sai.a@s3staff.com
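The data discovery and classification duty can be illustrated at a toy level: tools like BigID and Purview, at their simplest layer, scan values against patterns for sensitive data types. The sketch below uses deliberately simplified regexes invented for illustration; real classifiers add validation, context analysis, and ML-based matching.

```python
import re

# Simplified detector patterns; production classifiers combine patterns
# with checksum validation, column-name context, and trained models.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def classify(text):
    """Return the set of sensitive-data labels whose pattern matches."""
    return {label for label, rx in PATTERNS.items() if rx.search(text)}

print(classify("contact: jane.doe@example.com, SSN 123-45-6789"))
```

Findings from scans like this are what feed the classification policies and escalation criteria the role is responsible for tuning.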

Posted 1 day ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About Company: They balance innovation with an open, friendly culture and the backing of a long-established parent company, known for its ethical reputation. We guide customers from what’s now to what’s next by unlocking the value of their data and applications to solve their digital challenges, achieving outcomes that benefit both business and society.

About Client: Our client is a global digital solutions and technology consulting company headquartered in Mumbai, India. The company generates annual revenue of over $4.29 billion (₹35,517 crore), reflecting a 4.4% year-over-year growth in USD terms. It has a workforce of around 86,000 professionals operating in more than 40 countries and serves a global client base of over 700 organizations. Our client operates across several major industry sectors, including Banking, Financial Services & Insurance (BFSI); Technology, Media & Telecommunications (TMT); Healthcare & Life Sciences; and Manufacturing & Consumer. In the past year, the company achieved a net profit of $553.4 million (₹4,584.6 crore), marking a 1.4% increase from the previous year. It also recorded a strong order inflow of $5.6 billion, up 15.7% year-over-year, highlighting growing demand across its service lines. Key focus areas include Digital Transformation, Enterprise AI, Data & Analytics, and Product Engineering, reflecting its strategic commitment to driving innovation and value for clients across industries.

Job Title: PostgreSQL - DB Administration
Location: Bengaluru
Experience: 5+ years
Job Type: Contract to hire
Notice Period: Immediate joiners
Detailed JD: The PostgreSQL Database Administrator will provide technical and operational support for database server activities, including installation, troubleshooting, performance monitoring, tuning, and optimization.
- Three (3) years of experience with PostgreSQL version 9 up to the latest version, hosted on Azure PostgreSQL platforms.
- Three (3) years of experience migrating MS SQL Server databases to PostgreSQL and deploying databases in containers.
- Install, monitor, and maintain PostgreSQL; implement monitoring and alerting; implement backup and recovery processes; provide system and SQL performance tuning.
- Two (2) years of experience as a PostgreSQL database administrator deploying PostgreSQL databases on cloud platforms such as Azure.
- Experience with programming languages such as UNIX shell scripting, PL/pgSQL, Python, or Perl.
- Two (2) years of experience with PostgreSQL native tools such as pgAdmin, pgAudit, pgBadger, pgPool, and psql.
- Estimate PostgreSQL database capacities; develop methods for monitoring database capacity and usage.
- Must have experience in PostgreSQL database architecture, logical and physical design, automation, documentation, installs, shell scripting, PL/SQL programming, catalog navigation, query tuning, system tuning, resource contention analysis, backup and recovery, standby replication, etc.
- Must have a strong understanding of the command line and server administration.
- Participate in application development projects and be responsible for database architecture, design, and deployment.
- Participate in the creation of development, staging, and production database instances and the migration from one environment to another.
- Responsible for regular backups and recovery of databases.
- Responsible for regular maintenance on databases (e.g., vacuum, reindexing, archiving).
- Responsible for proactive remediation of database operational problems.
- Responsible for query tuning and preventative maintenance.
- Ability to proactively identify, troubleshoot, and resolve live database system issues.

Mandatory Skills: Windows Server, Azure Database Service, AWS Database Service, PostgreSQL DB Administration, RedHat Linux Administrator
Good to Have Skills: Azure Database Service, AWS Database Service
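The query-tuning loop this JD describes (inspect the plan, add an index, confirm the plan changes) can be sketched minimally using Python's built-in sqlite3 as a stand-in for PostgreSQL; the table and index names below are invented, and in PostgreSQL the analogous tools are EXPLAIN / EXPLAIN ANALYZE and CREATE INDEX.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 100, float(i)) for i in range(10_000)],
)

def plan(sql):
    """Detail line of the query plan (SCAN = full table, SEARCH = index)."""
    return conn.execute("EXPLAIN QUERY PLAN " + sql).fetchone()[3]

query = "SELECT total FROM orders WHERE customer_id = 42"
print("before:", plan(query))   # full table scan

conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
print("after:", plan(query))    # index search
```

The same verification step (re-reading the plan after the index lands) is what distinguishes tuning from guessing, whether in SQLite, PostgreSQL, or MS SQL Server.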

Posted 1 day ago

Apply

8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Title: Senior AI/ML Engineer
Experience: 8+ Years
Location: Bangalore (On-site)

Mandatory Skills:

LLM & Prompt Engineering:
- Strong expertise in prompt engineering techniques and strategies
- In-depth understanding of how Large Language Models (LLMs) work, including control over hyperparameters such as temperature, top_p, etc.
- Experience in LLM fine-tuning, prompt optimization, and zero-/few-shot learning

Agentic AI Frameworks:
- Hands-on experience with agent-based frameworks like LangChain, LangGraph, and Crew AI

Retrieval-Augmented Generation (RAG):
- Implementation experience in RAG pipelines
- Experience in integrating vector databases with LLMs for contextual augmentation

LLM Evaluation & Observability:
- Knowledge of techniques and tools for LLM performance evaluation
- Familiarity with LLM observability platforms and metrics monitoring
- Ability to define and track quality benchmarks like accuracy, coherence, hallucination rate, etc.

Programming & Deployment:
- Strong programming skills in Python
- Experience deploying AI/ML models or LLM-based systems on at least one major cloud platform (AWS, GCP, or Azure)

Preferred Skills (Nice to Have):
- Experience working with OpenAI, Anthropic, Cohere, or open-source LLMs (LLaMA, Mistral, Falcon, etc.)
- Knowledge of Docker, Kubernetes, MLflow, or other MLOps tools
- Experience with embedding models and vector databases (like Pinecone, FAISS, Weaviate)
- Familiarity with transformer architecture and fine-tuning techniques

Responsibilities:
- Design, build, and optimize LLM-powered applications
- Develop and maintain prompt strategies tailored to business use-cases
- Architect and implement agentic AI workflows using modern frameworks
- Build and monitor RAG pipelines for improved information retrieval
- Establish processes for evaluating and monitoring LLM behavior in production
- Collaborate with cross-functional teams including Product, Data, and DevOps
- Ensure scalable and secure deployment of models to production

Soft Skills:
- Strong problem-solving and analytical thinking
- Excellent communication and documentation skills
- Passion for staying updated with advancements in GenAI and LLM ecosystems
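The RAG pipeline requirement reduces, at its core, to ranking stored vectors by similarity to a query embedding and stuffing the winners into the prompt. Everything below (the documents, the 4-dimensional "embeddings", the query vector) is invented for illustration; a production pipeline would use a real embedding model and a vector database such as FAISS or Pinecone.

```python
import numpy as np

# Toy corpus with made-up 4-dimensional "embeddings".
docs = {
    "returns policy": np.array([0.9, 0.1, 0.0, 0.1]),
    "shipping times": np.array([0.1, 0.9, 0.1, 0.0]),
    "warranty terms": np.array([0.8, 0.0, 0.2, 0.1]),
}

def top_k(query_vec, k=2):
    """Rank documents by cosine similarity to the query embedding."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    ranked = sorted(docs, key=lambda d: cos(query_vec, docs[d]), reverse=True)
    return ranked[:k]

query = np.array([0.9, 0.1, 0.0, 0.1])  # pretend embedding of "refund rules?"
context = top_k(query)

# Augmentation step: retrieved context is prepended to the LLM prompt.
prompt = "Answer using only this context:\n" + "\n".join(context) + "\n\nQ: refund rules?"
print(context)
```

Evaluation and observability then operate on this pipeline's outputs: tracking retrieval hit rate, answer accuracy, and hallucination rate against the retrieved context.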

Posted 1 day ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Role: SAP GTS Consultant Job Location: PAN INDIA Work Mode: (WFO) Experience – 8+ Years Job description: Hands-on experience with GTS E4H upgrades and familiarity with S/4HANA integration points. In-depth knowledge of SAP GTS modules: Customs, Compliance, Risk Management, Embargo Check, Legal Control, Preference Processing, and SPL Screening. Strong understanding of SAP landscape transformation, migration tools, and upgrade methodologies. Experience with system sizing, performance tuning, and security compliance for GTS environments. Excellent communication, stakeholder management, and documentation skills.

Posted 1 day ago

Apply

5.0 years

0 Lacs

India

On-site

Job Title: Full-Stack Engineer (Next.js + FastAPI)
Job Type: Full-Time, Contractor

About Us: Our mission at micro1 is to match the most talented people in the world with their dream jobs. If you are looking to be at the forefront of AI innovation and work with some of the fastest-growing companies in Silicon Valley, we invite you to apply for a role. By joining the micro1 community, your resume will become visible to top industry leaders, unlocking access to the best career opportunities on the market.

Job Summary: We are seeking a highly skilled Full-Stack Engineer with strong expertise in FastAPI (Python) and Next.js (React) to join our growing engineering team. In this role, you’ll be responsible for building and maintaining modern, scalable applications, from backend services to frontend interfaces. If you enjoy owning features end-to-end, solving real-world problems, and collaborating cross-functionally, we’d love to hear from you.

Key Responsibilities:
- Design, develop, and maintain robust backend services and APIs using Python and FastAPI.
- Build dynamic and performant frontend applications using React and Next.js.
- Implement best practices in software architecture, API design, and system performance.
- Translate product requirements into clean, testable, and maintainable code across the stack.
- Work closely with product managers, designers, and fellow engineers to deliver end-to-end features.
- Conduct code reviews, debug production issues, and maintain high code quality standards.
- Stay up to date with trends and advancements in both frontend and backend technologies.

Required Skills and Qualifications:
- 5+ years of experience in full-stack development with production-grade applications.
- Strong hands-on experience with FastAPI, Python, and asynchronous backend development.
- Proficient in Next.js, React, and modern JavaScript/TypeScript.
- Solid understanding of REST APIs, microservices, and scalable backend architecture.
- Ability to manage and prioritize tasks independently, with clear communication across teams.
- Proven debugging, optimization, and performance tuning skills.
- Strong verbal and written communication abilities.

Preferred Qualifications:
- Experience with cloud platforms like AWS, GCP, or Azure.
- Familiarity with Docker, CI/CD pipelines, and DevOps best practices.
- Past experience in technical leadership, mentoring, or guiding junior developers.
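The "asynchronous backend development" requirement rests on the pattern sketched below: awaiting independent I/O-bound calls concurrently rather than sequentially. This is a stdlib asyncio sketch, not FastAPI itself; the handler name, data, and latency values are invented, but an async FastAPI path function would have the same shape.

```python
import asyncio
import time

async def fetch_user(user_id):
    """Stand-in for an I/O-bound call (DB query, downstream API)."""
    await asyncio.sleep(0.1)  # simulated latency
    return {"id": user_id, "name": f"user-{user_id}"}

async def fetch_orders(user_id):
    await asyncio.sleep(0.1)  # simulated latency
    return [{"user_id": user_id, "total": 42.0}]

async def profile_endpoint(user_id):
    """What an async handler would do: run independent I/O calls
    concurrently with gather instead of awaiting them one by one."""
    user, orders = await asyncio.gather(fetch_user(user_id), fetch_orders(user_id))
    return {"user": user, "orders": orders}

start = time.perf_counter()
result = asyncio.run(profile_endpoint(7))
elapsed = time.perf_counter() - start
print(result["user"]["name"], f"{elapsed:.2f}s")  # roughly 0.1s, not 0.2s
```

In FastAPI the same body would sit inside an `async def` route, and the framework's event loop would interleave many such requests on one worker.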

Posted 1 day ago

Apply

6.0 years

0 Lacs

India

On-site

About the Role: We are seeking a visionary and technically astute Lead AI Architect to lead the architecture and design of scalable AI systems and next-generation intelligent platforms. As a core member of the leadership team, you will be responsible for driving end-to-end architectural strategy, model optimization, and AI infrastructure that powers mission-critical solutions across our product lines. This is a foundational role for someone passionate about architecting solutions involving RAG, SLMs/LLMs, multi-agent systems, and scalable model pipelines across cloud-native environments.

Salary: 30 - 45 LPA with additional benefits.

Key Responsibilities:
- Define and own the AI/ML architectural roadmap, aligning with product vision and technical goals.
- Architect and oversee implementation of RAG-based solutions, LLM/SLM fine-tuning pipelines, and multi-agent orchestration.
- Lead design of model training and inference pipelines, ensuring scalability, modularity, and observability.
- Evaluate and select open-source and proprietary foundation models for fine-tuning, instruction tuning, and domain adaptation.
- Guide integration of vector databases, semantic search, and prompt orchestration frameworks (LangChain, LlamaIndex, etc.).
- Ensure best practices in model deployment, versioning, monitoring, and performance optimization (GPU utilization, memory efficiency, etc.).
- Collaborate with Engineering, DevOps, Product, and Data Science teams to bring AI features to production.
- Mentor mid-level engineers and interns; contribute to technical leadership and code quality.
- Maintain awareness of the latest research, model capabilities, and trends in AI.

Required Skills & Qualifications:
- 6+ years of hands-on experience in AI/ML architecture and model deployment.
- Expert-level knowledge of Python and libraries such as PyTorch, Hugging Face Transformers, scikit-learn, and FastAPI.
- Deep understanding of LLMs/SLMs, embedding models, tokenization strategies, fine-tuning, quantization, and LoRA/QLoRA.
- Proven experience with Retrieval-Augmented Generation (RAG) pipelines and vector DBs like FAISS, Pinecone, or Weaviate.
- Strong grasp of system design, distributed training, MLOps, and scalable cloud-based infrastructure (AWS/GCP/Azure).
- Experience with containerization (Docker), orchestration (Kubernetes), and experiment tracking (MLflow, W&B).
- Experience building secure and performant REST APIs, and deploying and monitoring AI services in production.

Nice to Have:
- Exposure to multi-agent frameworks, task planners, or LangGraph.
- Experience leading AI platform teams or architecting enterprise-scale ML platforms.
- Familiarity with Data Governance, Responsible AI, and model compliance requirements.
- Published papers, open-source contributions, or patents in the AI/ML domain.

Why Join Us:
- Be at the forefront of innovation in AI and language intelligence.
- Influence strategic technical decisions and drive company-wide AI architecture.
- Lead a growing AI team in a high-impact, fast-paced environment.
- Competitive compensation, equity options, and leadership opportunity.
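The LoRA/QLoRA item in the requirements has simple arithmetic at its core: instead of updating a full weight matrix, LoRA trains a low-rank pair of factors added on top of the frozen weight. A minimal NumPy sketch (all dimensions and the scaling value are illustrative):

```python
import numpy as np

d_out, d_in, r = 512, 512, 8   # illustrative dims; r is the LoRA rank
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-init
alpha = 16.0                                # illustrative scaling factor

def lora_forward(x):
    """y = W x + (alpha/r) * B(A x): the LoRA-adapted linear layer."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# Because B starts at zero, the adapter is a no-op before training.
assert np.allclose(lora_forward(x), W @ x)

full = W.size           # parameters touched by full fine-tuning
lora = A.size + B.size  # trainable parameters under LoRA
print(f"trainable params: {lora} vs {full} ({lora / full:.1%})")
```

Training updates only A and B (here 8,192 values versus 262,144 for the full matrix), which is what makes fine-tuning large models tractable on modest GPU memory; QLoRA additionally quantizes the frozen W.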

Posted 1 day ago

Apply

3.0 - 4.0 years

0 Lacs

India

Remote

Experience Required: 3 - 4 Years
Location: India WFH (Remote)
Job Shift: Night Shift (6pm – 3am)
Job Type: Full Time

Job Description:
We are looking for a Full Stack .NET developer who is passionate about working on exciting solutions for a US-based client in the insurance industry. As a full-stack developer, you'll be part of the team developing applications and websites that solve complex business problems with out-of-the-box solutions. The ideal candidate has strong software development fundamentals and experience developing web-based applications. Are you ready to use your full stack .NET skills to make an impact every day? Put your experience to work by applying to be part of a fast-paced environment where input and innovations are valued.

Responsibilities:
- Develop all layers of our enterprise web application.
- Work in an agile development team environment.
- Work within a geographically disparate team with a mix of on/offshore developers.
- Assist in performing client support when needed.
- Support system testing by following up on and closing defect tickets in a timely manner.
- Collaborate with the team lead and developers to provide technical design guidance aligned with strategy and applicable technical standards.
- Document new developments, procedures, or test plans as needed.
- Interact with other team members to ensure a consistent, uniform approach to software development.
- Provide support for existing systems.

Minimum Requirements:
- 3 years' experience in .NET programming.
- Experience in web design and development using the Microsoft technology stack.
- Experience with at least C#, .NET Core, ASP.NET, ADO.NET, MVC.
- Experience building web services using RESTful APIs, WCF, and Web APIs.
- Experience with data serialization formats such as JSON/XML.
- Experience with at least one UI framework (React, Angular, Vue), preferably React.
- Proficiency with fundamental front-end/web technologies such as HTML, JavaScript, CSS, JSON.
- Experience with Visual Studio and SSMS.
- Strong SQL development skills, including advanced T-SQL, stored procedures, and functions.
- Knowledge of SQL databases, slow query analysis, and performance tuning.
- Experience troubleshooting common database issues (deadlocks, blocks, indexes, expensive queries, and performance counters using DMVs).
- Experience in Agile methodologies and SCRUM processes.
- Experience with source control systems, preferably Git.
- Experience with CI/CD tools, preferably Azure DevOps or Bitbucket Pipelines.
- Ability to work independently and virtually.
- Ability to learn and apply new technologies.
- Demonstrated effective verbal and written communication skills.

Preferred Qualifications (nice to have):
- Microsoft Azure Developer or Azure Architect certification (obtained, in process, or willing to obtain).
- Working knowledge of Microservices, Docker, and Kubernetes.
- Working knowledge of enterprise application architecture and design patterns.
- Experience working in the insurance domain.
- Experience working in the e-commerce domain.
- Experience working in a consulting environment.

Note (BGV): As part of candidate onboarding, we conduct a background check that includes education, employment history, criminal records, reference checks, etc.

Company Information:
Reveation Labs (reveation.io) is a rapidly growing technology solutions company headquartered in the USA, founded with the purpose of empowering businesses with technical solutions to unlock maximum potential.
Led by management leaders with decades of experience working for top-20 Fortune companies, we believe in being the ultimate solution providers for our clients and partners, helping them with innovative solutions. We specialize in Blockchain development, Enterprise Application Development, E-commerce solutions, Cloud Computing, DevOps, Mobile Application Development, and Staff Augmentation.

Why Reveation Labs? We truly believe your work matters, and as a software company, we know a thing or two about what makes employees happy. When you join Reveation Labs, you do more than simply switch companies to advance your career. You become part of the Reveation Labs family, a group of talented people who drive innovation, embrace change, and celebrate the global community that is Reveation Labs.

Posted 1 day ago

Apply

5.0 years

0 Lacs

India

Remote

About the Role
We are seeking a hands-on AI/ML Engineer with deep expertise in Retrieval-Augmented Generation (RAG) agents, Small Language Model (SLM) fine-tuning, and custom dataset workflows. You'll work closely with our AI research and product teams to build production-grade models, deploy APIs, and enable next-gen AI-powered experiences.

Key Responsibilities
- Design and build RAG-based solutions using vector databases and semantic search.
- Fine-tune open-source SLMs (e.g., Mistral, LLaMA, Phi) on custom datasets.
- Develop robust training and evaluation pipelines with reproducibility.
- Create and expose REST APIs for model inference using FastAPI.
- Build lightweight frontends or internal demos with Streamlit for rapid validation.
- Analyze model performance and iterate quickly on experiments.
- Document processes and contribute to knowledge-sharing within the team.

Must-Have Skills
- 3–5 years of experience in applied ML/AI engineering roles.
- Expert in Python and common AI frameworks (Transformers, PyTorch/TensorFlow).
- Deep understanding of RAG architecture and vector stores (FAISS, Pinecone, Weaviate).
- Experience with fine-tuning transformer models and instruction-tuned SLMs.
- Proficient with FastAPI for backend API deployment and Streamlit for prototyping.
- Knowledge of tokenization, embeddings, training loops, and evaluation metrics.

Nice to Have
- Familiarity with LangChain, the Hugging Face ecosystem, and OpenAI APIs.
- Experience with Docker, GitHub Actions, and cloud model deployment (AWS/GCP/Azure).
- Exposure to experiment tracking tools like MLFlow and Weights & Biases.

What We Offer
- Build core tech for next-gen AI products with real-world impact.
- Autonomy and ownership in shaping AI components from research to production.
- Competitive salary, flexible remote work policy, and a growth-driven environment.
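The must-have skills above mention evaluation metrics. As a minimal, self-contained sketch (the labels below are hypothetical and not tied to any model in the posting), precision and recall for a binary classifier come straight from the confusion-matrix counts:

```python
def precision_recall(y_true: list[int], y_pred: list[int]) -> tuple[float, float]:
    # Count true positives, false positives, and false negatives.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical labels from a validation split.
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 1, 1, 0, 0, 1]
p, r = precision_recall(y_true, y_pred)
print(p, r)  # 0.75 0.75
```

In practice a library such as scikit-learn provides these, but knowing the raw definitions is what lets you sanity-check a training loop's reported numbers.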

Posted 1 day ago

Apply

5.0 years

0 Lacs

India

Remote

About the Company:
Our client organization's mission is to empower people to participate in global conversations through communities. They are responsible for the consumer-facing application on the Web, Android, and iOS platforms. In this role, you'll work with a specific team within this organization to drive related technical and product strategy, operations, architecture, and execution for one of the largest sites in the world. Poster Experience focuses specifically on the user journey that is the main source of user content for the product. We aim to make it easier, faster, and smarter to create and participate in conversations, and we drive several core product metrics for the entire ecosystem. This role will involve migrating legacy Python microservice code to one or more existing Go microservices. Successful candidates have prior experience with such migrations at large scale (think millions of actions per day) and understand how to instrument and monitor their code for parity and consistency during rollout.

Job Description:
Job Title: Python Developer
Location: Pan India
Experience: 5+ yrs.
Employment Type: Contract to hire
Work Mode: Remote
Notice Period: Immediate joiners

Roles and Responsibilities:
- 5+ years of overall work experience, with at least 3 years of relevant experience in Python and 2+ years in CUDA/C++.
- Strong hands-on experience with Python, especially in scientific computing using PyTorch and NumPy.
- Solid understanding of CUDA programming concepts and C++ fundamentals.
- Demonstrated ability to analyze CUDA kernels and accurately reproduce them in Python.
- Familiarity with GPU computation, parallelism, and performance-aware coding practices.
- Strong debugging skills and attention to numerical consistency when porting logic across languages.
- Experience evaluating AI-generated code or participating in LLM tuning is a plus.
- Ability to communicate technical feedback clearly and constructively.
- Fluent conversational and written English communication skills.

Posted 1 day ago

Apply

0 years

0 Lacs

India

On-site

The candidate should have experience in AI development, including developing, deploying, and optimizing AI and Generative AI solutions. The ideal candidate will have a strong technical background, hands-on experience with modern AI tools and platforms, and a proven ability to build innovative applications that leverage advanced AI techniques. You will work collaboratively with cross-functional teams to deliver AI-driven products and services that meet business needs and delight end users.

Key Prerequisites
- Experience in AI and Generative AI development.
- Experience designing, developing, and deploying AI models for various use cases, such as predictive analytics, recommendation systems, and natural language processing (NLP).
- Experience building and fine-tuning Generative AI models for applications like chatbots, text summarization, content generation, and image synthesis.
- Experience implementing and optimizing large language models (LLMs) and transformer-based architectures (e.g., GPT, BERT).
- Experience in data ingestion and cleaning.
- Feature engineering and data engineering.
- Experience designing and implementing data pipelines for ingesting, processing, and storing large datasets.
- Experience in model training and optimization.
- Exposure to deep learning models and fine-tuning pre-trained models using frameworks like TensorFlow, PyTorch, or Hugging Face.
- Exposure to optimizing models for performance, scalability, and cost efficiency on cloud platforms (e.g., AWS SageMaker, Azure ML, Google Vertex AI).
- Hands-on experience monitoring and improving model performance through retraining and evaluation metrics like accuracy, precision, and recall.

AI Tools and Platform Expertise
- OpenAI, Hugging Face
- MLOps tools
- Generative AI-specific tools and libraries for innovative applications.

Technical Skills
1. Strong programming skills in Python (preferred) or other languages like Java, R, or Julia.
2. Expertise in AI frameworks and libraries such as TensorFlow, PyTorch, Scikit-learn, and Hugging Face.
3. Proficiency in working with transformer-based models (e.g., GPT, BERT, T5, DALL-E).
4. Experience with cloud platforms (AWS, Azure, Google Cloud) and containerization tools (Docker, Kubernetes).
5. Solid understa

Posted 1 day ago

Apply

12.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Gen AI Technical Architect | Full-Time | Hyderabad (Work from office)

Job Title: Gen AI Technical Architect
Job Type: Full-Time
Location: Hyderabad (Work from office)
Experience: 12-16 Years
Primary Skills: Gen AI, Azure OpenAI, Python, and the skills mentioned in the cheat sheet

Job Description:
The Generative AI Architect will play a pivotal role in designing, developing, and implementing advanced generative AI solutions that drive significant business impact for our clients. This role offers the exciting opportunity to work at the forefront of AI innovation.

Role Overview:
The Generative AI Architect will be responsible for the end-to-end architecture, design, and deployment of scalable and robust generative AI systems. This includes conceptualizing solutions, selecting appropriate models and frameworks, overseeing development, and ensuring the successful integration of generative AI capabilities into existing and new platforms. You will work closely with business stakeholders to translate complex requirements into high-performance AI solutions.

Key Responsibilities:
- Architect and design generative AI solutions: Lead the architectural design of generative AI systems, including model selection (LLMs), RAG, and fine-tuning approaches.
- Azure AI expertise: Design and deploy scalable AI solutions leveraging a comprehensive suite of Azure AI services.
- Python development: Write clean, efficient, and maintainable Python code for data processing, automation, and API integrations.
- Model optimization and performance: Optimize generative AI models for performance, scalability, and cost-efficiency.
- Data strategy: Design data architectures and pipelines to ingest, process, and prepare data for generative AI model training and inference, utilizing Azure data services.
- Integration and deployment: Oversee the integration of generative AI models into existing enterprise systems and applications. Implement robust MLOps practices, CI/CD pipelines (e.g., Azure DevOps, GitHub, Jenkins), and containerization (Docker, Kubernetes) for seamless deployment.
- Technical leadership and mentorship: Provide technical leadership and guidance to development teams, fostering best practices in AI model development, deployment, and maintenance.
- Research and innovation: Stay abreast of the latest advancements in generative AI technologies, research methodologies, and industry trends. Drive proof-of-concepts (PoCs) and pilot implementations for new AI capabilities.
- Collaboration and communication: Collaborate effectively with cross-functional teams, including product managers, data scientists, software engineers, and business analysts, to ensure AI solutions align with business goals and deliver tangible value. Articulate complex technical concepts to non-technical stakeholders.

Required Skills and Qualifications:
- Minimum of 12-16 years of experience in IT, with at least 3+ years specifically focused on Gen AI architecture and development.
- Technical proficiency: Deep expertise in the Azure cloud platform and its AI services.
- Strong proficiency in Python and relevant AI/ML libraries (e.g., TensorFlow, PyTorch, Hugging Face, LangChain, LlamaIndex).
- Solid understanding of large language models (LLMs), transformer architectures, diffusion models, and other generative AI techniques.
- Hands-on experience with prompt engineering, RAG pipelines, and vector databases (e.g., Pinecone, Weaviate, Chroma).
- Experience with MLOps, CI/CD, and deployment tools (Azure DevOps, GitHub, Kubernetes).
- Familiarity with RESTful API design and development.
- Architectural principles: Strong understanding of cloud architecture, microservices architecture, and design patterns.
- Problem-solving: Excellent analytical and problem-solving skills with the ability to think critically and creatively.
- Communication: Exceptional communication and interpersonal skills, with the ability to convey complex technical concepts clearly to both technical and non-technical audiences.
- Team player: Ability to work collaboratively in a fast-paced, agile environment and lead projects with multiple stakeholders.

Desirable:
- Relevant Azure AI certifications.
- Experience with other cloud platforms (AWS).
- Experience with fine-tuning LLMs for specific use cases.
- Contributions to open-source AI projects or publications in AI/ML.
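The responsibilities above reference prompt engineering and RAG pipelines. A bare-bones sketch of the step where retrieved passages are assembled into a grounded prompt (the template wording and the sample passages are invented for illustration):

```python
def build_rag_prompt(question: str, passages: list[str]) -> str:
    # Number each retrieved passage so the model can cite its sources,
    # then append the user question with an instruction to stay grounded.
    context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer using only the context below. Cite passage numbers.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_rag_prompt(
    "When are invoices archived?",
    ["Invoices are archived after ninety days.", "Reports ship every January."],
)
print(prompt)
```

Frameworks like LangChain or LlamaIndex wrap this pattern in reusable templates, but the underlying idea is exactly this string assembly: retrieved context first, grounding instruction, then the question.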

Posted 1 day ago

Apply

7.0 - 15.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Hiring for Oracle Database Administrator

Job Summary: We're hiring an Oracle Database Administrator with 7-15 years of experience in Mumbai to provide technical support and maintenance for Oracle database systems in a 24x7 shift environment.

Key Responsibilities:
1. Technical support: Provide technical support for day-to-day BAU tasks, incidents, and RFCs.
2. Service improvements: Propose service improvements for Oracle/EBS services.
3. Database administration: Manage and maintain Oracle database systems.

Required Skills:
1. Oracle RAC: Strong experience in Oracle RAC and Data Guard.
2. Database tuning: Experience in database tuning.
3. Oracle OEM: Knowledge of Oracle OEM 12C/13C.

Details:
1. Location: Mumbai.
2. Experience: 7-15 years.
3. CTC: Up to 30 LPA.
4. Shift: 24x7.
5. Notice Period: Immediate to 30 days.

How to Apply: Send your CV to Shabnam.s@liveconnections.in
#liveconnections #livec #weplacepeoplefirst

Posted 1 day ago

Apply

9.0 - 15.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Title: Snowflake Data Architect
Experience: 9 to 15 Years
Location: Gurugram

Job Summary: We are seeking a highly experienced and motivated Snowflake Data Architect & ETL Specialist to join our growing Data & Analytics team. The ideal candidate will be responsible for designing scalable Snowflake-based data architectures, developing robust ETL/ELT pipelines, and ensuring data quality, performance, and security across multiple data environments. You will work closely with business stakeholders, data engineers, and analysts to drive actionable insights and ensure data-driven decision-making.

Key Responsibilities:
- Design, develop, and implement scalable Snowflake-based data architectures.
- Build and maintain ETL/ELT pipelines using tools such as Informatica, Talend, Apache NiFi, Matillion, or custom Python/SQL scripts.
- Optimize Snowflake performance through clustering, partitioning, and caching strategies.
- Collaborate with cross-functional teams to gather data requirements and deliver business-ready solutions.
- Ensure data quality, governance, integrity, and security across all platforms.
- Migrate legacy data warehouses (e.g., Teradata, Oracle, SQL Server) to Snowflake.
- Automate data workflows and support CI/CD deployment practices.
- Implement data modeling techniques including dimensional modeling, star/snowflake schema, and normalization/denormalization.
- Support and promote metadata management and data governance best practices.

Technical Skills (Hard Skills):
- Expertise in Snowflake: architecture design, performance tuning, cost optimization.
- Strong proficiency in SQL, Python, and scripting for data engineering tasks.
- Hands-on experience with ETL tools: Informatica, Talend, Apache NiFi, Matillion, or similar.
- Proficient in data modeling (dimensional, relational, star/snowflake schema).
- Good knowledge of cloud platforms: AWS, Azure, or GCP.
- Familiar with orchestration and workflow tools such as Apache Airflow, dbt, or DataOps frameworks.
- Experience with CI/CD tools and version control systems (e.g., Git).
- Knowledge of BI tools such as Tableau, Power BI, or Looker.

Certifications (Preferred/Required):
✅ Snowflake SnowPro Core Certification – Required or Highly Preferred
✅ SnowPro Advanced Architect Certification – Preferred
✅ Cloud Certifications (e.g., AWS Certified Data Analytics – Specialty, Azure Data Engineer Associate) – Preferred
✅ ETL Tool Certifications (e.g., Talend, Matillion) – Optional but a plus

Soft Skills:
- Strong analytical and problem-solving capabilities.
- Excellent communication and collaboration skills.
- Ability to translate technical concepts into business-friendly language.
- Proactive, detail-oriented, and highly organized.
- Capable of multitasking in a fast-paced, dynamic environment.
- Passionate about continuous learning and adopting new technologies.

Why Join Us?
- Work on cutting-edge data platforms and cloud technologies.
- Collaborate with industry leaders in analytics and digital transformation.
- Be part of a data-first organization focused on innovation and impact.
- Enjoy a flexible, inclusive, and collaborative work culture.
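The posting asks for dimensional modeling. As a toy, in-memory illustration of the idea (table names, keys, and values are all invented): a star schema stores measures in a fact table that references small dimension tables by surrogate key, and aggregates resolve those keys back to dimension attributes.

```python
# Hypothetical star schema: a fact table of sales rows referencing
# dimension tables by surrogate key, as in dimensional modeling.
dim_product = {1: "widget", 2: "gadget"}
dim_region = {10: "APAC", 20: "EMEA"}
fact_sales = [
    {"product_key": 1, "region_key": 10, "amount": 120.0},
    {"product_key": 2, "region_key": 10, "amount": 80.0},
    {"product_key": 1, "region_key": 20, "amount": 50.0},
]

def sales_by_region(facts, regions):
    # The kind of aggregate a star-schema join makes cheap:
    # group fact rows by the resolved region dimension attribute.
    totals = {}
    for row in facts:
        region = regions[row["region_key"]]
        totals[region] = totals.get(region, 0.0) + row["amount"]
    return totals

print(sales_by_region(fact_sales, dim_region))  # {'APAC': 200.0, 'EMEA': 50.0}
```

In Snowflake the same shape is expressed as a fact table joined to dimension tables in SQL; clustering the fact table on a frequently filtered key is what makes those joins prune well at scale.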

Posted 1 day ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

As our AI Engineer, you'll own the design, development, and production-grade deployment of our machine learning and NLP pipelines. You'll work cross-functionally with backend (Java/Spring Boot), data (Kafka/MongoDB/ES), and frontend (React) teams to embed AI capabilities throughout.

Responsibilities

Build & Deploy ML/NLP Models
- Design end-to-end ML pipelines for data ingestion, preprocessing, feature engineering, model training, evaluation, and monitoring.
- Train, deploy, and operate predictive models (classification, regression, anomaly detection) to drive actionable insights across all MCP sources.
- Implement NLP components, such as text classification, summarization, and conversational interfaces, to enhance chat-driven workflows and knowledge retrieval.

Data Engineering & Integration
- Ingest, clean, and normalize data from Kafka/Mongo and third-party APIs.
- Define and maintain JSON-schema validations and transformation logic.
- Collaborate with backend services to embed AI outputs.

Platform & Service Collaboration
- Work with Java/Spring Boot teams to wrap models as REST endpoints or Kafka stream processors.
- Ensure end-to-end monitoring, logging, and performance tuning within Kubernetes.
- Partner with frontend engineers to surface AI insights in React-based chat interfaces.

Continuous Improvement
- Establish A/B testing, metrics, and feedback loops to tune model accuracy and latency.
- Stay on top of LLM and MLOps best practices to evolve our AI stack.

Qualifications
- Experience: 2–3 years in ML/AI or data science roles, preferably in SaaS.
- Languages & frameworks: Python, plus familiarity with Java & Spring Boot for service integrations.
- Data & infrastructure: hands-on with Kafka, MongoDB, Redis, or similar; experience containerizing in Docker and deploying on Kubernetes; JSONPath/JsonLogic or similar transformation engines.
- Soft skills: excellent communication; able to translate complex AI concepts to product and customer teams.

Nice-to-Haves
- Experience integrating LLMs or building vector search indexes for semantic retrieval.
- Prior work on chatbots or conversational UIs.
- Familiarity with the DevOps stack (AWS/Azure, k8s, GitOps, security, observability, and incident management).
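The data-engineering bullets above mention maintaining JSON-schema validations. As a tiny stdlib-only stand-in for a real JSON Schema validator (the event shape and field names are purely illustrative), a required-fields-and-types check looks like:

```python
import json

# A minimal required-fields-and-types check, standing in for a real
# JSON Schema validator; the record shape here is purely illustrative.
SCHEMA = {"user_id": int, "message": str}

def validate(payload: str, schema: dict) -> list[str]:
    # Parse the raw payload and collect every violation rather than
    # failing on the first, which is friendlier for pipeline debugging.
    record = json.loads(payload)
    errors = []
    for field, typ in schema.items():
        if field not in record:
            errors.append(f"missing field: {field}")
        elif not isinstance(record[field], typ):
            errors.append(f"wrong type for {field}")
    return errors

print(validate('{"user_id": 7, "message": "hi"}', SCHEMA))  # []
print(validate('{"user_id": "7"}', SCHEMA))
```

A production pipeline would use a proper JSON Schema library and run this check at the Kafka-consumer boundary, quarantining invalid records instead of printing errors.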

Posted 1 day ago

Apply

8.0 years

0 Lacs

India

Remote

About Client:
Our client is one of the world's fastest-growing AI companies, accelerating the advancement and deployment of powerful AI systems. They help customers in two ways: working with the world's leading AI labs to advance frontier model capabilities in thinking, reasoning, coding, agentic behavior, multimodality, multilinguality, STEM, and frontier knowledge; and leveraging that work to build real-world AI systems that solve mission-critical priorities for companies. Powering this growth is our client's talent cloud, an AI-vetted pool of 4M+ software engineers, data scientists, and STEM experts who can train models and build AI applications. All of this is orchestrated by ALAN, their AI-powered platform for matching and managing talent and generating high-quality human and synthetic data to improve model performance. ALAN also accelerates workflows for model and agent evals, supervised fine-tuning, reinforcement learning, reinforcement learning with human feedback, preference-pair generation, benchmarking, data capture for pre-training and post-training, and building AI applications.

Job Title: Azure Cloud Solution Architect
Location: Pan India
Experience: 8+ years
Employment Type: Contract to hire
Work Mode: Remote
Notice Period: Immediate joiners

Job Description: We are looking for an experienced Azure Cloud Solution Architect (8+ years) for a contract role to support solution design, implementation, and migration activities across client environments.

Key Responsibilities:
- Design and implement Azure cloud solutions (10%)
- Set up Azure Landing Zones and Disaster Recovery (20%)
- Integrate with on-premises technologies (10%)
- Apply the Azure Well-Architected Framework (live implementation examples) (10%)
- Implement Azure security services and best practices (20%)
- Manage Azure IAM, RBAC, and Conditional Access (10%)
- Plan and execute Azure/cloud migrations (20%)

Posted 1 day ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Title: Cloud Data Engineer | Database Administrator | ETL & Power BI | DevOps Enthusiast
Job Location: Hyderabad / Chennai
Job Type: Full Time
Experience: 6+ Yrs
Notice Period: Immediate to 15 days joiners are highly preferred

About the Role:
We are seeking a Cloud Data Engineer & Database Administrator to join our Cloud Engineering team and support our cloud-based data infrastructure. This role focuses on optimizing database operations, enabling analytics/reporting tools, and driving automation initiatives to improve scalability, reliability, and cost efficiency across the data platform.

Key Responsibilities:
- Manage and administer cloud-native databases, including Azure SQL, PostgreSQL Flexible Server, Cosmos DB (vCore), and MongoDB Atlas.
- Automate database maintenance tasks (e.g., backups, performance tuning, auditing, and cost optimization).
- Implement and monitor data archival and retention policies to enhance query performance and reduce costs.
- Build and maintain Jenkins pipelines and Azure Automation jobs for database and data platform operations.
- Design, develop, and maintain dashboards for cost tracking, performance monitoring, and usage analytics (Power BI/Tableau).
- Enable and manage authentication and access controls (Azure AD, MFA, RBAC).
- Collaborate with cross-functional teams to support workflows in Databricks, Power BI, and other data tools.
- Write and maintain technical documentation and standard operating procedures (SOPs) for data platform operations.
- Work with internal and external teams to ensure alignment of deliverables and data platform standards.

Preferred Qualifications:
- Proven experience with cloud platforms (Azure preferred; AWS or GCP acceptable).
- Strong hands-on expertise with relational and NoSQL databases.
- Experience with Power BI (DAX, data modeling, performance tuning, and troubleshooting).
- Familiarity with CI/CD tools (Jenkins, Azure Automation) and version control (Git).
- Strong scripting knowledge (Python, Bash, PowerShell) and experience with Jira, Confluence, and ServiceNow.
- Understanding of cloud cost optimization and billing/usage tracking.
- Experience implementing RBAC, encryption, and security best practices.
- Excellent problem-solving skills, communication, and cross-team collaboration abilities.

Nice to Have:
- Hands-on experience with Databricks, Apache Spark, or Lakehouse architecture.
- Familiarity with logging, monitoring, and incident response for data platforms.
- Understanding of Kubernetes, Docker, Terraform, and advanced CI/CD pipelines.

Required Skills:
- Bachelor's degree in computer science, information technology, or a related field (or equivalent professional experience).
- 6+ years of professional experience in data engineering or database administration.
- 3+ years of database administration experience in Linux and cloud/enterprise environments.

About the Company:
Everest DX is a Digital Platform Services company headquartered in Stamford. Our platform/solution includes orchestration, intelligent operations with BOTs, and AI-powered analytics for enterprise IT. Our vision is to enable digital transformation for enterprises to deliver seamless customer experience, business efficiency, and actionable insights through an integrated set of futuristic digital technologies.

Digital Transformation Services: We specialize in designing, building, developing, integrating, and managing cloud solutions, modernizing data centers, building cloud-native applications, and migrating existing applications into secure, multi-cloud environments to support digital transformation. Our Digital Platform Services enable organizations to reduce IT resource requirements and improve productivity, in addition to lowering costs and speeding digital transformation.

Digital Platform: Cloud Intelligent Management (CiM) is an autonomous hybrid cloud management platform that works across multi-cloud environments and helps enterprises get the most out of their cloud strategy while reducing cost, risk, and time.

To know more, please visit: http://www.everestdx.com
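Among the responsibilities in the listing above is implementing data archival and retention policies. The core decision can be sketched in a few lines (the 90-day window and record shape are hypothetical; a real job would move the archived rows to cheaper storage and delete them from the hot store):

```python
from datetime import datetime, timedelta

def partition_by_retention(rows, now, keep_days=90):
    # Split records into those still inside the retention window
    # and those old enough to be archived to cheaper storage.
    cutoff = now - timedelta(days=keep_days)
    keep, archive = [], []
    for row in rows:
        (keep if row["created_at"] >= cutoff else archive).append(row)
    return keep, archive

now = datetime(2025, 7, 1)
rows = [
    {"id": 1, "created_at": datetime(2025, 6, 20)},
    {"id": 2, "created_at": datetime(2025, 1, 5)},
]
keep, archive = partition_by_retention(rows, now)
print([r["id"] for r in keep], [r["id"] for r in archive])  # [1] [2]
```

Scheduled via Jenkins or Azure Automation, this kind of partitioning is what keeps hot tables small, which is the query-performance and cost benefit the listing alludes to.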

Posted 1 day ago

Apply

10.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Job Description:
- Participate in design discussions/scrum meetings.
- Develop microservices using Java, Spring Boot, and Cassandra.
- Develop REST APIs for other systems to interface with our microservice platforms.
- Work with the Cassandra database to store and retrieve application data.
- CI/CD integration using Jenkins to validate code security and push software builds to Dev/Test/Prod sites.
- Check code into GitHub with proper versioning and release notes.
- Write unit tests using JUnit and integration tests.
- Load test components using JMeter and ensure that the applications are horizontally scalable.
- Support the operations team with troubleshooting production issues.

Special skills an applicant should have (certifications, knowledge, experience, etc.):
- Overall 10+ years in software development.
- 3+ years of experience with Cassandra.
- 4+ years of experience with Unix (Ubuntu preferred).
- Experience with (REST) API development.
- Experience with Git, Maven, and Gradle.
- Experience in performance tuning APIs, using profilers, and analyzing heap dumps.
- Experience with distributed caching mechanisms and messaging systems (Hazelcast).
- Some experience with IP networking and troubleshooting.
- Experience with Kubernetes and Docker is a plus.
- Experience with OpenSearch, Fluent Bit, and TView.

Weekly Hours: 40
Time Type: Regular
Location: Bangalore, Karnataka, India

It is the policy of AT&T to provide equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state, or local law. In addition, AT&T will provide reasonable accommodations for qualified individuals with disabilities. AT&T is a fair chance employer and does not initiate a background check until an offer is made.

Posted 1 day ago

Apply

6.0 years

0 Lacs

Bengaluru, Karnataka

Remote

Senior Software Engineer
Bangalore, Karnataka, India

Date posted: Jul 28, 2025
Job number: 1849823
Work site: Up to 50% work from home
Travel: 0-25%
Role type: Individual Contributor
Profession: Software Engineering
Discipline: Software Engineering
Employment type: Full-Time

Overview
Microsoft Silicon, Cloud Hardware, and Infrastructure Engineering (SCHIE) is the team behind Microsoft's expanding cloud infrastructure and is responsible for powering Microsoft's "Intelligent Cloud" mission. SCHIE delivers the core infrastructure and foundational technologies for Microsoft's over 200 online businesses, including Bing, MSN, Office 365, Xbox Live, Teams, OneDrive, and the Microsoft Azure platform globally, with our server and data center infrastructure, security and compliance, operations, globalization, and manageability solutions. Our focus is on smart growth, high efficiency, and delivering a trusted experience to customers and partners worldwide, and we are looking for passionate engineers to help achieve that mission.

As Microsoft's cloud business continues to grow, the ability to deploy new offerings and hardware infrastructure on time, in high volume, with high quality and lowest cost is of paramount importance. To achieve this goal, the SW/FW Centre of Excellence team is instrumental in defining and delivering operational measures of success for hardware manufacturing, improving the planning process, quality, delivery, scale, and sustainability related to Microsoft cloud hardware. We are looking for seasoned engineers with a dedicated passion for customer-focused solutions, insight, and industry knowledge to envision and implement future technical solutions that will manage and optimize the cloud infrastructure.

We are looking for a highly motivated Senior Software Engineer with a track record in cloud service development to help us develop and light up innovative AI-based solutions that improve engineering efficiency across development, validation, and monitoring. To be successful in this role, you must have a great track record of delivering quality results to customers, an engineering mindset, an innate aptitude for agility, and technical excellence in software engineering. #SCHIE

Qualifications

Required Qualifications:
- Bachelor's degree in Computer Science, Computer Engineering, or a related field.
- 6+ years of industry experience in AI/ML engineering using platforms and languages/frameworks such as Python, Semantic Kernel, AutoGen, Azure AI Foundry, Mem0, and Azure AI Search.
- Proven experience in designing, building, and deploying AI agents across the autonomy spectrum, from retrieval-based to task-oriented and autonomous agents.
- Strong background in developing web applications and services that integrate AI/ML models for business insights and automation.

Preferred Qualifications:
- Hands-on experience with large language models (LLMs), including training, fine-tuning, and inference optimization for multi-billion-parameter models.
- Familiarity with the full ML lifecycle: data engineering, model training, evaluation, deployment, and monitoring.
- Understanding of embedded systems, firmware development, and OS concepts is a strong plus.

Ability to meet Microsoft, customer, and/or government security screening requirements is required for this role. These requirements include but are not limited to the following specialized security screening: Microsoft Cloud Background Check. This position will be required to pass the Microsoft Cloud Background Check upon hire/transfer and every two years thereafter.

Responsibilities
- Design and implement AI agents using modern agent development frameworks (e.g., Semantic Kernel, AutoGen, AI Foundry).
- Build scalable, production-grade AI services that integrate with enterprise systems and workflows.
- Collaborate with cross-functional teams to define agent capabilities, communication protocols, and compliance requirements.
- Optimize agent performance for real-time inference and continuous learning.

Benefits/perks listed below may vary depending on the nature of your employment with Microsoft and the country where you work.
- Industry-leading healthcare
- Educational resources
- Discounts on products and services
- Savings and investments
- Maternity and paternity leave
- Generous time away
- Giving programs
- Opportunities to network and connect

Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations, and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.

Posted 1 day ago

Apply