
3197 Indexing Jobs - Page 24

JobPe aggregates job results for easy access, but applications are submitted directly on the original job portal.

5.0 years

0 Lacs

India

Remote

Client Type: US Client
Location: Remote

This is a 6-month freelance contract offering up to 30 hours per week. We are seeking a dynamic and knowledgeable Subject Matter Expert (SME) to play a key role in the development and delivery of a certification program focused on Microsoft Data Architecture for Modern Data Stacks. The successful candidate will leverage their deep expertise in designing, implementing, and managing modern data architecture using Microsoft Fabric, Azure, and Power Platform tools to shape the curriculum, create training materials, and empower learners with practical skills in areas such as unified data lakes, lakehouse and data warehousing patterns, data ingestion, transformation, orchestration, and advanced data governance. This role requires a strong passion for education, excellent communication skills, and the ability to effectively convey complex technical information to support beginner, intermediate, and advanced architects. As this is a freelance position, we're seeking individuals with a proven track record of successfully managing freelance engagements and multiple client relationships.

Responsibilities:
- Collaborate with a team of learning experience designers to help create and validate training materials.
- Review a skills task or job task analysis for accuracy and completeness, providing feedback on essential vs. nice-to-know tasks and suggesting improvements.
- Review a high-level program outline and provide feedback on the order and complexity of topics for the intended audience.
- Review a detailed program outline, ensuring alignment with the high-level program outline and helping to confirm that content is presented in the correct order and format.
- Validate AI-generated content to ensure it conforms to learning objectives and is technically accurate.
- Support creation of content-specific graphics such as tables, flow charts, and screen captures.
- Create any assets required for demonstrations that showcase specific procedures and skills.
- Create recordings of software demonstrations and related audio scripts.
- Coordinate with learning experience designers to build out the assets, steps, and technical elements needed for hands-on exercises, labs, and projects.
- Be available during US business hours, M-F, for content reviews, questions, and occasional meetings.
- Work on the company's systems for all work, including email, messaging platform, and cloud-based file storage systems.
- Log time weekly and invoice time monthly.

Essential Tools & Platforms:
- Microsoft Fabric
- OneLake
- Fabric Lakehouse
- Fabric Data Warehouse
- Data Factory (Fabric)
- Dataflows Gen2
- Event Streams
- Data Activator
- Microsoft Purview
- Power BI
- Copilot in Fabric
- Copilot Studio
- Azure Monitor
- Azure Stream Analytics
- Microsoft Entra ID (formerly Azure AD)
- SQL Server / T-SQL
- Visual Studio Code (for development, if applicable for notebooks/scripts)

Required Skills & Experience:
- Proven hands-on experience designing, building, and optimizing modern data architectures using Microsoft Fabric, including OneLake, Fabric Lakehouse, Fabric Data Warehouse, Data Factory (Fabric), Dataflows Gen2, Event Streams, Data Activator, Microsoft Purview, and Power BI.
- Demonstrated ability to architect and implement unified data lakes with OneLake, leveraging open data formats (Delta, Parquet, Iceberg) and medallion architectures (bronze/silver/gold zones).
- Skilled in building and managing Lakehouse solutions using Delta tables, managed folders, and notebooks.
- Expertise in designing, deploying, and querying Fabric Data Warehouses with advanced T-SQL, including schema design (star, snowflake, data vault), partitioning, indexing, and compute scaling.
- Experience with distributed and replicated tables, and Direct Lake mode for high-speed analytics.
- Practical experience creating robust ETL/ELT pipelines using Data Factory (Fabric), mapping dataflows, notebooks, and SQL transformations.
- Skilled in handling schema evolution, parameterization, error handling, and performance tuning.
- Experience with real-time data processing using Event Streams and Data Activator.
- Deep understanding of data governance using Microsoft Purview, including data cataloging, classification, sensitivity labeling, lineage visualization, and compliance mapping (GDPR, HIPAA).
- Ability to define domain ownership, stewardship, and glossary terms within Purview.
- Proficiency in enforcing identity and access control with Microsoft Entra ID, configuring row-level and column-level security, and applying RBAC and service principal authentication across Lakehouse, Warehouse, and Power BI.
- Experience with auditing, monitoring, and securing data architectures.
- Strong experience building reusable Power BI semantic models, defining DAX measures, implementing incremental refresh, and leveraging Direct Lake connectivity.
- Skilled in designing and publishing secure, interactive dashboards and embedded analytics solutions.
- Ability to recognize and apply Lakehouse, Warehouse, Mesh, and hybrid patterns based on business needs.
- Experience with performance optimization, cost control, modular design, and decentralized domain architectures.
- Familiarity with Copilot in Fabric, Data Agents, and AI-powered automation in Power BI for pipeline generation, natural language querying, and workflow optimization.
- Demonstrated success designing, reviewing, and delivering hands-on labs, real-world projects, and portfolio artifacts (architecture diagrams, pipeline configs, governance plans, BI reports) for intermediate data professionals.
- Exceptional ability to articulate complex technical concepts clearly for diverse audiences.
- Experience creating technical documentation, screencasts, and video tutorials.
- Strong understanding of adult learning principles and instructional best practices.
- Experience reviewing training materials for technical accuracy and clarity.
- Essential experience in training, learning and development, or teaching.
- Proven ability to create and deliver effective screencasts and video tutorials.
- Strong ability to articulate complex technical concepts in an accessible manner.
- Availability to work during US business hours.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related technical field.
- 5+ years of hands-on experience designing, building, and deploying complex data solutions on Microsoft Fabric, Azure data services, and Power Platform, covering areas like data lakes, lakehouses, data warehouses, ETL/ELT, and governance.
- Demonstrable expertise with key Microsoft data services and tools, including but not limited to Microsoft Fabric, OneLake, Fabric Lakehouse, Fabric Data Warehouse, Data Factory (Fabric), Microsoft Purview, Power BI, Event Streams, and Data Activator.
- Proven experience in technical training, learning & development, or teaching, with a strong ability to create, review, and deliver high-quality, technically accurate training materials (e.g., course outlines, hands-on exercises, video tutorials).
- Exceptional technical communication skills, including the ability to articulate complex concepts clearly to diverse audiences.
- Strong problem-solving, debugging, and communication skills.
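The medallion (bronze/silver/gold) pattern this listing asks for can be sketched independently of any particular engine as three successive refinement steps. The following is a minimal pure-Python illustration with an invented toy dataset, not Fabric- or Delta-specific code:

```python
# Minimal illustration of the medallion pattern: raw records land in
# bronze, are validated into silver, and aggregated into gold.
raw_events = [
    {"user": "a", "amount": "10.5", "ok": "true"},
    {"user": "b", "amount": "oops", "ok": "true"},   # malformed row
    {"user": "a", "amount": "4.5",  "ok": "false"},
]

def to_bronze(records):
    # Bronze zone: keep raw data as-is, adding only ingestion metadata.
    return [{**r, "_ingested": True} for r in records]

def to_silver(bronze):
    # Silver zone: validate and type-cast; drop rows that fail parsing.
    silver = []
    for r in bronze:
        try:
            silver.append({"user": r["user"],
                           "amount": float(r["amount"]),
                           "ok": r["ok"] == "true"})
        except ValueError:
            continue  # a real pipeline would quarantine these rows
    return silver

def to_gold(silver):
    # Gold zone: business-level aggregate (total amount per user).
    totals = {}
    for r in silver:
        totals[r["user"]] = totals.get(r["user"], 0.0) + r["amount"]
    return totals

gold = to_gold(to_silver(to_bronze(raw_events)))
print(gold)  # {'a': 15.0} -- the malformed 'b' row is dropped in silver
```

In Fabric the same zones would typically be Delta tables in a Lakehouse, with Data Factory or notebooks performing each hop.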

Posted 1 week ago


0 years

0 Lacs

Pune, Maharashtra, India

On-site

Stantec is a global leader in sustainable engineering, architecture, and environmental consulting. The diverse perspectives of our partners and interested parties drive us to think beyond what's previously been done on critical issues like climate change, digital transformation, and future-proofing our cities and infrastructure. We innovate at the intersection of community, creativity, and client relationships to advance communities everywhere, so that together we can redefine what's possible. The Stantec community unites approximately 32,000 employees working in over 450 locations across 6 continents.

Description of Duties and Responsibilities:
- Consistently follow company policies and procedures to complete assigned tasks and duties.
- Perform major P2P activities such as invoice processing; accurate, complete, and timely reconciliation of vendor accounts; cash application; statement reconciliation; invoice indexing; and batch posting.
- Follow detailed instructions to maintain accurate, consistent, and efficient processing procedures and standards for the department.
- Participate in ongoing training and professional development as directed by management.
- Work in a manner that ensures your personal safety and that of fellow employees by following company health and safety guidelines and policies.
- Perform research and additional assignments as directed by the Accounts Payable Team Lead.
- Research and process incoming vendor statements.
- Monitor and follow up on aged invoices in process.
- Provide excellent customer service through email, telephone, and instant messaging to both internal and external customers as required.

Essential Qualifications & Skills:
- Bachelor's degree in commerce or business administration, with year of passing between 2021 and 2024.
- Excellent English communication skills.
- Percentage criteria: 65% throughout academics, with no backlogs.
- Understanding of transaction processing and data analysis.
- Experience with computerized accounting systems is a plus.
- Proficiency in the Microsoft Office Suite, with good Excel skills (e.g., using pivot tables to analyze and report on large volumes of data, VLOOKUPs).
- Strong analytical and mathematical abilities.
- Attention to detail and a high level of accuracy.
- Good verbal and written communication skills.
- Strong team orientation and collaboration.
- Willingness to learn and to ask questions in a positive, non-confrontational manner.
- Flexibility to extend the shift as per business needs and occasionally work on weekends.

Primary Location: India | Pune
Organization: Stantec IN Business Unit
Employee Status: Regular
Travel: No
Schedule: Full time
Job Posting: 08/07/2025 05:07:25
Req ID: 1001004

Posted 1 week ago


5.0 years

0 Lacs

India

Remote

Client Type: US Client
Location: Remote
The hourly rate is negotiable.

About the Role
We're creating a new certification: Google AI Ecosystem Architect (Gemini & DeepMind) - Subject Matter Expert. This course is designed for technical learners who want to understand and apply the capabilities of Google's Gemini models and DeepMind technologies to build powerful, multimodal AI applications. We're looking for a Subject Matter Expert (SME) who can help shape this course from the ground up. You'll work closely with a team of learning experience designers, writers, and other collaborators to ensure the course is technically accurate, industry-relevant, and instructionally sound.

Responsibilities
As the SME, you'll partner with learning experience designers and content developers to:
- Translate real-world Gemini and DeepMind applications into accessible, hands-on learning for technical professionals.
- Guide the creation of labs and projects that allow learners to build pipelines for image-text fusion, deploy Gemini APIs, and experiment with DeepMind's reinforcement learning libraries.
- Contribute technical depth across activities, from high-level course structure down to example code, diagrams, voiceover scripts, and data pipelines.
- Ensure all content reflects current, accurate usage of Google's multimodal tools and services.
- Be available during U.S. business hours to support project milestones, reviews, and content feedback.

This role is an excellent fit for professionals with deep experience in AI/ML, Google Cloud, and a strong familiarity with multimodal systems and the DeepMind ecosystem.
Essential Tools & Platforms
A successful SME in this role will demonstrate fluency and hands-on experience with the following:

Google Cloud Platform (GCP)
- Vertex AI (particularly Gemini integration, model tuning, and multimodal deployment)
- Cloud Functions, Cloud Run (for inference endpoints)
- BigQuery and Cloud Storage (for handling large image-text datasets)
- AI Platform Notebooks or Colab Pro

Google DeepMind Technologies
- JAX and Haiku (for neural network modeling and research-grade experimentation)
- DeepMind Control Suite or DeepMind Lab (for reinforcement learning demonstrations)
- RLax or TF-Agents (for building and modifying RL pipelines)

AI/ML & Multimodal Tooling
- Gemini APIs and SDKs (image-text fusion, prompt engineering, output formatting)
- TensorFlow 2.x and PyTorch (for model interoperability)
- Label Studio, Cloud Vision API (for annotation and image-text preprocessing)

Data Science & MLOps
- DVC or MLflow (for dataset and model versioning)
- Apache Beam or Dataflow (for processing multimodal input streams)
- TensorBoard or Weights & Biases (for visualization)

Content Authoring & Collaboration
- GitHub or Cloud Source Repositories
- Google Docs, Sheets, Slides
- Screen recording tools like Loom or OBS Studio

Required Skills and Experience:
- Demonstrated hands-on experience building, deploying, and maintaining sophisticated AI-powered applications using Gemini APIs/SDKs within the Google Cloud ecosystem, especially in Firebase Studio and VS Code.
- Proficiency in designing and implementing agent-like application patterns, including multi-turn conversational flows, state management, and complex prompting strategies (e.g., chain-of-thought, few-shot, zero-shot).
- Experience integrating Gemini with Google Cloud services (Firestore, Cloud Functions, App Hosting) and external APIs for robust, production-ready solutions.
- Proven ability to engineer applications that process, integrate, and generate content across multiple modalities (text, images, audio, video, code) using Gemini's native multimodal capabilities.
- Skilled in building and orchestrating pipelines for multimodal data handling, synchronization, and complex interaction patterns within application logic.
- Experience designing and implementing production-grade RAG systems, including integration with vector databases (e.g., Pinecone, ChromaDB) and engineering data pipelines for indexing and retrieval.
- Ability to manage agent state, memory, and persistence for multi-turn and long-running interactions.
- Proficiency leveraging AI-assisted coding features in Firebase Studio (chat, inline code, command execution) and using App Prototyping agents or frameworks like Genkit for rapid prototyping and structuring agentic logic.
- Strong command of modern development workflows, including Git/GitHub, code reviews, and collaborative development practices.
- Experience designing scalable, fault-tolerant deployment architectures for multimodal and agentic AI applications using Firebase App Hosting, Cloud Run, or similar serverless/cloud platforms.
- Advanced MLOps skills, including monitoring, logging, alerting, and versioning for generative AI systems and agents.
- Deep understanding of security best practices: prompt-injection mitigation (across modalities), secure API key management, authentication/authorization, and data privacy.
- Demonstrated ability to engineer for responsible AI, including bias detection, fairness, transparency, and implementation of safety mechanisms in agentic and multimodal applications.
- Experience addressing ethical challenges in the deployment and operation of advanced AI systems.
- Proven success designing, reviewing, and delivering advanced, project-based curriculum and hands-on labs for experienced software developers and engineers.
- Ability to translate complex engineering concepts (RAG, multimodal integration, agentic patterns, MLOps, security, responsible AI) into clear, actionable learning materials and real-world projects.
- 5+ years of professional experience in AI-powered application development, with a focus on generative and multimodal AI.
- Strong programming skills in Python and JavaScript/TypeScript; experience with modern frameworks, cloud-native development, and deploying machine learning pipelines.
- Bachelor's or Master's degree in Computer Science, Data Engineering, AI, or a related technical field.
- Ability to explain advanced technical concepts (e.g., fusion transformers, multimodal embeddings, RAG workflows) to learners in an accessible way.
- Ability to work independently, take ownership of deliverables, and collaborate closely with designers and project managers.

Preferred:
- Experience with Google DeepMind tools (JAX, Haiku, RLax, DeepMind Control Suite/Lab) and reinforcement learning pipelines.
- Familiarity with open data formats (Delta, Parquet, Iceberg) and scalable data engineering practices.
- Prior contributions to open-source AI projects or technical community engagement.
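The RAG systems mentioned in this listing share a common retrieval core: embed documents, rank them against a query, and feed the best match into the prompt. A minimal sketch, using a toy bag-of-words "embedding" and cosine similarity in place of a real embedding model and vector database (Pinecone, ChromaDB, etc.); all names and the corpus are illustrative:

```python
# Sketch of the retrieval step in a RAG pipeline.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy embedding: term-frequency vector over lowercase tokens.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list, k: int = 1) -> list:
    # Rank indexed documents by similarity to the query; a production
    # system would query a vector store here instead of scanning a list.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "Gemini supports multimodal image and text input",
    "Terraform provisions cloud infrastructure",
]
context = retrieve("image and text multimodal models", docs, k=1)
# The retrieved passage is then injected into the generation prompt:
prompt = f"Answer using this context:\n{context[0]}\n\nQuestion: ..."
print(context[0])
```

The same shape holds at scale; only the embedding model, the index, and the generator change.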

Posted 1 week ago


8.0 years

12 - 40 Lacs

Gurgaon

On-site

Job Title: AWS DevOps Engineer – Manager, Business Solutions
Location: Gurgaon, India
Experience Required: 8-12 years
Industry: IT

We are looking for a seasoned AWS DevOps Engineer with robust experience in AWS middleware services and MongoDB cloud infrastructure management. The role involves designing, deploying, and maintaining secure, scalable, and high-availability infrastructure, along with developing efficient CI/CD pipelines and automating operational processes.

Key Deliverables (essential functions & responsibilities of the job):
- Design, deploy, and manage AWS infrastructure, with a focus on middleware services such as API Gateway, Lambda, SQS, SNS, ECS, and EKS.
- Administer and optimize MongoDB Atlas or equivalent cloud-based MongoDB solutions for high availability, security, and performance.
- Develop, manage, and enhance CI/CD pipelines using tools like AWS CodePipeline, Jenkins, GitHub Actions, GitLab CI/CD, or Bitbucket Pipelines.
- Automate infrastructure provisioning using Terraform, AWS CloudFormation, or AWS CDK.
- Implement monitoring and logging solutions using CloudWatch, Prometheus, Grafana, or the ELK Stack.
- Enforce cloud security best practices: IAM, VPC setups, encryption, certificate management, and compliance controls.
- Work closely with development teams to improve application reliability, scalability, and performance.
- Manage containerized environments using Docker, Kubernetes (EKS), or AWS ECS.
- Perform MongoDB administration tasks such as backups, performance tuning, indexing, and sharding.
- Participate in on-call rotations to ensure 24/7 infrastructure availability and quick incident resolution.

Knowledge, Skills, and Abilities:
- 7+ years of hands-on AWS DevOps experience, especially with middleware services.
- Strong expertise in MongoDB Atlas or other cloud MongoDB services.
- Proficiency in Infrastructure as Code (IaC) tools like Terraform, CloudFormation, or AWS CDK.
- Solid experience with CI/CD tools: Jenkins, CodePipeline, GitHub Actions, GitLab, Bitbucket, etc.
- Excellent scripting skills in Python, Bash, or PowerShell.
- Experience in containerization and orchestration: Docker, EKS, ECS.
- Familiarity with monitoring tools like CloudWatch, ELK, Prometheus, Grafana.
- Strong understanding of AWS networking and security: IAM, VPC, KMS, Security Groups.
- Ability to solve complex problems and thrive in a fast-paced environment.

Preferred Qualifications:
- AWS Certified DevOps Engineer – Professional or AWS Solutions Architect – Associate/Professional.
- MongoDB Certified DBA or Developer.
- Experience with serverless services like AWS Lambda and Step Functions.
- Exposure to multi-cloud or hybrid cloud environments.

Mail an updated resume with current salary to jobs@glansolutions.com (Satish; 88O2749743).

Job Type: Full-time
Pay: ₹1,222,917.42 - ₹4,015,740.18 per year
Schedule: Day shift
Ability to commute/relocate: Gurgaon, Haryana: reliably commute or plan to relocate before starting work (Preferred)

Application Question(s):
- How much experience do you have in AWS DevOps with middleware services?
- How much experience do you have in MongoDB Atlas or other cloud MongoDB services?
- Current CTC?
- Expected CTC?
- Notice period?
- Current location?
- Would you be comfortable with the job location (Gurgaon)?

Experience:
- AWS DevOps: 7 years (Preferred)
- MongoDB: 7 years (Preferred)

Work Location: In person
Speak with the employer: +91 9015477985
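Infrastructure-as-code tools like CloudFormation, named above, consume declarative templates. As a minimal sketch of the template anatomy involved, the following builds a one-resource CloudFormation-style document programmatically; the bucket name is an illustrative placeholder, and a real deployment would pass the JSON to the AWS CLI or an SDK:

```python
# Build a minimal CloudFormation-style template as a Python dict and
# serialize it to JSON. "DataBucket" and the bucket name are invented.
import json

def make_template(bucket_name: str) -> str:
    template = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "DataBucket": {
                "Type": "AWS::S3::Bucket",
                "Properties": {"BucketName": bucket_name},
            }
        },
    }
    return json.dumps(template, indent=2)

doc = make_template("example-artifacts-bucket")
print(doc)
```

Terraform and AWS CDK express the same idea in HCL and in general-purpose code respectively, but all three reduce to a declarative resource graph like this one.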

Posted 1 week ago


0 years

5 - 11 Lacs

Thiruvananthapuram

On-site

Required Skills
We are looking for an experienced AI Engineer to join our team. The ideal candidate will have a strong background in designing, deploying, and maintaining advanced AI/ML models, with expertise in Natural Language Processing (NLP), Computer Vision, and architectures like Transformers and diffusion models. You will play a key role in developing AI-powered solutions, optimizing performance, and deploying and managing models in production environments.

Key Responsibilities

1. AI Model Development and Optimization:
- Design, train, and fine-tune AI models for NLP, Computer Vision, and other domains using frameworks like TensorFlow and PyTorch.
- Work on advanced architectures, including Transformer-based models (e.g., BERT, GPT, T5) for NLP tasks and CNN-based models (e.g., YOLO, VGG, ResNet) for Computer Vision applications.
- Utilize techniques like PEFT (Parameter-Efficient Fine-Tuning) and SFT (Supervised Fine-Tuning) to optimize models for specific tasks.
- Build and train RLHF (Reinforcement Learning from Human Feedback) and RL-based models to align AI behavior with real-world objectives.
- Explore multimodal AI solutions combining text, vision, and audio using generative deep learning architectures.

2. Natural Language Processing (NLP):
- Develop and deploy NLP solutions, including language models, text generation, sentiment analysis, and text-to-speech systems.
- Leverage advanced Transformer architectures (e.g., BERT, GPT, T5) for NLP tasks.

3. AI Model Deployment and Frameworks:
- Deploy AI models using frameworks like vLLM, Docker, and MLflow in production-grade environments.
- Create robust data pipelines for training, testing, and inference workflows.
- Implement CI/CD pipelines for seamless integration and deployment of AI solutions.

4. Production Environment Management:
- Deploy, monitor, and manage AI models in production, ensuring performance, reliability, and scalability.
- Set up monitoring systems using Prometheus to track metrics like latency, throughput, and model drift.

5. Data Engineering and Pipelines:
- Design and implement efficient data pipelines for preprocessing, cleaning, and transformation of large datasets.
- Integrate with cloud-based data storage and retrieval systems for seamless AI workflows.

6. Performance Monitoring and Optimization:
- Optimize AI model performance through hyperparameter tuning and algorithmic improvements.
- Monitor performance using tools like Prometheus, tracking key metrics (e.g., latency, accuracy, model drift, error rates).

7. Solution Design and Architecture:
- Collaborate with cross-functional teams to understand business requirements and translate them into scalable, efficient AI/ML solutions.
- Design end-to-end AI systems, including data pipelines, model training workflows, and deployment architectures, ensuring alignment with business objectives and technical constraints.
- Conduct feasibility studies and proof-of-concepts (PoCs) for emerging technologies to evaluate their applicability to specific use cases.

8. Stakeholder Engagement:
- Act as the technical point of contact for AI/ML projects, managing expectations and aligning deliverables with timelines.
- Participate in workshops, demos, and client discussions to showcase AI capabilities and align solutions with client needs.

Technical Skills
- Proficient in Python, with strong knowledge of libraries like NumPy, Pandas, SciPy, and Matplotlib for data manipulation and visualization.
- Expertise in TensorFlow, PyTorch, Scikit-learn, and Keras for building, training, and optimizing machine learning and deep learning models.
- Hands-on experience with Transformer libraries like Hugging Face Transformers, OpenAI APIs, and LangChain for NLP tasks.
- Practical knowledge of CNN architectures (e.g., YOLO, ResNet, VGG) and Vision Transformers (ViT) for Computer Vision applications.
- Proficiency in developing and deploying diffusion models like Stable Diffusion, SDXL, and other generative AI frameworks.
- Experience with RLHF (Reinforcement Learning from Human Feedback) and reinforcement learning algorithms for optimizing AI behaviors.
- Proficiency with Docker and Kubernetes for containerization and orchestration of AI workflows.
- Hands-on experience with MLOps tools such as MLflow for model tracking and CI/CD integration in AI pipelines.
- Expertise in setting up monitoring tools like Prometheus and Grafana to track model performance, latency, throughput, and drift.
- Knowledge of performance-optimization techniques, such as quantization, pruning, and knowledge distillation, to improve model efficiency.
- Experience building data pipelines for preprocessing, cleaning, and transforming large datasets using tools like Apache Airflow and Luigi.
- Familiarity with cloud-based storage systems (e.g., AWS S3, Google BigQuery) for efficient data handling in AI workflows.
- Strong understanding of cloud platforms (AWS, GCP, Azure) for deploying and scaling AI solutions.
- Knowledge of advanced search technologies such as Elasticsearch for indexing and querying large datasets.
- Familiarity with edge deployment frameworks and optimization for resource-constrained environments.

Qualifications
- Bachelor's or Master's degree in Data Science, Statistics, Mathematics, Computer Science, or a related field.

Experience: 2.5 to 5 years
Location: Trivandrum
Job Type: Full-time
Pay: ₹500,000.00 - ₹1,100,000.00 per year
Benefits: Health insurance, Provident Fund
Location Type: In-person
Schedule: Day shift
Work Location: In person
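Quantization, one of the optimization techniques listed above, can be illustrated without any ML framework. The following toy 8-bit affine quantizer maps floats to integers and back, trading a small reconstruction error for a 4x smaller representation; the scheme and names are simplified for illustration:

```python
# Toy illustration of 8-bit affine quantization.
def quantize(values, num_bits=8):
    lo, hi = min(values), max(values)
    scale = (hi - lo) / (2**num_bits - 1) or 1.0  # guard all-equal input
    q = [round((v - lo) / scale) for v in values]
    return q, scale, lo

def dequantize(q, scale, lo):
    return [x * scale + lo for x in q]

weights = [-1.0, -0.5, 0.0, 0.5, 1.0]
q, scale, lo = quantize(weights)
restored = dequantize(q, scale, lo)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(q)        # integers in [0, 255]
print(max_err)  # worst-case rounding error, at most about scale/2
```

Production quantization (e.g., in PyTorch or TensorRT) adds per-channel scales, calibration, and integer kernels, but the affine mapping above is the core idea.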

Posted 1 week ago


3.0 years

6 - 12 Lacs

India

On-site

Experience Required: 3–6 years

Job Summary:
We are looking for an experienced and highly skilled Senior Full Stack Developer with a strong command of React.js and Node.js to take a leading role in building complex, scalable, and secure web applications. You'll be expected to contribute to the architecture and development of both frontend and backend systems, mentor junior developers, and collaborate closely with product and design teams.

Key Responsibilities:
- Lead the design, development, and deployment of full-stack applications using React.js (front end) and Node.js (back end).
- Architect robust, secure, and scalable backend systems using Express.js and modern API design principles.
- Work with microservices architecture, serverless frameworks, and distributed systems.
- Translate business and product requirements into high-quality technical solutions.
- Write high-performance, reusable, and modular code following industry best practices and coding standards.
- Oversee database design, manage complex queries, and optimize performance in MySQL and PostgreSQL.
- Use build tools like Webpack and Babel, and package managers such as npm or Yarn.
- Develop and maintain comprehensive test coverage through unit and integration tests.
- Conduct code reviews, provide mentorship, and enforce clean-code practices across the team.
- Lead troubleshooting and debugging efforts across full-stack environments.
- Contribute to and maintain CI/CD pipelines to ensure reliable deployment processes.
- Collaborate cross-functionally with Product Managers, Designers, QA, and DevOps.
- Ensure applications meet security, accessibility, and performance benchmarks.
- Advocate for and help adopt emerging technologies and AI developer tools like Cursor AI or Cline.

Required Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- 5–6 years of hands-on experience in full-stack development with a primary focus on React.js and Node.js or Next.js.
- Deep understanding of JavaScript (ES6+), HTML5, CSS3, and modern front-end development practices.
- Strong architectural understanding of web applications, MVC patterns, and modular design.
- Proficient in MySQL and PostgreSQL, including schema design, indexing, joins, and query optimization.
- Demonstrated experience with unit-testing frameworks (Jest, Mocha, etc.) and test-driven development (TDD).
- Proficient in version control systems like Git and workflows such as GitFlow.
- Strong understanding of asynchronous programming, authentication, and authorization protocols (JWT, OAuth).
- Proven experience setting up and managing CI/CD pipelines and deployment automation.

Preferred Skills:
- Experience with TypeScript in both front-end and back-end environments.
- Exposure to containerization tools like Docker and experience deploying to cloud platforms (AWS, GCP, Firebase).
- Familiarity with GraphQL, WebSockets, or event-driven systems.
- Hands-on experience with AI-powered dev tools (e.g., Cursor AI, Cline, GitHub Copilot).
- Experience mentoring junior developers and leading sprint-based teams in an Agile environment.
- Familiarity with monitoring, alerting, and observability tools (e.g., Sentry, New Relic, Prometheus).
- Exposure to performance profiling and browser-based debugging.

Location: Trivandrum (work from office)
Salary: 6–12 LPA
Job Types: Full-time, Permanent
Pay: ₹600,000.00 - ₹1,200,000.00 per year
Work Location: In person
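The JWT authentication protocol named in the requirements above rests on a simple structure: two base64url-encoded JSON segments plus an HMAC signature over them. A stdlib-only sketch of the HS256 pattern follows; the secret and claims are illustrative, and real services should use a maintained library (e.g., PyJWT or jsonwebtoken for Node.js) rather than hand-rolled signing:

```python
# Sketch of HS256-style JWT signing and verification (illustrative only).
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    # JWTs use base64url without padding.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_token(payload: dict, secret: bytes) -> str:
    header = {"alg": "HS256", "typ": "JWT"}
    parts = [b64url(json.dumps(seg, separators=(",", ":")).encode())
             for seg in (header, payload)]
    sig = hmac.new(secret, ".".join(parts).encode(), hashlib.sha256).digest()
    return ".".join(parts + [b64url(sig)])

def verify_token(token: str, secret: bytes) -> bool:
    h, p, s = token.split(".")
    expected = hmac.new(secret, f"{h}.{p}".encode(), hashlib.sha256).digest()
    # Constant-time comparison prevents timing side channels.
    return hmac.compare_digest(b64url(expected), s)

token = sign_token({"sub": "user-1"}, b"demo-secret")
print(verify_token(token, b"demo-secret"))   # True
print(verify_token(token, b"wrong-secret"))  # False
```

A production implementation would also validate the `alg` header and registered claims such as `exp` before trusting the payload.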

Posted 1 week ago


5.0 years

4 - 8 Lacs

Hyderābād

On-site

Make an impact with NTT DATA

Join a company that is pushing the boundaries of what is possible. We are renowned for our technical excellence and leading innovations, and for making a difference to our clients and society. Our workplace embraces diversity and inclusion – it's a place where you can grow, belong, and thrive.

Your day at NTT DATA

As a Cross Technology Managed Services Engineer (L3) at NTT DATA, your primary role will be to provide exceptional service to our clients by proactively identifying and resolving technical incidents. You will ensure that client infrastructure is configured, tested, and operational, leveraging your deep technical expertise to solve complex problems and enhance our service quality. Your responsibilities will include pre-emptive service incident resolution, product reviews, and operational improvements. You will manage high-complexity tickets, perform advanced tasks, and provide diverse solutions while ensuring zero missed service level agreement (SLA) conditions. Your role will also involve mentoring junior team members and working across various technology domains such as Cloud, Security, Networking, and Applications. You will conduct necessary checks, apply monitoring tools, and respond to alerts, identifying issues before they become problems. Logging incidents with the required level of detail, you will analyse, assign, and escalate support calls. Additionally, you will proactively identify opportunities for optimization and automation, ensuring continuous feedback to clients and affected parties. An important responsibility is creating knowledge articles for frequent tasks and issues and training junior team members to execute those tasks. You will provide input to automation teams to reduce manual effort, engage with third-party vendors when necessary, and keep systems and portals updated as prescribed.
As a senior engineer, you will coach L2 teams on advanced troubleshooting techniques and support the implementation and delivery of projects, disaster recovery functions, and more, ensuring all actions adhere to client requirements and timelines.

To thrive in this role, you need to have:
- 5+ years of experience as an Oracle Database Administrator, working closely with Change Control, Release Management, Asset and Configuration Management, and Capacity and Availability Management to help establish the needs of users and monitor user access and security.
- Proven hands-on experience in Oracle Database administration; assists with performing the installation, configuration, and maintenance of Oracle Database Management Systems (DBMS).
- Primarily supports Oracle databases in a support/managed-services role.
- Assists with mapping out the conceptual design for a planned database.
- Participates in writing database documentation, including data standards, data-flow diagrams, standard operating procedures, and definitions for the data dictionary (metadata).
- Assists with monitoring database performance, identifies performance bottlenecks, and optimizes queries and indexing for optimal database performance.
- Helps design and implement robust backup and disaster recovery strategies to ensure data availability and business continuity.
- Proactively supports the development of database utilities and automated reporting.
- Works closely with the Change Control and Release Management functions to commission and install new applications and customize existing applications to make them fit for purpose.
- Assists with planning and executing database software upgrades and applies patches to keep systems up to date and secure.
- Communicates regularly with technical, applications, and operational employees to ensure database integrity and security.
Implements and manages security measures to safeguard databases from unauthorized access, data breaches, and data loss. Ensures data integrity and consistency by performing regular data validation, integrity checks, and data cleansing activities. Works collaboratively with cross-functional teams, including developers, system administrators, network engineers, and business stakeholders, to support database-related initiatives. Provides technical support to end-users, assists with database-related enquiries, and conducts training sessions as needed. Performs any other related task as required. Day-to-day operating hours: Australia and New Zealand business hours, plus 24x7 rostered on-call work as required. Required to stay online during working hours and be ready to take incoming video calls within the team and with clients. Knowledge and Attributes: Proficiency in database administration tasks, including database installation, configuration, maintenance, and performance tuning. Knowledge of SQL (Structured Query Language) to write complex queries, stored procedures, and functions. Solid understanding of database security principles, access controls, and data encryption methods. Solid understanding of database backup and recovery strategies to ensure data availability and business continuity. Ability to monitor database performance, identify and resolve issues, and optimize database operations. Ability to learn new technologies as needed to provide the best solutions to all stakeholders. Ability to communicate complex IT information in simplified form depending on the target audience. Effective communication and collaboration skills to work with cross-functional teams and stakeholders. Solid understanding of the principles of data architecture and data services. Knowledge of the application development lifecycle and data access layers. Displays problem-solving skills to troubleshoot database-related issues and implement effective solutions. 
Displays the ability to manipulate, process and extract value from large, disconnected datasets. Academic Qualifications and Certifications: Bachelor’s degree or equivalent in computer science, engineering, information technology or a related field. Relevant professional certifications such as Oracle Certified Professional (OCP) Database Administrator. Completion of database management courses covering topics like database administration, data modelling, SQL, and performance tuning can provide foundational knowledge. Required Experience: Experienced Oracle Database Administrator with 7+ years of experience (preferably in technology consulting). General skills: self-motivation, problem solving, ownership, and proactive thinking. Experience working as a consultant supporting multiple clients. Technical skills: Oracle RDBMS 11g-19c, Data Guard, high availability. Highly regarded: Oracle Autonomous Database (ADB), LogicMonitor, ServiceNow. Expert-level experience with database backup and recovery best practices. Expert-level demonstrated experience running and creating health assessment reports. Expert-level experience working with suppliers to deliver solutions. Expert-level experience managing databases. Moderate-level experience in Oracle Enterprise Manager. Extensive Managed Services experience handling complex cross-technology infrastructure. Strong understanding of ITIL processes. Proven experience working with vendors and/or third parties. Ability to communicate and work across different cultures and social groups. Effective planning skills, even in changing circumstances. Positive outlook and ability to work well under pressure. Active listening skills and adaptability to changing client needs. Client-focused approach to create positive client experiences throughout their journey. Bachelor's degree or equivalent qualification in IT/Computing (or demonstrated equivalent work experience). 
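The query-optimization and indexing work this role describes can be sketched in miniature. The snippet below is illustrative only: it uses Python's built-in SQLite rather than Oracle, and the table is invented. The point is the technique itself, which carries over to any RDBMS: inspect the query plan before and after adding an index to confirm the optimizer can avoid a full table scan.

```python
import sqlite3

# Illustrative only: SQLite stands in for Oracle here, and the "orders"
# table is invented; the technique (compare plans before/after indexing)
# is what the role's performance-tuning work is about.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                 [(i % 100, i * 1.5) for i in range(1000)])

def plan(sql: str) -> str:
    # Flatten EXPLAIN QUERY PLAN output into one string for inspection.
    rows = conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    return " ".join(r[-1] for r in rows)

query = "SELECT SUM(total) FROM orders WHERE customer_id = 42"
before = plan(query)   # no index yet: the plan reports a full table scan
conn.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")
after = plan(query)    # the planner now searches via the new index
```

Oracle exposes the same idea through `EXPLAIN PLAN FOR ...` and AWR reports; only the tooling differs.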
Workplace type : Hybrid Working About NTT DATA NTT DATA is a $30+ billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. We invest over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure, and connectivity. We are also one of the leading providers of digital and AI infrastructure in the world. NTT DATA is part of NTT Group and headquartered in Tokyo. Equal Opportunity Employer NTT DATA is proud to be an Equal Opportunity Employer with a global culture that embraces diversity. We are committed to providing an environment free of unfair discrimination and harassment. We do not discriminate based on age, race, colour, gender, sexual orientation, religion, nationality, disability, pregnancy, marital status, veteran status, or any other protected category. Join our growing global team and accelerate your career with us. Apply today.

Posted 1 week ago

Apply

8.0 years

0 Lacs

India

On-site

Experience – 8+ years Job Location – Hyderabad (Madhapur) Shifts – 24/7 support, 5 days a week Skills – Microsoft SQL Server, MySQL, Oracle; Azure, AWS, GCP, on-prem. Overview: We are seeking a highly skilled Senior Database Engineer to join our team. This role is responsible for designing, implementing, and maintaining robust, scalable, and secure database systems across cloud and on-premises environments. The ideal candidate will have deep expertise in various database technologies and a strong focus on performance optimization, data security, and system reliability. You will work with a range of Database Management Systems (DBMS) including Microsoft SQL Server, MySQL, and Oracle. Proficiency with version control systems such as Git and Azure DevOps is essential. Key Responsibilities: Data Security & Privacy: Implement and enforce database security best practices to protect sensitive information and comply with regulatory requirements. Performance Tuning & Optimization: Analyze and improve database performance through query optimization, indexing strategies, and server configuration. Backup & Disaster Recovery: Regularly test backup and high availability/disaster recovery (HA/DR) strategies across various platforms (Azure, AWS, GCP, on-prem). Data Migration & Integration: Plan and execute seamless data migrations, system upgrades, and integrations between different platforms and tools. Database Administration: Manage the installation, configuration, monitoring, and maintenance of DBMS environments, ensuring maximum uptime and reliability. Cloud & On-Premise DB Solutions: Deep experience in managing both cloud-based and on-premise database services, with hands-on expertise in Azure, AWS, and GCP. Collaboration & Support Coordination: Collaborate closely with L1 and L2 support teams and coordinate with vendors to resolve complex database issues and service requests. 
Required Expertise: Strong experience with relational and non-relational databases, including Microsoft SQL Server, MySQL, and Oracle. Expertise in High Availability and Disaster Recovery (HA/DR) strategies, backup and restore processes, and performance tuning and troubleshooting of large/complex database environments. Familiarity with version control tools (e.g., Git, Azure DevOps) and cloud platforms (Azure, AWS, GCP).
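The "regularly test backup and HA/DR strategies" responsibility above boils down to one principle: a backup is only proven when you restore it and verify the data. A minimal sketch of that verify-after-restore loop, assuming Python's built-in SQLite as a stand-in for any DBMS (real drills would use RMAN, `pg_basebackup`, or cloud snapshots instead):

```python
import sqlite3

# Sketch of a backup drill: take an online backup, simulate losing the
# source, then verify row counts and totals on the restored copy.
# SQLite and the "accounts" table are illustrative stand-ins.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
src.executemany("INSERT INTO accounts (balance) VALUES (?)",
                [(100.0,), (250.5,), (75.25,)])
src.commit()

backup = sqlite3.connect(":memory:")
src.backup(backup)   # online backup: copies the live database page by page

# Simulate disaster: the source is gone; all reads now hit the backup.
src.close()
restored_count = backup.execute("SELECT COUNT(*) FROM accounts").fetchone()[0]
restored_total = backup.execute("SELECT SUM(balance) FROM accounts").fetchone()[0]
assert restored_count == 3 and abs(restored_total - 425.75) < 1e-9
```

The assertions at the end are the whole point of the drill: an untested backup tells you nothing about recoverability.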

Posted 1 week ago

Apply


8.0 years

23 Lacs

Hyderābād

On-site

Job Title: Java Enterprise Technical Architect Location: Hyderabad Notice: Immediate joiners required We are looking for a highly skilled Java Enterprise Technical Architect with deep expertise in microservices architecture, cloud computing, DevOps, security, database optimization, and high-performance enterprise application design. The ideal candidate will have hands-on experience in fixing VAPT vulnerabilities, suggesting deployment architectures, implementing clustering and scalability solutions, and ensuring robust application and database security. They must also be ready to code when needed, ensuring best practices in software development while leading architecture decisions. Responsibilities: Architecture Design & Deployment Define and implement scalable, high-performance microservices architecture. Design secure and efficient deployment architectures, including clustering, failover, and HA strategies. Optimize enterprise applications for Apache HTTP Server, ensuring security and reverse proxy configurations. Provide recommendations for cloud-native architectures on AWS, Azure, or GCP. Security & VAPT Compliance Fix all Vulnerability Assessment & Penetration Testing (VAPT) issues and enforce secure coding practices. Implement end-to-end security including API security, identity management (OAuth2, JWT, SAML), and encryption mechanisms. Ensure database security (Oracle/PostgreSQL) with encryption (TDE), access control (RBAC/ABAC), and audit logging. Deploy DevSecOps pipelines integrating SAST/DAST tools like SonarQube, OWASP ZAP, or Checkmarx. Performance Optimization & Scalability Fine-tune Oracle & PostgreSQL databases, including indexing, query optimization, caching, and replication. Optimize microservices inter-service communication using Kafka, RabbitMQ, or gRPC. Implement load balancing, caching strategies (Redis, Memcached, Hazelcast), and high availability (HA) solutions. 
DevOps & Cloud Enablement Implement CI/CD pipelines using Jenkins, GitHub Actions, GitLab CI/CD, or ArgoCD. Optimize containerized deployments using Docker, Kubernetes (K8s), and Helm. Automate infrastructure as code (IaC) using Terraform or Ansible. Ensure observability with ELK Stack, Prometheus, Grafana, and distributed tracing (Jaeger, Zipkin). Technical Leadership & Hands-on Development Lead architecture decisions while being hands-on in coding with Java, Spring Boot, and microservices. Review and improve code quality, scalability, and security practices across development teams. Mentor developers, conduct training sessions, and ensure adoption of best practices in software engineering. Define architecture patterns, best practices, and coding standards to ensure high-quality, scalable, and secure applications. Collaborate with stakeholders, including business analysts, developers, and project managers, to ensure technical feasibility and alignment with business needs. Evaluate and recommend technologies, tools, and frameworks that best meet the project's needs. Oversee the integration of diverse technologies, platforms, and applications to ensure smooth interoperability. Ensure the security, performance, and reliability of system architecture through design and implementation. Review and optimize existing systems and architectures, identifying areas for improvement and implementing enhancements. Stay updated with emerging technologies, trends, and industry best practices to drive innovation. Conduct technical reviews, audits, and assessments of systems to ensure alignment with architecture and organizational standards. Experience: 8+ years of hands-on experience in Java full-stack, Spring Boot, J2EE, and microservices. 5+ years of expertise in designing enterprise-grade deployment architectures. Strong security background, with experience in fixing VAPT issues and implementing security controls. 
Network design and implementation. Deep knowledge of application servers and Apache HTTP Server, including reverse proxy, SSL, and load balancing. Proven experience in database performance tuning, indexing, and security (Oracle & PostgreSQL). Strong DevOps and cloud experience, with knowledge of Kubernetes, CI/CD, and automation. Strong knowledge of cloud platforms (AWS, Azure, GCP) and containerization technologies (Docker, Kubernetes). Hands-on experience with microservices architecture, APIs, and distributed systems. Solid understanding of DevOps practices and CI/CD pipelines. Excellent problem-solving and analytical skills, with the ability to navigate complex technical challenges. Experience with databases (SQL and NoSQL) and data modelling. Effective communication and collaboration skills, with the ability to work closely with both technical and non-technical stakeholders. Ability to balance technical depth with an understanding of business requirements and project timelines. Education: Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field Job Type: Full-time Pay: Up to ₹2,300,000.00 per year Benefits: Provident Fund Location Type: In-person Schedule: Day shift Monday to Friday Weekend availability Application Question(s): Are you an immediate joiner? What is your CCTC and ECTC? Experience: Java full-stack: 10 years (Required) Work Location: In person
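The API-security items above (OAuth2, JWT) rest on signed tokens. Below is a hypothetical sketch of the HS256 signing scheme behind JWTs, written in Python for brevity since its standard library has the needed primitives; a production Java system would use a vetted library (e.g., Nimbus JOSE or jjwt) rather than hand-rolling this.

```python
import base64
import hashlib
import hmac
import json

# Hypothetical sketch of JWT HS256 signing/verification. Key names and
# payload fields are invented; do not hand-roll this in production.

def b64url(data: bytes) -> str:
    # JWT uses base64url without padding.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest()
    return f"{header}.{body}.{b64url(sig)}"

def verify_jwt(token: str, secret: bytes) -> bool:
    header, body, sig = token.split(".")
    expected = hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest()
    # Constant-time comparison prevents timing attacks on the signature.
    return hmac.compare_digest(b64url(expected), sig)

token = sign_jwt({"sub": "svc-account", "scope": "read"}, b"demo-secret")
assert verify_jwt(token, b"demo-secret")
assert not verify_jwt(token, b"wrong-secret")
```

The constant-time comparison and the signed header-plus-body structure are the two details VAPT reviews most often flag in hand-written token code.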

Posted 1 week ago

Apply

0 years

2 - 4 Lacs

Vasant Kunj

On-site

Job Summary The Journal Publication Coordinator is responsible for managing end-to-end activities related to academic and research journal publication. This includes overseeing manuscript submissions, coordinating peer reviews, ensuring timely publication schedules, liaising with authors, editors, and reviewers, and maintaining the quality and integrity of the publication process. --- Key Responsibilities: Manage the complete journal publication lifecycle, from manuscript submission to final publishing. Coordinate with editorial board members, reviewers, and authors to ensure smooth workflow. Facilitate and track the peer review process, ensuring quality standards and timelines are met. Maintain communication with authors regarding acceptance, revision, or rejection of manuscripts. Edit, format, and proofread manuscripts for grammar, style, and consistency. Ensure adherence to ethical standards and plagiarism policies (COPE, UGC-CARE, etc.). Work with graphic designers or publishers for layout and design (if required). Maintain journal indexing, citation tracking, and online repository updates (DOI, Scopus, Web of Science, etc.). Support digital publishing through journal websites, repositories, or platforms like OJS. Organize calls for papers, promotional activities, and academic engagement. --- Required Skills & Qualifications: Bachelor’s/Master’s degree in English, Communications, Publishing, or related fields (PhD preferred for academic publishing). Prior experience in academic publishing or editorial roles. Excellent written and verbal communication skills. Familiarity with academic journal databases and publishing tools (OJS, CrossRef, plagiarism checkers). Strong organizational and multitasking skills. Knowledge of referencing styles (APA, MLA, IEEE, etc.) and citation management tools. --- Preferred: Exposure to Scopus/UGC-CARE/ESCI indexed journal processes. 
Experience working with academic or research institutions. Job Type: Full-time Pay: ₹240,000.00 - ₹400,000.00 per year Benefits: Health insurance Schedule: Day shift Supplemental Pay: Performance bonus Work Location: In person

Posted 1 week ago

Apply

1.0 - 3.0 years

4 - 6 Lacs

Delhi

On-site

Job Title: MIS Executive Location: Karol Bagh, New Delhi CTC: ₹40,000 – ₹55,000 per month Role Overview: The MIS Executive will be responsible for collecting, processing, and analyzing manufacturing, inventory, procurement, and sales data. Using SQL, Excel, and BI tools, you'll produce regular and ad-hoc reports and dashboards to guide decisions across production and sales operations. Key Responsibilities: Data Extraction & Management: Collect and clean data from multiple departments; maintain data quality. Reporting & Dashboards: Develop daily/weekly/monthly MIS reports; visualize trends using SQL, Excel (pivot tables/macros), and BI tools. Analysis & Insights: Identify production and sales trends, anomalies, and efficiency opportunities. SQL & Automation: Write and optimize SQL queries and stored procedures, and automate ETL/reporting routines. Collaboration & Training: Work with production, QC, procurement, and sales teams; document systems and train users. Required Skills & Experience: Bachelor’s degree in IT, Engineering, Computer Science, Statistics, or a related field. 1–3 years’ experience in manufacturing or apparel analytics. Strong proficiency in SQL (joins, aggregation, indexing, optimization). Advanced Excel skills (pivot tables, VLOOKUP, macros). Familiarity with BI tools (Power BI, Tableau, Looker Studio). Experience with ETL processes and understanding of manufacturing KPIs (yield, throughput, inventory levels). Soft Skills: Analytical and detail-oriented mindset. Effective communication: can translate data into actionable insights. Organizational skills and ability to multitask under deadlines. Job Types: Full-time, Permanent Pay: ₹40,000.00 - ₹55,000.00 per month Schedule: Day shift Fixed shift Application Question(s): Are you comfortable with the Karol Bagh, New Delhi location? What is your current CTC? How many years of experience do you have in SQL? How many years of experience do you have in Advanced Excel? Do you have experience in automation and Looker Studio? 
Work Location: In person
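The SQL skills this role lists (joins, aggregation) typically come together in exactly this kind of MIS report. A small sketch with invented table and column names, using Python's built-in SQLite so it runs anywhere:

```python
import sqlite3

# Illustrative MIS-style report: join production and sales figures and
# compute a sell-through percentage. Tables and values are invented.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE production (item TEXT, units_made INTEGER);
CREATE TABLE sales (item TEXT, units_sold INTEGER);
INSERT INTO production VALUES ('shirt', 500), ('trouser', 300);
INSERT INTO sales VALUES ('shirt', 420), ('trouser', 260);
""")
report = conn.execute("""
    SELECT p.item,
           p.units_made,
           s.units_sold,
           ROUND(100.0 * s.units_sold / p.units_made, 1) AS sell_through_pct
    FROM production p
    JOIN sales s ON s.item = p.item
    ORDER BY sell_through_pct DESC
""").fetchall()
```

In practice the same query would feed a pivot table or a Power BI dataset rather than a Python list, but the join-then-aggregate shape is identical.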

Posted 1 week ago

Apply

0 years

4 - 7 Lacs

Chennai

On-site

Job Title: Consultant - Application Development Introduction to role: Are you ready to redefine an industry and change lives? We are seeking a skilled SQL Developer and Support Specialist to design, develop, and maintain efficient databases while providing technical support to ensure the smooth operation of database systems. This is your chance to work inclusively in a diverse team, inspiring change and making a real difference. Collaborate with multi-functional teams to optimize database performance, write complex SQL queries, and solve database-related issues. Join us at a crucial stage of our journey in becoming a digital and data-led enterprise! Accountabilities: SQL Development Design, develop, and optimize SQL queries, stored procedures, functions, and scripts for database management and reporting. Create and maintain database schemas, tables, views, indexes, and other database objects. Collaborate with software developers to integrate database solutions with applications. Develop, implement, and maintain ETL (Extract, Transform, Load) pipelines for data integration. Analyze and optimize database performance, including query tuning and indexing strategies. Ensure data integrity, consistency, and security in all database systems. Database Support Monitor and maintain database systems to ensure high levels of performance, availability, and security. Troubleshoot and resolve database-related issues, including performance bottlenecks and errors. Conduct root cause analysis for database-related support incidents and implement preventive measures. Provide technical support to end-users and stakeholders, addressing queries and issues related to database functionality and reports. Collaborate with IT and infrastructure teams to ensure regular database backups and disaster recovery plans are in place. Assist in database migrations, upgrades, and patching activities. 
Documentation and Training Create and maintain technical documentation, including database architecture, data models, and workflows. Train team members and end-users on database standard methodologies and reporting tools. Essential Skills/Experience: Bachelor’s degree in Computer Science, Information Technology, or a related field. Proven experience as an SQL Developer or in a similar role. Proficiency in SQL programming and database management systems (e.g., SQL Server, Oracle, MySQL, PostgreSQL). Solid understanding of database design, normalization, and data modeling concepts. Hands-on experience with ETL tools and data integration. Familiarity with performance tuning, query optimization, and indexing strategies. Experience with database monitoring and troubleshooting tools. Knowledge of scripting languages (e.g., Python, PowerShell) is a plus. Understanding of data security, backup, and recovery practices. Excellent problem-solving skills and attention to detail. Good communication and collaboration skills. Key Traits: Analytical attitude with a proactive approach to problem-solving. Ability to work in a fast-paced, collaborative environment. Strong organizational and time-management skills. Desirable Skills/Experience: Preferred Qualifications Experience with BI tools such as Power BI, Tableau, or SSRS (SQL Server Reporting Services). Familiarity with cloud database solutions (e.g., Azure SQL, Amazon RDS). Knowledge of DevOps practices and CI/CD pipelines for database deployments. Certification in SQL or database technologies (e.g., Microsoft Certified: Azure Data Engineer, Oracle Database Administrator). When we put unexpected teams in the same room, we ignite bold thinking with the power to inspire life-changing medicines. In-person working gives us the platform we need to connect, work at pace and challenge perceptions. That's why we work, on average, a minimum of three days per week from the office. But that doesn't mean we're not flexible. 
We balance the expectation of being in the office while respecting individual flexibility. Join us in our unique and bold world. At AstraZeneca, our work has a direct impact on patients by redefining our ability to develop life-changing medicines. We empower the business to perform at its peak by combining brand new science with leading digital technology platforms. With a passion for data analytics, AI, machine learning, and more, we drive cross-company change to redefine the entire industry. Here you can innovate, take ownership, experiment with pioneering technology, and take on challenges that might never have been addressed before. Be part of a team that has the backing to innovate and change lives! Ready to make an impact? Apply now to join our dynamic team!
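The ETL responsibilities in this role follow a fixed extract-transform-load shape. A minimal sketch with invented fields, reading CSV input and loading a SQLite table (real pipelines would swap in the actual source system and target DBMS):

```python
import csv
import io
import sqlite3

# Minimal ETL sketch; the CSV source, field names, and target table are
# invented for illustration.
raw = io.StringIO("order_id,amount,currency\n1, 100.50 ,usd\n2, 80.00 ,USD\n")

# Extract: read raw rows from the source.
rows = list(csv.DictReader(raw))

# Transform: trim whitespace, cast types, normalize currency codes.
clean = [(int(r["order_id"]),
          float(r["amount"].strip()),
          r["currency"].strip().upper())
         for r in rows]

# Load: insert the cleaned rows into the target table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY, amount REAL, currency TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", clean)
conn.commit()

total = conn.execute("SELECT SUM(amount) FROM orders WHERE currency = 'USD'").fetchone()[0]
```

Keeping the transform step as plain, testable functions is what makes these pipelines maintainable once they grow beyond a sketch.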

Posted 1 week ago

Apply

1.0 - 4.0 years

2 - 3 Lacs

India

On-site

Job description - Maintain and update electronic (EMR/EHR) and physical medical records of patients, ensuring accuracy and completeness. - Organize, classify, and file patient records systematically for easy retrieval by doctors, nurses, and administrative staff. - Ensure compliance with HIPAA (or applicable Indian laws like the Clinical Establishments Act) for data privacy and confidentiality. - Coordinate with doctors, nurses, and billing departments to verify and correct discrepancies in medical documentation. - Prepare and submit reports for audits, insurance claims, and legal requirements. - Manage the digitization of records (scanning, indexing, and archiving) and assist in transitioning from paper-based to electronic systems. - Follow hospital protocols for record retention, disposal, and data backup as per statutory requirements. - Train staff on proper documentation practices and use of Hospital Information Systems (HIS). Skills & Qualifications: - Bachelor’s degree in Health Information Management (HIM), Medical Records Science, or a related field, or any graduate with prior experience (1-4 years). - Prior experience in medical records management in a hospital. - Knowledge of ICD-10/11 coding, medical terminology, and healthcare compliance standards. - Proficiency in HIS, EMR software and MS Office (Excel, Word). Immediate joiners preferred. Job Type: Full-time Pay: ₹20,000.00 - ₹30,000.00 per month Benefits: Food provided Health insurance Leave encashment Provident Fund Schedule: Fixed shift Education: Bachelor's (Preferred) Experience: Medical records: 1 year (Preferred) Total work: 3 years (Preferred) Work Location: In person

Posted 1 week ago

Apply

0 years

0 Lacs

India

On-site

A Document Imaging Specialist converts physical documents into digital format, ensuring accuracy, organization, and accessibility of records. They operate scanning equipment, manage digital archives, and maintain data integrity. Their responsibilities may also include troubleshooting technical issues, maintaining equipment, and potentially overseeing projects or teams. Key Responsibilities: Scanning and Digitization: Operating scanners to convert paper documents into digital images. Document Preparation: Preparing documents for scanning, which may involve organizing, sorting, and removing staples or other fasteners. Quality Control: Ensuring the quality of scanned images and making adjustments as needed. Data Entry and Indexing: Entering relevant data from documents into a system and indexing files for easy retrieval. Database Management: Storing and organizing digital documents in a structured manner within a database or other storage system. Record Management: Following established procedures for document retention and disposal. Troubleshooting: Diagnosing and resolving issues with scanning equipment or software. Compliance: Ensuring all work is performed in compliance with relevant laws and regulations. Customer Service: Providing assistance to users who need to access or work with the digitized documents. Teamwork and Communication: Effectively communicating with team members and other departments regarding project status and workflow. Skills and Qualifications: Technical Skills: Proficiency with scanners, document management software, and basic computer troubleshooting. Organizational Skills: Ability to manage large volumes of documents and maintain accurate records. Attention to Detail: Accuracy and thoroughness in scanning, data entry, and quality control. Communication Skills: Effective verbal and written communication for interacting with team members and users. Problem-Solving Skills: Ability to troubleshoot technical issues and find solutions. 
Potential Career Paths: Junior Document Imaging Specialist: Focuses on basic scanning, data entry, and quality control tasks. Senior Document Imaging Specialist: Oversees document imaging processes, manages projects, and may supervise other specialists. Document Imaging Manager: Leads the document imaging team, develops procedures, and ensures efficient workflows. Tamil candidates only Job Type: Permanent Benefits: Food provided Schedule: Day shift Supplemental Pay: Yearly bonus Work Location: In person
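The data-entry-and-indexing step described above amounts to building a lookup from keywords to scanned files. A toy sketch of that idea (file names and keywords are invented):

```python
from collections import defaultdict

# Toy inverted index: map each keyword to the set of scanned files that
# mention it, so records can be retrieved by term. Inputs are invented.
def build_index(documents: dict) -> dict:
    index = defaultdict(set)
    for filename, text in documents.items():
        for word in text.lower().split():
            index[word].add(filename)
    return index

docs = {
    "invoice_0001.pdf": "invoice vendor acme april",
    "contract_0042.pdf": "contract vendor acme signed",
}
index = build_index(docs)
assert index["vendor"] == {"invoice_0001.pdf", "contract_0042.pdf"}
assert index["signed"] == {"contract_0042.pdf"}
```

Commercial document-management systems do the same thing at scale, usually with OCR feeding the text and a search engine holding the index.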

Posted 1 week ago

Apply

5.0 - 8.0 years

1 - 5 Lacs

Chennai

On-site

Wipro Limited (NYSE: WIT, BSE: 507685, NSE: WIPRO) is a leading technology services and consulting company focused on building innovative solutions that address clients’ most complex digital transformation needs. Leveraging our holistic portfolio of capabilities in consulting, design, engineering, and operations, we help clients realize their boldest ambitions and build future-ready, sustainable businesses. With over 230,000 employees and business partners across 65 countries, we deliver on the promise of helping our customers, colleagues, and communities thrive in an ever-changing world. For additional information, visit us at www.wipro.com. Hands-on experience in data modelling for both OLTP and OLAP systems. In-depth knowledge of Conceptual, Logical, and Physical data modelling. Strong understanding of indexing, partitioning, and data sharding with practical experience. Experience in identifying and addressing factors affecting database performance for near-real-time reporting and application interaction. Proficiency with at least one data modelling tool (preferably DB Schema). Functional knowledge of the mutual fund industry is a plus. Familiarity with GCP databases like AlloyDB, Cloud SQL, and BigQuery. Willingness to work from the Chennai customer site (office presence is mandatory), with five days of on-site work each week. Mandatory Skills: Cloud-PaaS-GCP-Google Cloud Platform. Experience: 5-8 years. Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention. Of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. 
Applications from people with disabilities are explicitly welcome.
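The indexing, partitioning, and sharding topics this role lists share one core idea: deterministic key-to-location routing. A minimal hash-sharding sketch (the shard count and key format are invented for illustration):

```python
import hashlib

# Sketch of hash-based data sharding: hash each key and take it modulo
# the shard count, so every key routes to exactly one shard. The key
# naming scheme and shard count are invented.
def shard_for(key: str, num_shards: int) -> int:
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % num_shards

NUM_SHARDS = 4
keys = [f"customer-{i}" for i in range(1000)]
placement = [shard_for(k, NUM_SHARDS) for k in keys]

# The routing is deterministic (same key, same shard) and the hash
# spreads load across all shards.
assert shard_for("customer-7", NUM_SHARDS) == placement[7]
assert set(placement) == set(range(NUM_SHARDS))
```

Production systems usually layer consistent hashing on top of this so that adding a shard does not remap every key, but the modulo form shows the basic mechanism.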

Posted 1 week ago

Apply

0.0 - 3.0 years

4 - 7 Lacs

Chennai

On-site

Join our “Finance – Procure to Pay Team” at DHL Global Forwarding, Freight (DGFF) GSC – Global Service Centre! Job Title: Associate – Finance (P2P) Job Grade – N Job Location: Chennai Are you dynamic and results-oriented with a passion for logistics? Join our high-performing Global Shared Services Team (GSC) at DHL Global Forwarding, Freight (DGFF); a Great Place to Work certified organization and one of the “Top 20 most admired Shared Services Organizations in 2022” by the independent global Shared Services & Outsourcing Network (SSON). We are the captive Shared Service Provider for DHL Global Forwarding and DHL Freight (DGFF). We are an organization of more than 4,600 colleagues complemented by approximately 500 virtual FTE (i.e., bots applied in process automation). Our colleagues are based across six service delivery centers in Mumbai, Chennai, Chengdu, Manila, Bogota & Budapest. You will interact with people from all over the world and get the chance to be part of a truly international organization. In this role, you will have the opportunity to deliver exceptional service within the Finance – Procure to Pay (P2P) Service line, supporting our DGFF regions and countries globally. The role will involve training to handle various activities including invoice processing, payment processing, query management, scanning and indexing, and managing month-end close activities. Key Responsibilities: To understand the requirements of the station’s / country’s documentation and ensure jobs are executed as per standard operating procedures. Ensure department SLAs and all Key Performance Indicators are being met as per the agreed delivery guidelines. Deliver a high level of service quality through timely and accurate completion of services. Collaborate with colleagues within the business to identify solutions, best practices, and opportunities to improve the service to our business partners. 
- Flag any challenges in the operations to the immediate supervisor and business partner in a timely manner.
- Coordinate with the relevant stakeholders for regular communication and flow of information as defined for the respective service.

Required Skills/Abilities:
- Bachelor's degree; a degree in logistics, industrial engineering, or management is an advantage.
- 0–3 years of experience in the BPO or logistics domain (preferred).
- Good knowledge of MS Office.
- Effective English communication skills, written and verbal.
- Exposure to working with Enterprise Resource Planning (ERP) platforms.
- Detail oriented, with good logical reasoning skills.
- High level of customer centricity.

Apply now and embark on an exciting journey with us! We offer:
- Competitive compensation and performance-based incentives that recognize and reward your hard work.
- Training that gives you the knowledge, skills, and abilities to develop into your role, and a great range of resources to support your future career aspirations and personal development.
- Flexible work arrangements to support work/life balance.
- Generous paid time off: Privilege (earned leave).
- Comprehensive medical insurance coverage, including voluntary parental cover (applicable for IN only).
- A recognition and engagement culture.

By joining one of the world's leading logistics companies, you have a chance to explore a wide range of interesting job challenges and opportunities across our GSC service lines and in our different divisions around the globe.

Posted 1 week ago

Apply

170.0 years

7 - 8 Lacs

Chennai

On-site

Job ID: 30879
Location: Chennai, IN
Area of interest: Operations
Job type: Regular Employee
Work style: Office Working
Opening date: 4 Jun 2025

Job Summary
- Complete indexing/assessing/processing as per the daily allocation.
- Accurately capture and review all requisite fields while performing indexing/UI validation.
- Index invoices under the correct category, namely LCY, FCY, Credit Note, Staff, Vendor, E-proc and Non-Proc.
- Assigned invoice volumes are to be completed each day; if they cannot be completed for an unforeseen reason, discuss with the line manager before the end of your shift.
- Prioritize urgent invoices based on instructions from the Manager/Team Co-ordinator.
- Review incomplete or incorrect invoices before moving them to the rework queue.
- 100% accuracy is expected while performing indexing/UI validation, with zero errors in selecting/reviewing categories (source: Processor, Checker and Rework feedback).
- A minimum of 250 invoices to be indexed per day if indexing is performed in PSAP, or 200 invoices per day in UI validation (source: Process Leads/Managers' feedback).
- Zero missed timelines for urgent invoices (source: Process Leads/Line Manager's feedback).
- 100% accuracy to be maintained while moving invoices to the rework queue (source: Rework feedback).

Key Responsibilities

Regulatory & Business Conduct
- Display exemplary conduct and live by the Group's Values and Code of Conduct.
- Take personal responsibility for embedding the highest standards of ethics, including regulatory and business conduct, across Standard Chartered Bank. This includes understanding and ensuring compliance with, in letter and spirit, all applicable laws, regulations, guidelines and the Group Code of Conduct.
- Lead the team to achieve the outcomes set out in the Bank's Conduct Principles: Fair Outcomes for Clients; Effective Financial Markets; Financial Crime Compliance; The Right Environment.
- Effectively and collaboratively identify, escalate, mitigate and resolve risk, conduct and compliance matters.
- Serve as a Director of the Board; exercise authorities delegated by the Board of Directors and act in accordance with the Articles of Association (or equivalent).

Other Responsibilities
- Embed Here for good and the Group's brand and values in the team.
- Perform other responsibilities assigned under Group, Country, Business or Functional policies and procedures.
- Multiple functions (double hats).

Risk Management
- Manage assigned tasks professionally and efficiently as per the SLA & DOI.
- Ensure total customer satisfaction by providing quality service that is error-free and timely.
- Be responsive to the needs of stakeholders at all times; maintain effective and regular communication.

Skills and Experience
- Communication skills
- Excel skill sets
- Finance
- Stakeholder management

Qualifications
- B.Com, M.Com or MBA

About Standard Chartered
We're an international bank, nimble enough to act, big enough for impact. For more than 170 years, we've worked to make a positive difference for our clients, communities, and each other. We question the status quo, love a challenge and enjoy finding new opportunities to grow and do better than before. If you're looking for a career with purpose and you want to work for a bank making a difference, we want to hear from you. You can count on us to celebrate your unique talents, and we can't wait to see the talents you can bring us. Our purpose, to drive commerce and prosperity through our unique diversity, together with our brand promise, to be here for good, are achieved by how we each live our valued behaviours. When you work with us, you'll see how we value difference and advocate inclusion.
Together we:
- Do the right thing and are assertive, challenge one another, and live with integrity, while putting the client at the heart of what we do
- Never settle, continuously striving to improve and innovate, keeping things simple and learning from doing well, and not so well
- Are better together: we can be ourselves, be inclusive, see more good in others, and work collectively to build for the long term

What we offer
In line with our Fair Pay Charter, we offer a competitive salary and benefits to support your mental, physical, financial and social wellbeing.
- Core bank funding for retirement savings, medical and life insurance, with flexible and voluntary benefits available in some locations.
- Time-off including annual leave, parental/maternity (20 weeks), sabbatical (12 months maximum) and volunteering leave (3 days), along with minimum global standards for annual and public holidays, which combine to 30 days minimum.
- Flexible working options based around home and office locations, with flexible working patterns.
- Proactive wellbeing support through Unmind, a market-leading digital wellbeing platform, development courses for resilience and other human skills, a global Employee Assistance Programme, sick leave, mental health first-aiders and a range of self-help toolkits.
- A continuous learning culture to support your growth, with opportunities to reskill and upskill and access to physical, virtual and digital learning.
- Being part of an inclusive and values-driven organisation, one that embraces and celebrates our unique diversity across our teams, business functions and geographies, where everyone feels respected and can realise their full potential.

www.sc.com/careers

Posted 1 week ago

Apply

5.0 years

4 - 5 Lacs

India

On-site

Job Overview:
We are looking for a skilled and strategic Senior SEO Specialist with 5+ years of proven experience in boosting organic traffic, optimizing website content, and implementing comprehensive on-page and off-page SEO strategies. The ideal candidate is data-driven, detail-oriented, and stays current with evolving search engine algorithms and industry trends. In this role, you will be instrumental in leading SEO initiatives, enhancing search visibility, and supporting broader digital marketing goals.

Key Responsibilities:
- Assess ongoing campaign performance against designated client goals, which may be traffic- and/or lead-based (MoM, QoQ, YoY).
- Provide strategic insights and recommendations to clients aimed at increasing traffic, conversions, and overall campaign performance.
- Collaborate with the creative, content, and analytics teams to execute high-impact campaigns across channels (SEO, PPC, social media, email marketing, etc.).
- Develop and execute advanced SEO strategies for websites across various industries.
- Conduct technical SEO audits and implement recommended fixes (site structure, crawl errors, indexing issues, etc.).
- Perform keyword research and mapping to align with business objectives and user intent.
- Collaborate with content, web development, and marketing teams to ensure SEO best practices are implemented across all digital platforms.
- Monitor performance using tools like Google Analytics, Search Console, Google Tag Manager, SEMrush, Ahrefs, etc.
- Implement and promote the use of automation tools to increase productivity, reduce manual work, and drive campaign efficiency.

Required Skills & Qualifications:
- Minimum 5 years of SEO experience with proven results in increasing organic traffic and rankings.
- Strong understanding of search engine algorithms, ranking factors, and technical SEO.
- Proficiency in SEO tools such as Google Search Console, SEMrush, Ahrefs, and Screaming Frog.
- Strong analytical skills and the ability to derive actionable insights from data.
- Excellent written and verbal communication skills.
- Experience in local SEO, eCommerce SEO, or international SEO is a plus.

Job Types: Full-time, Permanent
Pay: ₹40,000.00 - ₹45,000.00 per month
Benefits: Health insurance, paid sick time, paid time off
Schedule: Monday to Friday, morning shift
Work Location: In person
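Campaign assessment on MoM/QoQ/YoY cadences, as described above, comes down to a percentage-change calculation over period totals. A minimal sketch in Python; the session figures are invented for illustration:

```python
def growth_pct(current: float, previous: float) -> float:
    """Percentage change from the previous period to the current one."""
    if previous == 0:
        raise ValueError("previous period value must be non-zero")
    return (current - previous) / previous * 100

# Hypothetical monthly organic sessions (not real client data)
sessions = {"2024-01": 12000, "2024-02": 13800, "2025-01": 15000}

mom = growth_pct(sessions["2024-02"], sessions["2024-01"])  # month over month
yoy = growth_pct(sessions["2025-01"], sessions["2024-01"])  # year over year
print(f"MoM: {mom:.1f}%  YoY: {yoy:.1f}%")  # MoM: 15.0%  YoY: 25.0%
```

The same comparison applies to leads or conversions; only the input series changes.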

Posted 1 week ago

Apply

2.0 - 5.0 years

3 - 5 Lacs

Ahmedabad

On-site

At our core, we're a software development company passionate about building innovative web and mobile solutions. We move fast, think creatively, and solve real-world problems with technology. If you're driven by challenge, growth, and the opportunity to make an impact, we'd love to have you on our team.

Job description
We are hiring an ASP.NET MVC/Core professional for our Ahmedabad location (onsite/hybrid).

Experience Level: 2 to 5 years of experience in web application development using ASP.NET technologies.

Key Responsibilities:
- Develop, maintain, and enhance web applications using ASP.NET MVC and ASP.NET Core.
- Write clean, scalable, and efficient code following best practices and coding standards.
- Design and implement RESTful APIs with ASP.NET Web API.
- Collaborate closely with front-end developers and UI/UX designers to deliver a seamless user experience.
- Work with MSSQL databases, writing complex T-SQL queries and stored procedures and ensuring data integrity.
- Participate in code reviews, contribute to architectural discussions, and help improve overall application performance.

Technical Skills & Expertise:
- Proficiency in ASP.NET MVC with at least 2 years of hands-on project experience.
- Solid working knowledge of ASP.NET Core to build modern web applications.
- Strong programming skills in C#, JavaScript, and HTML.
- Comfortable working with .NET Framework versions 4.5 and above.
- Experience building and consuming ASP.NET Web APIs.
- Hands-on experience with MSSQL, including T-SQL, complex queries, indexing, and optimization.

Soft Skills & Communication:
- Ability to clearly communicate ideas and technical concepts in both verbal and written form.
- Collaborative mindset with a focus on team success and a willingness to share knowledge.

Preferred (Bonus) Skills:
- Experience with Angular (or similar frameworks like React or Vue) to build dynamic, responsive front-end applications.
- Experience with unit testing frameworks (like Jasmine and Karma for Angular) is a plus.
- Understanding of DevOps practices and CI/CD integration within testing pipelines.
- Familiarity with TypeScript for structured, scalable JavaScript development.

Job details
Education: Bachelor's in Computer Science, IT, or a related field
Experience: 2–5 years in .NET development
Location: Ahmedabad, Gujarat
Work Mode: Full-time, On-site
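The indexing-and-optimization skill listed above has a compact illustration: adding an index changes a query plan from a full table scan to an indexed search. A sketch using SQLite so it is self-contained; the posting's stack is MSSQL, where `CREATE INDEX` and execution plans work analogously, and the table and data here are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                 [(i % 100, i * 1.5) for i in range(1000)])

def plan(sql):
    # The detail column of EXPLAIN QUERY PLAN says SCAN (full scan) or SEARCH (index seek)
    return conn.execute("EXPLAIN QUERY PLAN " + sql).fetchone()[3]

query = "SELECT total FROM orders WHERE customer_id = 42"
before = plan(query)   # full table scan: no index on customer_id yet
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
after = plan(query)    # now an indexed search using idx_orders_customer
print(before)
print(after)
```

On selective predicates this is the difference between reading every row and reading only the matching ones.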

Posted 1 week ago

Apply

8.0 years

4 - 7 Lacs

Noida

On-site

Level AI was founded in 2019 and is a Series C startup headquartered in Mountain View, California. Level AI revolutionises customer engagement by transforming contact centres into strategic assets. Our AI-native platform leverages advanced technologies such as Large Language Models to extract deep insights from customer interactions. By providing actionable intelligence, Level AI empowers organisations to enhance customer experience and drive growth. Consistently updated with the latest AI innovations, Level AI stands as the most adaptive and forward-thinking solution in the industry.

Position Overview:
We seek an experienced Staff Software Engineer to lead the design and development of our data warehouse and analytics platform, in addition to helping raise the engineering bar for the entire technology stack at Level AI, including applications, platform, and infrastructure. They will actively collaborate with team members and the wider Level AI engineering community to develop highly scalable and performant systems. They will be a technical thought leader who will help drive solving complex problems of today and the future by designing and building simple and elegant technical solutions. They will coach and mentor junior engineers and drive engineering best practices. They will actively collaborate with product managers and other stakeholders both inside and outside the team.

What you'll get to do at Level AI (and more as we grow together):
- Design, develop, and evolve data pipelines that ingest and process high-volume data from multiple external and internal sources.
- Build scalable, fault-tolerant architectures for both batch and real-time data workflows using tools like GCP Pub/Sub, Kafka and Celery.
- Define and maintain robust data models with a focus on domain-oriented design, supporting both operational and analytical workloads.
- Architect and implement data lake/warehouse solutions using Postgres and Snowflake.
- Lead the design and deployment of workflow orchestration using Apache Airflow for end-to-end pipeline automation.
- Ensure platform reliability with strong monitoring, alerting, and observability for all data services and pipelines.
- Collaborate closely with other internal product and engineering teams to align data platform capabilities with product and business needs.
- Own and enforce data quality, schema evolution, data contract practices, and governance standards.
- Provide technical leadership, mentor junior engineers, and contribute to cross-functional architectural decisions.

We'd love to explore more about you if you have:
- 8+ years of experience building large-scale data systems, preferably in high-ingestion, multi-source environments.
- Strong system design, debugging, and performance tuning skills.
- Strong programming skills in Python and Java.
- Deep understanding of SQL (Postgres, MySQL) and data modeling (star/snowflake schema, normalization/denormalization).
- Hands-on experience with streaming platforms like Kafka and GCP Pub/Sub.
- Expertise with Airflow or similar orchestration frameworks.
- Solid experience with Snowflake, Postgres, and distributed storage design.
- Familiarity with Celery for asynchronous task processing.
- Comfortable working with ElasticSearch for data indexing and querying.
- Exposure to Redash, Metabase, or similar BI/analytics tools.
- Proven experience deploying solutions on cloud platforms like GCP or AWS.

Compensation: We offer market-leading compensation, based on the skills and aptitude of the candidate.

Preferred Attributes:
- Experience with data governance and lineage tools.
- Demonstrated ability to handle scale, reliability, and incident response in data systems.
- Excellent communication and stakeholder management skills.
- Passion for mentoring and growing engineering talent.
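Fault tolerance in ingestion pipelines like those described above often starts with retrying transient source failures with backoff. A minimal, framework-free sketch in Python; `flaky_fetch` and the record shape are hypothetical stand-ins for a real source connector, not Level AI's implementation:

```python
import random
import time

def ingest_with_retry(fetch, max_attempts=4, base_delay=0.1):
    """Call `fetch` (a hypothetical source reader), retrying transient
    failures with exponential backoff plus jitter."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fetch()
        except ConnectionError:
            if attempt == max_attempts:
                raise
            # back off: base, 2x base, 4x base ... plus up to 50% jitter
            time.sleep(base_delay * 2 ** (attempt - 1) * (1 + random.random() / 2))

# Simulated flaky source: fails twice, then succeeds
calls = {"n": 0}
def flaky_fetch():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient network error")
    return [{"event": "page_view", "ts": 1718000000}]

records = ingest_with_retry(flaky_fetch)
print(records, "after", calls["n"], "attempts")
```

Production pipelines layer idempotent writes and dead-letter queues on top of this so retried batches never double-count.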
To learn more, visit: https://thelevel.ai/
Funding: https://www.crunchbase.com/organization/level-ai
LinkedIn: https://www.linkedin.com/company/level-ai/
Our AI platform: https://www.youtube.com/watch?v=g06q2V_kb-s

Posted 1 week ago

Apply

0 years

2 - 4 Lacs

Bhopal

On-site

Job Title: SEO – Technical SEO Expert

Job Overview:
We are looking for a Technical SEO Expert with strong hands-on experience in optimizing websites for search engine visibility and performance. The ideal candidate must be well-versed in current SEO tools, white-hat practices, and Google algorithm updates.

Key Responsibilities:
- Perform comprehensive technical SEO audits and implement necessary changes
- Ensure website indexing and page ranking
- Manage and build quality backlinks (white-hat techniques only)
- Optimize existing written content by integrating strategic keywords
- Track keyword rankings and site performance over time
- Keep up to date with SEO trends and algorithm updates via social media and industry sources

Required Skills and Tools:
- Strong knowledge of technical SEO
- Keyword research & integration
- Backlink building (white-hat only)
- Google Search Console & Google Analytics
- Email marketing basics
- Knowledge of how SEO works in international markets is a plus

Location: Bhopal / Indore (on-site preferred)
Job Type: Full-time
Pay: ₹20,000.00 - ₹35,000.00 per month
Schedule: Day shift, morning shift
Work Location: In person
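One routine piece of the indexing work described above is verifying which URLs a site's XML sitemap actually submits for crawling. A minimal sketch in Python using only the standard library; the sitemap is inlined with placeholder URLs rather than fetched from a live site:

```python
import xml.etree.ElementTree as ET

# A tiny inline sitemap (illustrative; a real audit would fetch /sitemap.xml)
SITEMAP = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/</loc><lastmod>2025-05-01</lastmod></url>
  <url><loc>https://example.com/blog/post-1</loc><lastmod>2025-06-01</lastmod></url>
</urlset>"""

# Sitemap elements live in the sitemaps.org namespace, so queries must qualify it
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
root = ET.fromstring(SITEMAP)
urls = [u.findtext("sm:loc", namespaces=NS) for u in root.findall("sm:url", NS)]
print(len(urls), "URLs submitted for indexing:", urls)
```

Comparing this URL list against Google Search Console's indexed-pages report surfaces pages that were submitted but never indexed.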

Posted 1 week ago

Apply

0 years

0 Lacs

Andhra Pradesh

On-site

122618

- Proven experience as an AEM Developer or in a similar role.
- Strong understanding of AEM architecture, components, templates, indexing, content fragments, headless delivery, and workflows.
- Proficiency in Java, HTML, CSS, JavaScript, and other relevant technologies.
- Experience with RESTful services and API integrations.
- Knowledge of version control systems, such as Git.
- Excellent problem-solving and analytical skills.
- Experience integrating with third-party APIs.
- Strong communication and collaboration skills.

About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence.

Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.

Posted 1 week ago

Apply

5.0 years

0 Lacs

India

On-site

Job Title: Data Engineer
Experience: 5+ Years
Location: Pan India
Mode: Hybrid
Skill combination: Python AND AWS AND Databricks AND PySpark AND Elastic Search

We are looking for a Data Engineer to join our team to build, maintain, and enhance scalable, high-performance data pipelines and cloud-native solutions. The ideal candidate will have deep experience in Databricks, Python, PySpark, Elastic Search, and SQL, and a strong understanding of cloud-based ETL services, data modeling, and data security best practices.

Key Responsibilities:
- Design, implement, and maintain scalable data pipelines using Databricks, PySpark, and SQL.
- Develop and optimize ETL processes leveraging services like AWS Glue, GCP DataProc/DataFlow, Azure ADF/ADLF, and Apache Spark.
- Build, manage, and monitor Airflow DAGs to orchestrate data workflows.
- Integrate and manage Elastic Search for data indexing, querying, and analytics.
- Write advanced SQL queries using window functions and analytics techniques.
- Design data schemas and models that align with various business domains and use cases.
- Optimize data warehousing performance and storage using best practices.
- Ensure data security, governance, and compliance across all environments.
- Apply data engineering design patterns and frameworks to build robust solutions.
- Collaborate with Product, Data, and Engineering teams; support executive data needs.
- Participate in Agile ceremonies and follow DevOps/DataOps/DevSecOps practices.
- Respond to critical business issues as part of an on-call rotation.

Must-Have Skills:
- Databricks (3+ years): Development and orchestration of data workflows.
- Python & PySpark (3+ years): Hands-on experience in distributed data processing.
- Elastic Search (3+ years): Indexing and querying large-scale datasets.
- SQL (3+ years): Proficiency in analytical SQL, including window functions.
- ETL services: AWS Glue, GCP DataProc/DataFlow, Azure ADF/ADLF.
- Airflow: Designing and maintaining data workflows.
- Data warehousing: Expertise in performance tuning and optimization.
- Data modeling: Understanding of data schemas and business-oriented data models.
- Data security: Familiarity with encryption, access control, and compliance standards.
- Cloud platforms: AWS (must); GCP and Azure (preferred).

Skills: Python, Databricks, PySpark, Elastic Search
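The analytical-SQL requirement above centers on window functions, which compute aggregates over partitions without collapsing rows. A sketch using SQLite's in-memory database so it runs anywhere; the same `SUM(...) OVER (...)` syntax works in Databricks SQL and Postgres, and the sales figures are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, month TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", [
    ("north", "2025-01", 100), ("north", "2025-02", 150),
    ("south", "2025-01", 80),  ("south", "2025-02", 60),
])

# Running total per region: the window is partitioned by region and
# ordered by month, so each row sees the cumulative sum so far
rows = conn.execute("""
    SELECT region, month, amount,
           SUM(amount) OVER (PARTITION BY region ORDER BY month) AS running_total
    FROM sales
    ORDER BY region, month
""").fetchall()
for row in rows:
    print(row)
```

Unlike a `GROUP BY`, every input row survives; each just gains its partition-scoped running total.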

Posted 1 week ago

Apply

8.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Title: AWS DevOps Engineer – Manager, Business Solutions
Location: Gurgaon, India
Experience Required: 8-12 years
Industry: IT

We are looking for a seasoned AWS DevOps Engineer with robust experience in AWS middleware services and MongoDB cloud infrastructure management. The role involves designing, deploying, and maintaining secure, scalable, and high-availability infrastructure, along with developing efficient CI/CD pipelines and automating operational processes.

Key Deliverables (essential functions & responsibilities of the job):
- Design, deploy, and manage AWS infrastructure, with a focus on middleware services such as API Gateway, Lambda, SQS, SNS, ECS, and EKS.
- Administer and optimize MongoDB Atlas or equivalent cloud-based MongoDB solutions for high availability, security, and performance.
- Develop, manage, and enhance CI/CD pipelines using tools like AWS CodePipeline, Jenkins, GitHub Actions, GitLab CI/CD, or Bitbucket Pipelines.
- Automate infrastructure provisioning using Terraform, AWS CloudFormation, or AWS CDK.
- Implement monitoring and logging solutions using CloudWatch, Prometheus, Grafana, or the ELK Stack.
- Enforce cloud security best practices: IAM, VPC setups, encryption, certificate management, and compliance controls.
- Work closely with development teams to improve application reliability, scalability, and performance.
- Manage containerized environments using Docker, Kubernetes (EKS), or AWS ECS.
- Perform MongoDB administration tasks such as backups, performance tuning, indexing, and sharding.
- Participate in on-call rotations to ensure 24/7 infrastructure availability and quick incident resolution.

Knowledge, Skills and Abilities:
- 7+ years of hands-on AWS DevOps experience, especially with middleware services.
- Strong expertise in MongoDB Atlas or other cloud MongoDB services.
- Proficiency in Infrastructure as Code (IaC) tools like Terraform, CloudFormation, or AWS CDK.
- Solid experience with CI/CD tools: Jenkins, CodePipeline, GitHub Actions, GitLab, Bitbucket, etc.
- Excellent scripting skills in Python, Bash, or PowerShell.
- Experience in containerization and orchestration: Docker, EKS, ECS.
- Familiarity with monitoring tools like CloudWatch, ELK, Prometheus, Grafana.
- Strong understanding of AWS networking and security: IAM, VPC, KMS, Security Groups.
- Ability to solve complex problems and thrive in a fast-paced environment.

Preferred Qualifications:
- AWS Certified DevOps Engineer – Professional or AWS Solutions Architect – Associate/Professional.
- MongoDB Certified DBA or Developer.
- Experience with serverless services like AWS Lambda and Step Functions.
- Exposure to multi-cloud or hybrid cloud environments.

Mail an updated resume with current salary to:
Email: jobs@glansolutions.com
Satish; 88O2749743
Website: www.glansolutions.com
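Infrastructure-as-code provisioning, as listed above, can be as simple as emitting a CloudFormation template programmatically. A minimal sketch in Python using only the standard library; the bucket name and logical ID are placeholders, not from the posting:

```python
import json

# Minimal CloudFormation-style template built as plain data
# (a real deployment would pass this to `aws cloudformation deploy` or CDK)
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Example: one S3 bucket with versioning enabled",
    "Resources": {
        "ArtifactBucket": {  # hypothetical logical ID
            "Type": "AWS::S3::Bucket",
            "Properties": {
                "BucketName": "example-artifact-bucket",
                "VersioningConfiguration": {"Status": "Enabled"},
            },
        }
    },
}

rendered = json.dumps(template, indent=2)
print(rendered)
```

Generating templates as data like this keeps them diffable and reviewable in version control, which is the core of the IaC practice the role calls for.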

Posted 1 week ago

Apply