4.0 years
5 - 8 Lacs
Bengaluru
On-site
The Purpose of the Role
Chubb is seeking a highly skilled and experienced Deep Learning Engineer with Generative AI experience to develop and scale our Generative AI capabilities. The ideal candidate will be responsible for designing, fine-tuning, and training large language models, and for developing Generative AI systems that create and improve the conversational abilities and decision-making skills of our machines.
Location: Bangalore, India
Responsibilities
- Develop and improve Generative AI systems to enable high-quality decision making, refine answers to queries, and enhance automated communication capabilities.
- Own the entire process of data collection, training, and deploying machine learning models.
- Continuously research and implement cutting-edge techniques in deep learning, NLP, and Generative AI to build state-of-the-art models.
- Work closely with Data Scientists and other Machine Learning Engineers to design and implement end-to-end solutions.
- Optimize and streamline deep learning training pipelines.
- Develop performance metrics to track the efficiency and accuracy of deep learning models.
Required Knowledge, Skills and Qualifications
- Minimum of 4 years of industry experience in developing deep learning models with a focus on NLP and Generative AI.
- Expertise in deep learning frameworks such as TensorFlow, PyTorch, and Keras.
- Experience with cloud-based services such as Azure for training and deploying deep learning models.
- Experience with Hugging Face's Transformers library.
- Expertise in developing and scaling Generative AI systems.
- Experience in processing large datasets, including pre-processing, cleaning, and normalization.
- Proficiency in programming languages such as Python and C++.
- Experience with natural language processing (NLP) techniques and libraries.
- Excellent analytical and problem-solving skills.
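For flavor, here is a minimal, hedged sketch of the fine-tuning workflow such a role involves, using the Hugging Face Transformers library the listing names; the checkpoint, sequence length, and training arguments are illustrative placeholders (not Chubb's actual stack), and a PyTorch backend is assumed.

```python
# A minimal sketch of preparing a causal language model for fine-tuning
# with Hugging Face Transformers. Names and hyperparameters are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments

model_name = "gpt2"  # placeholder; any causal LM checkpoint works here
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

def tokenize(batch):
    # Pad/truncate so examples in a batch share a fixed length.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

args = TrainingArguments(
    output_dir="./checkpoints",     # where checkpoints land
    per_device_train_batch_size=8,
    num_train_epochs=1,
)
# Trainer wires the model, data, and optimization loop together;
# `tokenized_ds` would be a tokenized datasets.Dataset in practice.
# trainer = Trainer(model=model, args=args, train_dataset=tokenized_ds)
# trainer.train()
```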
Posted 1 month ago
2.0 years
0 - 0 Lacs
Noida
On-site
We are looking for a highly skilled Sr. Developer with 2+ years of experience in web-based project development. The successful candidate will be responsible for designing, developing, and implementing web applications using PHP and various open-source frameworks.
Key Responsibilities:
- Collaborate with cross-functional teams to identify and prioritize project requirements
- Develop and maintain high-quality, efficient, and well-documented code
- Troubleshoot and resolve technical issues
- Implement social network integration, payment gateway integration, and Web 2.0 features in web-based projects
- Work with RDBMS design, normalization, data modelling, transactions, and distributed databases
- Develop and maintain database PL/SQL, stored procedures, and triggers
Requirements:
- 2+ years of experience in web-based project development using PHP
- Experience with various open-source frameworks such as Laravel, WordPress, Drupal, Joomla, osCommerce, OpenCart, TomatoCart, VirtueMart, Magento, Yii 2, CakePHP 2.6, Zend 1.10, and Kohana
- Strong knowledge of object-oriented PHP, cURL, Ajax, Prototype.js, jQuery, web services, design patterns, MVC architecture, and object-oriented methodologies
- Experience with RDBMS design, normalization, data modelling, transactions, and distributed databases
- Well-versed with MySQL (can work with other SQL flavors too)
- Experience with social network integration, payment gateway integration, and Web 2.0 in web-based projects
Job Type: Full-time
Pay: ₹25,000.00 - ₹40,000.00 per month
Benefits: Health insurance, Provident Fund
Location Type: In-person
Schedule: Day shift, Morning shift
Education: Bachelor's (Required)
Experience: PHP: 1 year (Required); Laravel: 1 year (Required); Total: 2 years (Required)
Location: Noida, Uttar Pradesh (Required)
Work Location: In person
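Since the role calls for database triggers and stored routines, here is a tiny, illustrative trigger demo. It uses Python's built-in sqlite3 as a stand-in for MySQL, and the table and column names are hypothetical.

```python
# Illustrative only: a trigger that audits price changes, shown with
# Python's built-in sqlite3 in place of MySQL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT, price REAL);
CREATE TABLE price_audit (product_id INTEGER, old_price REAL, new_price REAL);

CREATE TRIGGER trg_price_change AFTER UPDATE OF price ON products
BEGIN
    INSERT INTO price_audit VALUES (OLD.id, OLD.price, NEW.price);
END;
""")
conn.execute("INSERT INTO products VALUES (1, 'widget', 9.99)")
conn.execute("UPDATE products SET price = 12.49 WHERE id = 1")
print(conn.execute("SELECT * FROM price_audit").fetchall())  # [(1, 9.99, 12.49)]
```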
Posted 1 month ago
7.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
PostgreSQL DBA
About the Role
We are looking for an experienced PostgreSQL administrator with expertise in database schema design, query optimization, performance optimization, and cloud service management of AWS Aurora PostgreSQL. The role is essential for supporting our product development teams in building efficient and scalable data-driven applications for the container shipping industry.
Key Responsibilities:
Database Design & Management:
- Collaborate with the product development team to design, implement, and maintain scalable database schemas that meet business and application requirements.
- Develop and maintain data models, ensuring consistency and optimal performance.
- Design tables, indexes, and constraints for high data integrity and performance.
Performance Tuning & Optimization:
- Analyse slow-running or poorly performing queries and optimize performance through proper indexing, query restructuring, or caching mechanisms.
- Conduct performance tuning, including tuning PostgreSQL parameters for optimal database performance.
- Improve database performance, scale database operations, and address bottlenecks.
Cloud Database Management (AWS Aurora PostgreSQL):
- Manage and administer AWS Aurora PostgreSQL clusters, ensuring high availability, backup, recovery, and disaster recovery planning.
- Optimize the use of cloud-based resources in AWS Aurora for cost-effective and efficient operation.
- Monitor and maintain database systems in cloud environments, ensuring data security and availability.
Security & Compliance:
- Ensure that the database architecture complies with organizational security policies and best practices.
- Implement database encryption, user management, and access controls.
- Monitor database security and address any vulnerabilities or compliance concerns.
Automation & Maintenance:
- Automate routine database tasks such as backups, failovers, and maintenance windows.
- Develop and maintain database monitoring and alerting mechanisms to ensure system stability.
Documentation & Training:
- Create and maintain detailed documentation for database designs, performance optimizations, and cloud database configurations.
- Provide technical guidance and training to developers on best practices for schema design, query development, and database management.
What We Are Looking For
Experience:
- 7 to 11 years of technology experience in a multi-national company.
- 5+ years of experience in PostgreSQL database administration, with a strong focus on query optimization, schema design, and performance tuning.
- Proven experience managing PostgreSQL on AWS Aurora.
Technical Skills:
- Strong expertise in PostgreSQL database design, including normalization, indexing, partitioning, and data modeling.
- In-depth knowledge of SQL, PL/pgSQL, and advanced PostgreSQL features such as triggers, stored procedures, and replication.
- Familiarity with AWS services (Aurora, RDS, EC2, S3, etc.) and cloud database management practices.
- Experience with query-tuning tools such as pg_stat_statements and EXPLAIN for query analysis.
- Experience with database backup, recovery, replication, and failover strategies.
Performance Tuning:
- Expertise in tuning PostgreSQL databases for high performance, including memory usage optimization, connection pooling, and query optimization.
- Proficiency in analyzing and resolving database performance issues, especially in high-traffic, high-volume production environments.
Soft Skills:
- Excellent problem-solving skills and the ability to work closely with developers, DevOps, and architects.
- Strong communication skills to convey technical solutions to both technical and non-technical stakeholders.
Education: Engineering degree in Computer Science, Information Technology, or a related field.
Nice to Have:
- Experience with containerized databases using Docker or Kubernetes.
- Familiarity with event-driven architectures using Kafka.
- Experience with CI/CD pipelines and Flyway scripts.
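As a sketch of the query-analysis loop this role describes: run EXPLAIN (ANALYZE) on a slow statement and read the plan. This assumes psycopg2 and a reachable PostgreSQL (or Aurora PostgreSQL) instance; the DSN, table, and index names are placeholders.

```python
# Sketch: analyze a slow query's execution plan, then consider an index.
import psycopg2

conn = psycopg2.connect("dbname=app user=dba host=localhost")  # placeholder DSN
cur = conn.cursor()

slow_query = "SELECT * FROM orders WHERE customer_id = %s AND status = 'OPEN'"
cur.execute("EXPLAIN (ANALYZE, BUFFERS) " + slow_query, (42,))
for (line,) in cur.fetchall():
    print(line)  # a Seq Scan on a large table is a candidate for an index

# A composite index often turns the seq scan into an index scan:
# cur.execute("CREATE INDEX CONCURRENTLY idx_orders_cust_status ON orders (customer_id, status)")
conn.close()
```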
Posted 1 month ago
0 years
0 Lacs
India
Remote
About PurpleMerit
PurpleMerit is an AI-focused technology startup dedicated to building innovative, scalable, and intelligent software solutions. We leverage the latest advancements in artificial intelligence and cloud technology to deliver impactful products for a global audience. As a fully remote team, we value skill, curiosity, and a passion for solving real-world problems over formal degrees or prior experience.
Job Description
We are seeking motivated and talented Full Stack Developers to join our dynamic team. This position is structured as a mandatory internship-to-full-time pathway, designed to nurture and evaluate your technical and collaborative skills in a real-world environment. You will work on a variety of projects, including web applications, Chrome extensions, PWA apps, and full-stack software solutions, with a strong emphasis on system design and AI integration.
Roles & Responsibilities
- Design, develop, and maintain robust, scalable web applications and software solutions.
- Build end-to-end applications, including websites, Chrome extensions, and PWAs.
- Architect systems with a strong focus on system design, scalability, and maintainability.
- Develop RESTful and GraphQL APIs for seamless frontend-backend integration.
- Implement secure authentication and authorization (OAuth, JWT, session management, role-based access).
- Integrate AI tools and APIs; demonstrate a basic understanding of AI agents and prompt engineering.
- Manage cloud infrastructure (AWS or Azure) and CI/CD pipelines for efficient deployment.
- Perform basic server management (Linux/Unix, Nginx, Apache).
- Design and optimize databases (schema design, normalization, indexing, query optimization).
- Ensure code quality through testing and adherence to best practices.
- Collaborate effectively in a remote, agile startup environment.
Required Skills
- Strong understanding of system design and software architecture.
- Experience with CI/CD pipelines and cloud platforms (AWS or Azure).
- Proficiency with version control systems (Git).
- Knowledge of API development (RESTful and GraphQL).
- Familiarity with authentication and authorization protocols (OAuth, JWT, sessions, RBAC).
- Basic server management skills (Linux/Unix, Nginx, Apache).
- Database design and optimization skills.
- Experience integrating AI tools or APIs; basic understanding of AI agents.
- Basic knowledge of prompt engineering.
- Commitment to testing and quality assurance.
- Ability to build complete, production-ready applications.
- No formal degree or prior experience required; a strong willingness to learn is essential.
Salary Structure
1. Pre-Qualification Internship (Mandatory): Duration: 2 months; Stipend: ₹5,000/month; Purpose: Evaluate foundational skills, work ethic, and cultural fit.
2. Internship (Mandatory): Duration: 3 months; Stipend: ₹7,000–₹15,000/month (based on pre-qualification performance); Purpose: Deepen technical involvement and demonstrate capability.
3. Full-Time Employment: Salary: ₹3 LPA – ₹9 LPA (performance-based, determined during internships). Note: full-time offers are extended only upon successful completion of both internship stages.
Why Our Salary Structure is Unique
At PurpleMerit, we recognize the challenges of remote hiring in the AI era, where traditional interviews can be unreliable due to the widespread use of AI tools. To ensure genuine skills, cultural fit, and work ethic, we have implemented a structured pathway to full-time employment. This process allows both you and PurpleMerit to evaluate fit through real-world collaboration before making a long-term commitment. We believe in "try and then decide" rather than interviews alone, because we want to build a team based on real performance and trust.
Why Join PurpleMerit?
- 100% remote work with a flexible schedule.
- Direct involvement in building AI-driven products from the ground up.
- Mentorship from experienced engineers and founders.
- Transparent growth path from internship to full-time employment.
- Merit-based culture: your skills and contributions are what matter.
- Opportunity to work on diverse projects and cutting-edge technologies.
Your Impact
At PurpleMerit, you will:
- Directly influence the architecture and development of innovative AI products.
- Solve complex challenges and see your solutions implemented in real products.
- Help shape our engineering culture and set high standards for quality.
- Accelerate your growth as a developer in a supportive, fast-paced environment.
If you are passionate about building impactful software and eager to work in an AI-driven startup, we encourage you to apply. Join us at PurpleMerit and be part of our journey to innovate and excel. Apply now to start your career with PurpleMerit!
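A minimal sketch of the JWT-based authentication the posting lists, using the PyJWT package; the secret, claims, and one-hour expiry policy are illustrative assumptions.

```python
# Issue and verify a JWT carrying a role claim for RBAC-style checks.
import datetime
import jwt  # pip install PyJWT

SECRET = "change-me"  # placeholder; load from env/secret manager in practice

def issue_token(user_id: str, role: str) -> str:
    payload = {
        "sub": user_id,
        "role": role,  # supports role-based access checks downstream
        "exp": datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(hours=1),
    }
    return jwt.encode(payload, SECRET, algorithm="HS256")

def verify_token(token: str) -> dict:
    # Raises jwt.ExpiredSignatureError / jwt.InvalidTokenError on bad tokens.
    return jwt.decode(token, SECRET, algorithms=["HS256"])

token = issue_token("user-1", "admin")
print(verify_token(token)["role"])  # -> admin
```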
Posted 1 month ago
3.0 years
0 Lacs
Hyderabad, Telangana
On-site
- 2+ years of data scientist experience
- 3+ years of experience with data querying languages (e.g. SQL), scripting languages (e.g. Python), or statistical/mathematical software (e.g. R, SAS, Matlab)
- 3+ years of experience with machine learning/statistical modeling data analysis tools and techniques, and the parameters that affect their performance
- Experience applying theoretical models in an applied environment
Job Description
Are you interested in applying your strong quantitative analysis and big data skills to world-changing problems? Are you interested in driving the development of methods, models and systems for capacity planning, transportation and the fulfillment network? If so, then this is the job for you. Our team is responsible for creating core analytics tech capabilities, platform development, and data engineering. We develop scalable analytics applications and research modeling to optimize operational processes. We standardize and optimize data sources and visualization efforts across geographies, and build up and maintain the online BI services and data mart. You will work with professional software development managers, data engineers, scientists, business intelligence engineers and product managers using rigorous quantitative approaches to ensure high quality data tech products for our customers around the world, including India, Australia, Brazil, Mexico, Singapore and the Middle East.
Amazon is growing rapidly and because we are driven by faster delivery to customers, a more efficient supply chain network, and lower cost of operations, our main focus is the development of strategic models and automation tools fed by our massive amounts of available data. You will be responsible for building these models/tools that improve the economics of Amazon's worldwide fulfillment networks in emerging countries as Amazon increases the speed and decreases the cost to deliver products to customers. You will identify and evaluate opportunities to reduce variable costs by improving fulfillment center processes, transportation operations and scheduling, and the execution of operational plans. You will also improve the efficiency of capital investment by helping the fulfillment centers to improve storage utilization and the effective use of automation. Finally, you will help create the metrics to quantify improvements to the fulfillment costs (e.g., transportation and labor costs) resulting from the application of these optimization models and tools.
Major responsibilities include:
· Translating business questions and concerns into specific analytical questions that can be answered with available data using BI tools; produce the required data when it is not available.
· Apply statistical and machine learning methods to specific business problems and data.
· Create global standard metrics across regions and perform benchmark analysis.
· Ensure data quality throughout all stages of acquisition and processing, including such areas as data sourcing/collection, ground truth generation, normalization, transformation, cross-lingual alignment/mapping, etc.
· Communicate proposals and results in a clear manner, backed by data and coupled with actionable conclusions, to drive business decisions.
· Collaborate with colleagues from multidisciplinary science, engineering and business backgrounds.
· Develop efficient data querying and modeling infrastructure.
· Manage your own process. Prioritize and execute high-impact projects, triage external requests, and ensure projects are delivered on time.
· Utilize code (Python, R, Scala, etc.) for analyzing data and building statistical models.
Preferred qualifications:
- Experience in Python, Perl, or another scripting language
- Experience in an ML or data scientist role with a large technology company
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
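As a small, hedged illustration of the statistical-modeling work described: a scikit-learn pipeline that normalizes features before fitting a classifier. The data here is synthetic; real inputs would come from the data mart and BI layers the team maintains.

```python
# A pipeline that standardizes features, then fits a logistic regression.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                  # e.g. shipment features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # e.g. a late-delivery flag

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)
print(f"holdout accuracy: {model.score(X_test, y_test):.2f}")
```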
Posted 1 month ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
This job is with Kyndryl, an inclusive employer and a member of myGwork – the largest global platform for the LGBTQ+ business community. Please do not contact the recruiter directly.
Who We Are
At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward - always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.
The Role
Within our Database Administration team at Kyndryl, you'll be a master of managing and administering the backbone of our technological infrastructure. You'll be the architect of the system, shaping the base definition, structure, and documentation to ensure the long-term success of our business operations. Your expertise will be crucial in configuring, installing and maintaining database management systems, ensuring that our systems are always running at peak performance. You'll also be responsible for managing user access, implementing the highest standards of security to protect our valuable data from unauthorized access. In addition, you'll be a disaster recovery guru, developing strong backup and recovery plans to ensure that our system is always protected in the event of a failure. Your technical acumen will be put to use as you support end users and application developers in solving complex problems related to our database systems.
As a key player on the team, you'll implement policies and procedures to safeguard our data from external threats. You will also conduct capacity planning and growth projections based on usage, ensuring that our system is always scalable to meet our business needs. You'll be a strategic partner, working closely with various teams to coordinate systematic database project plans that align with our organizational goals. Your contributions will not go unnoticed - you'll have the opportunity to propose and implement enhancements that will improve the performance and reliability of the system, enabling us to deliver world-class services to our customers.
Your Future at Kyndryl
Every position at Kyndryl offers a way forward to grow your career, from Junior Administrator to Architect. We have training and upskilling programs that you won't find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. One of the benefits of Kyndryl is that we work with customers in a variety of industries, from banking to retail. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here.
Who You Are
You're good at what you do and possess the required experience to prove it. However, equally as important - you have a growth mindset: keen to drive your own personal and professional development. You are customer-focused - someone who prioritizes customer success in their work. And finally, you're open and borderless - naturally inclusive in how you work with others.
Required Technical and Professional Expertise
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 5-8 years of proven hands-on experience in SQL database design, development, administration, and performance tuning.
- Expertise in a specific SQL database platform (e.g., Microsoft SQL Server, PostgreSQL, MySQL); experience with multiple platforms is a plus.
- Strong proficiency in writing complex SQL queries, stored procedures, functions, and triggers.
- Solid understanding of database concepts, including relational database theory, normalization, indexing, and transaction management.
- Experience with database performance monitoring and tuning tools.
- Experience with database backup and recovery strategies.
- Knowledge of database security principles and best practices.
- Experience with data migration and integration tools and techniques (e.g., ETL processes).
- Excellent analytical, problem-solving, and troubleshooting skills.
- Strong communication and collaboration skills.
- Ability to work independently and as part of a team.
Preferred Technical and Professional Experience
- Relevant certifications (e.g., Microsoft Certified: Database Administrator, Oracle Database Administrator).
- Experience with cloud-based database services (e.g., Azure SQL Database, AWS RDS, Google Cloud SQL).
- Experience with NoSQL databases.
- Knowledge of scripting languages (e.g., Python, PowerShell).
- Experience with data warehousing concepts and technologies.
- Familiarity with Agile development methodologies.
Being You
Diversity is a whole lot more than what we look like or where we come from; it's how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we're not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you - and everyone next to you - the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That's the Kyndryl Way.
What You Can Expect
With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter - wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed.
Get Referred!
If you know someone that works at Kyndryl, when asked 'How Did You Hear About Us' during the application process, select 'Employee Referral' and enter your contact's Kyndryl email address.
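To make the backup-and-recovery theme above concrete, here is a toy online-backup example using Python's built-in sqlite3. Production SQL Server or PostgreSQL backups use native tooling, so this only illustrates the idea of a consistent copy plus a verification step.

```python
# Toy illustration: an online backup to a second database file, then verify.
import sqlite3

src = sqlite3.connect("app.db")
src.execute("CREATE TABLE IF NOT EXISTS t (id INTEGER PRIMARY KEY, v TEXT)")
src.execute("INSERT INTO t (v) VALUES ('payload')")
src.commit()

dst = sqlite3.connect("app_backup.db")
src.backup(dst)  # consistent online copy, page by page

# Always verify the restore target, not just the backup job's exit status.
print(dst.execute("SELECT COUNT(*) FROM t").fetchone())
src.close()
dst.close()
```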
Posted 1 month ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Area(s) of responsibility
About Us: Empowered By Innovation
Birlasoft, a global leader at the forefront of Cloud, AI, and Digital technologies, seamlessly blends domain expertise with enterprise solutions. The company's consultative and design-thinking approach empowers societies worldwide, enhancing the efficiency and productivity of businesses. As part of the multibillion-dollar diversified CKA Birla Group, Birlasoft, with its 12,000+ professionals, is committed to continuing the Group's 170-year heritage of building sustainable communities.
Job Description: DB Developer
Position: DB Developer
Location: Mumbai
Experience: 4-6 years
Position Overview: We are seeking a skilled Database Developer to design, develop, and maintain efficient database systems. The ideal candidate will have strong expertise in database programming, optimization, and troubleshooting to ensure high availability and performance of database solutions that support our applications.
Responsibilities
- Design, develop, and maintain scalable database systems based on business needs.
- Write complex SQL queries, stored procedures, triggers, and functions.
- Optimize database performance, including indexing, query tuning, and normalization.
- Implement and maintain database security, backup, and recovery strategies.
- Collaborate with developers to integrate databases with application solutions.
- Troubleshoot database issues and ensure high availability and reliability.
- Design and maintain data models and database schemas.
- Create ETL (Extract, Transform, Load) processes for data migration and transformation.
- Monitor database performance and provide recommendations for improvements.
- Document database architecture, procedures, and best practices.
Qualifications
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Proven experience as a Database Developer or in a similar role.
- Proficiency in database technologies such as SQL Server, Oracle, MySQL, or PostgreSQL.
- Expertise in writing complex SQL scripts and query optimization.
- Experience with database tools like SSIS, SSRS, or Power BI.
- Familiarity with NoSQL databases like MongoDB or Cassandra (optional).
- Strong knowledge of database security, data modeling, and performance tuning.
- Hands-on experience with ETL processes and tools.
- Knowledge of cloud-based database solutions (AWS RDS, Azure SQL, etc.).
- Excellent problem-solving skills and attention to detail.
Preferred Skills
- Experience in Agile/Scrum methodologies.
- Knowledge of scripting languages like Python, PowerShell, or Shell scripting.
- Familiarity with DevOps practices for database deployment and CI/CD pipelines.
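A compact, hypothetical sketch of the ETL responsibility above: extract rows from a CSV, transform them (normalize codes, coerce types), and load a staging table. The file name and schema are invented for illustration.

```python
# Extract -> transform -> load into a staging table, then a sanity query.
import csv
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staging_orders (order_id INTEGER, region TEXT, amount REAL)")

def transform(row: dict) -> tuple:
    # Normalize free-text region codes and coerce types.
    return (int(row["order_id"]), row["region"].strip().upper(), float(row["amount"]))

with open("orders.csv", newline="") as f:      # hypothetical input file
    rows = [transform(r) for r in csv.DictReader(f)]

conn.executemany("INSERT INTO staging_orders VALUES (?, ?, ?)", rows)
conn.commit()
print(conn.execute("SELECT region, SUM(amount) FROM staging_orders GROUP BY region").fetchall())
```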
Posted 1 month ago
0 years
0 Lacs
Tamil Nadu, India
On-site
AbhiDoc
AbhiDoc is a next-generation digital healthcare platform on a mission to revolutionize patient–doctor interactions through AI-driven diagnostics, intuitive virtual assistants, and data analytics. We make healthcare seamless by securely centralizing medical records, offering teleconsultations, and delivering proactive health alerts. Join us as a founding member of our engineering team and help build the future of digital health.
Role Overview
As an AI Software Engineer, you will work under the guidance of senior engineers and data scientists to develop and maintain the core AI components of our platform. You will also support field operations at partner hospitals and clinics, ensuring our solutions are correctly installed, configured, and performing optimally in real-world environments. This is an ideal position for recent graduates who are passionate about AI/ML and eager to apply their academic knowledge to real-world healthcare problems.
Key Responsibilities
Model Development & Experimentation
- Assist in developing and training machine learning models (e.g., classification, regression) using Python libraries such as scikit-learn or TensorFlow.
- Participate in data preprocessing: cleaning, normalization, and feature engineering for healthcare datasets.
Application Integration
- Help integrate trained models into microservices built with FastAPI or Flask.
- Write client-side code (Python or JavaScript) to call our AI APIs and process responses.
Testing & Validation
- Write unit tests for data pipelines and model inference code.
- Validate model performance on test datasets and assist in error analysis.
Field Operations Support
- Assist in deploying and configuring our software stack on-site at hospitals and clinics.
- Troubleshoot and resolve technical issues in collaboration with clinical IT staff.
- Collect performance metrics and user feedback to inform product improvements.
Collaboration & Documentation
- Contribute to Agile ceremonies: daily stand-ups, sprint planning, and retrospectives.
- Document code, deployment procedures, and operational checklists using Markdown or UML.
Learning & Growth
- Shadow senior engineers on code reviews, deployments (Docker, Kubernetes), and CI/CD pipeline configuration (GitHub Actions).
- Stay up to date on AI/ML best practices and emerging trends in digital healthcare.
Required Qualifications
- Education: Bachelor's degree in Computer Science, Software Engineering, Data Science, or a related field, completed within the last 12 months.
- Programming Skills: Proficient in Python; familiarity with at least one ML framework (scikit-learn, TensorFlow, or PyTorch); basic understanding of RESTful APIs and experience with a web framework (Flask or FastAPI).
- Projects / Internships: Academic or personal projects demonstrating end-to-end ML workflows (data ingestion → model training → deployment). Internship experience in software development or data analytics is a plus.
- Field Readiness: Willingness to travel to partner hospitals and clinics for on-site support and deployment.
- Soft Skills: Strong problem-solving aptitude and willingness to learn; clear written and verbal communication.
Preferred Qualifications
- Familiarity with using AI technologies.
- Introductory knowledge of deploying web/Android applications.
- Experience with version control (Git) in a team setting.
- Basic exposure to cloud platforms (AWS, Azure, or GCP).
Why Join AbhiDoc?
- Meaningful Impact: Build solutions that directly improve patient care and clinical workflows.
- Mentorship & Development: Pair with senior engineers, attend workshops, and access our learning stipend.
- Innovative Culture: Work in a diverse, supportive team that values creativity and growth.
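As a rough sketch of the model-integration work described (models served behind FastAPI microservices): the endpoint, feature names, and threshold rule below are placeholders standing in for a real trained model.

```python
# A minimal FastAPI microservice exposing a prediction endpoint.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Vitals(BaseModel):
    heart_rate: float
    temperature_c: float

@app.post("/predict")
def predict(v: Vitals) -> dict:
    # Stand-in for model.predict(); a real service would load a trained
    # scikit-learn/TensorFlow model at startup and call it here.
    risk = 1.0 if (v.heart_rate > 100 or v.temperature_c > 38.0) else 0.0
    return {"risk_flag": risk}

# Run locally with: uvicorn service:app --reload
```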
Posted 1 month ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job title: R&D Data Steward Manager Associate
Location: Hyderabad, India
About The Job
Sanofi is a global life sciences company committed to improving access to healthcare and supporting the people we serve throughout the continuum of care. From prevention to treatment, Sanofi transforms scientific innovation into healthcare solutions, in human vaccines, rare diseases, multiple sclerosis, oncology, immunology, infectious diseases, diabetes and cardiovascular solutions and consumer healthcare. More than 110,000 people in over 100 countries at Sanofi are dedicated to making a difference in patients' daily lives, wherever they live, and enabling them to enjoy a healthier life.
As a company with a global vision of drug development and a highly regarded corporate culture, Sanofi is recognized as one of the best pharmaceutical companies in the world and is pioneering the application of Artificial Intelligence (AI), with a strong commitment to developing advanced data standards to increase reusability and interoperability and thus accelerate impact on global health. The R&D Data Office serves as a cornerstone of this effort. Our team is responsible for cross-R&D data strategy, governance, and management. We sit in partnership with Business and Digital, and drive data needs across priority and transformative initiatives across R&D. Team members serve as advisors, leaders, and educators to colleagues and data professionals across the R&D value chain.
As an integral team member, you will be responsible for defining how R&D's structured, semi-structured and unstructured data will be stored, consumed, integrated/shared and reported by different end users such as scientists, clinicians, and more. You will also be pivotal in the development of sustainable mechanisms for ensuring data are FAIR (findable, accessible, interoperable, and reusable).
Position Summary
The R&D Data Steward plays a critical role at the intersection between business and data, guiding business teams on how to unlock value from data. This role will drive the definition and documentation of R&D data standards in line with the enterprise. Data stewards play heavily cross-functional roles and must be comfortable with R&D data domains, data policies, and data cataloguing.
Main Responsibilities
Work in collaboration with R&D Data Office leadership (including the Data Capability and Strategy Leads), business, R&D Digital subject matter experts and other partners to:
- Understand the data-related needs for various cross-R&D capabilities (e.g., data catalog, master data management) and associated initiatives
- Influence, design, and document data governance policies, standards and procedures for R&D data
- Drive data standard adoption across capabilities and initiatives; manage and maintain quality and integrity of data via data enrichment activities (e.g., cleansing, validating, enhancing)
- Understand and adopt data management tools such as the R&D data catalogue
- Develop effective data sharing artifacts for appropriate usage of data across R&D data domains
- Ensure the seamless running of data-related activities and verify data standard application from ingest through access
- Maintain documentation and act as an expert on data definitions, data flows, legacy data structures, access rights models, etc. for the assigned domain
- Oversee data pipeline and availability and escalate issues where they surface; ensure on-schedule/on-time delivery and proactive management of risks/issues
- Educate and guide R&D teams on standards and information management principles, methodologies, best practices, etc.
- Facilitate communication between business and technical teams to ensure mutual understanding and alignment on data management practices
- Understand the comprehensive processes and governance frameworks that ensure data protection and compliance
Deliverables
- Defines data quality and communication metrics for assigned domains and 1-2 business functions
- Implements continuous improvement opportunities such as functional training
- Accountable for data quality and data management activities for the assigned domains; facilitates data issue resolution
- Defines business terms and data elements (metadata) according to data standards, and ensures standardization/normalization of metadata
- Leads working groups to identify data elements, perform root cause and impact analysis, and identify improvements for metadata and data quality
- Regularly communicates with other data leads and expert Data Stewards, and escalates issues as appropriate
About You
- Experience in Business Data Management, Information Architecture, Technology, or related fields
- Demonstrated ability to understand end-to-end data use and needs
- Knowledge of R&D data domains (e.g., research, clinical, regulatory)
- Solid grasp of data governance practices and a track record of implementation
- Ability to understand data processes and requirements, particularly in R&D at an enterprise level
- Demonstrated strong attention to detail, quality, time management and customer focus
- Excellent written and oral communication skills
- Strong networking, influencing and negotiating skills and superior problem-solving skills
- Demonstrated willingness to make decisions and to take responsibility for them
- Excellent interpersonal skills (team player)
- People management skills, either in a matrix or direct line function
- Familiarity with data management practices and technologies (e.g., Collibra, Informatica); hands-on experience not required
- Knowledge of pharma R&D industry regulations and compliance requirements related to data governance
- Education: Bachelor's in Computer Science, Business, Engineering, or Information Technology
Posted 1 month ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job title: R&D Data Steward Manager
Location: Hyderabad, India
About The Job
Sanofi is a global life sciences company committed to improving access to healthcare and supporting the people we serve throughout the continuum of care. From prevention to treatment, Sanofi transforms scientific innovation into healthcare solutions, in human vaccines, rare diseases, multiple sclerosis, oncology, immunology, infectious diseases, diabetes and cardiovascular solutions and consumer healthcare. More than 110,000 people in over 100 countries at Sanofi are dedicated to making a difference in patients' daily lives, wherever they live, and enabling them to enjoy a healthier life.
As a company with a global vision of drug development and a highly regarded corporate culture, Sanofi is recognized as one of the best pharmaceutical companies in the world and is pioneering the application of Artificial Intelligence (AI), with a strong commitment to developing advanced data standards to increase reusability and interoperability and thus accelerate impact on global health. The R&D Data Office serves as a cornerstone of this effort. Our team is responsible for cross-R&D data strategy, governance, and management. We sit in partnership with Business and Digital, and drive data needs across priority and transformative initiatives across R&D. Team members serve as advisors, leaders, and educators to colleagues and data professionals across the R&D value chain.
As an integral team member, you will be responsible for defining how R&D's structured, semi-structured and unstructured data will be stored, consumed, integrated/shared and reported by different end users such as scientists, clinicians, and more. You will also be pivotal in the development of sustainable mechanisms for ensuring data are FAIR (findable, accessible, interoperable, and reusable).
Position Summary
The R&D Data Steward plays a critical role at the intersection between business and data, guiding business teams on how to unlock value from data. This role will drive the definition and documentation of R&D data standards in line with the enterprise. Data stewards play heavily cross-functional roles and must be comfortable with R&D data domains, data policies, and data cataloguing.
Main Responsibilities
Work in collaboration with R&D Data Office leadership (including the Data Capability and Strategy Leads), business, R&D Digital subject matter experts and other partners to:
- Understand the data-related needs for various cross-R&D capabilities (e.g., data catalog, master data management) and associated initiatives
- Influence, design, and document data governance policies, standards and procedures for R&D data
- Drive data standard adoption across capabilities and initiatives; manage and maintain quality and integrity of data via data enrichment activities (e.g., cleansing, validating, enhancing)
- Understand and adopt data management tools such as data catalogue, Master Data Management, Data Quality, etc., using tools from Informatica (CDGC, CDQ, Marketplace, and others in their suite of cloud data management products)
- Develop effective data sharing artifacts for appropriate usage of data across R&D data domains
- Ensure the seamless running of data-related activities and verify data standard application from ingest through access
- Maintain documentation and act as an expert on data definitions, data flows, legacy data structures, access rights models, etc. for the assigned domain
- Oversee data pipeline and availability and escalate issues where they surface; ensure on-schedule/on-time delivery and proactive management of risks/issues
- Educate and guide R&D teams on standards and information management principles, methodologies, best practices, etc.
- Oversee junior data stewards and/or business analysts, based on the complexity or size of the initiatives/functions supported
- Facilitate communication between business and technical teams to ensure mutual understanding and alignment on data management practices
- Understand the comprehensive processes and governance frameworks that ensure data protection and compliance
Deliverables
- Defines data quality and communication metrics for assigned domains and 1-2 business functions
- Implements continuous improvement opportunities such as functional training
- Accountable for data quality and data management activities for the assigned domains; facilitates data issue resolution
- Defines business terms and data elements (metadata) according to data standards, and ensures standardization/normalization of business and technical metadata
- Defines and maintains data governance documentation, monitoring data governance performance metrics to identify areas of improvement
- Leads working groups to identify data elements, perform root cause and impact analysis, and identify improvements for metadata and data quality
- Regularly communicates with other data leads and expert Data Stewards, and escalates issues as appropriate
About You
- Experience in Business Data Management, Information Architecture, Technology, or related fields; demonstrated ability to understand end-to-end data use and needs
- Knowledge of R&D data domains (e.g., research, clinical, regulatory); solid grasp of data governance practices and a track record of implementation
- Ability to understand data processes and requirements, particularly in R&D at an enterprise level; demonstrated strong attention to detail, quality, time management and customer focus
- Excellent written and oral communication skills; strong networking, influencing and negotiating skills and superior problem-solving skills
- Demonstrated willingness to make decisions and to take responsibility; excellent interpersonal skills (team player)
- People management skills, either in a matrix or direct line function; familiarity with data management practices and technologies (e.g., Collibra, Informatica); hands-on experience not required
- Knowledge of pharma R&D industry regulations and compliance requirements related to data governance
- Education: Bachelor's in Computer Science, Business, Engineering, or Information Technology
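For illustration, here is a data-quality metric of the kind a steward might define, computed with pandas: per-element completeness. The column names are hypothetical, not Sanofi's.

```python
# Per-column completeness (% of non-null values) for a domain dataset.
import pandas as pd

df = pd.DataFrame({
    "study_id": ["S1", "S2", "S3", None],
    "compound": ["A-101", None, "A-103", "A-104"],
})

completeness = df.notna().mean().mul(100).round(1)
print(completeness)  # e.g. study_id 75.0, compound 75.0
```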
Posted 1 month ago
3.0 years
0 Lacs
India
Remote
Company Description
Aarcalev Technology Solutions Pvt. Limited is a global company focused on transforming organizations and individuals with services in technology jobs, business optimization, IT development, cybersecurity, cloud technology, and emerging technologies like AI and blockchain. We are dedicated to environmental protection and sustainability.
Role Description
This is a full-time hybrid role for a Full Stack Engineer - C#, .Net at Aarcalev Technology Solutions. The engineer will be responsible for both back-end and front-end web development tasks, software development, and using CSS. This role will be primarily remote work.
Qualifications
Full-Stack Developer Requirements
C# (.NET)
· Minimum 3 years of hands-on experience with C# and the .NET framework (.NET 6+ preferred)
· Strong understanding of object-oriented programming, dependency injection, and asynchronous programming
· Experience building and maintaining RESTful APIs and microservices
· Familiarity with Entity Framework (EF Core) and LINQ
.NET
· Proven experience with .NET (Web API)
· Ability to build secure, scalable, and testable backend services
· Experience with middleware, routing, model binding, and authentication/authorization patterns (e.g., JWT, OAuth)
· Familiarity with unit testing frameworks like xUnit or NUnit
Angular
· Minimum 2–3 years of experience with Angular (v10+)
· Strong knowledge of component-based architecture, RxJS, and the Angular CLI
· Experience with state management (e.g., NgRx, BehaviorSubjects) is a plus
· Ability to consume and integrate REST APIs into Angular services
HTML / CSS / JavaScript
· Strong command of semantic HTML5, modern CSS (Flexbox/Grid), and vanilla JavaScript (ES6+)
· Experience creating responsive UI/UX using frameworks like Angular Material is a plus
· Familiarity with cross-browser compatibility, accessibility standards (WCAG), and browser dev tools
SQL Server
· 2–3 years of experience with SQL Server (2016 or newer)
· Ability to write and optimize complex T-SQL queries, stored procedures, views, and functions
· Familiarity with database normalization, indexing, query performance tuning, and data migration
· Experience with SQL Server Management Studio (SSMS) and database design
API Development
· Experience designing, developing, and documenting RESTful APIs
· Knowledge of OpenAPI/Swagger, Postman, and versioning best practices
· Understanding of HTTP methods, status codes, authentication, and rate limiting
· Bonus: experience with GraphQL or third-party integrations
Preferred Experience
· Minimum 3–5 years total experience as a full-stack developer
· Comfortable working in agile/scrum environments
· Familiarity with Git, CI/CD pipelines, and basic DevOps workflows
· Strong communication skills, capable of working remotely and independently
Posted 1 month ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Job Description
eClerx is hiring a Product Data Management Analyst who will work within our Product Data Management team to help our customers enhance online product data quality for Electrical, Mechanical & Electronics products. The role also involves creating technical specifications and product descriptions for online presentation, and working on consultancy projects to redesign e-commerce customers' website taxonomy and navigation. The ideal candidate must possess strong communication skills, with an ability to listen to and comprehend information and share it with all the key stakeholders, highlighting opportunities for improvement and any concerns. He/she must be able to work collaboratively with teams to execute tasks within defined timeframes while maintaining high-quality standards and superior service levels. The ability to take proactive action and a willingness to take up responsibility beyond the assigned work area are a plus.
Apprentice Analyst - Roles and Responsibilities:
- Data enrichment/gap fill, standardization, normalization, and categorization of online and offline product data via research through different sources such as the internet, specific websites, databases, etc.
- Data quality checks and correction
- Data profiling and reporting (basic)
- Email communication with the client on request acknowledgment, project status, and responses to queries
- Help customers enhance their product data quality (electrical, mechanical, electronics) from the technical specification and description perspective
- Provide technical consulting to the customer's category managers around industry best practices for product data enhancement
Technical and Functional Skills
- Bachelor's Degree in Engineering (Electrical, Mechanical, or Electronics stream)
- Excellent technical knowledge of engineering products (pumps, motors, HVAC, plumbing, etc.) and technical specifications
- Intermediate knowledge of MS Office/Internet
About The Team
eClerx is a global leader in productized services, bringing together people, technology and domain expertise to amplify business results. Our mission is to set the benchmark for client service and success in our industry. Our vision is to be the innovation partner of choice for technology, data analytics and process management services. Since our inception in 2000, we've partnered with top companies across various industries, including financial services, telecommunications, retail, and high-tech. Our innovative solutions and domain expertise help businesses optimize operations, improve efficiency, and drive growth. With over 18,000 employees worldwide, eClerx is dedicated to delivering excellence through smart automation and data-driven insights. At eClerx, we believe in nurturing talent and providing hands-on experience.
eClerx is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability or protected veteran status, or any other legally protected basis, per applicable law.
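A toy example of the standardization/normalization work described above: converting scraped product power ratings to a single unit so specifications become comparable. The attribute names and conversion table are assumptions for illustration.

```python
# Normalize mixed-unit power specs to watts.
RAW = [
    {"name": "Pump X", "power": "0.5 HP"},
    {"name": "Motor Y", "power": "1500 W"},
]

def normalize_power(value: str) -> float:
    """Return power in watts regardless of the source unit."""
    number, unit = value.split()
    factors = {"W": 1.0, "KW": 1000.0, "HP": 745.7}
    return float(number) * factors[unit.upper()]

for item in RAW:
    item["power_w"] = round(normalize_power(item["power"]), 1)
print(RAW)  # both records now share a single unit for comparison
```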
Posted 1 month ago
0 years
0 Lacs
India
Remote
SQL Developer Internship Opportunity (Remote, 1 Month – Unpaid)
✅ No registration fees, no joining fees, and no course purchases: this is a 100% skill-focused internship designed to give you hands-on experience in SQL Development using real-world data and industry-standard tools.
📍 Location: Remote
⏳ Duration: 1 Month
💸 Compensation: Unpaid
🎓 Eligibility: Open to all 1st, 2nd, 3rd, and 4th Year Students, as well as Recent Graduates
🔍 About the Internship:
Elevate Labs offers a practical internship tailored to individuals interested in database development and SQL. This internship provides real exposure to working with relational databases, writing optimized queries, designing schemas, and managing data through hands-on tasks and mentorship.
🎯 No fluff: just database logic, query optimization, data handling, and best practices for real-world applications.
✨ What You'll Gain:
✔️ MSME Registered Internship Certificate
✔️ Letter of Recommendation (LOR) for top performers
✔️ Top Performer Badge to enhance your resume and LinkedIn profile
✔️ Opportunity for a Full-Time Role: the top 10 performers will be considered
🌟 Who Should Apply?
- Students from any year (1st–4th)
- Recent Graduates
- Anyone interested in SQL Development, Databases, or Data Engineering
🧠 Skills You'll Practice:
- Core SQL syntax and commands (SELECT, JOIN, GROUP BY, etc.)
- Writing and optimizing complex queries
- Database design and normalization
- Stored procedures, views, and triggers (intro)
- Data filtering, sorting, and aggregation
- Understanding relational database concepts
- Basic data modeling and schema design
🔧 Tools & Technologies:
- MySQL / PostgreSQL / SQL Server
- DBMS tools (MySQL Workbench, pgAdmin, etc.)
- SQL query editors
- Git, GitHub (for script versioning)
- Optional: basics of Python/Excel for data handling
🚀 Ready to query, analyze, and manage data with precision? This internship offers practical SQL development experience and prepares you for roles in data analytics, backend development, or database administration.
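For a runnable taste of the JOIN and GROUP BY skills listed above, the snippet below uses Python's built-in sqlite3 so no database server is needed; the schema is invented.

```python
# JOIN two tables, then aggregate per customer with GROUP BY.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
INSERT INTO customers VALUES (1, 'Asha'), (2, 'Ravi');
INSERT INTO orders VALUES (1, 1, 250.0), (2, 1, 100.0), (3, 2, 75.0);
""")
rows = conn.execute("""
    SELECT c.name, COUNT(o.id) AS n_orders, SUM(o.total) AS revenue
    FROM customers c
    JOIN orders o ON o.customer_id = c.id
    GROUP BY c.name
    ORDER BY revenue DESC
""").fetchall()
print(rows)  # [('Asha', 2, 350.0), ('Ravi', 1, 75.0)]
```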
Posted 1 month ago
6.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description
Role: SQL Server Database Developer
Location: Offshore/India
Who are we looking for?
We are looking for a candidate with 6+ years of experience in SQL Server database development who will be responsible for the development, testing, and maintenance of SQL databases. The ideal candidate will have a deep understanding of relational database theory, SQL queries and procedures, and experience working with large datasets.
Technical Skills
- Proven work experience as a SQL Server Database Developer
- In-depth understanding of data management (e.g. permissions, recovery, security and monitoring)
- Experience with SQL Server Reporting Services and SQL Server Analysis Services
- Knowledge of software development and user-interface web applications
- Hands-on experience with SQL Server
- Familiarity with the practical application of NoSQL/NewSQL databases
- Strong written and oral communication skills
- Excellent problem-solving and quantitative skills
- Demonstrated ability to work as part of a team
- Prior investment management experience
Responsibilities
- Design and implement databases in accordance with end users' information needs and views
- Define and implement database schemas and normalization levels
- Develop, implement, and optimize stored procedures and functions using T-SQL
- Analyze existing SQL queries for performance improvements
- Implement data integrity and security protocols
- Collaborate with developers on database design and integration with software applications
- Provide data management support to users
- Ensure performance, security, and availability of databases
- Prepare documentation and specifications
- Handle common database procedures such as upgrades, backups, recovery, and migration
Qualification
- At least 6+ years of experience in SQL Server database development
- Education qualification: Any degree from a reputed college
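Since the role centers on T-SQL procedures and functions, here is a stand-in that runs anywhere: registering a scalar SQL function from Python's sqlite3 and using it in a query. In SQL Server the same logic would live in a T-SQL user-defined function; the fiscal-year rule is hypothetical.

```python
# Register a Python callable as a scalar SQL function, then use it in SQL.
import sqlite3

def fiscal_year(date_text: str) -> int:
    # Hypothetical April-to-March fiscal year rule.
    year, month = int(date_text[:4]), int(date_text[5:7])
    return year if month >= 4 else year - 1

conn = sqlite3.connect(":memory:")
conn.create_function("fiscal_year", 1, fiscal_year)
conn.execute("CREATE TABLE trades (id INTEGER, trade_date TEXT)")
conn.execute("INSERT INTO trades VALUES (1, '2024-02-15'), (2, '2024-06-01')")
print(conn.execute(
    "SELECT id, fiscal_year(trade_date) FROM trades"
).fetchall())  # [(1, 2023), (2, 2024)]
```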
Posted 1 month ago
3.0 years
3 - 8 Lacs
Hyderābād
On-site
Job Description:
Core Responsibilities:
This position is responsible for application and system database administration, which includes the development and design of databases that support applications and systems; database configuration, performance, reliability, and recoverability; and maintaining and upgrading database software and related components. It covers operational database support across DBMS software levels, versions, and operating systems: ensuring availability, performance, integrity, security, and confidentiality of databases; managing backups and recoveries; analyzing and resolving problems; managing disk space; applying patches and upgrades; and working with database vendor support. It also includes developing and implementing best practices and standards, SQL tuning, automation, and project implementation activities.
- Design, implement, and troubleshoot scalable and reusable software systems: 3-tier and Microsoft Azure cloud-based systems.
- Produce design specifications and effort estimates.
- Actively support configuration management of code and software.
- Support detailed documentation of systems and features.
- Act as liaison between external vendors and internal product, business, engineering, and design teams.
- Actively participate in coding exercises and peer code reviews as part of the development life cycle and change management.
- Actively participate in daily stand-up meetings.
Requires 3-8 years of experience. Deep technical knowledge and subject matter expertise.
Skillset for DBA:
- SQL programming (SQL queries, stored procedures, functions, and triggers)
- Proficiency in systems like Oracle, MySQL, PostgreSQL, SQL Server, NoSQL, MongoDB, and DB2
- Experience with relational and non-relational DBMS
- Knowledge of database schema design, normalization, and indexing
- Expertise in backup strategies, disaster recovery, and high-availability solutions
- Skills in optimizing database performance, query tuning, and monitoring
- Implementation of security protocols, encryption, and access control measures
- Creating, implementing, and maintaining a disaster recovery plan
- Familiarity with various operating systems: Windows, Linux, and Unix
- Configuring alerts for proactive management
- Proficiency in scripting languages such as Shell, Python, Perl, or PowerShell for automation
- Knowledge of Azure services: DB infrastructure and management services
- Proficiency in Azure SQL Database, including creation, configuration, and management
- Understanding of high availability, backup, and scaling on Azure services
- VM management; networking configuration and management
- Familiarity with command-line tools and Azure resource monitoring
- Familiarity with ADO
- Familiarity with vertical and horizontal scaling
- Proficiency in automating tasks using Azure Automation, PowerShell, and the Azure CLI
- Experience writing scripts for routine database tasks and incident response
- Setting up and using Azure Monitor, Log Analytics, and Application Insights for monitoring databases
- Expertise in tuning performance on Azure SQL Databases and Managed Instances, using tools like Query Performance Insight and SQL Analytics
Skills, Knowledge, and Experience:
- Extensive full-stack engineering experience, with an emphasis on frontend and backend programming, ideally a minimum of 3+ years.
- Strong technical leadership and project delivery, including via vendors.
- Extensive experience, ideally a minimum of 3+ years, in the following:
- Software design/architecture
- Object-oriented programming (e.g., Java, C#, Python, PHP, Perl)
- Database concepts: relational databases (MSSQL, Oracle, MySQL, etc.) and NoSQL databases (Cosmos DB, MongoDB, etc.)
- HTML, CSS, JavaScript
- SOLID principles, design patterns
- Web API experience and architectural styles (e.g., REST)
- Familiarity with unit testing, TDD, and BDD
- Modern JavaScript frameworks (e.g., React, Angular 6+)
- Configuration management experience (e.g., GitHub, Jenkins, Git)
- Experience in the following areas would be desirable: Microsoft Azure cloud-based technologies; container technologies (e.g., Docker); software methodologies (Waterfall, Scrum, etc.); Azure DevOps a plus.
Education Qualifications:
- Bachelor-level degree or equivalent in Computer Science or a related field of study
- 3+ years of experience as a Full Stack Developer
- Technical or professional certification in the domain
Weekly Hours: 40
Time Type: Regular
Location: Hyderabad, Andhra Pradesh, India
It is the policy of AT&T to provide equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state or local law. In addition, AT&T will provide reasonable accommodations for qualified individuals with disabilities. AT&T is a fair chance employer and does not initiate a background check until an offer is made.
Job ID R-65755 Date posted 05/02/2025
Benefits: Paid Time Off, Tuition Assistance, Insurance Options, Discounts, Training & Development
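To ground the indexing and query-tuning items above, this self-contained sqlite3 demo shows a query plan flipping from a full table scan to an index search once a suitable index exists; SQL Server's analogous diagnostics are execution plans and Query Store.

```python
# Compare query plans before and after creating an index.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, user_id INTEGER, kind TEXT)")
conn.executemany("INSERT INTO events (user_id, kind) VALUES (?, ?)",
                 [(i % 1000, "click") for i in range(10000)])

query = "SELECT COUNT(*) FROM events WHERE user_id = 42"
print(conn.execute("EXPLAIN QUERY PLAN " + query).fetchall())  # SCAN events

conn.execute("CREATE INDEX idx_events_user ON events (user_id)")
print(conn.execute("EXPLAIN QUERY PLAN " + query).fetchall())  # SEARCH ... USING INDEX
```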
Posted 1 month ago
7.0 years
3 - 10 Lacs
Hyderābād
Remote
Hyderabad, India Job ID: R-1075881 Apply prior to the end date: June 6th, 2025
When you join Verizon You want more out of a career. A place to share your ideas freely — even if they’re daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the #VTeamLife.
What you’ll be doing... Flexera Implementation Consultant As a consultant in the Transformation Advisory Governance team, you will work on Flexera implementation, spearheading the deployment and configuration of Flexera's leading SAM solution within our dynamic organization. In this pivotal role, you will leverage your deep technical mastery of Flexera tools and a comprehensive understanding of software licensing, compliance, and optimization strategies to drive significant value and efficiency.
What we are looking for: Leading End-to-End Flexera Implementation: Architect and execute the complete implementation lifecycle of Flexera One / Flexera IT Asset Management, encompassing discovery, seamless system integration, robust data normalization, and insightful reporting functionalities. Collaborating to Define Solution Scope: Partner closely with internal IT, procurement, and compliance teams to meticulously gather business and technical requirements, translating them into a clearly defined and effective Flexera solution scope. Configuring for Optimal SAM: Expertly configure Flexera to accurately capture and report on critical software usage metrics, licensing entitlements, potential compliance gaps, and opportunities for cost optimization. Driving Data Integration and Automation: Implement efficient data loading processes and establish robust inbound and outbound API integrations, along with configuring catalog items and automated workflows within the Flexera platform. Integrating with Enterprise Ecosystem: Seamlessly integrate Flexera with key enterprise systems, including SCCM, Active Directory, ServiceNow, and leading cloud platforms such as AWS and Azure, ensuring comprehensive asset visibility. Mastering Device Inventory and Discovery: Implement and manage agent deployment strategies, optimize device inventory processes, and refine discovery mechanisms to build a comprehensive and accurate software catalog within Flexera. Ensuring License Compliance and Optimization: Proactively manage license reconciliation processes and guarantee ongoing compliance with major software vendors, including Microsoft, Oracle, Adobe, and IBM, identifying and implementing cost-saving optimization strategies. Developing Actionable Insights through Reporting: Design and develop custom dashboards and insightful reports to effectively monitor software usage patterns, track costs, and ensure continuous compliance adherence. Empowering Internal Teams: Conduct comprehensive training sessions for internal stakeholders on effective Flexera utilization and champion Software Asset Management best practices across the organization.
Providing Ongoing Expertise and Optimization: Serve as the subject matter expert for post-implementation support, manage system upgrades, and continuously identify and implement optimizations to maximize the value of the Flexera solution.
You’ll need to have: Proven Flexera Implementation Expertise: Demonstrated hands-on experience leading and executing Flexera (Flexera One / Flexera ITAM) implementations within medium to large-scale enterprise environments. Extensive ITAM/SAM Experience: A minimum of 7 years of progressive experience in IT Asset Management and Software Asset Management, with significant hands-on involvement in Flexera tool implementation projects. Deep Understanding of SAM Principles: Comprehensive knowledge of IT Asset Management (ITAM) and Software Asset Management (SAM) methodologies, best practices, and industry standards. Strong Software Licensing Acumen: In-depth understanding of complex enterprise software licensing models and agreements for major vendors such as Microsoft, Oracle, IBM, and Adobe. Proficiency in Discovery and Inventory: Hands-on experience with software discovery tools, inventory agents, and data connector technologies. Familiarity with ITSM Ecosystem: Working knowledge of IT Service Management (ITSM) tools, including ServiceNow, SCCM, and JAMF. Data Analysis and Reporting Skills: Proven ability to create custom reports and visualizations using SQL, Power BI, or native Flexera reporting tools. Exceptional Communication and Collaboration: Excellent interpersonal, written, and verbal communication skills with a proven ability to effectively manage stakeholders at all levels. Educational Foundation: Bachelor's degree in Computer Science, Information Technology, or a related field. Flexera Certification Advantage: Flexera Certified Implementation Professional or equivalent certification is highly desirable.
Even better if you have one or more of the following: Experience implementing and leveraging ITIL processes and frameworks within an ITAM/SAM context. Hands-on experience in cloud asset management and integrating Flexera with major cloud platforms (AWS, Azure, etc.). Possession of a Flexera Certified Implementation Professional or an equivalent advanced Flexera certification.
Where you’ll be working In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager.
Scheduled Weekly Hours 40
Equal Employment Opportunity Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.
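At its core, the license reconciliation this role owns means comparing purchased entitlements against discovered installs and flagging gaps. The sketch below illustrates that concept generically in Python with pandas; it is not Flexera's own tooling or API, and the CSV inputs and column names are hypothetical.

```python
import pandas as pd

# Hypothetical exports: one row per entitlement, one row per discovered install
entitlements = pd.read_csv("entitlements.csv")     # columns: product, licenses_owned
installs = pd.read_csv("discovered_installs.csv")  # columns: product, device_id

# Count distinct devices per product, then compare against what was purchased
usage = installs.groupby("product", as_index=False).agg(
    installs=("device_id", "nunique")
)
position = entitlements.merge(usage, on="product", how="outer").fillna(0)
position["gap"] = position["licenses_owned"] - position["installs"]

# Negative gap = under-licensed (compliance risk); positive gap = shelfware
# (a cost-optimization opportunity)
print(position.sort_values("gap").to_string(index=False))
```

In practice a SAM platform also normalizes raw inventory names and applies vendor-specific license metrics (per-core, per-user, and so on) before a comparison like this is meaningful.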
Posted 1 month ago
7.5 - 10.0 years
2 - 8 Lacs
Hyderābād
On-site
Job Description: Roles & Responsibilities: Design, develop and implement ServiceNow solutions using Integration, Flow Designer, Orchestration, Custom Application Development, OMT, Process Automation and other ServiceNow features and functionalities. Experience with space reservation, allocation, and utilization tracking features. Knowledge of floor plans, locations, room bookings, and occupancy management. Understanding of real estate and facilities management processes to bridge technical and business requirements effectively. Collaborate with business analysts, process owners and stakeholders to understand the requirements and translate them into technical specifications and solutions. Guide team members with technical knowledge and a path forward for implementation. Follow the best practices and standards for ServiceNow development and ensure the quality and performance of the deliverables. Troubleshoot and resolve issues related to ServiceNow applications and modules, as well as provide support and guidance to end users. Stay updated with the latest ServiceNow releases, features and enhancements and leverage them to improve the existing solutions or create new ones. Provide (technical) leadership to build, motivate, guide, scale, and mentor team members, including performance management coaching. Actively participate in daily stand-up meetings. Leverage modern technologies such as cloud capabilities from various platforms to provide efficient solutions. Reuse and scale components to accommodate future growth and eliminate junk code. Support detailed documentation of systems and features.
Skills, Knowledge, and Experience: 7.5-10 years of experience in ServiceNow development, configuration and administration. Should have good experience in Space Management modules. Experience working with Integration, Flow Designer, Orchestration, Custom Application Development, Integration Hub, Glide API, custom fields and forms, ETL skills along with data mapping and normalization, OMT, Process Automation, notifications and other ServiceNow modules and functionalities. Experience working with the ServiceNow data model, Import Sets, Transform Maps, the Table API and the Robust Transform Engine. Experience in integrating ServiceNow with other systems and platforms using REST/SOAP APIs, web services, MID Server etc. (Basic/OAuth). Experience working on complex notification logic. Deployment experience. Knowledge of ITIL processes and frameworks and how they are implemented in ServiceNow. Good understanding of web-based application architectures and application interfaces. Proficiency in client-side and server-side scripting: UI Policies, Business Rules, Runbook Automation, workflow development. Experience in Jelly script/HTML/AngularJS and TM Forum Open APIs is a plus.
Weekly Hours: 40 Time Type: Regular Location: Hyderabad, Andhra Pradesh, India It is the policy of AT&T to provide equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state or local law. In addition, AT&T will provide reasonable accommodations for qualified individuals with disabilities.
AT&T is a fair chance employer and does not initiate a background check until an offer is made. Job ID R-68725 Date posted 05/27/2025 Benefits: Paid Time Off, Tuition Assistance, Insurance Options, Discounts, Training & Development
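The ServiceNow Table API and REST integration experience this posting calls for can be exercised from any HTTP client. Below is a minimal Python sketch that pulls a few incident records over the Table API with basic auth; the instance name and credentials are hypothetical placeholders.

```python
import requests

INSTANCE = "dev12345"  # hypothetical ServiceNow instance
URL = f"https://{INSTANCE}.service-now.com/api/now/table/incident"

resp = requests.get(
    URL,
    auth=("integration.user", "secret"),  # Basic auth; OAuth is also supported
    headers={"Accept": "application/json"},
    params={"sysparm_query": "active=true", "sysparm_limit": 10},
    timeout=30,
)
resp.raise_for_status()

# The Table API wraps records in a top-level "result" array
for rec in resp.json()["result"]:
    print(rec["number"], rec["short_description"])
```

The same endpoint pattern (/api/now/table/{table_name}) covers custom tables as well, which is how outbound integrations typically read data from modules like Space Management.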
Posted 1 month ago
15.0 years
0 Lacs
Hyderābād
On-site
Project Role : AI / ML Engineer Project Role Description : Develops applications and systems that utilize AI tools, Cloud AI services, with proper cloud or on-prem application pipeline with production-ready quality. Be able to apply GenAI models as part of the solution. Could also include but not limited to deep learning, neural networks, chatbots, image processing. Must have skills : Machine Learning Good to have skills : NA Minimum 15 years of experience is required Educational Qualification : 15 years full time education
Summary: As an AI/ML Engineer, you will develop applications and systems utilizing AI tools and Cloud AI services, with a proper cloud or on-prem application pipeline of production-ready quality. You will apply GenAI models as part of the solution, including deep learning, neural networks, chatbots, and image processing.
Roles & Responsibilities: - Expected to be a SME with deep knowledge and experience. - Should have influencing and advisory skills. - Responsible for team decisions. - Engage with multiple teams and contribute on key decisions. - Expected to provide solutions to problems that apply across multiple teams. - Lead AI/ML projects from conception to deployment. - Research and implement cutting-edge AI algorithms. - Collaborate with cross-functional teams to drive AI initiatives.
Professional & Technical Skills: - Must-Have Skills: Proficiency in Machine Learning. - Strong understanding of statistical analysis and machine learning algorithms. - Experience with data visualization tools such as Tableau or Power BI. - Hands-on experience implementing various machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering algorithms. - Solid grasp of data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity.
Additional Information: - The candidate should have a minimum of 15 years of experience in Machine Learning. - This position is based at our Hyderabad office. - A 15 years full-time education is required.
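Of the algorithms this role lists, logistic regression is the usual baseline, and the normalization it mentions matters because regularized linear models are scale-sensitive. A minimal scikit-learn sketch on synthetic data, chaining scaling and the classifier in one pipeline so the same transform is applied at train and predict time:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for a real, cleaned dataset
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The pipeline fits the scaler on training data only, avoiding leakage
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.3f}")
```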
Posted 1 month ago
5.0 years
0 Lacs
Vellore, Tamil Nadu, India
Remote
Experience: 5.00+ years Salary: Confidential (based on experience) Shift: (GMT+05:30) Asia/Kolkata (IST) Opportunity Type: Remote Placement Type: Full-time Permanent Position (*Note: This is a requirement for one of Uplers' clients - Forbes Advisor)
What do you need for this opportunity? Must-have skills required: Python, PostgreSQL, Snowflake, AWS RDS, BigQuery, OOP, monitoring tools, Prometheus, ETL tools, data warehousing, Pandas, PySpark, AWS Lambda
Forbes Advisor is looking for: Job Description: Data Research - Database Engineer
Forbes Advisor is a new initiative for consumers under the Forbes Marketplace umbrella that provides journalist- and expert-written insights, news and reviews on all things personal finance, health, business, and everyday life decisions. We do this by providing consumers with the knowledge and research they need to make informed decisions they can feel confident in, so they can get back to doing the things they care about most.
Position Overview At Marketplace, our mission is to help readers turn their aspirations into reality. We arm people with trusted advice and guidance, so they can make informed decisions they feel confident in and get back to doing the things they care about most. We are an experienced team of industry experts dedicated to helping readers make smart decisions and choose the right products with ease. Marketplace boasts decades of experience across dozens of geographies and teams, including Content, SEO, Business Intelligence, Finance, HR, Marketing, Production, Technology and Sales. The team brings rich industry knowledge to Marketplace’s global coverage of consumer credit, debt, health, home improvement, banking, investing, credit cards, small business, education, insurance, loans, real estate and travel.
The Data Research Engineering Team is a brand-new team with the purpose of managing data from acquisition to presentation, collaborating with other teams while also operating independently. Their responsibilities include acquiring and integrating data, processing and transforming it, managing databases, ensuring data quality, visualizing data, automating processes, working with relevant technologies, and ensuring data governance and compliance. They play a crucial role in enabling data-driven decision-making and meeting the organization's data needs.
A typical day in the life of a Database Engineer/Developer will involve designing, developing, and maintaining a robust and secure database infrastructure to efficiently manage company data. They collaborate with cross-functional teams to understand data requirements and migrate data from spreadsheets or other sources to relational databases or cloud-based solutions like Google BigQuery and AWS. They develop import workflows and scripts to automate data import processes, optimize database performance, ensure data integrity, and implement data security measures. Their creativity in problem-solving and continuous learning mindset contribute to improving data engineering processes. Proficiency in SQL, database design principles, and familiarity with Python programming are key qualifications for this role.
Responsibilities: Design, develop, and maintain the database infrastructure to store and manage company data efficiently and securely. Work with databases of varying scales, including small-scale databases and databases involving big data processing. Work on data security and compliance by implementing access controls, encryption, and compliance standards.
Collaborate with cross-functional teams to understand data requirements and support the design of the database architecture. Migrate data from spreadsheets or other sources to a relational database system (e.g., PostgreSQL, MySQL) or cloud-based solutions like Google BigQuery. Develop import workflows and scripts to automate the data import process and ensure data accuracy and consistency. Optimize database performance by analyzing query execution plans, implementing indexing strategies, and improving data retrieval and storage mechanisms. Work with the team to ensure data integrity and enforce data quality standards, including data validation rules, constraints, and referential integrity. Monitor database health and identify and resolve issues. Collaborate with the full-stack web developer in the team to support the implementation of efficient data access and retrieval mechanisms. Implement data security measures to protect sensitive information and comply with relevant regulations. Demonstrate creativity in problem-solving and contribute ideas for improving data engineering processes and workflows. Embrace a learning mindset, staying updated with emerging database technologies, tools, and best practices. Explore third-party technologies as alternatives to legacy approaches for efficient data pipelines. Familiarize yourself with tools and technologies used in the team's workflow, such as Knime for data integration and analysis. Use Python for tasks such as data manipulation, automation, and scripting. Collaborate with the Data Research Engineer to estimate development efforts and meet project deadlines. Assume accountability for achieving development milestones. Prioritize tasks to ensure timely delivery, in a fast-paced environment with rapidly changing priorities. Collaborate with and assist fellow members of the Data Research Engineering Team as required. Perform tasks with precision and build reliable systems. Leverage online resources effectively like StackOverflow, ChatGPT, Bard, etc., while considering their capabilities and limitations. Skills And Experience Bachelor's degree in Computer Science, Information Systems, or a related field is desirable but not essential. Experience with data warehousing concepts and tools (e.g., Snowflake, Redshift) to support advanced analytics and reporting, aligning with the team’s data presentation goals. Skills in working with APIs for data ingestion or connecting third-party systems, which could streamline data acquisition processes. Proficiency with tools like Prometheus, Grafana, or ELK Stack for real-time database monitoring and health checks beyond basic troubleshooting. Familiarity with continuous integration/continuous deployment (CI/CD) tools (e.g., Jenkins, GitHub Actions). Deeper expertise in cloud platforms (e.g., AWS Lambda, GCP Dataflow) for serverless data processing or orchestration. Knowledge of database development and administration concepts, especially with relational databases like PostgreSQL and MySQL. Knowledge of Python programming, including data manipulation, automation, and object-oriented programming (OOP), with experience in modules such as Pandas, SQLAlchemy, gspread, PyDrive, and PySpark. Knowledge of SQL and understanding of database design principles, normalization, and indexing. Knowledge of data migration, ETL (Extract, Transform, Load) processes, or integrating data from various sources. Knowledge of cloud-based databases, such as AWS RDS and Google BigQuery. 
Eagerness to develop import workflows and scripts to automate data import processes. Knowledge of data security best practices, including access controls, encryption, and compliance standards. Strong problem-solving and analytical skills with attention to detail. Creative and critical thinking. Strong willingness to learn and expand knowledge in data engineering. Familiarity with Agile development methodologies is a plus. Experience with version control systems, such as Git, for collaborative development. Ability to thrive in a fast-paced environment with rapidly changing priorities. Ability to work collaboratively in a team environment. Good and effective communication skills. Comfortable with autonomy and ability to work independently.
Perks: Day off on the 3rd Friday of every month (one long weekend each month). Monthly Wellness Reimbursement Program to promote health and well-being. Monthly Office Commutation Reimbursement Program. Paid paternity and maternity leaves.
How to apply for this opportunity? Step 1: Click on Apply! and register or log in on our portal. Step 2: Complete the screening form and upload an updated resume. Step 3: Increase your chances to get shortlisted and meet the client for the interview!
About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
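The spreadsheet-to-database migration workflow described above has a very short happy path in Python. The sketch below loads a CSV with pandas, normalizes header names, writes the table to PostgreSQL through SQLAlchemy, and adds an index for a common lookup column; the connection string, file, and the sku column are hypothetical.

```python
import pandas as pd
from sqlalchemy import create_engine, text

# Hypothetical connection string and source file
engine = create_engine("postgresql+psycopg2://user:pass@localhost:5432/marketplace")
df = pd.read_csv("products.csv")

# Light cleanup: normalize headers, drop exact duplicate rows
df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
df = df.drop_duplicates()

# Write (replacing any previous load), then index the usual lookup column
df.to_sql("products", engine, if_exists="replace", index=False)
with engine.begin() as conn:
    conn.execute(text("CREATE INDEX IF NOT EXISTS idx_products_sku ON products (sku);"))
```

A production import would add validation rules and constraints (the data quality standards the posting mentions) rather than trusting the spreadsheet as-is.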
Posted 1 month ago
15.0 years
0 Lacs
Bhubaneshwar
On-site
Project Role: AI/ML Engineer
Project Role Description: Develops applications and systems that utilize AI tools and Cloud AI services, with a proper cloud or on-prem application pipeline of production-ready quality. Must be able to apply GenAI models as part of the solution; the work may also include, but is not limited to, deep learning, neural networks, chatbots, and image processing.
Must-have skills: Machine Learning
Good-to-have skills: NA
Minimum 12 years of experience is required.
Educational Qualification: 15 years of full-time education
Summary: As an AI/ML Engineer, you will develop applications and systems utilizing AI tools and Cloud AI services, with a proper cloud or on-prem application pipeline of production-ready quality. You will apply GenAI models as part of the solution, including deep learning, neural networks, chatbots, and image processing.
Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Expected to provide solutions to problems that apply across multiple teams.
- Lead research and development efforts in AI/ML technologies.
- Implement and optimize machine learning models.
- Conduct data analysis and interpretation for business insights.
Professional & Technical Skills:
- Must-have skills: Proficiency in Machine Learning.
- Strong understanding of statistical analysis and machine learning algorithms.
- Experience with data visualization tools such as Tableau or Power BI.
- Hands-on experience implementing various machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering (see the illustrative sketch at the end of this posting).
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization, to ensure data quality and integrity.
Additional Information:
- The candidate should have a minimum of 12 years of experience in Machine Learning.
- This position is based at our Bhubaneswar office.
- 15 years of full-time education is required.
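As a purely illustrative sketch of the data munging and classical algorithms listed above (the CSV path, feature names, and target column are hypothetical), a scikit-learn pipeline can combine normalization with one of the named algorithms, logistic regression:

# Illustrative sketch: cleaning, normalization, and logistic regression.
# The dataset, feature names, and target column are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("claims.csv").dropna()     # basic cleaning
X = df[["age", "premium", "prior_claims"]]  # hypothetical features
y = df["churned"]                           # hypothetical binary target

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# Scaling inside the pipeline is fit on training data only, avoiding leakage.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.3f}")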
Posted 1 month ago
4.0 years
0 Lacs
Greater Kolkata Area
On-site
We are seeking a highly skilled SQL Developer with 2–4 years of experience to join our data team. This role requires a strong background in writing optimized SQL queries, designing data models, building ETL processes, and supporting analytics teams with reliable, high-performance data solutions. You will play a key role in ensuring data quality, integrity, and accessibility across the organization.
What you will do:
Develop, optimize, and maintain complex SQL queries, stored procedures, views, and functions.
Design and implement efficient data models and database objects to support applications and reporting needs.
Build, schedule, and monitor ETL processes for ingesting, transforming, and exporting data across systems.
Collaborate with business analysts and developers to understand data requirements.
Tune SQL queries and indexes to ensure high performance on large-scale datasets (a minimal sketch appears after this posting).
Perform data profiling, validation, and cleansing activities to maintain data integrity.
Support ad-hoc data requests and report development for internal teams.
Create and maintain technical documentation for data architecture, ETL workflows, and query logic.
Assist in database deployments, migrations, and version control as part of the release process.
What we need from you:
Strong command of Microsoft T-SQL development.
Experience writing and optimizing complex stored procedures and queries.
Experience with performance tuning and query optimization.
Solid understanding of normalization, indexing, and relational data modeling.
Understanding of data governance, data quality, and security practices.
Familiarity with ETL tools such as SSIS and with data integration processes.
Familiarity with reporting tools such as SSRS and/or Power BI.
Strong problem-solving skills and attention to detail.
What we would like from you:
Bachelor's (or higher) degree in Computer Science, Information Systems, Engineering, or a related field.
2–4 years of experience in SQL development and relational database management.
Excellent communication and collaboration skills.
Someone who will embody our SEI values of courage, integrity, collaboration, inclusion, connection, and fun. Please see our website for more information: https://www.seic.com/
SEI's competitive advantage:
To help you stay energized, engaged, and inspired, we offer a wide range of benefits, including comprehensive care for your physical and mental well-being, a hybrid working environment, and a work-life balance that enables you to relax, recharge, and be there for the people you care about. Our benefits include Medical Insurance, Term Life Insurance, Voluntary Provident Fund, 10 predefined holidays and 2 floating holidays a year, Paid Time Off, and more.
We are a technology and asset management company delivering on our promise of building brave futures for our clients, our communities, and ourselves. Come build your brave future at SEI.
SEI is an Equal Opportunity Employer and so much more…
After 50 years in business, SEI is a leading global provider of investment processing, investment management, and investment operations solutions. Reflecting our experience within financial services and financial technology, our offices feature an open floor plan and numerous art installations designed to encourage innovation and creativity in our workforce. We recognize that our people are our most valuable asset and that a healthy, happy, and motivated workforce is key to our continued growth. At SEI, we're (literally) invested in your success.
We offer our employees paid parental leave, paid volunteer days, professional development assistance, and access to thriving employee networks.
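As a purely illustrative sketch of the tuning work this posting describes, the snippet below creates a covering index and then runs a parameterized T-SQL query from Python via pyodbc. The server, database, table, columns, and index name are hypothetical assumptions, not details from the posting:

# Minimal sketch: a covering index plus the parameterized query it supports.
# Server, database, table, and column names are hypothetical.
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=localhost;DATABASE=SalesDb;Trusted_Connection=yes;"
)

CREATE_INDEX = """
IF NOT EXISTS (SELECT 1 FROM sys.indexes WHERE name = 'IX_Orders_Customer_Date')
CREATE NONCLUSTERED INDEX IX_Orders_Customer_Date
    ON dbo.Orders (CustomerId, OrderDate)
    INCLUDE (TotalAmount);
"""

QUERY = """
SELECT OrderDate, TotalAmount
FROM dbo.Orders
WHERE CustomerId = ? AND OrderDate >= ?
ORDER BY OrderDate DESC;
"""

with pyodbc.connect(CONN_STR, autocommit=True) as conn:
    cursor = conn.cursor()
    cursor.execute(CREATE_INDEX)
    # Parameterized queries enable plan reuse and prevent SQL injection.
    for row in cursor.execute(QUERY, (42, "2024-01-01")):
        print(row.OrderDate, row.TotalAmount)

The INCLUDE column lets the engine answer the query from the index alone, avoiding the key lookups that would otherwise show up in the execution plan.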
Posted 1 month ago