5.0 - 9.0 years
0 Lacs
Kolkata, West Bengal
On-site
Genpact is a global professional services and solutions firm dedicated to delivering outcomes that shape the future. With over 125,000 employees across more than 30 countries, we are deeply motivated by our curiosity, agility, and the desire to create enduring value for our clients. We are driven by our purpose: the relentless pursuit of a world that works better for people. We serve and transform leading enterprises, including the Fortune Global 500, leveraging our profound business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

We are currently seeking applications for the position of Assistant Vice President, Databricks Squad Delivery Lead. As the Databricks Delivery Lead, you will be responsible for overseeing the complete delivery of Databricks-based solutions for our clients, ensuring the successful implementation, optimization, and scaling of big data and analytics solutions. You will play a crucial role in promoting the adoption of Databricks as the preferred platform for data engineering and analytics, while effectively managing a diverse team of data engineers and developers.

Your key responsibilities will include:
- Leading and managing Databricks-based project delivery, ensuring that all solutions adhere to client requirements, best practices, and industry standards.
- Serving as the subject matter expert (SME) on Databricks, offering guidance to teams on architecture, implementation, and optimization.
- Collaborating with architects and engineers to design optimal solutions for data processing, analytics, and machine learning workloads.
- Acting as the primary point of contact for clients, ensuring alignment between business requirements and technical delivery.
- Maintaining effective communication with stakeholders, providing regular updates on project status, risks, and achievements.
- Overseeing the setup, deployment, and optimization of Databricks workspaces, clusters, and pipelines.
- Ensuring that Databricks solutions are optimized for cost and performance, utilizing best practices for data storage, processing, and querying.
- Continuously evaluating the effectiveness of the Databricks platform and processes, and proposing improvements or new features to enhance delivery efficiency and effectiveness.
- Driving innovation within the team by introducing new tools, technologies, and best practices to improve delivery quality.

Qualifications we are looking for:

Minimum Qualifications / Skills:
- Bachelor's degree in Computer Science, Engineering, or a related field (Master's or MBA preferred).
- Relevant years of experience in IT services with a specific focus on Databricks and cloud-based data engineering.

Preferred Qualifications / Skills:
- Demonstrated experience in leading end-to-end delivery of data engineering or analytics solutions on Databricks.
- Strong expertise in cloud technologies (AWS, Azure, GCP), data pipelines, and big data tools.
- Hands-on experience with Databricks, Spark, Delta Lake, MLflow, and related technologies.
- Proficiency in data engineering concepts, including ETL, data lakes, data warehousing, and distributed computing.

Preferred Certifications:
- Databricks Certified Associate or Professional.
- Cloud certifications (AWS Certified Solutions Architect, Azure Data Engineer, or equivalent).
- Certifications in data engineering, big data technologies, or project management (e.g., PMP, Scrum Master).
If you are passionate about driving innovation, leading a high-performing team, and shaping the future of data engineering and analytics, we welcome you to apply for this exciting opportunity of Assistant Vice President, Databricks Squad Delivery Lead at Genpact.
Posted 6 days ago
5.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Job Title: Senior Data Engineer – Big Data, ETL & Java
Experience Level: 5+ Years
Employment Type: Full-time

About The Role
EXL is seeking a Senior Software Engineer with a strong foundation in Java, along with expertise in Big Data technologies and ETL development. In this role, you'll design and implement scalable, high-performance data and backend systems for clients in retail, media, and other data-driven industries. You'll work across cloud platforms such as AWS and GCP to build end-to-end data and application pipelines.

Key Responsibilities
- Design, develop, and maintain scalable data pipelines and ETL workflows using Apache Spark, Apache Airflow, and cloud platforms (AWS/GCP); a minimal sketch follows this posting.
- Build and support Java-based backend components, services, or APIs as part of end-to-end data solutions.
- Work with large-scale datasets to support transformation, integration, and real-time analytics.
- Optimize Spark, SQL, and Java processes for performance, scalability, and reliability.
- Collaborate with cross-functional teams to understand business requirements and deliver robust solutions.
- Follow engineering best practices in coding, testing, version control, and deployment.

Required Qualifications
- 5+ years of hands-on experience in software or data engineering.
- Proven experience in developing ETL pipelines using Java and Spark.
- Strong programming experience in Java (preferably with frameworks such as Spring or Spring Boot).
- Experience with Big Data tools including Apache Spark and Apache Airflow, and with cloud services such as AWS EMR, Glue, S3, and Lambda, or GCP BigQuery, Dataflow, and Cloud Functions.
- Proficiency in SQL and experience with performance tuning for large datasets.
- Familiarity with data modeling, warehousing, and distributed systems.
- Experience working in Agile development environments.
- Strong problem-solving skills and attention to detail.
- Excellent communication skills.

Preferred Qualifications
- Experience building and integrating RESTful APIs or microservices using Java.
- Exposure to data platforms like Snowflake, Databricks, or Kafka.
- Background in retail, merchandising, or media domains is a plus.
- Familiarity with CI/CD pipelines, DevOps tools, and cloud-based development workflows.
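To give a flavor of the batch pipelines this role describes, here is a minimal PySpark ETL sketch. It is illustrative only: the bucket paths, dataset, and column names are hypothetical placeholders, not part of any EXL system.

```python
# Minimal PySpark batch ETL sketch: ingest raw orders, clean, aggregate, write Parquet.
# All paths, dataset names, and columns are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_daily_etl").getOrCreate()

# Extract: read raw JSON landed in cloud object storage (e.g., S3 or GCS).
raw = spark.read.json("s3://example-bucket/raw/orders/2025-07-01/")

# Transform: drop malformed rows, normalize types, aggregate per customer.
daily = (
    raw.filter(F.col("order_id").isNotNull())
       .withColumn("amount", F.col("amount").cast("double"))
       .groupBy("customer_id")
       .agg(
           F.sum("amount").alias("total_amount"),
           F.count("order_id").alias("order_count"),
       )
)

# Load: write Parquet for downstream analytics and BI consumers.
daily.write.mode("overwrite").parquet("s3://example-bucket/curated/orders_daily/")
spark.stop()
```

In a production setting, a job like this would typically be packaged and scheduled from an Airflow DAG rather than run ad hoc.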
Posted 6 days ago
5.0 - 9.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
As an offshore Tech Lead with Databricks engineering experience, your primary responsibility will be to lead the team from offshore. You will develop and maintain a metadata-driven generic ETL framework for automating ETL code, which includes designing, building, and optimizing ETL/ELT pipelines using Databricks (PySpark/SQL) on AWS. Your role will involve ingesting data from various structured and unstructured sources such as APIs, RDBMS, flat files, and streaming.

Moreover, you will be expected to develop and maintain robust data pipelines for both batch and streaming data using Delta Lake and Spark Structured Streaming. Implementing data quality checks, validations, and logging mechanisms will also be part of your responsibilities. It will be crucial for you to optimize pipeline performance, cost, and reliability while collaborating with data analysts, BI, and business teams to deliver fit-for-purpose datasets. You will also support data modeling efforts, including star and snowflake schemas and denormalized tables, as well as assist with data warehousing initiatives. Working with orchestration tools like Databricks Workflows to schedule and monitor pipelines will be essential, and you are expected to follow best practices for version control, CI/CD, and collaborative development.

In terms of required skills, you should have hands-on experience in ETL/Data Engineering roles and strong expertise in Databricks (PySpark, SQL, Delta Lake), with the Databricks Data Engineer Certification preferred. Experience with Spark optimization, partitioning, caching, and handling large-scale datasets is crucial. Proficiency in SQL and scripting in Python or Scala is required, along with a solid understanding of data lakehouse/medallion architectures and modern data platforms. Additionally, experience working with cloud storage systems like AWS S3, familiarity with DevOps practices (Git, CI/CD, Terraform, etc.), and strong debugging, troubleshooting, and performance-tuning skills are necessary for this role.

In summary, you will develop and maintain ETL frameworks, optimize data pipelines, collaborate with various teams, and ensure data quality and reliability; your expertise in Databricks, ETL processes, data modeling, and cloud platforms will be instrumental in driving the success of the projects you undertake.

About Virtusa: At Virtusa, we value teamwork, quality of life, and professional and personal development. Joining our team means becoming part of a global workforce of 27,000 individuals who are dedicated to your growth. We offer exciting projects, opportunities, and exposure to state-of-the-art technologies throughout your career with us. We believe in collaboration, a team-oriented environment, and providing a dynamic space for great minds to nurture new ideas and achieve excellence.
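As an illustration of the streaming pattern mentioned above (Delta Lake with Spark Structured Streaming plus a data-quality gate), here is a minimal sketch. It assumes a Databricks-style environment where the Delta Lake libraries are available; the paths and schema are hypothetical.

```python
# Sketch: stream JSON files into a Delta table with a basic data-quality filter.
# Paths, schema, and checkpoint locations are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("events_stream").getOrCreate()

schema = StructType([
    StructField("event_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_ts", TimestampType()),
])

stream = (
    spark.readStream.schema(schema).json("/mnt/landing/events/")
         # Data-quality gate: drop rows missing a key or carrying a negative amount.
         .filter(F.col("event_id").isNotNull() & (F.col("amount") >= 0))
)

query = (
    stream.writeStream.format("delta")
          .option("checkpointLocation", "/mnt/checkpoints/events/")
          .outputMode("append")
          .start("/mnt/delta/events/")
)
query.awaitTermination()
```

The checkpoint location is what gives the stream recovery semantics across restarts, which is why it is kept separate from the data path.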
Posted 6 days ago
0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Selected intern's day-to-day responsibilities include:
- Peer-cohort building: identify, approach, and onboard a minimum of ten fellow students to form a fundraising cohort that will work under your guidance.
- Training & coaching: conduct quick WhatsApp or in-person sessions to teach your cohort proven fundraising techniques, share scripts, and set weekly targets.
- Supervision & performance tracking: monitor each student's progress, provide daily nudges, and ensure the team collectively meets or exceeds campaign goals.
- Personal fundraising: lead by example and raise funds through your own network.

About Company: Aapka Sahara Foundation is driven by the belief that every small act of kindness can spark lasting change. We stand shoulder-to-shoulder with children, families, and communities facing disability, educational barriers, and economic hardship. Together, we bring hope, dignity, and opportunity to those who need it most, because when you give to ASF, you give without compromise.
Posted 6 days ago
8.0 - 12.0 years
0 Lacs
Karnataka
On-site
Working with data on a day-to-day basis excites you, and you are interested in building robust data architecture to identify data patterns and optimize data consumption for customers who will forecast and predict actions based on data. If this excites you, then working in our intelligent automation team at Schneider AI Hub is the perfect fit for you.

As a Lead Data Engineer at Schneider AI Hub, you will play a crucial role in the AI transformation of Schneider Electric by developing AI-powered solutions. Your responsibilities will include expanding and optimizing data and data pipeline architecture, ensuring optimal data flow and collection for cross-functional teams, and supporting software engineers, data analysts, and data scientists on data initiatives. You will be responsible for creating and maintaining optimal data pipeline architecture, designing the right schema to support functional requirements, and building production data pipelines from ingestion to consumption. Additionally, you will create preprocessing and postprocessing for various forms of data, develop data visualization and business intelligence tools, and implement internal process improvements for automating manual data processes.

To qualify for this role, you should hold a bachelor's or master's degree in computer science, information technology, or another quantitative field and have a minimum of 8 years of experience as a data engineer supporting large data transformation initiatives related to machine learning. Strong analytical skills, experience with Azure cloud services, ETL using Spark, and proficiency in scripting languages like Python and PySpark are essential requirements for this position.

As a team player committed to the success of the team and projects, you will collaborate with various stakeholders to ensure data delivery architecture is consistent and secure across multiple data centers. Join us at Schneider Electric, where we create connected technologies that reshape industries, transform cities, and enrich lives, with a diverse and inclusive culture that values the contribution of every individual. If you are passionate about success and eager to contribute to cutting-edge projects, we invite you to be part of our dynamic team at Schneider Electric in Bangalore, India.
Posted 6 days ago
2.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
JOB_POSTING-3-72797-2

Job Description

Role Title: Manager, Model Risk Management (L09)

Company Overview
Synchrony (NYSE: SYF) is a premier consumer financial services company delivering one of the industry's most complete digitally enabled product suites. Our experience, expertise, and scale encompass a broad spectrum of industries including digital, health and wellness, retail, telecommunications, home, auto, outdoors, pet, and more. We have recently been ranked #2 among India's Best Companies to Work for by Great Place to Work. We were among the Top 50 India's Best Workplaces in Building a Culture of Innovation by GPTW and Top 25 among Best Workplaces in BFSI by GPTW. We have also been recognized by the AmbitionBox Employee Choice Awards among the Top 20 Mid-Sized Companies, ranked #3 among Top Rated Companies for Women, and among Top-Rated Financial Services Companies. Synchrony celebrates ~52% women talent. We offer Flexibility and Choice for all employees and provide best-in-class employee benefits and programs that cater to work-life integration and overall well-being. We provide career advancement and upskilling opportunities, focusing on Advancing Diverse Talent to take up leadership roles.

Organizational Overview
Synchrony's Risk Team is a dynamic and innovative team dedicated to providing oversight as the 2nd Line of Defense. As a member of this team, you'll play a pivotal role in high-quality model validation, ensuring that modeling techniques and results are consistent with their strategic uses, that models perform as intended, and that they comply with related MRM policies, standards, and procedures as well as regulations. This role requires expertise in supporting model validation initiatives related to quantitative analytic modeling with the Synchrony Model Governance and Validation team. If you are passionate about model validation and modeling techniques, then Synchrony's Risk team is the place to be.

Role Summary/Purpose
The Manager, Model Validation is responsible for model validation focusing on statistical, Machine Learning (ML), and other models, ensuring they meet the related Model Risk Management policies, standards, and procedures as well as regulations (SR 11-7). This role requires expertise in supporting model validation initiatives related to quantitative analytic modeling with the Synchrony Model Governance and Validation team. This is an individual contributor role.

Key Responsibilities
- Conduct full-scope model reviews, annual reviews, and ongoing monitoring of model performance for both internally and vendor-developed models, including new and existing, statistical/ML or non-statistical models, with effective challenge to identify potential issues.
- Evaluate model development data quality, methodology conceptual soundness, and accuracy; conduct model performance testing including back-testing, sensitivity analysis, and benchmarking (an illustrative monitoring example appears at the end of this posting); and identify and highlight issues in a timely manner.
- Prepare proper documentation within expected timeframes to effectively highlight findings for further review/investigation and facilitate informed discussions on key analytics.
- Conduct in-depth analysis of large data sets and support the review and maintenance of relevant models and model validation documentation.
- Communicate technical information verbally and in writing to both technical and business teams effectively.
Additionally, the role requires the ability to write detailed validation documents/reports for management, and to support an additional book of work or special projects as and when required.

Required Skills/Knowledge
- Bachelor's/Master's degree (or foreign equivalent) in Statistics, Mathematics, or Data Science and 2+ years of model development or model validation experience in the retail section of a U.S. financial services or banking organization; in lieu of a Master's degree, 4+ years of model development/model validation experience in the retail section of financial services or banking.
- Knowledge and experience of customer-facing models including fraud acquisition, transaction fraud, credit acquisition, credit account management, and marketing models.
- Understanding of quantitative analysis methods or approaches in relation to credit risk models.
- Strong programming skills with 2+ years of hands-on, proven experience using Python, Spark, SAS, SQL, and Data Lake to perform statistical analysis and manage complex or large amounts of data.

Desired Skills/Knowledge
- 2+ years of proven experience in Model Risk Management or model development in the financial services industry, including both analytic/modeling/quantitative experience and governance or other credit/financial discipline.
- Ability to apply analytical skills to solve problems creatively.
- Sharp focus on accuracy with extreme attention to detail, and able to make recommendations as opportunities arise.
- Self-motivated; acts promptly and effectively on assigned tasks.
- Excellent written and oral communication and presentation skills.

Eligibility Criteria
Bachelor's/Master's degree (or foreign equivalent) in Statistics, Mathematics, or Data Science and 2+ years of model development or model validation experience in the retail section of a U.S. financial services or banking organization; in lieu of a Master's degree, 4+ years of model development/model validation experience in the retail section of financial services or banking.

Work Timings: This role qualifies for Enhanced Flexibility and Choice offered in Synchrony India and will require the incumbent to be available between 06:00 AM Eastern Time and 11:30 AM Eastern Time (timings are anchored to US Eastern hours and will adjust twice a year locally). This window is for meetings with India and US teams; the remaining hours are flexible for the employee to choose. Exceptions may apply periodically due to business needs; please discuss this with the hiring manager for more details.

For Internal Applicants
- Understand the criteria or mandatory skills required for the role before applying.
- Inform your Manager or HRM before applying for any role on Workday.
- Ensure that your Professional Profile is updated (fields such as Education, Prior experience, Other skills); it is mandatory to upload your updated resume (Word or PDF format).
- You must not be on any corrective action plan (Formal/Final Formal, PIP).
- L4 to L7 employees who have completed 12 months in the organization and 12 months in their current role and level are eligible.
- L8+ employees who have completed 18 months in the organization and 12 months in their current role and level are eligible.
- L4+ employees can apply.

Grade/Level: 09
Job Family Group: Credit
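One common ongoing-monitoring check in this space is the Population Stability Index (PSI), which compares a model's score distribution from development against recent production scores. Below is a minimal sketch; the 0.1/0.25 thresholds are widely cited rules of thumb rather than Synchrony policy, and the data is synthetic.

```python
# Population Stability Index (PSI) sketch for score-distribution monitoring.
# Bin edges come from the development sample; thresholds are illustrative rules of thumb.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, n_bins: int = 10) -> float:
    """PSI between a development-time score sample and a recent production sample."""
    edges = np.quantile(expected, np.linspace(0, 1, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # cover out-of-range production scores
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # A small floor avoids division by zero and log(0) in empty bins.
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

rng = np.random.default_rng(0)
dev_scores = rng.beta(2, 5, 50_000)      # synthetic development sample
prod_scores = rng.beta(2.3, 5, 10_000)   # synthetic, slightly shifted production sample
print(f"PSI = {psi(dev_scores, prod_scores):.4f}")
# Rule of thumb: < 0.1 stable, 0.1-0.25 watch, > 0.25 investigate.
```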
Posted 6 days ago
2.0 - 6.0 years
0 Lacs
Haryana
On-site
Genpact is a global professional services and solutions firm focused on delivering outcomes for clients across various industries. With a workforce of over 125,000 professionals in more than 30 countries, we are driven by curiosity, agility, and a commitment to creating lasting value. Our purpose is to pursue a world that works better for people, transforming leading enterprises worldwide, including Fortune Global 500 companies. We leverage our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI to drive innovation and success.

We are currently seeking applications for the position of Assistant Manager, Azure Data Engineer. In this role, you will be responsible for designing, developing, and maintaining data integration and provisioning solutions for data analytics and reporting teams.

Key responsibilities include:
- Designing, developing, and maintaining data pipelines using Azure Data Factory, Databricks, and other Azure services.
- Monitoring and optimizing Azure data pipelines for high performance and reliability.
- Orchestrating dataflows and developing real-time and batch data processing solutions using Azure Synapse Analytics, Azure Data Lake, or equivalent platforms.
- Implementing data validation and cleansing procedures to ensure data quality and integrity (a small sketch follows this posting).
- Collaborating with various data teams to provide data for analytics and reporting purposes.
- Automating data pipelines for scalability and ease of monitoring using tools like Azure Logic Apps or Azure Automation.

Qualifications we seek in you:

Minimum Qualifications/Skills:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Relevant experience in Azure cloud-based data engineering or similar roles.
- Proficiency in Azure Data Factory, Azure Databricks, Azure SQL Database, and Azure Synapse Analytics.
- Experience in ETL development, data orchestration, Python, SQL, and Spark for data engineering tasks.
- Familiarity with CI/CD pipeline development using Azure DevOps or similar tools.

Preferred Qualifications/Skills:
- Strong problem-solving skills and attention to detail.
- Excellent communication skills for collaboration with cross-functional teams and business stakeholders.
- Certification as a Microsoft Azure Data Engineer or Azure Solutions Architect.

If you are looking to join a dynamic team and work in a fast-paced environment, this role offers the opportunity to contribute to the success of our data operations. Join us in shaping the future of data analytics and reporting at Genpact.

Job Details:
- Position: Assistant Manager
- Location: India-Gurugram
- Schedule: Full-time
- Education Level: Bachelor's/Graduation/Equivalent
- Job Posting: May 1, 2025, 6:23:46 AM
- Unposting Date: Oct 28, 2025, 2:23:46 AM
- Master Skills List: Operations
- Job Category: Full Time
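The validation-and-cleansing bullet above is the kind of step that often runs as a small Databricks notebook task. A minimal PySpark sketch follows, with hypothetical table paths, columns, and rules:

```python
# Sketch of a validation/cleansing stage: split rows into clean vs. quarantined sets.
# Paths, columns, and validation rules are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("cleanse_customers").getOrCreate()
df = spark.read.parquet("/mnt/raw/customers/")

# Validation rules: required key present, email present and roughly well-formed.
is_valid = (
    F.col("customer_id").isNotNull()
    & F.col("email").isNotNull()
    & F.col("email").rlike(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")
)

clean = df.filter(is_valid).dropDuplicates(["customer_id"])
quarantine = df.filter(~is_valid)  # set aside for review instead of silently dropping

clean.write.mode("overwrite").parquet("/mnt/curated/customers/")
quarantine.write.mode("overwrite").parquet("/mnt/quarantine/customers/")
```

Quarantining invalid rows, rather than discarding them, preserves an audit trail and makes data-quality regressions visible.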
Posted 6 days ago
3.0 - 7.0 years
0 Lacs
Coimbatore, Tamil Nadu
On-site
You will be responsible for developing and maintaining Python-based REST APIs, with a strong emphasis on adhering to OpenAPI (Swagger) specifications and writing clean, testable code. It will be crucial to collaborate effectively with internal teams to ensure alignment on data structures, endpoints, versioning strategies, and deployment timelines. You will utilize tools such as Postman and Swagger UI to validate and document API endpoints. Monitoring and improving the performance, reliability, and security of deployed APIs will be a key part of your role, as will supporting API consumers by maintaining clear documentation and assisting with technical queries. Your contributions will extend to continuous improvement initiatives in development practices, code quality, and system observability, including logging and error handling. Version control and CI/CD workflows will be managed using tools like GitHub, Azure DevOps, or similar platforms.

The ideal candidate has a minimum of 3 years of experience in backend development using Python, with familiarity with frameworks like FastAPI and Flask. A solid understanding of REST API design, versioning, authentication, and documentation, particularly OpenAPI/Swagger, is required, as is proficiency in tools such as Postman, VS Code, GitHub, and SQL databases. Knowledge of Azure Functions or cloud-based deployment patterns is advantageous, and experience with Azure is preferred but not mandatory. Troubleshooting technical issues, analyzing logs, and collaborating with support or development teams to identify root causes will be part of your day-to-day responsibilities. Experience or interest in distributed data processing with Spark or real-time data pipelines using Kafka is a plus.

You should be a team player with a collaborative mindset, proactive in sharing knowledge, and adept at problem-solving. Proficiency in English, both written and spoken, is necessary for effective communication within the team. If you do not find a suitable role among the current openings but are a passionate and skilled engineer, we encourage you to reach out to us at careers@hashagile.com. Our company is growing rapidly, and we are always looking for enthusiastic individuals to join our team.
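Since the posting centers on Python REST APIs documented with OpenAPI, a minimal FastAPI sketch may help set expectations: FastAPI derives the OpenAPI (Swagger) document automatically from typed models, serving Swagger UI at /docs and the raw spec at /openapi.json. The resource and fields below are hypothetical.

```python
# Minimal FastAPI sketch: typed request/response models drive the generated OpenAPI spec.
# The "items" resource and its fields are hypothetical. Run with: uvicorn main:app --reload
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="Example Items API", version="1.0.0")

class Item(BaseModel):
    id: int
    name: str
    price: float

_DB: dict[int, Item] = {}  # in-memory stand-in for a real database

@app.post("/items", response_model=Item, status_code=201)
def create_item(item: Item) -> Item:
    _DB[item.id] = item
    return item

@app.get("/items/{item_id}", response_model=Item)
def read_item(item_id: int) -> Item:
    if item_id not in _DB:
        raise HTTPException(status_code=404, detail="Item not found")
    return _DB[item_id]
```

Because the spec is generated from the code, endpoints and their documentation cannot drift apart, which is the main appeal of this workflow with Postman and Swagger UI.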
Posted 6 days ago
1.0 years
0 Lacs
Pune, Maharashtra, India
On-site
We're looking for a Associate Software Engineer This role is Office Based, Pune Office As a Software Engineer , you will be designing and delivering solutions that scale to meet the needs of some of the largest and most innovative organizations in the world. You will work with team members to understand and exceed the expectations of users, constantly pushing the technical envelope, and helping Cornerstone deliver great results. Working in an agile software development framework focused on development sprints and regular release cycles, you’ll own the complete feature story and mentor juniors. In this role, you will… Design, develop, and enhance .NET applications and services for legacy and cloud platforms, utilizing ASP.NET, C#, .NET, React, and CI/CD tools Analyze product and technical user stories and convey technical specifications in a concise and effective manner. Code & deliver a working deliverable, with a ‘first time right’ approach. Contribute to architectural decisions and participate in designing robust, scalable solutions. Troubleshoot and resolve complex production issues, deliver detailed root cause analysis (RCA), and collaborate with global Engineering, Product, and Release teams. Participate in sprint planning, and technical design reviews; provide input as appropriate. Partner with engineers, product managers, and other team members as appropriate Continuously expand and maintain deep knowledge of our products and technologies. You’ve Got What It Takes If You Have.. Bachelor’s/Master’s in Computer Science or related field 1 - 2 years’ hands-on experience with ASP.NET, C#, and .NET. Basic exposure to Gen AI and familiarity with AI tools and their applications. Strong in OOP and SOLID design principles. Should be very good at analyzing and Debugging/Troubleshooting functional and technical issues Proficient experience with relational databases such as Microsoft SQL Server/Postgres. Able to optimize designs/queries for scale. Proven experience in developing Microservices and RESTful services. Strong TDD skills with experience in unit testing frameworks like NUnit or xUnit. Proficiency with ORMs such as Entity Framework or NHibernate. Good understanding on secure development practices. Proactively codes to avoid Security issues whilst able to resolve all security findings Excellent analytical, quantitative and problem-solving abilities. Conversant in algorithms, software design patterns, and their best usage. Good understanding on how to deal with concurrency and parallel work streams. Self-motivated, requiring minimal oversight. Effective team player with strong communication skills and an ability to manage multiple priorities. Passion for continuous learning and technology improvement. Good to have Exposure to modern java script frameworks like Angular or React Exposure to non-relational DBs like MongoDB .Experience developing RESTful services, or other SOA development experience (preferably AWS) Our Culture Spark Greatness. Shatter Boundaries. Share Success. Are you ready? Because here, right now – is where the future of work is happening. Where curious disruptors and change innovators like you are helping communities and customers enable everyone – anywhere – to learn, grow and advance. To be better tomorrow than they are today. Who We Are Cornerstone powers the potential of organizations and their people to thrive in a changing world. Cornerstone Galaxy, the complete AI-powered workforce agility platform, meets organizations where they are. 
With Galaxy, organizations can identify skills gaps and development opportunities, retain and engage top talent, and provide multimodal learning experiences to meet the diverse needs of the modern workforce. More than 7,000 organizations and 100 million+ users in 180+ countries and in nearly 50 languages use Cornerstone Galaxy to build high-performing, future-ready organizations and people today. Check us out on LinkedIn, Comparably, Glassdoor, and Facebook!
Posted 6 days ago
12.0 - 18.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
We are looking for an experienced Manager, Data Engineering with expertise in Databricks or the Apache data stack to lead complex data platform implementations. In this role, you will spearhead high-impact data engineering projects for global clients, delivering scalable solutions and catalyzing digital transformation.

You should have 12-18 years of experience in data engineering, with at least 3-5 years in a leadership or managerial capacity. Hands-on experience in Databricks or core Apache stack components such as Spark, Kafka, Hive, Airflow, and NiFi is essential. Proficiency in one or more cloud platforms (AWS, Azure, or GCP) is preferred, ideally with Databricks on the cloud. Strong programming skills in Python, Scala, and SQL are required, along with experience in building scalable data architectures, delta lakehouses, and distributed data processing. Familiarity with modern data governance, cataloging, and data observability tools is advantageous.

Your responsibilities will include leading the architecture, development, and deployment of modern data platforms utilizing Databricks, Apache Spark, Kafka, Delta Lake, and other big data tools. You will design and implement data pipelines (batch and real-time), data lakehouses, and large-scale ETL frameworks, with delivery accountability for data engineering programs across various industries. Collaboration with global stakeholders, product owners, architects, and business teams to understand requirements and deliver data-driven outcomes will be a key aspect of your role. Additionally, you will be responsible for ensuring best practices in DevOps, CI/CD, infrastructure-as-code, data security, and governance, and for managing and mentoring a team of 10-25 engineers, including performance reviews, capability building, and coaching.

At GlobalLogic, we prioritize a culture of caring where people come first. You will have opportunities for continuous learning and development, engaging in interesting and meaningful work that makes an impact. We believe in providing balance and flexibility to help you integrate your work and life effectively. GlobalLogic is a high-trust organization built on integrity and ethical values, providing a safe and reliable environment for your professional growth and success.

GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner known for collaborating with leading companies worldwide to create innovative digital products and experiences. Join us to be a part of transforming businesses and redefining industries through intelligent products, platforms, and services.
Posted 6 days ago
3.0 - 8.0 years
0 Lacs
Haryana
On-site
As a key member of the Data consulting team, you will work directly with partners and senior stakeholders of clients to design and implement big data and analytics solutions. The role requires excellent communication and organizational skills, along with a problem-solving attitude. You will have the opportunity to collaborate with a world-class team of business consultants and engineers to solve complex business problems using data and analytics techniques. Working in a highly entrepreneurial environment, you can expect fast-track career growth and a best-in-industry remuneration package.

Your primary responsibilities will include developing data solutions within Big Data Azure and/or other cloud environments, working with diverse data sets to meet the requirements of Data Science and Data Analytics teams, and building and designing data architectures using tools such as Azure Data Factory, Databricks, Data Lake, and Synapse. You will liaise with the CTO, Product Owners, and other Operations teams to deliver engineering roadmaps, perform data mapping activities, assist the Data Analyst team in developing KPIs and reporting, maintain relevant documentation and knowledge bases, and research and suggest new database products, services, and protocols.

To excel in this role, you must have technical expertise in emerging Big Data technologies such as Python, Spark, Hadoop, Clojure, Git, SQL, and Databricks, as well as experience with visualization tools like Tableau and Power BI. Proficiency in cloud, container, and microservice infrastructures, data modeling, query techniques, and complexity analysis is essential. Experience or knowledge of agile methodologies like Scrum, and of working with development teams and product owners, will be beneficial; certifications in any of the mentioned areas are preferred.

You should be able to work independently, communicate effectively with remote teams, and demonstrate curiosity to learn and apply emerging technologies to solve business problems. Timely communication and escalation of issues or dependencies to higher management are crucial aspects of this role. If you are interested in this opportunity, please send your resume to sakshi.vohra@invokhr.com and careers@invokhr.com.
Posted 6 days ago
3.0 - 7.0 years
0 Lacs
Haryana
On-site
You should have around 3 years of experience working as an NLP Engineer or in a similar role, with a demonstrated understanding of NLP techniques for text representation and semantic extraction. Your expertise should also extend to data structures and modeling, enabling you to design software architecture effectively. A deep understanding of text representation techniques such as n-grams, bag of words, and sentiment analysis, as well as statistics and classification algorithms, is essential for this role. Your proficiency in Python is crucial, along with a solid grasp of data structures and algorithms. Experience with machine learning libraries such as scikit-learn, PyTorch, and TensorFlow will be highly beneficial. As a Python AI/ML Developer, you should bring an analytical mind with strong problem-solving abilities to excel in this position.
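To make the listed techniques concrete, here is a minimal scikit-learn sketch of a bag-of-words representation (with n-grams) feeding a linear classifier for sentiment. The four-document corpus is purely illustrative.

```python
# Bag-of-words with unigrams and bigrams feeding logistic regression
# for a toy sentiment-classification task. The corpus is illustrative only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "great product, works perfectly",
    "absolutely love the fast delivery",
    "terrible quality, broke in a day",
    "awful support, complete waste of money",
]
labels = [1, 1, 0, 0]  # 1 = positive sentiment, 0 = negative

model = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),  # n-gram bag-of-words features
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)

print(model.predict(["love the quality"]))  # expected: [1]
```

The same pipeline shape scales to real corpora by swapping CountVectorizer for a TF-IDF vectorizer, or by moving to PyTorch/TensorFlow models for deeper representations.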
Posted 6 days ago
1.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data, and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits, and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

As a Medicare Risk Adjustment Data Analyst, you'll play a crucial role in supporting the development and enhancement of new analytical applications related to Medicare risk adjustment, as well as supporting existing applications such as Coding Transformation Modernization, Attribution Analytics, and the Financial Risk Adjustment and Management Engine.

Primary Responsibilities
This position is for the OI Clinical Solutions - Decision Intelligence team; upon selection, you will be part of a dynamic team developing and delivering best-in-class analytics for end users. Your work will focus on understanding the CMS Medicare Advantage business and developing analytics according to business and technical requirements. Key responsibilities include:
- Gather and analyze business and/or functional requirements from one or more client business teams.
- Validate requirements with stakeholders and the day-to-day project team; provide suggestions and recommendations in line with industry best practices.
- Develop and deliver best-in-class analytics for end users using Big Data and cloud platforms.
- Document, discuss, and resolve business, data, data processing, and BI/reporting issues within the team, across functional teams, and with business stakeholders.
- Present written and verbal data analysis findings to both the project team and business stakeholders as required to support the requirements-gathering phase and issue-resolution activities.
- Manage changing business priorities and scope, and work on multiple projects concurrently.
- Be self-motivated and proactive, with the ability to work in a fast-paced environment.
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies regarding flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary, or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications
- Undergraduate degree or equivalent experience.
- 1+ years of work experience with Python, Spark, and Hive, with solid experience in developing analytics at scale using these tools.
- 1+ years of work experience developing end-to-end analytics pipelines on Hadoop/Big Data platforms.
- 1+ years of work experience in SQL or associated languages.
- 1+ years of work experience converting business requirements into technical requirements, with the ability to develop best-in-class code to those requirements.
- Proven interpersonal, collaboration, diplomatic, influencing, planning, and organizational skills.
- Consistently clear and concise written and verbal communication.
- Proven ability to effectively use complex analytical, interpretive, and problem-solving techniques.
- Proven relationship management skills to partner and influence across organizational lines.
- Demonstrated ability to work under pressure and meet tight deadlines with proactivity, decisiveness, and flexibility.

Preferred Qualifications
- AWS/GCP or other cloud-based platform development experience.
- Understanding of Medicare risk adjustment programs.
- Understanding of CMS datasets such as MMR/MOR/EDPS.

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location, and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health, which are disproportionately experienced by people of color, historically marginalized groups, and those with lower incomes. We are committed to mitigating our impact on the environment and to enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.
Posted 6 days ago
2.0 - 6.0 years
0 Lacs
Haryana
On-site
As a Data Engineering Specialist, you will be responsible for assessing, capturing, and translating complex business issues into structured technical tasks for the data engineering team. This includes designing, building, launching, optimizing, and extending full-stack data and business intelligence solutions. Your role will involve supporting the build of big data environments, with a focus on improving data pipelines and data quality, and working with stakeholders to meet business needs. You will create data access tools for the analytics and data science teams, conduct code reviews, assist other developers, and train team members as required. Additionally, you will ensure that developed systems comply with industry standards and best practices while meeting project requirements.

To excel in this role, you should possess a bachelor's degree in computer science engineering or equivalent, or relevant experience. Certification in cloud technologies, especially Azure, would be beneficial. You should have 2-3+ years of development experience building and maintaining ETL/ELT pipelines over various sources, along with operational programming tasks. Experience with Apache data projects or cloud platform equivalents, and proficiency in programming languages like Python, Scala, R, Java, Golang, Kotlin, C, or C++, is required.

Your work will involve collaborating closely with data scientists, machine learning engineers, and stakeholders to understand requirements and develop data-driven solutions. Troubleshooting, debugging, and resolving issues within generative AI system development, as well as documenting processes, specifications, and training procedures, will be part of your responsibilities. In summary, this role requires a strong background in data engineering, proficiency in cloud technologies, experience with data projects and programming languages, and the ability to collaborate effectively with various stakeholders to deliver high-quality data solutions.
Posted 6 days ago
10.0 - 14.0 years
0 Lacs
Hyderabad, Telangana
On-site
We are seeking a Senior Staff Engineer to contribute to the development of innovative programs and products tailored to the requirements of Experian's clients, particularly those in the financial services sector. Our focus includes addressing critical questions such as enhancing the robustness and scalability of our loan origination modeling approach, and tackling crucial aspects of the lending industry such as bias, fairness, and explainable AI. As an evolving team, we embody a startup mindset within the framework of a larger organization. Our emphasis lies on agility, impact, and transforming the organizational culture around us through our outcomes and operational methodologies.

Your responsibilities will include taking on challenging assignments within the development teams, offering substantial technical expertise across the entire development cycle, and guiding junior team members. You will play a pivotal role in executing technical and business strategies, ensuring the achievement of functional objectives, and comprehensively supporting products by grasping how their various components interconnect. You will support complex software development projects by contributing to planning and system design and mentoring junior developers, innovate and architect solutions for intricate technical issues or system enhancements, and steer the technical direction for product development, encompassing technology selection and enhancement plans.

Your tasks will involve developing Java and Scala components for our analytics product platforms on AWS, actively engaging with the platform and applications, and collaborating with geographically dispersed cross-functional teams to elevate the value of our Analytics offerings. Additionally, you will be involved in enhancing the product to optimize cost efficiency while maximizing scalability and stability. You will report to a Senior Manager; your primary workplace is Hyderabad, with a requirement to work from the office two days a week in a hybrid model.

Key Skills Required:
- Proficiency in distributed data processing frameworks like Spark.
- Familiarity with public cloud platforms such as AWS, Azure, and GCP (preferably AWS).
- Experience with Docker, Kubernetes, CI/CD pipelines, and observability tools.
- Hands-on expertise in Scala, Java, and Python.

Qualifications:
- Over 10 years of industry experience in object-oriented programming and asynchronous programming.
- Bachelor's degree in computer science or a related field.

We welcome individuals who are passionate about leveraging their technical expertise to drive innovation and contribute to the growth and success of our dynamic team.
Posted 6 days ago
0.0 - 31.0 years
6 - 16 Lacs
Navi Mumbai
On-site
🌟 Golden Job Opportunity for ITI Electricians, Wiremen – Utility Sector 🌟

📍 Job Locations (Local, nearby location will be provided): Vasai | Nalasopara | Virar | Thane | Kalyan | Dombivali | Badlapur | Ulhasnagar | Navi Mumbai | Vashi | Palghar | Boisar | Dahanu | Saphale | Bhandup | Mulund | South Mumbai
🗓️ Interview Date: 23-07-2025 (Meet Parvesh)
⏰ Time: 11:00 AM – 4:00 PM
🔗 Apply Now: https://forms.gle/N29orFYh6mwWqsyP9

💼 Salary & Incentives for ITI Meter Installers:

Incentives, Urban Area (₹90 per meter):
200 meters: 200 x 90 = ₹18,000
400 meters: 400 x 90 = ₹36,000
600 meters: 600 x 90 = ₹54,000
Example: Install 700 meters and earn ₹63,000/month! 💸
💰 Potential Monthly Earnings: Up to ₹1,00,000!

Incentives, Rural Area (₹120 per meter):
200 meters: 200 x 120 = ₹24,000
400 meters: 400 x 120 = ₹48,000
600 meters: 600 x 120 = ₹72,000
Example: Install 700 meters and earn ₹84,000/month! 💸
Potential Monthly Earnings: Up to ₹1 Lakh! 😎

🔧 Job Roles:
* Electric Meter Installation
* Replacing DT, Feeder & CT Meters

🎓 Eligibility:
* ITI in Electrical (Freshers & Experienced welcome)
* 10th Pass candidates with meter installation experience are also eligible

📞 For More Information, Contact:
📱 Parvesh: 9920266168
📱 Aditya: 970271208

🚀 Apply now and spark your career in the power sector!
Don't miss out – limited slots available!
Posted 6 days ago
0.0 - 31.0 years
3 - 14 Lacs
Mumbai/Bombay
On-site
**🌟 Golden Job Opportunity for ITI Electricians, Wiremen – Utility Sector 🌟**

📍 **Job Locations (Local):** Thane
🗓️ **Interview Date:** Saturday, *26th July 2025*
⏰ **Time:** 9:00 AM – 4:00 PM
📌 **Venue:** **Quess Corp Limited**, Ahura Centre, B-Wing, 5th Floor, Mahakali Caves Rd, Andheri East, Mumbai, Maharashtra 400093
🔗 **Google Map:** [Click here](https://g.co/kgs/ZuMdtYc)

💼 **Salary & Incentives for ITI Meter Installers:**

*Incentives: Urban Area* (₹90 per meter)
200 meters: 200 x 90 = ₹18,000
400 meters: 400 x 90 = ₹36,000
600 meters: 600 x 90 = ₹54,000
*Example: Install 700 meters and earn ₹63,000/month!* 💸
💰 **Potential Monthly Earnings: Up to ₹1,00,000!**

*Incentives: Rural Area* (₹120 per meter)
200 meters: 200 x 120 = ₹24,000
400 meters: 400 x 120 = ₹48,000
600 meters: 600 x 120 = ₹72,000
*Example: Install 700 meters and earn ₹84,000/month!* 💸
*Potential Monthly Earnings: Up to ₹1 Lakh! 😎*

🔧 **Job Roles:**
* Electric Meter Installation
* Survey & Technical Support
* Replacing DT, Feeder & CT Meters

🎓 **Eligibility:**
* ITI in Electrical (Freshers & Experienced welcome)
* Bike & Driving License required for Installer role

📞 **For More Information, Contact:** Amit, 📞 9702835982

🚀 **Apply now and spark your career in the power sector!**
***Don't miss out – limited slots available!***
Posted 6 days ago
0.0 - 31.0 years
3 - 12 Lacs
Dombivali
On-site
**🌟 Golden Job Opportunity for ITI Electricians, Wiremen – Utility Sector 🌟**

📍 **Job Locations (Local):** Kalyan, Dombivali
🗓️ **Interview Date:** Saturday, *26th July 2025*
⏰ **Time:** 9:00 AM – 4:00 PM
📌 **Venue:** **Quess Corp Limited**, Ahura Centre, B-Wing, 5th Floor, Mahakali Caves Rd, Andheri East, Mumbai, Maharashtra 400093
🔗 **Google Map:** [Click here](https://g.co/kgs/ZuMdtYc)

💼 **Salary & Incentives for ITI Meter Installers:**

*Incentives: Urban Area* (₹90 per meter)
200 meters: 200 x 90 = ₹18,000
400 meters: 400 x 90 = ₹36,000
600 meters: 600 x 90 = ₹54,000
*Example: Install 700 meters and earn ₹63,000/month!* 💸
💰 **Potential Monthly Earnings: Up to ₹1,00,000!**

*Incentives: Rural Area* (₹120 per meter)
200 meters: 200 x 120 = ₹24,000
400 meters: 400 x 120 = ₹48,000
600 meters: 600 x 120 = ₹72,000
*Example: Install 700 meters and earn ₹84,000/month!* 💸
*Potential Monthly Earnings: Up to ₹1 Lakh! 😎*

🔧 **Job Roles:**
* Electric Meter Installation
* Survey & Technical Support
* Replacing DT, Feeder & CT Meters

🎓 **Eligibility:**
* ITI in Electrical (Freshers & Experienced welcome)
* Bike & Driving License required for Installer role

📞 **For More Information, Contact:** Amit, 📞 9702835982

🚀 **Apply now and spark your career in the power sector!**
***Don't miss out – limited slots available!***
Posted 6 days ago
1.0 - 31.0 years
1 - 3 Lacs
Somnath Nagar, Mysore/Mysuru
On-site
We are looking for an enthusiastic Business Development Associate to join our team. The ideal candidate will be responsible for making outbound calls to potential customers, introducing our services, and ensuring excellent customer engagement.

Responsibilities

Outbound Calling:
- Make outbound calls to individuals or businesses to promote services.
- Engage with prospects in a friendly, persuasive, and professional manner.
- Present detailed information about products or services to potential customers.
- Highlight key features and benefits to spark interest and generate leads.
- Ensure customer inquiries are addressed with appropriate information and solutions.

Customer Relationship Building:
- Build and maintain strong relationships with customers.
- Maintain accurate records of customer interactions and feedback.
- Follow up on leads and potential opportunities for service engagement.

Performance Metrics:
- Meet or exceed outbound call targets and customer engagement goals.
- Achieve individual performance metrics related to lead conversion and customer satisfaction.

Collaboration & Reporting:
- Work closely with team members and management to achieve team goals.
- Report on daily activities, progress, and challenges as required.

Skills & Qualifications:
- Strong verbal communication skills.
- Persuasive and confident attitude.
- Ability to handle customer inquiries and objections professionally.
- Previous experience in a voice process or customer service is a plus.
- Basic knowledge of computer applications.
Posted 6 days ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
The company believes in conducting business every day based on core values of Inclusion, Innovation, Collaboration, and Wellness, ensuring a global team works together with customers at the center. As part of the team, you will have the opportunity to impact the business by identifying AI/ML opportunities and building solutions that drive results. You will lead ML projects, conduct research to discover new ML techniques, and innovate to enhance team and business efficiencies. Collaborating closely with engineers, analysts, and leaders, you will implement and optimize ML models, establish best practices for model management, deployment, and monitoring, and integrate ML models into products and services. Additionally, you will assist in troubleshooting technical issues and maintain documentation, project tracking, and quality controls.

The ideal candidate will have a degree in engineering, science, statistics, or mathematics and a strong technical background in machine learning. Excellent communication skills, an analytical mindset, and a passion for problem-solving are essential. Candidates should have at least 3 years of hands-on experience in problem-solving using machine learning, proficiency in Python or Java, and familiarity with technologies like Spark, Hadoop, BigQuery, and SQL. Deep knowledge of machine learning algorithms, explainable AI methods, GenAI, and NLP is required, along with experience with cloud frameworks such as GCP and AWS. Experience in lending and financial services is considered a plus.

The company offers a range of benefits and is committed to Diversity and Inclusion. To understand more about the company's culture and community, visit https://about.pypl.com/who-we-are/default.aspx. If you are interested in joining the Talent Community or have questions related to your skills, please don't hesitate to apply; the company values all candidates and aims to bridge the confidence gap and imposter syndrome.
Posted 6 days ago
2.0 - 6.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
You will be responsible for fetching and transforming data from various systems, conducting in-depth analyses to identify gaps, opportunities, and insights, and providing recommendations that support strategic business decisions. Your key responsibilities will include data extraction and transformation, data analysis and insight generation, visualization and reporting, collaboration with cross-functional teams, and building strong working relationships with external stakeholders. You will report to the VP of Business Growth and work closely with clients.

To excel in this role, you should have proficiency in SQL for data querying and Python for data manipulation and transformation. Experience with data engineering tools such as Spark and Kafka, as well as orchestration tools like Apache NiFi and Apache Airflow, will be essential for ETL processes and workflow automation. Expertise in data visualization tools such as Tableau and Power BI, along with strong analytical skills including statistical techniques, will be crucial.

In addition to technical skills, you should possess soft skills such as flexibility, excellent communication skills, business acumen, and the ability to work independently as well as within a team. Your academic qualifications should include a Bachelor's or Master's degree in Applied Mathematics, Management Science, Data Science, Statistics, Econometrics, or Engineering. Extensive experience in Data Lake architecture, building data pipelines using AWS services, proficiency in Python and SQL, and experience in the banking domain will be advantageous. Overall, you should demonstrate high motivation, a good work ethic, maturity, personal initiative, and strong oral and written communication skills to succeed in this role.
Posted 6 days ago
3.0 - 7.0 years
0 Lacs
Coimbatore, Tamil Nadu
On-site
As a Python REST API Developer at our Coimbatore location, you will be responsible for developing and maintaining Python-based REST APIs, with a strong emphasis on adhering to OpenAPI (Swagger) specifications and producing clean, testable code. Your role will involve collaborating with internal teams to ensure alignment on data structures, endpoints, versioning strategies, and deployment timelines. Using tools such as Postman and Swagger UI, you will validate and document API endpoints effectively. Monitoring and enhancing the performance, reliability, and security of deployed APIs will be a key part of your responsibilities, as will supporting API consumers by maintaining clear documentation and assisting with technical queries.

To excel in this role, you should have a minimum of 3 years of strong experience in backend development using Python frameworks like FastAPI and Flask. A solid understanding of REST API design, versioning, authentication, and documentation, particularly OpenAPI/Swagger, is crucial. Proficiency in tools such as Postman, VS Code, and GitHub, and experience working with SQL-based databases, is required. While experience with Azure Functions or cloud-based deployment patterns is advantageous, it is not mandatory. You should be comfortable troubleshooting technical issues, analyzing logs, and collaborating with support or development teams to identify root causes. Additionally, experience or interest in distributed data processing with Spark or real-time data pipelines using Kafka is a plus, but not a requirement.

A team player with a collaborative mindset and a proactive approach to sharing knowledge and problem-solving will thrive in our environment. Fluency in written and spoken English is necessary for effective communication within our team.
Posted 6 days ago
0.0 - 4.0 years
0 Lacs
Pune, Maharashtra
On-site
We are seeking exceptional Python Developers interested in collaborating with a US startup. If you have a genuine passion for creating and implementing machine learning solutions using Python, want a job offering complete flexibility to work from any location, and are eager to gain experience in a startup environment, then this role is tailored for you. Whether you prefer working from your dream vacation spot or a serene countryside setting, as long as you have a stable internet connection, you can work effectively from your chosen destination. Bid farewell to long commutes and the hassle of rushing to in-person meetings. Are you ready to dedicate yourself to hard work while also enjoying some well-deserved downtime? Let's team up.

As part of the application process, interested candidates must complete the pre-screening behavioral assessment form. Without this essential step, candidates will not be considered for this position.

**Requirements**
- Bachelor's or Master's degree in Statistics/Math, Computer Science, Finance/Economics, Computer Engineering, or a related quantitative field (Ph.D. candidates are encouraged to apply)
- Proficiency in Python web development, particularly Flask
- Familiarity with SQL, Unix, Docker, Git, and relational databases
- Strong analytical, design, problem-solving, and troubleshooting/debugging abilities
- Ability to work independently in a home office without constant supervision
- Proficiency in DevOps and deployment pipelines for deploying software to servers (both on-premise hosting and Azure)
- Experience in Analytics/Machine Learning projects, including a solid understanding of how scikit-learn/Spark libraries and other machine learning packages function in web servers
- Knowledge of software design patterns and software engineering best practices
- Flexible schedule, with a focus on evening work after college hours (approximately 3-5 hours daily)

**What You'll Do, But Not Limited To:**
- Develop Python code with emphasis on scalability, supportability, and maintainability
- Engage in software development, configuration, and customization
- Identify and resolve issues in production
- Enhance and expand all components of the company's technology suite through collaboration with development teams to determine application requirements
- Evaluate and prioritize client feature requests

**Who You Are:**
- Reliable, independent, and adept at multitasking
- Honest and committed to transparency
- A team player who enjoys collaborative work
- An effective communicator capable of translating goals to team members
- A self-starter who takes ownership of projects and tasks
- A builder with a strong commitment to delivering superior products and experiences to customers, taking responsibility for their work
- Experimental in mindset, always willing to explore new tools, techniques, and approaches, even if failures occur

**Nice To Have:**
- Pursuing/completed an MS/Ph.D. in Computing, Physics, or another STEM field
- Curious and enthusiastic about learning Analytics/Machine Learning
- Prior experience in Financial Services is advantageous

**Benefits**
- Remote-first company with 100% remote work to accommodate your schedule
- Flexible hours
- Competitive stipend/salary

Please note:
- Strict intolerance toward plagiarism during the screening test; any evidence of AI-generated solutions such as ChatGPT will result in immediate disqualification.
- Submit your assignment only as a zip attachment via email; other forms of submission will be automatically rejected.
- Preference will be given to candidates from top schools at Pune University, Mumbai University, NIT, IISER, TIFR, IIT, ISI, or leading schools in the USA/UK.,
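For context on the requirement above about how machine learning packages function inside web servers: in practice this usually means deserializing a trained model once at process startup and exposing predictions over HTTP. The sketch below is purely illustrative and not part of the posting; the model file name "model.joblib" and the request shape are assumptions.

```python
# A minimal sketch of serving a pre-trained scikit-learn model from Flask.
# Assumes a model previously saved with joblib to "model.joblib" (hypothetical path).
import joblib
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load("model.joblib")  # load once at startup, not per request

@app.route("/predict", methods=["POST"])
def predict():
    # Expect a JSON body like {"features": [[5.1, 3.5, 1.4, 0.2]]}
    features = request.get_json()["features"]
    prediction = model.predict(features).tolist()  # ndarray -> JSON-serializable list
    return jsonify({"prediction": prediction})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```

Loading the model once, rather than per request, is the serving concern the requirement alludes to: deserialization is expensive, while individual predict calls are cheap.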
Posted 6 days ago
3.0 - 7.0 years
0 Lacs
haryana
On-site
As a Java with Hadoop Developer at Airlinq in Gurgaon, India, you will play a vital role in collaborating with the Engineering and Development teams to establish and maintain a robust testing and quality program for Airlinq's products and services. Your responsibilities will include, but are not limited to:
- Being part of a team focused on creating end-to-end IoT solutions using Hadoop to address various industry challenges.
- Building quick prototypes and demonstrations to showcase the value of technologies such as IoT, Machine Learning, Cloud, Micro-Services, DevOps, and AI to the management.
- Developing reusable components, frameworks, and accelerators to streamline the development cycle of future IoT projects.
- Operating effectively with minimal supervision and guidance.
- Configuring Cloud platforms for specific use-cases.

To excel in this role, you should have a minimum of 3 years of IT experience, with at least 2 years dedicated to working with cloud technologies such as AWS or Azure. You must have expertise in designing and implementing highly scalable enterprise applications and in establishing continuous integration environments on the targeted cloud platform. Proficiency in Java and the Spring Framework, along with strong knowledge of IoT principles, connectivity, security, and data streams, is essential. Familiarity with emerging technologies such as Big Data, NoSQL, Machine Learning, AI, and Blockchain is also required.

Additionally, you should be adept at using Big Data technologies such as Hadoop, Pig, Hive, and Spark, with hands-on experience on at least one Hadoop platform. Experience in workload migration between on-premise and cloud environments, programming with MapReduce and Spark (a minimal PySpark sketch follows this posting), and working with Java (core Java), J2EE technologies, Python, Scala, Unix, and Bash scripts is crucial. Strong analytical, problem-solving, and research skills are necessary, along with the ability to think innovatively and independently.

This position requires 3-7 years of relevant work experience and is based in Gurgaon. The ideal educational background is a B.E./B.Tech. or M.E./M.Tech. in Computer Science or Electronics Engineering, or an MCA.
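For readers unfamiliar with the MapReduce/Spark programming mentioned above, here is a minimal PySpark word count, the canonical map/reduce example. It is an illustrative sketch only; the input path "input.txt" is a hypothetical placeholder, not anything specified by the posting (the posting lists Python and Scala alongside Java as relevant languages).

```python
# A minimal PySpark sketch: word count, the Spark analogue of a classic
# Hadoop MapReduce job. Assumes a local Spark install and an input file
# "input.txt" (hypothetical path).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("WordCount").getOrCreate()

lines = spark.sparkContext.textFile("input.txt")
counts = (
    lines.flatMap(lambda line: line.split())  # map: emit one record per word
         .map(lambda word: (word, 1))         # map: pair each word with a count of 1
         .reduceByKey(lambda a, b: a + b)     # reduce: sum the counts per word
)

for word, count in counts.collect():
    print(word, count)

spark.stop()
```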
Posted 6 days ago
4.0 - 9.0 years
8 - 13 Lacs
Hyderabad
Work from Office
About ValGenesis
ValGenesis is a leading digital validation platform provider for life sciences companies. The ValGenesis suite of products is used by 30 of the top 50 global pharmaceutical and biotech companies to achieve digital transformation, total compliance, and manufacturing excellence/intelligence across their product lifecycle. Learn more about working for ValGenesis, the de facto standard for paperless validation in Life Sciences: https://www.youtube.com/watch?v=tASq7Ld0JsQ

About the Role:
We are looking for experienced product development engineers/experts to join our cloud product engineering team and build the next-gen applications for our global customers. If you are a technology enthusiast with a passion for developing enterprise cloud products with quality, security, and performance, we are eager to discuss the potential role with you.

Responsibilities:
- Understand the business requirements and technical constraints, and architect/design/develop accordingly.
- Participate in the complete development life cycle.
- Review the architecture/design/code of self and others.
- Develop enterprise application features/services using Azure cloud services, C# .NET Core, ReactJS, etc., implementing DevSecOps principles.
- Own and be accountable for the quality, performance, security, and sustenance of the respective product deliverables.
- Strive for self-excellence while enabling the success of the team and stakeholders.

Requirements
- 4 to 10 years of experience in developing enterprise software products
- Strong knowledge of C#, .NET Core, Azure DevOps
- Working knowledge of JS frameworks, preferably ReactJS
- Experience in container-based development, AKS, Service Fabric, etc.
- Experience in messaging queues such as RabbitMQ and Kafka
- Experience in Azure services such as Azure Logic Apps and Azure Functions
- Experience in databases such as SQL Server and PostgreSQL
- Knowledge of reporting solutions such as PowerBI and Apache Superset
- Knowledge of Micro-Services and/or Micro-Frontend architecture
- Knowledge of code quality, code monitoring, performance engineering, and test automation tools

We're on a Mission
In 2005, we disrupted the life sciences industry by introducing the world's first digital validation lifecycle management system. ValGenesis VLMS® revolutionized compliance-based corporate validation activities and has remained the industry standard. Today, we continue to push the boundaries of innovation, enhancing and expanding our portfolio beyond validation with an end-to-end digital transformation platform. We combine our purpose-built systems with world-class consulting services to help every facet of GxP meet evolving regulations and quality expectations.

The Team You'll Join
- Our customers' success is our success. We keep the customer experience centered in our decisions, from product to marketing to sales to services to support. Life sciences companies exist to improve humanity's quality of life, and we honor that mission.
- We work together. We communicate openly, support each other without reservation, and never hesitate to wear multiple hats to get the job done.
- We think big. Innovation is the heart of ValGenesis. That spirit drives product development as well as personal growth. We never stop aiming upward.
- We're in it to win it. We're on a path to becoming the number one intelligent validation platform in the market, and we won't settle for anything less than being a market leader.

How We Work
Our Chennai, Hyderabad, and Bangalore offices are onsite, 5 days per week. We believe that in-person interaction and collaboration foster creativity and a sense of community, and are critical to our future success as a company.

ValGenesis is an equal-opportunity employer that makes employment decisions on the basis of merit. Our goal is to have the best-qualified people in every job. All qualified applicants will receive consideration for employment without regard to race, religion, sex, sexual orientation, gender identity, national origin, disability, or any other characteristic protected by local law.
Posted 6 days ago