9.0 years
5 - 10 Lacs
Thiruvananthapuram
On-site
9 - 12 Years 1 Opening Trivandrum

Role description

Role Proficiency: Leverage expertise in a technology area (e.g. Informatica transformation, Teradata data warehouse, Hadoop, Analytics). Responsible for the architecture of small/mid-size projects.

Outcomes: Implement data extraction and transformation for a data warehouse (ETL, data extracts, data load logic, mapping, workflows, stored procedures), a data analysis solution, data reporting solutions, or cloud data tools on any one of the cloud providers (AWS/Azure/GCP). Understand business workflows and related data flows. Develop designs for data acquisition and data transformation or data modelling; apply business intelligence on data or design data fetching and dashboards. Design information structure, work- and dataflow navigation. Define backup, recovery and security specifications. Enforce and maintain naming standards and the data dictionary for data models. Provide estimates or guide the team in performing them. Help the team develop proofs of concept (POC) and solutions relevant to customer problems. Able to troubleshoot problems while developing POCs. Architect/Big Data specialty certification (AWS/Azure/GCP/general, for example Coursera or a similar learning platform/any ML).

Measures of Outcomes: Percentage of billable time spent in a year developing and implementing data transformation or data storage. Number of best practices documented for any new tool or technology emerging in the market. Number of associates trained on the data service practice.

Outputs Expected:

Strategy & Planning: Create or contribute short-term tactical solutions to achieve long-term objectives and an overall data management roadmap. Implement methods and procedures for tracking data quality, completeness, redundancy and improvement. Ensure that data strategies and architectures meet regulatory compliance requirements. Begin engaging external stakeholders, including standards organizations, regulatory bodies, operators and scientific research communities, or attend conferences on data in the cloud.

Operational Management: Help architects establish governance, stewardship and frameworks for managing data across the organization. Provide support in implementing the appropriate tools, software, applications and systems to support data technology goals. Collaborate with project managers and business teams on all projects involving enterprise data. Analyse data-related issues with systems integration, compatibility and multi-platform integration.

Project Control and Review: Provide advice to teams facing complex technical issues in the course of project delivery. Define and measure project- and program-specific architectural and technology quality metrics.

Knowledge Management & Capability Development: Publish and maintain a repository of solutions, best practices, standards and other knowledge articles for data management. Conduct and facilitate knowledge sharing and learning sessions across the team. Gain industry-standard certifications in the technology or area of expertise. Support technical skill building (including hiring and training) for the team based on inputs from the project manager/RTEs. Mentor new team members in technical areas. Gain and cultivate domain expertise to provide the best and most optimized solutions to customers (delivery).

Requirement Gathering and Analysis: Work with customer business owners and other teams to collect, analyze and understand the requirements, including NFRs/define NFRs. Analyze gaps/trade-offs based on the current system context and industry practices; clarify the requirements by working with the customer. Define the systems and sub-systems that make up the program.

People Management: Set goals and manage the performance of team engineers. Provide career guidance to technical specialists and mentor them.

Alliance Management: Identify alliance partners based on an understanding of service offerings and client requirements. In collaboration with the Architect, create a compelling business case around the offerings. Conduct beta testing of the offerings and their relevance to the program.

Technology Consulting: In collaboration with Architects II and III, analyze the application and technology landscape, processes and tools to arrive at the architecture options that best fit the client program. Analyze the costs vs. benefits of solution options. Support Architects II and III in creating a technology/architecture roadmap for the client. Define the architecture strategy for the program.

Innovation and Thought Leadership: Participate in internal and external forums (seminars, paper presentations, etc.). Understand the client's existing business at the program level and explore new avenues to save cost and bring process efficiency. Identify business opportunities to create reusable components/accelerators and reuse existing components and best practices.

Project Management Support: Assist the PM/Scrum Master/Program Manager to identify technical risks and come up with mitigation strategies.

Stakeholder Management: Monitor the concerns of internal stakeholders like Product Managers and RTEs and external stakeholders like client architects on architecture aspects. Follow through on commitments to achieve timely resolution of issues. Conduct initiatives to meet client expectations. Work to expand the professional network in the client organization at team and program levels.

New Service Design: Identify potential opportunities for new service offerings based on customer voice/partner inputs. Conduct beta testing/POCs as applicable. Develop collateral and guides for GTM.

Skill Examples: Use data services knowledge to create POCs that meet business requirements; contextualize the solution to the industry under the guidance of Architects. Use technology knowledge to create proofs of concept (POC)/(reusable) assets under the guidance of the specialist. Apply best practices in your own area of work, helping with performance troubleshooting and other complex troubleshooting. Define, decide and defend the technology choices made; review solutions under guidance. Use knowledge of technology trends to provide inputs on potential areas of opportunity for UST. Use independent knowledge of design patterns, tools and principles to create high-level designs for the given requirements. Evaluate multiple design options and choose the appropriate option for the best possible trade-offs. Conduct knowledge sessions to enhance the team's design capabilities. Review the low- and high-level designs created by specialists for efficiency (consumption of hardware, memory, memory leaks, etc.). Use knowledge of software development processes, tools and techniques to identify and assess incremental improvements to the software development process, methodology and tools. Take technical responsibility for all stages of the software development process. Conduct optimal coding with a clear understanding of memory leakage and related impacts. Implement global standards and guidelines relevant to programming and development; come up with 'points of view' and new technological ideas. Use knowledge of project management and agile tools and techniques to support, plan and manage medium-size projects/programs as defined within UST, identifying risks and mitigation strategies. Use knowledge of project metrics to understand their relevance to the project; collect and collate project metrics and share them with the relevant stakeholders. Use knowledge of estimation and resource planning to create estimates and plan resources for specific modules or small projects with detailed requirements or user stories in place. Strong proficiency in understanding data workflows and dataflow. Attention to detail. High analytical capabilities.

Knowledge Examples: Data visualization. Data migration. RDBMSs (relational database management systems), SQL. Hadoop technologies like MapReduce, Hive and Pig. Programming languages, especially Python and Java. Operating systems like UNIX and MS Windows. Backup/archival software.

Additional Comments: AI Architect

Role Summary: Hands-on AI Architect with strong expertise in Deep Learning, Generative AI, and real-world AI/ML systems. The role involves leading the architecture, development, and deployment of AI agent-based solutions, supporting initiatives such as intelligent automation, anomaly detection, and GenAI-powered assistants across enterprise operations and engineering. This is a hands-on role ideal for someone who thrives in fast-paced environments, is passionate about AI innovation, and can adapt across multiple opportunities based on business priorities.

Key Responsibilities:
• Design and architect AI-based solutions, including multi-agent GenAI systems using LLMs and RAG pipelines.
• Build POCs, prototypes, and production-grade AI components for operations, support automation, and intelligent assistants.
• Lead end-to-end development of AI agents for use cases such as triage, RCA automation, and predictive analytics.
• Leverage GenAI (LLMs) and time series models to drive intelligent observability and performance management (a simple anomaly-detection sketch follows this posting).
• Work closely with product, engineering, and operations teams to align solutions with domain and customer needs.
• Own the model lifecycle from experimentation to deployment using modern MLOps and LLMOps practices.
• Ensure scalable, secure, and cost-efficient implementation across AWS and Azure cloud environments.

Key Skills & Technology Areas:
• AI/ML Expertise: 8+ years in AI/ML, with hands-on experience in deep learning, model deployment, and GenAI.
• LLMs & Frameworks: GPT-3+, Claude, LLAMA3, LangChain, LangGraph, Transformers (BERT, T5), RAG pipelines, LLMOps.
• Programming: Python (advanced), Keras, PyTorch, Pandas, FastAPI, Celery (for agent orchestration), Redis.
• Modeling & Analytics: Time series forecasting, predictive modeling, synthetic data generation.
• Data & Storage: ChromaDB, Pinecone, FAISS, DynamoDB, PostgreSQL, Azure Synapse, Azure Data Factory.
• Cloud & Tools: AWS (Bedrock, SageMaker, Lambda); Azure (Azure ML, Azure Databricks, Synapse); GCP (Vertex AI – optional).
• Observability Integration: Splunk, ELK Stack, Prometheus.
• DevOps/MLOps: Docker, GitHub Actions, Kubernetes, CI/CD pipelines, model monitoring and versioning.
• Architectural Patterns: Microservices, event-driven architecture, multi-agent systems, API-first design.

Other Requirements:
• Proven ability to work independently and collaboratively in agile, innovation-driven teams.
• Strong problem-solving mindset and product-oriented thinking.
• Excellent communication and technical storytelling skills.
• Flexibility to work across multiple opportunities based on business priorities.
• Experience in Telecom, E-Commerce, or Enterprise IT Operations is a plus.

Skills: Python, Pandas, AI/ML, GenAI

About UST: UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world's best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients' organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact—touching billions of lives in the process.
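As a minimal sketch of the time-series anomaly detection this posting touches on (flagged in the responsibilities above), the following uses a rolling z-score in pandas; the window, threshold, and toy latency series are illustrative assumptions, not anything prescribed by the role.

```python
import numpy as np
import pandas as pd

def rolling_zscore_anomalies(values: pd.Series, window: int = 24, threshold: float = 3.0) -> pd.Series:
    """Flag points whose deviation from the trailing rolling mean exceeds
    `threshold` standard deviations. Returns a boolean mask."""
    mean = values.rolling(window, min_periods=window).mean()
    std = values.rolling(window, min_periods=window).std()
    z = (values - mean) / std
    return z.abs() > threshold  # NaN comparisons are False, so warm-up rows are skipped

# Toy latency series with an injected spike at index 100.
rng = np.random.default_rng(0)
latency = pd.Series(rng.normal(200, 10, 240))
latency.iloc[100] = 400
print(latency[rolling_zscore_anomalies(latency)])
```

A rolling window keeps the detector adaptive to drifting baselines, which is the usual first step before reaching for the heavier time series models the posting names.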
Posted 1 week ago
3.0 - 7.0 years
5 - 9 Lacs
Cochin
On-site
At EY, we’re all in to shape your future with confidence. We’ll help you succeed in a globally connected powerhouse of diverse teams and take your career wherever you want it to go. Join EY and help to build a better working world.

EY GDS – Data and Analytics (D&A) – Azure Data Engineer – Senior

We’re looking for candidates with strong technology and data understanding in the big data engineering space and proven delivery capability. This is a fantastic opportunity to be part of a leading firm as well as a growing Data and Analytics team.

Your key responsibilities: Develop and deploy Azure Databricks in a cloud environment using Azure Cloud services. ETL design, development, and deployment to the cloud service. Interact with onshore teams, understand their business goals, and contribute to the delivery of the workstreams. Design and optimize model code for faster execution.

Skills and attributes for success: 3 to 7 years of experience in developing data ingestion, data processing and analytical pipelines for big data, relational databases, NoSQL, and data warehouse solutions. Extensive hands-on experience implementing data migration and data processing using Azure services: Databricks, ADLS, Azure Data Factory, Azure Functions, Synapse/DW, Azure SQL DB, Azure Data Catalog, Cosmos DB, etc. Familiarity with cloud services like Azure. Hands-on experience with Spark. Hands-on programming experience in Python/Scala. Well versed in DevOps and CI/CD deployments. Must have hands-on experience in SQL and procedural SQL languages. Strong analytical skills and enjoys solving complex technical problems.

To qualify for the role, you must: Be a computer science graduate or equivalent with 3 to 7 years of industry experience. Have working experience in an Agile-based delivery methodology (preferable). Have a flexible and proactive/self-motivated working style with strong personal ownership of problem resolution. Have strong analytical skills and enjoy solving complex technical problems. Be proficient in software development best practices. Have excellent debugging and optimization skills. Have experience in enterprise-grade solution implementations and in converting business problems/challenges into technical solutions considering security, performance, scalability, etc. Be an excellent communicator (written and verbal, formal and informal). Participate in all aspects of the solution delivery life cycle, including analysis, design, development, testing, production deployment, and support.

Ideally, you’ll also have: Client management skills.

What we look for: People with technical experience and enthusiasm to learn new things in this fast-moving environment.

What working at EY offers: At EY, we’re dedicated to helping our clients, from start-ups to Fortune 500 companies — and the work we do with them is as varied as they are. You get to work with inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange.
Plus, we offer: Support, coaching and feedback from some of the most engaging colleagues around. Opportunities to develop new skills and progress your career. The freedom and flexibility to handle your role in a way that’s right for you.

EY | Building a better working world

EY is building a better working world by creating new value for clients, people, society and the planet, while building trust in capital markets. Enabled by data, AI and advanced technology, EY teams help clients shape the future with confidence and develop answers for the most pressing issues of today and tomorrow. EY teams work across a full spectrum of services in assurance, consulting, tax, strategy and transactions. Fueled by sector insights, a globally connected, multi-disciplinary network and diverse ecosystem partners, EY teams can provide services in more than 150 countries and territories.
Posted 1 week ago
5.0 - 7.0 years
0 Lacs
Thiruvananthapuram
On-site
5 - 7 Years 2 Openings Trivandrum

Role description

Role Proficiency: This role requires proficiency in data pipeline development, including coding and testing data pipelines for ingesting, wrangling, transforming and joining data from various sources. Must be skilled in ETL tools such as Informatica, Glue, Databricks and DataProc, with coding expertise in Python, PySpark and SQL. Works independently and has a deep understanding of data warehousing solutions including Snowflake, BigQuery, Lakehouse and Delta Lake. Capable of calculating costs and understanding performance issues related to data solutions.

Outcomes: Act creatively to develop pipelines and applications by selecting appropriate technical options, optimizing application development, maintenance and performance using design patterns and reusing proven solutions. Interpret requirements to create optimal architecture and design, developing solutions in accordance with specifications. Document and communicate milestones/stages for end-to-end delivery. Code adhering to best coding standards; debug and test solutions to deliver best-in-class quality. Perform performance tuning of code and align it with the appropriate infrastructure to optimize efficiency. Validate results with user representatives, integrating the overall solution seamlessly. Develop and manage data storage solutions, including relational databases, NoSQL databases and data lakes. Stay updated on the latest trends and best practices in data engineering, cloud technologies and big data tools. Influence and improve customer satisfaction through effective data solutions.

Measures of Outcomes: Adherence to engineering processes and standards. Adherence to schedule/timelines. Adherence to SLAs where applicable. # of defects post delivery. # of non-compliance issues. Reduction in recurrence of known defects. Quick turnaround of production bugs. Completion of applicable technical/domain certifications. Completion of all mandatory training requirements. Efficiency improvements in data pipelines (e.g. reduced resource consumption, faster run times). Average time to detect, respond to and resolve pipeline failures or data issues. Number of data security incidents or compliance breaches.

Outputs Expected:

Code Development: Develop data processing code independently, ensuring it meets performance and scalability requirements. Define coding standards, templates and checklists. Review code for team members and peers.

Documentation: Create and review templates, checklists, guidelines and standards for design processes and development. Create and review deliverable documents, including design documents, architecture documents, infrastructure costing, business requirements, source-target mappings, test cases and results.

Configuration: Define and govern the configuration management plan. Ensure compliance within the team.

Testing: Review and create unit test cases, scenarios and execution plans. Review the test plan and test strategy developed by the testing team. Provide clarifications and support to the testing team as needed.

Domain Relevance: Advise data engineers on the design and development of features and components, demonstrating a deeper understanding of business needs. Learn about customer domains to identify opportunities for value addition. Complete relevant domain certifications to enhance expertise.

Project Management: Manage the delivery of modules effectively.

Defect Management: Perform root cause analysis (RCA) and mitigation of defects. Identify defect trends and take proactive measures to improve quality.

Estimation: Create and provide input for effort and size estimation for projects.

Knowledge Management: Consume and contribute to project-related documents, SharePoint libraries and client universities. Review reusable documents created by the team.

Release Management: Execute and monitor the release process to ensure smooth transitions.

Design Contribution: Contribute to the creation of high-level design (HLD), low-level design (LLD) and system architecture for applications, business components and data models.

Customer Interface: Clarify requirements and provide guidance to the development team. Present design options to customers and conduct product demonstrations.

Team Management: Set FAST goals and provide constructive feedback. Understand team members' aspirations and provide guidance and opportunities for growth. Ensure team engagement in projects and initiatives.

Certifications: Obtain relevant domain and technology certifications to stay competitive and informed.

Skill Examples: Proficiency in SQL, Python or other programming languages used for data manipulation. Experience with ETL tools such as Apache Airflow, Talend, Informatica, AWS Glue, Dataproc and Azure ADF. Hands-on experience with cloud platforms like AWS, Azure or Google Cloud, particularly with data-related services (e.g. AWS Glue, BigQuery). Conduct tests on data pipelines and evaluate results against data quality and performance specifications. Experience in performance tuning of data processes. Expertise in designing and optimizing data warehouses for cost efficiency. Ability to apply and optimize data models for efficient storage, retrieval and processing of large datasets. Capacity to clearly explain and communicate design and development aspects to customers. Ability to estimate time and resource requirements for developing and debugging features or components.

Knowledge Examples: Knowledge of various ETL services offered by cloud providers, including Apache PySpark, AWS Glue, GCP DataProc/DataFlow, Azure ADF and ADLF. Proficiency in SQL for analytics, including windowing functions. Understanding of data schemas and models relevant to various business contexts. Familiarity with domain-related data and its implications. Expertise in data warehousing optimization techniques. Knowledge of data security concepts and best practices. Familiarity with design patterns and frameworks in data engineering.

Additional Comments: Data Engineer

Role Summary: Skilled Data Engineer with strong Python programming skills and experience in building scalable data pipelines across cloud environments. The candidate should have a good understanding of ML pipelines and basic exposure to GenAI solutioning. This role will support large-scale AI/ML and GenAI initiatives by ensuring high-quality, contextual, and real-time data availability.

Key Responsibilities:
• Design, build, and maintain robust, scalable ETL/ELT data pipelines in AWS/Azure environments.
• Develop and optimize data workflows using PySpark, SQL, and Airflow.
• Work closely with AI/ML teams to support training pipelines and GenAI solution deployments.
• Integrate data with vector databases like ChromaDB or Pinecone for RAG-based pipelines (see the sketch after this posting).
• Collaborate with solution architects and GenAI leads to ensure reliable, real-time data availability for agentic AI and automation solutions.
• Support data quality, validation, and profiling processes.

Key Skills & Technology Areas:
• Programming & Data Processing: Python (4–6 years), PySpark, Pandas, NumPy
• Data Engineering & Pipelines: Apache Airflow, AWS Glue, Azure Data Factory, Databricks
• Cloud Platforms: AWS (S3, Lambda, Glue), Azure (ADF, Synapse), GCP (optional)
• Databases: SQL/NoSQL, Postgres, DynamoDB, vector databases (ChromaDB, Pinecone) – preferred
• ML/GenAI Exposure (basic): Hands-on with Pandas, scikit-learn; knowledge of RAG pipelines and GenAI concepts
• Data Modeling: Star/Snowflake schema, data normalization, dimensional modeling
• Version Control & CI/CD: Git, Jenkins, or similar tools for pipeline deployment

Other Requirements:
• Strong problem-solving and analytical skills
• Flexible to work on fast-paced and cross-functional priorities
• Experience collaborating with AI/ML or GenAI teams is a plus
• Good communication and a collaborative, team-first mindset
• Experience in Telecom, E-Commerce, or Enterprise IT Operations is a plus.

Skills: ETL, Big Data, PySpark, SQL

About UST: UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world's best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients' organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact—touching billions of lives in the process.
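The vector-database bullet above references this sketch: a minimal ingestion-and-query flow using ChromaDB's Python client. The collection name, documents, and metadata are invented, and Chroma's default embedding function is assumed; a production RAG pipeline would batch ingestion and choose an embedding model deliberately.

```python
import chromadb

# Ephemeral in-memory client; a real pipeline would use a persistent store.
client = chromadb.Client()
collection = client.get_or_create_collection(name="support_docs")  # illustrative name

# Ingest documents; Chroma embeds them with its default embedding function
# unless one is supplied explicitly.
collection.add(
    ids=["doc-1", "doc-2"],
    documents=[
        "Resetting a router clears the DHCP lease table.",
        "Invoice disputes are handled by the billing team.",
    ],
    metadatas=[{"source": "network-kb"}, {"source": "billing-kb"}],
)

# Retrieve the chunks most similar to a user question; in a RAG flow these
# would be stuffed into an LLM prompt downstream.
results = collection.query(query_texts=["How do I reset my router?"], n_results=1)
print(results["documents"])
```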
Posted 1 week ago
4.0 years
1 - 3 Lacs
Hyderābād
On-site
Line of Service: Advisory. Industry/Sector: Not Applicable. Specialism: Data, Analytics & AI. Management Level: Senior Associate.

Job Description & Summary: At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Job Description & Summary: We are seeking an experienced AI Engineer with a minimum of 4 years of experience in developing machine learning models and at least 3 years of experience deploying AI solutions into production. The ideal candidate will be proficient in Python, TensorFlow or PyTorch, and experienced with MLOps tools and cloud platforms. As an AI Engineer in the retail home improvement space, you'll help build intelligent systems that enhance the customer experience, optimize inventory, and drive smarter business decisions.

Responsibilities:
- Design, develop, and deploy AI and machine learning solutions tailored to retail challenges, such as personalized product recommendations, dynamic pricing, and demand forecasting.
- Collaborate with data scientists, product managers, engineers, and retail analysts to develop AI-driven features that improve customer experience and operational efficiency.
- Build and manage data pipelines that support large-scale training and inference workloads using structured and semi-structured retail data.
- Develop and optimize deep learning models using TensorFlow or PyTorch for applications like visual product search, customer segmentation, and chatbot automation.
- Integrate AI models into customer-facing platforms (e.g., mobile apps, websites) and backend retail systems (e.g., inventory management, logistics).
- Monitor model performance post-deployment and implement continuous improvement strategies based on business KPIs and real-time data.
- Contribute to model governance, testing, and documentation to ensure models are fair, explainable, and secure.
- Stay informed about AI trends in the retail and e-commerce industry to help the team stay competitive and innovative.

Mandatory skill sets ('must have' knowledge, skills and experiences): AI Engineer – TensorFlow, Python, PyTorch, scikit-learn, NLP, deep learning, supervised learning, MLOps, CI/CD, API development with FastAPI (see the sketch after this posting), ML system integrations, understanding of Jenkins, GitHub Actions and Airflow, Docker and Kubernetes, model design, development, deployment and maintenance.

Preferred skill sets ('good to have' knowledge, skills and experiences): Experience with front-end applications such as Streamlit.

Years of experience required: 4 to 12 years.

Education qualification: BE, B.Tech, ME, M.Tech, MBA, MCA (60% and above).

Degrees/Field of Study required: Bachelor of Technology, Bachelor of Engineering, Master of Business Administration.

Required Skills: Python (Programming Language), PyTorch, TensorFlow.

Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline {+ 27 more}.
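The mandatory skills above call out API development with FastAPI for ML system integration; a minimal serving sketch follows. The route, feature schema, and pricing formula are hypothetical placeholders standing in for a trained model, not anything specific to this role.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class PricingFeatures(BaseModel):
    # Hypothetical features for a dynamic-pricing model.
    base_price: float
    inventory_level: int
    demand_index: float

@app.post("/predict")
def predict(features: PricingFeatures) -> dict:
    # Stand-in for a trained model loaded at startup (e.g. via joblib).
    score = features.base_price * (1 + 0.1 * features.demand_index)
    return {"recommended_price": round(score, 2)}

# Run with: uvicorn main:app --reload   (assumes this file is main.py)
```

Wrapping the model behind a typed endpoint like this is what lets CI/CD and monitoring tooling treat it as an ordinary service.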
Posted 1 week ago
8.0 years
3 - 6 Lacs
Hyderābād
Remote
Job Title: Engineering Manager (Java 17 / Spring Boot, AWS) – Remote Leadership Role
Location: Remote
Employment Type: Full-time
Shift time: 12:00 noon to 09:00 pm IST
Experience: 8-12 Years

About Company: We offer the most accurate company and contact data on the market. Our unique approach to data collection, enhancement, verification and growth solidifies our position as the best B2B data partner to revenue teams. We're looking for a passionate and technically adept leader with a deep understanding of modern software development to join our leadership team and guide two critical teams:

Market Positioning Team: This team owns the development of features and functionalities that define our unique market position and drive user adoption.

Integrations Team: This team tackles the challenge of seamless integration with our ecosystem of partners and third-party applications.

As Engineering Manager, you'll wear many hats. You'll be a coach, mentor, and technical leader, guiding your teams to achieve ambitious goals with clarity and vision. You'll set the tone for technical excellence, collaboration, and a culture of continuous learning.

About the Opportunity: We're looking for an Engineering Manager to guide our microservice platform and mentor a fully remote backend team. You'll blend hands-on technical ownership with people leadership—shaping architecture, driving cloud best practices, and coaching engineers in their careers and craft.

Key Responsibilities:

1. Architecture & Delivery: Define and evolve backend architecture built on Java 17+, Spring Boot 3, AWS (Containers, Lambdas, SQS, S3), Elasticsearch, PostgreSQL/MySQL, Databricks, Redis, etc. Lead design and code reviews; enforce best practices for testing, CI/CD, observability, security, and cost-efficient cloud operations. Drive technical roadmaps, ensuring scalability (billions of events, 99.9%+ uptime) and rapid feature delivery.

2. Team Leadership & Growth: Manage and inspire a distributed team of 6-10 backend engineers across multiple time zones. Set clear growth objectives, run 1-on-1s, deliver feedback, and foster an inclusive, high-trust culture. Coach the team on AI-assisted development workflows (e.g., GitHub Copilot, LLM-based code review) to boost productivity and code quality.

3. Stakeholder Collaboration: Act as technical liaison to Product, Frontend, SRE, and Data teams, translating business goals into resilient backend solutions. Communicate complex concepts to both technical and non-technical audiences; influence cross-functional decisions.

4. Technical Vision & Governance: Own coding standards, architectural principles, and technology selection. Evaluate emerging tools and frameworks (especially around GenAI and cloud-native patterns) and create adoption strategies. Balance technical debt and new feature delivery through data-driven prioritization.
Required Qualifications:
8+ years designing, building, and operating distributed backend systems with Java & Spring Boot
Proven experience leading or mentoring engineers; direct people-management a plus
Expert knowledge of AWS services and cloud-native design patterns
Hands-on mastery of Elasticsearch, PostgreSQL/MySQL, and Redis for high-volume, low-latency workloads
Demonstrated success scaling systems to millions of users or billions of events
Strong grasp of DevOps practices: containerization (Docker), CI/CD (GitHub Actions), observability stacks
Excellent communication and stakeholder-management skills in a remote-first environment

Nice-to-Have:
Hands-on experience with Datadog (APM, Logs, RUM) and a data-driven approach to debugging/performance tuning
Startup experience—comfortable wearing multiple hats and juggling several projects simultaneously
Prior title of Principal Engineer, Staff Engineer, or Engineering Manager in a high-growth SaaS company
Familiarity with AI-assisted development tools (Copilot, CodeWhisperer, Cursor) and a track record of introducing them safely
Posted 1 week ago
0 years
2 - 9 Lacs
Hyderābād
On-site
Job Description:

Role Overview: As a Specialist Data/AI Engineer – QA at AT&T, you will be responsible for ensuring the quality, reliability, and performance of data pipelines, AI models, and analytics solutions. You will design and execute comprehensive testing strategies for data and AI systems, including validation of data integrity, model accuracy, and system scalability. Your role is critical to delivering robust, production-ready AI and data solutions that meet AT&T's high standards.

Key Responsibilities:
Develop and implement QA frameworks, test plans, and automated testing scripts for data pipelines and AI/ML models.
Validate data quality, consistency, and accuracy across ingestion, transformation, and storage processes.
Test AI/ML model performance including accuracy, bias, robustness, and drift detection.
Utilize cloud platforms (AWS, Azure, GCP) and modern data technologies (e.g., Snowflake, Databricks, Kafka) to manage large-scale data workflows.
Collaborate with data engineers, data scientists, and product teams to identify test requirements and ensure comprehensive coverage.
Perform regression, integration, system, and performance testing on data and AI workflows.
Automate testing processes using appropriate tools and frameworks to enable continuous testing in CI/CD pipelines (a pytest-style sketch follows this posting).
Monitor production systems to detect issues proactively and support root cause analysis for defects or anomalies.
Document test results, defects, and quality metrics, communicating findings to technical and non-technical stakeholders.
Advocate for quality best practices and contribute to improving testing methodologies across the CDO.
Stay current with industry trends and emerging tools in data engineering, AI, and QA automation.

Qualifications

Required:
Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field.
Experience in quality assurance or testing roles focused on data engineering, AI, or machine learning systems.
Proficiency in programming and scripting languages such as Python and SQL, and experience with test automation frameworks.
Strong understanding of data pipelines, ETL/ELT processes, and data validation techniques.
Familiarity with machine learning concepts and model evaluation metrics.
Experience with cloud platforms (AWS, Azure, GCP) and data platforms (Snowflake, Databricks) is preferred.
Knowledge of CI/CD tools and integration of automated testing within deployment pipelines.
Excellent analytical, problem-solving, and communication skills.

Preferred:
Experience with AI/ML model testing frameworks and bias/fairness testing.
Familiarity with containerization (Docker) and orchestration (Kubernetes) environments.
Understanding of data governance, compliance, and responsible AI principles.
Experience with real-time data streaming and testing associated workflows.

#DataEngineering

Weekly Hours: 40
Time Type: Regular
Location: IND:AP:Hyderabad / Argus Bldg 4f & 5f, Sattva, Knowledge City- Adm: Argus Building, Sattva, Knowledge City

It is the policy of AT&T to provide equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state or local law.
In addition, AT&T will provide reasonable accommodations for qualified individuals with disabilities. AT&T is a fair chance employer and does not initiate a background check until an offer is made.
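Returning to the testing responsibilities flagged above, here is a minimal pytest-style data-validation sketch of the kind a CI/CD pipeline could run on each load; the table, columns, and rules are invented for illustration, and a real suite would read the frame from the pipeline under test.

```python
import pandas as pd
import pytest

@pytest.fixture
def orders() -> pd.DataFrame:
    # Stand-in for a frame read from the pipeline under test.
    return pd.DataFrame(
        {
            "order_id": [1, 2, 3],
            "amount": [9.99, 120.0, 35.5],
            "status": ["paid", "paid", "refunded"],
        }
    )

def test_no_null_keys(orders):
    assert orders["order_id"].notna().all()

def test_primary_key_unique(orders):
    assert orders["order_id"].is_unique

def test_amount_within_expected_range(orders):
    assert orders["amount"].between(0, 10_000).all()

def test_status_in_allowed_domain(orders):
    assert set(orders["status"]) <= {"paid", "refunded", "cancelled"}
```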
Posted 1 week ago
2.0 years
0 Lacs
Hyderābād
On-site
JOB DESCRIPTION

You’re ready to gain the skills and experience needed to grow within your role and advance your career — and we have the perfect software engineering opportunity for you.

As a Software Engineer II at JPMorgan Chase within the Consumer and Community Banking Risk Technology, you are part of an agile team that works to enhance, design, and deliver the software components of the firm’s state-of-the-art technology products in a secure, stable, and scalable way. You execute software solutions through design, development, and technical troubleshooting of multiple components within a technical product, application, or system.

Job responsibilities:
Executes standard software solutions, design, development, and technical troubleshooting
Implements and optimizes workflows using Databricks, Spark, and Kafka/Streaming (see the sketch after this posting)
Writes secure and high-quality code using the syntax of at least one programming language with limited guidance
Designs, develops, codes, and troubleshoots with consideration of upstream and downstream systems and technical implications
Applies knowledge of tools within the Software Development Life Cycle toolchain to improve the value realized by automation
Applies technical troubleshooting to break down solutions and solve technical problems of basic complexity
Adds to team culture of diversity, equity, inclusion, and respect

Required qualifications, capabilities, and skills:
Formal training or certification on software engineering concepts and 2+ years applied experience
Proficiency in event-driven and big data technologies, including Databricks, Spark, and Kafka/Streaming
Hands-on practical experience in system design, application development, testing, and operational stability
Experience in developing, debugging, and maintaining code in a large corporate environment with one or more modern programming languages and database querying languages
Demonstrable ability to code in one or more languages
Experience across the whole Software Development Life Cycle
Exposure to agile methodologies such as CI/CD, Application Resiliency, and Security

Preferred qualifications, capabilities, and skills:
Familiarity with modern front-end technologies
Exposure to cloud technologies
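The Databricks/Spark/Kafka bullet above references this sketch: a minimal PySpark Structured Streaming read from Kafka. The broker, topic, and event schema are placeholders, and a production job would write to a governed sink (e.g. a Delta table with checkpointing) rather than the console.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import DoubleType, StringType, StructField, StructType

spark = SparkSession.builder.appName("risk-events").getOrCreate()

# Placeholder schema for the JSON payload carried in the Kafka message value.
schema = StructType([
    StructField("account_id", StringType()),
    StructField("amount", DoubleType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "risk-events")                # placeholder topic
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Console sink for demonstration only.
query = events.writeStream.format("console").outputMode("append").start()
query.awaitTermination()
```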
Posted 1 week ago
8.0 years
0 Lacs
Delhi
Remote
Job Title: Engineering Manager (Java 17 / Spring Boot, AWS) – Remote Leadership Role
Location: Remote
Employment Type: Full-time
Shift time: 12:00 noon to 09:00 pm IST
Experience: 8-12 Years

About Company: We offer the most accurate company and contact data on the market. Our unique approach to data collection, enhancement, verification and growth solidifies our position as the best B2B data partner to revenue teams. We're looking for a passionate and technically adept leader with a deep understanding of modern software development to join our leadership team and guide two critical teams:

Market Positioning Team: This team owns the development of features and functionalities that define our unique market position and drive user adoption.

Integrations Team: This team tackles the challenge of seamless integration with our ecosystem of partners and third-party applications.

As Engineering Manager, you'll wear many hats. You'll be a coach, mentor, and technical leader, guiding your teams to achieve ambitious goals with clarity and vision. You'll set the tone for technical excellence, collaboration, and a culture of continuous learning.

About the Opportunity: We're looking for an Engineering Manager to guide our microservice platform and mentor a fully remote backend team. You'll blend hands-on technical ownership with people leadership—shaping architecture, driving cloud best practices, and coaching engineers in their careers and craft.

Key Responsibilities:

1. Architecture & Delivery: Define and evolve backend architecture built on Java 17+, Spring Boot 3, AWS (Containers, Lambdas, SQS, S3), Elasticsearch, PostgreSQL/MySQL, Databricks, Redis, etc. Lead design and code reviews; enforce best practices for testing, CI/CD, observability, security, and cost-efficient cloud operations. Drive technical roadmaps, ensuring scalability (billions of events, 99.9%+ uptime) and rapid feature delivery.

2. Team Leadership & Growth: Manage and inspire a distributed team of 6-10 backend engineers across multiple time zones. Set clear growth objectives, run 1-on-1s, deliver feedback, and foster an inclusive, high-trust culture. Coach the team on AI-assisted development workflows (e.g., GitHub Copilot, LLM-based code review) to boost productivity and code quality.

3. Stakeholder Collaboration: Act as technical liaison to Product, Frontend, SRE, and Data teams, translating business goals into resilient backend solutions. Communicate complex concepts to both technical and non-technical audiences; influence cross-functional decisions.

4. Technical Vision & Governance: Own coding standards, architectural principles, and technology selection. Evaluate emerging tools and frameworks (especially around GenAI and cloud-native patterns) and create adoption strategies. Balance technical debt and new feature delivery through data-driven prioritization.
Required Qualifications:
8+ years designing, building, and operating distributed backend systems with Java & Spring Boot
Proven experience leading or mentoring engineers; direct people-management a plus
Expert knowledge of AWS services and cloud-native design patterns
Hands-on mastery of Elasticsearch, PostgreSQL/MySQL, and Redis for high-volume, low-latency workloads
Demonstrated success scaling systems to millions of users or billions of events
Strong grasp of DevOps practices: containerization (Docker), CI/CD (GitHub Actions), observability stacks
Excellent communication and stakeholder-management skills in a remote-first environment

Nice-to-Have:
Hands-on experience with Datadog (APM, Logs, RUM) and a data-driven approach to debugging/performance tuning
Startup experience—comfortable wearing multiple hats and juggling several projects simultaneously
Prior title of Principal Engineer, Staff Engineer, or Engineering Manager in a high-growth SaaS company
Familiarity with AI-assisted development tools (Copilot, CodeWhisperer, Cursor) and a track record of introducing them safely
Posted 1 week ago
6.0 years
2 Lacs
Gurgaon
On-site
Our Purpose: Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.

Title and Summary: Senior Data Scientist

Job Title: Senior Data Scientist – Data & Analytics

Our Purpose: We work to connect and power an inclusive, digital economy that benefits everyone, everywhere by making transactions safe, simple, smart and accessible. Using secure data and networks, partnerships and passion, our innovations and solutions help individuals, financial institutions, governments and businesses realize their greatest potential. Our decency quotient, or DQ, drives our culture and everything we do inside and outside of our company. We cultivate a culture of inclusion for all employees that respects their individual strengths, views, and experiences. We believe that our differences enable us to be a better team – one that makes better decisions, drives innovation and delivers better business results.

Who is Mastercard? Mastercard is a global technology company in the payments industry. Our mission is to connect and power an inclusive, digital economy that benefits everyone, everywhere by making transactions safe, simple, smart, and accessible. Using secure data and networks, partnerships, and passion, our innovations and solutions help individuals, financial institutions, governments, and businesses realize their greatest potential. Our decency quotient, or DQ, drives our culture and everything we do inside and outside of our company. With connections across more than 210 countries and territories, we are building a sustainable world that unlocks priceless possibilities for all.

Our Team: As consumer preference for digital payments continues to grow, ensuring a seamless and secure consumer experience is top of mind. The Optimization Solutions team focuses on tracking digital performance across all products and regions, understanding the factors influencing performance and the broader industry landscape. This includes delivering data-driven insights and business recommendations, engaging directly with key external stakeholders on implementing optimization solutions (new and existing), and partnering across the organization to drive alignment and ensure action is taken. Are you excited about data assets and the value they bring to an organization? Are you an evangelist for data-driven decision-making? Are you motivated to be part of a team that builds large-scale analytical capabilities supporting end users across 6 continents? Do you want to be the go-to resource for data science & analytics in the company?

The Role: You will be part of the AI Centre of Excellence in Core Products at Mastercard, working hands-on on ML and AI projects. The candidate will be the technical lead on solving and identifying merchant localization across various global markets. In this role, you will be required to build new ML models to catch merchant localization and scale existing models for recurring inference (a generic illustration follows this posting). You will be required to work closely in collaboration with multiple internal business groups across Mastercard. You are also responsible for creating design documents, including data models, data flow diagrams, and system architecture diagrams.

All about You:
Majors in Computer Science, Data Science, Analytics, Mathematics, Statistics, or a related engineering field, or equivalent work experience
6+ years of experience using Python and SQL, with knowledge of distributed data systems like data warehouses
4+ years of experience building, deploying and maintaining ML models
Demonstrated success interacting with stakeholders to understand technical needs and ensuring analyses and solutions meet their needs effectively
Able to work in a fast-paced, deadline-driven environment as part of a team and as an individual contributor
Ability to easily move between business, analytical, and technical teams and articulate solution requirements for each group
Experience with an enterprise business intelligence platform/data platform, i.e. Tableau, Power BI, Streamlit, will be a plus
Experience with cloud-based (SaaS) solutions, ETL processes or API integrations will be a plus
Experience on cloud data platforms Azure/AWS/Databricks will be a plus

Additional Competencies:
Excellent English, quantitative, technical, and communication (oral/written) skills
Analytical/problem solving
Strong attention to detail and quality
Creativity/innovation
Self-motivated, operates with a sense of urgency
Project management/risk mitigation
Able to prioritize and perform multiple tasks simultaneously

Corporate Security Responsibility: All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization and, therefore, it is expected that every person working for, or on behalf of, Mastercard is responsible for information security and must: Abide by Mastercard’s security policies and practices; Ensure the confidentiality and integrity of the information being accessed; Report any suspected information security violation or breach; and Complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines.
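As flagged in "The Role" above, here is a generic illustration, not Mastercard's actual method, of training and applying a text classifier with scikit-learn of the kind that could score merchant records for recurring inference; the merchant strings, labels, and features are toy data invented for the example.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled merchant descriptors (invented); real features would come
# from a data warehouse via SQL and distributed processing.
names = [
    "CAFE DELHI CP 011",
    "STARBUCKS #1123 SEATTLE",
    "CHAI POINT BLR",
    "WALMART SUPERCENTER TX",
]
is_local = [1, 0, 1, 0]

# Character n-grams cope well with the noisy, abbreviated text on merchant records.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
model.fit(names, is_local)

# Recurring inference over new merchant records.
print(model.predict_proba(["FILTER COFFEE CHENNAI 044"])[0])
```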
Posted 1 week ago
6.0 - 8.0 years
2 - 5 Lacs
Gurgaon
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Azure Cloud and Python Developer for ML – Senior 1/2

EY GDS Consulting digital engineering is seeking an experienced Azure Cloud and Python Developer for ML to join our Emerging Technologies team in DET, GDS. This role presents an exciting opportunity to contribute to innovative projects and be a key player in shaping our technological advancements.

The opportunity: We are seeking an experienced Azure Cloud and Python Developer with 6-8 years of hands-on experience in machine learning (ML) development. This role involves developing and deploying ML models on the Azure cloud platform, designing efficient data pipelines, and collaborating with data scientists and stakeholders to deliver technical solutions.

Your key responsibilities:
Develop and deploy machine learning models on the Azure cloud platform using Python, ensuring scalability and efficiency.
Design and implement scalable and efficient data pipelines for model training and inference, optimizing data processing workflows.
Collaborate closely with data scientists and business stakeholders to understand requirements, translate them into technical solutions, and deliver high-quality ML solutions.
Implement best practices for ML development, including version control using tools like Git, testing methodologies, and documentation to ensure reproducibility and maintainability.
Design and optimize ML algorithms and data structures for performance and accuracy, leveraging Azure cloud services and Python libraries such as TensorFlow, PyTorch, or scikit-learn (a minimal PyTorch sketch follows this posting).
Monitor and evaluate model performance, conduct experiments, and iterate on models to improve predictive accuracy and business outcomes.
Work on feature engineering, data preprocessing, and feature selection techniques to enhance model performance and interpretability.
Collaborate with DevOps teams to deploy ML models into production environments, ensuring seamless integration and continuous monitoring.
Stay updated with the latest advancements in ML, Azure cloud services, and Python programming, and apply them to enhance ML capabilities and efficiency.
Provide technical guidance and mentorship to junior developers and data scientists, fostering a culture of continuous learning and innovation.

Skills and attributes:
Bachelor's or master's degree in computer science, data science, or a related field, with a strong foundation in ML algorithms, statistics, and programming concepts.
Minimum 6-8 years of hands-on experience in developing and deploying ML models on the Azure cloud platform using Python.
Expertise in designing and implementing scalable data pipelines for ML model training and inference, utilizing Azure Data Factory, Azure Databricks, or similar tools.
Proficiency in Python, including libraries such as TensorFlow, PyTorch, scikit-learn, pandas, and NumPy for ML model development and data manipulation.
Strong understanding of ML model evaluation metrics, feature engineering techniques, and data preprocessing methods for structured and unstructured data.
Experience with cloud-native technologies and services, including Azure Machine Learning, Azure Kubernetes Service (AKS), Azure Functions, and Azure Storage.
Familiarity with DevOps practices, CI/CD pipelines, and containerization tools like Docker for ML model deployment and automation.
Excellent problem-solving skills, analytical thinking, and attention to detail, with the ability to troubleshoot and debug complex ML algorithms and systems.
Effective communication skills, both verbal and written, with the ability to explain technical concepts to non-technical stakeholders and collaborate in cross-functional teams.
Proactive and self-motivated attitude, with a passion for learning new technologies and staying updated with industry trends in ML, cloud computing, and software development.
Strong organizational skills and the ability to manage multiple projects, prioritize tasks, and deliver results within project timelines and specifications.
Business acumen and understanding of the impact of ML solutions on business operations and decision-making processes, with a focus on delivering value and driving business outcomes.
Collaboration and teamwork skills, with the ability to work effectively in a global, diverse, and distributed team environment, fostering a culture of innovation and continuous improvement.

To qualify for the role, you must have: A bachelor's or master's degree in computer science, data science, or a related field, along with a minimum of 6-8 years of experience in ML development and Azure cloud platform expertise. Strong communication skills and consulting experience are highly desirable for this position.

Ideally, you’ll also have: Analytical ability to manage complex ML projects and prioritize tasks efficiently. Experience operating independently or with minimal supervision, demonstrating strong problem-solving skills. Familiarity with other cloud platforms and technologies such as AWS, Google Cloud Platform (GCP), or Kubernetes is a plus.

What working at EY offers: At EY, we’re dedicated to helping our clients, from start-ups to Fortune 500 companies — and the work we do with them is as varied as they are. You get to work with inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange.

Plus, we offer: Support, coaching and feedback from some of the most engaging colleagues around. Opportunities to develop new skills and progress your career. The freedom and flexibility to handle your role in a way that’s right for you.

EY | Building a better working world

EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
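The responsibilities above reference a minimal PyTorch sketch; the architecture, synthetic data, and hyperparameters below are illustrative assumptions rather than anything specific to EY's projects, and a real workflow would train on pipeline output and log runs for reproducibility.

```python
import torch
from torch import nn

torch.manual_seed(0)

# Synthetic regression data standing in for prepared pipeline output.
X = torch.randn(256, 10)
y = X @ torch.randn(10, 1) + 0.1 * torch.randn(256, 1)

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

# Plain full-batch training loop; real jobs would use DataLoaders,
# a validation split, and early stopping.
for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

print(f"final training loss: {loss.item():.4f}")
```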
Posted 1 week ago
5.0 - 10.0 years
5 - 10 Lacs
Gurgaon
On-site
Manager EXL/M/1390788 ServicesGurgaon Posted On 18 Jul 2025 End Date 01 Sep 2025 Required Experience 5 - 10 Years Basic Section Number Of Positions 1 Band C1 Band Name Manager Cost Code D005894 Campus/Non Campus NON CAMPUS Employment Type Permanent Requisition Type New Max CTC 10.0000 - 25.0000 Complexity Level Not Applicable Work Type Hybrid – Working Partly From Home And Partly From Office Organisational Group Analytics Sub Group Insurance Organization Services LOB Consulting SBU Analytics Country India City Gurgaon Center EXL - Gurgaon Center 38 Skills Skill AZURE Minimum Qualification B.TECH/B.E Certification No data available Job Description Analytics – JD (Azure DE) EXL (NASDAQ:EXLS) is a leading operations management and analytics company that helps businesses enhance growth and profitability in the face of relentless competition and continuous disruption. Using our proprietary, award-winning Business EXLerator Framework™, which integrates analytics, automation, benchmarking, BPO, consulting, industry best practices and technology platforms, we look deeper to help companies improve global operations, enhance data-driven insights, increase customer satisfaction, and manage risk and compliance. EXL serves the insurance, healthcare, banking and financial services, utilities, travel, transportation and logistics industries. Headquartered in New York, New York, EXL has more than 61,000 professionals in locations throughout the United States, Europe, Asia (primarily India and Philippines), Latin America, Australia and South Africa. EXL Analytics provides data-driven, action-oriented solutions to business problems through statistical data mining, cutting edge analytics techniques and a consultative approach. Leveraging proprietary methodology and best-of-breed technology, EXL Analytics takes an industry-specific approach to transform our clients’ decision making and embed analytics more deeply into their business processes. Our global footprint of nearly 12,000 data scientists and analysts assist client organizations with complex risk minimization methods, advanced marketing, pricing and CRM strategies, internal cost analysis, and cost and resource optimization within the organization. EXL Analytics serves the insurance, healthcare, banking, capital markets, utilities, retail and e-commerce, travel, transportation and logistics industries. Please visit www.exlservice.com for more information about EXL Analytics. Job Title: Consultant / Senior Consultant – Azure Data Engineering Location: India – Gurgaon preferred Industry: Insurance Analytics & AI Vertical Role Overview: We are seeking a hands-on Consultant / Senior Consultant with strong expertise in Azure-based data engineering to support end-to-end development and delivery of data pipelines for our insurance clients. The ideal candidate will have a deep understanding of Azure Data Factory, ADLS, Databricks (preferably with DLT and Unity Catalog), SQL, and Python and be comfortable working in a dynamic, client-facing environment. This is a key offshore role requiring both technical execution and solution-oriented thinking to support modern data platform initiatives. 
Collaborate with data scientists, analysts, and stakeholders to gather requirements and define data models that effectively support business requirements. Demonstrate decision-making, analytical, and problem-solving abilities. Strong verbal and written communication skills to manage client discussions. Familiarity with Agile ways of working: daily scrums, sprint planning, backlog refinement.
Key Responsibilities & Skillsets:
- Design and develop scalable and efficient data pipelines using Azure Data Factory (ADF) and Azure Data Lake Storage (ADLS).
- Build and maintain Databricks notebooks for data ingestion, transformation, and quality checks, using Python and SQL.
- Work with Delta Live Tables (DLT) and Unity Catalog (preferred) to improve pipeline automation, governance, and performance.
- Collaborate with data architects, analysts, and onshore teams to translate business requirements into technical specifications.
- Troubleshoot data issues, ensure data accuracy, and apply best practices in data engineering and DevOps.
- Support the migration of legacy SQL pipelines to modern Python-based frameworks.
- Ensure adherence to data security, compliance, and performance standards, especially within insurance domain constraints.
- Provide documentation, status updates, and technical insights to stakeholders as required.
- Excellent communication skills and stakeholder management.
Required Skills & Experience:
- 3-7 years of strong hands-on experience in data engineering with a focus on Azure cloud technologies.
- Proficient in Azure Data Factory, Databricks, and ADLS Gen2, with working knowledge of Unity Catalog.
- Strong programming skills in both SQL and Python, especially within Databricks notebooks; PySpark expertise is good to have.
- Experience with Delta Lake / Delta Live Tables (DLT) is a plus.
- Good understanding of ETL/ELT concepts, data modeling, and performance tuning.
- Exposure to insurance or financial services data projects is highly preferred.
- Strong communication and collaboration skills in an offshore delivery model.
Preferred Skills & Experience:
- Experience working in Agile/Scrum teams.
- Familiarity with Azure DevOps, Git, and CI/CD practices.
- Certifications in Azure Data Engineering (e.g., DP-203) or Databricks.
What we offer: EXL Analytics offers an exciting, fast-paced and innovative environment, which brings together a group of sharp and entrepreneurial professionals who are eager to influence business decisions. From your very first day, you get an opportunity to work closely with highly experienced, world-class analytics consultants. You can expect to learn many aspects of businesses that our clients engage in. You will also learn effective teamwork and time-management skills - key aspects for personal and professional growth. Analytics requires different skill sets at different levels within the organization. At EXL Analytics, we invest heavily in training you in all aspects of analytics as well as in leading analytical tools and techniques. We provide guidance/coaching to every employee through our mentoring program, wherein every junior-level employee is assigned a senior-level professional as an advisor. Sky is the limit for our team members. The unique experiences gathered at EXL Analytics set the stage for further growth and development in our company and beyond. "EOE/Minorities/Females/Vets/Disabilities" Workflow Type: Back Office
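As a flavour of the Databricks work listed in the responsibilities above, a minimal PySpark sketch of the ingestion-plus-quality-check pattern; the ADLS path, column names, and target table are hypothetical, not from the posting:

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("policy_ingest").getOrCreate()

    # Read raw CSV files landed in ADLS Gen2 (hypothetical container and path).
    raw = (spark.read.format("csv")
           .option("header", "true")
           .load("abfss://raw@examplelake.dfs.core.windows.net/policies/"))

    # Basic quality checks: drop duplicates and null keys, stamp ingestion time.
    clean = (raw.dropDuplicates(["policy_id"])
             .filter(F.col("policy_id").isNotNull())
             .withColumn("ingest_ts", F.current_timestamp()))

    # Append into a bronze Delta table (assumes a metastore/Unity Catalog is configured).
    clean.write.format("delta").mode("append").saveAsTable("bronze.policies")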
Posted 1 week ago
8.0 years
2 - 6 Lacs
Gurgaon
Remote
Job Title: Engineering Manager (Java 17 / Spring Boot, AWS) – Remote Leadership Role. Location: Remote. Employment Type: Full-time. Shift time: 12:00 noon to 09:00 pm IST. Experience: 8-12 Years.
About Company: We offer the most accurate company and contact data on the market. Our unique approach to data collection, enhancement, verification and growth solidifies our position as the best B2B data partner to revenue teams. We're looking for a passionate and technically adept leader with a deep understanding of modern software development to join our leadership team and guide two critical teams: the Market Positioning Team, which owns the development of features and functionalities that define our unique market position and drive user adoption, and the Integrations Team, which tackles the challenge of seamless integration with our ecosystem of partners and third-party applications. As Engineering Manager, you'll wear many hats. You'll be a coach, mentor, and technical leader, guiding your teams to achieve ambitious goals with clarity and vision. You'll set the tone for technical excellence, collaboration, and a culture of continuous learning.
About the Opportunity: We're looking for an Engineering Manager to guide our microservice platform and mentor a fully remote backend team. You'll blend hands-on technical ownership with people leadership: shaping architecture, driving cloud best practices, and coaching engineers in their careers and craft.
Key Responsibilities:
1. Architecture & Delivery: Define and evolve backend architecture built on Java 17+, Spring Boot 3, AWS (containers, Lambdas, SQS, S3), Elasticsearch, PostgreSQL/MySQL, Databricks, Redis, etc. Lead design and code reviews; enforce best practices for testing, CI/CD, observability, security, and cost-efficient cloud operations. Drive technical roadmaps, ensuring scalability (billions of events, 99.9%+ uptime) and rapid feature delivery.
2. Team Leadership & Growth: Manage and inspire a distributed team of 6-10 backend engineers across multiple time zones. Set clear growth objectives, run 1-on-1s, deliver feedback, and foster an inclusive, high-trust culture. Coach the team on AI-assisted development workflows (e.g., GitHub Copilot, LLM-based code review) to boost productivity and code quality.
3. Stakeholder Collaboration: Act as technical liaison to Product, Frontend, SRE, and Data teams, translating business goals into resilient backend solutions. Communicate complex concepts to both technical and non-technical audiences; influence cross-functional decisions.
4. Technical Vision & Governance: Own coding standards, architectural principles, and technology selection. Evaluate emerging tools and frameworks (especially around GenAI and cloud-native patterns) and create adoption strategies. Balance technical debt and new feature delivery through data-driven prioritization.
Required Qualifications: 8+ years designing, building, and operating distributed backend systems with Java & Spring Boot Proven experience leading or mentoring engineers; direct people-management a plus Expert knowledge of AWS services and cloud-native design patterns Hands-on mastery of Elasticsearch, PostgreSQL/MySQL, and Redis for high-volume, low-latency workloads Demonstrated success scaling systems to millions of users or billions of events Strong grasp of DevOps practices: containerization (Docker), CI/CD (GitHub Actions), observability stacks Excellent communication and stakeholder-management skills in a remote-first environment Nice-to-Have: Hands-on experience with Datadog (APM, Logs, RUM) and a data-driven approach to debugging/performance tuning Startup experience—comfortable wearing multiple hats and juggling several projects simultaneously Prior title of Principal Engineer, Staff Engineer, or Engineering Manager in a high-growth SaaS company Familiarity with AI-assisted development tools (Copilot, CodeWhisperer, Cursor) and a track record of introducing them safely
Posted 1 week ago
8.0 years
3 - 7 Lacs
Chennai
Remote
Job Title: Engineering Manager (Java 17 / Spring Boot, AWS) – Remote Leadership Role. Location: Remote. Employment Type: Full-time. Shift time: 12:00 noon to 09:00 pm IST. Experience: 8-12 Years.
About Company: We offer the most accurate company and contact data on the market. Our unique approach to data collection, enhancement, verification and growth solidifies our position as the best B2B data partner to revenue teams. We're looking for a passionate and technically adept leader with a deep understanding of modern software development to join our leadership team and guide two critical teams: the Market Positioning Team, which owns the development of features and functionalities that define our unique market position and drive user adoption, and the Integrations Team, which tackles the challenge of seamless integration with our ecosystem of partners and third-party applications. As Engineering Manager, you'll wear many hats. You'll be a coach, mentor, and technical leader, guiding your teams to achieve ambitious goals with clarity and vision. You'll set the tone for technical excellence, collaboration, and a culture of continuous learning.
About the Opportunity: We're looking for an Engineering Manager to guide our microservice platform and mentor a fully remote backend team. You'll blend hands-on technical ownership with people leadership: shaping architecture, driving cloud best practices, and coaching engineers in their careers and craft.
Key Responsibilities:
1. Architecture & Delivery: Define and evolve backend architecture built on Java 17+, Spring Boot 3, AWS (containers, Lambdas, SQS, S3), Elasticsearch, PostgreSQL/MySQL, Databricks, Redis, etc. Lead design and code reviews; enforce best practices for testing, CI/CD, observability, security, and cost-efficient cloud operations. Drive technical roadmaps, ensuring scalability (billions of events, 99.9%+ uptime) and rapid feature delivery.
2. Team Leadership & Growth: Manage and inspire a distributed team of 6-10 backend engineers across multiple time zones. Set clear growth objectives, run 1-on-1s, deliver feedback, and foster an inclusive, high-trust culture. Coach the team on AI-assisted development workflows (e.g., GitHub Copilot, LLM-based code review) to boost productivity and code quality.
3. Stakeholder Collaboration: Act as technical liaison to Product, Frontend, SRE, and Data teams, translating business goals into resilient backend solutions. Communicate complex concepts to both technical and non-technical audiences; influence cross-functional decisions.
4. Technical Vision & Governance: Own coding standards, architectural principles, and technology selection. Evaluate emerging tools and frameworks (especially around GenAI and cloud-native patterns) and create adoption strategies. Balance technical debt and new feature delivery through data-driven prioritization.
Required Qualifications: 8+ years designing, building, and operating distributed backend systems with Java & Spring Boot Proven experience leading or mentoring engineers; direct people-management a plus Expert knowledge of AWS services and cloud-native design patterns Hands-on mastery of Elasticsearch, PostgreSQL/MySQL, and Redis for high-volume, low-latency workloads Demonstrated success scaling systems to millions of users or billions of events Strong grasp of DevOps practices: containerization (Docker), CI/CD (GitHub Actions), observability stacks Excellent communication and stakeholder-management skills in a remote-first environment Nice-to-Have: Hands-on experience with Datadog (APM, Logs, RUM) and a data-driven approach to debugging/performance tuning Startup experience—comfortable wearing multiple hats and juggling several projects simultaneously Prior title of Principal Engineer, Staff Engineer, or Engineering Manager in a high-growth SaaS company Familiarity with AI-assisted development tools (Copilot, CodeWhisperer, Cursor) and a track record of introducing them safely
Posted 1 week ago
3.0 - 5.0 years
5 - 9 Lacs
Chennai
On-site
Senior Executive EXL/SE/1424780 Payment Services Chennai Posted On 18 Jul 2025 End Date 01 Sep 2025 Required Experience 3 - 5 Years Basic Section Number Of Positions 1 Band A2 Band Name Senior Executive Cost Code G090505 Campus/Non Campus NON CAMPUS Employment Type Permanent Requisition Type New Max CTC 500000.0000 - 900000.0000 Complexity Level Not Applicable Work Type Work From Office – Fully Working From EXL/ Client Offices Organisational Group Healthcare Sub Group Healthcare Organization Payment Services LOB Payment Services SBU Healthcare Products & Platforms Country India City Chennai Center IN Chennai C51 Skills Skill .NET REACT SQL CLOUD AZURE Minimum Qualification B.TECH/B.E Certification No data available Job Description Position Title Senior Executive - Programmer Analyst Location Chennai, India Band A2 Designation Lead Programmer Analyst Overview: We are seeking a highly skilled and experienced Lead Programmer Analyst specializing in Microsoft technologies to join our dynamic team. As a Lead Programmer Analyst, you will play a critical role in shaping the success of our technology projects. Your responsibilities will include architecture design, implementation, and overseeing the development and deployment of Microsoft-based solutions that meet our internal needs. You'll collaborate closely with architects, business analysts, project managers, developers, testers and other stakeholders to ensure the successful delivery of projects within scope, budget, and schedule. Technical Skills: Microsoft .NET Stack: Proficiency in .NET 8.0, C#, ASP.NET Core, and MVC. Experience with building Web APIs and Minimal APIs. Familiarity with front-end technologies such as React, TypeScript, and NodeJS. Data Persistence and Messaging: Hands-on experience with ORMs (Object-Relational Mappers). Knowledge of messaging and streaming technologies. NoSQL Databases: Understanding of NoSQL databases and their use cases. Microsoft Azure: Designing and implementing cloud-based solutions using Azure services: Azure App Services, Azure Functions, Azure WebJobs, Azure SQL Database, and Azure Storage. Additional Skills and Value Additions: Experience working in Agile/Scrum environments. Familiarity with Agile methodologies and Scrum practices. Python: General Python skills. Data handling using Python. API development using FastAPI or Flask. Knowledge of PySpark. Big Data: Exposure to technologies such as Databricks and Snowflake. Familiarity with Spark. Good to Have: Relevant Microsoft certifications are a plus. Experience with healthcare data analytics, machine learning, or AI technologies. Certification in healthcare IT (e.g., Certified Professional in Healthcare Information and Management Systems, CHPS). Soft Skills: Strong communication skills, both verbal and written. Ability to work with stakeholders across geographies. Mentor people and create a high-performing organization, fostering talent and resolving conflicts to build and sustain teams.
Education – Master's or Bachelor's degree from top-tier colleges with good grades, from an engineering background. Business Domain – US Healthcare Insurance & Payer Analytics; Insurance Fraud, Waste & Abuse; Recovery Audit & Utilization Review; Compliance Adherence & Coding Accuracy; Payer Management & Code Classification Management. Requirements & Responsibilities - Architectural Design and Implementation: Design scalable, reliable, and high-performance solutions based on Microsoft technologies, including but not limited to .NET, Azure, SQL Server, and SharePoint Online. Provide expertise in creating robust architectures that align with business objectives. Requirements Gathering and Analysis: Collaborate with stakeholders to understand business objectives and technical requirements. Translate requirements into architectural blueprints and design specifications. Mentoring and Knowledge Transfer: Mentor new engineers, helping them adapt to the software development environment. Share best practices and guide their learning journey. Alignment with Business Goals: Work closely with project managers, business analysts, and quality assurance teams to ensure that technical solutions align with business requirements. Code Quality and Security: Conduct thorough code reviews and enforce coding standards, best practices, and security guidelines. Continuous Learning and Adaptation: Stay informed about emerging technologies, trends, and best practices and evaluate their applicability to ongoing projects and solutions. Troubleshooting and Issue Resolution: Assist in resolving complex technical issues during development or deployment. Cloud Migration: Lead the migration of on-premises applications to the cloud, specifically leveraging the Microsoft Azure platform. Workflow Type: Digital Solution Center
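The posting above lists Python API development with FastAPI or Flask among the value-add skills; a minimal FastAPI sketch of that request/response pattern (the Claim model, fields, and endpoints are illustrative only, not from the posting):

    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    class Claim(BaseModel):
        claim_id: str
        amount: float

    @app.get("/health")
    def health() -> dict:
        return {"status": "ok"}

    @app.post("/claims")
    def create_claim(claim: Claim) -> dict:
        # A real service would persist to Azure SQL / Cosmos DB; echoed here.
        return {"received": claim.claim_id, "amount": claim.amount}

    # Run locally with: uvicorn main:app --reload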
Posted 1 week ago
0 years
1 - 5 Lacs
Chennai
On-site
Looking for an offshore Tech Lead with Databricks engineering experience to lead the team from offshore. Develop and maintain a metadata-driven generic ETL framework for automating ETL code. Design, build, and optimize ETL/ELT pipelines using Databricks (PySpark/SQL) on AWS. Ingest data from a variety of structured and unstructured sources (APIs, RDBMS, flat files, streaming). Develop and maintain robust data pipelines for batch and streaming data using Delta Lake and Spark Structured Streaming. Implement data quality checks, validations, and logging mechanisms. Optimize pipeline performance, cost, and reliability. Collaborate with data analysts, BI, and business teams to deliver fit-for-purpose datasets. Support data modeling efforts (star and snowflake schemas, denormalized table approaches) and assist with data warehousing initiatives. Work with orchestration tools such as Databricks Workflows to schedule and monitor pipelines. Follow best practices for version control, CI/CD, and collaborative development. Skills: Hands-on experience in ETL/data engineering roles. Strong expertise in Databricks (PySpark, SQL, Delta Lake); Databricks Data Engineer certification preferred. Experience with Spark optimization, partitioning, caching, and handling large-scale datasets. Proficiency in SQL and scripting in Python or Scala. Solid understanding of data lakehouse/medallion architectures and modern data platforms. Experience working with cloud storage systems like AWS S3. Familiarity with DevOps practices: Git, CI/CD, Terraform, etc. Strong debugging, troubleshooting, and performance-tuning skills. About Virtusa: Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
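A minimal sketch of the metadata-driven ETL idea named in the first responsibility above: a control list (in practice, a control table) drives one generic PySpark load loop. The source paths and target tables are hypothetical:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("generic_etl").getOrCreate()

    # Metadata describing each feed; a real framework would read this from a control table.
    SOURCES = [
        {"fmt": "json", "path": "s3://example-raw/orders/",    "target": "bronze.orders"},
        {"fmt": "csv",  "path": "s3://example-raw/customers/", "target": "bronze.customers"},
    ]

    for src in SOURCES:
        df = (spark.read.format(src["fmt"])
              .option("header", "true")      # used by CSV, ignored by JSON
              .load(src["path"]))
        # One generic writer for every feed: append into a Delta table.
        df.write.format("delta").mode("append").saveAsTable(src["target"])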
Posted 1 week ago
3.0 - 7.0 years
5 - 20 Lacs
Noida
On-site
Lead Assistant Manager EXL/LAM/1411628 Healthcare Analytics Noida Posted On 03 Jul 2025 End Date 17 Aug 2025 Required Experience 3 - 7 Years Basic Section Number Of Positions 3 Band B2 Band Name Lead Assistant Manager Cost Code D010360 Campus/Non Campus NON CAMPUS Employment Type Permanent Requisition Type New Max CTC 500000.0000 - 2000000.0000 Complexity Level Not Applicable Work Type Hybrid – Working Partly From Home And Partly From Office Organisational Group Analytics Sub Group Healthcare Organization Healthcare Analytics LOB Healthcare D&A SBU Healthcare Analytics Country India City Noida Center Noida-SEZ BPO Solutions Skills Skill AWS SQL PYSPARK AWS GLUE LAMBDA AWS SERVICES ATHENA GIT Minimum Qualification B.TECH/B.E Certification No data available Job Description Job Title: Data Engineer - PySpark, Python, SQL, Git, AWS Services (Glue, Lambda, Step Functions, S3, Athena). Job Description: We are seeking a talented Data Engineer with expertise in PySpark, Python, SQL, Git, and AWS to join our dynamic team. The ideal candidate will have a strong background in data engineering, data processing, and cloud technologies. You will play a crucial role in designing, developing, and maintaining our data infrastructure to support our analytics.
Responsibilities:
1. Develop and maintain ETL pipelines using PySpark and AWS Glue to process and transform large volumes of data efficiently.
2. Collaborate with analysts to understand data requirements and ensure data availability and quality.
3. Write and optimize SQL queries for data extraction, transformation, and loading.
4. Utilize Git for version control, ensuring proper documentation and tracking of code changes.
5. Design, implement, and manage scalable data lakes on AWS, using S3 or other relevant services for efficient data storage and retrieval.
6. Develop and optimize high-performance, scalable databases using Amazon DynamoDB.
7. Create interactive dashboards and data visualizations in Amazon QuickSight.
8. Automate workflows using AWS services such as EventBridge and Step Functions.
9. Monitor and optimize data processing workflows for performance and scalability.
10. Troubleshoot data-related issues and provide timely resolution.
11. Stay up to date with industry best practices and emerging technologies in data engineering.
Qualifications:
1. Bachelor's degree in Computer Science, Data Science, or a related field; a Master's degree is a plus.
2. Strong proficiency in PySpark and Python for data processing and analysis.
3. Proficiency in SQL for data manipulation and querying.
4. Experience with version control systems, preferably Git.
5. Familiarity with AWS services, including S3, Redshift, Glue, Step Functions, EventBridge, CloudWatch, Lambda, QuickSight, DynamoDB, Athena, CodeCommit, etc.
6. Familiarity with Databricks and its concepts.
7. Excellent problem-solving skills and attention to detail.
8. Strong communication and collaboration skills to work effectively within a team.
9. Ability to manage multiple tasks and prioritize effectively in a fast-paced environment.
Preferred Skills:
1. Knowledge of data warehousing concepts and data modeling.
2. Familiarity with big data technologies like Hadoop and Spark.
3. AWS certifications related to data engineering.
Workflow Type: L&S-DA-Consulting
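For flavour of responsibility 1 above, a minimal AWS Glue PySpark job skeleton; the catalog database, table, and S3 output path are placeholders to be replaced with real ones:

    import sys
    from awsglue.utils import getResolvedOptions
    from awsglue.context import GlueContext
    from awsglue.job import Job
    from awsglue.dynamicframe import DynamicFrame
    from pyspark.context import SparkContext

    args = getResolvedOptions(sys.argv, ["JOB_NAME"])
    glue_context = GlueContext(SparkContext.getOrCreate())
    job = Job(glue_context)
    job.init(args["JOB_NAME"], args)

    # Read from the Glue Data Catalog (placeholder database/table).
    src = glue_context.create_dynamic_frame.from_catalog(
        database="analytics_db", table_name="claims_raw")

    # Drop records missing the business key before publishing.
    cleaned = src.toDF().dropna(subset=["claim_id"])
    cleaned_dyf = DynamicFrame.fromDF(cleaned, glue_context, "cleaned")

    # Write curated Parquet back to S3 for Athena to query.
    glue_context.write_dynamic_frame.from_options(
        frame=cleaned_dyf,
        connection_type="s3",
        connection_options={"path": "s3://example-curated/claims/"},
        format="parquet",
    )
    job.commit()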
Posted 1 week ago
7.0 years
0 Lacs
Noida
On-site
Your Role The technology that once promised to simplify patient care has brought more issues than anyone ever anticipated. At Innovaccer, we defeat this beast by making full use of all the data healthcare has worked so hard to collect, and replacing long-standing problems with ideal solutions. Data is our bread and butter for innovation. We are looking for a Staff Data Scientist who understands healthcare data and can leverage the data to build algorithms to personalize treatments based on the clinical and behavioral history of patients. We are looking for a superstar who will define and build the next generation of predictive analytics tools in healthcare. Analytics at Innovaccer Our analytics team is dedicated to weaving analytics and data science magic across our products. They are the owners and custodians of intelligence behind our products. With their expertise and innovative approach, they play a crucial role in building various analytical models (including descriptive, predictive, and prescriptive) to help our end-users make smart decisions. Their focus on continuous improvement and cutting-edge methodologies ensures that they're always creating market-leading solutions that propel our products to new heights of success. A Day in the Life Design and lead the development of various artificial intelligence initiatives to help improve the health and wellness of patients Work with business leaders and customers to understand their pain points and build large-scale solutions for them. Define technical architecture to productize Innovaccer's machine-learning algorithms and take them to market through partnerships with different organizations Proven ability to break down complex business problems into machine learning problems and design solution workflows. Work with our data platform and applications teams to help them successfully integrate data science capabilities or algorithms into their products/workflows. Work with development teams to build tools for repeatable data tasks that will accelerate and automate the development cycle. Define and execute on the quarterly roadmap What You Need Master's in Computer Science, Computer Engineering or other relevant fields (PhD preferred) 7+ years of experience in Data Science (healthcare experience will be a plus) Strong written and spoken communication skills Strong hands-on experience in Python - building enterprise applications along with optimization techniques. Strong experience with deep learning techniques to build NLP/computer vision models as well as state-of-the-art GenAI pipelines - knowledge of implementing agentic workflows is a plus. Demonstrable experience deploying deep learning models in production at scale with iterative improvements; requires hands-on expertise with at least one deep learning framework such as PyTorch or TensorFlow. Has a keen interest in research and stays updated with key advancements in AI and ML in the industry. Deep understanding of classical ML techniques - Random Forests, SVM, Boosting, Bagging - and building training and evaluation pipelines. Demonstrated experience with global and local model explainability using LIME, SHAP and associated techniques. Hands-on experience with at least one ML platform among Databricks, Azure ML, or SageMaker. Experience in developing and deploying production-ready models. Knowledge of implementing an MLOps framework. Possess a customer-focused attitude through conversations and documentation. We offer competitive benefits to set you up for success in and outside of work.
Here’s What We Offer Generous Leave Benefits: Enjoy generous leave benefits of up to 40 days. Parental Leave: Experience one of the industry's best parental leave policies to spend time with your new addition. Sabbatical Leave Policy: Want to focus on skill development, pursue an academic career, or just take a break? We've got you covered. Health Insurance: We offer health benefits and insurance to you and your family for medically related expenses related to illness, disease, or injury. Pet-Friendly Office*: Spend more time with your treasured friends, even when you're away from home. Bring your furry friends with you to the office and let your colleagues become their friends, too. *Noida office only Creche Facility for children*: Say goodbye to worries and hello to a convenient and reliable creche facility that puts your child's well-being first. *India offices Where and how we work Our Noida office is situated in a posh techspace, equipped with various amenities to support our work environment. Here, we follow a five-day work schedule, allowing us to efficiently carry out our tasks and collaborate effectively within our team. Innovaccer is an equal-opportunity employer. We celebrate diversity, and we are committed to fostering an inclusive and diverse workplace where all employees, regardless of race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, marital status, or veteran status, feel valued and empowered. Disclaimer: Innovaccer does not charge fees or require payment from individuals or agencies for securing employment with us. We do not guarantee job spots or engage in any financial transactions related to employment. If you encounter any posts or requests asking for payment or personal information, we strongly advise you to report them immediately to our Px department at px@innovaccer.com. Additionally, please exercise caution and verify the authenticity of any requests before disclosing personal and confidential information, including bank account details. About Innovaccer Innovaccer Inc. is the data platform that accelerates innovation. The Innovaccer platform unifies patient data across systems and care settings, and empowers healthcare organizations with scalable, modern applications that improve clinical, financial, operational, and experiential outcomes. Innovaccer’s EPx-agnostic solutions have been deployed across more than 1,600 hospitals and clinics in the US, enabling care delivery transformation for more than 96,000 clinicians, and helping providers work collaboratively with payers and life sciences companies. Innovaccer has helped its customers unify health records for more than 54 million people and generate over $1.5 billion in cumulative cost savings. The Innovaccer platform is the #1 rated Best-in-KLAS data and analytics platform by KLAS, and the #1 rated population health technology platform by Black Book. For more information, please visit innovaccer.com. Check us out on YouTube, Glassdoor, LinkedIn, and innovaccer.com.
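On the model-explainability requirement in the role above (LIME/SHAP on classical models), a minimal SHAP sketch; the dataset is a public stand-in, not healthcare data:

    import shap
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier

    # Train a classical tree-ensemble model on a public stand-in dataset.
    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

    # TreeExplainer computes SHAP values efficiently for tree ensembles,
    # giving local (per-prediction) and global (aggregated) explanations.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X.iloc[:100])

    # Uncomment for a global feature-importance view:
    # shap.summary_plot(shap_values, X.iloc[:100])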
Posted 1 week ago
8.0 years
0 Lacs
Ahmedabad
On-site
Position Overview This role is responsible for defining and delivering ZURU’s next-generation data architecture—built for global scalability, real-time analytics, and AI enablement. You will lead the unification of fragmented data systems into a cohesive, cloud-native platform that supports advanced business intelligence and decision-making. Sitting at the intersection of data strategy, engineering, and commercial enablement, this role demands both deep technical acumen and strong cross-functional influence. You will drive the vision and implementation of robust data infrastructure, champion governance standards, and embed a culture of data excellence across the organisation. Position Impact In the first six months, the Head of Data Architecture will gain deep understanding of ZURU’s operating model, technology stack, and data fragmentation challenges. You’ll conduct a comprehensive review of current architecture, identifying performance gaps, security concerns, and integration challenges across systems like SAP, Odoo, POS, and marketing platforms. By month twelve, you’ll have delivered a fully aligned architecture roadmap—implementing cloud-native infrastructure, data governance standards, and scalable models and pipelines to support AI and analytics. You will have stood up a Centre of Excellence for Data, formalised global data team structures, and established yourself as a trusted partner to senior leadership. What are you Going to do? Lead Global Data Architecture: Own the design, evolution, and delivery of ZURU’s enterprise data architecture across cloud and hybrid environments. Consolidate Core Systems: Unify data sources across SAP, Odoo, POS, IoT, and media into a single analytical platform optimised for business value. Build Scalable Infrastructure: Architect cloud-native solutions that support both batch and streaming data workflows using tools like Databricks, Kafka, and Snowflake. Implement Governance Frameworks: Define and enforce enterprise-wide data standards for access control, privacy, quality, security, and lineage. Enable Metadata & Cataloguing: Deploy metadata management and cataloguing tools to enhance data discoverability and self-service analytics. Operationalise AI/ML Pipelines: Lead data architecture that supports AI/ML initiatives, including demand forecasting, pricing models, and personalisation. Partner Across Functions: Translate business needs into data architecture solutions by collaborating with leaders in Marketing, Finance, Supply Chain, R&D, and Technology. Optimize Cloud Cost & Performance: Roll out compute and storage systems that balance cost efficiency, performance, and observability across platforms. Establish Data Leadership: Build and mentor a high-performing data team across India and NZ, and drive alignment across engineering, analytics, and governance. Vendor and Tool Strategy: Evaluate external tools and partners to ensure the data ecosystem is future-ready, scalable, and cost-effective. What are we Looking for? 8+ years of experience in data architecture, with 3+ years in a senior or leadership role across cloud or hybrid environments Proven ability to design and scale large data platforms supporting analytics, real-time reporting, and AI/ML use cases Hands-on expertise with ingestion, transformation, and orchestration pipelines (e.g. 
Kafka, Airflow, DBT, Fivetran) Strong knowledge of ERP data models, especially SAP and Odoo Experience with data governance, compliance (GDPR/CCPA), metadata cataloguing, and security practices Familiarity with distributed systems and streaming frameworks like Spark or Flink Strong stakeholder management and communication skills, with the ability to influence both technical and business teams Experience building and leading cross-regional data teams
Tools & Technologies:
Cloud Platforms: AWS (S3, EMR, Kinesis, Glue), Azure (Synapse, ADLS), GCP
Big Data: Hadoop, Apache Spark, Apache Flink
Streaming: Kafka, Kinesis, Pub/Sub
Orchestration: Airflow, Prefect, Dagster, DBT
Warehousing: Snowflake, Redshift, BigQuery, Databricks Delta
NoSQL: Cassandra, DynamoDB, HBase, Redis
Query Engines: Presto/Trino, Athena
IaC & CI/CD: Terraform, GitLab Actions
Monitoring: Prometheus, Grafana, ELK, OpenTelemetry
Security/Governance: IAM, TLS, KMS, Amundsen, DataHub, Collibra, DBT for lineage
What do we Offer? Competitive compensation 5 Working Days with Flexible Working Hours Medical Insurance for self & family Training & skill development programs Work with the global team and make the most of its diverse knowledge Several discussions over Multiple Pizza Parties A lot more! Come and discover us!
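As one concrete shape of the orchestration stack listed above (Airflow plus DBT), a minimal Airflow 2.x DAG sketch; the job paths, project directory, and schedule are placeholders:

    from datetime import datetime
    from airflow import DAG
    from airflow.operators.bash import BashOperator

    with DAG(
        dag_id="daily_sales_ingest",
        start_date=datetime(2025, 1, 1),
        schedule="@daily",          # Airflow 2.4+; older versions use schedule_interval
        catchup=False,
    ) as dag:
        # Placeholder extract job pulling from an upstream system (e.g. SAP export).
        ingest = BashOperator(
            task_id="ingest_sap",
            bash_command="python /opt/jobs/ingest_sap.py",
        )
        # Placeholder dbt project handling the warehouse transformations.
        transform = BashOperator(
            task_id="dbt_run",
            bash_command="dbt run --project-dir /opt/dbt",
        )
        ingest >> transform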
Posted 1 week ago
3.0 - 7.0 years
3 - 5 Lacs
Calcutta
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. EY GDS – Data and Analytics (D&A) – Senior – Collibra As part of our EY-GDS D&A (Data and Analytics) team, we assist our clients in overcoming complex business challenges through the power of data and technology. We delve deep into data to extract maximum value and uncover opportunities across key sectors, including Banking, Insurance, Manufacturing, Healthcare, Retail, Supply Chain, and Finance. The opportunity We are seeking candidates with a robust understanding of technology and data in the Data Governance and Data Cataloguing space, along with a proven track record of successful delivery. This is an excellent opportunity to join a leading firm and be part of a dynamic Data and Analytics team. Your key responsibilities Develop standardized practices for creating and deploying data cataloguing and metadata solutions using Collibra. Collaborate with client technology leaders to understand their business objectives and architect data governance solutions tailored to their needs. Define and implement best practices for metadata management specific to client requirements. Create and maintain data dictionaries and hierarchies within Collibra. Integrate Collibra into the broader Data Analytics ecosystem, ensuring seamless functionality with other tools. Skills and attributes for success 3 - 7 years of total IT experience. Strong experience in designing and building solutions using Collibra (Data Catalog, metadata ingestion, data classification and tagging, data lineage, workflow creation). Extensive experience in developing and maintaining data cataloguing solutions. Proficient programming skills in Python, Java, or Groovy. Solid understanding of Data Governance and metadata management principles. Hands-on experience with API integration. Familiarity with at least one other data governance tool such as MS Purview, Alation, or Informatica. Knowledge of data engineering pipelines with Informatica PowerCenter and IDMC is preferred. Knowledge of data engineering pipelines in Azure/Databricks is preferred. Experience with BI and data analytics databases is a plus. Ability to translate business challenges into technical solutions, considering security, performance, and scalability. To qualify for the role, you must: Be a computer science graduate or equivalent with more than 3 years of industry experience. Have working experience in an Agile-based delivery methodology. Flexible and proactive/self-motivated working style with strong personal ownership of problem resolution. Excellent communicator (written and verbal, formal and informal). Ability to multi-task under pressure and work independently with minimal supervision. Strong verbal and written communication skills. Must be a team player and enjoy working in a cooperative and collaborative team environment. Adaptable to new technologies and standards. Participate in all aspects of the solution delivery life cycle, including analysis, design, development, testing, production deployment, and support. Ideally, you'll also have: Client management skills. Solutioning skills. What we look for People with technical experience and enthusiasm to learn new things in this fast-moving environment.
What working at EY offers At EY, we’re dedicated to helping our clients, from startups to Fortune 500 companies — and the work we do with them is as varied as they are. You get to work with inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer: Support, coaching and feedback from some of the most engaging colleagues around Opportunities to develop new skills and progress your career. The freedom and flexibility to handle your role in a way that’s right for you. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
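For flavour of the API-integration work named in the Collibra role above, a hedged Python sketch of registering an asset through a Collibra REST endpoint. The base URL, credentials, and UUIDs are placeholders, and the exact endpoint and payload shape should be verified against the Collibra REST API documentation for your version:

    import requests

    BASE = "https://example.collibra.com/rest/2.0"   # placeholder instance URL
    session = requests.Session()
    session.auth = ("svc_catalog", "********")       # placeholder service account

    # Minimal asset payload; domainId/typeId are placeholder UUIDs, not real ones.
    payload = {
        "name": "CUSTOMER_MASTER",
        "domainId": "00000000-0000-0000-0000-000000000000",
        "typeId": "00000000-0000-0000-0000-000000000001",
    }

    resp = session.post(f"{BASE}/assets", json=payload, timeout=30)
    resp.raise_for_status()
    print("Created asset:", resp.json().get("id"))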
Posted 1 week ago
0 years
0 Lacs
Andhra Pradesh, India
On-site
ABOUT EVERNORTH: Evernorth℠ exists to elevate health for all, because we believe health is the starting point for human potential and progress. As champions for affordable, predictable, and simple health care, we solve the problems others don't, won't or can't. Our innovation hub in India will allow us to work with the right talent, expand our global footprint, improve our competitive stance, and better deliver on our promises to stakeholders. We are passionate about making healthcare better by delivering world-class solutions that make a real difference. We are always looking upward. And that starts with finding the right talent to help us get there. Application Development Advisor - Pega Position Overview As an Advisor developer, you will be involved in the design, development, and testing of Pega PRPC (Pega Rules Process Commander) and related infrastructure. You will be in direct contact with technical leads, delivery managers, system architects, on- and off-shore team members as well as other engineers. Responsibilities Analyze, design and support implementation of business-specific Pega solutions and/or frameworks. Responsible for implementing technical solutions on Pega 8.8.x and Pega Healthcare Management Ability to create reusable components that can be leveraged across the enterprise for providing a top-notch customer experience Ability to translate complex business requirements into functional technical requirements using the PegaSystems BPM methodology. Good hands-on experience implementing Pega integration services, and a good understanding of Pega's new case management, GetNext, and Agents features. Perform regular code and design reviews. Assist with planning and execution of unit, integration and user acceptance testing. Provide regular updates to the team lead and project manager on project progress and outstanding issues Participates in peer reviews of solution designs and related code and configurations Supports packaging and deployment of releases Develops, refines, and tunes integrations between applications About Evernorth: Evernorth Health Services, a division of The Cigna Group (NYSE: CI), creates pharmacy, care, and benefits solutions to improve health and increase vitality. We relentlessly innovate to make the prediction, prevention, and treatment of illness and disease more accessible to millions of people. Position Overview: The Full-stack Data Engineer is responsible for the delivery of a business need end-to-end, starting from understanding the requirements to deploying the software into production. This role requires you to be fluent in some of the critical technologies, proficient in others, and have a hunger to learn on the job and add value to the business. Critical attributes of a full-stack engineer, among others, are ownership and accountability. In addition to delivery, the full-stack engineer should have an automation-first and continuous-improvement mindset. He/She should drive the adoption of CI/CD tools and support the improvement of the tool sets/processes. Full-stack engineers are able to articulate clear business objectives aligned to technical specifications and work in an iterative, agile pattern daily. They have ownership over their work tasks, and embrace interacting with all levels of the team and raise challenges when necessary. We aim to be cutting-edge engineers – not institutionalized developers.
Roles & Responsibilities: Minimize "meetings" to get requirements and have direct business interactions Write referenceable & modular code Design and architect the solution independently Be fluent in particular areas and have proficiency in many areas Have a passion to learn Take ownership and accountability Understand when to automate and when not to Have a desire to simplify Be entrepreneurial / business minded Have a quality mindset: not just code quality, but also ensuring ongoing data quality by monitoring data to identify problems before they have a business impact Take risks and champion new ideas Qualifications Primary Skills: Hands-on Python/PySpark programming - 5+ years SQL experience - 4+ years (NoSQL can also work, but with at least 3 years of SQL) Big data technologies such as Databricks or Snowflake - 1+ year (strong on theory) Experience working with cloud technologies - 3+ years - any (AWS preferred) DevOps practices - 1+ year Experience Desired: Experience with Git/SVN Experience with scripting (JavaScript, Python, R, Ruby, Perl, etc.) Experience being part of Agile teams - Scrum or Kanban. Airflow Databricks / cloud certifications Additional Skills: Excellent troubleshooting skills Strong communication skills Fluent in BDD and TDD development methodologies Work in an agile CI/CD environment (Jenkins experience a plus) Knowledge and/or experience with healthcare information domains is a plus Location & Hours of Work Hyderabad / General Shift (11:30 AM - 8:30 PM IST / 1:00 AM - 10:00 AM EST / 2:00 AM - 11:00 AM EDT) Equal Opportunity Statement Evernorth is an Equal Opportunity Employer actively encouraging and supporting organization-wide involvement of staff in diversity, equity, and inclusion efforts to educate, inform and advance both internal practices and external work with diverse client populations About Evernorth Health Services Evernorth Health Services, a division of The Cigna Group, creates pharmacy, care and benefit solutions to improve health and increase vitality. We relentlessly innovate to make the prediction, prevention and treatment of illness and disease more accessible to millions of people. Join us in driving growth and improving lives.
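Given the TDD/BDD expectation above, a minimal pytest sketch of test-driving a small transformation; the function and its rules are illustrative only, not Evernorth's:

    import pytest

    def normalize_member_id(raw: str) -> str:
        """Illustrative transform: strip whitespace, zero-pad IDs to 9 digits."""
        return raw.strip().zfill(9)

    @pytest.mark.parametrize("raw, expected", [
        (" 1234 ", "000001234"),
        ("999999999", "999999999"),
    ])
    def test_normalize_member_id(raw, expected):
        # The test encodes the requirement first; the transform is written to satisfy it.
        assert normalize_member_id(raw) == expected

    # Run with: pytest test_transform.py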
Posted 1 week ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Embark on a transformative journey as Data Strategy Analyst at Barclays, where you'll spearhead the evolution of our digital landscape, driving innovation and excellence. You'll harness cutting-edge technology to revolutionize our digital offerings, ensuring unparalleled customer experiences. The Data Strategy Team within Credit and Data Analytics (CDA) is in a long-term program to migrate data and analysis from on-premise tools and platforms (SAS, Oracle, etc.) to AWS with world-class analysis using more modern tools, platforms and data (AWS, Databricks, Git, Python, etc.). As part of this journey, Data Strategy works closely with both business units and Tech resources to craft the narrative of what the migration will look like and then validates that it was done successfully. To be successful in this role as a Data Strategy Analyst, you should possess the following skillsets: Technical skills consistent with performing the following functions: Use Python and various packages for the exploration of data within the AWS/Athena environment. Read and potentially convert SAS scripts to Python - recognize data usage in SAS and be able to migrate the data steps into Python/PySpark for analysis. Manage code and processes with version control platforms like Git, Bitbucket and potentially GitHub. Communication skills as both the receiver of requests and the provider of results. Must be able to take relevant direction on a request and translate that into an approach for the analysis that drives to the right results. Must be able to compile results and provide them to business teams in a meaningful manner to deliver value and drive insight into whether data migration is successful. Data Quality concepts to understand what makes data valid and how to assess it. Provide insight to Tech for standardized validation of data transformation. Collaborate with business teams to understand what "good data" means to them and translate this into requirements. Some Other Highly Valued Skills Include Team collaboration - the Data Strategy team is highly collaborative and each member provides input and insight for weekly meetings, monthly business reviews and other product/process sharing endeavors. You will also be assessed on the key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking and digital and technology, as well as job-specific technical skills. This role is based in our Noida office. Purpose of the role To use innovative data analytics and machine learning techniques to extract valuable insights from the bank's data reserves, leveraging these insights to inform strategic decision-making, improve operational efficiency, and drive innovation across the organisation. Accountabilities Identification, collection, extraction of data from various sources, including internal and external sources. Performing data cleaning, wrangling, and transformation to ensure its quality and suitability for analysis. Development and maintenance of efficient data pipelines for automated data acquisition and processing. Design and conduct of statistical and machine learning models to analyse patterns, trends, and relationships in the data. Development and implementation of predictive models to forecast future outcomes and identify potential risks and opportunities. Collaborate with business stakeholders to seek out opportunities to add value from data through Data Science.
Analyst Expectations: To perform prescribed activities in a timely manner and to a high standard, consistently driving continuous improvement. Requires in-depth technical knowledge and experience in their assigned area of expertise. Thorough understanding of the underlying principles and concepts within the area of expertise. They lead and supervise a team, guiding and supporting professional development, allocating work requirements and coordinating team resources. If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: L – Listen and be authentic, E – Energise and inspire, A – Align across the enterprise, D – Develop others. OR for an individual contributor, they develop technical expertise in their work area, acting as an advisor where appropriate. Will have an impact on the work of related teams within the area. Partner with other functions and business areas. Takes responsibility for end results of a team's operational processing and activities. Escalate breaches of policies/procedures appropriately. Take responsibility for embedding new policies/procedures adopted due to risk mitigation. Advise and influence decision making within own area of expertise. Take ownership for managing risk and strengthening controls in relation to the work you own or contribute to. Deliver your work and areas of responsibility in line with relevant rules, regulation and codes of conduct. Maintain and continually build an understanding of how own sub-function integrates with function, alongside knowledge of the organisation's products, services and processes within the function. Demonstrate understanding of how areas coordinate and contribute to the achievement of the objectives of the organisation sub-function. Make evaluative judgements based on the analysis of factual information, paying attention to detail. Resolve problems by identifying and selecting solutions through the application of acquired technical experience and will be guided by precedents. Guide and persuade team members and communicate complex/sensitive information. Act as contact point for stakeholders outside of the immediate function, while building a network of contacts outside the team and external to the organisation. All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship – our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset – to Empower, Challenge and Drive – the operating manual for how we behave.
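To make the SAS-to-Python migration described above concrete, a minimal sketch of one SAS data step rewritten in PySpark; the table and column names are invented for illustration, not Barclays data:

    # SAS data step being migrated (illustrative):
    #   data work.high_risk;
    #     set cda.accounts;
    #     where status = "OPEN";
    #     if balance > 10000 then segment = "HIGH";
    #     else segment = "STD";
    #   run;
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("sas_migration").getOrCreate()

    # Assumes the table is registered in a catalog (e.g. Glue catalog behind Athena).
    accounts = spark.table("cda.accounts")

    # WHERE clause becomes a filter; the IF/ELSE becomes a when/otherwise column.
    high_risk = (accounts
                 .filter(F.col("status") == "OPEN")
                 .withColumn("segment",
                             F.when(F.col("balance") > 10000, "HIGH").otherwise("STD")))
    high_risk.createOrReplaceTempView("high_risk")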
Posted 1 week ago
7.0 - 12.0 years
17 - 27 Lacs
Bengaluru, Delhi / NCR, Mumbai (All Areas)
Work from Office
Key Responsibilities:
- Requirement gathering and analysis
- Design of data architecture and data models to ingest data
- Experience with different databases like Synapse, SQL DB, Snowflake, etc.
- Design and implement data pipelines using Azure Data Factory, Databricks, and Synapse
- Create and manage Azure SQL Data Warehouses and Azure Cosmos DB databases
- Extract, transform, and load (ETL) data from various sources into Azure Data Lake Storage
- Implement data security and governance measures
- Monitor and optimize data pipelines for performance and efficiency
- Troubleshoot and resolve data engineering issues
- Hands-on experience with Azure Functions and other components like real-time streaming
- Oversee Azure billing processes, conducting analyses to ensure cost-effectiveness and efficiency in data operations
- Provide optimized solutions for any problem related to data engineering
- Ability to work with a variety of sources like relational databases, APIs, file systems, real-time streams, CDC, etc.
- Strong knowledge of Databricks and Delta tables
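Since the role above calls for CDC sources and strong Delta table knowledge, a minimal sketch of a CDC upsert using the delta-spark MERGE API; the landing path and table names are placeholders:

    from pyspark.sql import SparkSession
    from delta.tables import DeltaTable

    spark = SparkSession.builder.appName("cdc_upsert").getOrCreate()

    # Incoming change records from a hypothetical landing zone.
    updates = spark.read.format("parquet").load("/mnt/landing/customers_cdc/")

    # Upsert into the target Delta table, keyed on customer_id:
    # matched rows are updated in place, new keys are inserted.
    target = DeltaTable.forName(spark, "silver.customers")
    (target.alias("t")
     .merge(updates.alias("s"), "t.customer_id = s.customer_id")
     .whenMatchedUpdateAll()
     .whenNotMatchedInsertAll()
     .execute())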
Posted 1 week ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Role: Business Analyst Experience: 6-16 Years Domain: Insurance, Finance / Investment Banking Requirements: Bachelor's degree in Finance, Economics, or a related discipline. 8+ years of experience in a BSA or similar role on data analytics or technology projects. 5+ years of domain experience in asset management, investment management, insurance, or financial services. Familiarity with Investment Operations concepts such as Critical Data Elements (CDEs), data traps, and reconciliation workflows. Working knowledge of data engineering principles: ETL/ELT, data lakes, and data warehousing. Proficiency in BI and analytics tools such as Power BI, Tableau, MicroStrategy, and SQL. Excellent communication, analytical thinking, and stakeholder engagement skills. Experience working in Agile/Scrum environments with cross-functional delivery teams. The Ideal Qualifications - Technical Skills: Proven track record of analytical and problem-solving skills. In-depth knowledge of investment data platforms, including GoldenSource, NeoXam, RIMES, JPM Fusion, etc. Expertise in cloud data technologies such as Snowflake, Databricks, and AWS/GCP/Azure data services. Strong understanding of data governance frameworks, metadata management, and data lineage. Familiarity with regulatory requirements and compliance standards in the investment management industry. Hands-on experience with IBORs such as BlackRock Aladdin, CRD, Eagle STAR (ABOR), Eagle Pace, and Eagle DataMart. Familiarity with investment data platforms such as GoldenSource, FINBOURNE, NeoXam, RIMES, and JPM Fusion. Experience with cloud data platforms like Snowflake and Databricks. Background in data governance, metadata management, and data lineage frameworks. Soft Skills: Exceptional communication and interpersonal skills. Ability to influence and motivate teams without direct authority. Excellent time management and organizational skills, with the ability to prioritize multiple initiatives. Ability to lead cross-functional teams and manage complex projects.
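As a small illustration of the reconciliation workflows mentioned above, a pandas sketch that flags position breaks between two hypothetical ABOR/IBOR feeds; the data is invented:

    import pandas as pd

    # Hypothetical position snapshots from two books of record.
    abor = pd.DataFrame({"security_id": ["A1", "A2", "A3"], "quantity": [100, 250, 75]})
    ibor = pd.DataFrame({"security_id": ["A1", "A2", "A4"], "quantity": [100, 240, 50]})

    recon = abor.merge(ibor, on="security_id", how="outer",
                       suffixes=("_abor", "_ibor"), indicator=True)

    # A break is a security missing from one side or with a mismatched quantity.
    breaks = recon[(recon["_merge"] != "both") |
                   (recon["quantity_abor"] != recon["quantity_ibor"])]
    print(breaks)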
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
Karnataka
On-site
As a Generative AI Developer at CGI, you will have the opportunity to lead the design and implementation of innovative AI solutions using cutting-edge technologies. You will be responsible for leveraging Microsoft Azure Cognitive Services, Azure OpenAI, Databricks, and advanced AI/ML technologies to create AI-powered systems that deliver optimal performance, scalability, and security within cloud environments. Your key responsibilities will include architecting and designing scalable AI solutions, developing AI/ML models focusing on Natural Language Processing (NLP) and generative AI use cases, collaborating with cross-functional teams for deploying AI models into production environments, integrating advanced analytics solutions using Python and various AI/ML frameworks, overseeing cloud architectures, mentoring junior AI engineers and data scientists, engaging with stakeholders to identify AI-driven improvements, acting as a subject matter expert on AI-related topics, and driving the implementation of DevOps practices for continuous integration and deployment in AI model development. To excel in this role, you should possess extensive experience with Microsoft Azure Cognitive Services, Azure AI, and Azure OpenAI, expertise in Python, AI/machine learning, and NLP technologies, proficiency in Databricks for large-scale data processing, deep understanding of cloud architecture, experience in implementing AI models in production environments using DevOps practices, strong analytical thinking skills, knowledge of AI, machine learning, and data science principles, excellent communication and presentation skills, leadership abilities, and a Bachelor's or Master's degree in Computer Science, Data Science, Artificial Intelligence, or a related field. If you are passionate about AI technologies, have a proven track record in solution architecture roles, and are looking to work in a collaborative and innovative environment, this role at CGI could be the perfect fit for you. Join us in turning meaningful insights into action and contribute to our collective success as a CGI Partner. Grow your career, develop your skills, and make a difference in one of the largest IT and business consulting services firms in the world.
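For flavour of the Azure OpenAI integration work described above, a minimal sketch using the openai Python SDK (v1+); the endpoint, API version, and deployment name are placeholders to be replaced with your resource's values:

    import os
    from openai import AzureOpenAI

    client = AzureOpenAI(
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        api_version="2024-02-01",                                    # check your supported version
        azure_endpoint="https://example-resource.openai.azure.com",  # placeholder endpoint
    )

    resp = client.chat.completions.create(
        model="gpt-4o-deployment",   # the *deployment* name, not the base model name
        messages=[{"role": "user",
                   "content": "Summarize this support ticket in one sentence: ..."}],
    )
    print(resp.choices[0].message.content)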
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
Diageo's ambition is to be one of the best-performing, most trusted, and respected consumer products companies in the world. The strategy is to support premiumisation in developed and emerging countries by offering a broad portfolio across different consumer occasions and price points. This approach also plays a crucial role in shaping responsible drinking trends in markets where international premium spirits are an emerging category. As a member of Diageo's Analytics & Insights team, you will be instrumental in designing, developing, and implementing analytics products to drive the company's competitive advantage and facilitate data-driven decisions. Your role will involve advancing the sophistication of analytics throughout Diageo, serving as a data evangelist to empower stakeholders, identifying meaningful insights from vast data sources, and communicating findings to drive growth, enhance consumer experiences, and optimize business processes. While the role does not entail budget ownership, understanding architecture resource costs is necessary. You will be supporting global initiatives and functions across various markets, working closely with key stakeholders to create possibilities, foster conditions for success, promote personal and professional growth, and maintain authenticity in all interactions. The purpose of the role includes owning and developing a domain-specific data visualization product portfolio, ensuring compliance with technological and business priorities, and contributing to the end-to-end build of analytics products meeting enterprise standards. You will lead agile teams in developing robust BI solutions, provide technical guidance, oversee data flow, and collaborate with internal and external partners to deliver innovative solutions. Your top accountabilities will involve technical leadership in analytics product builds, optimization of data visualization architecture, BAU support, and feedback to enhance data model standards. Business acumen is essential, particularly in working with marketing data and building relationships with stakeholders to drive data-led innovation. Required qualifications include multiple years of experience in BI solution development, a bachelor's degree in a relevant field, hands-on experience as a lead developer, proficiency in DAX & M language, knowledge of Azure architecture, and expertise in data acquisition and processing. Additionally, experience with the Azure platform, technical documentation, DevOps solutions, Agile methodologies, and a willingness to deepen solution architecture skills are vital. Experience with structured and unstructured datasets, design collaboration, user experience best practices, and visualization trends is advantageous. A dynamic personality, proficiency in English, and excellent communication skills are key for success in this role.
Posted 1 week ago
6170 Jobs | New Delhi