0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description

Your Responsibilities

We are seeking an experienced and highly motivated Sr Data Engineer - Data Ingestion to join our dynamic team. The ideal candidate will have strong hands-on experience with Azure Data Factory (ADF), a deep understanding of relational and non-relational data ingestion techniques, and proficiency in Python programming. You will be responsible for designing and implementing scalable data ingestion solutions that interface with Azure Data Lake Storage Gen 2 (ADLS Gen 2), Databricks, and various other Azure ecosystem services.

The Data Ingestion Engineer will work closely with stakeholders to gather data ingestion requirements, create modularized ingestion solutions, and define best practices to ensure efficient, robust, and scalable data pipelines. This role requires effective communication skills, ownership, and accountability for the delivery of high-quality data solutions.

Data Ingestion Strategy & Development:
- Design, develop, and deploy scalable and efficient data pipelines in Azure Data Factory (ADF) to move data from multiple sources (relational, non-relational, files, APIs, etc.) into Azure Data Lake Storage Gen 2 (ADLS Gen 2), Azure SQL Database, and other target systems.
- Implement ADF activities (copy, lookup, execute pipeline, etc.) to integrate data from on-premises and cloud-based systems.
- Build parameterized and reusable pipeline templates in ADF to standardize the data ingestion process, ensuring maintainability and scalability of ingestion workflows.
- Integrate custom data transformation activities within ADF pipelines, utilizing Python, Databricks, or Azure Functions when required.

ADF Data Flows Design & Development:
- Leverage Azure Data Factory Data Flows to visually design and orchestrate data transformation tasks, enabling complex ETL (Extract, Transform, Load) logic to process large datasets at scale.
- Design data flow transformations such as filtering, aggregation, joins, lookups, and sorting to process and transform data before loading it into target systems like ADLS Gen 2 or Azure SQL Database.
- Implement incremental loading strategies in Data Flows to ensure efficient and optimized data ingestion for large volumes of data while minimizing resource consumption.
- Develop reusable data flow components to streamline transformation processes, ensuring consistency and reducing development time for new data ingestion pipelines.
- Utilize debugging tools in Data Flows to troubleshoot, test, and optimize data transformations, ensuring accurate results and performance.

ADF Orchestration & Automation:
- Use ADF triggers and scheduling to automate pipeline execution based on time or events, ensuring timely and efficient data ingestion.
- Configure ADF monitoring and alerting capabilities to proactively track pipeline performance, handle failures, and address issues in a timely manner.
- Implement ADF version control practices using Git to manage code changes, collaborate effectively with other team members, and ensure code integrity.

Data Integration with Various Sources:
- Ingest data from diverse sources such as on-premises SQL Servers, REST APIs, cloud databases (e.g., Azure SQL Database, Cosmos DB), file-based systems (CSV, Parquet, JSON), and third-party services using ADF.
- Design and implement ADF linked services to securely connect to external data sources (databases, file systems, APIs, etc.).
- Develop and configure ADF datasets and data flows to efficiently transform, clean, and load data into Azure Data Lake or other destinations.

Pipeline Monitoring and Optimization:
- Continuously monitor and optimize ADF pipelines to ensure they run with high performance and minimal cost, applying techniques like data partitioning, parallel processing, and incremental loading where appropriate.
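The incremental loading strategy mentioned above usually follows the high-water-mark pattern: a stored watermark (typically the last successfully loaded modification timestamp) parameterizes the source query so only newer rows are ingested. In ADF this is commonly a Lookup activity reading the watermark plus a parameterized copy; the sketch below shows the pattern in plain Python, with all field names illustrative.

```python
from datetime import datetime

def incremental_load(source_rows, watermark):
    """Return rows modified after `watermark`, plus the new watermark value.

    Sketch of the high-water-mark pattern: persist `new_watermark` only after
    the load succeeds, so a failed run safely re-reads the same window.
    """
    new_rows = [r for r in source_rows if r["modified_at"] > watermark]
    new_watermark = max((r["modified_at"] for r in new_rows), default=watermark)
    return new_rows, new_watermark

# Illustrative source table; only rows 2 and 3 are newer than the watermark.
rows = [
    {"id": 1, "modified_at": datetime(2024, 1, 1)},
    {"id": 2, "modified_at": datetime(2024, 1, 5)},
    {"id": 3, "modified_at": datetime(2024, 1, 9)},
]

loaded, wm = incremental_load(rows, watermark=datetime(2024, 1, 3))
print([r["id"] for r in loaded])  # rows 2 and 3 are re-ingested
print(wm)                         # new watermark: the latest timestamp loaded
```

The key design point is idempotence: because the watermark only advances after a successful load, retrying a failed run re-selects the same slice rather than skipping or duplicating data.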
- Implement data quality checks within the pipelines to ensure data integrity and handle data anomalies or errors in a systematic manner.
- Review pipeline execution logs and performance metrics regularly, and apply tuning recommendations to improve execution times and reduce operational costs.

Collaboration and Communication:
- Work closely with business and technical stakeholders to capture and translate data ingestion requirements into ADF pipeline designs.
- Provide ADF-specific technical expertise to both internal and external teams, guiding them in the use of ADF for efficient and cost-effective data pipelines.
- Document ADF pipeline designs, error handling strategies, and best practices to ensure the team can maintain and scale the solutions.
- Conduct training sessions or knowledge transfer with junior engineers or other team members on ADF best practices and architecture.

Security and Compliance:
- Ensure all data ingestion solutions built in ADF follow security and compliance guidelines, including encryption at rest and in transit, data masking, and identity and access management.
- Implement role-based access control (RBAC) and managed identities within ADF to manage access securely and reduce the risk of unauthorized access to sensitive data.

Integration with Azure Ecosystem:
- Leverage other Azure services, such as Azure Logic Apps, Azure Function Apps, and Azure Databricks, to augment the capabilities of ADF pipelines, enabling more advanced data processing, event-driven workflows, and custom transformations.
- Incorporate Azure Key Vault to securely store and manage sensitive data (e.g., connection strings, credentials) used in ADF pipelines.
- Integrate ADF with Azure Data Lake Analytics, Synapse Analytics, or other data warehousing solutions for advanced querying and analytics after ingestion.
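The data quality checks called for above typically split incoming rows into a valid set and a rejected set, so anomalies are quarantined rather than silently loaded. A minimal plain-Python sketch, with rules and field names purely illustrative:

```python
# Illustrative row-level validation rules; a real pipeline would load these
# from configuration and route rejected rows to a quarantine location.
RULES = {
    "id": lambda v: isinstance(v, int) and v > 0,
    "amount": lambda v: isinstance(v, (int, float)) and v >= 0,
    "currency": lambda v: v in {"USD", "EUR", "INR"},
}

def validate(rows, rules=RULES):
    """Split rows into (valid, rejected); rejected rows record which fields failed."""
    valid, rejected = [], []
    for row in rows:
        failed = [field for field, check in rules.items() if not check(row.get(field))]
        if failed:
            rejected.append({**row, "_failed": failed})
        else:
            valid.append(row)
    return valid, rejected

good, bad = validate([
    {"id": 1, "amount": 10.5, "currency": "USD"},
    {"id": -2, "amount": 3.0, "currency": "GBP"},
])
print(len(good), len(bad))   # 1 1
print(bad[0]["_failed"])     # ['id', 'currency']
```

Recording the failed field names on each rejected row is what makes the handling "systematic": downstream alerting and reprocessing can group rejects by failure reason instead of re-inspecting raw data.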
Best Practices & Continuous Improvement:
- Develop and enforce best practices for building and maintaining ADF pipelines and data flows, ensuring the solutions are modular, reusable, and follow coding standards.
- Identify opportunities for pipeline automation to reduce manual intervention and improve operational efficiency.
- Regularly review and suggest new tools or services within the Azure ecosystem to enhance ADF pipeline performance and increase the overall efficiency of data ingestion workflows.

Incident and Issue Management:
- Actively monitor the health of the data pipelines, swiftly addressing any failures, data quality issues, or performance bottlenecks.
- Troubleshoot ADF pipeline errors, including issues within Data Flows, and work with other teams to root-cause issues related to data availability, quality, or connectivity.
- Participate in post-mortem analysis for any major incidents, documenting lessons learned and implementing preventative measures for the future.

Your Profile

Experience with Azure Data Services:
- Strong experience with Azure Data Factory (ADF) for orchestrating data pipelines.
- Hands-on experience with ADLS Gen 2, Databricks, and various data formats (e.g., Parquet, JSON, CSV).
- Solid understanding of Azure SQL Database, Azure Logic Apps, Azure Function Apps, and Azure Container Apps.

Programming and Scripting:
- Proficient in Python for data ingestion, automation, and transformation tasks.
- Ability to write clean, reusable, and maintainable code.

Data Ingestion Techniques:
- Solid understanding of relational and non-relational data models and their ingestion techniques.
- Experience working with file-based data ingestion, API-based data ingestion, and integrating data from various third-party systems.

Problem Solving & Analytical Skills

Communication Skills

#IncludingYou

Diversity, equity, inclusion and belonging are cornerstones of ADM’s efforts to continue innovating, driving growth, and delivering outstanding performance.
We are committed to attracting and retaining a diverse workforce and to creating welcoming, truly inclusive work environments — environments that enable every ADM colleague to feel comfortable on the job, make meaningful contributions to our success, and grow their career. We respect and value the unique backgrounds and experiences that each person can bring to ADM because we know that diversity of perspectives makes us better, together.

For more information regarding our efforts to advance Diversity, Equity, Inclusion & Belonging, please visit our website: Diversity, Equity and Inclusion | ADM.

About ADM

At ADM, we unlock the power of nature to provide access to nutrition worldwide. With industry-advancing innovations, a complete portfolio of ingredients and solutions to meet any taste, and a commitment to sustainability, we give customers an edge in solving the nutritional challenges of today and tomorrow. We’re a global leader in human and animal nutrition and the world’s premier agricultural origination and processing company. Our breadth, depth, insights, facilities and logistical expertise give us unparalleled capabilities to meet needs for food, beverages, health and wellness, and more. From the seed of the idea to the outcome of the solution, we enrich the quality of life the world over. Learn more at www.adm.com.

Req/Job ID: 97477BR Ref ID
Posted 22 hours ago
5.0 years
0 Lacs
Hyderābād
On-site
Job description

Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.

HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team in the role of Consultant Specialist. In this role, you will be responsible for the following:

Data Pipelines Integration and Management:
- Design and implement scalable data architectures to support the bank's data needs.
- Develop and maintain ETL (Extract, Transform, Load) processes.
- Ensure the data infrastructure is reliable, scalable, and secure.
- Oversee the integration of diverse data sources into a cohesive data platform.
- Ensure data quality, data governance, and compliance with regulatory requirements.
- Develop and enforce data security policies and procedures.
- Monitor and optimize data pipeline performance.
- Troubleshoot and resolve data-related issues promptly.
- Implement monitoring and alerting systems for data processes.

Requirements

To be successful in this role, you should meet the following requirements:
- 5+ years of experience in data engineering or a related field.
- Strong experience with database technologies (SQL, NoSQL), data warehousing solutions, and big data technologies (Hadoop, Spark).
- Proficiency in programming languages such as Python, Java, or Scala.
- Experience with cloud platforms (AWS, Azure, Google Cloud) and their data services.
- Deep understanding of ETL processes and data pipeline orchestration tools (Airflow, Apache NiFi).
- Knowledge of data modeling, data warehousing concepts, and data integration techniques.
- Strong problem-solving skills and the ability to work under pressure.
- Experience in the banking or financial services industry.
- Familiarity with regulatory requirements related to data security and privacy in the banking sector.
- Certifications in cloud platforms (AWS Certified Data Analytics, Google Professional Data Engineer, etc.).
- Experience with machine learning and data science frameworks.

You’ll achieve more when you join HSBC. www.hsbc.com/careers

HSBC is committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.

Issued by – HSBC Software Development India
Posted 22 hours ago
5.0 years
5 Lacs
Hyderābād
On-site
Job description

Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.

HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team in the role of Consultant Specialist. In this role, you will:

Design and Develop ETL Processes:
- Lead the design and implementation of ETL processes, using batch and streaming tools to extract, transform, and load data from various sources into GCP.
- Collaborate with stakeholders to gather requirements and ensure that ETL solutions meet business needs.

Data Pipeline Optimization:
- Optimize data pipelines for performance, scalability, and reliability, ensuring efficient data processing workflows.
- Monitor and troubleshoot ETL processes, proactively addressing issues and bottlenecks.

Data Integration and Management:
- Integrate data from diverse sources, including databases, APIs, and flat files, ensuring data quality and consistency.
- Manage and maintain data storage solutions in GCP (e.g., BigQuery, Cloud Storage) to support analytics and reporting.

GCP Dataflow Development:
- Write Apache Beam-based Dataflow jobs for data extraction, transformation, and analysis, ensuring optimal performance and accuracy.
- Collaborate with data analysts and data scientists to prepare data for analysis and reporting.
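A Dataflow job of the kind described above is usually a parse, filter, aggregate sequence; in Apache Beam each step maps to a transform (e.g. beam.Map, beam.Filter, beam.CombinePerKey). Keeping the per-step logic in plain, named functions makes it unit-testable before it is wired into a pipeline. A hedged sketch in plain Python, with the record format and field meanings purely illustrative:

```python
from collections import defaultdict

def parse(line):
    """Parse an illustrative 'user,amount' CSV line into a (key, value) pair."""
    user, amount = line.split(",")
    return user, float(amount)

def is_valid(record):
    """Filter step: keep only positive amounts (rule is an assumption)."""
    return record[1] > 0

def total_per_key(records):
    """Aggregate step: sum amounts per key (beam.CombinePerKey(sum) in Beam)."""
    totals = defaultdict(float)
    for key, amount in records:
        totals[key] += amount
    return dict(totals)

# Simulated input; in a real Dataflow job this would come from Pub/Sub or GCS.
lines = ["alice,10.0", "bob,-1.0", "alice,5.5", "bob,2.0"]
records = [parse(line) for line in lines]
totals = total_per_key(r for r in records if is_valid(r))
print(totals)  # {'alice': 15.5, 'bob': 2.0}
```

Because each function is pure, the same logic can be exercised in fast unit tests and then reused verbatim inside the Beam pipeline's transforms, which helps catch correctness bugs before paying for a Dataflow run.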
Automation and Monitoring:
- Implement automation for ETL workflows using tools like Apache Airflow or Cloud Composer, enhancing efficiency and reducing manual intervention.
- Set up monitoring and alerting mechanisms to ensure the health of data pipelines and compliance with SLAs.

Data Governance and Security:
- Apply best practices for data governance, ensuring compliance with industry regulations (e.g., GDPR, HIPAA) and internal policies.
- Collaborate with security teams to implement data protection measures and address vulnerabilities.

Documentation and Knowledge Sharing:
- Document ETL processes, data models, and architecture to facilitate knowledge sharing and onboarding of new team members.
- Conduct training sessions and workshops to share expertise and promote best practices within the team.

Requirements

To be successful in this role, you should meet the following requirements:
- Experience: Minimum of 5 years of industry experience in data engineering or ETL development, with a strong focus on DataStage and GCP. Proven experience in designing and managing ETL solutions, including data modeling, data warehousing, and SQL development.
- Technical Skills: Strong knowledge of GCP services (e.g., BigQuery, Dataflow, Cloud Storage, Pub/Sub) and their application in data engineering. Experience with cloud-based solutions, especially in GCP; a cloud-certified candidate is preferred. Experience with big data processing in both batch and streaming modes, and proficiency in the big data ecosystem, e.g., Hadoop, HBase, Hive, MapReduce, Kafka, Flink, Spark, etc. Familiarity with Java and Python for data manipulation on cloud/big data platforms.
- Analytical Skills: Strong problem-solving skills with a keen attention to detail. Ability to analyze complex data sets and derive meaningful insights.

You’ll achieve more when you join HSBC. www.hsbc.com/careers

HSBC is committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.

Issued by – HSBC Software Development India
Posted 22 hours ago
4.0 - 6.0 years
7 - 9 Lacs
Hyderābād
On-site
Job description

Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.

HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team in the role of Marketing Title. In this role, you will:
- Perform system development work around ETL, which can include both the development of new functions and facilities and the ongoing support of live systems.
- Be responsible for the documentation, coding, and maintenance of new and existing Extract, Transform, and Load (ETL) processes within the Enterprise Data Warehouse.
- Investigate live systems faults, diagnose problems, and propose and provide solutions.
- Work closely with various teams to design, build, test, deploy and maintain insightful MI reports.
- Support System Acceptance Testing, System Integration and Regression Testing.
- Identify any issues that may pose a delivery risk, formulate preventive actions or corrective measures, and escalate major project risks and issues to the service owner in a timely manner.
- Execute test cases and log defects.
- Be proactive in understanding the existing system, identifying areas for improvement, and taking ownership of assigned tasks.
- Work independently with minimal supervision while ensuring timely delivery of tasks.

Requirements

To be successful in this role, you should meet the following requirements:
- 4-6 years of experience in Data Warehousing, specialized in ETL.
- Given that the current team is highly technical in nature, experience in technologies like DataStage, Teradata Vantage, Unix scripting, scheduling using Control-M, and DevOps tools is expected.
- Good knowledge of SQL and the demonstrated ability to write efficient and optimized queries.
- Hands-on experience with or knowledge of GCP's data storage and processing services, such as BigQuery, Dataflow, Bigtable, Cloud Spanner, and Cloud SQL, would be an added advantage.
- Hands-on experience with Unix, Git, and Jenkins would be an added advantage.
- Ability to develop and implement solutions on both on-premises and Google Cloud Platform (GCP) environments, conducting migrations, where necessary, to bring tools and other elements into the cloud, along with software upgrades.
- Proficiency in using JIRA and Confluence, and experience working on projects that follow Agile methodologies.

You’ll achieve more when you join HSBC. www.hsbc.com/careers

HSBC is committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.

Issued by – HSBC Software Development India
Posted 22 hours ago
0 years
0 Lacs
Hyderābād
On-site
We are looking for a passionate and detail-oriented Service Assurance Engineer to join our dynamic team. The Service Assurance Engineer will be responsible for ensuring the highest standards of quality for our products. This role involves designing test strategies, identifying bugs, collaborating with development teams, and ensuring that our software meets user expectations and is defect-free.

The ideal candidate will have a strong understanding of software development life cycles and testing methodologies, and excellent problem-solving skills. You will work in a fast-paced environment with a focus on improving the quality and performance of the product.

As a Service Assurance Engineer, you will be responsible for designing and executing test strategies to ensure software quality. You should be proficient in creating detailed documentation, software testing, and coordinating with cross-functional teams. You will be responsible for evaluating products to ensure they meet quality, security, and performance standards before they reach the market, as well as for conducting hands-on testing, documenting the findings, and providing feedback on usability, durability, and functionality.

You are expected to extract the essential and most important aspects of data from multiple data sources (fleet usage data, customer usage patterns, service requests, and customer feedback) and convert them into Test Patterns, Test Case Improvements, and Test Data requirements based on user stories and acceptance criteria. You will also:
- Provide data-driven requirements for Automated Testing, emphasizing feature function, service excellence, monitoring, performance, resiliency, scalability, and reliability, specifying both positive and negative testing techniques.
- Define Quality Metrics to be utilized during development, testing, and deployment to assess service quality.
- Drive and evolve the Development Roadmap for automated test suites for the ERP product family, including new features and growth of automation test coverage, based on the input above.
- Partner with the Service Development team to influence Architectural Direction, Long-Term Investments, and Culture Change to support service monitoring, telemetry, and automated testing inside the service.
- Continually evaluate Existing Test Coverage and Automation Frameworks, identifying requirements for redesign, replacement, reusability, and improvement in efficiency and performance.
- Work with automation testing engineers to prioritize test defects based on business and technical risks.
- Participate in Functional and Technical Feature Design Reviews to drive service instrumentation requirements that enable automated testing.

Additional responsibilities for more senior Service Assurance Engineers:
- Include Executive Input, Technology Touch-Point Failures, Strategic Initiatives, and other escalated channels in the data sources used.
- Work with service leadership to identify areas of potential concern.
- Collaborate on and drive testing dependencies, and communicate testing plans and testing concerns.
- Work with service reliability architects and service owners to identify broad Testing Gaps with respect to operational parameters, e.g., template changes, instrumentation results, outages, feature throughput mandates, etc.
- Own the development roadmap for automated test suites for the ERP product family.
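Prioritizing test defects by combined business and technical risk, as described above, is often reduced to a weighted score over per-defect ratings. The sketch below is only an illustration of that idea: the field names, 1-10 rating scale, and 60/40 weighting are all assumptions, not a prescribed scheme.

```python
def priority(defect, business_weight=0.6, technical_weight=0.4):
    """Combined risk score from 1-10 business-impact and technical-risk ratings.

    The weights are illustrative; teams typically tune them to reflect how
    much customer impact should outweigh engineering risk.
    """
    return (business_weight * defect["business_impact"]
            + technical_weight * defect["technical_risk"])

# Hypothetical defect backlog with pre-assessed ratings.
defects = [
    {"id": "D-101", "business_impact": 9, "technical_risk": 4},
    {"id": "D-102", "business_impact": 3, "technical_risk": 8},
    {"id": "D-103", "business_impact": 7, "technical_risk": 6},
]

ranked = sorted(defects, key=priority, reverse=True)
print([d["id"] for d in ranked])  # highest combined risk first
```

The value of making the scoring explicit is that the triage conversation shifts from "which bug feels worse" to agreeing on the ratings and weights, which can then be audited and refined over time.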
Posted 22 hours ago
5.0 years
0 Lacs
Pune
On-site
Job description

Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.

HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team in the role of Senior Software Engineer. In this role, you will:

Data Pipelines Integration and Management:
- Design and implement scalable data architectures to support the bank's data needs.
- Develop and maintain ETL (Extract, Transform, Load) processes.
- Ensure the data infrastructure is reliable, scalable, and secure.
- Oversee the integration of diverse data sources into a cohesive data platform.
- Ensure data quality, data governance, and compliance with regulatory requirements.
- Develop and enforce data security policies and procedures.
- Monitor and optimize data pipeline performance.
- Troubleshoot and resolve data-related issues promptly.
- Implement monitoring and alerting systems for data processes.

Requirements

To be successful in this role, you should meet the following requirements:
- 5+ years of experience in data engineering or a related field.
- Strong experience with database technologies (SQL, NoSQL), data warehousing solutions, and big data technologies (Hadoop, Spark).
- Proficiency in programming languages such as Python, Java, or Scala.
- Experience with cloud platforms (AWS, Azure, Google Cloud) and their data services.
- Deep understanding of ETL processes and data pipeline orchestration tools (Airflow, Apache NiFi).
- Knowledge of data modeling, data warehousing concepts, and data integration techniques.
- Strong problem-solving skills and the ability to work under pressure.
- Experience in the banking or financial services industry.
- Familiarity with regulatory requirements related to data security and privacy in the banking sector.
- Certifications in cloud platforms (AWS Certified Data Analytics, Google Professional Data Engineer, etc.).
- Experience with machine learning and data science frameworks.

You’ll achieve more when you join HSBC. www.hsbc.com/careers

HSBC is committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.

Issued by – HSBC Software Development India
Posted 22 hours ago
8.0 years
2 - 9 Lacs
Bengaluru
On-site
Job description

Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC, and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.

HSBC is one of the largest banking and financial services organizations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realize their ambitions.

The Senior Data Engineer will be responsible for designing, building, and managing the data infrastructure and data pipeline processes for the bank. This role involves leading a team of data engineers and working closely with data scientists, analysts, and IT professionals to ensure data is accessible, reliable, and secure. The ideal candidate will have a strong background in data engineering, excellent leadership skills, and a thorough understanding of the banking industry's data requirements.

We are currently seeking an experienced professional to join our team in the role of Senior Consultant Specialist.

Key Responsibilities:

Leadership and Team Management:
- Lead, mentor, and develop a team of data engineers.
- Establish best practices for data engineering and ensure team adherence.
- Coordinate with other IT teams, business units, and stakeholders.

Data Pipelines Integration and Management:
- Design and implement scalable data architectures to support the bank's data needs.
- Develop and maintain ETL (Extract, Transform, Load) processes.
- Ensure the data infrastructure is reliable, scalable, and secure.
- Oversee the integration of diverse data sources into a cohesive data platform.
- Ensure data quality, data governance, and compliance with regulatory requirements.
- Develop and enforce data security policies and procedures.
- Monitor and optimize data pipeline performance.
- Troubleshoot and resolve data-related issues promptly.
- Implement monitoring and alerting systems for data processes.

Requirements

To be successful in this role, you should meet the following requirements:
- Bachelor's degree in computer science, engineering, or a related field.
- 8+ years of experience in data engineering or a related field.
- Strong experience with database technologies (SQL, NoSQL), data warehousing solutions, and big data technologies (Hadoop, Spark).
- Proficiency in programming languages such as Python, Java, or Scala.
- Experience with cloud platforms (AWS, Azure, Google Cloud) and their data services.
- Deep understanding of ETL processes and data pipeline orchestration tools (Airflow, Apache NiFi).
- Knowledge of data modeling, data warehousing concepts, and data integration techniques.
- Strong problem-solving skills and the ability to work under pressure.
- Excellent communication and interpersonal skills.
- Experience in the banking or financial services industry.
- Familiarity with regulatory requirements related to data security and privacy in the banking sector.
- Certifications in cloud platforms (AWS Certified Data Analytics, Google Professional Data Engineer, etc.).
- Experience with machine learning and data science frameworks.

You’ll achieve more when you join HSBC. www.hsbc.com/careers

HSBC is committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.

Issued by – HSDI.
Posted 22 hours ago
5.0 years
5 Lacs
Bengaluru
On-site
Job description Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC, and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organizations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realize their ambitions. We are currently seeking an experienced professional to join our team in the role of Software Engineer Key Responsibilities: Expertise in Scala-Spark/Python-Spark development and should be able to Work with Agile application dev team to implement data strategies. Design and implement scalable data architectures to support the bank's data needs. Develop and maintain ETL (Extract, Transform, Load) processes. Ensure the data infrastructure is reliable, scalable, and secure. Oversee the integration of diverse data sources into a cohesive data platform. Ensure data quality, data governance, and compliance with regulatory requirements. Monitor and optimize data pipeline performance. Troubleshoot and resolve data-related issues promptly. Implement monitoring and alerting systems for data processes. Troubleshoot and resolve technical issues optimizing system performance ensuring reliability. Create and maintain technical documentation for new and existing system ensuring that information is accessible to the team. Implementing and monitoring solutions that identify both system bottlenecks and production issues. Requirements Qualifications – External To be successful in this role you should meet the following requirements: 5+ years of experience in IT. Bachelor’s degree in computer science engineering or related field. 
- Experience in data engineering or a related field, with hands-on experience building and maintaining ETL data pipelines.
- Good experience designing and developing Spark applications using Scala or Python.
- Good experience with database technologies (SQL, NoSQL), data warehousing solutions, and big data technologies (Hadoop, Spark).
- Proficiency in programming languages such as Python, Java, or Scala.
- Optimization and performance tuning of Spark applications.
- Experience with Git: creating, merging, and managing repositories.
- Ability to perform unit testing and performance testing.
- Good understanding of ETL processes and data pipeline orchestration tools such as Airflow and Control-M.
- Strong problem-solving skills and the ability to work under pressure.
- Excellent communication and interpersonal skills.

You’ll achieve more when you join HSBC. www.hsbc.com/careers

HSBC is committed to building a culture where all employees are valued and respected and where opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.

Issued by – HSDI.
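The role above centers on Spark ETL pipelines, which all follow the same extract-transform-load shape. As a minimal, framework-free sketch of that pattern (plain Python standard library rather than Spark, with invented column names `account_id` and `amount` purely for illustration):

```python
import csv
import io

def extract(source: str) -> list[dict]:
    """Extract: parse raw CSV text into row dictionaries."""
    return list(csv.DictReader(io.StringIO(source)))

def transform(rows: list[dict]) -> list[dict]:
    """Transform: drop rows missing an account id (a data-quality rule)
    and normalize amounts to two decimal places."""
    out = []
    for row in rows:
        if not row.get("account_id"):
            continue  # reject incomplete records
        out.append({"account_id": row["account_id"],
                    "amount": round(float(row["amount"]), 2)})
    return out

def load(rows: list[dict]) -> dict:
    """Load: here, just aggregate per account; a real pipeline
    would write to a warehouse or data-lake sink instead."""
    totals: dict[str, float] = {}
    for r in rows:
        totals[r["account_id"]] = totals.get(r["account_id"], 0.0) + r["amount"]
    return totals

raw = "account_id,amount\nA1,10.5\n,3.0\nA1,4.5\nB2,7.25\n"
print(load(transform(extract(raw))))  # → {'A1': 15.0, 'B2': 7.25}
```

In a Spark job the same three stages would map onto DataFrame reads, transformations, and writes, but the separation of concerns is the same.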
Posted 22 hours ago
0 years
4 - 8 Lacs
Bengaluru
On-site
About sync.

sync. is a team of artists, engineers, and scientists building foundation models to edit and modify people in video. Founded by the creators of Wav2Lip and backed by legendary investors, including YC, Google, and visionaries Nat Friedman and Daniel Gross, we've raised 6 million dollars in our seed round to evolve how we create and consume media.

Within months of launch, our flagship lipsync API scaled to millions in revenue and powers video translation, dubbing, and dialogue replacement workflows for thousands of editors, developers, and businesses around the world. That's only the beginning: we're building a creative suite to give anyone Photoshop-like control over humans in video – zero-shot understanding and fine-grained editing of expressions, gestures, movement, identity, and more.

Everyone has a story to tell, but not everyone's a storyteller – yet. We're looking for talented and driven individuals from all backgrounds to build inspired tools that amplify human creativity.

Overview

While our focus in research is to push the boundary on what’s possible by unlocking new capabilities, our focus in product is to craft intuitive experiences that delight users and extract maximal utility from the capabilities we have today.

Key Responsibilities
- Architect and build intuitive experiences to create and edit video with AI – from magical UX to scalable APIs.
- Own complete user journeys: ideation, prototyping, shipping, and rapid iteration based on user data.
- Interface seamlessly between model capabilities and intuitive user workflows.
- Design and implement product features that become industry standards.
- Champion performance, reliability and developer experience at scale.

Required Skills and Experience
- Exceptional full-stack engineer who has built technical products users love and businesses can build on top of.
- Deep expertise in the React ecosystem, modern API design, and real-time systems.
- Our current stack is Next.js, tRPC, and NestJS.
- Strong product and design sensibilities - you know what makes an experience feel like magic.
- Track record of shipping and owning 0-to-1 features that drove massive impact.
- Experience with video manipulation, creative tools, or ML interfaces.
- Experience working on fast and talented engineering teams with strong work ethics, and an understanding of how to collaborate and ship exceptional products.

Preferred Skills
- Built and scaled systems handling millions of daily active users.
- Background implementing complex usage-based billing systems.
- Strong opinions on developer tooling and engineering productivity.
- Experience with WebGL, Canvas, or video processing.
- Comfort with ambiguity and rapid iteration.

Outcomes
- Build breakthrough features that define the future of AI video creation.
- Create abstractions and APIs that accelerate the entire team's velocity.
- Drive 10x improvements in key metrics through technical innovation.
- Set new standards for performance and reliability at scale.
- Help us grow from millions to hundreds of millions by building things users can't live without.

Our goal is to keep the team lean, hungry, and shipping fast. These are the qualities we embody and look for:
[1] Raw intelligence: we tackle complex problems and push the boundaries of what's possible.
[2] Boundless curiosity: we're always learning, exploring new technologies, and questioning assumptions.
[3] Exceptional resolve: we persevere through challenges and never lose sight of our goals.
[4] High agency: we take ownership of our work and drive initiatives forward autonomously.
[5] Outlier hustle: we work smart and hard, going above and beyond to achieve extraordinary results.
[6] Obsessively data-driven: we base our decisions on solid data and measurable outcomes.
[7] Radical candor: we communicate openly and honestly, providing direct feedback to help each other grow.
Posted 22 hours ago
5.0 years
8 - 10 Lacs
Bengaluru
On-site
Job description

Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC, and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.

HSBC is one of the largest banking and financial services organizations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realize their ambitions.

We are currently seeking an experienced professional to join our team in the role of Senior Software Engineer.

Key Responsibilities:
- Expertise in Scala-Spark/Python-Spark development; work with the Agile application development team to implement data strategies.
- Design and implement scalable data architectures to support the bank's data needs.
- Develop and maintain ETL (Extract, Transform, Load) processes.
- Ensure the data infrastructure is reliable, scalable, and secure.
- Oversee the integration of diverse data sources into a cohesive data platform.
- Ensure data quality, data governance, and compliance with regulatory requirements.
- Monitor and optimize data pipeline performance.
- Troubleshoot and resolve data-related issues promptly.
- Implement monitoring and alerting systems for data processes.
- Troubleshoot and resolve technical issues, optimizing system performance and ensuring reliability.
- Create and maintain technical documentation for new and existing systems, ensuring that information is accessible to the team.
- Implement and monitor solutions that identify both system bottlenecks and production issues.

Requirements

To be successful in this role you should meet the following requirements:
- Bachelor’s degree in computer science, engineering, or a related field.
- At least 5 years of experience in Hadoop.
- Experience in data engineering or a related field, with hands-on experience building and maintaining ETL data pipelines.
- Good experience designing and developing Spark applications using Scala or Python.
- Good experience with database technologies (SQL, NoSQL), data warehousing solutions, and big data technologies (Hadoop, Spark).
- Proficiency in programming languages such as Python, Java, or Scala.
- Optimization and performance tuning of Spark applications.
- Experience with Git: creating, merging, and managing repositories.
- Ability to perform unit testing and performance testing.
- Good understanding of ETL processes and data pipeline orchestration tools such as Airflow and Control-M.
- Strong problem-solving skills and the ability to work under pressure.
- Excellent communication and interpersonal skills.

You’ll achieve more when you join HSBC. www.hsbc.com/careers

HSBC is committed to building a culture where all employees are valued and respected and where opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.

Issued by – HSDI.
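Performance tuning of Spark applications, which this role calls for, often begins with diagnosing data skew in join or shuffle keys. As a hedged, framework-free sketch of one such diagnostic (this is an illustrative metric computed in plain Python, not a Spark API; the example key values are invented):

```python
from collections import Counter

def skew_ratio(keys) -> float:
    """Ratio of the hottest key's count to the count a perfectly uniform
    distribution would give each distinct key. Values well above 1 suggest
    a skewed join/shuffle key that will overload one partition."""
    counts = Counter(keys)
    top = counts.most_common(1)[0][1]
    uniform = len(keys) / len(counts)
    return top / uniform

# One region dominates: 80 of 100 rows share a single key.
keys = ["uk"] * 80 + ["in"] * 10 + ["us"] * 10
print(round(skew_ratio(keys), 2))  # → 2.4
```

In a real Spark job the same idea is usually applied via a `groupBy(...).count()` on the join key; remedies for a high ratio include salting the hot key or broadcasting the smaller side of the join.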
Posted 22 hours ago
25.0 years
0 - 0 Lacs
Bengaluru
On-site
The Company

PayPal has been revolutionizing commerce globally for more than 25 years. Creating innovative experiences that make moving money, selling, and shopping simple, personalized, and secure, PayPal empowers consumers and businesses in approximately 200 markets to join and thrive in the global economy.

We operate a global, two-sided network at scale that connects hundreds of millions of merchants and consumers. We help merchants and consumers connect, transact, and complete payments, whether they are online or in person. PayPal is more than a connection to third-party payment networks. We provide proprietary payment solutions accepted by merchants that enable the completion of payments on our platform on behalf of our customers.

We offer our customers the flexibility to use their accounts to purchase and receive payments for goods and services, as well as the ability to transfer and withdraw funds. We enable consumers to exchange funds more safely with merchants using a variety of funding sources, which may include a bank account, a PayPal or Venmo account balance, PayPal and Venmo branded credit products, a credit card, a debit card, certain cryptocurrencies, or other stored value products such as gift cards, and eligible credit card rewards. Our PayPal, Venmo, and Xoom products also make it safer and simpler for friends and family to transfer funds to each other.

We offer merchants an end-to-end payments solution that provides authorization and settlement capabilities, as well as instant access to funds and payouts. We also help merchants connect with their customers, process exchanges and returns, and manage risk. We enable consumers to engage in cross-border shopping and merchants to extend their global reach while reducing the complexity and friction involved in enabling cross-border trade.

Our beliefs are the foundation for how we conduct business every day.
We live each day guided by our core values of Inclusion, Innovation, Collaboration, and Wellness. Together, our values ensure that we work together as one global team with our customers at the center of everything we do – and they push us to ensure we take care of ourselves, each other, and our communities.

Job Description Summary:

What you need to know about the role

This job will lead the design, development, and implementation of advanced machine learning models and algorithms to solve complex problems. You will work closely with data scientists, software engineers, and product teams to enhance services through innovative AI/ML solutions. Your role will involve building scalable ML pipelines, ensuring data quality, and deploying models into production environments to drive business insights and improve customer experiences.

Meet our team

We are the Global Analytics and Data Science (GADS) team, working within GFCCP. Our expertise lies in leveraging machine learning and artificial intelligence to automate and optimize risk and dispute management processes in the back office, delivering scalable solutions.

Your way to impact

As a Senior Machine Learning Engineer on the team, you will be customer-centric, strategic and analytical in decision making, and laser-focused on executing at scale. You thrive on the challenge of building and optimizing platforms at scale, are deeply passionate about leveraging cutting-edge technologies, and are dedicated to innovation and market success. Your role will involve leveraging your extensive technical expertise and practical experience in AI/ML to lead substantive discussions with technical and data science teams on solution implementation and integration within complex environments.

Your day-to-day
- Develop and optimize machine learning models for various applications.
- Preprocess and analyze large datasets to extract meaningful insights.
- Deploy ML solutions into production environments using appropriate tools and frameworks.
- Collaborate with cross-functional teams to integrate ML models into products and services.
- Monitor and evaluate the performance of deployed models.

What do you need to bring
- Minimum of 5 years of relevant work experience and a bachelor’s degree or equivalent experience.
- Experience with ML frameworks like TensorFlow, PyTorch, or scikit-learn.
- Familiarity with cloud platforms (AWS, Azure, GCP) and tools for data processing and model deployment.
- Several years of experience in designing, implementing, and deploying machine learning models and scaling ML systems.
- MSc or equivalent experience in a quantitative field (Computer Science, Mathematics, Engineering, Artificial Intelligence, etc.), or a bachelor's degree in engineering, science, statistics or mathematics with a strong technical background in machine learning.
- Hands-on experience with Python or Java, along with relevant technologies such as Spark, Hadoop, BigQuery, and SQL, is required.
- A comprehensive understanding of machine learning algorithms and explainable AI techniques. Additionally, expertise in at least one of the following specialized areas is required: Computer Vision, Graph Mining, Natural Language Processing (NLP), or Generative AI (GenAI).
- Experience with cloud frameworks such as GCP and AWS is preferred.
- Experience developing machine learning models at scale, from inception to business impact.
- Experience designing ML pipelines, including model versioning, model deployment, model testing, and monitoring.
- Experience mentoring and supporting junior data scientists or engineers.
- Experience working in a multi-cultural and multi-location organization is an advantage.
- Team player; responsible, delivery-oriented, detail-oriented, with outstanding communication skills.
Good to Have:
- Experience applying LLMs, prompt design, and fine-tuning methods.
- Experience developing GenAI applications/services for sophisticated business use cases and large amounts of unstructured data.
- Knowledge of the payments industry and the transaction risk domain.
- Publications in prominent journals or conferences in the field of AI, or successful AI/ML-related patent applications.

For the majority of employees, PayPal's balanced hybrid work model offers 3 days in the office for effective in-person collaboration and 2 days at your choice of either the PayPal office or your home workspace, ensuring that you equally have the benefits and conveniences of both locations.

Our Benefits: At PayPal, we’re committed to building an equitable and inclusive global economy. And we can’t do this without our most important asset—you. That’s why we offer benefits to help you thrive in every stage of life. We champion your financial, physical, and mental health by offering valuable benefits and resources to help you care for the whole you. We have great benefits including a flexible work environment, employee share options, health and life insurance and more. To learn more about our benefits please visit https://www.paypalbenefits.com

Who We Are: To learn more about our culture and community visit https://about.pypl.com/who-we-are/default.aspx

Commitment to Diversity and Inclusion

PayPal provides equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, pregnancy, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state, or local law. In addition, PayPal will provide reasonable accommodations for qualified individuals with disabilities.
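The role's emphasis on monitoring and evaluating deployed models usually means watching for input drift between training and live traffic. One common metric is the Population Stability Index (PSI); the sketch below is an illustrative stdlib-only implementation that assumes the feature has already been binned into matching proportion vectors (the 0.25 alarm threshold mentioned in the comment is a widely used rule of thumb, not a PayPal standard):

```python
import math

def psi(expected: list[float], actual: list[float]) -> float:
    """Population Stability Index between two binned distributions.
    Each list holds per-bin proportions summing to 1; larger PSI means
    the live ('actual') distribution has drifted from the baseline."""
    return sum((a - e) * math.log(a / e)
               for e, a in zip(expected, actual) if e > 0 and a > 0)

baseline = [0.25, 0.25, 0.25, 0.25]  # training-time bin proportions
live     = [0.30, 0.25, 0.25, 0.20]  # proportions observed in production
drift = psi(baseline, live)
print(round(drift, 4))  # → 0.0203; values above ~0.25 commonly trigger an alarm
```

In a production pipeline this check would run per feature on a schedule, feeding the alerting systems the posting describes.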
If you are unable to submit an application because of incompatible assistive technology or a disability, please contact us at paypalglobaltalentacquisition@paypal.com.

Belonging at PayPal: Our employees are central to advancing our mission, and we strive to create an environment where everyone can do their best work with a sense of purpose and belonging. Belonging at PayPal means creating a workplace with a sense of acceptance and security where all employees feel included and valued. We are proud to have a diverse workforce reflective of the merchants, consumers, and communities that we serve, and we continue to take tangible actions to cultivate inclusivity and belonging at PayPal.

For general consideration of your skills, please join our Talent Community.

We know the confidence gap and imposter syndrome can get in the way of meeting spectacular candidates. Please don’t hesitate to apply.
Posted 22 hours ago
4.0 years
10 - 10 Lacs
Bengaluru
On-site
Location: Bengaluru, KA, IN
Company: ExxonMobil

About us

At ExxonMobil, our vision is to lead in energy innovations that advance modern living and a net-zero future. As one of the world’s largest publicly traded energy and chemical companies, we are powered by a unique and diverse workforce fueled by pride in what we do and what we stand for. The success of our Upstream, Product Solutions and Low Carbon Solutions businesses is the result of the talent, curiosity and drive of our people. They bring solutions every day to optimize our strategy in energy, chemicals, lubricants and lower-emissions technologies. We invite you to bring your ideas to ExxonMobil to help create sustainable solutions that improve quality of life and meet society’s evolving needs. Learn more about our What and our Why and how we can work together.

ExxonMobil’s affiliates in India

ExxonMobil’s affiliates have offices in India in Bengaluru, Mumbai and the National Capital Region. ExxonMobil’s affiliates in India supporting the Product Solutions business engage in the marketing, sales and distribution of performance as well as specialty products across the chemicals and lubricants businesses. The India planning teams are also embedded with global business units for business planning and analytics. ExxonMobil’s LNG affiliate in India supporting the upstream business provides consultant services for other ExxonMobil upstream affiliates and conducts LNG market-development activities. The Global Business Center - Technology Center provides a range of technical and business support services for ExxonMobil’s operations around the globe. ExxonMobil strives to make a positive contribution to the communities where we operate and its affiliates support a range of education, health and community-building programs in India. Read more about our Corporate Responsibility Framework. To know more about ExxonMobil in India, visit ExxonMobil India and the Energy Factor India.
What role you will play in our team

Design, build, and maintain data systems, architectures, and pipelines to extract insights and drive business decisions. Collaborate with stakeholders to ensure data quality, integrity, and availability.

What you will do
- Design, develop, and maintain robust ETL pipelines using tools like Airflow, Azure Data Factory, Qlik Replicate, and Fivetran.
- Automate data extraction, transformation, and loading processes across cloud platforms (Azure, Snowflake).
- Build and optimize Snowflake data models in collaboration with system architects to support business needs.
- Develop and maintain CI/CD pipelines using GitHub and Azure DevOps (ADO).
- Create and manage data input and review screens in Sigma, including performance dashboards.
- Integrate third-party ETL tools for Cloud-to-Cloud (C2C) and On-Premises-to-Cloud (OP2C) data flows.
- Implement monitoring and alerting systems for pipeline health and data quality.
- Support data cleansing, enrichment, and curation to enable business use cases.
- Troubleshoot and resolve data issues, including missing or incorrect data, long-running queries, and Sigma screen problems.
- Collaborate with cross-functional teams to deliver data solutions for platforms like CEDAR.
- Manage Snowflake security, including roles, shares, and access controls.
- Optimize and tune SQL queries across Snowflake, MSSQL, Postgres, Oracle, and Azure SQL.
- Develop large-scale aggregate queries across multiple schemas and datasets.

About You

Skills and Qualifications

Core Technical Skills
- Languages: Proficient in Python, with experience in C#, C++, F#, or Java.
- Databases: Strong experience with SQL and NoSQL, including Snowflake, Azure SQL, PostgreSQL, MSSQL, Oracle.
- ETL Tools: Expertise in Airflow, Qlik Replicate, Fivetran, Azure Data Factory.
- Cloud Platforms: Deep knowledge of Azure services including Azure Data Explorer (ADX), ADF, Databricks.
- Data Modeling: Hands-on experience with Snowflake modeling, including stored procedures, UDFs, Snowpipe, streams, and shares.
- Monitoring & Optimization: Skilled in query tuning, performance measurement, and pipeline monitoring.
- CI/CD: Experience managing pipelines using GitHub and Azure DevOps.

Additional Tools & Technologies
- Sigma: Experience designing and managing Sigma dashboards and screens (or a strong background in Power BI/Tableau with willingness to learn Sigma).
- Streamlit: Experience developing Streamlit apps using Python.
- dbt: Experience managing Snowflake with dbt scripting.

Preferred Qualifications
- 4+ years of hands-on experience as a Data Engineer.
- Proficiency in Snowflake, including data modeling.
- Experience in change management and working in Agile environments.
- Prior experience in the energy industry is a plus.
- Bachelor’s or Master’s degree in Computer Science, IT, or related engineering disciplines with a minimum GPA of 7.0.

Your benefits

An ExxonMobil career is one designed to last. Our commitment to you runs deep: our employees grow personally and professionally, with benefits built on our core categories of health, security, finance and life. We offer you:
- Competitive compensation
- Medical plans, maternity leave and benefits, life, accidental death and dismemberment benefits
- Retirement benefits
- Global networking & cross-functional opportunities
- Annual vacations & holidays
- Day care assistance program
- Training and development program
- Tuition assistance program
- Workplace flexibility policy
- Relocation program
- Transportation facility

Please note benefits may change from time to time without notice, subject to applicable laws. The benefits programs are based on the Company’s eligibility guidelines.

Stay connected with us

Learn more about ExxonMobil in India: visit ExxonMobil India and Energy Factor India.
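The C2C/OP2C incremental loads this role describes typically rely on a high-water-mark pattern: each run extracts only rows updated since the previous run's watermark, then advances the watermark. A minimal stdlib sketch of that logic (field names like `updated_at` are illustrative; real pipelines would persist the watermark in Airflow/ADF state or a control table):

```python
from datetime import datetime, timezone

def incremental_batch(rows: list[dict], watermark: datetime):
    """Select only rows strictly newer than the last high-water mark and
    return (new rows, advanced watermark). If nothing is new, the
    watermark is returned unchanged."""
    fresh = [r for r in rows if r["updated_at"] > watermark]
    new_wm = max((r["updated_at"] for r in fresh), default=watermark)
    return fresh, new_wm

rows = [
    {"id": 1, "updated_at": datetime(2024, 1, 1, tzinfo=timezone.utc)},
    {"id": 2, "updated_at": datetime(2024, 1, 3, tzinfo=timezone.utc)},
]
batch, wm = incremental_batch(rows, datetime(2024, 1, 2, tzinfo=timezone.utc))
print([r["id"] for r in batch], wm.date())  # → [2] 2024-01-03
```

Tools like Qlik Replicate and Fivetran implement the same idea (via CDC logs or cursor columns) under the hood; the sketch just makes the bookkeeping explicit.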
Follow us on LinkedIn and Instagram. Like us on Facebook. Subscribe to our channel on YouTube.

EEO Statement

ExxonMobil is an Equal Opportunity Employer: All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, age, national origin or disability status.

Business solicitation and recruiting scams

ExxonMobil does not use recruiting or placement agencies that charge candidates an advance fee of any kind (e.g., placement fees, immigration processing fees, etc.). Follow the LINK to understand more about recruitment scams in the name of ExxonMobil.

Nothing herein is intended to override the corporate separateness of local entities. Working relationships discussed herein do not necessarily represent a reporting connection, but may reflect a functional guidance, stewardship, or service relationship. Exxon Mobil Corporation has numerous affiliates, many with names that include ExxonMobil, Exxon, Esso and Mobil. For convenience and simplicity, those terms and terms like corporation, company, our, we and its are sometimes used as abbreviated references to specific affiliates or affiliate groups. Abbreviated references describing global or regional operational organizations and global or regional business lines are also sometimes used for convenience and simplicity. Similarly, ExxonMobil has business relationships with thousands of customers, suppliers, governments, and others. For convenience and simplicity, words like venture, joint venture, partnership, co-venturer, and partner are used to indicate business relationships involving common activities and interests, and those words may not indicate precise legal relationships.

Competencies
(B) Adapts
(B) Applies Learning
(B) Analytical
(B) Collaborates
(B) Communicates Effectively
(B) Innovates

Job Segment: Sustainability, SQL, Database, Oracle, Computer Science, Energy, Technology
Posted 22 hours ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Ingersoll Rand is committed to achieving workforce diversity reflective of our communities. We are an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable laws, regulations and ordinances.

Job Title: Data Analyst - SCM
Location: Bangalore, Coimbatore

About Ingersoll Rand: Ingersoll Rand is a $7.2 billion company whose people and businesses around the world create progress for our customers in the industrial markets. These markets continue to expand as they address growing needs in developed and developing economies alike. Our products, systems, and solutions increase the efficiency and productivity of industrial and commercial operations and improve the security, safety, health, and comfort of people around the world. We offer opportunities for career growth through our diverse businesses, which manufacture many well-recognized brands including Club Car and Ingersoll Rand. In every line of business, Ingersoll Rand enables companies and their customers to inspire progress. For more information, visit www.ingersollrand.com.

Job Summary

The Data Analyst - Procurement & Supply Chain will be responsible for analyzing and interpreting data to support procurement and supply chain activities. This role will involve identifying cost-saving opportunities, optimizing supplier performance, and enhancing supply chain efficiency through data-driven insights.

Responsibilities
- Analyze procurement data to identify trends, cost-saving opportunities, and areas for improvement.
- Monitor and evaluate supplier performance using key performance indicators (KPIs).
- Generate and present reports to support decision-making processes in procurement and supply chain management.
- Collaborate with procurement and supply chain teams to develop and implement data-driven strategies.
- Support global supply chain activities, including vendor-managed inventory planning, logistics optimization, and forecasting requirement plans with plants.
- Conduct spend analysis and provide recommendations for cost reduction and efficiency improvements.
- Utilize data analysis tools and techniques to extract meaningful insights from large datasets.
- Ensure data accuracy and integrity in all analyses and reports, using AI tools where appropriate.

Basic Qualifications
- Bachelor’s degree in Business, Supply Chain Management, Data Science, or a related field.
- 5-7 years of experience in a relevant field.
- Proven experience as a Data Analyst, preferably in procurement and supply chain functions.
- Proficiency in data analysis tools such as Excel, SQL, and data visualization software (e.g., Qlik, Tableau, Power BI).
- Strong analytical and problem-solving skills with the ability to interpret complex data sets.
- Excellent communication and presentation skills.
- Knowledge of procurement principles, supply chain management, and global logistics.
- Ability to work collaboratively in a team environment and manage multiple tasks simultaneously.

Preferred Qualifications
- Experience with ERP systems (e.g., SAP, Oracle) and procurement software (Jaggaer).
- Certification in supply chain management or procurement (e.g., CPSM, CSCP).

Travel & Work Arrangements/Requirements

Hybrid - 40% travel

What We Offer
- We are all owners of the company! Stock options (Employee Ownership Program) that align your interests with the company's success.
- Yearly performance-based bonus, rewarding your hard work and dedication.
- Leave encashment
- Maternity/paternity leave
- Employee health covered under medical, group term life & accident insurance
- Employee Assistance Program
- Employee development with LinkedIn Learning
- Employee recognition via Awardco
- Collaborative, multicultural work environment with a team of dedicated professionals, fostering innovation and teamwork.

Ingersoll Rand Inc. (NYSE:IR), driven by an entrepreneurial spirit and ownership mindset, is dedicated to helping make life better for our employees, customers and communities. Customers lean on us for our technology-driven excellence in mission-critical flow creation and industrial solutions across 40+ respected brands where our products and services excel in the most complex and harsh conditions. Our employees develop customers for life through their daily commitment to expertise, productivity and efficiency. For more information, visit www.IRCO.com.
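The spend analysis this role calls for is often structured as an ABC (Pareto) classification: suppliers covering the first ~80% of cumulative spend are class A, the next ~15% class B, and the long tail class C. A small illustrative sketch (thresholds and supplier names are invented for the example, not Ingersoll Rand data):

```python
def abc_classify(spend: dict[str, float]) -> dict[str, str]:
    """ABC spend classification: walk suppliers from largest to smallest
    spend; label by the cumulative share of total spend they reach."""
    total = sum(spend.values())
    out, cum = {}, 0.0
    for supplier, amount in sorted(spend.items(), key=lambda kv: -kv[1]):
        cum += amount
        share = cum / total
        out[supplier] = "A" if share <= 0.80 else "B" if share <= 0.95 else "C"
    return out

spend = {"Acme": 500_000, "Bolt Co": 250_000, "Chem Ltd": 150_000, "Dust Inc": 100_000}
print(abc_classify(spend))  # → {'Acme': 'A', 'Bolt Co': 'A', 'Chem Ltd': 'B', 'Dust Inc': 'C'}
```

The same computation is commonly done in Excel with a sorted cumulative-sum column or in SQL with a window function; class-A suppliers then get the most negotiation and performance-monitoring attention.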
Posted 22 hours ago
6.0 - 8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Roles & Responsibilities

Web and A/B Testing & Optimization Analyst

Position Summary:

As a Web / A/B Testing & Optimization Analyst for the Adobe.com and Customer Insights team, you will be using many of the tools in the Adobe Marketing Cloud (i.e., Adobe Analytics, Target) as well as working in a new state-of-the-art Big Data (Hadoop) environment. As an extended member of a globally distributed analytics team, you will be responsible for analytics associated with Adobe’s web, mobile and in-product testing program. The successful candidate will be responsible for supporting an ongoing marketing optimization program in US/EMEA/APAC as well as driving insights from behavioral analysis. Specific responsibilities include calculating test results of A/B split and multivariate tests, supporting ideation processes and coming up with hypotheses, and reviewing test charters (i.e., critiquing test hypotheses and ensuring methodological rigor and proper configuration). The candidate will also be required to deep-dive on customer journeys to extract insights on how the website and content are performing.

The ideal candidate is a strategic, analytical thinker, results- and detail-oriented, and possesses the know-how to help optimize business performance. You must be a self-starter, have a demonstrated ability to influence decision makers with data, be comfortable with change, and have a desire to learn new tools and analytics techniques. Success in this role greatly depends on your ability to work in a matrixed environment, building rapport and developing effective working partnerships with internationally based team counterparts.
Job Description: Responsibilities: Review test hypotheses, help develop comprehensive test plans and success metrics, perform quality assurance on test cells, calculate final test results, conduct deep-dive analysis of those results, and craft Test Summaries using both behavioral and voice-of-the-customer analytics to provide actionable insights for key business stakeholders. Use experimental design to optimize website and marketing activities as well as new in-product initiatives. Utilize best-in-class analytics tools, including the Adobe Marketing Cloud (e.g., Target, Adobe Analytics), to analyze test results and provide interpretation, guidance, and recommendations to aid marketing decision-making. Partner with Marketing to identify key visitor segments and draft ‘user stories’ for the ideal customer experience for each segment. Monitor changes and trends in online customer behavior. Effectively respond to requests for ad hoc analyses. Collaborate with other team members to synthesize learnings from other analyses/sources to present holistic analysis and recommendations to stakeholders. Ensure solutions are scalable, repeatable, effective, and meet the expectations of stakeholders. Identify opportunities to teach stakeholders basic analytics to educate and empower users. Skills Proficient in Adobe Analytics, Analysis Workspace, Excel, PPT and SQL. Expert in A/B and Multivariate testing, design of experiments, and the mathematics of statistical hypothesis testing, coupled with the ability to draw meaningful business insights across multiple tests. Experience with web analytics tools such as Adobe Analytics (strongly preferred) or Google Analytics. Good understanding of Microsoft Excel, SQL, visualization, and experience with Big Data tools like Hadoop and Hive. Knowledge of test design and combining disparate data sources is a plus.
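The statistical core of this role — calculating whether an A/B split test result is significant — can be sketched with a standard two-proportion z-test. This is a generic illustration (the visitor and conversion counts are invented), not Adobe Target's internal methodology:

```python
import math

def ab_test_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for an A/B split test.

    Returns the z-statistic and the two-sided p-value for the
    difference in conversion rate between variants A and B.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0: no difference
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))        # two-sided, via the normal tail
    return z, p_value

# Variant B converts at 2.6% vs. 2.0% for A, on 10,000 visitors each.
z, p = ab_test_z(conv_a=200, n_a=10_000, conv_b=260, n_b=10_000)
significant = p < 0.05
```

A real test-charter review would also check sample-size planning and stopping rules before reading the p-value.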
Experience 6-8 Years Skills Primary Skill: Data Science Sub Skill(s): Data Science Additional Skill(s): Python, Data Modeling, Big Data, Data Science, SQL
Posted 23 hours ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
As a Data Engineer at Meta, you will shape the future of people-facing and business-facing products we build across our entire family of applications (Facebook, Instagram, Messenger, WhatsApp, Reality Labs, Threads). Your technical skills and analytical mindset will be utilized designing and building some of the world's most extensive data sets, helping to craft experiences for billions of people and hundreds of millions of businesses worldwide. In this role, you will collaborate with software engineering, data science, and product management teams to design/build scalable data solutions across Meta to optimize growth, strategy, and user experience for our 3 billion plus users, as well as our internal employee community. You will be at the forefront of identifying and solving some of the most interesting data challenges at a scale few companies can match. By joining Meta, you will become part of a world-class data engineering community dedicated to skill development and career growth in data engineering and beyond. Data Engineering: You will guide teams by building optimal data artifacts (including datasets and visualizations) to address key questions. You will refine our systems, design logging solutions, and create scalable data models. Ensuring data security and quality, and with a focus on efficiency, you will suggest architecture and development approaches and data management standards to address complex analytical problems. Product leadership: You will use data to shape product development, identify new opportunities, and tackle upcoming challenges. You'll ensure our products add value for users and businesses, by prioritizing projects, and driving innovative solutions to respond to challenges or opportunities. Communication and influence: You won't simply present data, but tell data-driven stories. You will convince and influence your partners using clear insights and recommendations.
You will build credibility through structure and clarity, and be a trusted strategic partner. Data Engineer, Product Analytics Responsibilities: Collaborate with engineers, product managers, and data scientists to understand data needs, representing key data insights in a meaningful way Design, build, and launch collections of sophisticated data models and visualizations that support multiple use cases across different products or domains Define and manage Service Level Agreements for all data sets in allocated areas of ownership Solve challenging data integration problems, utilizing optimal Extract, Transform, Load (ETL) patterns, frameworks, query techniques, sourcing from structured and unstructured data sources Improve logging Assist in owning existing processes running in production, optimizing complex code through advanced algorithmic concepts Optimize pipelines, dashboards, frameworks, and systems to facilitate easier development of data artifacts Influence product and cross-functional teams to identify data opportunities to drive impact Minimum Qualifications: Bachelor's degree in Computer Science, Computer Engineering, relevant technical field, or equivalent 2+ years of experience where the primary responsibility involves working with data. This could include roles such as data analyst, data scientist, data engineer, or similar positions 2+ years of experience with SQL, ETL, data modeling, and at least one programming language (e.g., Python, C++, C#, Scala, or others) Preferred Qualifications: Master's or Ph.D. degree in a STEM field About Meta: Meta builds technologies that help people connect, find communities, and grow businesses. When Facebook launched in 2004, it changed the way people connect. Apps like Messenger, Instagram and WhatsApp further empowered billions around the world. Now, Meta is moving beyond 2D screens toward immersive experiences like augmented and virtual reality to help build the next evolution in social technology.
People who choose to build their careers by building with us at Meta help shape a future that will take us beyond what digital connection makes possible today—beyond the constraints of screens, the limits of distance, and even the rules of physics. Individual compensation is determined by skills, qualifications, experience, and location. Compensation details listed in this posting reflect the base hourly rate, monthly rate, or annual salary only, and do not include bonus, equity or sales incentives, if applicable. In addition to base compensation, Meta offers benefits. Learn more about benefits at Meta.
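One responsibility listed above is defining and managing Service Level Agreements for data sets. A minimal, hypothetical freshness check along those lines — the dataset names and the 24-hour window are illustrative assumptions, not Meta specifics:

```python
from datetime import datetime, timedelta, timezone

def sla_breaches(last_loads, max_staleness=timedelta(hours=24)):
    """Return the datasets whose last successful load is older than the SLA window.

    last_loads maps dataset name -> timestamp of the last successful load.
    """
    now = datetime.now(timezone.utc)
    return sorted(name for name, loaded_at in last_loads.items()
                  if now - loaded_at > max_staleness)

now = datetime.now(timezone.utc)
status = {
    "daily_active_users": now - timedelta(hours=2),   # fresh: within the SLA window
    "ads_attribution": now - timedelta(hours=30),     # stale: breaches the 24h SLA
}
breaches = sla_breaches(status)
```

In practice a check like this would run on a scheduler and page the owning team, rather than return a list.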
Posted 23 hours ago
5.0 years
0 Lacs
Maharashtra, India
On-site
Job Description About DP World Trade is the lifeblood of the global economy, creating opportunities and improving the quality of life for people around the world. DP World exists to make the world’s trade flow better, changing what’s possible for the customers and communities we serve globally. With a dedicated, diverse and professional team of more than 111,000 employees from 159 nationalities, spanning 77 countries on six continents, DP World is pushing trade further and faster towards a seamless supply chain that’s fit for the future. We’re rapidly transforming and integrating our businesses -- Ports and Terminals, Marine Services, Logistics and Technology – and uniting our global infrastructure with local expertise to create stronger, more efficient end-to-end supply chain solutions that can change the way the world trades. What's more, we're reshaping the future by investing in innovation. From intelligent delivery systems to automated warehouse stacking, we’re at the cutting edge of disruptive technology, pushing the sector towards better ways to trade, minimizing disruptions from the factory floor to the customer’s door. About DP World Global Service Centre DP World’s Global Service Centre (GSCs) are key enablers of growth delivering standardization, process excellence and expertise, and automation in areas of Finance, Freight Forwarding, Marine Services, Engineering and Human Resources, helping accelerate DP World’s growth and business transformation. As we experience exponential growth, there has never been a more exciting time to join us. Discover your next role here and change what's possible for everyone! As an equal employer that recognizes and values diversity and an inclusive culture, we empower and up-skill our people with opportunities to perform at their best. Join us and be part of an amazing team that is transforming the future of world trade. Role Purpose Sr. 
Structural Designers are responsible for managing, controlling, teaching, and checking a group of designers in line with project and department requirements in a productive, high-quality, and efficient manner, as directed by supervision/management. Designation: Sr. Structural Designer – Engineering - Global Service Centre Base Location: Navi Mumbai Reporting to: GSC Structural Lead. Key Role Responsibilities Review and extract materials in the form of MTO on specific projects. Prepare and verify detailed production packages including 3D models. Ensure that structural designers are carrying out their assigned tasks as per project schedule requirements. Enhance or improve related activities or assignments in order to reduce cost and man-hour consumption. Identify changes in client design drawings and report them to the direct supervisor. Follow all relevant departmental policies, processes, standard operating procedures, and instructions so that work is carried out in a controlled and consistent manner. Follow and comply with information security rules and data security measures in order to protect the company's information / intellectual property. Ability to generate complete discipline drawings under an Engineer's supervision. Demonstrated ability to handle multiple tasks within an assigned project and to meet required deadlines. Skills & Competencies Capability of making, reading and following project schedules. Coordinate and teach structural designers to ensure that department-related standards are strictly followed. Prepare and verify detailed production packages including 3D models. Ensure that structural designers are carrying out their assigned tasks as per project schedule requirements. Enhance or improve related activities or assignments in order to reduce cost and man-hour consumption. Good technical competency and ability to generate reports on the status of the assigned team/tasks. Education & Qualifications Draughtsman Diploma from a recognized institute.
Bachelor/Engineering degree in Mechanical Engineering or equivalent will be preferable. 5 years and above of relevant experience using software such as Tekla and Cadmatic in a production engineering office for a shipyard similar to DDWD. 8-12 years and above in a similar position is a must. DP World is committed to the principles of Equal Employment Opportunity (EEO). We strongly believe that employing a diverse workforce is central to our success and we make recruiting decisions based on your experience and skills. We welcome applications from all members of society irrespective of age, gender, disability, race, religion or belief.
Posted 23 hours ago
5.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Roles & Responsibilities Job Summary We are seeking a highly skilled and self-motivated AI Engineer to join us as we establish our AI Center of Excellence (CoE). As an early team member, you will play a critical role in shaping the foundation, strategy, and implementation of AI and ML solutions across the organization. This role offers a unique opportunity to work at the forefront of AI innovation, contribute to impactful use cases, and collaborate with cross-functional teams to build intelligent agentic systems. You will work with both Microsoft technologies (Copilot Studio, AI Foundry, Azure OpenAI) and open-source frameworks to design, deploy, and manage enterprise-ready AI solutions. Key Responsibilities Design, develop, and deploy AI agents using Microsoft Copilot Studio and AI Foundry. Build and fine-tune machine learning models for NLP, prediction, classification, and recommendation tasks. Conduct exploratory data analysis (EDA) to extract insights and support model development. Implement and manage LLM workflows, including prompt engineering, fine-tuning, evaluation, deployment, and monitoring. Utilize open-source frameworks such as LangChain, Hugging Face, MLflow, and RAG pipelines to build scalable, modular AI solutions. Integrate AI solutions with business workflows using APIs and cloud-native deployment methods. Use Azure AI services, including AI Foundry and Azure OpenAI, for secure and scalable model operations. Contribute to the creation of an AI governance framework, including Responsible AI principles, model explainability, fairness, and accountability. Support the creation of standards, reusable assets, and documentation as the CoE grows. Collaborate with engineering, data, and business teams to define problems, build solutions, and demonstrate value. 
Stay up to date with emerging AI capabilities such as Model Context Protocol (MCP), Agent-to-Agent (A2A) frameworks, and Agent Communication Protocols (ACP), and proactively evaluate opportunities to integrate them into enterprise solutions. Required Qualifications Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field. 5+ years of experience in AI, machine learning, or data science with production-level deployments. Strong foundation in statistics, ML algorithms, and data analysis techniques. Hands-on experience building with LLMs, GenAI platforms, and AI copilots. Proficient in Python, with experience using libraries such as Pandas, Scikit-learn, PyTorch, TensorFlow, and Transformers. Experience with Microsoft Copilot Studio, AI Foundry, and Azure OpenAI. Working knowledge of open-source GenAI tools (LangChain, Haystack, MLflow). Understanding of cloud deployment, API integration, and version control (Git). Experience 6-8 Years Skills Primary Skill: AI/ML Development Sub Skill(s): AI/ML Development Additional Skill(s): TensorFlow, NLP, Pytorch, GenAI Fundamentals, Large Language Models (LLM)
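The posting calls for building RAG pipelines with open-source frameworks. Stripped of any particular framework, the retrieve-then-prompt pattern looks roughly like the sketch below, which substitutes a toy term-frequency retriever for a real vector store — the documents and query are invented for illustration:

```python
import math
from collections import Counter

def tf_vector(text):
    """Bag-of-words term frequencies, standing in for a learned embedding."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

DOCS = [
    "invoice approval workflow for procurement requests",
    "password reset steps for the HR portal",
    "quarterly revenue forecast methodology",
]

def retrieve(query, k=1):
    """Return the k documents most similar to the query."""
    q = tf_vector(query)
    return sorted(DOCS, key=lambda d: cosine(q, tf_vector(d)), reverse=True)[:k]

# Retrieve grounding context, then build the augmented prompt.
context = retrieve("how do I reset my password")[0]
prompt = f"Answer using only this context:\n{context}\n\nQuestion: how do I reset my password"
```

In a production pipeline the retriever would be an embedding index, and the prompt would be sent to an LLM endpoint with evaluation and monitoring around it, as the responsibilities above describe.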
Posted 23 hours ago
4.0 - 7.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Dear Candidate! Greetings from TCS !!! Role: Performance Test Engineer Location: Bangalore/Chennai/Kolkata/Pune/Hyderabad Experience Range: 4 to 7 Years Job Description: Good experience using the performance test tool LoadRunner and understanding of APM tools like AppDynamics/Dynatrace/New Relic, etc. Good hands-on experience in the Web-HTTP, Java Vuser, and Web Services protocols. Should have the ability to work independently in the requirement analysis, design, execution & result analysis phases. Develop customized code in Java & C for optimizing and enhancing VuGen scripts. Analyze test results and coordinate with development teams for issue triaging & bug fixes. Good understanding of different OS internals, file systems, disk/storage, networking protocols and newer technologies like cloud infrastructure. Monitor/extract production performance statistics and apply the same model in the test environments with higher load to uncover performance issues. TCS has been a great pioneer in feeding the fire of Young Techies like you. We are a global leader in the technology arena and there's nothing that can stop us from growing together.
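Result analysis in performance testing usually centers on percentile response times rather than averages, since a handful of slow requests can hide behind a healthy mean. A small sketch of the nearest-rank percentile calculation, independent of any particular tool's output format (the sample timings are invented):

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile: smallest value with at least pct% of samples at or below it."""
    ordered = sorted(samples)
    k = math.ceil(pct / 100 * len(ordered))
    return ordered[max(k - 1, 0)]

# Response times in milliseconds from a load-test run; note the one outlier.
response_ms = [120, 95, 310, 180, 150, 2050, 140, 160, 175, 130]
p50 = percentile(response_ms, 50)
p90 = percentile(response_ms, 90)
```

Here the median looks fine while the 90th percentile exposes the slow tail — the kind of gap that drives the issue triaging described above.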
Posted 1 day ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Agoda Agoda is an online travel booking platform for accommodations, flights, and more. We build and deploy cutting-edge technology that connects travelers with a global network of 4.7M hotels and holiday properties worldwide, plus flights, activities, and more . Based in Asia and part of Booking Holdings, our 7,100+ employees representing 95+ nationalities in 27 markets foster a work environment rich in diversity, creativity, and collaboration. We innovate through a culture of experimentation and ownership, enhancing the ability for our customers to experience the world. Our Purpose – Bridging the World Through Travel We believe travel allows people to enjoy, learn and experience more of the amazing world we live in. It brings individuals and cultures closer together, fostering empathy, understanding and happiness. We are a skillful, driven and diverse team from across the globe, united by a passion to make an impact. Harnessing our innovative technologies and strong partnerships, we aim to make travel easy and rewarding for everyone. About The Team The People Team is a purveyor of opportunity, searching the globe for the most talented individuals and offering them an open, collaborative workplace. By prioritizing skill and potential, we have cultivated a powerful assembly of professionals through our drive for equal opportunity and diversity. We make the move to Agoda a breeze with assisted onboarding programs, and we continue to support and enrich our thousands of Agoda employees through individual growth with outstanding learning programs and various means of assistance. Our development of incredible benefits has ensured everyone can stay strong, healthy, and happy during their time at Agoda. Leading ambitious changes and making a positive impact in the lives of our employees, the People Team is a crucial and rewarding part of the Agoda family. 
In This Role, You’ll Get To Develop, test, and maintain robust ETL processes to extract, transform, and load data from various sources into data warehouses Analyze and optimize ETL workflows to improve performance and efficiency Implement best practices for data integration and transformation to ensure high-quality data delivery Develop and implement data validation and cleansing processes to ensure data accuracy and consistency Monitor and maintain ETL systems to ensure smooth operation and minimal downtime Participate in design and code reviews to ensure high-quality deliverables Create and maintain comprehensive documentation for ETL processes and data flows Understand how HR data is built and used in a global company What You’ll Need To Succeed 4+ years of experience as an ETL Developer or Data Engineer Proficiency in Python Knowledge of data warehousing concepts and data modeling Strong attention to detail to ensure data accuracy and integrity throughout the ETL process Experience with Big Data technologies / tools SQL experience Analytical problem-solving capabilities and experience It’s Great If You Have Experience with business intelligence tools like Tableau for data visualization and reporting Experience working with open-source products Experience with project management methodologies such as Agile or Scrum to manage ETL projects effectively A global perspective and experience working with diverse cultures Equal Opportunity Employer At Agoda, we pride ourselves on being a company represented by people of all different backgrounds and orientations. We prioritize attracting diverse talent and cultivating an inclusive environment that encourages collaboration and innovation.
Employment at Agoda is based solely on a person’s merit and qualifications. We are committed to providing equal employment opportunity regardless of sex, age, race, color, national origin, religion, marital status, pregnancy, sexual orientation, gender identity, disability, citizenship, veteran or military status, and other legally protected characteristics. We will keep your application on file so that we can consider you for future vacancies and you can always ask to have your details removed from the file. For more details please read our privacy policy . Disclaimer We do not accept any terms or conditions, nor do we recognize any agency’s representation of a candidate, from unsolicited third-party or agency submissions. If we receive unsolicited or speculative CVs, we reserve the right to contact and hire the candidate directly without any obligation to pay a recruitment fee.
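The ETL responsibilities in this role — extract from a source, validate and cleanse, load into a warehouse — follow a pattern that can be sketched in a few lines. This is a generic illustration using SQLite as a stand-in target; the table and records are invented, and Agoda's actual stack is not specified here:

```python
import sqlite3

def extract(source_rows):
    """Extract: pull raw records from the source system (an in-memory list here)."""
    return list(source_rows)

def transform(records):
    """Transform: apply validation and cleansing rules before loading."""
    clean = []
    for rec in records:
        if rec.get("id") is None:          # validation: reject rows missing the key
            continue
        clean.append({"id": int(rec["id"]),
                      "name": rec["name"].strip().title()})  # cleansing: normalize names
    return clean

def load(records, conn):
    """Load: write the cleansed records into the warehouse table."""
    conn.execute("CREATE TABLE IF NOT EXISTS employees (id INTEGER PRIMARY KEY, name TEXT)")
    conn.executemany("INSERT OR REPLACE INTO employees VALUES (:id, :name)", records)
    conn.commit()

source = [{"id": 1, "name": "  ada lovelace "},
          {"id": None, "name": "orphan row"}]   # dropped by the validation rule
conn = sqlite3.connect(":memory:")
load(transform(extract(source)), conn)
```

Keeping extract, transform, and load as separate functions is what makes the workflow testable and reviewable, which the design- and code-review responsibility above depends on.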
Posted 1 day ago
0 years
0 Lacs
India
Remote
Data Science Intern (Paid) Company: WebBoost Solutions by UM Location: Remote Duration: 3 months Opportunity: Full-time based on performance, with a Certificate of Internship About WebBoost Solutions by UM WebBoost Solutions by UM provides aspiring professionals with hands-on experience in data science, offering real-world projects to develop and refine their analytical and machine learning skills for a successful career. Responsibilities ✅ Collect, preprocess, and analyze large datasets. ✅ Develop predictive models and machine learning algorithms. ✅ Perform exploratory data analysis (EDA) to extract meaningful insights. ✅ Create data visualizations and dashboards for effective communication of findings. ✅ Collaborate with cross-functional teams to deliver data-driven solutions. Requirements 🎓 Enrolled in or graduate of a program in Data Science, Computer Science, Statistics, or a related field. 🐍 Proficiency in Python for data analysis and modeling. 🧠 Knowledge of machine learning libraries such as scikit-learn, TensorFlow, or PyTorch (preferred). 📊 Familiarity with data visualization tools (Tableau, Power BI, or Matplotlib). 🧐 Strong analytical and problem-solving skills. 🗣 Excellent communication and teamwork abilities. Stipend & Benefits 💰 Stipend: ₹7,500 - ₹15,000 (Performance-Based). ✔ Hands-on experience in data science projects. ✔ Certificate of Internship & Letter of Recommendation. ✔ Opportunity to build a strong portfolio of data science models and applications. ✔ Potential for full-time employment based on performance. How to Apply 📩 Submit your resume and a cover letter with the subject line "Data Science Intern Application." 📅 Deadline: 30th June 2025 Equal Opportunity WebBoost Solutions by UM is committed to fostering an inclusive and diverse environment and encourages applications from all backgrounds.
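The exploratory data analysis (EDA) mentioned in the responsibilities usually begins with simple summary statistics before any modeling. A stdlib-only sketch of that first pass — the sample values are invented:

```python
import statistics

def describe(values):
    """First-pass EDA summary: count, center, spread, and range of a numeric column."""
    return {
        "count": len(values),
        "mean": round(statistics.fmean(values), 2),
        "stdev": round(statistics.stdev(values), 2),
        "min": min(values),
        "max": max(values),
    }

# e.g. session durations in minutes from a toy dataset
session_minutes = [12, 7, 45, 3, 22, 18, 9, 31]
summary = describe(session_minutes)
```

A wide gap between mean and median, or a large stdev relative to the mean, is the usual cue to plot the distribution next.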
Posted 1 day ago
4.0 - 6.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Who We Are At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities. The Role As a Data Engineer at Kyndryl, you'll be at the forefront of the data revolution, crafting and shaping data platforms that power our organization's success. This role is not just about code and databases; it's about transforming raw data into actionable insights that drive strategic decisions and innovation. You will design, build, and manage the infrastructure and systems that enable organizations to collect, process, store, and analyze large volumes of data. You will be the architect and builder of the data pipelines, ensuring that data is accessible, reliable, and optimized for various uses, including analytics, machine learning, and business intelligence. In this role, you'll be engineering the backbone of our data infrastructure, ensuring the availability of pristine, refined data sets. With a well-defined methodology, critical thinking, and a rich blend of domain expertise, consulting finesse, and software engineering prowess, you'll be the mastermind of data transformation. Your journey begins by understanding project objectives and requirements from a business perspective, converting this knowledge into a data puzzle. You'll be delving into the depths of information to uncover quality issues and initial insights, setting the stage for data excellence. But it doesn't stop there. You'll be the architect of data pipelines, using your expertise to cleanse, normalize, and transform raw data into the final dataset—a true data alchemist. Armed with a keen eye for detail, you'll scrutinize data solutions, ensuring they align with business and technical requirements.
Your work isn't just a means to an end; it's the foundation upon which data-driven decisions are made – and your lifecycle management expertise will ensure our data remains fresh and impactful. Key Responsibilities: Designing and Building Data Pipelines: Creating robust, scalable, and efficient ETL (Extract, Transform, Load) or ELT (Extract, Load, Transform) pipelines to move data from various sources into data warehouses, data lakes, or other storage systems. Ingest data that is structured, unstructured, streaming, or real-time. Data Architecture: Designing and implementing data models, schemas, and database structures that support business requirements and data analysis needs. Data Storage and Management: Selecting and managing appropriate data storage solutions (e.g., relational databases, NoSQL databases, data lakes like HDFS or S3, data warehouses like Snowflake, BigQuery, Redshift). Data Integration: Connecting disparate data sources, ensuring data consistency and quality across different systems. Performance Optimization: Optimizing data processing systems for speed, efficiency, and scalability, often dealing with large datasets (Big Data). Data Governance and Security: Implementing measures for data quality, security, privacy, and compliance with regulations. Collaboration: Working closely with Data Scientists, Data Analysts, Business Intelligence Developers, and other stakeholders to understand their data needs and provide them with clean, reliable data. Automation: Automating data processes and workflows to reduce manual effort and improve reliability.
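The ingestion responsibility above — handling structured, unstructured, and streaming data — often reduces to the same micro-batch skeleton: read events, validate and cast them, and group them into batches for the downstream store. A toy sketch under those assumptions (the event schema is invented):

```python
def stream_source():
    """Simulated streaming source yielding raw events (hypothetical schema)."""
    events = [{"user": "u1", "amount": "10.5"},
              {"user": "u2", "amount": "n/a"},    # malformed record
              {"user": "u3", "amount": "7.25"}]
    yield from events

def ingest(source, batch_size=2):
    """Micro-batch ingestion: validate each event, then group into fixed-size batches."""
    batch, batches = [], []
    for event in source:
        try:
            event["amount"] = float(event["amount"])   # type-cast as a validation step
        except ValueError:
            continue    # in practice, route bad records to a dead-letter queue
        batch.append(event)
        if len(batch) == batch_size:
            batches.append(batch)
            batch = []
    if batch:           # flush the final partial batch
        batches.append(batch)
    return batches

batches = ingest(stream_source())
```

In a real pipeline the source would be Kafka or a change-data-capture feed and each batch would land in the warehouse or lake; the validate-then-batch shape stays the same.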
So, if you're a technical enthusiast with a passion for data, we invite you to join us in the exhilarating world of data engineering at Kyndryl. Let's transform data into a compelling story of innovation and growth. Your Future at Kyndryl Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won’t find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here. Who You Are You’re good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset; keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you’re open and borderless – naturally inclusive in how you work with others. Required Technical and Professional Expertise 4 - 6 years of experience as a Data Engineer. Programming Languages: Strong proficiency in languages like Python, Java, Scala Database Management: Expertise in SQL and experience with various database systems (e.g., PostgreSQL, MySQL, SQL Server, Oracle). Big Data Technologies: Experience with frameworks and tools like Apache Spark, NiFi, Kafka, Flink, or similar distributed processing technologies. Cloud Platforms: Proficiency with cloud data services from providers like Microsoft Azure (Azure Data Lake, Azure Synapse Analytics), Microsoft Fabric, Cloudera, etc. Data Warehousing: Understanding of data warehousing concepts, dimensional modelling, and schema design. ETL/ELT Tools: Experience with data integration tools and platforms. Version Control: Familiarity with Git and collaborative development workflows.
Preferred Technical And Professional Experience Degree in a scientific discipline, such as Computer Science, Software Engineering, or Information Technology Being You Diversity is a whole lot more than what we look like or where we come from, it’s how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we’re not doing it single-handedly: Our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That’s the Kyndryl Way. What You Can Expect With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you, we want you to succeed so that together, we will all succeed. Get Referred! If you know someone that works at Kyndryl, when asked ‘How Did You Hear About Us’ during the application process, select ‘Employee Referral’ and enter your contact's Kyndryl email address.
Posted 1 day ago
8.0 - 10.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Who We Are At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities. The Role As a Data Scientist at Kyndryl you are the bridge between business problems and innovative solutions, using a powerful blend of well-defined methodologies, statistics, mathematics, domain expertise, consulting, and software engineering. You'll wear many hats, and each day will present a new puzzle to solve, a new challenge to conquer. You will dive deep into the heart of our business, understanding its objectives and requirements – viewing them through the lens of business acumen, and converting this knowledge into a data problem. You’ll collect and explore data, seeking underlying patterns and initial insights that will guide the creation of hypotheses. As an analytical professional, you will use statistical methods, machine learning, and programming skills to extract insights and knowledge from data. Your primary goal is to solve complex business problems, make predictions, and drive strategic decision-making by uncovering patterns and trends within large datasets. In this role, you will embark on a transformative process of business understanding, data understanding, and data preparation. Utilizing statistical and mathematical modeling techniques, you'll have the opportunity to create models that defy convention – models that hold the key to solving intricate business challenges. With an acute eye for accuracy and generalization, you'll evaluate these models to ensure they not only solve business problems but do so optimally.
Additionally, you're not just building and validating models – you’re deploying them as code to applications and processes, ensuring that the models you've selected sustain their business value throughout their lifecycle. Your expertise doesn't stop at data; you'll become intimately familiar with our business processes and have the ability to navigate their complexities, identifying issues and crafting solutions that drive meaningful change in these domains. You will develop and apply standards and policies that protect our organization's most valuable asset – ensuring that data is secure, private, accurate, available, and, most importantly, usable. Your mastery extends to data management, migration, strategy, change management, and policy and regulation.

Key Responsibilities:
Problem Framing: Collaborating with stakeholders to understand business problems and translate them into data-driven questions.
Data Collection and Cleaning: Sourcing, collecting, and cleaning large, often messy, datasets from various sources, preparing them for analysis.
Exploratory Data Analysis (EDA): Performing initial investigations on data to discover patterns, spot anomalies, test hypotheses, and check assumptions with the help of summary statistics and graphical representations.
Model Development: Building, training, and validating machine learning models (e.g., regression, classification, clustering, deep learning) to predict outcomes or identify relationships.
Statistical Analysis: Applying statistical tests and methodologies to draw robust conclusions from data and quantify uncertainty.
Feature Engineering: Creating new variables or transforming existing ones to improve model performance and provide deeper insights.
Model Deployment: Working with engineering teams to deploy models into production environments, making them operational for real-time predictions or insights.
Communication and Storytelling: Presenting complex findings and recommendations clearly and concisely to both technical and non-technical audiences, often through visualizations and narratives.
Monitoring and Maintenance: Tracking model performance in production and updating models as data patterns evolve or new data becomes available.

If you're ready to embrace the power of data to transform our business and embark on an epic data adventure, then join us at Kyndryl. Together, let's redefine what's possible and unleash your potential.

Your Future at Kyndryl: Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won’t find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here.

Who You Are: You’re good at what you do and possess the required experience to prove it. However, equally as important, you have a growth mindset: keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you’re open and borderless – naturally inclusive in how you work with others.

Required Technical and Professional Expertise:
8-10 years of experience as a Data Scientist.
Programming Languages: Strong proficiency in Python and/or R, with libraries for data manipulation (e.g., Pandas, dplyr), scientific computing (e.g., NumPy), and machine learning (e.g., Scikit-learn, TensorFlow, PyTorch).
Statistics and Probability: A solid understanding of statistical inference, hypothesis testing, probability distributions, and experimental design.
Machine Learning: Deep knowledge of various machine learning algorithms, their underlying principles, and when to apply them.
Database Querying: Proficiency in SQL for extracting and manipulating data from relational databases.
Data Visualization: Ability to create compelling and informative visualizations using tools like Matplotlib, Seaborn, Plotly, or Tableau.
Big Data Concepts: Familiarity with concepts and tools for handling large datasets, though often relying on Data Engineers for infrastructure.
Domain Knowledge: Understanding of the specific industry or business domain to contextualize data and insights.

Preferred Technical and Professional Experience: Degree in a scientific discipline, such as Computer Science, Software Engineering, or Information Technology
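The EDA step described in this listing (summary statistics, spotting anomalies) can be sketched with nothing but the Python standard library. This is an illustrative example, not part of the role's actual tooling; the dataset and the 2-sigma outlier threshold are invented for demonstration.

```python
# Minimal EDA sketch: summary statistics plus a z-score check for anomalies.
# The data and threshold are illustrative assumptions, not from the listing.
from statistics import mean, stdev

durations_minutes = [12, 15, 11, 14, 13, 16, 12, 95, 14, 13]  # one suspicious value

mu = mean(durations_minutes)
sigma = stdev(durations_minutes)  # sample standard deviation

# Flag values more than two sample standard deviations from the mean
outliers = [x for x in durations_minutes if abs(x - mu) / sigma > 2]

print(f"mean={mu:.1f}, stdev={sigma:.1f}, outliers={outliers}")
```

A single extreme value (95) inflates both the mean and the standard deviation here, which is exactly the kind of pattern an initial investigation is meant to surface before any modeling begins.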
Posted 1 day ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About The Role: The Core Analytics & Science Team (CAS) is Uber's primary science organisation, covering both our main lines of business and the underlying platform technologies on which those businesses are built. We are a key part of Uber's cross-functional product development teams, helping to drive every stage of product development through data-analytic, statistical, and algorithmic expertise. CAS owns the experience and algorithms powering Uber's global Mobility and Delivery products. We optimise and personalise the rider experience, target incentives, and introduce customisations for routing and matching for products and use cases that go beyond the core Uber capabilities.

What the Candidate Will Do:
Refine ambiguous questions, generate new hypotheses, and design ML-based solutions that benefit the product through a deep understanding of the data, our customers, and our business.
Deliver end-to-end solutions rather than algorithms, working closely with the engineers on the team to productionise, scale, and deploy models world-wide.
Use statistical techniques to measure success, and develop north-star metrics and KPIs to provide a more rigorous data-driven approach in close partnership with Product and other subject areas such as engineering, operations, and marketing.
Design experiments and interpret the results to draw detailed and impactful conclusions.
Collaborate with data scientists and engineers to build and improve on the availability, integrity, accuracy, and reliability of data logging and data pipelines.
Develop data-driven business insights and work with cross-functional partners to find opportunities and recommend prioritisation of product, growth, and optimisation initiatives.
Present findings to senior leadership to drive business decisions.

Basic Qualifications:
Undergraduate and/or graduate degree in Math, Economics, Statistics, Engineering, Computer Science, or another quantitative field.
4+ years of experience as a Data Scientist, Machine Learning Engineer, or in other data-science-focused functions.
Knowledge of the underlying mathematical foundations of machine learning, statistics, optimization, economics, and analytics.
Hands-on experience building and deploying ML models.
Ability to use a language like Python or R to work efficiently at scale with large data sets.
Significant experience in setting up and evaluating complex experiments.
Experience with exploratory data analysis, statistical analysis and testing, and model development.
Knowledge of modern machine learning techniques applicable to marketplaces and platforms.
Proficiency in one or more of the following technologies: SQL, Spark, Hadoop.

Preferred Qualifications:
Advanced SQL expertise.
Proven track record of wrangling large datasets, extracting insights from data, and summarising learnings/takeaways.
Proven aptitude for data storytelling and root-cause analysis using data.
Advanced understanding of statistics, causal inference, and machine learning.
Experience designing and analysing large-scale online experiments.
Ability to deliver on tight timelines and prioritise multiple tasks while maintaining quality and detail.
Ability to work in a self-guided manner.
Ability to mentor, coach, and develop junior team members.
Superb communication and organisation skills.
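The experiment design and evaluation skills this listing asks for often come down to comparing conversion rates between two variants. Below is a minimal, stdlib-only sketch of a two-proportion z-test; the conversion counts are invented for illustration, and this is one common technique, not Uber's actual methodology.

```python
# Hedged sketch: two-proportion z-test for an A/B experiment.
# Counts below are hypothetical, chosen only to illustrate the computation.
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return (z statistic, two-sided p-value) for two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under H0: p_a == p_b
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Standard normal CDF via erf; two-sided tail probability
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Variant B converts 6.5% vs. control's 5.0% on 2,400 users each
z, p = two_proportion_z(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
print(f"z = {z:.2f}, p = {p:.4f}")
```

In practice one would also pre-register the hypothesis and sample size; at larger scale, scipy or an internal experimentation platform would replace the hand-rolled CDF.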
Posted 1 day ago
0 years
0 Lacs
India
Remote
Data Science Intern (Paid)
Company: Unified Mentor
Location: Remote
Duration: 3 months
Application Deadline: 30th June 2025
Opportunity: Full-time role based on performance + Internship Certificate

About Unified Mentor: Unified Mentor provides aspiring professionals with hands-on experience in data science through industry-relevant projects, helping them build successful careers.

Responsibilities:
Collect, preprocess, and analyze large datasets
Develop predictive models and machine learning algorithms
Perform exploratory data analysis (EDA) to extract insights
Create data visualizations and dashboards for effective communication
Collaborate with cross-functional teams to deliver data-driven solutions

Requirements:
Enrolled in or a graduate of Data Science, Computer Science, Statistics, or a related field
Proficiency in Python or R for data analysis and modeling
Knowledge of machine learning libraries such as scikit-learn, TensorFlow, or PyTorch (preferred)
Familiarity with data visualization tools like Tableau, Power BI, or Matplotlib
Strong analytical and problem-solving skills
Excellent communication and teamwork abilities

Stipend & Benefits:
Stipend: ₹7,500 - ₹15,000 (performance-based)
Hands-on experience in data science projects
Certificate of Internship & Letter of Recommendation
Opportunity to build a strong portfolio of data science models and applications
Potential for full-time employment based on performance

How to Apply: Submit your resume and a cover letter with the subject line "Data Science Intern Application."

Equal Opportunity: Unified Mentor welcomes applicants from all backgrounds.
Posted 1 day ago
The job market for data extraction roles in India is growing rapidly, as companies across industries increasingly rely on extracted data to make informed business decisions. Data extraction professionals play a crucial role in collecting and analyzing data to provide valuable insights to organizations. If you are considering a career in data extraction, this article offers a look at the job market in India.
The average salary range for data extraction professionals in India varies with experience. Entry-level professionals can expect to earn around INR 3-5 lakhs per annum, while experienced professionals with 5+ years of experience can earn upwards of INR 10 lakhs per annum.
In this field, a typical career path may include roles such as Data Analyst, Data Engineer, and Data Scientist. As professionals gain experience and expertise, they may progress to roles like Senior Data Scientist, Data Architect, and Chief Data Officer.
In addition to expertise in data extraction, professionals in this field are often expected to have skills in data analysis, database management, programming languages (such as SQL, Python, or R), and data visualization tools.
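The SQL-plus-Python pairing mentioned above can be illustrated with a small, self-contained example using Python's built-in sqlite3 module. The table, columns, and figures are hypothetical, invented purely for demonstration.

```python
# Illustrative sketch: extracting and aggregating data with SQL from Python.
# All names and numbers below are made up for the example.
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
conn.execute("CREATE TABLE jobs (title TEXT, city TEXT, salary_lakhs REAL)")
conn.executemany(
    "INSERT INTO jobs VALUES (?, ?, ?)",
    [("Data Analyst", "Bengaluru", 4.5),
     ("Data Engineer", "Mumbai", 9.0),
     ("Data Scientist", "Bengaluru", 12.0)],
)

# Average advertised salary per city, highest first
rows = conn.execute(
    "SELECT city, AVG(salary_lakhs) FROM jobs GROUP BY city ORDER BY 2 DESC"
).fetchall()
for city, avg_salary in rows:
    print(f"{city}: {avg_salary:.1f} lakhs")
```

The same query pattern scales from an interview whiteboard to a production warehouse; only the connection and the data volume change.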
As you prepare for a career in data extraction roles, remember to showcase not only your technical skills but also your problem-solving abilities and communication skills during interviews. With the right preparation and confidence, you can excel in India's data extraction job market. Good luck!