12.0 years
0 Lacs
Madurai, Tamil Nadu, India
On-site
Job Title: GCP Data Architect
Location: Madurai
Experience: 12+ Years
Notice Period: Immediate

About TechMango
TechMango is a rapidly growing IT Services and SaaS Product company that helps global businesses with digital transformation, modern data platforms, product engineering, and cloud-first initiatives. We are seeking a GCP Data Architect to lead data modernization efforts for our prestigious client, Livingston, in a highly strategic project.

Role Summary
As a GCP Data Architect, you will be responsible for designing and implementing scalable, high-performance data solutions on Google Cloud Platform. You will work closely with stakeholders to define data architecture, implement data pipelines, modernize legacy data systems, and guide data strategy aligned with enterprise goals.

Key Responsibilities:
- Lead end-to-end design and implementation of scalable data architecture on Google Cloud Platform (GCP)
- Define data strategy, standards, and best practices for cloud data engineering and analytics
- Develop data ingestion pipelines using Dataflow, Pub/Sub, Apache Beam, Cloud Composer (Airflow), and BigQuery (see the sketch after this posting)
- Migrate on-prem or legacy systems to GCP (e.g., from Hadoop, Teradata, or Oracle to BigQuery)
- Architect data lakes, warehouses, and real-time data platforms
- Ensure data governance, security, lineage, and compliance (using tools like Data Catalog, IAM, DLP)
- Guide a team of data engineers and collaborate with business stakeholders, data scientists, and product managers
- Create documentation, high-level design (HLD) and low-level design (LLD), and oversee development standards
- Provide technical leadership in architectural decisions and future-proofing the data ecosystem

Required Skills & Qualifications:
- 10+ years of experience in data architecture, data engineering, or enterprise data platforms
- Minimum 3–5 years of hands-on experience with GCP data services
- Proficient in: BigQuery, Cloud Storage, Dataflow, Pub/Sub, Composer, Cloud SQL/Spanner; Python / Java / SQL; data modeling (OLTP, OLAP, Star/Snowflake schema)
- Experience with real-time data processing, streaming architectures, and batch ETL pipelines
- Good understanding of IAM, networking, security models, and cost optimization on GCP
- Prior experience leading cloud data transformation projects
- Excellent communication and stakeholder management skills

Preferred Qualifications:
- GCP Professional Data Engineer / Architect Certification
- Experience with Terraform, CI/CD, GitOps, and Looker / Data Studio / Tableau for analytics
- Exposure to AI/ML use cases and MLOps on GCP
- Experience working in agile environments and client-facing roles

What We Offer:
- Opportunity to work on large-scale data modernization projects with global clients
- A fast-growing company with a strong tech and people culture
- Competitive salary, benefits, and flexibility
- Collaborative environment that values innovation and leadership
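As an illustration of the ingestion stack named above, here is a minimal sketch using the Apache Beam Python SDK to read JSON events from Pub/Sub and append them to BigQuery. The project, topic, table, and schema names are hypothetical placeholders; a production Dataflow job would add windowing, error handling, and a dead-letter output.

```python
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Hypothetical placeholders; substitute real project, topic, and dataset names.
TOPIC = "projects/example-project/topics/orders"
TABLE = "example-project:analytics.orders"

options = PipelineOptions(streaming=True)  # add --runner=DataflowRunner for Dataflow

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadFromPubSub" >> beam.io.ReadFromPubSub(topic=TOPIC)
        | "ParseJson" >> beam.Map(json.loads)  # Pub/Sub messages arrive as bytes
        | "KeepValid" >> beam.Filter(lambda row: "order_id" in row)
        | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
            TABLE,
            schema="order_id:STRING,amount:FLOAT,event_ts:TIMESTAMP",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )
```

The same code runs unchanged on Dataflow; only the runner and project options differ, which is what makes Beam a common choice for the migration work the role describes.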
Posted 6 days ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description

Job Name: Senior Data Engineer - Azure
Years of Experience: 5

We are looking for a skilled and experienced Senior Azure Developer to join our team! As part of the team, you will be involved in the implementation of ongoing and new initiatives for our company. If you love learning, thinking strategically, innovating, and helping others, this job is for you!

Primary Skills: ADF, Databricks
Secondary Skills: DBT, Python, Databricks, Airflow, Fivetran, Glue, Snowflake

Role Description:
This data engineering role involves creating and managing the technological infrastructure of a data platform. You will be in charge of, or involved in, architecting, building, and managing data flows and pipelines; constructing data storage (NoSQL and SQL); working with big data tools (Hadoop, Kafka); and using integration tools to connect sources and other databases.

Role Responsibility:
- Translate functional specifications and change requests into technical specifications
- Translate business requirement documents, functional specifications, and technical specifications into working code
- Develop efficient code with unit tests and code documentation
- Ensure accuracy and integrity of data and applications through analysis, coding, documenting, testing, and problem solving
- Set up the development environment and configure the development tools
- Communicate with all project stakeholders on project status
- Manage, monitor, and ensure the security and privacy of data to satisfy business needs
- Contribute to the automation of modules, wherever required
- Be proficient in written, verbal, and presentation communication (English)
- Coordinate with the UAT team

Role Requirement:
- Proficient in basic and advanced SQL programming concepts (procedures, analytical functions, etc.)
- Good knowledge and understanding of data warehouse concepts (dimensional modeling, change data capture, slowly changing dimensions, etc.)
- Knowledgeable in Shell / PowerShell scripting
- Knowledgeable in relational databases, non-relational databases, data streams, and file stores
- Knowledgeable in performance tuning and optimization
- Experience in data profiling and data validation
- Experience in requirements gathering and documentation processes, and in performing unit testing
- Understanding and implementing QA and various testing processes in the project
- Knowledge of any BI tool is an added advantage
- Sound aptitude, outstanding logical reasoning, and analytical skills
- Willingness to learn and take initiative
- Ability to adapt to a fast-paced Agile environment

Additional Requirement:
- Demonstrated expertise as a Data Engineer, specializing in Azure cloud services
- Highly skilled in Azure Data Factory, Azure Data Lake, Azure Databricks, and Azure Synapse Analytics
- Create and execute efficient, scalable, and dependable data pipelines utilizing Azure Data Factory
- Utilize Azure Databricks for data transformation and processing (a slowly-changing-dimension example follows this posting)
- Effectively oversee and enhance data storage solutions, emphasizing Azure Data Lake and other Azure storage services
- Construct and uphold workflows for data orchestration and scheduling using Azure Data Factory or equivalent tools
- Proficient in programming languages like Python and SQL, and conversant with pertinent scripting languages
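Because the requirements pair Databricks with warehouse concepts such as slowly changing dimensions, here is a minimal sketch of an SCD Type 2 load written as Spark SQL over Delta tables (the Databricks default format). The dim_customer and stg_customer tables and the single tracked column email are hypothetical; a real job would compare every tracked attribute and handle late-arriving data.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("scd2-sketch").getOrCreate()

# Step 1: expire the current dimension row when a tracked attribute changed.
spark.sql("""
    MERGE INTO dim_customer AS t
    USING stg_customer AS s
      ON t.customer_id = s.customer_id AND t.is_current = true
    WHEN MATCHED AND t.email <> s.email THEN
      UPDATE SET t.is_current = false, t.end_date = current_date()
""")

# Step 2: insert a fresh current row for new keys and for keys expired above.
spark.sql("""
    INSERT INTO dim_customer
    SELECT s.customer_id, s.email,
           current_date() AS start_date,
           CAST(NULL AS DATE) AS end_date,
           true AS is_current
    FROM stg_customer s
    LEFT JOIN dim_customer t
      ON t.customer_id = s.customer_id AND t.is_current = true
    WHERE t.customer_id IS NULL
""")
```

The two-step pattern works because step 1 leaves changed keys with no current row, so step 2's anti-join inserts new versions for both changed and brand-new keys.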
Posted 6 days ago
4.0 years
0 Lacs
Greater Kolkata Area
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities:
· Analyse current business practices, processes, and procedures, and identify future business opportunities for leveraging Microsoft Azure Data & Analytics Services
· Provide technical leadership and thought leadership as a senior member of the Analytics Practice in areas such as data access & ingestion, data processing, data integration, data modeling, database design & implementation, data visualization, and advanced analytics
· Engage and collaborate with customers to understand business requirements/use cases and translate them into detailed technical specifications
· Develop best practices including reusable code, libraries, patterns, and consumable frameworks for cloud-based data warehousing and ETL
· Maintain best practice standards for the development of cloud-based data warehouse solutions, including naming standards
· Design and implement highly performant data pipelines from multiple sources using Apache Spark and/or Azure Databricks (see the ingestion sketch after this posting)
· Integrate the end-to-end data pipeline to take data from source systems to target data repositories, ensuring the quality and consistency of data is always maintained
· Work with other members of the project team to support delivery of additional project components (API interfaces)
· Evaluate the performance and applicability of multiple tools against customer requirements
· Work within an Agile delivery / DevOps methodology to deliver proof of concept and production implementations in iterative sprints
· Integrate Databricks with other technologies (ingestion tools, visualization tools)
Requirements:
· Proven experience working as a data engineer
· Highly proficient in the Spark framework (Python and/or Scala)
· Extensive knowledge of data warehousing concepts, strategies, and methodologies
· Direct experience building data pipelines using Azure Data Factory and Apache Spark (preferably in Databricks)
· Hands-on experience designing and delivering solutions using Azure, including Azure Storage, Azure SQL Data Warehouse, Azure Data Lake, Azure Cosmos DB, and Azure Stream Analytics
· Experience in designing and hands-on development of cloud-based analytics solutions
· Expert-level understanding of Azure Data Factory, Azure Synapse, Azure SQL, Azure Data Lake, and Azure App Service is required
· Design and build of data pipelines using API ingestion and streaming ingestion methods
· Knowledge of DevOps processes (including CI/CD) and Infrastructure as Code is essential
· Thorough understanding of Azure Cloud Infrastructure offerings
· Strong experience in common data warehouse modeling principles, including Kimball
· Working knowledge of Python is desirable
· Experience developing security models
· Databricks & Azure Big Data Architecture Certification would be a plus

Mandatory skill sets: ADE, ADB, ADF
Preferred skill sets: ADE, ADB, ADF
Years of experience required: 4-8 Years
Education qualification: BE, B.Tech, MCA, M.Tech

Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required: Master of Engineering, Bachelor of Engineering
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: Microsoft Azure
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline {+ 27 more}
Desired Languages (If blank, desired languages not specified)
Travel Requirements:
Available for Work Visa Sponsorship?
Government Clearance Required?
Job Posting End Date:
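To make the Databricks side concrete, here is a minimal sketch of the kind of batch ingestion step the responsibilities describe: read raw JSON from ADLS Gen2, apply light cleansing, and append to a Delta table. The storage account, container, and column names are hypothetical, and authentication setup (service principal or managed identity) is omitted.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("adls-ingest-sketch").getOrCreate()

# Hypothetical lake paths; credentials configuration not shown.
RAW = "abfss://raw@examplelake.dfs.core.windows.net/orders/"
CURATED = "abfss://curated@examplelake.dfs.core.windows.net/orders/"

raw = spark.read.format("json").load(RAW)

cleaned = (
    raw.dropDuplicates(["order_id"])                 # basic de-duplication
       .filter(F.col("order_total") >= 0)            # drop obviously bad rows
       .withColumn("ingest_date", F.current_date())  # partition column
)

(cleaned.write.format("delta")
    .mode("append")
    .partitionBy("ingest_date")
    .save(CURATED))
```

In practice such a notebook or job would be triggered and parameterized by an Azure Data Factory pipeline, matching the ADF-plus-Databricks pairing this role calls for.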
Posted 6 days ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
This is Adyen
Adyen provides payments, data, and financial products in a single solution for customers like Meta, Uber, H&M, and Microsoft - making us the financial technology platform of choice. At Adyen, everything we do is engineered for ambition. For our teams, we create an environment with opportunities for our people to succeed, backed by the culture and support to ensure they are enabled to truly own their careers. We are motivated individuals who tackle unique technical challenges at scale and solve them as a team. Together, we deliver innovative and ethical solutions that help businesses achieve their ambitions faster.

Data Engineer
We are looking for a Data Engineer to join the Payment Engine Data team in Bengaluru, our newest Adyen office. The main goal of the Payment Engine Data (PED) team is to provide insightful data and solutions for processing payments using all of Adyen's payment options. These consist of various data pipelines between various systems, dashboards offering insights into payment processing, internal and external reporting, additional data products, and infrastructure. The ideal candidate is able to understand the business context and relate it to the underlying data requirements. You should also excel at building top-notch data pipelines on our big data platform. At Adyen, your work as a Data Engineer will be vital in forming our data infrastructure and guaranteeing the seamless flow of data across various systems.

What You’ll Do
- Develop high-quality data pipelines: design, develop, deploy, and operate ETL/ELT pipelines in PySpark. Your work will directly contribute to the creation of reports, tools, analytics, and datasets for both internal and external use.
- Collaborative solution development: partner with various teams, engineers, and data analysts to understand data requirements and transform these insights into effective data pipelines.
- Orchestrate data flow: utilise orchestration tools to manage data pipelines efficiently; experience in Airflow is a significant advantage (see the DAG sketch after this posting).
- Champion data best practices: advocate for performance, testing, code quality, data validation, data governance, and discoverability. Ensure that the data provided is accurate, performant, and reliable.
- Performance optimisation: identify and resolve performance bottlenecks in data pipelines and systems. Optimise query performance and resource utilisation to meet SLAs and performance requirements, using technologies such as caching, indexing, partitioning, and other Spark optimizations.
- Knowledge sharing and training: scale your knowledge throughout the organisation, enhancing overall data literacy.

Who You Are
- Experienced in big data: at least 5 years of experience working as a Data Engineer or in a similar role.
- Data & engineering practices: you possess an expert-level understanding of both software and data engineering practices.
- Technical superstar: highly proficient in tools and languages such as Python, PySpark, Airflow, Hadoop, Spark, Kafka, SQL, Git, and S3. Looker is a plus.
- Clear communicator: skilled at articulating complex data-related concepts and outcomes to a diverse range of stakeholders.
- Self-starter: capable of independently recognizing opportunities, devising solutions, and leading, prioritizing, and owning projects.
- Innovator: you have an experimental mindset with a ‘launch fast and iterate’ mentality.
- Data culture champion: experienced in fostering a data-centric culture within large, technical organizations and setting standards for excellence and continuous improvement.

Data Positions at Adyen
We know companies handle different definitions for their data-related positions; this depends, for instance, on the size of a company. We have categorized and defined all our positions. Have a look at this blog post to find out!

Our Diversity, Equity and Inclusion Commitments
Our unique approach is a product of our diverse perspectives. This diversity of backgrounds and cultures is essential in helping us maintain our momentum. Our business and technical challenges are unique, and we need as many different voices as possible to join us in solving them - voices like yours. No matter who you are or where you’re from, we welcome you to be your true self at Adyen. Studies show that women and members of underrepresented communities apply for jobs only if they meet 100% of the qualifications. Does this sound like you? If so, Adyen encourages you to reconsider and apply. We look forward to your application!

What’s Next?
Ensuring a smooth and enjoyable candidate experience is critical for us. We aim to get back to you regarding your application within 5 business days. Our interview process tends to take about 4 weeks to complete, but may fluctuate depending on the role. Learn more about our hiring process here. Don’t be afraid to let us know if you need more flexibility. This role is based out of our Bengaluru office. We are an office-first company and value in-person collaboration; we do not offer remote-only roles.
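Since Airflow experience is called out above, here is a minimal sketch of a daily DAG that orchestrates two PySpark jobs via spark-submit, using the Airflow 2.x API. The DAG id, job scripts, and cluster settings are all hypothetical placeholders.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="payments_daily_etl",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",            # Airflow 2.4+; use schedule_interval on older versions
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=10)},
) as dag:
    extract = BashOperator(
        task_id="extract_payments",
        bash_command="spark-submit --master yarn /jobs/extract_payments.py {{ ds }}",
    )
    validate = BashOperator(
        task_id="validate_counts",
        bash_command="spark-submit --master yarn /jobs/validate_counts.py {{ ds }}",
    )

    extract >> validate  # run validation only after the extract succeeds
```

Passing the templated `{{ ds }}` execution date into each job is what makes backfills and reruns idempotent, one of the data best practices the posting emphasizes.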
Posted 6 days ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Overview
We are seeking a Data Scientist with a strong foundation in machine learning and a passion for the travel industry. You will work with cross-functional teams to analyze customer behavior, forecast travel demand, optimize pricing models, and deploy AI-driven solutions to improve user experience and drive business growth.

Key Responsibilities
- Engage in all stages of the project lifecycle, including data collection, labeling, and preprocessing, to ensure high-quality datasets for model training.
- Utilize advanced machine learning frameworks and pipelines for efficient model development, training execution, and deployment.
- Implement MLflow for tracking experiments, managing datasets, and facilitating model versioning to streamline collaboration (see the sketch after this posting).
- Oversee model deployment on cloud platforms, ensuring scalable and robust performance in real-world travel applications.
- Analyze large volumes of structured and unstructured travel data to identify trends, patterns, and actionable insights.
- Develop, test, and deploy predictive models and machine learning algorithms for fare prediction, demand forecasting, and customer segmentation.
- Create dashboards and reports to communicate insights effectively to stakeholders across the business.
- Collaborate with Engineering, Product, Marketing, and Finance teams to support strategic data initiatives.
- Build and maintain data pipelines for data ingestion, transformation, and modeling.
- Conduct statistical analysis, A/B testing, and hypothesis testing to guide product decisions.
- Automate processes and contribute to scalable, production-ready data science tools.

Technical Skills
- Machine Learning Frameworks: PyTorch, TensorFlow, JAX, Keras, Keras-Core, scikit-learn, distributed model training
- Programming & Development: Python, PySpark, Julia, MATLAB, Git, GitLab, Docker, MLOps, CI/CD pipelines
- Cloud & Deployment: AWS SageMaker, MLflow, production scaling
- Data Science & Analytics: statistical analysis, predictive modeling, feature engineering, data preprocessing, Pandas, NumPy, PySpark
- Computer Vision: CNN, RNN, OpenCV, Kornia, object detection, image processing, video analytics
- Visualization Tools: Looker, Tableau, Power BI, Matplotlib, Seaborn
- Databases & Querying: SQL, Snowflake, Databricks
- Big Data & MLOps: Spark, Hadoop, Kubernetes, model monitoring

Nice to Have
- Experience with deep learning, LLMs, NLP (Transformers), or recommendation systems in travel use cases.
- Knowledge of GDS APIs (Amadeus, Sabre), flight search optimization, and pricing models.
- Strong system design (HLD/LLD) and architecture experience for production-scale ML workflows.

Skills: data preprocessing, Docker, feature engineering, SQL, Python, predictive modeling, statistical analysis, Keras, data science, Spark, data scientist, AWS SageMaker, machine learning, MLflow, TensorFlow, PyTorch
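As a concrete illustration of the MLflow experiment-tracking responsibility, here is a minimal sketch that trains a toy fare-prediction model on synthetic data and logs parameters, a metric, and the model artifact. The experiment name and features are hypothetical stand-ins.

```python
import mlflow
import mlflow.sklearn
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Synthetic stand-in for fare-prediction features (route, lead time, etc.).
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 5))
y = X @ rng.normal(size=5) + rng.normal(scale=0.1, size=1000)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("fare-prediction")  # hypothetical experiment name
with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 12}
    model = RandomForestRegressor(**params, random_state=42).fit(X_train, y_train)
    mae = mean_absolute_error(y_test, model.predict(X_test))
    mlflow.log_params(params)
    mlflow.log_metric("mae", mae)
    mlflow.sklearn.log_model(model, "model")  # versioned artifact for later registry use
```

Each run's parameters, metric, and serialized model land in the tracking server, which is what makes side-by-side comparison and model versioning across the team possible.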
Posted 6 days ago
7.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
ROLE: Big Data Developer
JOB LOCATION: Pune, Chennai, Bangalore
EXPERIENCE REQUIREMENT: 7 to 15 years
Required Technical Skill: Hadoop + Spark with Scala/Java

Must-Have
- 7+ years of active development experience in Spark with Java/Scala, Hadoop, and Hive, with hands-on coding and debugging skills
- Core fundamentals experience with Apache Hadoop components
- Understanding of best practices for Big Data (in Hadoop) and data processing
- Deployment experience with open-source Hadoop distributions (7+ years in Spark with Java / Spark with Scala is mandatory)

Good-to-Have
- PySpark/Python

Responsibility of / Expectations from the Role
- Proficiency in troubleshooting, root-cause analysis, application design, and implementing large components for enterprise projects
- Unit testing (a PySpark test sketch follows this posting)
- Feature development using Spark and Java/Scala
- Working in an Agile team
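As an illustration of the unit-testing expectation, here is a minimal pytest sketch for a Spark transformation. It is written in PySpark (listed above as good-to-have) for consistency with the other examples on this page; the same local-SparkSession pattern applies in Scala. The transformation and column names are hypothetical.

```python
import pytest
from pyspark.sql import SparkSession, functions as F


def add_order_total(df):
    # Transformation under test: total = quantity * unit_price.
    return df.withColumn("total", F.col("quantity") * F.col("unit_price"))


@pytest.fixture(scope="session")
def spark():
    # A small local session keeps the test suite fast and cluster-free.
    return SparkSession.builder.master("local[2]").appName("unit-tests").getOrCreate()


def test_add_order_total(spark):
    df = spark.createDataFrame([(2, 10.0), (3, 5.0)], ["quantity", "unit_price"])
    totals = sorted(row["total"] for row in add_order_total(df).collect())
    assert totals == [15.0, 20.0]
```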
Posted 6 days ago
12.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
At eBay, we're more than a global ecommerce leader — we’re changing the way the world shops and sells. Our platform empowers millions of buyers and sellers in more than 190 markets around the world. We’re committed to pushing boundaries and leaving our mark as we reinvent the future of ecommerce for enthusiasts. Our customers are our compass, authenticity thrives, bold ideas are welcome, and everyone can bring their unique selves to work — every day. We're in this together, sustaining the future of our customers, our company, and our planet. Join a team of passionate thinkers, innovators, and dreamers — and help us connect people and build communities to create economic opportunity for all.

T26 - Manager, Software Development 2 (R0068418)

The Manager, Software Development 2 role at eBay is part of the Risk Engineering team, focused on developing innovative solutions that enhance eBay's risk management capabilities. The position requires leading engineering initiatives to improve risk and compliance mitigation across eBay services, ensuring alignment with business objectives and a customer-centric approach. You will report within a structure designed to foster collaboration and mentorship, holding a pivotal role in sustaining eBay's growth and trust.

What You Will Accomplish
- Lead a diversely skilled team comprising engineers, ML developers, and product owners.
- Drive engineering initiatives that enhance risk management and transaction security, ensuring a customer-centric approach and excellent developer experience.
- Collaborate with cross-functional teams to integrate robust technical solutions aligned with business objectives and focused on customer needs.
- Mentor and develop team members, fostering a culture of continuous learning and growth with an emphasis on hiring and retaining top talent.
- Execute eBay's risk engineering strategies to achieve measurable performance improvements and optimize developer experience.
- Innovate and influence the adoption of market-leading technology capabilities, rapidly evaluating and scaling new technologies where appropriate.
- Lead cross-team collaborations that support eBay's overall business metrics.

What You Will Bring
- At least 12 years of experience in software development, with the latest 4 years as an Engineering Manager.
- Hands-on experience with technologies like Java, JEE, Spark, Hadoop, Apache Flink, RDBMS, and NoSQL.
- Extensive experience in software development, especially within risk management platforms or related fields.
- Strong technical expertise in solution design patterns, data systems, and machine learning frameworks.
- Proven leadership skills with the ability to mentor and guide engineering teams.
- Excellent communication skills to translate and distill complex technical concepts for various audiences.
- Willingness to travel as needed for this role to collaborate with global teams.
- Education: Bachelor's or Master's in Computer Science & Engineering, or equivalent experience.

Please see the Talent Privacy Notice for information regarding how eBay handles your personal data collected when you use the eBay Careers website or apply for a job with eBay. eBay is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, sex, sexual orientation, gender identity, veteran status, and disability, or other legally protected status. If you have a need that requires accommodation, please contact us at talent@ebay.com.
We will make every effort to respond to your request for accommodation as soon as possible. View our accessibility statement to learn more about eBay's commitment to ensuring digital accessibility for people with disabilities.
Posted 6 days ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
At eBay, we're more than a global ecommerce leader — we’re changing the way the world shops and sells. Our platform empowers millions of buyers and sellers in more than 190 markets around the world. We’re committed to pushing boundaries and leaving our mark as we reinvent the future of ecommerce for enthusiasts. Our customers are our compass, authenticity thrives, bold ideas are welcome, and everyone can bring their unique selves to work — every day. We're in this together, sustaining the future of our customers, our company, and our planet. Join a team of passionate thinkers, innovators, and dreamers — and help us connect people and build communities to create economic opportunity for all.

As a Full Stack Software Engineer in the Risk Engineering team, you will be a member of the core team that builds outstanding risk products.

Primary Responsibilities
- Build solutions using your strong background in distributed systems and large-scale database systems.
- Build an excellent user experience for customers.
- Research, analyze, design, develop, and test solutions that are appropriate for the business and technology strategies.
- Participate in design discussions, code reviews, and project-related team meetings.
- Work with other specialists, Architecture, Product Management, and Operations teams to develop innovative solutions that meet business needs with respect to functionality, performance, scalability, reliability, realistic implementation schedules, and alignment to development principles and product goals.
- Develop technical and domain expertise and apply it to solving product challenges.

Qualifications
- Bachelor's degree or equivalent experience in Computer Science or a related field, with 5+ years of proven experience as a software engineer.
- Hands-on experience in Java/J2EE, XML, web technologies, web services, design patterns, and OOAD.
- Solid foundation in computer science with strong proficiency in data structures, algorithms, and software design.
- Proficient in implementing OOAD, architectural and design patterns, diverse platforms, frameworks, technologies, and software engineering methodologies.
- Experience in Oracle / NoSQL DBs, REST, Event Source, and WebSocket.
- Experience with data solutions like Hadoop, MapReduce, Hive, Pig, Kafka, Storm, Flink, etc. is a plus.
- Experience with JavaScript, AngularJS, ReactJS, HTML5, and CSS3 is nice to have.
- Proficient in agile development methodologies.
- Demonstrated ability to understand the business and contribute to technology direction that yields measurable business improvements.
- Ability to think outside the box in solving real-world problems.
- Ability to adapt to changing business priorities and to thrive under pressure.
- Excellent decision-making, communication, and collaboration skills.
- Risk domain expertise is a major plus.

Please see the Talent Privacy Notice for information regarding how eBay handles your personal data collected when you use the eBay Careers website or apply for a job with eBay. eBay is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, sex, sexual orientation, gender identity, veteran status, and disability, or other legally protected status. If you have a need that requires accommodation, please contact us at talent@ebay.com. We will make every effort to respond to your request for accommodation as soon as possible.
View our accessibility statement to learn more about eBay's commitment to ensuring digital accessibility for people with disabilities.
Posted 6 days ago
0.0 - 3.0 years
25 - 35 Lacs
Madurai, Tamil Nadu
On-site
Dear Candidate,

Greetings of the day! I am Kantha, and I'm reaching out to you regarding an exciting opportunity with TechMango. You can connect with me on LinkedIn (https://www.linkedin.com/in/kantha-m-ashwin-186ba3244/) or by email: kanthasanmugam.m@techmango.net

Techmango Technology Services is a full-scale software development services company founded in 2014 with a strong focus on emerging technologies. Its primary objective is delivering strategic solutions that serve the goals of its business partners. We are a leading full-scale software and mobile app development company, driven by the mantra "Clients' Vision is our Mission", and we hold ourselves to it: to be the technologically advanced and most loved organization, providing prime-quality and cost-efficient services within a long-term client relationship strategy. We are operational in the USA (Chicago, Atlanta), Dubai (UAE), and India (Bangalore, Chennai, Madurai, Trichy). Website: https://www.techmango.net/

Job Title: GCP Data Architect
Location: Madurai
Experience: 12+ Years
Notice Period: Immediate

About TechMango
TechMango is a rapidly growing IT Services and SaaS Product company that helps global businesses with digital transformation, modern data platforms, product engineering, and cloud-first initiatives. We are seeking a GCP Data Architect to lead data modernization efforts for our prestigious client, Livingston, in a highly strategic project.

Role Summary
As a GCP Data Architect, you will be responsible for designing and implementing scalable, high-performance data solutions on Google Cloud Platform. You will work closely with stakeholders to define data architecture, implement data pipelines, modernize legacy data systems, and guide data strategy aligned with enterprise goals.

Key Responsibilities:
- Lead end-to-end design and implementation of scalable data architecture on Google Cloud Platform (GCP)
- Define data strategy, standards, and best practices for cloud data engineering and analytics
- Develop data ingestion pipelines using Dataflow, Pub/Sub, Apache Beam, Cloud Composer (Airflow), and BigQuery
- Migrate on-prem or legacy systems to GCP (e.g., from Hadoop, Teradata, or Oracle to BigQuery)
- Architect data lakes, warehouses, and real-time data platforms
- Ensure data governance, security, lineage, and compliance (using tools like Data Catalog, IAM, DLP)
- Guide a team of data engineers and collaborate with business stakeholders, data scientists, and product managers
- Create documentation, high-level design (HLD) and low-level design (LLD), and oversee development standards
- Provide technical leadership in architectural decisions and future-proofing the data ecosystem

Required Skills & Qualifications:
- 10+ years of experience in data architecture, data engineering, or enterprise data platforms.
- Minimum 3–5 years of hands-on experience with GCP data services.
- Proficient in: BigQuery, Cloud Storage, Dataflow, Pub/Sub, Composer, Cloud SQL/Spanner; Python / Java / SQL; data modeling (OLTP, OLAP, Star/Snowflake schema).
- Experience with real-time data processing, streaming architectures, and batch ETL pipelines.
- Good understanding of IAM, networking, security models, and cost optimization on GCP.
- Prior experience leading cloud data transformation projects.
- Excellent communication and stakeholder management skills.

Preferred Qualifications:
- GCP Professional Data Engineer / Architect Certification.
- Experience with Terraform, CI/CD, GitOps, and Looker / Data Studio / Tableau for analytics.
- Exposure to AI/ML use cases and MLOps on GCP.
- Experience working in agile environments and client-facing roles.

What We Offer:
- Opportunity to work on large-scale data modernization projects with global clients.
- A fast-growing company with a strong tech and people culture.
- Competitive salary, benefits, and flexibility.
- Collaborative environment that values innovation and leadership.

Job Type: Full-time
Pay: ₹2,500,000.00 - ₹3,500,000.00 per year

Application Question(s):
- Current CTC?
- Expected CTC?
- Notice Period? (If you are serving your notice period, please mention your last working day.)

Experience:
- GCP Data Architecture: 3 years (Required)
- BigQuery: 3 years (Required)
- Cloud Composer (Airflow): 3 years (Required)

Location: Madurai, Tamil Nadu (Required)
Work Location: In person
Posted 6 days ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
At PwC, our people in software and product innovation focus on developing cutting-edge software solutions and driving product innovation to meet the evolving needs of clients. These individuals combine technical experience with creative thinking to deliver innovative software products and solutions. In testing and quality assurance at PwC, you will focus on the process of evaluating a system or software application to identify any defects, errors, or gaps in its functionality. Working in this area, you will execute various test cases and scenarios to validate that the system meets the specified requirements and performs as expected.

You are a reliable, contributing member of a team. In our fast-paced environment, you are expected to adapt, take ownership, and consistently deliver quality work that drives value for our clients and success as a team.

Skills
Examples of the skills, knowledge, and experiences you need to lead and deliver value at this level include but are not limited to:
- Apply a learning mindset and take ownership for your own development.
- Appreciate diverse perspectives, needs, and feelings of others.
- Adopt habits to sustain high performance and develop your potential.
- Actively listen, ask questions to check understanding, and clearly express ideas.
- Seek, reflect, act on, and give feedback.
- Gather information from a range of sources to analyse facts and discern patterns.
- Commit to understanding how the business works and building commercial awareness.
- Learn and apply professional and technical standards (e.g. refer to specific PwC tax and audit guidance), uphold the Firm's code of conduct and independence requirements.

ETL Tester Associate - Operate

Job Summary
A career in our Managed Services team will provide you with an opportunity to collaborate with a wide array of teams to help our clients implement and operate new capabilities, achieve operational efficiencies, and harness the power of technology. Our Data, Testing & Analytics as a Service team brings a unique combination of industry expertise, technology, data management, and managed services experience to create sustained outcomes for our clients and improve business performance. We empower companies to transform their approach to analytics and insights while building your skills in exciting new directions. Have a voice at our table to help design, build, and operate the next generation of software and services that manage interactions across all aspects of the value chain.

Minimum Degree Required (BQ): Bachelor's degree
Preferred Field(s) of Study: Computer and Information Science, Management Information Systems
Minimum Year(s) of Experience (BQ): Minimum of 2 years of experience

Preferred Knowledge/Skills:
As an ETL Tester, you will be responsible for designing, developing, and executing SQL scripts to ensure the quality and functionality of our ETL processes. You will work closely with our development and data engineering teams to identify test requirements and drive the implementation of automated testing solutions.

Key Responsibilities
- Collaborate with data engineers to understand ETL workflows and requirements.
- Perform data validation and testing to ensure data accuracy and integrity.
- Create and maintain test plans, test cases, and test data.
- Identify, document, and track defects, and work with development teams to resolve issues.
- Participate in design and code reviews to provide feedback on testability and quality.
- Develop and maintain automated test scripts using Python for ETL processes (a minimal reconciliation test follows this posting).
- Ensure compliance with industry standards and best practices in data testing.

Qualifications
- Solid understanding of SQL and database concepts.
- Proven experience in ETL testing and automation.
- Strong proficiency in Python programming.
- Familiarity with ETL tools such as Apache NiFi, Talend, Informatica, or similar.
- Knowledge of data warehousing and data modeling concepts.
- Strong analytical and problem-solving skills.
- Excellent communication and collaboration abilities.
- Experience with version control systems like Git.

Preferred Qualifications
- Experience with cloud platforms such as AWS, Azure, or Google Cloud.
- Familiarity with CI/CD pipelines and tools like Jenkins or GitLab.
- Knowledge of big data technologies such as Hadoop, Spark, or Kafka.
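A minimal sketch of the kind of automated check an ETL tester might script: a pytest that reconciles row counts between a source staging table and a warehouse target. SQLite stands in for the real engines, and all file names and table names are hypothetical; a fuller suite would also compare per-column checksums, null rates, and referential integrity.

```python
import sqlite3  # stand-in engine; swap for your warehouse's DB-API driver


def row_count(conn, table):
    # Count rows in a table; the simplest completeness check after a load.
    return conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]


def test_orders_fully_loaded():
    source = sqlite3.connect("source.db")      # hypothetical source extract
    target = sqlite3.connect("warehouse.db")   # hypothetical warehouse
    assert row_count(source, "orders_staging") == row_count(target, "orders_fact")
```

Wiring such tests into a CI pipeline (Jenkins or GitLab, as the preferred qualifications mention) turns every ETL change into an automatically validated one.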
Posted 6 days ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
At eBay, we're more than a global ecommerce leader — we’re changing the way the world shops and sells. Our platform empowers millions of buyers and sellers in more than 190 markets around the world. We’re committed to pushing boundaries and leaving our mark as we reinvent the future of ecommerce for enthusiasts. Our customers are our compass, authenticity thrives, bold ideas are welcome, and everyone can bring their unique selves to work — every day. We're in this together, sustaining the future of our customers, our company, and our planet. Join a team of passionate thinkers, innovators, and dreamers — and help us connect people and build communities to create economic opportunity for all.

Do you want to make an impact on the world's largest e-commerce website? Are you interested in building performance-efficient, high-volume, and highly scalable distributed systems? We have a place for you!

Who Are We?
We are seeking a hard-working Software Engineer to join our Compliance Engineering Development team. In this key role, you will help ensure that the eBay marketplace operates in full alignment with all relevant regulatory requirements. As a dedicated and enthusiastic team member, you'll collaborate with dedicated and hardworking peers in a dynamic and enjoyable environment, building exceptional compliance products. You will thrive in an agile setting that values problem-solving, innovation, and engineering perfection.

What Will You Do
We are looking for exceptional engineers who take pride in creating simple solutions to apparently complex problems. Our engineering tasks typically involve at least one of the following:
- Crafting sound API design and driving integration between our data layers and customer-facing applications and components
- Designing and running A/B tests in production experiences in order to vet and measure the impact of any new or improved functionality (a worked significance check follows this posting)
- Actively contributing to development of complex, multi-tier distributed software applications
- Designing layered applications, including user interface, business functionality, and database access
- Working with other developers and quality engineers to develop innovative solutions that meet market needs
- Estimating engineering efforts, planning implementations, and rolling out system changes
- Participating in continuous improvement of the Payments product to achieve better quality
- Participating in requirement/design meetings with other PD/QE

What You Bring
- Excellent decision-making skills; you thrive on dealing with ambiguity and change.
- Strong sense of ownership and communication skills; you embrace diverse ideas across organizations and align on a mutually agreed direction to get things done and move forward.
- Deep care about growing others; great at mentoring and coaching, creating a large positive impact on organizational culture.
- Strong learning ability and self-drive: attending knowledge-sharing sessions, both within the company and externally; learning transferable skills; a growth mindset that constantly looks for opportunities to learn; and learning adjacent areas (project management, people management, product management) in addition to core technical skills to better support the organization.

Qualification and Skill Requirements
- Bachelor's degree in EE, CS, or another related field.
- 4+ years of experience in building large-scale, distributed web platforms/APIs, with lead responsibility for end-to-end product scope across multiple domains.
- Experience in server-side development with Java.
- Proficiency with the Spring framework.
- Object-oriented design and design patterns.
- RESTful services.
- Agile development methodologies.
- Multi-threaded development.
- Databases - SQL/NoSQL.
- Hadoop, Hive, and HDFS.
- Demonstrated ability to understand the business and contribute to a technology direction that yields measurable business improvements.
- Ability to adapt to changing business priorities and to thrive under pressure.
- Excellent decision-making, communication, and collaboration skills.

Benefits are an essential part of your total compensation for the work you do every day. Whether you're single, in a growing family, or nearing retirement, eBay offers a variety of comprehensive and competitive benefit programs to meet your needs, including maternal & paternal leave, paid sabbatical, and plans to help ensure your financial security today and in the years ahead, because we know feeling financially secure during your working years and through retirement is important.

Here at eBay, we love creating opportunities for others by connecting people from a widely diverse set of backgrounds, perspectives, and geographies. So, being diverse and inclusive isn't just something we strive for; it is who we are, and part of what we do each and every single day. We want to ensure that as an employee, you feel eBay is a place where, no matter who you are, you feel safe, included, and that you have the opportunity to bring your unique self to work. To learn about eBay's Diversity & Inclusion, click here: https://www.ebayinc.com/company/diversity-inclusion/

Please see the Talent Privacy Notice for information regarding how eBay handles your personal data collected when you use the eBay Careers website or apply for a job with eBay. eBay is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, sex, sexual orientation, gender identity, veteran status, and disability, or other legally protected status. If you have a need that requires accommodation, please contact us at talent@ebay.com. We will make every effort to respond to your request for accommodation as soon as possible. View our accessibility statement to learn more about eBay's commitment to ensuring digital accessibility for people with disabilities.
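To make the A/B-testing bullet concrete, here is a minimal two-sided, two-proportion z-test in plain Python (standard library only). The conversion counts are made-up numbers for illustration; production experimentation platforms add sequential-testing and multiple-comparison corrections on top of this basic check.

```python
from math import sqrt
from statistics import NormalDist


def two_proportion_z(conv_a, n_a, conv_b, n_b):
    # Two-sided z-test for a difference in conversion rates (pooled under H0).
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value


# Made-up counts: variant B converts 5.4% vs control's 4.8% on 10k users each.
z, p = two_proportion_z(480, 10_000, 540, 10_000)
print(f"z={z:.2f}, p={p:.3f}")  # reject H0 at the 5% level only if p < 0.05
```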
Posted 6 days ago
3.0 - 5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About Us
Thoucentric is the consulting arm of Xoriant, a prominent digital engineering services company with 5,000 employees. We are headquartered in Bangalore with a presence across multiple locations in India, the US, the UK, Singapore, and Australia. As the consulting business of Xoriant, we help clients with business consulting, program & project management, digital transformation, product management, and process & technology solutioning and execution, including analytics & emerging tech areas, cutting across functional areas such as supply chain, finance & HR, and sales & distribution across the US, UK, Singapore, and Australia. Our unique consulting framework allows us to focus on execution rather than pure advisory. We are working closely with marquee names in the global consumer & packaged goods (CPG) industry, new-age tech, and the start-up ecosystem.

Xoriant (the parent entity) started in 1990 and is a Sunnyvale, CA headquartered digital engineering firm with offices in the USA, Europe, and Asia. Xoriant is backed by ChrysCapital, a leading private equity firm. Our strengths are now combined with Xoriant's capabilities in AI & data, cloud, security, and operations services, proven over 30 years. We have been certified as a "Great Place to Work" by AIM and have been ranked among the "50 Best Firms for Data Scientists to Work For." We have an experienced consulting team of over 450 world-class business and technology consultants based across six global locations, supporting clients through their expert insights, entrepreneurial approach, and focus on delivery excellence. We have also built point solutions and products through Thoucentric Labs using AI/ML in the supply chain space.

Job Title: Data Scientist (3-5 Years Experience)
Location: Bangalore

About Us: Thoucentric is a forward-thinking organization at the forefront of leveraging data-driven insights to solve complex business challenges. We are seeking a passionate and skilled Data Scientist to join our dynamic team and help us drive innovation through advanced analytics and machine learning.

Key Responsibilities:
- Develop and implement machine learning and deep learning models for various business problems, with a strong focus on time series forecasting (see the sketch after this posting).
- Analyze large, complex datasets to extract actionable insights and identify trends, patterns, and opportunities for improvement.
- Design, build, and validate predictive models using state-of-the-art techniques, ensuring scalability and robustness.
- Collaborate with cross-functional teams (Product, Engineering, Business) to translate business requirements into data science solutions.
- Communicate findings and recommendations clearly to both technical and non-technical stakeholders.
- Stay updated with the latest research and advancements in machine learning, deep learning, and time series analysis, and proactively apply new techniques as appropriate.
- Mentor junior team members and contribute to a culture of continuous learning and innovation.

Requirements
Required Skills & Qualifications:
- 3-5 years of hands-on experience in data science, machine learning, and statistical modeling.
- Strong expertise in time series forecasting (ARIMA, XGBoost, Random Forest, TFT, NHITS, etc.) and familiarity with deep learning frameworks (TensorFlow, PyTorch).
- Excellent programming skills in Python (preferred), with proficiency in libraries such as NumPy, Pandas, scikit-learn, and visualization tools (Matplotlib, Seaborn, Plotly).
- Solid conceptual understanding of machine learning algorithms, deep learning architectures, and statistical methods.
Experience with data preprocessing, feature engineering, and model evaluation. Ability to learn quickly and adapt to new technologies, tools, and methodologies. Strong problem-solving skills and a keen attention to detail. Excellent communication and presentation skills. Preferred Qualifications: Experience with cloud platforms and MLOps tools. Exposure to big data technologies (Spark, Hadoop) is a plus. Masters degree in Computer Science, Statistics, Mathematics, or a related field. Benefits What a Consulting role at Thoucentric will offer you? Opportunity to define your career path and not as enforced by a manager A great consulting environment with a chance to work with Fortune 500 companies and startups alike. A dynamic but relaxed and supportive working environment that encourages personal development. Be part of One Extended Family. We bond beyond work - sports, get-togethers, common interests etc. Work in a very enriching environment with Open Culture, Flat Organization and Excellent Peer Group. Be part of the exciting Growth Story of Thoucentric! I'm interested Locations: Bangalore North, India | Posted on: 05/02/2025
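As a concrete illustration of the time series forecasting focus above, here is a minimal, hedged sketch of the kind of lag-feature baseline such a role typically builds; the series, horizon, and hyperparameters are illustrative assumptions, not part of the posting:

```python
# Minimal time-series forecasting baseline: lag features + RandomForest.
# Illustrative sketch only -- the series, lag count and horizon are assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

def make_lag_features(series: pd.Series, n_lags: int = 12) -> pd.DataFrame:
    """Turn a univariate series into a supervised-learning frame of lags."""
    df = pd.DataFrame({"y": series})
    for lag in range(1, n_lags + 1):
        df[f"lag_{lag}"] = series.shift(lag)
    return df.dropna()

# `sales` is a hypothetical monthly demand series; stand-in data here.
sales = pd.Series(range(100), dtype="float64")
frame = make_lag_features(sales)
train, test = frame.iloc[:-12], frame.iloc[-12:]  # hold out the last year

model = RandomForestRegressor(n_estimators=200, random_state=42)
model.fit(train.drop(columns="y"), train["y"])
preds = model.predict(test.drop(columns="y"))
```

The same frame-of-lags shape drops in cleanly for XGBoost or a neural forecaster, which is why it is a common first baseline.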
Posted 1 week ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Company Description
Blend is a premier AI services provider, committed to co-creating meaningful impact for its clients through the power of data science, AI, technology, and people. With a mission to fuel bold visions, Blend tackles significant challenges by seamlessly aligning human expertise with artificial intelligence. The company is dedicated to unlocking value and fostering innovation for its clients by harnessing world-class people and data-driven strategy. We believe that the power of people and AI can have a meaningful impact on your world, creating more fulfilling work and projects for our people and clients. For more information, visit www.blend360.com

Job Description
You will be a key member of our Data Engineering team, focused on designing, developing, and maintaining robust data solutions in on-premise environments. You will work closely with internal teams and client stakeholders to build and optimize data pipelines and analytical tools using Python, PySpark, SQL, and Hadoop ecosystem technologies. This role requires deep hands-on experience with big data technologies in traditional data center environments (non-cloud).

What you'll be doing:
Design, build, and maintain on-premise data pipelines to ingest, process, and transform large volumes of data from multiple sources into data warehouses and data lakes.
Develop and optimize PySpark and SQL jobs for high-performance batch and real-time data processing.
Ensure the scalability, reliability, and performance of data infrastructure in an on-premise setup.
Collaborate with data scientists, analysts, and business teams to translate their data requirements into technical solutions.
Troubleshoot and resolve issues in data pipelines and data processing workflows.
Monitor, tune, and improve Hadoop clusters and data jobs for cost and resource efficiency.
Stay current with on-premise big data technology trends and suggest enhancements to improve data engineering capabilities.

Qualifications
Bachelor's degree in Computer Science, Software Engineering, or a related field.
6+ years of experience in data engineering or a related domain.
Strong programming skills in Python (with experience in PySpark).
Expertise in SQL with a solid understanding of data warehousing concepts.
Hands-on experience with Hadoop ecosystem components (e.g., HDFS, Hive, Oozie, Sqoop).
Proven ability to design and manage data solutions in on-premise environments (no cloud dependency).
Strong problem-solving skills with an ability to work independently and collaboratively.
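To make the on-premise Hive pipeline work above concrete, a minimal PySpark sketch; the database, table and column names are assumptions for illustration:

```python
# Sketch of an on-premise PySpark batch job: read from Hive, aggregate,
# write back as a partitioned Hive table. Schema names are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("daily-orders-etl")
    .enableHiveSupport()        # needed to read/write Hive tables
    .getOrCreate()
)

orders = spark.table("raw_db.orders")          # hypothetical source table
daily = (
    orders
    .filter(F.col("status") == "COMPLETED")
    .groupBy("order_date", "region")
    .agg(F.sum("amount").alias("total_amount"),
         F.count("*").alias("order_count"))
)

(daily.write
      .mode("overwrite")
      .partitionBy("order_date")
      .saveAsTable("curated_db.daily_orders"))  # hypothetical target table
```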
Posted 1 week ago
4.0 - 7.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About The Opportunity
Our client is expanding the software product engineering team for its partner, a US-based SaaS platform company specializing in autonomous security solutions. Their partner's platform leverages advanced AI to detect, mitigate, and respond to cyber threats across enterprise infrastructures. By offering comprehensive visibility, deep cognition, effective detection, thorough root cause analysis, and high-precision control, they aim to transform traditional governance, risk, and compliance (GRC) workflows into fast and scalable AI-native processes.

Responsibilities
Model Development and Integration: Design, implement, test, integrate and deploy scalable machine learning models, integrating them into production systems and APIs to support existing and new customers.
Experimentation and Optimization: Lead the design of experiments and hypothesis testing for product feature development; monitor and analyze model performance and data accuracy, making improvements as needed.
Cross-Functional Collaboration: Work closely with cross-functional teams across India and the US to identify opportunities, deploy impactful solutions, and effectively communicate findings to both technical and non-technical stakeholders.
Mentorship and Continuous Learning: Mentor junior team members, contribute to knowledge sharing, and stay current with best practices in data science, machine learning, and AI.

Requirements & Qualifications
Bachelor's or Master's degree in Statistics, Mathematics, Computer Science, Engineering, or a related quantitative field.
4-7 years of experience building and deploying machine learning models.
Strong problem-solving skills with an emphasis on product development.
Experience operating and troubleshooting scalable machine learning systems in the cloud.

Technical Skills
Programming and Frameworks: Proficient in Python with experience in TensorFlow, PyTorch, scikit-learn, and Pandas; familiarity with Golang is a plus; proficient with Git and collaborative workflows.
Software Engineering: Strong understanding of data structures, algorithms, and system design principles; experience in designing scalable, reliable, and maintainable systems.
Machine Learning Expertise: Extensive experience in AI and machine learning model development, including large language models, transformers, sequence models, causal inference, unsupervised clustering, and reinforcement learning. Knowledge of prompting techniques, embedding models, and retrieval-augmented generation (RAG).
Innovation in Machine Learning: Ability to design and conceive novel ways of problem solving using new machine learning models.
Integration, Deployment, and Cloud Services: Experience integrating machine learning models into backend systems and APIs; familiarity with Docker, Kubernetes, CI/CD tools, and cloud services such as AWS/Azure/GCP for efficient deployment.
Data Management and Security: Proficient with SQL and experienced with PostgreSQL; knowledge of NoSQL databases; understanding of application security and data protection principles.

Methodologies And Tools
Agile/Scrum Practices: Experience with Agile/Scrum methodologies.
Project Management Tools: Proficiency with Jira, Notion, or similar tools.

Soft Skills
Excellent communication and problem-solving abilities.
Ability to work independently and collaboratively.
Strong organizational and time management skills.
High degree of accountability and ownership.

Nice-to-Haves
Experience with big data tools like Hadoop or Spark.
Familiarity with infrastructure management and operations lifecycle concepts.
Experience working in a startup environment.
Contributions to open-source projects or a strong GitHub portfolio.

Benefits
Comprehensive Insurance (Life, Health, Accident).
Flexible Work Model.
Accelerated learning and non-linear growth.
Flat organization structure driven by ownership and accountability.
Opportunity to own and be a part of some of the most innovative and promising AI/SaaS product companies in North America and around the world.
Accomplished global peers - working with some of the best engineers and professionals globally, from the likes of Apple, Amazon, IBM Research, Adobe and other innovative product companies.
Ability to make a global impact with your work, leading innovations in Conversational AI, Energy/Utilities, ESG, HealthTech, IoT, Risk/Compliance, CyberSecurity, PLM and more.

Skills: API, Azure, Jira, machine learning models, AWS, Golang, Docker, product development, machine learning, Git, PostgreSQL, Python, GitHub, scikit-learn, TensorFlow, NoSQL, CI/CD, Kubernetes, GCP, PyTorch, Pandas, Spark, SQL, Hadoop
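As an illustration of the "integrate machine learning models into backend systems and APIs" requirement, a minimal hedged sketch using FastAPI and a pre-trained scikit-learn artifact; the model path and feature schema are assumptions:

```python
# Minimal sketch of exposing a trained model behind an HTTP API.
# Model artifact and request schema are hypothetical placeholders.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # assumed pre-trained artifact on disk

class Features(BaseModel):
    values: list[float]  # flat feature vector for one sample

@app.post("/predict")
def predict(features: Features) -> dict:
    score = model.predict([features.values])[0]
    return {"prediction": float(score)}
```

Served with `uvicorn app:app`, this pattern is what typically sits behind the Docker/Kubernetes deployment pipeline the posting goes on to describe.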
Posted 1 week ago
6.0 - 12.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Greetings from TCS!

Job Title: Azure Data Engineer with hands-on experience in .NET and C#
Location: Pune (Kharadi)
Experience Range: 6-12 years

Required Skillset:
Proficiency in Azure Data Factory, Azure Databricks (including Spark and Delta Lake), and other Azure data services.
Strong programming skills in Python, with experience in data processing libraries such as Pandas, PySpark, and NumPy.
Mandatory experience with .NET and C#.
Familiarity with data warehousing concepts and tools (e.g., Azure Synapse Analytics).

Job Description:
Must-Have:
Proven experience as an Azure Data Engineer with .NET and C#.
Strong proficiency in Azure Databricks, including Spark and Delta Lake.
Experience with Azure Data Factory, Azure Data Lake Storage, and Azure SQL Database.
Proficiency in data warehousing concepts and Python.

Good to Have:
Experience with distributed data/computing tools: MapReduce, Hadoop, Hive, Spark.
Experience creating SQL queries for ETL and reporting.
Experience with DevOps practices and tools, including CI/CD pipelines for data engineering workflows.
Knowledge of big data technologies and data processing workflows.

Responsibilities / Expectations from the Role:
Design, develop, and maintain scalable data pipelines and ETL processes using Azure Databricks, Azure Data Factory, and other Azure services.
Leverage existing C# code and libraries within Azure Databricks, connect to Databricks from .NET applications via ODBC or JDBC, and develop Spark applications in C#.
Create and maintain optimal data pipeline architecture to support data-driven decision-making.
Assemble large, complex datasets that meet functional and non-functional business requirements.
Implement data flows to connect operational systems, data for analytics, and BI systems.
Develop and optimize data models on Azure data platforms.
Stay updated with the latest industry trends and best practices in data engineering and big data technologies.
Participate in code reviews and ensure adherence to coding standards and best practices.
Collaborate with cross-functional teams to define and implement data solutions that align with business objectives.
Document data processes, architectures, and workflows for future reference and knowledge sharing.

Kindly share your updated resume.
Thanks & Regards,
Shilpa Silonee
BFSI A&I TAG
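A small PySpark/Delta Lake sketch of the Databricks pipeline work described above; the mounted storage paths are illustrative placeholders, and the .NET/C# integration side (ODBC/JDBC) is omitted here:

```python
# Sketch of a Delta Lake transformation inside Azure Databricks.
# Paths stand in for ADLS mounts and are assumptions, not real locations.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("delta-etl").getOrCreate()

raw = spark.read.format("delta").load("/mnt/raw/transactions")  # assumed mount
cleaned = (
    raw.dropDuplicates(["transaction_id"])
       .withColumn("ingest_date", F.current_date())
)

(cleaned.write
        .format("delta")
        .mode("append")
        .save("/mnt/curated/transactions"))  # assumed target path
```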
Posted 1 week ago
5.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Wissen Technology is Hiring for Python + Data Engineer

About Wissen Technology: Wissen Technology is a globally recognized organization known for building solid technology teams, working with major financial institutions, and delivering high-quality solutions in IT services. With a strong presence in the financial industry, we provide cutting-edge solutions to address complex business challenges.

Role Overview: We are seeking a skilled and innovative Python Data Engineer with expertise in designing and implementing data solutions using the AWS cloud platform. The ideal candidate will be responsible for building and maintaining scalable, efficient, and secure data pipelines while leveraging Python and AWS services to enable robust data analytics and decision-making processes.

Experience: 5-9 Years
Location: Mumbai

Key Responsibilities
Design, develop, and optimize data pipelines using Python and AWS services such as Glue, Lambda, S3, EMR, Redshift, Athena, and Kinesis.
Implement ETL/ELT processes to extract, transform, and load data from various sources into centralized repositories (e.g., data lakes or data warehouses).
Collaborate with cross-functional teams to understand business requirements and translate them into scalable data solutions.
Monitor, troubleshoot, and enhance data workflows for performance and cost optimization.
Ensure data quality and consistency by implementing validation and governance practices.
Work on data security best practices in compliance with organizational policies and regulations.
Automate repetitive data engineering tasks using Python scripts and frameworks.
Leverage CI/CD pipelines for deployment of data workflows on AWS.

Required Skills:
Professional Experience: 5+ years of experience in data engineering or a related field.
Programming: Strong proficiency in Python, with experience in libraries like pandas, PySpark, or boto3.
AWS Expertise: Hands-on experience with core AWS services for data engineering, such as:
AWS Glue for ETL/ELT
S3 for storage
Redshift or Athena for data warehousing and querying
Lambda for serverless compute
Kinesis or SNS/SQS for data streaming
IAM roles for security
Databases: Proficiency in SQL and experience with relational (e.g., PostgreSQL, MySQL) and NoSQL (e.g., DynamoDB) databases.
Data Processing: Knowledge of big data frameworks (e.g., Hadoop, Spark) is a plus.
DevOps: Familiarity with CI/CD pipelines and tools like Jenkins, Git, and CodePipeline.
Version Control: Proficient with Git-based workflows.
Problem Solving: Excellent analytical and debugging skills.

About Wissen Group:
The Wissen Group was founded in the year 2000. Wissen Technology, a part of Wissen Group, was established in 2015. Wissen Technology is a specialized technology company that delivers high-end consulting for organizations in the Banking & Finance, Telecom, and Healthcare domains. We help clients build world-class products. We offer an array of services including Core Business Application Development, Artificial Intelligence & Machine Learning, Big Data & Analytics, Visualization & Business Intelligence, Robotic Process Automation, Cloud Adoption, Mobility, Digital Adoption, Agile & DevOps, and Quality Assurance & Test Automation. Over the years, the Wissen Group has successfully delivered $1 billion worth of projects for more than 20 of the Fortune 500 companies.
Wissen Technology provides exceptional value in mission-critical projects for its clients through thought leadership, ownership, and assured on-time deliveries that are always 'first time right'. The technology and thought leadership that the company commands in the industry is the direct result of the kind of people Wissen has been able to attract. Wissen is committed to providing them with the best possible opportunities and careers, which extends to providing the best possible experience and value to our clients. We have been certified as a Great Place to Work® company for two consecutive years (2020-2022) and voted one of the Top 20 AI/ML vendors by CIO Insider. Great Place to Work® Certification is recognized the world over by employees and employers alike and is considered the 'Gold Standard'. Wissen Technology has created a Great Place to Work by excelling in all dimensions: High-Trust, High-Performance Culture, Credibility, Respect, Fairness, Pride and Camaraderie.

Website: www.wissen.com
LinkedIn: https://www.linkedin.com/company/wissen-technology
Wissen Leadership: https://www.wissen.com/company/leadership-team/
Wissen Live: https://www.linkedin.com/company/wissen-technology/posts/feedView=All
Wissen Thought Leadership: https://www.wissen.com/articles/
Employee Speak:
https://www.ambitionbox.com/overview/wissen-technology-overview
https://www.glassdoor.com/Reviews/Wissen-Infotech-Reviews-E287365.htm
Great Place to Work:
https://www.wissen.com/blog/wissen-is-a-great-place-to-work-says-the-great-place-to-work-institute-india/
https://www.linkedin.com/posts/wissen-infotech_wissen-leadership-wissenites-activity-6935459546131763200-xF2k
About the Wissen Interview Process:
https://www.wissen.com/blog/we-work-on-highly-complex-technology-projects-here-is-how-it-changes-whom-we-hire/
Latest on Wissen in CIO Insider:
https://www.cioinsiderindia.com/vendor/wissen-technology-setting-new-benchmarks-in-technology-consulting-cid-1064.html
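A minimal sketch of the S3-centric ETL this role describes, using boto3 and pandas in a Lambda-style handler; the bucket names and object keys are assumptions for illustration:

```python
# Sketch of a small S3-to-S3 transform in the Lambda/Glue style:
# read a CSV object, clean it with pandas, write Parquet back.
import io
import boto3
import pandas as pd

s3 = boto3.client("s3")

def handler(event, context):
    # Hypothetical source object; in practice the key would come from `event`.
    obj = s3.get_object(Bucket="raw-bucket", Key="orders/2024-01-01.csv")
    df = pd.read_csv(obj["Body"])

    # Basic validation: drop rows without an id, deduplicate on it.
    df = df.dropna(subset=["order_id"]).drop_duplicates("order_id")

    buffer = io.BytesIO()
    df.to_parquet(buffer, index=False)   # requires pyarrow in the runtime
    s3.put_object(Bucket="curated-bucket",
                  Key="orders/2024-01-01.parquet",
                  Body=buffer.getvalue())
    return {"rows": len(df)}
```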
Posted 1 week ago
4.0 - 8.0 years
2 - 3 Lacs
Chennai
On-site
The Applications Development Intermediate Programmer Analyst is an intermediate-level position responsible for participation in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. The overall objective of this role is to contribute to applications systems analysis and programming activities.

Responsibilities:
Utilize knowledge of applications development procedures and concepts, and basic knowledge of other technical areas, to identify and define necessary system enhancements, including using script tools and analyzing/interpreting code.
Consult with users, clients, and other technology groups on issues; recommend programming solutions; and install and support customer exposure systems.
Apply fundamental knowledge of programming languages for design specifications.
Analyze applications to identify vulnerabilities and security issues, and conduct testing and debugging.
Serve as advisor or coach to new or lower-level analysts.
Identify problems, analyze information, and make evaluative judgements to recommend and implement solutions.
Resolve issues by identifying and selecting solutions through the application of acquired technical experience, guided by precedents.
Operate with a limited level of direct supervision, exercising independence of judgement and autonomy; act as SME to senior stakeholders and/or other team members.
Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency.

Key Responsibilities:
Design and implement ETL pipelines using PySpark and Big Data tools on platforms like Hadoop, Hive, HDFS, etc.
Write scalable Python code for machine learning preprocessing tasks and work with libraries such as pandas, scikit-learn, etc.
Develop data pipelines to support model training, evaluation, and inference.

Skills:
Proficiency in Python programming, with experience in PySpark for large-scale data processing.
Hands-on experience with Big Data technologies: Hadoop, Hive, HDFS, etc.
Exposure to machine learning workflows, the model lifecycle, and data preparation.
Experience with ML libraries: scikit-learn, XGBoost, TensorFlow, PyTorch, etc.
Exposure to cloud platforms (AWS/GCP) for data and AI workloads.

Qualifications:
4-8 years of relevant experience in the financial services industry.
Intermediate-level experience in an Applications Development role.
Consistently demonstrates clear and concise written and verbal communication.
Demonstrated problem-solving and decision-making skills.
Ability to work under pressure and manage deadlines or unexpected changes in expectations or requirements.

Education:
Bachelor's degree/University degree or equivalent experience.

This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.
Job Family Group: Technology
Job Family: Applications Development
Time Type: Full time
Most Relevant Skills: Please see the requirements listed above.
Other Relevant Skills: For complementary skills, please see above and/or contact the recruiter.
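A brief sketch of the machine learning preprocessing named in the key responsibilities, using a scikit-learn Pipeline; the feature and column names are illustrative assumptions:

```python
# Sketch of ML preprocessing: scale numeric features and one-hot encode
# categoricals ahead of a classifier. Column names are hypothetical.
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.linear_model import LogisticRegression

numeric = ["amount", "tenure_days"]
categorical = ["channel", "product_type"]

preprocess = ColumnTransformer([
    ("num", StandardScaler(), numeric),
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
])

pipeline = Pipeline([
    ("prep", preprocess),
    ("clf", LogisticRegression(max_iter=1000)),
])
# pipeline.fit(X_train, y_train) once a labelled frame has been prepared
# upstream, e.g. by a PySpark job that lands features into pandas.
```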
Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.
Posted 1 week ago
5.0 years
2 - 3 Lacs
Chennai
On-site
Job Description
The Applications Development Intermediate Programmer Analyst is an intermediate-level position responsible for participation in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. The overall objective of this role is to contribute to applications systems analysis and programming activities.

Ab Initio Data Engineer
We are looking for an Ab Initio Data Engineer able to design and build Ab Initio-based applications across the Data Integration, Governance & Quality domains for Compliance Risk programs. The individual will work with Technical Leads, Senior Solution Engineers, and prospective Application Managers to build applications, roll out and support production environments leveraging the Ab Initio tech stack, and ensure the overall success of their programs. These are high-visibility, fast-paced key initiatives which generally aim to acquire and curate data and metadata across internal and external sources, provide analytical insights, and integrate with other Citi systems.

Technical Stack:
Ab Initio 4.0.x software suite - Co>Op, GDE, EME, BRE, Conduct>It, Express>It, Metadata>Hub, Query>It, Control>Center, Easy>Graph
Big Data - Cloudera Hadoop, Hive, Yarn
Databases - Oracle 11G/12C, Teradata, MongoDB, Snowflake
Others - JIRA, ServiceNow, Linux, SQL Developer, AutoSys, and Microsoft Office

Responsibilities:
Design and build Ab Initio graphs (both continuous and batch) and Conduct>It plans, and integrate with the portfolio of Ab Initio software.
Build Web Service and RESTful graphs and create RAML or Swagger documentation.
Demonstrate complete understanding and analytical command of the Metadata Hub metamodel.
Apply strong hands-on multifile-system-level programming, debugging, and optimization skills.
Develop complex ETL applications.
Apply good knowledge of RDBMS (Oracle), with the ability to write the complex SQL needed to investigate and analyze data issues.
Write robust UNIX shell and Perl scripts.
Build graphs interfacing with heterogeneous data sources: Oracle, Snowflake, Hadoop, Hive, AWS S3.
Build application configurations for Express>It frameworks: Acquire>It, Spec-To-Graph, Data Quality Assessment.
Build automation pipelines for Continuous Integration & Delivery (CI/CD), leveraging the Testing Framework and JUnit modules, integrating with Jenkins, JIRA, and/or ServiceNow.
Build Query>It data sources for cataloguing data from different sources.
Parse XML, JSON, and YAML documents, including hierarchical models.
Build and implement data acquisition and transformation/curation requirements in a data lake or warehouse environment, and demonstrate experience in leveraging various Ab Initio components.
Build AutoSys or Control>Center jobs and schedules for process orchestration.
Build BRE rulesets for reformat, rollup, and validation use cases.
Build SQL scripts on the database; perform performance tuning, relational model analysis, and data migrations.
Identify performance bottlenecks in graphs and optimize them.
Ensure the Ab Initio code base is appropriately engineered to maintain current functionality and development, adhering to performance optimization and interoperability standards and requirements, and complying with client IT governance policies.
Build regression and functional test cases, and write user manuals for various projects.
Conduct bug fixing, code reviews, and unit, functional, and integration testing.
Participate in the agile development process, and document and communicate issues and bugs relative to data standards.
Pair with other data engineers to develop analytic applications leveraging Big Data technologies: Hadoop, NoSQL, and in-memory data grids.
Challenge and inspire team members to achieve business results in a fast-paced and quickly changing environment.
Perform other duties and/or special projects as assigned.

Qualifications:
Bachelor's degree in a quantitative field (such as Engineering, Computer Science, Statistics, or Econometrics) and a minimum of 5 years of experience.
Minimum 5 years of extensive experience in the design, build, and deployment of Ab Initio-based applications.
Expertise in handling complex, large-scale Data Lake and Warehouse environments.
Hands-on experience writing complex SQL queries, and exporting and importing large amounts of data using utilities.

Education:
Bachelor's degree/University degree or equivalent experience.

This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.
Job Family Group: Technology
Job Family: Applications Development
Time Type: Full time
Most Relevant Skills: Please see the requirements listed above.
Other Relevant Skills: For complementary skills, please see above and/or contact the recruiter.
Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.
Posted 1 week ago
5.0 years
0 Lacs
Pune
On-site
Job description
Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.
HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.
We are currently seeking an experienced professional to join our team in the role of Consultant Specialist.

In this role, you will:
Data Pipelines Integration and Management: Bring expertise in Scala-Spark/Python-Spark development and work with the Agile application development team to implement data strategies.
Design and implement scalable data architectures to support the bank's data needs.
Develop and maintain ETL (Extract, Transform, Load) processes.
Ensure the data infrastructure is reliable, scalable, and secure.
Oversee the integration of diverse data sources into a cohesive data platform.
Ensure data quality, data governance, and compliance with regulatory requirements.
Monitor and optimize data pipeline performance.
Troubleshoot and resolve data-related issues promptly.
Implement monitoring and alerting systems for data processes.
Troubleshoot and resolve technical issues, optimizing system performance and ensuring reliability.
Create and maintain technical documentation for new and existing systems, ensuring that information is accessible to the team.
Implement and monitor solutions that identify both system bottlenecks and production issues.

Requirements
To be successful in this role, you should meet the following requirements:
5+ years of experience in data engineering or a related field, with hands-on experience building and maintaining ETL data pipelines.
Good experience designing and developing Spark applications using Scala or Python.
Good experience with database technologies (SQL, NoSQL), data warehousing solutions, and big data technologies (Hadoop, Spark).
Proficiency in programming languages such as Python, Java, or Scala.
Optimization and performance tuning of Spark applications.
Git experience creating, merging, and managing repos.
Ability to perform unit testing and performance testing.
Good understanding of ETL processes and data pipeline orchestration tools like Airflow and Control-M (see the sketch after this section).
Strong problem-solving skills and the ability to work under pressure.
Excellent communication and interpersonal skills.

The successful candidate will also meet the following (good-to-have) requirements:
Experience in the banking or financial services industry.
Familiarity with regulatory requirements related to data security and privacy in the banking sector.
Experience with cloud platforms (Google Cloud) and their data services.
Certifications in cloud platforms (AWS Certified Data Analytics, Google Professional Data Engineer, etc.).

You'll achieve more when you join HSBC. www.hsbc.com/careers
HSBC is committed to building a culture where all employees are valued and respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment.
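A minimal Airflow sketch of the pipeline orchestration these requirements mention; the DAG id, schedule, and task bodies are assumptions (Control-M, by contrast, is configured rather than coded):

```python
# Sketch of a daily Airflow DAG: run a Spark ETL step, then data-quality
# checks. Task callables are stubs; in Airflow <2.4 use `schedule_interval`
# instead of `schedule`.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def run_spark_etl():
    ...  # e.g. spark-submit the Scala/PySpark job

def run_quality_checks():
    ...  # e.g. row counts and null checks on the target table

with DAG(
    dag_id="daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    etl = PythonOperator(task_id="spark_etl", python_callable=run_spark_etl)
    checks = PythonOperator(task_id="quality_checks",
                            python_callable=run_quality_checks)
    etl >> checks  # quality checks run only after the ETL succeeds
```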
Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website. Issued by – HSDI
Posted 1 week ago
8.0 years
8 - 9 Lacs
Pune
On-site
Calling all innovators – find your future at Fiserv.
We're Fiserv, a global leader in fintech and payments, and we move money and information in a way that moves the world. We connect financial institutions, corporations, merchants, and consumers to one another millions of times a day - quickly, reliably, and securely. Any time you swipe your credit card, pay through a mobile app, or withdraw money from the bank, we're involved. If you want to make an impact on a global scale, come make a difference at Fiserv.

Job Title: Advisor, Statistical Analysis

Requirements:
Bachelor's or Master's degree in Computer Science, Data Science, Statistics, Mathematics, or a related field.
8+ years of experience in data science, preferably in the payments or fintech industry.
Proficiency in Python, SQL, and data science libraries (e.g., pandas, scikit-learn, TensorFlow, PyTorch).
Strong understanding of payment systems, transaction flows, and fraud detection techniques.
Experience with big data technologies (e.g., Spark, Hadoop) and cloud platforms (e.g., AWS, GCP, Azure).
Excellent problem-solving skills and the ability to work in a fast-paced, collaborative environment.

Thank you for considering employment with Fiserv. Please:
Apply using your legal name.
Complete the step-by-step profile and attach your resume (either is acceptable, both are preferable).

Our commitment to Diversity and Inclusion: Fiserv is proud to be an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, gender, gender identity, sexual orientation, age, disability, protected veteran status, or any other category protected by law.
Note to agencies: Fiserv does not accept resume submissions from agencies outside of existing agreements. Please do not send resumes to Fiserv associates. Fiserv is not responsible for any fees associated with unsolicited resume submissions.
Warning about fake job posts: Please be aware of fraudulent job postings that are not affiliated with Fiserv. Fraudulent job postings may be used by cyber criminals to target your personally identifiable information and/or to steal money or financial information. Any communications from a Fiserv representative will come from a legitimate Fiserv email address.
Posted 1 week ago
5.0 years
0 Lacs
Bengaluru
On-site
Job description
Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.
HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.
We are currently seeking an experienced professional to join our team in the role of Senior Consultant Specialist.

In this role, you will:
Data Pipelines Integration and Management: Bring expertise in Scala-Spark/Python-Spark development and work with the Agile application development team to implement data strategies.
Design and implement scalable data architectures to support the bank's data needs.
Develop and maintain ETL (Extract, Transform, Load) processes.
Ensure the data infrastructure is reliable, scalable, and secure.
Oversee the integration of diverse data sources into a cohesive data platform.
Ensure data quality, data governance, and compliance with regulatory requirements.
Monitor and optimize data pipeline performance.
Troubleshoot and resolve data-related issues promptly.
Implement monitoring and alerting systems for data processes.
Troubleshoot and resolve technical issues, optimizing system performance and ensuring reliability.
Create and maintain technical documentation for new and existing systems, ensuring that information is accessible to the team.
Implement and monitor solutions that identify both system bottlenecks and production issues.

Requirements
To be successful in this role, you should meet the following requirements:
5+ years of experience in data engineering or a related field, with hands-on experience building and maintaining ETL data pipelines.
Good experience designing and developing Spark applications using Scala or Python.
Good experience with database technologies (SQL, NoSQL), data warehousing solutions, and big data technologies (Hadoop, Spark).
Proficiency in programming languages such as Python, Java, or Scala.
Optimization and performance tuning of Spark applications.
Git experience creating, merging, and managing repos.
Ability to perform unit testing and performance testing.
Good understanding of ETL processes and data pipeline orchestration tools like Airflow and Control-M.
Strong problem-solving skills and the ability to work under pressure.
Excellent communication and interpersonal skills.

The successful candidate will also meet the following (good-to-have) requirements:
Experience in the banking or financial services industry.
Familiarity with regulatory requirements related to data security and privacy in the banking sector.
Experience with cloud platforms (Google Cloud) and their data services.
Certifications in cloud platforms (AWS Certified Data Analytics, Google Professional Data Engineer, etc.).

You'll achieve more when you join HSBC. www.hsbc.com/careers
HSBC is committed to building a culture where all employees are valued and respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment.
Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website. Issued by – HSDI
Posted 1 week ago
8.0 years
0 Lacs
Karnataka
On-site
At eBay, we're more than a global ecommerce leader — we're changing the way the world shops and sells. Our platform empowers millions of buyers and sellers in more than 190 markets around the world. We're committed to pushing boundaries and leaving our mark as we reinvent the future of ecommerce for enthusiasts. Our customers are our compass, authenticity thrives, bold ideas are welcome, and everyone can bring their unique selves to work — every day. We're in this together, sustaining the future of our customers, our company, and our planet. Join a team of passionate thinkers, innovators, and dreamers — and help us connect people and build communities to create economic opportunity for all.

Global Payments and Risk is at the forefront of innovation, integrating groundbreaking technologies into our products and services. We are dedicated to using the power of data-driven solutions to solve real-world problems at internet scale, improve user experiences, and drive business outcomes. Join us in shaping the future with your expertise and passion for technology.

We are seeking a motivated Software Engineer with a strong background in full-stack software development. The ideal candidate should have a platform-centric approach and a proven track record of building platforms and frameworks to solve complex problems. You will be instrumental in crafting innovative applications that safeguard our marketplace, mitigate risks, and curtail financial losses. Join our collaborative team that thrives on creativity and resourcefulness to tackle sophisticated challenges.

What you will accomplish:
Develop high-performing solutions that align with eBay's business and technology strategies to enhance risk management, trust, and compliance.
Research, analyze, design, develop, and test solutions that are appropriate for the business and technology strategies.
Participate in design discussions, code reviews, and project-related team meetings, contributing significantly to the development process.
Collaborate effectively within a multi-functional team of engineers, architects, product managers, and operations to deliver innovative solutions that address business needs, performance, scale, and reliability.
Acquire domain expertise and apply this knowledge to tackle product challenges, ensuring continuous improvement in the domain.
Act as an onboarding buddy for new joiners, fostering a supportive and inclusive work environment.

What you will bring:
At least 8 years of software design and development experience, with a proven foundation in computer science and strong competencies in data structures, algorithms, distributed computing, and scalable software design.
Hands-on expertise with architectural and design patterns, open-source platforms and frameworks, technologies, and software engineering methodologies.
Hands-on experience developing applications with Spring/Spring Boot, REST, GraphQL, Java, JEE, and Spring Batch.
Hands-on experience building data models in Oracle/MySQL/RDBMS and NoSQL databases, e.g., key-value stores and document stores like MongoDB, Couchbase, Cassandra.
Hands-on experience building tools and user experiences using HTML5, Node.js, React, Arco Design, and Material Design.
Hands-on experience fine-tuning performance bottlenecks in Java, Node.js, and JavaScript.
Proficiency in data and streaming technologies like Hadoop, Spark, Kafka, Apache Flink, etc. (see the sketch after this posting).
Practiced agile development and the ability to adapt to changing business priorities.
Experience building sophisticated integration solutions for internet-scale traffic is a major plus.
Risk domain and rule engine expertise is a major plus.
Excellent decision-making, communication, and collaboration skills.
Familiarity with prompt engineering and AI tools is a major plus.
Experience with Prometheus, Grafana, OLAP, and DevOps tools for observability is required.

Behaviors:
Innovates effectively in a dynamic, fast-changing environment; challenges convention.
Develops solutions that deliver tangible results.
Strong execution, alignment with timelines, and timely addressing of blocking issues when risks arise.
Practices learning and collaborates effectively in a multi-functional team.

Education:
Degree in computer science or an equivalent discipline with 8+ years of software application development experience.

Please see the Talent Privacy Notice for information regarding how eBay handles your personal data collected when you use the eBay Careers website or apply for a job with eBay.
eBay is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, sex, sexual orientation, gender identity, veteran status, and disability, or other legally protected status. If you have a need that requires accommodation, please contact us at talent@ebay.com. We will make every effort to respond to your request for accommodation as soon as possible. View our accessibility statement to learn more about eBay's commitment to ensuring digital accessibility for people with disabilities.
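A small sketch of the Kafka streaming side of this stack, using the kafka-python client; the topic, broker address, and the scoring rule are illustrative assumptions, not eBay's actual pipeline:

```python
# Sketch of a minimal Kafka consumer that scores events as they arrive,
# in the risk-detection spirit of the role. All names are hypothetical.
import json
from kafka import KafkaConsumer  # kafka-python package

consumer = KafkaConsumer(
    "payment-events",                       # hypothetical topic
    bootstrap_servers="localhost:9092",     # assumed local broker
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
)

def risk_score(event: dict) -> float:
    # Stub rule standing in for a real model or rule engine.
    return 1.0 if event.get("amount", 0) > 10_000 else 0.0

for message in consumer:
    event = message.value
    print(message.offset, risk_score(event))
```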
Posted 1 week ago
0 years
0 Lacs
Bengaluru
On-site
Job description
Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.
HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.
We are currently seeking an experienced professional to join our team in the role of Senior Consultant Specialist.

In this role, you will:
Influence a cross-functional/cross-cultural team and the performance of individuals/teams against performance objectives and plans.
Endorse team engagement initiatives, fostering an environment which encourages learning and collaboration to build a sense of community.
Create environments where only the best will do and high standards are expected, regularly achieved and appropriately rewarded; encourage and support continual improvements within the team based on ongoing feedback.
Develop a network of professional relationships across Wholesale Data & Analytics and our stakeholders to improve collaborative working and encourage openness - sharing ideas, information and collateral.
Encourage individuals to network and collaborate with colleagues beyond their own business areas and/or the Group to shape change and benefit the business and its customers.
Support the PMO and delivery managers in documenting accurate status reports as required, in a timely manner.

Requirements
To be successful in this role, you should meet the following requirements:
Serve as the voice of the customer.
Develop, scope and define backlog items (epics/features/user stories) that guide the development team.
Manage the capture, analysis and documentation of data & analytics requirements and processes (both business and IT).
Create analyses of customer journeys and own the product roadmap.
Manage implementation of data & analytics solutions.
Manage change interventions, from training and adoption planning to stakeholder management.
Track and document progress and manage delivery status.
Help define and track measures of success for products.
Risk identification, risk reporting and devising interventions to mitigate risks.
Budget management and forecasting.
Management of internal delivery teams and/or external service providers.
Data platform, service and product owner experience.
Experience applying Design Thinking.
A data-driven mind-set.
Outstanding cross-platform knowledge (incl. Teradata, Hadoop and GCP), with an understanding of the nuances of delivery across these platforms, both technically and functionally, and how they impact delivery.
Communication capabilities, decision-making, problem-solving skills, lateral thinking, and analytical and interpersonal skills.
Experience leading a team and managing multiple, competing priorities and demands in a dynamic environment.
Highly analytical with strong attention to detail.
Demonstrable track record of delivery in a banking and financial markets context.
A strong and diverse technical background and foundation (ETL, SQL, NoSQL, APIs, data architecture, data management principles and patterns, data ingest, data refinery, data provision, etc.).
Experience customising and managing integration tools, databases, warehouses and analytical tools.
A passion for designing towards consistency and efficiency, always striving for continuous improvement via automated processes.
Experience with Agile project methodologies and tools such as JIRA and Confluence.

You'll achieve more when you join HSBC. www.hsbc.com/careers
HSBC is committed to building a culture where all employees are valued and respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment.
Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website. Issued by – HSDI
Posted 1 week ago