
6785 Hadoop Jobs - Page 22

JobPe aggregates job listings for easy access; applications are submitted directly on the original job portal.

5.0 - 8.0 years

7 - 10 Lacs

Chennai

Work from Office

Role Purpose: Support process delivery by ensuring the daily performance of Production Specialists, resolving technical escalations, and developing technical capability within the team.

Responsibilities:
• Oversee and support the process by reviewing daily transactions against performance parameters
• Review the performance dashboard and the team's scores
• Support the team in improving performance parameters by providing technical support and process guidance
• Record, track, and document all queries received, the problem-solving steps taken, and all successful and unsuccessful resolutions
• Ensure standard processes and procedures are followed to resolve all client queries
• Resolve client queries within the SLAs defined in the contract
• Develop an understanding of the process/product to help team members improve client interaction and troubleshooting
• Document and analyze call logs to spot recurring trends and prevent future problems
• Identify red flags and escalate serious client issues to the team leader when resolution is delayed
• Ensure all product information and disclosures are given to clients before and after call/email requests
• Avoid legal challenges by monitoring compliance with service agreements
• Handle technical escalations through effective diagnosis and troubleshooting of client queries
• Manage and resolve technical roadblocks/escalations per SLA and quality requirements; escalate unresolved issues to TA & SES in a timely manner
• Provide product support and resolution by diagnosing queries and guiding users through step-by-step solutions
• Troubleshoot all client queries in a user-friendly, courteous, and professional manner
• Offer alternative solutions where appropriate, with the objective of retaining the customer's and client's business
• Organize ideas and communicate oral messages appropriate to listeners and situations
• Follow up with scheduled callbacks to customers to record feedback and ensure compliance with contract SLAs
• Build people capability to ensure operational excellence and maintain superior customer service levels for the existing account/client
• Mentor and guide Production Specialists on improving technical knowledge
• Collate trainings (triages) to bridge skill gaps identified through interviews with Production Specialists
• Develop and conduct product trainings (triages) for Production Specialists as per targets, and inform the client of the triages being conducted
• Undertake product trainings to stay current with product features, changes, and updates, and enroll in any other trainings per client requirements/recommendations
• Identify and document the most common problems and recommend appropriate resolutions to the team
• Update job knowledge through self-learning opportunities and personal networks

Mandatory Skills: Apache Spark
Experience: 5-8 Years

Posted 3 days ago

Apply

7.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Experience: 7+ Years
Location: Noida, Sector 64
Engagement: Contract to hire

Key Responsibilities:

Data Architecture Design:
• Design, develop, and maintain the enterprise data architecture, including data models, database schemas, and data flow diagrams
• Develop a data strategy and roadmap that aligns with business objectives and ensures the scalability of data systems
• Architect both transactional (OLTP) and analytical (OLAP) databases, ensuring optimal performance and data consistency

Data Integration & Management:
• Oversee the integration of disparate data sources into a unified data platform, leveraging ETL/ELT processes and data integration tools
• Design and implement data warehousing solutions, data lakes, and/or data marts that enable efficient storage and retrieval of large datasets
• Ensure proper data governance, including the definition of data ownership, security, and privacy controls in accordance with compliance standards (GDPR, HIPAA, etc.)

Collaboration with Stakeholders:
• Work closely with business stakeholders, including analysts, developers, and executives, to understand data requirements and ensure the architecture supports analytics and reporting needs
• Collaborate with DevOps and engineering teams to optimize database performance and support large-scale data processing pipelines

Technology Leadership:
• Guide the selection of data technologies, including databases (SQL/NoSQL), data processing frameworks (Hadoop, Spark), cloud platforms (Azure is a must), and analytics tools
• Stay updated on emerging data management technologies, trends, and best practices, and assess their potential application within the organization

Data Quality & Security:
• Define data quality standards and implement processes to ensure the accuracy, completeness, and consistency of data across all systems
• Establish protocols for data security, encryption, and backup/recovery to protect data assets and ensure business continuity

Mentorship & Leadership:
• Lead and mentor data engineers, data modelers, and other technical staff in best practices for data architecture and management
• Provide strategic guidance on data-related projects and initiatives, ensuring all efforts align with the enterprise data strategy

Required Skills & Experience:
• Extensive data architecture expertise: over 7 years of experience in data architecture, data modeling, and database management
• Proficiency in designing and implementing relational (SQL) and non-relational (NoSQL) database solutions
• Strong experience with data integration tools (Azure tools are a must, plus any other third-party tools), ETL/ELT processes, and data pipelines
• Advanced knowledge of data platforms: expertise in the Azure cloud data platform (Data Lake, Synapse) is a must; other platforms such as AWS (Redshift, S3) and/or Google Cloud Platform (BigQuery, Dataproc) are a bonus
• Experience with big data technologies (Hadoop, Spark) and distributed systems for large-scale data processing
• Hands-on experience with data warehousing solutions and BI tools (e.g., Power BI, Tableau, Looker)
• Data governance & compliance: strong understanding of data governance principles, data lineage, and data stewardship; knowledge of industry standards and compliance requirements (e.g., GDPR, HIPAA, SOX) and the ability to architect solutions that meet them
• Technical leadership: proven ability to lead data-driven projects, manage stakeholders, and drive data strategies across the enterprise
• Strong programming skills in languages such as Python, SQL, R, or Scala

Certification: Azure certifications (Solution Architect, Data Engineer, or Data Scientist) are mandatory.

Pre-Sales Responsibilities:
• Stakeholder engagement: work with product stakeholders to analyze functional and non-functional requirements, ensuring alignment with business objectives
• Solution development: develop end-to-end solutions involving multiple products, ensuring security and performance benchmarks are established, achieved, and maintained
• Proofs of concept (POCs): develop POCs to demonstrate the feasibility and benefits of proposed solutions
• Client communication: communicate system requirements and solution architecture to clients and stakeholders, providing technical assistance and guidance throughout the pre-sales process
• Technical presentations: prepare and deliver technical presentations to prospective clients, demonstrating how proposed solutions meet their needs

Additional Responsibilities:
• Stakeholder collaboration: engage with stakeholders to understand their requirements and translate them into effective technical solutions
• Technology leadership: provide technical leadership and guidance to development teams, ensuring the use of best practices and innovative solutions
• Integration management: oversee the integration of solutions with existing systems and third-party applications, ensuring seamless interoperability and data flow
• Performance optimization: ensure solutions are optimized for performance, scalability, and security, addressing any technical challenges that arise
• Quality assurance: establish and enforce quality assurance standards, conducting regular reviews and testing to ensure robustness and reliability
• Documentation: maintain comprehensive documentation of the architecture, design decisions, and technical specifications
• Mentoring: mentor fellow developers and team leads, fostering a collaborative and growth-oriented environment

Qualifications:
• Education: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field
• Experience: minimum of 7 years in data architecture, with a focus on developing scalable, high-performance solutions
• Technical expertise: proficient in architectural frameworks, cloud computing, database management, and web technologies
• Analytical thinking: strong problem-solving skills, with the ability to analyze complex requirements and design scalable solutions
• Leadership: demonstrated ability to lead and mentor technical teams, with excellent project management skills
• Communication: excellent verbal and written communication skills, with the ability to convey technical concepts to both technical and non-technical stakeholders

Posted 3 days ago

Apply

5.0 - 8.0 years

7 - 10 Lacs

Chennai

Work from Office

Role Purpose: Support process delivery by ensuring the daily performance of Production Specialists, resolving technical escalations, and developing technical capability within the team.

Responsibilities:
• Oversee and support the process by reviewing daily transactions against performance parameters
• Review the performance dashboard and the team's scores
• Support the team in improving performance parameters by providing technical support and process guidance
• Record, track, and document all queries received, the problem-solving steps taken, and all successful and unsuccessful resolutions
• Ensure standard processes and procedures are followed to resolve all client queries
• Resolve client queries within the SLAs defined in the contract
• Develop an understanding of the process/product to help team members improve client interaction and troubleshooting
• Document and analyze call logs to spot recurring trends and prevent future problems
• Identify red flags and escalate serious client issues to the team leader when resolution is delayed
• Ensure all product information and disclosures are given to clients before and after call/email requests
• Avoid legal challenges by monitoring compliance with service agreements
• Handle technical escalations through effective diagnosis and troubleshooting of client queries
• Manage and resolve technical roadblocks/escalations per SLA and quality requirements; escalate unresolved issues to TA & SES in a timely manner
• Provide product support and resolution by diagnosing queries and guiding users through step-by-step solutions
• Troubleshoot all client queries in a user-friendly, courteous, and professional manner
• Offer alternative solutions where appropriate, with the objective of retaining the customer's and client's business
• Organize ideas and communicate oral messages appropriate to listeners and situations
• Follow up with scheduled callbacks to customers to record feedback and ensure compliance with contract SLAs
• Build people capability to ensure operational excellence and maintain superior customer service levels for the existing account/client
• Mentor and guide Production Specialists on improving technical knowledge
• Collate trainings (triages) to bridge skill gaps identified through interviews with Production Specialists
• Develop and conduct product trainings (triages) for Production Specialists as per targets, and inform the client of the triages being conducted
• Undertake product trainings to stay current with product features, changes, and updates, and enroll in any other trainings per client requirements/recommendations
• Identify and document the most common problems and recommend appropriate resolutions to the team
• Update job knowledge through self-learning opportunities and personal networks

Mandatory Skills: DataBricks - Data Engineering
Experience: 5-8 Years

Posted 3 days ago

Apply

5.0 - 8.0 years

7 - 10 Lacs

Pune

Work from Office

Role Purpose: Support process delivery by ensuring the daily performance of Production Specialists, resolving technical escalations, and developing technical capability within the team.

Responsibilities:
• Oversee and support the process by reviewing daily transactions against performance parameters
• Review the performance dashboard and the team's scores
• Support the team in improving performance parameters by providing technical support and process guidance
• Record, track, and document all queries received, the problem-solving steps taken, and all successful and unsuccessful resolutions
• Ensure standard processes and procedures are followed to resolve all client queries
• Resolve client queries within the SLAs defined in the contract
• Develop an understanding of the process/product to help team members improve client interaction and troubleshooting
• Document and analyze call logs to spot recurring trends and prevent future problems
• Identify red flags and escalate serious client issues to the team leader when resolution is delayed
• Ensure all product information and disclosures are given to clients before and after call/email requests
• Avoid legal challenges by monitoring compliance with service agreements
• Handle technical escalations through effective diagnosis and troubleshooting of client queries
• Manage and resolve technical roadblocks/escalations per SLA and quality requirements; escalate unresolved issues to TA & SES in a timely manner
• Provide product support and resolution by diagnosing queries and guiding users through step-by-step solutions
• Troubleshoot all client queries in a user-friendly, courteous, and professional manner
• Offer alternative solutions where appropriate, with the objective of retaining the customer's and client's business
• Organize ideas and communicate oral messages appropriate to listeners and situations
• Follow up with scheduled callbacks to customers to record feedback and ensure compliance with contract SLAs
• Build people capability to ensure operational excellence and maintain superior customer service levels for the existing account/client
• Mentor and guide Production Specialists on improving technical knowledge
• Collate trainings (triages) to bridge skill gaps identified through interviews with Production Specialists
• Develop and conduct product trainings (triages) for Production Specialists as per targets, and inform the client of the triages being conducted
• Undertake product trainings to stay current with product features, changes, and updates, and enroll in any other trainings per client requirements/recommendations
• Identify and document the most common problems and recommend appropriate resolutions to the team
• Update job knowledge through self-learning opportunities and personal networks

Mandatory Skills: Big Data Testing
Experience: 5-8 Years

Posted 3 days ago

Apply

4.0 - 7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

TCS Hiring: Java, Microservices Developer

Role: Java, Microservices Developer
Experience: 4 to 7 years
Locations: Mumbai, Bangalore, Ahmedabad, Hyderabad, and Kolkata

Please read the job description before applying.

NOTE: If your skills/profile match and you are interested, please reply to this email with your latest updated CV and the following details:
• Name
• Contact number
• Email ID
• Highest qualification (e.g., B.Tech/B.E./M.Tech/MCA/M.Sc./MS/BCA/B.Sc.)
• Current organization name
• Total IT experience
• Current CTC
• Expected CTC
• Notice period
• Whether you have worked with TCS (Y/N)

Must-Have:
• Expert knowledge in developing cloud-based applications with Java, Spring Boot, Spring REST, Spring JPA, and Spring Cloud
• Development experience with microservices architecture best practices, Docker, and Kubernetes
• Knowledge of NoSQL databases such as Couchbase and MongoDB
• Experience designing, maintaining, and tuning high-performance code
• Strong knowledge of web security practices
• Knowledge of Google Cloud Platform and Kubernetes
• Good understanding of Git, source control procedures, and feature branching
• Experience in Hadoop, Scala, and Spark
• Experience working in Agile development

Posted 3 days ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

At EXL, we go beyond capabilities to focus on collaboration and character, tailoring solutions to your unique needs, culture, goals, and technology environments. We specialize in transformation, data science, and change management to enhance efficiency, improve customer relationships, and drive revenue growth. Our expertise in analytics, digital interventions, and operations management helps you outperform the competition with sustainable models at scale. As your business evolution partner, we optimize data leverage for better business decisions and intelligence-driven operations. For more information, visit www.exlservice.com.

Job Title: Data Engineer - PySpark, Python, SQL, Git, AWS services (Glue, Lambda, Step Functions, S3, Athena)

Role Description:
We are seeking a talented Data Engineer with expertise in PySpark, Python, SQL, Git, and AWS to join our dynamic team. The ideal candidate will have a strong background in data engineering, data processing, and cloud technologies. You will play a crucial role in designing, developing, and maintaining our data infrastructure to support our analytics.

Responsibilities:
1. Develop and maintain ETL pipelines using PySpark and AWS Glue to process and transform large volumes of data efficiently (a minimal sketch follows this listing).
2. Collaborate with analysts to understand data requirements and ensure data availability and quality.
3. Write and optimize SQL queries for data extraction, transformation, and loading.
4. Use Git for version control, ensuring proper documentation and tracking of code changes.
5. Design, implement, and manage scalable data lakes on AWS, using S3 or other relevant services for efficient data storage and retrieval.
6. Develop and optimize high-performance, scalable databases using Amazon DynamoDB.
7. Create interactive dashboards and data visualizations in Amazon QuickSight.
8. Automate workflows using AWS services such as EventBridge and Step Functions.
9. Monitor and optimize data processing workflows for performance and scalability.
10. Troubleshoot data-related issues and provide timely resolution.
11. Stay up to date with industry best practices and emerging technologies in data engineering.

Qualifications:
1. Bachelor's degree in Computer Science, Data Science, or a related field; a Master's degree is a plus.
2. Strong proficiency in PySpark and Python for data processing and analysis.
3. Proficiency in SQL for data manipulation and querying.
4. Experience with version control systems, preferably Git.
5. Familiarity with AWS services, including S3, Redshift, Glue, Step Functions, EventBridge, CloudWatch, Lambda, QuickSight, DynamoDB, Athena, CodeCommit, etc.
6. Familiarity with Databricks and its concepts.
7. Excellent problem-solving skills and attention to detail.
8. Strong communication and collaboration skills to work effectively within a team.
9. Ability to manage multiple tasks and prioritize effectively in a fast-paced environment.

Preferred Skills:
1. Knowledge of data warehousing concepts and data modeling.
2. Familiarity with big data technologies like Hadoop and Spark.
3. AWS certifications related to data engineering.
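The core of the role above is the PySpark-on-Glue ETL pattern. As a rough illustration only (the bucket paths, column names, and schema below are hypothetical, not from the posting), such a job typically reads raw files from S3, cleans and types them, and writes partitioned Parquet back for downstream query engines like Athena:

```python
# Minimal PySpark ETL sketch; paths and columns are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: raw CSV landed in S3 by an upstream process.
raw = spark.read.option("header", True).csv("s3://example-raw-bucket/orders/")

# Transform: cast types, drop malformed rows, stamp a partition column.
orders = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("order_id").isNotNull())
       .withColumn("load_date", F.current_date())
)

# Load: partitioned Parquet that Athena or Redshift Spectrum can query.
orders.write.mode("overwrite").partitionBy("load_date").parquet(
    "s3://example-curated-bucket/orders/"
)
```

In an actual Glue job, the same logic would run inside a Glue script with a GlueContext, and a Step Functions or EventBridge trigger would handle the orchestration mentioned in the responsibilities.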

Posted 3 days ago

Apply

8.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

We are looking for candidates with 8+ years of experience for this role.

Job Location: Technopark, Trivandrum
Experience: 8+ years of experience in Microsoft SQL Server administration

Primary Skills: Strong experience in Microsoft SQL Server administration

Qualifications:
• Bachelor's degree in computer science, software engineering, or a related field
• Microsoft SQL certifications (MTA Database, MCSA: SQL Server, MCSE: Data Management and Analytics) are an advantage

Secondary Skills:
• Experience in MySQL, PostgreSQL, and Oracle database administration
• Exposure to Data Lake, Hadoop, and Azure technologies
• Exposure to DevOps or ITIL

Main Duties/Responsibilities:
• Optimize database queries to ensure fast and efficient data retrieval, particularly for complex or high-volume operations
• Design and implement effective indexing strategies to reduce query execution times and improve overall database performance
• Monitor and profile slow or inefficient queries and recommend best practices for rewriting or re-architecting them
• Continuously analyze execution plans for SQL queries to identify and eliminate bottlenecks
• Database maintenance: schedule and execute regular maintenance tasks, including backups, consistency checks, and index rebuilding
• Health monitoring: implement automated monitoring systems to track database performance, availability, and critical parameters such as CPU usage, memory, disk I/O, and replication status (a sketch of such a check follows this listing)
• Proactive issue resolution: diagnose and resolve database issues (e.g., locking, deadlocks, data corruption) before they impact users or operations
• High availability: implement and manage database clustering, replication, and failover strategies to ensure high availability and disaster recovery (e.g., SQL Server Always On, Oracle RAC, MySQL Group Replication)
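To make the monitoring duties above concrete, here is a minimal sketch (assuming pyodbc, VIEW SERVER STATE permission, and placeholder connection details) of the kind of automated health check a DBA might script against SQL Server's built-in query-statistics DMVs:

```python
# Pull the top CPU-consuming statements from SQL Server's plan cache.
# Connection string and server name are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=example-host;DATABASE=master;Trusted_Connection=yes;"
)

TOP_QUERIES = """
SELECT TOP 10
       qs.total_worker_time / qs.execution_count AS avg_cpu_time,
       qs.execution_count,
       SUBSTRING(st.text, 1, 200) AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY avg_cpu_time DESC;
"""

for row in conn.cursor().execute(TOP_QUERIES):
    print(row.avg_cpu_time, row.execution_count, row.query_text)
```

Queries surfaced this way are the usual candidates for the indexing and execution-plan work the role describes.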

Posted 3 days ago

Apply

12.0 years

0 Lacs

Madurai, Tamil Nadu, India

On-site

Job Title: GCP Data Architect
Location: Madurai
Experience: 12+ Years
Notice Period: Immediate

About TechMango:
TechMango is a rapidly growing IT Services and SaaS Product company that helps global businesses with digital transformation, modern data platforms, product engineering, and cloud-first initiatives. We are seeking a GCP Data Architect to lead data modernization efforts for our prestigious client, Livingston, in a highly strategic project.

Role Summary:
As a GCP Data Architect, you will be responsible for designing and implementing scalable, high-performance data solutions on Google Cloud Platform. You will work closely with stakeholders to define data architecture, implement data pipelines, modernize legacy data systems, and guide a data strategy aligned with enterprise goals.

Key Responsibilities:
• Lead end-to-end design and implementation of scalable data architecture on Google Cloud Platform (GCP)
• Define data strategy, standards, and best practices for cloud data engineering and analytics
• Develop data ingestion pipelines using Dataflow, Pub/Sub, Apache Beam, Cloud Composer (Airflow), and BigQuery (a minimal Beam sketch follows this listing)
• Migrate on-prem or legacy systems to GCP (e.g., from Hadoop, Teradata, or Oracle to BigQuery)
• Architect data lakes, warehouses, and real-time data platforms
• Ensure data governance, security, lineage, and compliance (using tools like Data Catalog, IAM, and DLP)
• Guide a team of data engineers and collaborate with business stakeholders, data scientists, and product managers
• Create documentation, high-level designs (HLD) and low-level designs (LLD), and oversee development standards
• Provide technical leadership in architectural decisions and future-proofing the data ecosystem

Required Skills & Qualifications:
• 10+ years of experience in data architecture, data engineering, or enterprise data platforms
• Minimum 3-5 years of hands-on experience with GCP data services
• Proficient in BigQuery, Cloud Storage, Dataflow, Pub/Sub, Composer, and Cloud SQL/Spanner; Python/Java/SQL; and data modeling (OLTP, OLAP, star/snowflake schema)
• Experience with real-time data processing, streaming architectures, and batch ETL pipelines
• Good understanding of IAM, networking, security models, and cost optimization on GCP
• Prior experience leading cloud data transformation projects
• Excellent communication and stakeholder management skills

Preferred Qualifications:
• GCP Professional Data Engineer / Architect certification
• Experience with Terraform, CI/CD, GitOps, and Looker / Data Studio / Tableau for analytics
• Exposure to AI/ML use cases and MLOps on GCP
• Experience working in agile environments and client-facing roles

What We Offer:
• Opportunity to work on large-scale data modernization projects with global clients
• A fast-growing company with a strong tech and people culture
• Competitive salary, benefits, and flexibility
• A collaborative environment that values innovation and leadership
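As a rough sketch of the Dataflow-style ingestion named in the responsibilities (the project, topic, table, and schema below are hypothetical), an Apache Beam streaming pipeline from Pub/Sub into BigQuery can be as small as:

```python
# Minimal Beam streaming sketch: Pub/Sub JSON events into BigQuery.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadEvents" >> beam.io.ReadFromPubSub(
              topic="projects/example-project/topics/events")
        | "ParseJson" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
        | "WriteToBQ" >> beam.io.WriteToBigQuery(
              "example-project:analytics.events",
              schema="event_id:STRING,amount:FLOAT,event_ts:TIMESTAMP")
    )
```

Run with the DataflowRunner, the same code scales out on GCP; Cloud Composer would typically own the scheduling of any batch counterparts.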

Posted 3 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Name: Senior Data Engineer - Azure
Years of Experience: 5

Job Description:
We are looking for a skilled and experienced Senior Azure Developer to join our team! You will be involved in the implementation of ongoing and new initiatives for our company. If you love learning, thinking strategically, innovating, and helping others, this job is for you!

Primary Skills: ADF, Databricks
Secondary Skills: DBT, Python, Databricks, Airflow, Fivetran, Glue, Snowflake

Role Description:
This data engineering role involves creating and managing the technological infrastructure of a data platform; architecting, building, and managing data flows/pipelines; and constructing data storage (NoSQL, SQL), big data tooling (Hadoop, Kafka), and integration tools to connect sources and other databases.

Role Responsibilities:
• Translate functional specifications and change requests into technical specifications
• Translate business requirement documents, functional specifications, and technical specifications into related coding
• Develop efficient code with unit testing and code documentation
• Ensure accuracy and integrity of data and applications through analysis, coding, documenting, testing, and problem solving
• Set up the development environment and configure development tools
• Communicate project status to all project stakeholders
• Manage, monitor, and ensure the security and privacy of data to satisfy business needs
• Contribute to the automation of modules wherever required
• Be proficient in written, verbal, and presentation communication (English)
• Coordinate with the UAT team

Role Requirements:
• Proficient in basic and advanced SQL programming concepts (procedures, analytical functions, etc.)
• Good knowledge and understanding of data warehouse concepts (dimensional modeling, change data capture, slowly changing dimensions, etc.)
• Knowledgeable in Shell/PowerShell scripting
• Knowledgeable in relational databases, non-relational databases, data streams, and file stores
• Knowledgeable in performance tuning and optimization
• Experience in data profiling and data validation
• Experience in requirements gathering, documentation processes, and unit testing
• Understanding and implementation of QA and various testing processes in the project
• Knowledge of any BI tool is an added advantage
• Sound aptitude, outstanding logical reasoning, and analytical skills
• Willingness to learn and take initiative
• Ability to adapt to a fast-paced Agile environment

Additional Requirements:
• Demonstrated expertise as a Data Engineer specializing in Azure cloud services
• Highly skilled in Azure Data Factory, Azure Data Lake, Azure Databricks, and Azure Synapse Analytics
• Create and execute efficient, scalable, and dependable data pipelines using Azure Data Factory
• Use Azure Databricks for data transformation and processing (a minimal sketch follows this listing)
• Effectively oversee and enhance data storage solutions, emphasizing Azure Data Lake and other Azure storage services
• Construct and maintain workflows for data orchestration and scheduling using Azure Data Factory or equivalent tools
• Proficient in programming languages like Python and SQL, and conversant with pertinent scripting languages
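As context for the Databricks item above, here is a minimal PySpark sketch (the storage account, container, and column names are placeholders) of a transformation step that an Azure Data Factory pipeline might trigger on a Databricks cluster:

```python
# Aggregate raw JSON events from ADLS into a curated Delta table.
# Paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()  # pre-created on Databricks

events = spark.read.format("json").load(
    "abfss://raw@exampleaccount.dfs.core.windows.net/sales/"
)

daily = (
    events.withColumn("sale_date", F.to_date("event_ts"))
          .groupBy("sale_date", "region")
          .agg(F.sum("amount").alias("revenue"))
)

# Delta is the default table format on Databricks.
daily.write.format("delta").mode("overwrite").save(
    "abfss://curated@exampleaccount.dfs.core.windows.net/daily_sales/"
)
```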

Posted 3 days ago

Apply

4.0 years

0 Lacs

Greater Kolkata Area

On-site

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary:
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC:
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities:
· Analyse current business practices, processes, and procedures, and identify future business opportunities for leveraging Microsoft Azure Data & Analytics Services
· Provide technical and thought leadership as a senior member of the Analytics Practice in areas such as data access and ingestion, data processing, data integration, data modeling, database design and implementation, data visualization, and advanced analytics
· Engage and collaborate with customers to understand business requirements/use cases and translate them into detailed technical specifications
· Develop best practices, including reusable code, libraries, patterns, and consumable frameworks, for cloud-based data warehousing and ETL
· Maintain best-practice standards for cloud-based data warehouse solutioning, including naming standards
· Design and implement highly performant data pipelines from multiple sources using Apache Spark and/or Azure Databricks (a minimal streaming-ingestion sketch follows this listing)
· Integrate the end-to-end data pipeline, taking data from source systems to target data repositories while ensuring data quality and consistency is always maintained
· Work with other members of the project team to support delivery of additional project components (API interfaces)
· Evaluate the performance and applicability of multiple tools against customer requirements
· Work within an Agile delivery / DevOps methodology to deliver proofs of concept and production implementations in iterative sprints
· Integrate Databricks with other technologies (ingestion tools, visualization tools)

Requirements:
· Proven experience working as a data engineer
· Highly proficient in the Spark framework (Python and/or Scala)
· Extensive knowledge of data warehousing concepts, strategies, and methodologies
· Direct experience building data pipelines using Azure Data Factory and Apache Spark (preferably in Databricks)
· Hands-on experience designing and delivering solutions using Azure, including Azure Storage, Azure SQL Data Warehouse, Azure Data Lake, Azure Cosmos DB, and Azure Stream Analytics
· Experience in designing and hands-on development of cloud-based analytics solutions
· Expert-level understanding of Azure Data Factory, Azure Synapse, Azure SQL, Azure Data Lake, and Azure App Service is required
· Experience designing and building data pipelines using API ingestion and streaming ingestion methods
· Knowledge of DevOps processes (including CI/CD) and Infrastructure as Code is essential
· Thorough understanding of Azure cloud infrastructure offerings
· Strong experience in common data warehouse modeling principles, including Kimball
· Working knowledge of Python is desirable
· Experience developing security models
· Databricks & Azure Big Data Architecture certification would be a plus

Mandatory skill sets: ADE, ADB, ADF
Preferred skill sets: ADE, ADB, ADF
Years of experience required: 4-8 years
Education qualification: BE, B.Tech, MCA, M.Tech
Degrees/Fields of Study required: Master of Engineering, Bachelor of Engineering
Required Skills: Microsoft Azure
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline {+ 27 more}
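For the streaming-ingestion requirement referenced above, a minimal Structured Streaming sketch (the broker address, topic, and paths are placeholders; a Kafka source is assumed, which also covers Event Hubs via its Kafka-compatible endpoint) might look like:

```python
# Read a Kafka topic as a stream and land it as Parquet with checkpointing.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("stream-ingest").getOrCreate()

stream = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "example-broker:9092")
         .option("subscribe", "orders")
         .load()
)

# Kafka delivers binary key/value columns; cast them for downstream use.
parsed = stream.select(
    F.col("key").cast("string").alias("order_key"),
    F.col("value").cast("string").alias("payload"),
    "timestamp",
)

query = (
    parsed.writeStream.format("parquet")
          .option("path", "/mnt/curated/orders/")
          .option("checkpointLocation", "/mnt/checkpoints/orders/")
          .outputMode("append")
          .start()
)
query.awaitTermination()
```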

Posted 3 days ago

Apply


5.0 years

0 Lacs

Bengaluru, Karnataka, India

Remote

This is Adyen. Adyen provides payments, data, and financial products in a single solution for customers like Meta, Uber, H&M, and Microsoft, making us the financial technology platform of choice. At Adyen, everything we do is engineered for ambition. For our teams, we create an environment with opportunities for our people to succeed, backed by the culture and support to ensure they are enabled to truly own their careers. We are motivated individuals who tackle unique technical challenges at scale and solve them as a team. Together, we deliver innovative and ethical solutions that help businesses achieve their ambitions faster.

Data Engineer:
We are looking for a Data Engineer to join the Payment Engine Data team in Bengaluru, our newest Adyen office. The main goal of the Payment Engine Data (PED) team is to provide insightful data and solutions for processing payments using all of Adyen's payment options. These consist of various data pipelines between systems, dashboards offering insights into payment processing, internal and external reporting, additional data products, and infrastructure. The ideal candidate is able to understand the business context and relate it to the underlying data requirements, and excels at building top-notch data pipelines on our big data platform. At Adyen, your work as a Data Engineer will be vital in forming our data infrastructure and guaranteeing the seamless flow of data across various systems.

What You'll Do:
• Develop high-quality data pipelines: design, develop, deploy, and operate ETL/ELT pipelines in PySpark. Your work will directly contribute to the creation of reports, tools, analytics, and datasets for both internal and external use.
• Collaborative solution development: partner with various teams, engineers, and data analysts to understand data requirements and transform these insights into effective data pipelines.
• Orchestrate data flow: use orchestration tools to manage data pipelines efficiently; experience with Airflow is a significant advantage (a minimal DAG sketch follows this listing).
• Champion data best practices: advocate for performance, testing, code quality, data validation, data governance, and discoverability. Ensure that the data provided is accurate, performant, and reliable.
• Performance optimisation: identify and resolve performance bottlenecks in data pipelines and systems. Optimise query performance and resource utilisation to meet SLAs, using techniques such as caching, indexing, partitioning, and other Spark optimisations.
• Knowledge sharing and training: scale your knowledge throughout the organisation, enhancing overall data literacy.

Who You Are:
• Experienced in big data: at least 5 years of experience working as a Data Engineer or in a similar role.
• Data & engineering practices: you possess an expert-level understanding of both software and data engineering practices.
• Technically strong: highly proficient in tools and languages such as Python, PySpark, Airflow, Hadoop, Spark, Kafka, SQL, Git, and S3; Looker is a plus.
• Clear communicator: skilled at articulating complex data-related concepts and outcomes to a diverse range of stakeholders.
• Self-starter: capable of independently recognizing opportunities, devising solutions, and leading, prioritizing, and owning projects.
• Innovator: you have an experimental mindset with a 'launch fast and iterate' mentality.
• Data culture champion: experienced in fostering a data-centric culture within large technical organizations and setting standards for excellence and continuous improvement.

Data Positions at Adyen:
We know companies use different definitions for their data-related positions, depending for instance on the size of the company. We have categorized and defined all our positions; have a look at our blog post to find out more.

Our Diversity, Equity and Inclusion Commitments:
Our unique approach is a product of our diverse perspectives. This diversity of backgrounds and cultures is essential in helping us maintain our momentum. Our business and technical challenges are unique, and we need as many different voices as possible to join us in solving them - voices like yours. No matter who you are or where you're from, we welcome you to be your true self at Adyen. Studies show that women and members of underrepresented communities apply for jobs only if they meet 100% of the qualifications. Does this sound like you? If so, Adyen encourages you to reconsider and apply. We look forward to your application!

What's Next?
Ensuring a smooth and enjoyable candidate experience is critical for us. We aim to get back to you regarding your application within 5 business days. Our interview process tends to take about 4 weeks to complete, but may fluctuate depending on the role. Don't be afraid to let us know if you need more flexibility. This role is based out of our Bengaluru office. We are an office-first company and value in-person collaboration; we do not offer remote-only roles.
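For the orchestration point above, here is a minimal Airflow sketch (the DAG id, schedule, and PySpark script path are hypothetical; it assumes the Apache Spark provider package and Airflow 2.4+ for the `schedule` argument) of a daily job submission:

```python
# A daily DAG that submits one PySpark ETL job via spark-submit.
from datetime import datetime

from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import (
    SparkSubmitOperator,
)

with DAG(
    dag_id="payments_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    run_etl = SparkSubmitOperator(
        task_id="run_payments_etl",
        application="/opt/jobs/payments_etl.py",  # placeholder job script
        conn_id="spark_default",
    )
```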

Posted 3 days ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Overview:
We are seeking a Data Scientist with a strong foundation in machine learning and a passion for the travel industry. You will work with cross-functional teams to analyze customer behavior, forecast travel demand, optimize pricing models, and deploy AI-driven solutions to improve user experience and drive business growth.

Key Responsibilities:
• Engage in all stages of the project lifecycle, including data collection, labeling, and preprocessing, to ensure high-quality datasets for model training.
• Utilize advanced machine learning frameworks and pipelines for efficient model development, training execution, and deployment.
• Implement MLflow for tracking experiments, managing datasets, and facilitating model versioning to streamline collaboration (a minimal tracking sketch follows this listing).
• Oversee model deployment on cloud platforms, ensuring scalable and robust performance in real-world travel applications.
• Analyze large volumes of structured and unstructured travel data to identify trends, patterns, and actionable insights.
• Develop, test, and deploy predictive models and machine learning algorithms for fare prediction, demand forecasting, and customer segmentation.
• Create dashboards and reports to communicate insights effectively to stakeholders across the business.
• Collaborate with Engineering, Product, Marketing, and Finance teams to support strategic data initiatives.
• Build and maintain data pipelines for data ingestion, transformation, and modeling.
• Conduct statistical analysis, A/B testing, and hypothesis testing to guide product decisions.
• Automate processes and contribute to scalable, production-ready data science tools.

Technical Skills:
• Machine learning frameworks: PyTorch, TensorFlow, JAX, Keras, Keras-Core, Scikit-learn, distributed model training
• Programming & development: Python, PySpark, Julia, MATLAB, Git, GitLab, Docker, MLOps, CI/CD pipelines
• Cloud & deployment: AWS SageMaker, MLflow, production scaling
• Data science & analytics: statistical analysis, predictive modeling, feature engineering, data preprocessing, Pandas, NumPy, PySpark
• Computer vision: CNN, RNN, OpenCV, Kornia, object detection, image processing, video analytics
• Visualization tools: Looker, Tableau, Power BI, Matplotlib, Seaborn
• Databases & querying: SQL, Snowflake, Databricks
• Big data & MLOps: Spark, Hadoop, Kubernetes, model monitoring

Nice to Have:
• Experience with deep learning, LLMs, NLP (Transformers), or recommendation systems in travel use cases.
• Knowledge of GDS APIs (Amadeus, Sabre), flight search optimization, and pricing models.
• Strong system design (HLD/LLD) and architecture experience for production-scale ML workflows.

Skills: data preprocessing, Docker, feature engineering, SQL, Python, predictive modeling, statistical analysis, Keras, data science, Spark, AWS SageMaker, machine learning, MLflow, TensorFlow, PyTorch
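To illustrate the MLflow tracking responsibility above, here is a minimal sketch on synthetic data (the experiment name and model choice are placeholders, not details from the posting):

```python
# Log parameters, a metric, and a fitted model to MLflow.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=8, noise=0.2, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("fare-prediction")  # hypothetical experiment name

with mlflow.start_run():
    model = RandomForestRegressor(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)

    mae = mean_absolute_error(y_test, model.predict(X_test))
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("mae", mae)
    mlflow.sklearn.log_model(model, "model")  # versioned artifact in the run
```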

Posted 3 days ago

Apply

7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Role: Big Data Developer
Job Locations: Pune, Chennai, Bangalore
Experience Requirement: 7 to 15 years
Required Technical Skills: Hadoop + Spark with Scala/Java

Must-Have:
• 7+ years of active development experience in Spark with Java/Scala, Hadoop, and Hive, with hands-on coding and debugging skills (7+ years in Spark with Java or Spark with Scala is mandatory)
• Experience with the core fundamentals of Apache Hadoop components
• Understanding of best practices for big data processing in Hadoop
• Deployment experience with open-source Hadoop distributions

Good-to-Have:
• PySpark/Python

Responsibilities / Expectations from the Role:
• Proficiency in troubleshooting, root-cause analysis, application design, and implementing large components for enterprise projects
• Unit testing
• Feature development using Spark and Java/Scala
• Working in an Agile team

Posted 3 days ago

Apply

12.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

At eBay, we're more than a global ecommerce leader — we’re changing the way the world shops and sells. Our platform empowers millions of buyers and sellers in more than 190 markets around the world. We’re committed to pushing boundaries and leaving our mark as we reinvent the future of ecommerce for enthusiasts. Our customers are our compass, authenticity thrives, bold ideas are welcome, and everyone can bring their unique selves to work — every day. We're in this together, sustaining the future of our customers, our company, and our planet. Join a team of passionate thinkers, innovators, and dreamers — and help us connect people and build communities to create economic opportunity for all. T26 - Manager Software Development 2 R0068418 The Manager, Software Development 2 role at eBay is part of the Risk Engineering team, focused on developing innovative solutions that enhance eBay's Risk management capabilities. The position requires leading engineering initiatives to improve risk and compliance mitigation across eBay services, ensuring alignment with business objectives and a customer-centric approach. You will report within a structure designed to foster collaboration and mentorship, holding a pivotal role in sustaining eBay's growth and trust What You Will Accomplish Lead a diversely skilled team members comprising of engineers, ML developers and product owners. Drive engineering initiatives that enhance risk management and transaction security, ensuring a customer-centric approach and excellent developer experience. Collaborate with cross-functional teams to integrate robust technical solutions aligned with business objectives and focused on customer needs. Mentor and develop team members, fostering a culture of continuous learning and growth with emphasis on hiring and retaining top talent. Execute eBay’s Risk engineering strategies to achieve measurable performance improvements and optimize developer experience. Innovate and influence the adoption of market-leading technology capabilities, rapidly evaluating and scaling new technologies where appropriate. Lead cross-team collaborations that support eBay’s overall business metrics. What You Will Bring At least 12 years of experience in software development, with the latest 4 years as an Engineering Manager. Hands-on experience with technologies like Java, JEE, Spark, Hadoop, Apache Flink, RDBMS, NoSql. Extensive experience in software development, especially within risk management platforms or related fields. Strong technical expertise in solution design patterns, data systems, and machine learning frameworks. Proven leadership skills with the ability to mentor and guide engineering teams. Excellent communication skills to translate and distill complex technical concepts for various audiences. Willingness to travel as needed for this role to collaborate with global teams. Education: Bachelor’s or Master’s in Computer science & Engineering or equivalent experience. Please see the Talent Privacy Notice for information regarding how eBay handles your personal data collected when you use the eBay Careers website or apply for a job with eBay. eBay is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, sex, sexual orientation, gender identity, veteran status, and disability, or other legally protected status. If you have a need that requires accommodation, please contact us at talent@ebay.com. 
We will make every effort to respond to your request for accommodation as soon as possible. View our accessibility statement to learn more about eBay's commitment to ensuring digital accessibility for people with disabilities.

Posted 3 days ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

At eBay, we're more than a global ecommerce leader — we’re changing the way the world shops and sells. Our platform empowers millions of buyers and sellers in more than 190 markets around the world. We’re committed to pushing boundaries and leaving our mark as we reinvent the future of ecommerce for enthusiasts. Our customers are our compass, authenticity thrives, bold ideas are welcome, and everyone can bring their unique selves to work — every day. We're in this together, sustaining the future of our customers, our company, and our planet. Join a team of passionate thinkers, innovators, and dreamers — and help us connect people and build communities to create economic opportunity for all.

As a Full Stack Software Engineer in the Risk Engineering team, you will be a member of the core team that builds outstanding risk products.

Primary Responsibilities
- Build solutions using your strong background in distributed systems and large-scale database systems.
- Build an excellent user experience for customers.
- Research, analyze, design, develop, and test solutions that are appropriate for the business and technology strategies.
- Participate in design discussions, code reviews, and project-related team meetings.
- Work with other specialists, Architecture, Product Management, and Operations teams to develop innovative solutions that meet business needs with respect to functionality, performance, scalability, reliability, realistic implementation schedules, and alignment to development principles and product goals.
- Develop technical and domain expertise and apply it to solving product challenges.

Qualifications
- Bachelor's degree or equivalent experience in Computer Science or a related field, with 5+ years of proven experience as a software engineer.
- Hands-on experience in Java/J2EE, XML, web technologies, web services, design patterns, and OOAD.
- Solid foundation in computer science with strong proficiency in data structures, algorithms, and software design.
- Proficient in implementing OOAD, architectural and design patterns, diverse platforms, frameworks, technologies, and software engineering methodologies.
- Experience with Oracle/NoSQL databases, REST, event sourcing, and WebSockets.
- Experience with data solutions like Hadoop, MapReduce, Hive, Pig, Kafka, Storm, Flink, etc. is a plus.
- Experience with JavaScript, AngularJS, ReactJS, HTML5, and CSS3 is nice to have.
- Proficient in agile development methodologies.
- Demonstrated ability to understand the business and contribute to a technology direction that delivers measurable business improvements.
- Ability to think out of the box in solving real-world problems.
- Ability to adapt to changing business priorities and to thrive under pressure.
- Excellent decision-making, communication, and collaboration skills.
- Risk domain expertise is a major plus.

Please see the Talent Privacy Notice for information regarding how eBay handles your personal data collected when you use the eBay Careers website or apply for a job with eBay. eBay is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, sex, sexual orientation, gender identity, veteran status, and disability, or other legally protected status. If you have a need that requires accommodation, please contact us at talent@ebay.com. We will make every effort to respond to your request for accommodation as soon as possible.
View our accessibility statement to learn more about eBay's commitment to ensuring digital accessibility for people with disabilities.

Posted 3 days ago

Apply

0.0 - 3.0 years

25 - 35 Lacs

Madurai, Tamil Nadu

On-site

Dear Candidate, greetings of the day!

I am Kantha, and I'm reaching out to you regarding an exciting opportunity with TechMango. You can connect with me on LinkedIn: https://www.linkedin.com/in/kantha-m-ashwin-186ba3244/ or email: kanthasanmugam.m@techmango.net

Techmango Technology Services is a full-scale software development services company founded in 2014 with a strong focus on emerging technologies. Its primary objective is to deliver strategic technology solutions aligned with the goals of its business partners. We are a leading full-scale software and mobile app development company, driven by the mantra "Clients' Vision is our Mission", and we hold ourselves to it. Our aim is to be the technologically advanced and most loved organization, providing prime quality and cost-efficient services with a long-term client relationship strategy. We are operational in the USA (Chicago, Atlanta), Dubai (UAE), and India (Bangalore, Chennai, Madurai, Trichy). Techmango: https://www.techmango.net/

Job Title: GCP Data Architect
Location: Madurai
Experience: 12+ Years
Notice Period: Immediate

About TechMango
TechMango is a rapidly growing IT Services and SaaS Product company that helps global businesses with digital transformation, modern data platforms, product engineering, and cloud-first initiatives. We are seeking a GCP Data Architect to lead data modernization efforts for our prestigious client, Livingston, in a highly strategic project.

Role Summary
As a GCP Data Architect, you will be responsible for designing and implementing scalable, high-performance data solutions on Google Cloud Platform. You will work closely with stakeholders to define data architecture, implement data pipelines, modernize legacy data systems, and guide a data strategy aligned with enterprise goals.

Key Responsibilities:
- Lead end-to-end design and implementation of scalable data architecture on Google Cloud Platform (GCP)
- Define data strategy, standards, and best practices for cloud data engineering and analytics
- Develop data ingestion pipelines using Dataflow, Pub/Sub, Apache Beam, Cloud Composer (Airflow), and BigQuery (see the sketch at the end of this listing)
- Migrate on-prem or legacy systems to GCP (e.g., from Hadoop, Teradata, or Oracle to BigQuery)
- Architect data lakes, warehouses, and real-time data platforms
- Ensure data governance, security, lineage, and compliance (using tools like Data Catalog, IAM, DLP)
- Guide a team of data engineers and collaborate with business stakeholders, data scientists, and product managers
- Create documentation, high-level design (HLD) and low-level design (LLD), and oversee development standards
- Provide technical leadership in architectural decisions and future-proofing the data ecosystem

Required Skills & Qualifications:
- 10+ years of experience in data architecture, data engineering, or enterprise data platforms
- Minimum 3-5 years of hands-on experience with GCP data services
- Proficient in: BigQuery, Cloud Storage, Dataflow, Pub/Sub, Composer, Cloud SQL/Spanner
- Python / Java / SQL
- Data modeling (OLTP, OLAP, Star/Snowflake schema)
- Experience with real-time data processing, streaming architectures, and batch ETL pipelines
- Good understanding of IAM, networking, security models, and cost optimization on GCP
- Prior experience leading cloud data transformation projects
- Excellent communication and stakeholder management skills

Preferred Qualifications:
- GCP Professional Data Engineer / Architect Certification
- Experience with Terraform, CI/CD, GitOps, and Looker / Data Studio / Tableau for analytics
- Exposure to AI/ML use cases and MLOps on GCP
- Experience working in agile environments and client-facing roles

What We Offer:
- Opportunity to work on large-scale data modernization projects with global clients
- A fast-growing company with a strong tech and people culture
- Competitive salary, benefits, and flexibility
- Collaborative environment that values innovation and leadership

Job Type: Full-time
Pay: ₹2,500,000.00 - ₹3,500,000.00 per year

Application Question(s):
- Current CTC?
- Expected CTC?
- Notice Period? (If you are serving your notice period, please mention the last working day)

Experience:
- GCP Data Architecture: 3 years (Required)
- BigQuery: 3 years (Required)
- Cloud Composer (Airflow): 3 years (Required)

Location: Madurai, Tamil Nadu (Required)
Work Location: In person
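For illustration, here is a minimal, hypothetical sketch of the kind of ingestion pipeline the responsibilities above describe, written with the Apache Beam Python SDK (runnable on Dataflow): it reads messages from Pub/Sub and streams them into BigQuery. The project, topic, table, and schema names are placeholder assumptions, not details from the listing.

```python
# Minimal sketch, assuming a hypothetical "orders" topic and BigQuery table.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions, StandardOptions

def run():
    options = PipelineOptions(project="example-project", region="us-central1")
    options.view_as(StandardOptions).streaming = True  # Pub/Sub implies streaming

    with beam.Pipeline(options=options) as p:
        (
            p
            | "ReadFromPubSub" >> beam.io.ReadFromPubSub(
                topic="projects/example-project/topics/orders")
            | "ParseJson" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
            | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                table="example-project:analytics.orders",
                schema="order_id:STRING,amount:FLOAT,ts:TIMESTAMP",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            )
        )

if __name__ == "__main__":
    run()
```

In practice a pipeline like this would be scheduled or monitored from Cloud Composer (Airflow) and parameterized per environment; the sketch only shows the core read-parse-write shape.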

Posted 3 days ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

At PwC, our people in software and product innovation focus on developing cutting-edge software solutions and driving product innovation to meet the evolving needs of clients. These individuals combine technical experience with creative thinking to deliver innovative software products and solutions. In testing and quality assurance at PwC, you will focus on the process of evaluating a system or software application to identify any defects, errors, or gaps in its functionality. Working in this area, you will execute various test cases and scenarios to validate that the system meets the specified requirements and performs as expected.

You are a reliable, contributing member of a team. In our fast-paced environment, you are expected to adapt, take ownership, and consistently deliver quality work that drives value for our clients and success as a team.

Skills
Examples of the skills, knowledge, and experiences you need to lead and deliver value at this level include but are not limited to:
- Apply a learning mindset and take ownership for your own development.
- Appreciate diverse perspectives, needs, and feelings of others.
- Adopt habits to sustain high performance and develop your potential.
- Actively listen, ask questions to check understanding, and clearly express ideas.
- Seek, reflect, act on, and give feedback.
- Gather information from a range of sources to analyse facts and discern patterns.
- Commit to understanding how the business works and building commercial awareness.
- Learn and apply professional and technical standards (e.g. refer to specific PwC tax and audit guidance), uphold the Firm's code of conduct and independence requirements.

ETL Tester Associate - Operate

Job Summary
A career in our Managed Services team will provide you with an opportunity to collaborate with a wide array of teams to help our clients implement and operate new capabilities, achieve operational efficiencies, and harness the power of technology. Our Data, Testing & Analytics as a Service team brings a unique combination of industry expertise, technology, data management, and managed services experience to create sustained outcomes for our clients and improve business performance. We empower companies to transform their approach to analytics and insights while building your skills in exciting new directions. Have a voice at our table to help design, build, and operate the next generation of software and services that manage interactions across all aspects of the value chain.

Minimum Degree Required: Bachelor's degree
Preferred Field(s) of Study: Computer and Information Science, Management Information Systems
Minimum Years of Experience: 2 years

Required Knowledge/Skills
As an ETL Tester, you will be responsible for designing, developing, and executing SQL scripts to ensure the quality and functionality of our ETL processes. You will work closely with our development and data engineering teams to identify test requirements and drive the implementation of automated testing solutions.

Key Responsibilities
- Collaborate with data engineers to understand ETL workflows and requirements.
- Perform data validation and testing to ensure data accuracy and integrity.
- Create and maintain test plans, test cases, and test data.
- Identify, document, and track defects, and work with development teams to resolve issues.
- Participate in design and code reviews to provide feedback on testability and quality.
- Develop and maintain automated test scripts using Python for ETL processes (see the sketch at the end of this listing).
- Ensure compliance with industry standards and best practices in data testing.

Qualifications
- Solid understanding of SQL and database concepts.
- Proven experience in ETL testing and automation.
- Strong proficiency in Python programming.
- Familiarity with ETL tools such as Apache NiFi, Talend, Informatica, or similar.
- Knowledge of data warehousing and data modeling concepts.
- Strong analytical and problem-solving skills.
- Excellent communication and collaboration abilities.
- Experience with version control systems like Git.

Preferred Qualifications
- Experience with cloud platforms such as AWS, Azure, or Google Cloud.
- Familiarity with CI/CD pipelines and tools like Jenkins or GitLab.
- Knowledge of big data technologies such as Hadoop, Spark, or Kafka.
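As a rough illustration of the SQL-plus-Python test automation described above, here is a minimal pytest-style sketch that validates an ETL load by reconciling row counts and checking a business key. The connection strings and table names are hypothetical placeholders.

```python
# Minimal sketch of automated ETL validation; engines and tables are assumed.
import pandas as pd
from sqlalchemy import create_engine

SOURCE = create_engine("postgresql://user:pass@source-db/sales")      # assumed source system
TARGET = create_engine("postgresql://user:pass@warehouse/analytics")  # assumed target warehouse

def test_row_counts_match():
    # The loaded fact table should contain exactly the rows extracted.
    src = pd.read_sql("SELECT COUNT(*) AS n FROM orders", SOURCE)["n"].iloc[0]
    tgt = pd.read_sql("SELECT COUNT(*) AS n FROM fact_orders", TARGET)["n"].iloc[0]
    assert src == tgt, f"Row count mismatch: source={src}, target={tgt}"

def test_no_null_business_keys():
    # A NULL order_id after the load points to a broken transformation.
    nulls = pd.read_sql(
        "SELECT COUNT(*) AS n FROM fact_orders WHERE order_id IS NULL", TARGET
    )["n"].iloc[0]
    assert nulls == 0, f"{nulls} rows loaded with NULL order_id"
```

Checks like these are typically run with pytest inside a CI pipeline so that every ETL change is validated automatically.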

Posted 3 days ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

At eBay, we're more than a global ecommerce leader — we’re changing the way the world shops and sells. Our platform empowers millions of buyers and sellers in more than 190 markets around the world. We’re committed to pushing boundaries and leaving our mark as we reinvent the future of ecommerce for enthusiasts. Our customers are our compass, authenticity thrives, bold ideas are welcome, and everyone can bring their unique selves to work — every day. We're in this together, sustaining the future of our customers, our company, and our planet. Join a team of passionate thinkers, innovators, and dreamers — and help us connect people and build communities to create economic opportunity for all.

Do you want to make an impact on the world's largest e-commerce website? Are you interested in building performance-efficient, high-volume, and highly scalable distributed systems? We have a place for you!

Who Are We?
We are seeking a hard-working Software Engineer to join our Compliance Engineering Development team. In this key role, you will help ensure that the eBay marketplace operates in full alignment with all relevant regulatory requirements. You'll collaborate with dedicated peers in a dynamic and enjoyable environment, building exceptional compliance products, and thrive in an agile setting that values problem-solving, innovation, and engineering excellence.

What Will You Do
We are looking for exceptional engineers who take pride in creating simple solutions to apparently complex problems. Our engineering tasks typically involve at least one of the following:
- Crafting sound API design and driving integration between our data layers and customer-facing applications and components
- Designing and running A/B tests in production experiences in order to vet and measure the impact of any new or improved functionality (see the sketch at the end of this listing)
- Actively contributing to the development of complex, multi-tier distributed software applications
- Designing layered applications, including user interface, business functionality, and database access
- Working with other developers and quality engineers to develop innovative solutions that meet market needs
- Estimating engineering efforts, planning implementations, and rolling out system changes
- Participating in continuous improvement of the Payment product to achieve better quality
- Participating in requirement/design meetings with other PD/QE

What You Bring
- Excellent decision-making skills; you thrive on dealing with ambiguity and change.
- Strong sense of ownership and communication skills; you embrace diverse ideas across organizations and align on a mutually agreed direction to get things done and move forward.
- Deep care about growing others; great at mentoring and coaching, creating a large positive impact on organizational culture.
- Strong learning ability and self-drive: attending knowledge-sharing sessions both within the company and externally, learning transferable skills, and maintaining a growth mindset that constantly looks for opportunities to learn.
- Learns adjacent areas (project management, people management, product management) in addition to core technical skills to better support the organization.

Qualification And Skill Requirements
- Bachelor's degree in EE, CS, or another related field.
- 4+ years of experience in building large-scale, distributed web platforms/APIs, with lead responsibility for the end-to-end product scope across multiple domains.
- Experience in server-side development with Java
- Proficiency with the Spring framework
- Object-oriented design
- Design patterns
- RESTful services
- Agile development methodologies
- Multi-threading development
- Databases - SQL/NoSQL
- Hadoop, Hive, and HDFS
- Demonstrated ability to understand the business and contribute to a technology direction that delivers measurable business improvements.
- Ability to adapt to changing business priorities and to thrive under pressure.
- Excellent decision-making, communication, and collaboration skills.

Benefits are an essential part of your total compensation for the work you do every day. Whether you're single, in a growing family, or nearing retirement, eBay offers a variety of comprehensive and competitive benefit programs to meet your needs, including maternal & paternal leave, paid sabbatical, and plans to help ensure your financial security today and in the years ahead, because we know feeling financially secure during your working years and through retirement is important.

Here at eBay, we love creating opportunities for others by connecting people from a widely diverse set of backgrounds, perspectives, and geographies. So being diverse and inclusive isn't just something we strive for; it is who we are, and part of what we do each and every single day. We want to ensure that as an employee, you feel eBay is a place where, no matter who you are, you feel safe, included, and that you have the opportunity to bring your unique self to work. To learn about eBay's Diversity & Inclusion click here: https://www.ebayinc.com/company/diversity-inclusion/

Please see the Talent Privacy Notice for information regarding how eBay handles your personal data collected when you use the eBay Careers website or apply for a job with eBay. eBay is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, sex, sexual orientation, gender identity, veteran status, and disability, or other legally protected status. If you have a need that requires accommodation, please contact us at talent@ebay.com. We will make every effort to respond to your request for accommodation as soon as possible. View our accessibility statement to learn more about eBay's commitment to ensuring digital accessibility for people with disabilities.
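To make the A/B testing task above concrete, here is a minimal, hypothetical sketch of how an experiment's impact on a conversion metric might be judged with a two-proportion z-test. The counts are invented for illustration and do not come from the listing.

```python
# Minimal sketch, assuming hypothetical conversion counts for control (A)
# and treatment (B) arms of an experiment.
from statsmodels.stats.proportion import proportions_ztest

conversions = [1180, 1320]   # converted users in A and B
samples = [24000, 24100]     # total users exposed to A and B

z_stat, p_value = proportions_ztest(conversions, samples)
lift = conversions[1] / samples[1] - conversions[0] / samples[0]
print(f"absolute lift={lift:.4f}, z={z_stat:.2f}, p={p_value:.4f}")
# A p-value below the pre-registered threshold (commonly 0.05) would support
# shipping the treatment; otherwise, keep iterating.
```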

Posted 3 days ago

Apply

30.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About Us
Thoucentric is the consulting arm of Xoriant, a prominent digital engineering services company with 5000 employees. We are headquartered in Bangalore with a presence across multiple locations in India, the US, the UK, Singapore, and Australia. As the consulting business of Xoriant, we help clients with Business Consulting, Program & Project Management, Digital Transformation, Product Management, Process & Technology Solutioning, and Execution, including Analytics & Emerging Tech areas, cutting across functional areas such as Supply Chain, Finance & HR, and Sales & Distribution across the US, UK, Singapore, and Australia. Our unique consulting framework allows us to focus on execution rather than pure advisory. We are working closely with marquee names in the global consumer & packaged goods (CPG) industry, new-age tech, and the start-up ecosystem.

Xoriant (parent entity) started in 1990 and is a Sunnyvale, CA headquartered digital engineering firm with offices in the USA, Europe, and Asia. Xoriant is backed by ChrysCapital, a leading private equity firm. Our strengths are now combined with Xoriant's capabilities in AI & Data, cloud, security, and operations services, proven over 30 years. We have been certified as a "Great Place to Work" by AIM and have been ranked among the "50 Best Firms for Data Scientists to Work For." We have an experienced consulting team of over 450 world-class business and technology consultants based across six global locations, supporting clients through their expert insights, entrepreneurial approach, and focus on delivery excellence. We have also built point solutions and products through Thoucentric Labs using AI/ML in the supply chain space.

Job Title: Data Scientist (3-5 Years Experience)
Location: Bangalore

About Us: Thoucentric is a forward-thinking organization at the forefront of leveraging data-driven insights to solve complex business challenges. We are seeking a passionate and skilled Data Scientist to join our dynamic team and help us drive innovation through advanced analytics and machine learning.

Key Responsibilities:
- Develop and implement machine learning and deep learning models for various business problems, with a strong focus on time series forecasting (see the sketch at the end of this listing).
- Analyze large, complex datasets to extract actionable insights and identify trends, patterns, and opportunities for improvement.
- Design, build, and validate predictive models using state-of-the-art techniques, ensuring scalability and robustness.
- Collaborate with cross-functional teams (Product, Engineering, Business) to translate business requirements into data science solutions.
- Communicate findings and recommendations clearly to both technical and non-technical stakeholders.
- Stay updated with the latest research and advancements in machine learning, deep learning, and time series analysis, and proactively apply new techniques as appropriate.
- Mentor junior team members and contribute to a culture of continuous learning and innovation.

Requirements
Required Skills & Qualifications:
- 3-5 years of hands-on experience in data science, machine learning, and statistical modeling.
- Strong expertise in time series forecasting (ARIMA, XGBoost, RandomForest, TFT, NHITS, etc.) and familiarity with deep learning frameworks (TensorFlow, PyTorch).
- Excellent programming skills in Python (preferred), with proficiency in libraries such as NumPy, Pandas, scikit-learn, and visualization tools (Matplotlib, Seaborn, Plotly).
- Solid conceptual understanding of machine learning algorithms, deep learning architectures, and statistical methods.
- Experience with data preprocessing, feature engineering, and model evaluation.
- Ability to learn quickly and adapt to new technologies, tools, and methodologies.
- Strong problem-solving skills and keen attention to detail.
- Excellent communication and presentation skills.

Preferred Qualifications:
- Experience with cloud platforms and MLOps tools.
- Exposure to big data technologies (Spark, Hadoop) is a plus.
- Master's degree in Computer Science, Statistics, Mathematics, or a related field.

Benefits
What will a consulting role at Thoucentric offer you?
- The opportunity to define your own career path, not one enforced by a manager.
- A great consulting environment with a chance to work with Fortune 500 companies and startups alike.
- A dynamic but relaxed and supportive working environment that encourages personal development.
- Being part of one extended family - we bond beyond work through sports, get-togethers, common interests, etc.
- A very enriching environment with an open culture, flat organization, and excellent peer group.
- Being part of the exciting growth story of Thoucentric!

Locations: Bangalore North, India | Posted on: 05/02/2025
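As a concrete illustration of the time series focus above, here is a minimal sketch that fits a seasonal ARIMA baseline with statsmodels, one of the forecasting techniques the role lists. The monthly demand series is synthetic and purely illustrative.

```python
# Minimal sketch, assuming a hypothetical monthly demand series with
# trend and yearly seasonality.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
idx = pd.date_range("2020-01-01", periods=48, freq="MS")
demand = pd.Series(
    100 + 0.5 * np.arange(48)                                 # trend
    + 10 * np.sin(np.arange(48) * 2 * np.pi / 12)             # yearly seasonality
    + rng.normal(0, 2, 48),                                   # noise
    index=idx,
)

model = SARIMAX(demand, order=(1, 1, 1), seasonal_order=(1, 0, 1, 12))
fit = model.fit(disp=False)
forecast = fit.forecast(steps=6)  # six months ahead
print(forecast.round(1))
```

In practice such a baseline would be benchmarked against the gradient-boosted and deep models (XGBoost, TFT, NHITS) the posting mentions, using a held-out backtest window.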

Posted 3 days ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Company Description
Blend is a premier AI services provider, committed to co-creating meaningful impact for its clients through the power of data science, AI, technology, and people. With a mission to fuel bold visions, Blend tackles significant challenges by seamlessly aligning human expertise with artificial intelligence. The company is dedicated to unlocking value and fostering innovation for its clients by harnessing world-class people and data-driven strategy. We believe that the power of people and AI can have a meaningful impact on your world, creating more fulfilling work and projects for our people and clients. For more information, visit www.blend360.com

Job Description
You will be a key member of our Data Engineering team, focused on designing, developing, and maintaining robust data solutions in on-premise environments. You will work closely with internal teams and client stakeholders to build and optimize data pipelines and analytical tools using Python, PySpark, SQL, and Hadoop ecosystem technologies. This role requires deep hands-on experience with big data technologies in traditional data center environments (non-cloud).

What you'll be doing:
- Design, build, and maintain on-premise data pipelines to ingest, process, and transform large volumes of data from multiple sources into data warehouses and data lakes
- Develop and optimize PySpark and SQL jobs for high-performance batch and real-time data processing (see the sketch at the end of this listing)
- Ensure the scalability, reliability, and performance of data infrastructure in an on-premise setup
- Collaborate with data scientists, analysts, and business teams to translate their data requirements into technical solutions
- Troubleshoot and resolve issues in data pipelines and data processing workflows
- Monitor, tune, and improve Hadoop clusters and data jobs for cost and resource efficiency
- Stay current with on-premise big data technology trends and suggest enhancements to improve data engineering capabilities

Qualifications
- Bachelor's degree in Computer Science, Software Engineering, or a related field
- 6+ years of experience in data engineering or a related domain
- Strong programming skills in Python (with experience in PySpark)
- Expertise in SQL with a solid understanding of data warehousing concepts
- Hands-on experience with Hadoop ecosystem components (e.g., HDFS, Hive, Oozie, Sqoop)
- Proven ability to design and manage data solutions in on-premise environments (no cloud dependency)
- Strong problem-solving skills with an ability to work independently and collaboratively
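For illustration, here is a minimal sketch of the on-premise batch pattern this role centres on: a PySpark job that reads raw files from HDFS, aggregates them, and writes a partitioned Hive table. The paths, table, and column names are hypothetical.

```python
# Minimal sketch, assuming hypothetical HDFS paths and a Hive metastore.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("daily_orders_batch")
    .enableHiveSupport()          # write managed tables via the Hive metastore
    .getOrCreate()
)

raw = spark.read.json("hdfs:///data/raw/orders/2024-06-01/")

daily = (
    raw.filter(F.col("status") == "COMPLETED")
       .withColumn("order_date", F.to_date("created_at"))
       .groupBy("order_date", "region")
       .agg(F.sum("amount").alias("revenue"), F.count("*").alias("orders"))
)

(daily.write
      .mode("overwrite")
      .partitionBy("order_date")
      .saveAsTable("analytics.daily_orders"))
```

A job like this would typically be scheduled through Oozie or a similar on-prem workflow engine, with partition pruning on order_date keeping downstream queries cheap.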

Posted 4 days ago

Apply

4.0 - 7.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About The Opportunity
Our client is expanding the software product engineering team for its partner, a US-based SaaS platform company specializing in autonomous security solutions. The partner's platform leverages advanced AI to detect, mitigate, and respond to cyber threats across enterprise infrastructures. By offering comprehensive visibility, deep cognition, effective detection, thorough root-cause analysis, and high-precision control, they aim to transform traditional governance, risk, and compliance (GRC) workflows into fast and scalable AI-native processes.

Responsibilities
- Model Development and Integration: Design, implement, test, integrate, and deploy scalable machine learning models, integrating them into production systems and APIs to support existing and new customers (see the sketch at the end of this listing).
- Experimentation and Optimization: Lead the design of experiments and hypothesis testing for product feature development; monitor and analyze model performance and data accuracy, making improvements as needed.
- Cross-Functional Collaboration: Work closely with cross-functional teams across India and the US to identify opportunities, deploy impactful solutions, and effectively communicate findings to both technical and non-technical stakeholders.
- Mentorship and Continuous Learning: Mentor junior team members, contribute to knowledge sharing, and stay current with best practices in data science, machine learning, and AI.

Requirements & Qualifications
- Bachelor's or Master's in Statistics, Mathematics, Computer Science, Engineering, or a related quantitative field.
- 4-7 years of experience building and deploying machine learning models.
- Strong problem-solving skills with an emphasis on product development.
- Experience operating and troubleshooting scalable machine learning systems in the cloud.

Technical Skills
- Programming and Frameworks: Proficient in Python with experience in TensorFlow, PyTorch, scikit-learn, and Pandas; familiarity with Golang is a plus; proficient with Git and collaborative workflows.
- Software Engineering: Strong understanding of data structures, algorithms, and system design principles; experience in designing scalable, reliable, and maintainable systems.
- Machine Learning Expertise: Extensive experience in AI and machine learning model development, including large language models, transformers, sequence models, causal inference, unsupervised clustering, and reinforcement learning. Knowledge of prompting techniques, embedding models, and RAG.
- Innovation in Machine Learning: Ability to design and conceive novel ways of problem solving using new machine learning models.
- Integration, Deployment, and Cloud Services: Experience integrating machine learning models into backend systems and APIs; familiarity with Docker, Kubernetes, CI/CD tools, and cloud services like AWS/Azure/GCP for efficient deployment.
- Data Management and Security: Proficient with SQL and experience with PostgreSQL; knowledge of NoSQL databases; understanding of application security and data protection principles.

Methodologies And Tools
- Agile/Scrum Practices: Experience with Agile/Scrum methodologies.
- Project Management Tools: Proficiency with Jira, Notion, or similar tools.

Soft Skills
- Excellent communication and problem-solving abilities.
- Ability to work independently and collaboratively.
- Strong organizational and time management skills.
- High degree of accountability and ownership.

Nice-to-Haves
- Experience with big data tools like Hadoop or Spark.
- Familiarity with infrastructure management and operations lifecycle concepts.
- Experience working in a startup environment.
- Contributions to open-source projects or a strong GitHub portfolio.

Benefits
- Comprehensive insurance (life, health, accident).
- Flexible work model.
- Accelerated learning and non-linear growth.
- Flat organization structure driven by ownership and accountability.
- Opportunity to own and be a part of some of the most innovative and promising AI/SaaS product companies in North America and around the world.
- Accomplished global peers - work with some of the best engineers and professionals globally, from the likes of Apple, Amazon, IBM Research, Adobe, and other innovative product companies.
- Ability to make a global impact with your work, leading innovations in Conversational AI, Energy/Utilities, ESG, HealthTech, IoT, Risk/Compliance, CyberSecurity, PLM, and more.

Skills: api, azure, jira, machine learning models, aws, golang, docker, product development, machine learning, git, postgresql, python, github, scikit-learn, tensorflow, nosql, ci/cd, kubernetes, gcp, pytorch, pandas, spark, sql, hadoop
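To illustrate the model-integration responsibility above, here is a minimal, hypothetical sketch of serving a trained scikit-learn model behind a small FastAPI endpoint. The model artifact, feature names, and route are invented for illustration.

```python
# Minimal sketch, assuming a hypothetical pre-trained classifier saved
# as "risk_classifier.joblib" with three numeric input features.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("risk_classifier.joblib")  # assumed pre-trained artifact

class Features(BaseModel):
    login_count: float
    failed_attempts: float
    geo_velocity: float

@app.post("/score")
def score(f: Features):
    # Probability of the positive (risky) class for one observation.
    proba = model.predict_proba(
        [[f.login_count, f.failed_attempts, f.geo_velocity]]
    )[0, 1]
    return {"risk_score": float(proba)}
```

Served with, e.g., `uvicorn app:app`, a service like this would usually be containerized with Docker and deployed behind the Kubernetes/CI-CD stack the posting lists.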

Posted 4 days ago

Apply

6.0 - 12.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Greetings from TCS!

Job Title: Azure Data Engineer with hands-on experience in .NET & C#

Required Skillset:
- Proficiency in Azure Data Factory, Azure Databricks (including Spark and Delta Lake), and other Azure data services.
- Strong programming skills in Python, with experience in data processing libraries such as Pandas, PySpark, and NumPy.
- Mandatory experience with .NET and C#.
- Familiarity with data warehousing concepts and tools (e.g., Azure Synapse Analytics).

Location: Pune (Kharadi)
Experience Range: 6-12 years

Job Description:
Must-Have
- Proven experience as an Azure Data Engineer with .NET and C#.
- Strong proficiency in Azure Databricks, including Spark and Delta Lake.
- Experience with Azure Data Factory, Azure Data Lake Storage, and Azure SQL Database.
- Proficiency in data warehousing concepts and Python.

Good To Have:
- Experience with distributed data/computing tools: Map/Reduce, Hadoop, Hive, Spark
- Experience creating SQL queries for ETL and reporting
- Experience with DevOps practices and tools, including CI/CD pipelines for data engineering workflows
- Knowledge of big data technologies and data processing workflows

Responsibility of / Expectations from the Role:
- Design, develop, and maintain scalable data pipelines and ETL processes using Azure Databricks, Azure Data Factory, and other Azure services (see the sketch at the end of this listing).
- Leverage existing C# code and libraries within Azure Databricks, connect to Databricks from .NET applications via ODBC or JDBC, and even develop Spark applications in C#.
- Create and maintain an optimal data pipeline architecture to support data-driven decision-making.
- Assemble large, complex datasets that meet functional and non-functional business requirements.
- Implement data flows to connect operational systems, data for analytics, and BI systems.
- Develop and optimize data models on Azure data platforms.
- Stay updated with the latest industry trends and best practices in data engineering and big data technologies.
- Participate in code reviews and ensure adherence to coding standards and best practices.
- Collaborate with cross-functional teams to define and implement data solutions that align with business objectives.
- Document data processes, architectures, and workflows for future reference and knowledge sharing.

Kindly share your updated resumes.

Thanks & Regards,
Shilpa Silonee
BFSI A&I TAG
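For illustration, here is a minimal sketch of a core Databricks pipeline step the role describes: a PySpark job that upserts a batch of landed records into a Delta Lake table with MERGE. The storage paths, table, and key column are hypothetical placeholders.

```python
# Minimal sketch, assuming a hypothetical ADLS landing path and an existing
# Delta table "curated.customers" keyed by customer_id.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # provided by the Databricks runtime

updates = spark.read.parquet(
    "abfss://landing@examplelake.dfs.core.windows.net/customers/2024-06-01/"
)

target = DeltaTable.forName(spark, "curated.customers")

(target.alias("t")
 .merge(updates.alias("u"), "t.customer_id = u.customer_id")
 .whenMatchedUpdateAll()      # refresh changed customer records
 .whenNotMatchedInsertAll()   # insert brand-new customers
 .execute())
```

In the architecture the posting outlines, Azure Data Factory would typically orchestrate a notebook or job running this step, and a .NET application could query the resulting table over ODBC/JDBC.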

Posted 4 days ago

Apply

5.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Wissen Technology is hiring for Python + Data Engineer

About Wissen Technology: Wissen Technology is a globally recognized organization known for building solid technology teams, working with major financial institutions, and delivering high-quality solutions in IT services. With a strong presence in the financial industry, we provide cutting-edge solutions to address complex business challenges.

Role Overview: We are seeking a skilled and innovative Python Data Engineer with expertise in designing and implementing data solutions using the AWS cloud platform. The ideal candidate will be responsible for building and maintaining scalable, efficient, and secure data pipelines while leveraging Python and AWS services to enable robust data analytics and decision-making processes.

Experience: 5-9 Years
Location: Mumbai

Key Responsibilities
- Design, develop, and optimize data pipelines using Python and AWS services such as Glue, Lambda, S3, EMR, Redshift, Athena, and Kinesis (see the sketch at the end of this listing).
- Implement ETL/ELT processes to extract, transform, and load data from various sources into centralized repositories (e.g., data lakes or data warehouses).
- Collaborate with cross-functional teams to understand business requirements and translate them into scalable data solutions.
- Monitor, troubleshoot, and enhance data workflows for performance and cost optimization.
- Ensure data quality and consistency by implementing validation and governance practices.
- Work on data security best practices in compliance with organizational policies and regulations.
- Automate repetitive data engineering tasks using Python scripts and frameworks.
- Leverage CI/CD pipelines for deployment of data workflows on AWS.

Required Skills:
- Professional Experience: 5+ years of experience in data engineering or a related field.
- Programming: Strong proficiency in Python, with experience in libraries like pandas, PySpark, or boto3.
- AWS Expertise: Hands-on experience with core AWS services for data engineering, such as AWS Glue for ETL/ELT; S3 for storage; Redshift or Athena for data warehousing and querying; Lambda for serverless compute; Kinesis or SNS/SQS for data streaming; IAM roles for security.
- Databases: Proficiency in SQL and experience with relational (e.g., PostgreSQL, MySQL) and NoSQL (e.g., DynamoDB) databases.
- Data Processing: Knowledge of big data frameworks (e.g., Hadoop, Spark) is a plus.
- DevOps: Familiarity with CI/CD pipelines and tools like Jenkins, Git, and CodePipeline.
- Version Control: Proficient with Git-based workflows.
- Problem Solving: Excellent analytical and debugging skills.

The Wissen Group was founded in the year 2000. Wissen Technology, a part of Wissen Group, was established in the year 2015. Wissen Technology is a specialized technology company that delivers high-end consulting for organizations in the Banking & Finance, Telecom, and Healthcare domains. We help clients build world-class products. We offer an array of services including Core Business Application Development, Artificial Intelligence & Machine Learning, Big Data & Analytics, Visualization & Business Intelligence, Robotic Process Automation, Cloud Adoption, Mobility, Digital Adoption, Agile & DevOps, and Quality Assurance & Test Automation. Over the years, Wissen Group has successfully delivered $1 billion worth of projects for more than 20 of the Fortune 500 companies.
Wissen Technology provides exceptional value in mission-critical projects for its clients through thought leadership, ownership, and assured on-time deliveries that are always 'first time right'. The technology and thought leadership that the company commands in the industry is the direct result of the kind of people Wissen has been able to attract. Wissen is committed to providing them with the best possible opportunities and careers, which extends to providing the best possible experience and value to our clients. We have been certified as a Great Place to Work® company for two consecutive years (2020-2022) and voted a Top 20 AI/ML vendor by CIO Insider. Great Place to Work® Certification is recognized the world over by employees and employers alike and is considered the 'Gold Standard'. Wissen Technology has created a Great Place to Work by excelling in all dimensions - High-Trust, High-Performance Culture, Credibility, Respect, Fairness, Pride and Camaraderie.

Website: www.wissen.com
LinkedIn: https://www.linkedin.com/company/wissen-technology
Wissen Leadership: https://www.wissen.com/company/leadership-team/
Wissen Live: https://www.linkedin.com/company/wissen-technology/posts/feedView=All
Wissen Thought Leadership: https://www.wissen.com/articles/
Employee Speak: https://www.ambitionbox.com/overview/wissen-technology-overview and https://www.glassdoor.com/Reviews/Wissen-Infotech-Reviews-E287365.htm
Great Place to Work: https://www.wissen.com/blog/wissen-is-a-great-place-to-work-says-the-great-place-to-work-institute-india/ and https://www.linkedin.com/posts/wissen-infotech_wissen-leadership-wissenites-activity-6935459546131763200-xF2k
About the Wissen interview process: https://www.wissen.com/blog/we-work-on-highly-complex-technology-projects-here-is-how-it-changes-whom-we-hire/
Latest on Wissen in CIO Insider: https://www.cioinsiderindia.com/vendor/wissen-technology-setting-new-benchmarks-in-technology-consulting-cid-1064.html
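For illustration, here is a minimal, hypothetical sketch of the Glue/S3 pipeline pattern described in this listing: a Glue job script (PySpark flavour) that reads raw CSVs from S3, applies a light transform, and writes partitioned Parquet back to S3. The bucket names and columns are placeholder assumptions.

```python
# Minimal sketch of an AWS Glue job script, assuming hypothetical
# "example-raw" and "example-curated" buckets and an order_date column.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue = GlueContext(SparkContext.getOrCreate())
job = Job(glue)
job.init(args["JOB_NAME"], args)

df = glue.spark_session.read.csv("s3://example-raw/orders/", header=True)

clean = (
    df.withColumn("amount", F.col("amount").cast("double"))  # enforce numeric type
      .dropna(subset=["order_id"])                           # drop rows missing the key
)

(clean.write
      .mode("append")
      .partitionBy("order_date")
      .parquet("s3://example-curated/orders/"))

job.commit()
```

In the stack the posting describes, a job like this would be triggered on a schedule or by an S3 event via Lambda, with the curated Parquet then queryable through Athena or loaded into Redshift.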

Posted 4 days ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies