
4851 Hadoop Jobs - Page 49

JobPe aggregates listings for easy access, but applications are made directly on the original job portal.

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


About McDonald's: One of the world's largest employers with locations in more than 100 countries, McDonald's Corporation has corporate opportunities in Hyderabad. Our global offices serve as dynamic innovation and operations hubs, designed to expand McDonald's global talent base and in-house expertise. Our new office in Hyderabad will bring together knowledge across business, technology, analytics, and AI, accelerating our ability to deliver impactful solutions for the business and our customers across the globe.

Position Summary: We are looking to hire a Data Engineer at the G4 level with a deep understanding of the data product lifecycle, standards, and practices. You will be responsible for building scalable and efficient data solutions to support the Brand Marketing / Menu function, with a specific focus on the Menu data product and initiatives. As a Data Engineer, you will collaborate with data scientists, analysts, and other cross-functional teams to ensure the availability, reliability, and performance of data systems. You will lead initiatives to enable trusted Menu data, support decision-making, and partner with business and technology teams to deliver scalable data solutions that drive insights into menu performance, customer preferences, and marketing effectiveness. Expertise in cloud computing platforms, technologies, and data engineering best practices will play a crucial role within this domain.

Who we're looking for:

Primary Responsibilities:
- Build and maintain relevant, reliable Menu data products that support menu and marketing analytics.
- Develop and implement new technology solutions as needed to ensure ongoing improvement, with data reliability and observability in view.
- Participate in new software development and lead data engineering initiatives supporting Product Mix Analytics, ensuring timely and accurate delivery of marketing and menu-related products.
- Work closely with the Product Owner to help define the business rules that determine the quality of Menu datasets.
- Drive and implement best practices for pipeline development, data governance, data security, and quality across marketing and menu-related datasets.
- Ensure scalability, maintainability, and quality of data systems powering menu item tracking, promotion data, and marketing analytics.
- Stay up to date with emerging data engineering technologies, trends, and best practices, and evaluate their applicability to evolving Product Mix Analytics needs.
- Document data engineering processes, workflows, and solutions for knowledge sharing and future reference.
- Mentor and coach junior data engineers, particularly in areas related to menu item tracking, promotion data, and marketing analytics.
- Coordinate and work flexibly with teams distributed across time zones, as needed.

Skills:
- Leads teams to drive scalable data engineering practices and technical excellence within the Menu data ecosystem.
- Bachelor's or master's degree in computer science or a related engineering field, and deep experience with cloud computing.
- 5+ years of professional experience in data engineering or related fields.
- Proficiency in Python, Java, or Scala for data processing and automation.
- Hands-on experience with data orchestration tools (e.g., Apache Airflow, Luigi) and big data ecosystems (e.g., Hadoop, Spark, NoSQL).
- Expert knowledge of data quality functions such as cleansing, standardization, parsing, de-duplication, mapping, and hierarchy management (illustrated in the sketch below).
- Ability to perform extensive data analysis (comparing multiple datasets) using a variety of tools.
- Proven ability to mentor team members and lead technical initiatives across multiple workstreams.
- Effective communication and stakeholder management skills to drive alignment and adoption of data engineering standards.
- Demonstrated experience in data management and data governance capabilities.
- Familiarity with data warehousing principles and best practices.
- Excellent problem solver, using data and technology to solve problems or answer complex data-related questions.
- Excellent collaboration skills to work effectively in cross-functional teams.

Work location: Hyderabad, India. Work pattern: Full-time role. Work mode: Hybrid.
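The data quality functions this posting calls out (cleansing, standardization, parsing, de-duplication) are commonly expressed in PySpark. Below is a minimal, illustrative sketch of such a step; the S3 paths and menu-item schema are hypothetical, not McDonald's actual pipeline.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("menu-data-quality").getOrCreate()

# Hypothetical raw feed of menu items; path and schema are illustrative.
menu = spark.read.parquet("s3://example-bucket/raw/menu_items/")

latest = Window.partitionBy("item_id").orderBy(F.col("updated_at").desc())

cleaned = (
    menu
    # Standardization: trim whitespace and normalize casing of names.
    .withColumn("item_name", F.initcap(F.trim("item_name")))
    # Parsing: turn price strings like "Rs. 120.00" into numbers.
    .withColumn("price", F.regexp_replace("price_raw", "[^0-9.]", "").cast("double"))
    # De-duplication: keep only the most recent record per business key.
    .withColumn("rn", F.row_number().over(latest))
    .filter(F.col("rn") == 1)
    .drop("rn")
)

cleaned.write.mode("overwrite").parquet("s3://example-bucket/curated/menu_items/")
```

Keeping the newest record per business key via a window function is a standard de-duplication pattern for slowly changing feeds.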

Posted 1 week ago

Apply

0 years

0 Lacs

Mysore, Karnataka, India

On-site


Introduction: In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role and Responsibilities: As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support and guidance to project teams on complex coding, issue resolution, and execution. Your primary responsibilities include:
- Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements.
- Strive for continuous improvement by testing the built solution and working under an agile framework.
- Discover and implement the latest technology trends to maximize value and build creative solutions.

Preferred Education: Master's Degree

Required Technical and Professional Expertise:
- Experience with Apache Spark (PySpark): in-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing.
- Big data technologies: familiarity with Hadoop, HDFS, Kafka, and other big data tools.
- Data engineering skills: strong understanding of ETL pipelines, data modeling, and data warehousing concepts.
- Strong proficiency in Python: expertise in Python programming with a focus on data processing and manipulation.
- Data processing frameworks: knowledge of data processing libraries such as Pandas and NumPy.
- SQL proficiency: experience writing optimized SQL queries for large-scale data analysis and transformation.
- Cloud platforms: experience working with cloud platforms like AWS, Azure, or GCP, including the use of cloud storage systems.

Preferred Technical and Professional Experience:
- Define, drive, and implement an architecture strategy and standards for end-to-end monitoring.
- Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering.
- Good to have: experience with detection and prevention tools for company products, platform, and customer-facing systems.
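For context on the PySpark skills this posting lists, here is a small, self-contained sketch of distributed processing with Spark's DataFrame API; the input layout and column names are invented for illustration.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-daily-revenue").getOrCreate()

# Illustrative input: CSV files with order_id, region, order_ts, amount.
orders = spark.read.option("header", True).csv("hdfs:///data/orders/*.csv")

daily = (
    orders
    .withColumn("amount", F.col("amount").cast("double"))
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("region", "order_date")
    .agg(F.sum("amount").alias("revenue"), F.count("*").alias("order_count"))
)

# Partitioning the output keeps downstream scans cheap on large data.
daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "hdfs:///warehouse/daily_revenue/"
)
```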

Posted 1 week ago

Apply

5.0 years

0 Lacs

Mumbai Metropolitan Region

On-site


P1,C3,STS

Required Skills:
- Hands-on experience with the Linux 8.x operating system for 5 years at an advanced level.
- Experience with service-oriented architecture, distributed systems, scripting such as Python and shell, and relational databases (e.g., Sybase, DB2, SQL Server, Postgres).
- Hands-on experience with web servers (Apache / Nginx) and application servers (Tomcat / JBoss), including application integration, configuration, and troubleshooting.
- Hands-on experience with Docker containers, Kubernetes, and SaaS platform integration.
- Exposure to and experience with messaging technology like Kafka.
- Clear concept of load balancers, web proxies, and storage platforms like NAS / SAN, from an implementation perspective only.
- Familiarity with basic security policies for secure hosting solutions, Kerberos, and standard encryption methodologies, including SSL and TLS.
- Prior experience managing large web-based n-tier applications in secure environments on cloud.
- Strong knowledge of SRE principles, with a grasp of the tools and approaches to apply them.
- Strong infrastructure knowledge in Linux / Unix administration, storage, networking, and web technologies.
- Experience troubleshooting application issues and managing incidents.
- Exposure to tools like OpenTelemetry, Prometheus, Grafana, Splunk, and Ansible.
- Excellent verbal and written communication skills.

Desired / Nice-to-have Skills:
- Exposure to big data platforms like Hadoop / Cloudera and the ELK stack.
- Working knowledge of a workflow orchestration tool like Airflow.
- Familiarity with caching databases like Redis, NoSQL databases, and SPARQL.
- Capacity planning and performance tuning exercises.
- Identity management protocols like OIDC / OAuth, SAML, and LDAP integration.
- Cloud application and respective infrastructure knowledge is a plus.
- Working knowledge of GenAI and LLM models.
- Experience in cloud / distributed computing technology, or certification, is a plus.

Experience: 5 to 8 years in a similar role as a hands-on application / middleware specialist. Prior experience working in a global financial organization is an advantage.

Location: The candidate will be based at Morgan Stanley's office in Mumbai.

NFR Tech is looking to onboard an application support and SRE specialist for its Application and Data Engineering (ADE) group. ADE provides application engineering, tooling, automation, and elevated production support services conforming to company security blueprints and focused on performance, reliability, and scalability. The group works by understanding technical requirements from application owners and the business, participating in technical evaluation of vendors and vendor technologies, conducting proofs of concept, and packaging and deploying middleware products.

Skills: Linux, Python/Shell, Databases (Sybase, DB2), Web Servers
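Roles like this combine scripting with observability tooling. A minimal sketch of the kind of glue an SRE might write is below, exposing health-probe results to Prometheus via the prometheus_client library; the endpoints and port numbers are placeholders, not Morgan Stanley systems.

```python
import time
import requests
from prometheus_client import Gauge, start_http_server

# Hypothetical endpoints; in practice these would come from configuration.
ENDPOINTS = {"web": "http://localhost:8080/health", "api": "http://localhost:9091/health"}

UP = Gauge("app_up", "1 if the endpoint responded with HTTP 200", ["service"])
LATENCY = Gauge("app_probe_seconds", "Probe round-trip time", ["service"])

def probe() -> None:
    for name, url in ENDPOINTS.items():
        start = time.monotonic()
        try:
            ok = requests.get(url, timeout=2).status_code == 200
        except requests.RequestException:
            ok = False
        UP.labels(service=name).set(1 if ok else 0)
        LATENCY.labels(service=name).set(time.monotonic() - start)

if __name__ == "__main__":
    start_http_server(8000)  # Prometheus scrapes metrics from :8000/metrics
    while True:
        probe()
        time.sleep(15)
```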

Posted 1 week ago

Apply

0 years

0 Lacs

Delhi Cantonment, Delhi, India

On-site


Make an impact with NTT DATA. Join a company that is pushing the boundaries of what is possible. We are renowned for our technical excellence and leading innovations, and for making a difference to our clients and society. Our workplace embraces diversity and inclusion – it's a place where you can grow, belong and thrive.

Your day at NTT DATA: We are seeking an experienced Data Architect to join our team in designing and delivering innovative data solutions to clients. The successful candidate will be responsible for architecting, developing, and implementing data management solutions and data architectures for various industries. This role requires strong technical expertise, excellent problem-solving skills, and the ability to work effectively with clients and internal teams to design and deploy scalable, secure, and efficient data solutions.

What You'll Be Doing:

Experience and Leadership:
- Proven experience in data architecture, with a recent role as a Lead Data Solutions Architect or a similar senior position in the field.
- Proven experience in leading architectural design and strategy for complex data solutions and then overseeing their delivery.
- Experience in consulting roles, delivering custom data architecture solutions across various industries.

Architectural Expertise:
- Strong expertise in designing and overseeing delivery of data streaming and event-driven architectures, with a focus on Kafka and Confluent platforms.
- In-depth knowledge in architecting and implementing data lakes and lakehouse platforms, including experience with Databricks and Unity Catalog.
- Proficiency in conceptualising and applying Data Mesh and Data Fabric architectural patterns.
- Experience in developing data product strategies, with a strong inclination towards a product-led approach in data solution architecture.
- Extensive familiarity with cloud data architecture on platforms such as AWS, Azure, GCP, and Snowflake.
- Understanding of cloud platform infrastructure and its impact on data architecture.

Data Technology Skills:
- A solid understanding of big data technologies such as Apache Spark, and knowledge of Hadoop ecosystems.
- Knowledge of programming languages such as Python or R is beneficial.
- Exposure to ETL/ELT processes, SQL, and NoSQL databases is a nice-to-have, providing a well-rounded background.
- Experience with data visualization tools and DevOps principles/tools is advantageous.
- Familiarity with machine learning and AI concepts, particularly in how they integrate into data architectures.

Design and Lifecycle Management:
- Proven background in designing modern, scalable, and robust data architectures.
- Comprehensive grasp of the data architecture lifecycle, from concept to deployment and consumption.

Data Management and Governance:
- Strong knowledge of data management principles and best practices, including data governance frameworks.
- Experience with data security and compliance regulations (GDPR, CCPA, HIPAA, etc.).

Leadership and Communication:
- Exceptional leadership skills to manage and guide a team of architects and technical experts.
- Excellent communication and interpersonal skills, with a proven ability to influence architectural decisions with clients and guide best practices.

Project and Stakeholder Management:
- Experience with agile methodologies (e.g. SAFe, Scrum, Kanban) in the context of architectural projects.
- Ability to manage project budgets, timelines, and resources, maintaining focus on architectural deliverables.

Location: Delhi or Bangalore
Workplace type: Hybrid Working

About NTT DATA: NTT DATA is a $30+ billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. We invest over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure, and connectivity. We are also one of the leading providers of digital and AI infrastructure in the world. NTT DATA is part of NTT Group and headquartered in Tokyo.

Equal Opportunity Employer: NTT DATA is proud to be an Equal Opportunity Employer with a global culture that embraces diversity. We are committed to providing an environment free of unfair discrimination and harassment. We do not discriminate based on age, race, colour, gender, sexual orientation, religion, nationality, disability, pregnancy, marital status, veteran status, or any other protected category. Join our growing global team and accelerate your career with us. Apply today.
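Since the posting emphasizes Kafka/Confluent event-driven architectures, here is a minimal producer sketch using the confluent-kafka Python client; the broker address, topic name, and payload are illustrative assumptions.

```python
import json
from confluent_kafka import Producer

# Broker and topic are placeholders for a real Kafka/Confluent cluster.
producer = Producer({"bootstrap.servers": "localhost:9092"})

def on_delivery(err, msg):
    # Invoked per message once the broker acknowledges (or rejects) it.
    if err is not None:
        print(f"delivery failed: {err}")

event = {"order_id": "o-123", "status": "CREATED"}
producer.produce(
    "orders.events",
    key=event["order_id"],
    value=json.dumps(event),
    callback=on_delivery,
)
producer.flush()  # block until outstanding messages are delivered
```

Keying messages by a business identifier keeps all events for one entity on the same partition, which preserves their ordering for downstream consumers.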

Posted 1 week ago

Apply

0.0 - 1.0 years

0 Lacs

Thiruvananthapuram, Kerala

On-site


Job Requisition Document

Job Title: Software Engineer – Full Stack (Geospatial)
Location: Thiruvananthapuram, Kerala

About Us: Our success is driven by our ability to consistently deliver world-class, high-quality talent, particularly in the areas of precision engineering, assembly line operations, and other skilled manpower across diverse industrial domains. Among our esteemed clients is a listed Japanese company that is set to begin its operations in Technopark, Thiruvananthapuram, further reinforcing our standing as a premier recruitment partner in the region.

Job Summary: We are seeking a highly skilled and motivated Full Stack Software Engineer to join our dynamic multinational team, specializing in the Geospatial domain (Location-Based Services - LBS, Geographic Information Systems - GIS). This role focuses on the development and enhancement of sophisticated geospatial platforms and applications. The ideal candidate will possess strong expertise in a range of technologies including Java, Spring Boot, Python, Vue.js, and AWS cloud services, coupled with a passion for building high-quality, scalable, and impactful software solutions that leverage geographic data and spatial analysis.

Responsibilities:
● Design, develop, test, deploy, and maintain robust and scalable web applications and services for geospatial data processing, visualization, and analysis, utilizing Vue.js for front-end and Java (with Spring Boot) and Python for back-end development.
● Collaborate effectively with cross-functional, multinational teams including product managers, GIS analysts, data scientists, and other engineers to deliver high-quality geospatial software solutions.
● Develop and integrate user-facing mapping interfaces and geospatial tools with server-side logic, ensuring seamless performance and an intuitive user experience.
● Build reusable components and front-end libraries for geospatial applications (Vue.js).
● Develop and maintain efficient, reusable, and reliable code in Java and Python for geospatial algorithms, data processing pipelines, and API development.
● Ensure the technical feasibility of UI/UX designs for geospatial applications, providing constructive feedback on map interactions and data display.
● Optimize applications for maximum speed, scalability, and responsiveness, particularly when handling large geospatial datasets.
● Implement robust security and data protection measures, considering the sensitivity of location data.
● Design, manage, and optimize AWS cloud infrastructure for hosting and scaling geospatial applications and services (e.g., using EC2, S3 for raster/vector tiles, RDS with PostGIS, Lambda for geoprocessing tasks).
● Work with various geospatial data formats (e.g., GeoJSON, Shapefile, KML, GeoTIFF) and database systems (e.g., PostgreSQL/PostGIS).
● Participate actively in code reviews to maintain code quality, share knowledge, and foster a collaborative development environment.
● Troubleshoot, debug, and upgrade existing geospatial software, ensuring platform stability and performance.
● Contribute to all phases of the software development lifecycle, from concept and design through testing and deployment on cloud platforms like AWS.
● Stay updated with emerging technologies in GIS, LBS, new AWS services relevant to geospatial data, and industry best practices to drive innovation.

Mandatory Technical Skills, Experience: 1 to 5 years relevant experience
● Proven experience as a Software Engineer with a focus on geospatial applications.
● Experience with front-end frameworks like Vue.js and its core principles.
● Strong proficiency in Java and experience with the Spring Boot framework.
● Strong skills in Python, particularly with libraries used in geospatial analysis and data manipulation (e.g., GeoPandas, Shapely, Rasterio).
● Solid understanding of object-oriented programming principles.
● Experience with front-end technologies such as HTML5, CSS3, and responsive design.
● Familiarity with RESTful APIs and web services, including OGC standards (WMS, WFS, WPS).
● Experience with database technologies, especially PostgreSQL with the PostGIS extension.
● Proficient understanding of code versioning tools, such as Git.
● Solid experience with cloud platforms, particularly AWS (including services like EC2, S3, RDS, Lambda, API Gateway, Location Service).
● Experience with GIS tools and libraries (e.g., QGIS, ArcGIS APIs, GeoServer, MapServer, Leaflet, OpenLayers, Mapbox GL JS).
● Understanding of core GIS concepts, map projections, coordinate systems, and spatial analysis techniques.

Additional (Nice-to-have) Skills:
● Experience with other front-end frameworks like React.js.
● Familiarity with other cloud platforms (e.g., Azure, Google Cloud) and their geospatial offerings.
● Experience with Big Data technologies for geospatial data (e.g., Spark, Hadoop).
● Knowledge of mobile development (iOS/Android) for LBS applications.
● Experience with containerization technologies like Docker and orchestration tools like Kubernetes.
● Understanding of CI/CD pipelines and associated tools (e.g., Jenkins, GitLab CI).
● Experience with 3D GIS and visualization.

Behavioral Skills (the first three skills below are mandatory only for the Senior role):
● Leadership Potential: Demonstrated ability or strong potential to guide and support a small team, fostering a collaborative and productive environment. This includes providing guidance, mentoring junior team members and delegating tasks effectively.
● Communication Excellence: Exceptional verbal and written communication skills, with the ability to clearly and concisely convey technical information to both technical and non-technical audiences, including clients.
● Client Relationship Management: Ability to build and maintain positive relationships with clients, understand their needs and expectations and proactively address any concerns.
● Problem-Solving and Analytical Thinking: Strong analytical and problem-solving skills with the ability to identify root causes of issues, evaluate different solutions and implement effective resolutions, both independently and within a team.
● Adaptability and Flexibility: Ability to adapt to changing project requirements, client demands and work environments.
● Collaboration and Teamwork: Proven ability to work effectively within a team, contributing positively to team goals, sharing knowledge and supporting colleagues.
● Ownership and Accountability: Takes ownership of assigned tasks and responsibilities, demonstrates a strong sense of accountability for delivering high-quality work within deadlines.
● Proactiveness and Initiative: Demonstrates a proactive approach to work, identifying potential issues or opportunities for improvement and taking initiative to address them.
● Professionalism and Integrity: Maintains a high level of professionalism, ethical conduct and integrity in all interactions, both internally and with clients.
● Time Management and Organization: Excellent time management and organizational skills, with the ability to prioritize tasks, manage workload effectively and meet deadlines in a fast-paced environment.

Education: Bachelor's degree in Computer Science/Electronics/Electrical Engineering.
Salary: Best in the Market
Job Type: Permanent
Experience: Full Stack Software Engineer – Geospatial Platform: 1 year (Required)
Work Location: In person
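As a flavor of the GeoPandas/Shapely work this role describes, the sketch below tags points with the polygons that contain them via a spatial join. The input files and column names (pois.csv, wards.geojson, ward_name) are hypothetical.

```python
import geopandas as gpd
import pandas as pd

# Hypothetical inputs: a CSV of points and a polygon layer of city wards.
df = pd.read_csv("pois.csv")  # columns: name, lon, lat
points = gpd.GeoDataFrame(
    df, geometry=gpd.points_from_xy(df.lon, df.lat), crs="EPSG:4326"
)
wards = gpd.read_file("wards.geojson").to_crs("EPSG:4326")

# Spatial join: tag each point with the ward polygon that contains it.
# The `predicate` keyword assumes GeoPandas 0.10+ (older versions use `op`).
joined = gpd.sjoin(points, wards, how="left", predicate="within")
print(joined.groupby("ward_name").size().sort_values(ascending=False).head())
```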

Posted 1 week ago

Apply

2.0 - 5.0 years

4 - 7 Lacs

Ahmedabad

Work from Office


Roles and Responsibilities: Collaborate with stakeholders to understand business requirements and data needs. Translate business requirements into scalable and efficient data engineering solutions. Design, develop, and maintain data pipelines using AWS serverless technologies. Implement data modeling techniques to optimize data storage and retrieval processes. Develop and deploy data processing and transformation frameworks for real-time and batch processing. Ensure data pipelines are scalable, reliable, and performant for large-scale data sizes. Implement data documentation and observability tools and practices to monitor...
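A minimal sketch of the AWS serverless pattern this listing describes: a Lambda handler that reacts to an S3 event, applies a small transformation, and writes curated output. Bucket names and the record schema are placeholders.

```python
import json
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    """Triggered by an S3 put event; bucket/key names are illustrative."""
    record = event["Records"][0]["s3"]
    bucket, key = record["bucket"]["name"], record["object"]["key"]

    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    rows = [json.loads(line) for line in body.splitlines() if line.strip()]

    # Minimal transformation: keep only valid rows and normalize a field.
    cleaned = [
        {**r, "amount": float(r["amount"])}
        for r in rows
        if r.get("amount") is not None
    ]

    s3.put_object(
        Bucket="curated-bucket-example",
        Key=f"clean/{key}",
        Body="\n".join(json.dumps(r) for r in cleaned).encode(),
    )
    return {"rows_in": len(rows), "rows_out": len(cleaned)}
```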

Posted 1 week ago

Apply

7.0 years

0 Lacs

India

Remote


Lemongrass Consulting (www.lemongrassconsulting.com, lemongrasscloud.com) is a global leader in SAP consulting, focused on helping organizations transform their business processes through innovative solutions and technologies. With a strong commitment to customer success, Lemongrass partners with companies to drive their digital transformation journeys, enabling them to unlock the full potential of their SAP investments. We do this with our continuous innovation, automation, migration and operation, delivered on the world's most comprehensive cloud platforms – AWS, Azure and GCP and SAP Cloud ERP. We have been working with AWS and SAP since 2010 and we are a Premier Amazon Partner Network (APN) Consulting Partner. We are also a Microsoft Gold Partner, a Google Cloud Partner and an SAP Certified Silver Partner. Our team is what makes Lemongrass exceptional and why we have the excellent reputation in the market that we enjoy today. At Lemongrass, you will work with the smartest and most motivated people in the business. We take pride in our culture of innovation and collaboration that drives us to deliver exceptional benefits to our clients every day.

About the Role: We are seeking an experienced Cloud Data Engineer with a strong background in AWS, Azure, and GCP. The ideal candidate will have extensive experience with cloud-native ETL tools such as AWS DMS, AWS Glue, Kafka, Azure Data Factory, and GCP Dataflow, as well as other ETL tools like Informatica and SAP Data Intelligence. You will be responsible for designing, implementing, and maintaining robust data pipelines and building scalable data lakes. Experience with various data platforms like Redshift, Snowflake, Databricks, Synapse, and others is essential. Familiarity with data extraction from SAP or ERP systems is a plus.

Key Responsibilities:

Design and Development:
- Design, develop, and maintain scalable ETL pipelines using cloud-native tools (AWS DMS, AWS Glue, Kafka, Azure Data Factory, GCP Dataflow, etc.).
- Architect and implement data lakes and data warehouses on cloud platforms (AWS, Azure, GCP).
- Develop and optimize data ingestion, transformation, and loading processes using Databricks, Snowflake, Redshift, BigQuery and Azure Synapse.
- Implement ETL processes using tools like Informatica, SAP Data Intelligence, and others, and develop and optimize data processing jobs using Spark Scala.

Data Integration and Management:
- Integrate various data sources, including relational databases, APIs, unstructured data, and ERP systems, into the data lake.
- Ensure data quality and integrity through rigorous testing and validation.
- Perform data extraction from SAP or ERP systems when necessary.

Performance Optimization:
- Monitor and optimize the performance of data pipelines and ETL processes.
- Implement best practices for data management, including data governance, security, and compliance.

Collaboration and Communication:
- Work closely with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions.
- Collaborate with cross-functional teams to design and implement data solutions that meet business needs.

Documentation and Maintenance:
- Document technical solutions, processes, and workflows.
- Maintain and troubleshoot existing ETL pipelines and data integrations.

Qualifications:

Education:
- Bachelor's degree in Computer Science, Information Technology, or a related field. Advanced degrees are a plus.

Experience:
- 7+ years of experience as a Data Engineer or in a similar role.
- Proven experience with cloud platforms: AWS, Azure, and GCP.
- Hands-on experience with cloud-native ETL tools such as AWS DMS, AWS Glue, Kafka, Azure Data Factory, GCP Dataflow, etc.
- Experience with other ETL tools like Informatica, SAP Data Intelligence, etc.
- Experience in building and managing data lakes and data warehouses.
- Proficiency with data platforms like Redshift, Snowflake, BigQuery, Databricks, and Azure Synapse.
- Experience with data extraction from SAP or ERP systems is a plus.
- Strong experience with Spark and Scala for data processing.

Skills:
- Strong programming skills in Python, Java, or Scala.
- Proficient in SQL and query optimization techniques.
- Familiarity with data modeling, ETL/ELT processes, and data warehousing concepts.
- Knowledge of data governance, security, and compliance best practices.
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration skills.

Preferred Qualifications:
- Experience with other data tools and technologies such as Apache Spark or Hadoop.
- Certifications in cloud platforms (AWS Certified Data Analytics – Specialty, Google Professional Data Engineer, Microsoft Certified: Azure Data Engineer Associate).
- Experience with CI/CD pipelines and DevOps practices for data engineering.

What we offer in return:
- Remote Working: Lemongrass always has been and always will offer 100% remote work.
- Flexibility: Work where and when you like most of the time.
- Training: A subscription to A Cloud Guru and a generous budget for taking certifications and other resources you'll find helpful.
- State-of-the-art tech: An opportunity to learn and run the latest industry-standard tools.
- Team: Colleagues who will challenge you, giving you the chance to learn from them and them from you.

Selected applicants will be subject to a background investigation, which will be conducted and the results of which will be used in compliance with applicable law. Lemongrass Consulting is an Equal Opportunity/Affirmative Action employer. All qualified candidates will receive consideration for employment without regard to disability, protected veteran status, race, color, religious creed, national origin, citizenship, marital status, sex, sexual orientation/gender identity, age, or genetic information.
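For the AWS Glue work described above, a typical job skeleton looks like the sketch below. This is standard awsglue boilerplate; the catalog database, table name, and S3 output path are hypothetical.

```python
import sys
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.job import Job
from awsglue.utils import getResolvedOptions

# Standard Glue job bootstrap; JOB_NAME is supplied by the Glue runtime.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog; database and table are placeholders.
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders"
)

# Drop rows with no amount using plain Spark DataFrame semantics.
clean_df = dyf.toDF().filter("amount IS NOT NULL")

glue_context.write_dynamic_frame.from_options(
    frame=DynamicFrame.fromDF(clean_df, glue_context, "clean_orders"),
    connection_type="s3",
    connection_options={"path": "s3://example-curated/orders/"},
    format="parquet",
)
job.commit()
```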

Posted 1 week ago

Apply

9.0 years

0 Lacs

Bengaluru, Karnataka

On-site


Senior Consultant – PySpark | AI & Data | Big 4 Company – Multiple Locations

Job Type: Full-Time | Permanent
Department: Consulting Services | AI & Engineering
Experience Required: 6–9 Years
Education: B.Tech / M.Tech / MCA / MS
Locations Available: Bengaluru, Karnataka; Chennai, Tamil Nadu; Gurugram, Haryana; Hyderabad, Telangana; Kolkata, West Bengal; Mumbai, Maharashtra; Pune, Maharashtra

About the Role: Join the Strategy & Analytics - AI & Data team at one of the Big 4 consulting firms. As a Senior Consultant (PySpark), you will implement large-scale data ecosystems, drive operational efficiency, and deliver enterprise-level insights using modern big data platforms and cloud-based technologies.

Key Responsibilities:
- Migrate enterprise legacy systems to Big Data ecosystems
- Implement data ingestion, enrichment, and processing using Apache Spark and Python (PySpark)
- Work with cloud platforms like AWS and Google Cloud (BigQuery, S3)
- Handle data governance and integration of structured/unstructured data
- Automate pipelines using Airflow, Control-M, etc. (see the Airflow sketch after this listing)
- Develop and deploy solutions using CI/CD tools (Jenkins, Git)
- Support performance tuning, CDC handling, testing, and documentation (HLD, TDD)
- Collaborate in Agile delivery teams

Required Skills:
- Strong hands-on experience with PySpark and Apache Spark
- Proficient in UNIX and shell scripting
- Experience with Hadoop, Hive, Cloudera/Hortonworks
- Knowledge of data warehousing, historical data loads, and framework concepts
- Exposure to scheduling and orchestration tools
- Hands-on with S3 file system operations
- Familiarity with Agile methodology and DevOps practices

This is a golden opportunity to join a Big 4 firm and elevate your career in AI & Big Data.

Job Type: Full-time
Pay: ₹870,754.73 - ₹2,179,698.77 per year
Benefits: Provident Fund
Schedule: Day shift
Work Location: In person
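Pipeline automation with Airflow, as mentioned in the responsibilities, usually reduces to a DAG like the following sketch. This assumes the Airflow 2.x API (the `schedule` argument needs 2.4+; older versions use `schedule_interval`); the DAG id and script paths are invented.

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

# DAG id, schedule, and job script paths are placeholders.
with DAG(
    dag_id="daily_ingest",
    start_date=datetime(2025, 1, 1),
    schedule="0 2 * * *",  # run at 02:00 daily
    catchup=False,
) as dag:
    ingest = BashOperator(
        task_id="spark_ingest",
        bash_command="spark-submit --master yarn /opt/jobs/ingest.py --date {{ ds }}",
    )
    validate = BashOperator(
        task_id="row_count_check",
        bash_command="spark-submit --master yarn /opt/jobs/validate.py --date {{ ds }}",
    )
    ingest >> validate  # validation runs only after ingestion succeeds
```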

Posted 1 week ago

Apply

1.0 - 4.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Job Summary: We are looking for a passionate and detail-oriented ETL Developer with 1 to 4 years of experience in building, testing, and maintaining ETL processes. The ideal candidate should have a strong understanding of data warehousing concepts, ETL tools, and database technologies.

Key Responsibilities:
✅ Design, develop, and maintain ETL workflows and processes using [specify tools, e.g., Informatica / Talend / SSIS / Pentaho / custom ETL frameworks].
✅ Understand data requirements and translate them into technical specifications and ETL designs.
✅ Optimize and troubleshoot ETL processes for performance and scalability.
✅ Ensure data quality, integrity, and security across all ETL jobs.
✅ Perform data analysis and validation for business reporting.
✅ Collaborate with Data Engineers, DBAs, and Business Analysts to ensure smooth data operations.

Required Skills:
• 1-4 years of hands-on experience with ETL tools (e.g., Informatica, Talend, SSIS, Pentaho, or equivalent).
• Proficiency in SQL and experience working with RDBMS (e.g., SQL Server, Oracle, MySQL, PostgreSQL).
• Good understanding of data warehousing concepts and data modeling.
• Experience in handling large datasets and performance tuning of ETL jobs.
• Ability to work in Agile environments and participate in code reviews.
• Ability to learn and work with open-source languages like Node.js and AngularJS.

Preferred Skills (Good to Have):
• Experience with cloud ETL solutions (AWS Glue, Azure Data Factory, GCP Dataflow).
• Exposure to big data ecosystems (Hadoop, Spark).

Qualifications:
🎓 Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field.
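A common ETL building block behind postings like this is a watermark-driven incremental load. The sketch below shows the pattern with SQLite standing in for a real source and warehouse; the databases, tables, and column names are illustrative, and it assumes the source has an orders table with monotonically increasing ids.

```python
import sqlite3

src = sqlite3.connect("source.db")       # stand-in for the source system
dwh = sqlite3.connect("warehouse.db")    # stand-in for the warehouse

dwh.execute("CREATE TABLE IF NOT EXISTS etl_watermark (table_name TEXT PRIMARY KEY, last_id INTEGER)")
dwh.execute("CREATE TABLE IF NOT EXISTS fact_orders (id INTEGER PRIMARY KEY, amount REAL)")

# Read the high-water mark left by the previous run (0 on first run).
row = dwh.execute("SELECT last_id FROM etl_watermark WHERE table_name='orders'").fetchone()
last_id = row[0] if row else 0

# Extract only rows added since the previous run (incremental load).
rows = src.execute(
    "SELECT id, amount FROM orders WHERE id > ? ORDER BY id", (last_id,)
).fetchall()

if rows:
    dwh.executemany("INSERT OR REPLACE INTO fact_orders (id, amount) VALUES (?, ?)", rows)
    # Advance the watermark so the next run starts where this one stopped.
    dwh.execute(
        "INSERT OR REPLACE INTO etl_watermark (table_name, last_id) VALUES ('orders', ?)",
        (rows[-1][0],),
    )
    dwh.commit()
```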

Posted 1 week ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Description
Job Title: Senior Data Scientist
Candidate Specification: 10+ years; Notice Period: Immediate to 30 days; Hybrid.

Job Summary: We are seeking a highly skilled and experienced Senior Data Scientist to join our advanced analytics team. The ideal candidate will possess strong statistical and machine learning expertise, hands-on programming skills, and the ability to transform data into actionable business insights. This role also requires domain understanding to align data science efforts with business objectives in industries such as Oil & Gas, Pharma, Automotive, Desalination, and Industrial Equipment.

Primary Responsibilities:
- Lead the design, development, and deployment of advanced machine learning and statistical models
- Analyze large, complex datasets to uncover trends, patterns, and actionable insights
- Collaborate cross-functionally with business, engineering, and domain teams to define analytical problems and deliver impactful solutions
- Apply a deep understanding of business objectives to drive the application of data science in decision-making
- Ensure the quality, integrity, and governance of data used for modeling and analytics
- Guide junior data scientists and review code and models for scalability and accuracy

Core Competencies (Primary Skills):
- Statistical Analysis & Mathematics: strong foundation in probability, statistics, linear algebra, and calculus; experience with hypothesis testing, A/B testing, and regression models
- Machine Learning & Deep Learning: proficient in supervised/unsupervised learning and ensemble techniques; hands-on experience with neural networks, NLP, and computer vision
- Business Acumen & Domain Knowledge: proven ability to translate business needs into data science solutions; exposure to domains such as Oil & Gas, Pharma, Automotive, Desalination, and Industrial Pumps/Motors
- Technical Proficiency: programming languages: Python, R, SQL; libraries and tools: Pandas, NumPy, Scikit-learn, TensorFlow, PyTorch; data visualization: Matplotlib, Seaborn, Plotly, Tableau, Power BI; MLOps and deployment: Docker, Kubernetes, MLflow, Airflow; cloud and big data (preferred): AWS, GCP, Azure, Spark, Hadoop, Hive, Presto

Secondary Skills (Preferred):
- Generative AI: GPT-based models, fine-tuning, open-source LLMs, agentic AI frameworks
- Project Management: agile methodologies, sprint planning, stakeholder communication

Role: Senior Data Scientist - Contract Hiring
Industry Type: IT/Computers - Software
Required Education: Bachelor's Degree
Employment Type: Full Time, Permanent
Key Skills: DEEP LEARNING, MACHINE LEARNING, PYTHON, STATISTICAL ANALYSIS
Job Code: GO/JC/375/2025
Recruiter Name: Christopher
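To make the statistical requirements concrete (hypothesis testing and regression), here is a small sketch on synthetic data using SciPy and scikit-learn; all numbers are invented for illustration.

```python
import numpy as np
from scipy import stats
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)

# A/B test on synthetic conversion-like data (purely illustrative numbers).
control = rng.normal(loc=0.112, scale=0.03, size=5000)
variant = rng.normal(loc=0.118, scale=0.03, size=5000)
t_stat, p_value = stats.ttest_ind(variant, control, equal_var=False)
print(f"Welch t-test: t={t_stat:.2f}, p={p_value:.4f}")

# Simple regression: model a KPI from two synthetic drivers.
X = rng.normal(size=(5000, 2))
y = 3.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.5, size=5000)
model = LinearRegression().fit(X, y)
print("coefficients:", model.coef_, "R^2:", model.score(X, y))
```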

Posted 1 week ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Job Description
Job Title: Data Engineer
Candidate Specification: 5+ years; Immediate to 30 days. (All 5 days work from office, 9 hours.)

Job Description:
- Experience with any modern ETL tools (PySpark, EMR, Glue, or others).
- Experience in AWS and programming knowledge in Python, Java, and Snowflake.
- Experience in DBT and StreamSets (or similar tools like Informatica or Talend), with migration work done in the past.
- Agile experience is required, with VersionOne or Jira tool expertise.
- Provide hands-on technical solutions to business challenges and translate them into process/technical solutions.
- Good knowledge of CI/CD and DevOps principles.
- Experience in data technologies: Hadoop, PySpark / Scala (any one).

Role: Data Engineer
Industry Type: IT/Computers - Software
Functional Area: IT-Software
Required Education: B.Tech
Employment Type: Full Time, Permanent
Key Skills: PYSPARK, EMR, GLUE, ETL TOOLS, AWS, CI/CD, DEVOPS
Job Code: GO/JC/102/2025
Recruiter Name: Sheena Rakesh

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Description:
- Analyze and process data, build and maintain models and report templates, and develop dynamic, data-driven solutions.
- Make recommendations for key business partners and senior management, and communicate conclusions from complex analytical solutions to a wide range of audiences.
- Leverage analytical tools to provide business and technical support for the analytics process, tools, and applications for a business function or business unit.
- Conceptualize, develop, and continuously optimize analytical solutions for operations and executive management to enable data-driven decision making.
- Provide support to business users for mining and interpretation of warehoused and operational data.
- Experience in analytics modelling/scripting tools such as Python, Hadoop, and SQL.
- Lead and review data analytics preparation and finalization, with the ability to develop and interpret the relevant business requirements.
- Ensure that data analytics assessments are accurate and completed on time per project milestones.
- Train qualified teammates to perform the various data analytics activities.
- Manage relationships with project stakeholders, establishing mutual understanding and strategic direction for solutioning.
- Partner with key stakeholders on enhancement projects that improve process efficiency, documentation standards, and control effectiveness.
- Communicate findings and recommendations to executive management in a concise and effective manner, leveraging MS PowerPoint.

Role: Senior Associate - Data Analytics
Industry Type: ITES/BPO/KPO
Functional Area: ITES/BPO/Customer Service
Required Education: Graduation
Employment Type: Full Time, Permanent
Key Skills: HADOOP, POWER BI, PYTHON, SQL
Job Code: GO/JC/384/2025
Recruiter Name: Prernaraj
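Analytics work of this kind often boils down to SQL extraction plus pandas shaping. Below is a minimal sketch; SQLite stands in for the operational warehouse, and the tickets table and its columns are hypothetical.

```python
import sqlite3  # stand-in for any warehouse connection (e.g. Hive, SQL Server)
import pandas as pd

conn = sqlite3.connect("ops_warehouse.db")  # placeholder database

# Pull operational data with SQL, then shape it in pandas for reporting.
df = pd.read_sql_query(
    "SELECT team, ticket_id, opened_at, closed_at FROM tickets",
    conn,
    parse_dates=["opened_at", "closed_at"],
)
df["resolution_hours"] = (df.closed_at - df.opened_at).dt.total_seconds() / 3600

summary = (
    df.groupby("team")
    .agg(tickets=("ticket_id", "count"), median_hours=("resolution_hours", "median"))
    .sort_values("median_hours")
)
# The summary table can feed a Power BI dashboard or a PowerPoint readout.
summary.to_csv("team_resolution_summary.csv")
```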

Posted 1 week ago

Apply

6.0 years

0 Lacs

India

On-site


It takes powerful technology to connect our brands and partners with an audience of hundreds of millions of people. Whether you're looking to write mobile app code, engineer the servers behind our massive ad tech stacks, or develop algorithms to help us process trillions of data points a day, what you do here will have a huge impact on our business—and the world.

About Us: Yahoo delivers delightful, inspiring and entertaining daily-habit experiences to over half a billion people worldwide. Our products include the Yahoo Homepage (www.yahoo.com), AOL, as well as Comscore #1 sites in News, Sports and Finance. Yahoo in three words: inform, connect, and entertain.

The Enterprise Application team is responsible for managing the financial systems, along with other custom home-grown applications that cater to the needs of the finance teams. We build and maintain applications to ensure Yahoo is able to serve its customers and finance teams, using Oracle R12 and a combination of open source software and internal tools. We encourage new ideas and continuously experiment and evaluate new technologies to assimilate them into our infrastructure. Our team structure encourages trust, learning from one another, having fun, and attracting people who are passionate about what they do.

About You: You are a self-starter and problem solver who is passionate about velocity, developer productivity and product quality. You are an aggressive troubleshooter who can multitask on problems of varying difficulty, priority and time-sensitivity and get things done. You are smart, self-driven, and spend time trying to figure out how something works, not stopping with knowing just what it does. You like to relentlessly automate everything and anything at scale.

Job Responsibilities / The Role: This position is for a Production Engineer II with extensive experience in the support and administration of complex applications/systems deployment, infrastructure upgrades, software upgrades, patching, and ongoing end-to-end support of mission-critical applications. Some of these applications are home-grown custom applications facing Yahoo's internal customers, and others are corporate sites facing Yahoo's external customers. This position will be responsible for defining, implementing and maintaining the standard operating procedures for the operations team within the Corporate Applications group. This position will partner closely with relevant business process owners, application developers in Bangalore and Sunnyvale, and other Corporate Applications team members to deliver global solutions with an objective of optimizing processes. The individual must have solid experience and understanding of system, database and integration technologies, and be responsible for 24/7/365 availability, scalability and incident response.

Responsibilities include:
- Understand existing project design, monitoring setup, and automation.
- Provide expert advice and direction in applications and database administration and configuration technologies, including host configuration, monitoring, change and release management, performance tuning, hardware and capacity planning, and upgrades.
- Design tools for managing the infrastructure and write clean, reusable, simple code.
- Troubleshoot, resolve, and document production issues and escalate as required.
- Proactively maintain and develop all Linux infrastructure technology to maintain a 24x7x365 uptime service.
- Develop and implement automation tools for managing production systems.
- Be part of a global on-call (12/7) rotation.
- Be responsible for database design, performance, and monitoring of various versions of MySQL or SQL Server databases, database tools, and services.
- Diagnose and resolve moderate to advanced production issues.
- Develop and deploy platform infrastructure tools such as monitoring, alerting, and orchestration.
- Build independent web-based tools, microservices, and solutions, writing reusable, testable, and efficient code.
- Design and develop a business operations model for large applications to provide support for business needs.
- Deal with difficult situations and make decisions with a sense of urgency.
- Monitor and report metrics related to performance, availability, and other SLA measures.
- Develop, implement, and maintain change control and testing processes for modifications to all application environments.
- Design and implement redundant systems, policies, and procedures for disaster recovery and data archiving to ensure effective protection and integrity of data assets.
- Work with application development staff to harden, enhance, document, and generally improve the operability of our systems.

Minimum Job Qualifications:
- Bachelor's degree in Computer Science, Engineering, Information Systems or a similar relevant degree.
- 6 to 8 years of experience in Linux systems, web applications, distributed computing, and computer networking.
- Hands-on experience with various DevOps tools like Git, Jenkins, Ansible, Terraform, Docker, Jira, Slack, Confluence, Nagios, and Kubernetes.
- Experience in container orchestration services, especially Kubernetes.
- Fair understanding of major public cloud service providers, like Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform, and private clouds like OpenStack.
- Expert in Python, with knowledge of at least one Python web framework such as Django or Flask.
- Basic understanding of front-end technologies, such as JavaScript, HTML5, and CSS3.
- Understanding of databases, relational and non-relational, their data models and performance trade-offs; hands-on experience in MySQL is preferred.
- In-depth knowledge of Linux: RedHat, CentOS, etc. Linux certifications (RHCT, RHCE, and LPIC) will be considered an advantage.
- Excellent communication, interpersonal, and team-working skills.
- Good coding skills in Bash, Python, and Perl.
- Experience in developing web applications and familiarity with at least one framework (Django, Flask) is desired. Basic web development skills using HTML5 and CSS are mandatory.
- Strong desire to learn and understand new concepts, technologies, and systems as part of day-to-day work.
- Solid knowledge of principles, concepts, and theories of virtual infrastructure and container platform orchestration.
- Ability to apply independent judgment to develop creative, practical, and repeatable solutions.
- Knowledge of Hadoop, HBase, and Spark is preferred.
- Working knowledge of HTTP, DNS, and DHCP is preferred.

Important notes for your attention:

Applications: All applicants must apply for Yahoo openings directly with Yahoo. We do not authorize any external agencies in India to handle candidates' applications. No agency nor individual may charge candidates for any efforts they make on an applicant's behalf in the hiring process. Our internal recruiters will reach out to you directly to discuss the next steps if we determine that the role is a good fit for you. Selected candidates will go through formal interviews and assessments arranged by Yahoo directly.

Offer Distributions: Our electronic offer letter and documents will be issued through our system for e-signatures, not via individual emails.

Yahoo is proud to be an equal opportunity workplace. All qualified applicants will receive consideration for employment without regard to, and will not be discriminated against based on, age, race, gender, color, religion, national origin, sexual orientation, gender identity, veteran status, disability or any other protected category. Yahoo will consider for employment qualified applicants with criminal histories in a manner consistent with applicable law.

Yahoo is dedicated to providing an accessible environment for all candidates during the application process and for employees during their employment. If you need accessibility assistance and/or a reasonable accommodation due to a disability, please submit a request via the Accommodation Request Form (www.yahooinc.com/careers/contact-us.html) or call +1.866.772.3182. Requests and calls received for non-disability related issues, such as following up on an application, will not receive a response.

Yahoo has a high degree of flexibility around employee location and hybrid working. In fact, our flexible-hybrid approach to work is one of the things our employees rave about. Most roles don't require specific regular patterns of in-person office attendance. If you join Yahoo, you may be asked to attend (or travel to attend) on-site work sessions, team-building, or other in-person events. When these occur, you'll be given notice to make arrangements. If you're curious about how this factors into this role, please discuss with the recruiter.

Currently work for Yahoo? Please apply on our internal career site.

Posted 1 week ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

Remote


LivePerson (NASDAQ: LPSN) is the global leader in enterprise conversations. Hundreds of the world's leading brands — including HSBC, Chipotle, and Virgin Media — use our award-winning Conversational Cloud platform to connect with millions of consumers. We power nearly a billion conversational interactions every month, providing a uniquely rich data set and safety tools to unlock the power of Conversational AI for better customer experiences.

At LivePerson, we foster an inclusive workplace culture that encourages meaningful connection, collaboration, and innovation. Everyone is invited to ask questions, actively seek new ways to achieve success, and reach their full potential. We are continually looking for ways to improve our products and make things better. This means spotting opportunities, solving ambiguities, and seeking effective solutions to the problems our customers care about.

Overview: LivePerson is experiencing rapid growth, and we're evolving our database infrastructure to scale faster than ever. We are building a team dedicated to optimizing data storage, accessibility, and performance across our applications. As a Senior Database Engineer, you will be a key contributor, driving innovation in cloud database solutions and automation.

You Will:
- Partner with cross-functional teams to define database requirements and architectural strategies.
- Design, implement, and maintain highly scalable, on-prem and cloud-based database systems on Google Cloud Platform (GCP).
- Develop automation solutions using Terraform, Ansible, and Python to streamline database provisioning and management.
- Ensure robust version control of infrastructure configurations for seamless deployments.
- Monitor, troubleshoot, and optimize database performance, addressing bottlenecks proactively.
- Establish and enforce backup, recovery, and disaster recovery protocols to protect data integrity.
- Collaborate with security teams to implement compliance and data protection measures.
- Lead incident resolution, analyzing root causes and driving long-term solutions.
- Stay ahead of industry trends in DevOps, cloud computing, and database technologies.
- Participate in on-call rotations, ensuring 24x7 support for mission-critical systems.

You Have:
- 8+ years of experience managing large-scale production database systems handling terabytes of data.
- Expertise in MySQL administration and replication.
- Experience with any one of Elasticsearch, Kafka, Hadoop, and Vertica is a plus.
- Strong background in Google Cloud Platform (GCP) or AWS database deployments.
- Proficiency in Infrastructure as Code (IaC) using Terraform and Ansible.
- Skilled in Python and Bash scripting for automation.
- Hands-on experience with Liquibase or Flyway for database automation.
- Knowledge of monitoring tools like Prometheus, Grafana, PMM (Percona Monitoring and Management) and the ELK stack (Elasticsearch, Logstash and Kibana).
- Strong problem-solving skills with a proactive approach to troubleshooting complex issues.
- Solid foundation in database architecture, optimization, and CI/CD concepts.
- Excellent collaboration and communication skills in a dynamic team environment.
- Highly accountable with a results-driven mindset.
- Able to create documentation and work on changes, incidents, and Jira tickets.
- Relevant certifications (AWS, GCP) are a plus.
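Database-reliability roles like this often script replication health checks. A minimal sketch follows; it assumes the PyMySQL driver, and the host, credentials, and alert threshold are placeholders (MySQL 8.0.22+ uses SHOW REPLICA STATUS; older versions use SHOW SLAVE STATUS, which the fallbacks below account for).

```python
import pymysql  # assumption: PyMySQL is the driver available in this environment

# Host and credentials are placeholders; real values would come from a vault.
conn = pymysql.connect(host="replica.example.internal", user="monitor",
                       password="***", cursorclass=pymysql.cursors.DictCursor)

with conn.cursor() as cur:
    cur.execute("SHOW REPLICA STATUS")  # "SHOW SLAVE STATUS" on MySQL < 8.0.22
    status = cur.fetchone() or {}

# Field names differ across MySQL versions, hence the paired lookups.
lag = status.get("Seconds_Behind_Source", status.get("Seconds_Behind_Master"))
io_ok = status.get("Replica_IO_Running", status.get("Slave_IO_Running")) == "Yes"
sql_ok = status.get("Replica_SQL_Running", status.get("Slave_SQL_Running")) == "Yes"

if not (io_ok and sql_ok) or lag is None or lag > 300:
    # In production this would page via the alerting stack rather than print.
    print(f"ALERT: replication unhealthy (io={io_ok}, sql={sql_ok}, lag={lag}s)")
```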
Benefits:
- Health: medical, dental and vision
- Time away: vacation and holidays

Why You'll Love Working Here: As leaders in enterprise customer conversations, we celebrate diversity, empowering our team to forge impactful conversations globally. LivePerson is a place where uniqueness is embraced, growth is constant, and everyone is empowered to create their own success. And we're very proud to have earned recognition from Fast Company, Newsweek, and BuiltIn for being a top innovative, beloved, and remote-friendly workplace.

Belonging at LivePerson: We are proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable laws, regulations and ordinances. We also consider qualified applicants with criminal histories, consistent with applicable federal, state, and local law. We are committed to the accessibility needs of applicants and employees. We provide reasonable accommodations to job applicants with physical or mental disabilities. Applicants with a disability who require reasonable accommodation for any part of the application or hiring process should inform their recruiting contact upon initial connection.

The talent acquisition team at LivePerson has recently been notified of a phishing scam targeting candidates applying for our open roles. Scammers have been posing as hiring managers and recruiters in an effort to access candidates' personal and financial information. This phishing scam is not isolated to LivePerson and has been documented in news articles and media outlets. Please note that any communication from our hiring teams at LivePerson regarding a job opportunity will only be made by a LivePerson employee with an @liveperson.com email address. LivePerson does not ask for personal or financial information as part of our interview process, including but not limited to your social security number, online account passwords, credit card numbers, passport information and other related banking information. If you have any questions or concerns, please feel free to contact recruiting-lp@liveperson.com

Posted 1 week ago

Apply

1.0 years

0 Lacs

Trivandrum, Kerala, India

On-site


Equifax is where you can power your possible. If you want to achieve your true potential, chart new paths, develop new skills, collaborate with bright minds, and make a meaningful impact, we want to hear from you.

What you'll do: This position is at the forefront of Equifax's post-cloud transformation, focusing on developing and enhancing Java applications within the Google Cloud Platform (GCP) environment. The ideal candidate will combine strong Java development skills with cloud expertise to drive innovation and improve existing systems.

Key Responsibilities:
- Design, develop, test, deploy, maintain, and improve software applications on GCP
- Enhance existing applications and contribute to new initiatives leveraging cloud-native technologies
- Implement best practices in serverless computing, microservices, and cloud architecture
- Collaborate with cross-functional teams to translate functional and technical requirements into detailed architecture and design
- Participate in code reviews and maintain high development and security standards
- Provide technical oversight and direction for Java and GCP implementations

What Experience You Need:
- Bachelor's or Master's degree in Computer Science or equivalent experience
- 1+ years of IT experience with a strong focus on Java development
- Experience in modern Java development and cloud computing concepts
- Familiarity with agile methodologies and test-driven development (TDD)
- Strong understanding of software development best practices, including continuous integration and automated testing

What could set you apart:
- Experience with GCP or other cloud platforms (AWS, Azure)
- Active cloud certifications (e.g., Google Cloud Professional certifications)
- Experience with big data technologies (Spark, Kafka, Hadoop) and NoSQL databases
- Knowledge of containerization and orchestration tools (Docker, Kubernetes)
- Familiarity with the financial services industry
- Experience with open-source frameworks (Spring, Ruby, Apache Struts, etc.)
- Experience with Python

We offer a hybrid work setting, comprehensive compensation and healthcare packages, attractive paid time off, and organizational growth potential through our online learning platform with guided career tracks. Are you ready to power your possible? Apply today, and get started on a path toward an exciting new career at Equifax, where you can make a difference!

Who is Equifax? At Equifax, we believe knowledge drives progress. As a global data, analytics and technology company, we play an essential role in the global economy by helping employers, employees, financial institutions and government agencies make critical decisions with greater confidence. We work to help create seamless and positive experiences during life's pivotal moments: applying for jobs or a mortgage, financing an education or buying a car. Our impact is real, and to accomplish our goals we focus on nurturing our people for career advancement and their learning and development, supporting our next generation of leaders, maintaining an inclusive and diverse work environment, and regularly engaging and recognizing our employees. Regardless of location or role, the individual and collective work of our employees makes a difference, and we are looking for talented team players to join us as we help people live their financial best.

Equifax is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.

Posted 1 week ago

Apply

3.0 - 5.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Roles and Responsibilities
● Designing, developing, implementing and maintaining Java-based application code and software
● Contributing to all phases of the development lifecycle
● Writing testable, scalable and efficient code
● Testing and debugging new applications and updates
● Maintaining up-to-date code documentation
● Participating in code reviews

Required Skills
● 3-5 years' experience with Java 8, Java/J2EE, and Spring Boot
● Strong CS fundamentals in OOD, DS, algorithms and problem solving across a wide variety of problem spaces and technologies
● Good coding skills
● Experience creating large-scale, multi-tiered, distributed web applications with databases, and designing web services, APIs, data models and schemas using SQL or NoSQL
● Experience with different types of data storage solutions, such as Elasticsearch, SQL, Hadoop, or MongoDB
● Prior experience working in Agile environments
● Good analytical and troubleshooting skills
● Awareness of software engineering best practices and the full development lifecycle, including coding standards, code reviews, source control, build processes, testing and deployment

Posted 1 week ago

Apply

9.0 years

0 Lacs

Mumbai Metropolitan Region

On-site


Job Description
Job title: Big Data
Location: Bangalore/Mumbai/Pune/Chennai

Candidate Specification
Candidates should have 9+ years in Big Data, with Java and Scala or Hadoop and Scala.

Job Description
Design, develop, and maintain scalable big data architectures and systems.
Implement data processing pipelines using technologies such as Hadoop, Spark, and Kafka (see the illustrative sketch after this listing).
Optimize data storage and retrieval processes to ensure high performance and reliability.
Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions.
Perform data modeling, mining, and production processes to support business needs.
Ensure data quality, governance, and security across all data systems.
Stay updated with the latest trends and advancements in big data technologies.
Experience with real-time data processing and stream analytics.
Knowledge of advanced analytics and data visualization tools.
Knowledge of DevOps practices and tools for continuous integration and deployment.
Experience in managing big data projects and leading technical teams.

Other Information
Role: Big Data - Manager
Industry Type: IT/Computers - Software
Required Education: BE
Employment Type: Full Time, Permanent
Key Skills: Big Data, Hadoop, Java, Scala
Job Code: GO/JC/224/2025
Recruiter Name: Devikala D
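For illustration only, here is a minimal sketch of the kind of batch pipeline this listing describes. It uses PySpark rather than the Java/Scala stack the role names, and the paths, columns, and aggregation are hypothetical placeholders, not anything specified by the employer.

# Minimal PySpark batch pipeline sketch: read raw events, clean, aggregate, write out.
# Paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-daily-rollup").getOrCreate()

events = spark.read.json("hdfs:///raw/events/2025-06-01/")   # raw source data

daily = (
    events
    .filter(F.col("user_id").isNotNull())                    # basic cleaning
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "event_type")
    .agg(F.count("*").alias("event_count"))                  # simple daily rollup
)

daily.write.mode("overwrite").partitionBy("event_date").parquet("hdfs:///curated/daily_events/")
spark.stop()

Partitioning the output by date, as here, is a common way to keep downstream reads cheap; a Kafka-fed streaming variant would swap the read for Structured Streaming.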

Posted 1 week ago

Apply

2.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Equifax is where you can power your possible. If you want to achieve your true potential, chart new paths, develop new skills, collaborate with bright minds, and make a meaningful impact, we want to hear from you.

What You’ll Do
Works in Data and Analytics under the close supervision of the team manager or senior associates
Leverage coding best practices to ensure efficient execution of code against large datasets
Run standard processes to ensure metrics, reports and insights are delivered consistently to stakeholders
Leverage knowledge of data structures to prepare data for ingestion efforts and analysis, assembling data from disparate data sources for the creation of insights (see the sketch after this listing)
Integrate Equifax, customer and third-party data to solve basic internal or customer analytical problems and report findings to managers and internal stakeholders
Review output of code for anomalies, perform analysis to determine the cause, and work with Data, Analytics, Product and Technology counterparts to implement corrective measures
Support discussion on the impact and importance of findings on the business (either Equifax or an external customer)
Ensure proper use of Equifax data assets by working closely with data governance and compliance professionals

What Experience You Need
2 years of proven experience as a Data Analyst or Data Scientist
Awareness of the BFSI or marketing analytics landscape
Experience working with Python (mandatory), R, and SQL
Experience using business intelligence tools (e.g., Tableau) and data frameworks (e.g., Hadoop, BigQuery)
Analytical mind and business acumen
Strong math skills (e.g., statistics, algebra)
Problem-solving aptitude
Excellent communication and presentation skills
BSc/BA/BTech in Computer Science, Engineering or a relevant field; graduate degree in Data Science or other quantitative STEM streams

What Could Set You Apart
Cloud certification such as GCP strongly preferred
Self-starter
Excellent communicator, comfortable in client-facing work
Ability to work in a fast-paced environment
Flexibility to work across A/NZ time zones based on project needs

We offer a hybrid work setting, comprehensive compensation and healthcare packages, attractive paid time off, and organizational growth potential through our online learning platform with guided career tracks. Are you ready to power your possible? Apply today, and get started on a path toward an exciting new career at Equifax, where you can make a difference!

Who is Equifax?
At Equifax, we believe knowledge drives progress. As a global data, analytics and technology company, we play an essential role in the global economy by helping employers, employees, financial institutions and government agencies make critical decisions with greater confidence. We work to help create seamless and positive experiences during life’s pivotal moments: applying for jobs or a mortgage, financing an education or buying a car. Our impact is real, and to accomplish our goals we focus on nurturing our people for career advancement and their learning and development, supporting our next generation of leaders, maintaining an inclusive and diverse work environment, and regularly engaging and recognizing our employees. Regardless of location or role, the individual and collective work of our employees makes a difference, and we are looking for talented team players to join us as we help people live their financial best.

Equifax is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.
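As a rough illustration of the "assembling data from disparate data sources" duty above, the following pandas sketch joins three hypothetical extracts; every file name and column is an assumption for demonstration, not part of the role.

# Hypothetical sketch: combine bureau, customer, and third-party extracts for analysis.
import pandas as pd

bureau = pd.read_csv("bureau_extract.csv")              # hypothetical internal extract
customer = pd.read_parquet("customer_accounts.parquet") # hypothetical customer data
thirdparty = pd.read_json("partner_feed.json")          # hypothetical partner feed

merged = (
    bureau
    .merge(customer, on="customer_id", how="inner")
    .merge(thirdparty, on="customer_id", how="left")
)

# Basic anomaly review before reporting: row count and null rate on a key metric.
print(len(merged), merged["risk_score"].isna().mean())
summary = merged.groupby("segment")["risk_score"].agg(["mean", "count"])
print(summary)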

Posted 1 week ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Kindly apply only if you have 5-8 years of work experience.

The healthcare industry is the next great frontier of opportunity for software development, and Health Catalyst is one of the most dynamic and influential companies in this space. We are working on solving national-level healthcare problems, and this is your chance to improve the lives of millions of people, including your family and friends. Health Catalyst is a fast-growing company that values smart, hardworking, and humble individuals. Each product team is a small, mission-critical team focused on developing innovative tools to support Catalyst’s mission to improve healthcare performance, cost, and quality.

Job Summary
The Data Engineer focuses on acquiring data from the various sources found in a health system’s ecosystem, leveraging Catalyst’s Data Operating System to do so. Data Engineers become accustomed to both the technical and business details of the source systems and engage with multiple technologies to acquire the source data.

Duties & Responsibilities

Required Skills
Proficiency in Structured Query Language (SQL)
Experience working with EMR/EHR systems and an understanding of the healthcare clinical domain
Lead the design, development, and maintenance of scalable data pipelines and ETL processes (see the sketch after this listing)
Strong expertise in ETL tools
Proficient working knowledge of database principles, processes, technologies and tools
Excellent analytical and troubleshooting skills
Strong sense of customer service to consistently and effectively address client needs
Excellent communication, leadership, and problem-solving skills
Mentor and guide a team of data engineers, fostering a culture of continuous learning and improvement
Monitor and troubleshoot data infrastructure issues to ensure high availability and performance
Ensure data quality, integrity, and security across all data platforms
Implement best practices for data governance, lineage, and compliance

Desired Skills
Prior experience with RDBMS (SQL Server, Oracle, etc.)
Stored procedure/T-SQL/SSIS experience
Experience processing HL7 messages, CCD documents, and EDI X12 claims files
Familiarity with development methodologies, including Agile approaches
Ability to write and comprehend code for the technologies involved in acquiring data
Working experience with Hadoop and other big data technologies
Experience with Microsoft Azure cloud solutions, architecture, and related technologies

Education & Relevant Experience
Bachelor’s degree in a technology, business, or healthcare-related field
5+ years of experience in data engineering, with at least 2+ years in a leadership role
2+ years of experience in a healthcare/technology-related field
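A hedged sketch of the sort of SQL-centered ETL step this listing gestures at, using pandas and SQLAlchemy; the connection strings, schema, tables, and the length-of-stay measure are all hypothetical and not taken from Health Catalyst's actual platform.

# Hedged sketch of a simple SQL-based ETL step (extract -> light transform -> load).
# Connection strings, table and column names are hypothetical.
import pandas as pd
from sqlalchemy import create_engine

source = create_engine("mssql+pyodbc://user:pass@emr_source_dsn")   # hypothetical EMR source
warehouse = create_engine("mssql+pyodbc://user:pass@edw_dsn")       # hypothetical warehouse

# Extract: pull yesterday's encounters from a source EMR schema.
df = pd.read_sql(
    "SELECT encounter_id, patient_id, admit_dt, discharge_dt "
    "FROM ehr.encounters WHERE admit_dt >= DATEADD(day, -1, GETDATE())",
    source,
)

# Transform: derive a simple length-of-stay measure.
df["los_days"] = (pd.to_datetime(df["discharge_dt"]) - pd.to_datetime(df["admit_dt"])).dt.days

# Load: append into a staging table in the warehouse.
df.to_sql("stg_encounters", warehouse, schema="etl", if_exists="append", index=False)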

Posted 1 week ago

Apply

7.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Job Title: AI/ML Engineer
Location: Pune, India

About the Role:
We’re looking for highly analytical, technically strong Artificial Intelligence/Machine Learning Engineers to help build scalable, data-driven systems in the digital marketing space. You'll work alongside a top-tier team on impactful solutions affecting billions of users globally.

Experience Required: 3-7 years

Key Responsibilities:
Collaborate across Data Science, Ops, and Engineering to tackle large-scale ML challenges.
Build and manage robust ML pipelines (ETL, training, deployment) in real-time environments.
Optimize models and infrastructure for performance and scalability.
Research and implement best practices in ML systems and lifecycle management.
Deploy deep learning models using high-performance computing environments.
Integrate ML frameworks into cloud/distributed systems.

Required Skills:
2+ years of Python development in a programming-intensive role.
1+ year of hands-on ML experience (e.g., classification, clustering, optimization, deep learning; see the sketch after this listing).
2+ years working with distributed frameworks (Spark, Hadoop, Kubernetes).
2+ years with ML tools such as TensorFlow, PyTorch, Keras, MLlib.
2+ years of experience with cloud platforms (AWS, Azure, GCP).
Excellent communication skills.

Preferred:
Prior experience in AdTech or digital advertising platforms (DSP, Ad Exchange, SSP).

Education:
M.Tech or Ph.D. in Computer Science, Software Engineering, Mathematics, or a related discipline.

Why Apply?
Join a fast-moving team working at the forefront of AI in advertising.
Build technologies that impact billions of users worldwide.
Shape the future of programmatic and performance advertising.
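To make the hands-on classification requirement concrete, here is a minimal scikit-learn sketch on synthetic data; the model choice, feature counts, and metric are illustrative assumptions only, not anything the role prescribes.

# Minimal classification sketch with scikit-learn on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Generate a synthetic binary-classification dataset.
X, y = make_classification(n_samples=10_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = GradientBoostingClassifier(random_state=42)
model.fit(X_train, y_train)

# AUC on the held-out split; in AdTech work this would be, e.g., a click/conversion model.
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))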

Posted 1 week ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site


About the Company
EXL (NASDAQ: EXLS) is a leading operations management and analytics company that helps businesses enhance growth and profitability in the face of relentless competition and continuous disruption. Using our proprietary, award-winning Business EXLerator Framework™, which integrates analytics, automation, benchmarking, BPO, consulting, industry best practices and technology platforms, we look deeper to help companies improve global operations, enhance data-driven insights, increase customer satisfaction, and manage risk and compliance. EXL serves the insurance, healthcare, banking and financial services, utilities, travel, transportation and logistics industries. Headquartered in New York, New York, EXL has more than 24,000 professionals in locations throughout the United States, Europe, Asia (primarily India and the Philippines), Latin America, Australia and South Africa.

EXL Analytics provides data-driven, action-oriented solutions to business problems through statistical data mining, cutting-edge analytics techniques and a consultative approach. Leveraging proprietary methodology and best-of-breed technology, EXL Analytics takes an industry-specific approach to transform our clients’ decision making and embed analytics more deeply into their business processes. Our global footprint of nearly 2,000 data scientists and analysts assists client organizations with complex risk minimization methods, advanced marketing, pricing and CRM strategies, internal cost analysis, and cost and resource optimization within the organization. EXL Analytics serves the insurance, healthcare, banking, capital markets, utilities, retail and e-commerce, travel, transportation and logistics industries. Please visit www.exlservice.com for more information about EXL Analytics.

About the Role
Candidate Profile:
3+ years of hands-on experience creating content as an instructional designer
Experience with eLearning development tools: PPT, Articulate Storyline, Adobe Captivate
Experience with video development tools such as Adobe Illustrator, Photoshop, After Effects, Animate
Knowledge of big data platforms and ML techniques, a plus
Experience in gamification, experiential learning, VR, a plus

Background
The client’s team creates data training for any organization-wide employee who needs to understand data products. Whether they work in Hadoop, use a BI platform, or want to understand the data they’re seeing on a dashboard, this training should meet most of those needs. In short, translating complex data into easy-to-digest content is the objective of this team.

Key Responsibilities
eLearning Design/Development: Designing and developing eLearning training for data consumers (internal employees).
Video Development: Creating short demo videos on data training.

What we offer:
EXL Analytics offers an exciting, fast-paced and innovative environment, which brings together a group of sharp and entrepreneurial professionals who are eager to influence business decisions. From your very first day, you get an opportunity to work closely with highly experienced, world-class analytics consultants. You can expect to learn many aspects of the businesses that our clients engage in. You will also learn effective teamwork and time-management skills, key aspects of personal and professional growth. Analytics requires different skill sets at different levels within the organization. At EXL Analytics, we invest heavily in training you in all aspects of analytics as well as in leading analytical tools and techniques.

We provide guidance and coaching to every employee through our mentoring program, wherein every junior-level employee is assigned a senior-level professional as an advisor. The sky is the limit for our team members. The unique experiences gathered at EXL Analytics set the stage for further growth and development in our company and beyond.

Posted 1 week ago

Apply

7.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Job Title: Data Architect
Location: Noida, India

Data Architecture Design:
Design, develop, and maintain the enterprise data architecture, including data models, database schemas, and data flow diagrams.
Develop a data strategy and roadmap that aligns with business objectives and ensures the scalability of data systems.
Architect both transactional (OLTP) and analytical (OLAP) databases, ensuring optimal performance and data consistency.

Data Integration & Management:
Oversee the integration of disparate data sources into a unified data platform, leveraging ETL/ELT processes and data integration tools.
Design and implement data warehousing solutions, data lakes, and/or data marts that enable efficient storage and retrieval of large datasets.
Ensure proper data governance, including the definition of data ownership, security, and privacy controls in accordance with compliance standards (GDPR, HIPAA, etc.).

Collaboration with Stakeholders:
Work closely with business stakeholders, including analysts, developers, and executives, to understand data requirements and ensure that the architecture supports analytics and reporting needs.
Collaborate with DevOps and engineering teams to optimize database performance and support large-scale data processing pipelines.

Technology Leadership:
Guide the selection of data technologies, including databases (SQL/NoSQL), data processing frameworks (Hadoop, Spark), cloud platforms (Azure is a must), and analytics tools.
Stay updated on emerging data management technologies, trends, and best practices, and assess their potential application within the organization.

Data Quality & Security:
Define data quality standards and implement processes to ensure the accuracy, completeness, and consistency of data across all systems (see the sketch after this listing).
Establish protocols for data security, encryption, and backup/recovery to protect data assets and ensure business continuity.

Mentorship & Leadership:
Lead and mentor data engineers, data modelers, and other technical staff in best practices for data architecture and management.
Provide strategic guidance on data-related projects and initiatives, ensuring that all efforts are aligned with the enterprise data strategy.

Required Skills & Experience:
Extensive Data Architecture Expertise: Over 7 years of experience in data architecture, data modeling, and database management. Proficiency in designing and implementing relational (SQL) and non-relational (NoSQL) database solutions. Strong experience with data integration tools (Azure tools are a must, plus any other third-party tools), ETL/ELT processes, and data pipelines.
Advanced Knowledge of Data Platforms: Expertise in the Azure cloud data platform (Data Lake, Synapse) is a must; experience with other platforms such as AWS (Redshift, S3) and/or Google Cloud Platform (BigQuery, Dataproc) is a bonus. Experience with big data technologies (Hadoop, Spark) and distributed systems for large-scale data processing. Hands-on experience with data warehousing solutions and BI tools (e.g., Power BI, Tableau, Looker).
Data Governance & Compliance: Strong understanding of data governance principles, data lineage, and data stewardship. Knowledge of industry standards and compliance requirements (e.g., GDPR, HIPAA, SOX) and the ability to architect solutions that meet these standards.
Technical Leadership: Proven ability to lead data-driven projects, manage stakeholders, and drive data strategies across the enterprise. Strong programming skills in languages such as Python, SQL, R, or Scala.
Certification: Azure Certified Solutions Architect, Data Engineer, or Data Scientist certifications are mandatory.

Pre-Sales Responsibilities:
Stakeholder Engagement: Work with product stakeholders to analyze functional and non-functional requirements, ensuring alignment with business objectives.
Solution Development: Develop end-to-end solutions involving multiple products, ensuring security and performance benchmarks are established, achieved, and maintained.
Proof of Concepts (POCs): Develop POCs to demonstrate the feasibility and benefits of proposed solutions.
Client Communication: Communicate system requirements and solution architecture to clients and stakeholders, providing technical assistance and guidance throughout the pre-sales process.
Technical Presentations: Prepare and deliver technical presentations to prospective clients, demonstrating how proposed solutions meet their needs and requirements.

Additional Responsibilities:
Stakeholder Collaboration: Engage with stakeholders to understand their requirements and translate them into effective technical solutions.
Technology Leadership: Provide technical leadership and guidance to development teams, ensuring the use of best practices and innovative solutions.
Integration Management: Oversee the integration of solutions with existing systems and third-party applications, ensuring seamless interoperability and data flow.
Performance Optimization: Ensure solutions are optimized for performance, scalability, and security, addressing any technical challenges that arise.
Quality Assurance: Establish and enforce quality assurance standards, conducting regular reviews and testing to ensure robustness and reliability.
Documentation: Maintain comprehensive documentation of the architecture, design decisions, and technical specifications.
Mentoring: Mentor fellow developers and team leads, fostering a collaborative and growth-oriented environment.

Qualifications:
Education: Bachelor’s or master’s degree in Computer Science, Information Technology, or a related field.
Experience: Minimum of 7 years of experience in data architecture, with a focus on developing scalable and high-performance solutions.
Technical Expertise: Proficient in architectural frameworks, cloud computing, database management, and web technologies.
Analytical Thinking: Strong problem-solving skills, with the ability to analyze complex requirements and design scalable solutions.
Leadership Skills: Demonstrated ability to lead and mentor technical teams, with excellent project management skills.
Communication: Excellent verbal and written communication skills, with the ability to convey technical concepts to both technical and non-technical stakeholders.
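As one possible reading of the "define data quality standards" responsibility above, a small Python sketch of rule-based quality checks; the dataset, rules, and thresholds are hypothetical and only indicate the general pattern.

# Hedged sketch of simple, rule-based data quality checks a data architect might standardize.
# File, columns, allowed values, and thresholds are hypothetical.
import pandas as pd

df = pd.read_parquet("curated/customers.parquet")

checks = {
    "no_null_ids": df["customer_id"].notna().all(),                      # accuracy
    "unique_ids": df["customer_id"].is_unique,                           # consistency
    "valid_countries": df["country_code"].isin(["IN", "US", "GB"]).all(),
    "email_completeness": df["email"].notna().mean() >= 0.95,            # completeness
}

failed = [name for name, ok in checks.items() if not ok]
if failed:
    raise ValueError(f"Data quality checks failed: {failed}")
print("All data quality checks passed.")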

Posted 1 week ago

Apply

10.0 years

0 Lacs

India

Remote


About Lingaro:
Lingaro Group is the end-to-end data services partner to global brands and enterprises. We lead our clients through their data journey, from strategy through development to operations and adoption, helping them to realize the full value of their data. Since 2008, Lingaro has been recognized by clients and global research and advisory firms for innovation, technology excellence, and the consistent delivery of highest-quality data services. Our commitment to data excellence has created an environment that attracts the brightest global data talent to our team.

Duties:
Designing and implementing data processing systems using distributed frameworks like Hadoop, Spark, Snowflake, Airflow, or other similar technologies. This involves writing efficient and scalable code to process, transform, and clean large volumes of structured and unstructured data (see the pipeline sketch after this listing).
Building data pipelines to ingest data from various sources such as databases, APIs, or streaming platforms, and integrating and transforming data to ensure its compatibility with the target data model or format.
Designing and optimizing data storage architectures, including data lakes, data warehouses, or distributed file systems, and implementing techniques like partitioning, compression, or indexing to optimize data storage and retrieval.
Identifying and resolving bottlenecks, tuning queries, and implementing caching strategies to enhance data retrieval speed and overall system efficiency.
Designing and implementing data models that support efficient data storage, retrieval, and analysis.
Collaborating with data scientists and analysts to understand their requirements and provide them with well-structured and optimized data for analysis and modeling purposes.
Utilizing frameworks like Hadoop or Spark to perform distributed computing tasks, such as parallel processing, distributed data processing, or machine learning algorithms.
Implementing security measures to protect sensitive data and ensuring compliance with data privacy regulations.
Establishing data governance practices to maintain data integrity, quality, and consistency.
Identifying and resolving issues related to data processing, storage, or infrastructure.
Monitoring system performance, identifying anomalies, and conducting root cause analysis to ensure smooth and uninterrupted data operations.
Collaborating with cross-functional teams including data scientists, analysts, and business stakeholders to understand their requirements and provide technical solutions.
Communicating complex technical concepts to non-technical stakeholders in a clear and concise manner.
Working independently and taking responsibility for delivering a solution.
Working under Agile and Scrum development methodologies.
Staying updated with emerging technologies, tools, and techniques in the field of big data engineering, and exploring and recommending new technologies to enhance data processing, storage, and analysis capabilities.
Training and mentoring junior data engineers, providing guidance and knowledge transfer.

Requirements:
A bachelor's or master's degree in Computer Science, Information Systems, or a related field is typically required. Additional certifications in cloud are advantageous.
10+ years of experience in data engineering or a related field.
Strong technical skills in data engineering, including proficiency in programming languages such as Python, SQL, and PySpark.
Familiarity with the Azure cloud platform (Azure Databricks, Data Factory, Data Lake, etc.) and experience implementing data solutions in a cloud environment.
Expertise in working with various data tools and technologies, such as ETL frameworks, data pipelines, and data warehousing solutions.
In-depth knowledge of data management principles and best practices, including data governance, data quality, and data integration.
Excellent problem-solving and analytical skills, with the ability to identify and resolve complex data engineering issues.
Knowledge of data security and privacy regulations, and the ability to ensure compliance within data engineering projects.
Excellent communication and interpersonal skills, with the ability to effectively collaborate with cross-functional teams, stakeholders, and senior management.
Continuous learning mindset, staying updated with the latest advancements and trends in data engineering and related technologies.
Consulting exposure, with an external-customer-focused mindset, is preferred.

Why join us:
Stable employment. On the market since 2008, 1300+ talents currently on board in 7 global sites.
100% remote. Flexibility regarding working hours. Full-time position.
Comprehensive online onboarding program with a “Buddy” from day 1.
Cooperation with top-tier engineers and experts.
Unlimited access to the Udemy learning platform from day 1.
Certificate training programs. Lingarians earn 500+ technology certificates yearly.
Upskilling support. Capability development programs, Competency Centers, knowledge-sharing sessions, community webinars, and 110+ training opportunities yearly.
Grow as we grow as a company. 76% of our managers are internal promotions.
A diverse, inclusive, and values-driven community.
Autonomy to choose the way you work. We trust your ideas.
Create our community together. Refer your friends to receive bonuses.
Activities to support your well-being and health.
Plenty of opportunities to donate to charities and support the environment.
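A minimal sketch of an Airflow pipeline of the kind the duties mention; the DAG id, schedule, and helper functions are hypothetical placeholders rather than Lingaro specifics.

# Minimal Airflow DAG sketch: one daily ingest -> transform chain.
# DAG id, task names, and the helper functions are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest():
    print("pull data from a source API into the lake")   # placeholder logic

def transform():
    print("clean and conform the ingested batch")        # placeholder logic

with DAG(
    dag_id="daily_ingest_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",          # 'schedule_interval' on older Airflow 2.x releases
    catchup=False,
) as dag:
    ingest_task = PythonOperator(task_id="ingest", python_callable=ingest)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    ingest_task >> transform_task

Declaring the dependency with >> keeps the DAG's ordering explicit, which is what makes failures and retries straightforward to reason about in orchestrated pipelines.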

Posted 1 week ago

Apply

5.0 years

0 Lacs

Jaipur, Rajasthan, India

On-site


Job description

Basic Responsibilities (Must-Haves):
5+ years of experience in dashboard story development, dashboard creation, and data engineering pipelines.
Hands-on experience with log analytics, user engagement metrics, and product performance metrics.
Ability to identify patterns, trends, and anomalies in log data to generate actionable insights for product enhancements and feature optimization.
Collaborate with cross-functional teams to gather business requirements and translate them into functional and technical specifications.
Manage and organize large volumes of application log data using Google BigQuery (see the query sketch after this listing).
Design and develop interactive dashboards to visualize key metrics and insights using a tool such as Tableau, Power BI, or ThoughtSpot AI.
Create intuitive, impactful visualizations to communicate findings to teams including customer success and leadership.
Ensure data integrity, consistency, and accessibility for analytical purposes.
Analyze application logs to extract metrics and statistics related to product performance, customer behavior, and user sentiment.
Work closely with product teams to understand log data generated by Python-based applications.
Collaborate with stakeholders to define key performance indicators (KPIs) and success metrics.
Ability to optimize data pipelines and storage in BigQuery.
Strong communication and teamwork skills.
Ability to learn quickly and adapt to new technologies.
Excellent problem-solving skills.

Preferred Responsibilities (Nice-to-Haves):
Knowledge of Generative AI (GenAI) and LLM-based solutions.
Experience designing and developing dashboards with ThoughtSpot AI.
Good exposure to Google Cloud Platform (GCP).
Data engineering experience with modern data warehouse architectures.

Additional Responsibilities:
Participate in the development of proof-of-concepts (POCs) and pilot projects.
Articulate ideas and points of view clearly to the team.
Take ownership of data analytics and data engineering solutions.

Additional Nice-to-Haves:
Experience working with large datasets and distributed data processing tools such as Apache Spark or Hadoop.
Familiarity with Agile development methodologies and version control systems like Git.
Familiarity with ETL tools such as Informatica or Azure Data Factory.
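A brief, hedged sketch of pulling a log-derived engagement metric out of Google BigQuery with the official Python client; the project, dataset, and field names are invented for illustration and are not part of the listing.

# Hedged sketch: derive a daily engagement metric from application logs in BigQuery.
# Project, dataset, and field names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="my-analytics-project")

sql = """
    SELECT DATE(event_ts) AS day,
           COUNT(DISTINCT user_id) AS daily_active_users,
           COUNTIF(severity = 'ERROR') AS error_events
    FROM `my-analytics-project.app_logs.events`
    WHERE event_ts >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
    GROUP BY day
    ORDER BY day
"""

# The resulting frame could feed a Tableau/Power BI extract or an anomaly check.
df = client.query(sql).to_dataframe()
print(df.head())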

Posted 1 week ago

Apply

6.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Job Title: Data Scientist
Experience: 6-10 years
Location: Noida
Contract duration: 6 months, extendable

Responsibilities:
Model Development: Design and implement ML models to tackle complex business challenges.
Data Preprocessing: Clean, preprocess, and analyze large datasets to extract meaningful insights and model features.
Model Training: Train and fine-tune ML models using various techniques, including deep learning and ensemble methods.
Evaluation and Optimization: Assess model performance and optimize for accuracy, efficiency, and scalability.
Deployment: Deploy ML models in production and monitor performance for reliability.
Collaboration: Work with data scientists, engineers, and stakeholders to integrate ML solutions.
Research: Stay updated on ML/AI advancements and contribute to internal knowledge sharing.
Documentation: Maintain comprehensive documentation for all ML models and processes.

Qualification: Bachelor's or master’s degree in Computer Science, Machine Learning, Data Science, or a related field, with 6-10 years of experience.

Desirable Skills:

Must Have
1. Experience in time series forecasting, regression models, and classification models (see the sketch after this listing)
2. Python, R, and data analysis
3. Large-scale data handling with Pandas, NumPy, and Matplotlib
4. Version control: Git or equivalent
5. ML frameworks: hands-on experience with TensorFlow, PyTorch, scikit-learn, Keras
6. Good knowledge of cloud platforms (AWS/Azure/GCP), Docker, and Kubernetes
7. Model selection, evaluation, deployment, data collection and preprocessing, feature engineering, and estimation

Good to Have
Experience with Big Data and analytics using technologies like Hadoop, Spark, etc.
Additional experience or knowledge in AI/ML technologies beyond the mentioned frameworks.
BFSI and banking domain experience.
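A small illustrative sketch of lag-feature time series forecasting with scikit-learn on synthetic data; the lag window, model choice, and holdout size are arbitrary assumptions, not requirements of the role.

# Hedged sketch: lag-feature time series forecasting with scikit-learn on synthetic data.
import numpy as np
import pandas as pd
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
ts = pd.Series(100 + np.cumsum(rng.normal(0, 1, 400)))    # synthetic daily series

# Build supervised lag features: predict t from t-1 .. t-7.
frame = pd.DataFrame({f"lag_{k}": ts.shift(k) for k in range(1, 8)})
frame["y"] = ts
frame = frame.dropna()

train, test = frame.iloc[:-30], frame.iloc[-30:]          # hold out the last 30 points
model = Ridge().fit(train.drop(columns="y"), train["y"])

pred = model.predict(test.drop(columns="y"))
mae = np.abs(pred - test["y"].to_numpy()).mean()
print(f"Holdout MAE: {mae:.2f}")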

Posted 1 week ago

Apply