7.0 - 9.0 years
8 - 14 Lacs
Faridabad
Work from Office
Job Summary: We are looking for a seasoned Tech Anchor with deep expertise in Big Data technologies and Python to lead technical design, development, and mentoring across data-driven projects. This role demands a strong grasp of scalable data architecture, strong problem-solving capabilities, and hands-on experience with distributed systems and modern data frameworks.

Key Responsibilities:
- Provide technical leadership across Big Data and Python-based projects
- Architect, design, and implement scalable data pipelines and processing systems (see the sketch below)
- Guide teams on best practices in data modeling, ETL/ELT development, and performance optimization
- Collaborate with data scientists, analysts, and stakeholders to ensure effective data solutions
- Conduct code reviews and mentor junior engineers to improve code quality and skills
- Evaluate and implement new tools and frameworks to enhance data capabilities
- Troubleshoot complex data-related issues and support production deployments
- Ensure compliance with data security and governance standards
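As an illustration of the pipeline work this posting describes, here is a minimal PySpark ETL sketch; the bucket, paths, columns, and aggregation are hypothetical assumptions, not details from the posting:

```python
# Hypothetical sketch of a scalable PySpark pipeline; paths and column
# names are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw event data (assumed layout)
orders = spark.read.parquet("s3a://example-bucket/raw/orders/")

# Transform: clean, deduplicate, and aggregate
daily_revenue = (
    orders
    .filter(F.col("status") == "COMPLETED")
    .dropDuplicates(["order_id"])
    .groupBy(F.to_date("created_at").alias("order_date"))
    .agg(F.sum("amount").alias("revenue"))
)

# Load: write partitioned output for downstream consumers
daily_revenue.write.mode("overwrite").partitionBy("order_date") \
    .parquet("s3a://example-bucket/curated/daily_revenue/")
```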
Posted 4 days ago
7.0 - 9.0 years
8 - 14 Lacs
Jaipur
Work from Office
Job Summary: We are looking for a seasoned Tech Anchor with deep expertise in Big Data technologies and Python to lead technical design, development, and mentoring across data-driven projects. This role demands a strong grasp of scalable data architecture, strong problem-solving capabilities, and hands-on experience with distributed systems and modern data frameworks.

Key Responsibilities:
- Provide technical leadership across Big Data and Python-based projects
- Architect, design, and implement scalable data pipelines and processing systems
- Guide teams on best practices in data modeling, ETL/ELT development, and performance optimization
- Collaborate with data scientists, analysts, and stakeholders to ensure effective data solutions
- Conduct code reviews and mentor junior engineers to improve code quality and skills
- Evaluate and implement new tools and frameworks to enhance data capabilities
- Troubleshoot complex data-related issues and support production deployments
- Ensure compliance with data security and governance standards
Posted 4 days ago
7.0 - 9.0 years
8 - 14 Lacs
Nagpur
Work from Office
Job Summary: We are looking for a seasoned Tech Anchor with deep expertise in Big Data technologies and Python to lead technical design, development, and mentoring across data-driven projects. This role demands a strong grasp of scalable data architecture, strong problem-solving capabilities, and hands-on experience with distributed systems and modern data frameworks.

Key Responsibilities:
- Provide technical leadership across Big Data and Python-based projects
- Architect, design, and implement scalable data pipelines and processing systems
- Guide teams on best practices in data modeling, ETL/ELT development, and performance optimization
- Collaborate with data scientists, analysts, and stakeholders to ensure effective data solutions
- Conduct code reviews and mentor junior engineers to improve code quality and skills
- Evaluate and implement new tools and frameworks to enhance data capabilities
- Troubleshoot complex data-related issues and support production deployments
- Ensure compliance with data security and governance standards
Posted 4 days ago
6.0 - 10.0 years
16 - 25 Lacs
Faridabad
Work from Office
Job Summary: We are looking for a highly skilled Senior Data Engineer with expertise in Snowflake, DBT (Data Build Tool), and SAP Data Services (SAP DS). The ideal candidate will be responsible for building scalable data pipelines, designing robust data models, and ensuring high data quality across enterprise platforms.

Key Responsibilities:
- Design, build, and optimize data pipelines and ETL/ELT workflows using Snowflake and DBT
- Integrate and manage data from various sources using SAP Data Services
- Develop and maintain scalable data models, data marts, and data warehouses
- Work closely with data analysts, business stakeholders, and BI teams to support reporting and analytics needs
- Implement best practices in data governance, data lineage, and metadata management
- Monitor data quality, troubleshoot issues, and ensure data integrity
- Optimize Snowflake data warehouse performance: partitioning, caching, query tuning (see the sketch below)
- Automate data workflows and deploy DBT models with CI/CD tools (e.g., Git, Jenkins)
- Document architecture, data flows, and technical specifications
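One flavor of the Snowflake tuning this role mentions is clustering a large table on a commonly filtered key. A minimal sketch with snowflake-connector-python follows; the account, credentials, and table are placeholders, not details from the posting:

```python
# Hedged sketch of Snowflake query tuning via a clustering key;
# connection details and table names are illustrative assumptions.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",
    user="example_user",
    password="...",
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
    schema="MARTS",
)
cur = conn.cursor()

# Cluster a large fact table on a frequently filtered key so date-range
# queries can prune micro-partitions instead of scanning the whole table
cur.execute("ALTER TABLE fact_sales CLUSTER BY (sale_date)")

# A range query that benefits from the clustering above
cur.execute("SELECT COUNT(*) FROM fact_sales WHERE sale_date >= '2024-01-01'")
print(cur.fetchone())
cur.close()
conn.close()
```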
Posted 4 days ago
3.0 years
10 - 12 Lacs
India
On-site
About The Role: We are looking for a highly skilled Data Engineer with a strong foundation in Power BI, SQL, Python, and Big Data ecosystems to help design, build, and optimize end-to-end data solutions. The ideal candidate is passionate about solving complex data problems, transforming raw data into actionable insights, and contributing to data-driven decision-making across the organization.

Key Responsibilities:
- Data Modelling & Visualization: Build scalable, high-quality data models in Power BI using best practices. Define relationships, hierarchies, and measures to support effective storytelling. Ensure dashboards meet standards for accuracy, visualization principles, and timeliness.
- Data Transformation & ETL: Perform advanced data transformation using Power Query (M language) beyond UI-based steps. Design and optimize ETL pipelines using SQL, Python, and Big Data tools (see the sketch below). Manage and process large-scale datasets from varied sources and formats.
- Business Problem Translation: Collaborate with cross-functional teams to translate complex business problems into scalable, data-centric solutions. Decompose business questions into testable hypotheses and identify relevant datasets for validation.
- Performance & Troubleshooting: Continuously optimize dashboards and pipelines for latency, reliability, and scalability. Troubleshoot and resolve issues related to data access, quality, security, and latency, adhering to SLAs.
- Analytical Storytelling: Apply analytical thinking to design insightful dashboards, prioritizing clarity and usability over aesthetics. Develop data narratives that drive business impact.
- Solution Design: Deliver wireframes, POCs, and final solutions aligned with business requirements and technical feasibility.

Required Skills & Experience:
- At least 3 years of experience as a Data Engineer or in a similar data-focused role.
- Strong expertise in Power BI: data modeling, DAX, Power Query (M language), and visualization best practices.
- Hands-on with Python and SQL for data analysis, automation, and back-end data transformation.
- Deep understanding of data storytelling, visual best practices, and dashboard performance tuning.
- Familiarity with DAX Studio and Tabular Editor.
- Experience handling high-volume data in production environments.

Preferred (Good to Have): Exposure to Big Data technologies such as PySpark, Hadoop, Hive/HDFS, and Spark Streaming.

Why Join Us? Work with a team that's passionate about data innovation, exposure to a modern data stack and tools, a flat structure and collaborative culture, and the opportunity to influence data strategy and architecture decisions.

Skills: data modeling, big data, PySpark, Power BI, data storytelling, Spark Streaming, ETL, SQL, Tabular Editor, Hive, Power Query, Hadoop, Python, data transformation, DAX Studio, DAX
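As a loose analog of the Python-side transformation work described above (not of Power Query M itself), here is a small pandas sketch that cleans and enriches an extract before it reaches a Power BI model; the file path and columns are assumptions:

```python
# Illustrative pandas pre-model transformation; the CSV path and
# column names are hypothetical.
import pandas as pd

raw = pd.read_csv("sales_extract.csv", parse_dates=["order_date"])

# Normalize, enrich, and deduplicate before loading into the BI model
clean = (
    raw
    .dropna(subset=["order_id"])
    .assign(
        region=lambda df: df["region"].str.strip().str.title(),
        order_month=lambda df: df["order_date"].dt.to_period("M").astype(str),
    )
    .drop_duplicates(subset=["order_id"])
)

clean.to_parquet("sales_model_input.parquet", index=False)
```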
Posted 4 days ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Description: GPP Database Link (https://cummins365.sharepoint.com/sites/CS38534/). Leads projects for the design, development, and maintenance of a data and analytics platform. Effectively and efficiently processes, stores, and makes data available to analysts and other consumers. Works with key business stakeholders, IT experts, and subject-matter experts to plan, design, and deliver optimal analytics and data science solutions. Works on one or many product teams at a time.

Key Responsibilities:
- Designs and automates deployment of our distributed system for ingesting and transforming data from various types of sources (relational, event-based, unstructured).
- Designs and implements a framework to continuously monitor and troubleshoot data quality and data integrity issues (see the sketch below).
- Implements data governance processes and methods for managing metadata, access, and retention for internal and external users.
- Designs and provides guidance on building reliable, efficient, scalable, quality data pipelines with monitoring and alert mechanisms that combine a variety of sources using ETL/ELT tools or scripting languages.
- Designs and implements physical data models to define the database structure; optimizes database performance through efficient indexing and table relationships.
- Participates in optimizing, testing, and troubleshooting data pipelines.
- Designs, develops, and operates large-scale data storage and processing solutions using distributed and cloud-based platforms (e.g., Data Lakes, Hadoop, HBase, Cassandra, MongoDB, Accumulo, DynamoDB).
- Uses innovative, modern tools, techniques, and architectures to partially or completely automate the most common, repeatable, and tedious data preparation and integration tasks, minimizing manual and error-prone processes and improving productivity.
- Assists with renovating the data management infrastructure to drive automation in data integration and management.
- Ensures the timeliness and success of critical analytics initiatives by using agile development practices such as DevOps, Scrum, and Kanban.
- Coaches and develops less experienced team members.

Competencies:
- System Requirements Engineering: Uses appropriate methods and tools to translate stakeholder needs into verifiable requirements to which designs are developed; establishes acceptance criteria for the system of interest through analysis, allocation, and negotiation; tracks the status of requirements throughout the system lifecycle; assesses the impact of changes to system requirements on project scope, schedule, and resources; creates and maintains information linkages to related artifacts.
- Collaborates: Building partnerships and working collaboratively with others to meet shared objectives.
- Communicates effectively: Developing and delivering multi-mode communications that convey a clear understanding of the unique needs of different audiences.
- Customer focus: Building strong customer relationships and delivering customer-centric solutions.
- Decision quality: Making good and timely decisions that keep the organization moving forward.
- Data Extraction: Performs data extract-transform-load (ETL) activities from a variety of sources and transforms them for consumption by various downstream applications and users using appropriate tools and technologies.
- Programming: Creates, writes, and tests computer code, test scripts, and build scripts using algorithmic analysis and design, industry standards and tools, version control, and build and test automation to meet business, technical, security, governance, and compliance requirements.
- Quality Assurance Metrics: Applies the science of measurement to assess whether a solution meets its intended outcomes using the IT Operating Model (ITOM), including the SDLC standards, tools, metrics, and key performance indicators, to deliver a quality product.
- Solution Documentation: Documents information and solutions based on knowledge gained during product development activities; communicates to stakeholders with the goal of enabling improved productivity and effective knowledge transfer to others who were not part of the initial learning.
- Solution Validation Testing: Validates a configuration item change or solution using the function's defined best practices, including the Systems Development Life Cycle (SDLC) standards, tools, and metrics, to ensure that it works as designed and meets customer requirements.
- Data Quality: Identifies, understands, and corrects flaws in data to support effective information governance across operational business processes and decision-making.
- Problem Solving: Solves problems and may mentor others on effective problem solving, using a systematic analysis process and industry-standard methodologies to create problem traceability and protect the customer; determines the assignable cause; implements robust, data-based solutions; identifies systemic root causes and ensures actions to prevent recurrence are implemented.
- Values differences: Recognizing the value that different perspectives and cultures bring to an organization.

Education, Licenses, Certifications: College, university, or equivalent degree in a relevant technical discipline, or equivalent relevant experience, required. This position may require licensing for compliance with export controls or sanctions regulations.

Experience: Intermediate experience in a relevant discipline is required. Knowledge of the latest technologies and trends in data engineering is highly preferred, including:
- Familiarity with analyzing complex business systems, industry requirements, and/or data regulations
- Background in processing and managing large data sets
- Design and development for a Big Data platform using open-source and third-party tools (Spark, Scala/Java, MapReduce, Hive, HBase, and Kafka), or equivalent college coursework
- SQL query language
- Clustered, cloud-based compute implementation experience
- Experience developing applications requiring large file movement in a cloud-based environment, plus other data extraction tools and methods for a variety of sources
- Experience building analytical solutions
- Preferred: experience with IoT technology and experience in Agile software development

Qualifications:
- Strong programming skills in SQL, Python, and PySpark for data processing and automation.
- Experience with Databricks and Snowflake (preferred) for building and maintaining data pipelines.
- Understanding of machine learning and AI techniques, especially for data quality and anomaly detection.
- Experience with cloud platforms such as Azure and AWS, and familiarity with Azure Web Apps.
- Knowledge of Data Quality and Data Governance concepts (preferred).
- Nice to have: Power BI dashboard development experience.
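The data-quality monitoring responsibility above could look like the following minimal PySpark sketch; the table name, rules, and alerting behaviour are illustrative assumptions:

```python
# Hedged sketch of a simple data-quality monitor; the catalog table
# and check rules are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq_monitor").getOrCreate()
df = spark.read.table("curated.customer_orders")  # assumed catalog table

checks = {
    "null_customer_id": df.filter(F.col("customer_id").isNull()).count(),
    "duplicate_order_id": df.count() - df.dropDuplicates(["order_id"]).count(),
    "negative_amount": df.filter(F.col("amount") < 0).count(),
}

failures = {name: n for name, n in checks.items() if n > 0}
if failures:
    # A real platform would raise an alert here rather than print
    print(f"Data quality violations detected: {failures}")
```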
Posted 4 days ago
2.0 - 5.0 years
9 - 13 Lacs
Bengaluru
Work from Office
We are looking for a skilled Data Engineer with expertise in Python/PySpark to join our team at Bay Area Technology Solutions. The ideal candidate will have 2-5 years of experience in the IT Services & Consulting industry.

Roles and Responsibilities:
- Design, develop, and implement data pipelines using Python/PySpark.
- Collaborate with cross-functional teams to identify and prioritize project requirements.
- Develop and maintain large-scale data systems and architectures.
- Ensure data quality, integrity, and security.
- Optimize data processing workflows for performance and efficiency.
- Troubleshoot and resolve technical issues related to data engineering projects.

Job Requirements:
- Strong proficiency in the Python programming language.
- Experience with PySpark is required.
- Knowledge of big data technologies and tools is necessary.
- Excellent problem-solving skills and attention to detail are essential.
- Ability to work collaboratively in a team environment.
- Strong communication and interpersonal skills are needed.
Posted 4 days ago
2.0 - 4.0 years
4 - 6 Lacs
Mumbai, Hyderabad
Work from Office
Job Responsibilities:
- Collaborate with data scientists, software engineers, and business stakeholders to understand data requirements and design efficient data models.
- Develop, implement, and maintain robust and scalable data pipelines, ETL processes, and data integration solutions.
- Extract, transform, and load data from various sources, ensuring data quality, integrity, and consistency.
- Optimize data processing and storage systems to handle large volumes of structured and unstructured data efficiently.
- Perform data cleaning, normalization, and enrichment tasks to prepare datasets for analysis and modelling.
- Monitor data flows and processes; identify and resolve data-related issues and bottlenecks.
- Contribute to the continuous improvement of data engineering practices and standards within the organization.
- Stay up to date with industry trends and emerging technologies in data engineering, artificial intelligence, and dynamic pricing.

Candidate Profile:
- Strong passion for data engineering, artificial intelligence, and problem-solving.
- Solid understanding of data engineering concepts, data modeling, and data integration techniques.
- Proficiency in programming languages such as Python and SQL, plus web scraping (see the sketch below).
- Understanding of NoSQL, relational, and in-memory databases and technologies like MongoDB, Redis, and Apache Spark would be an add-on.
- Knowledge of distributed computing frameworks and big data technologies (e.g., Hadoop, Spark) is a plus.
- Excellent analytical and problem-solving skills, with a keen eye for detail.
- Strong communication and collaboration skills, with the ability to work effectively in a team-oriented environment.
- Self-motivated, quick learner, and adaptable to changing priorities and technologies.
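A minimal illustration of the scraping-plus-storage combination this profile mentions; the URL, CSS selector, and database names are hypothetical:

```python
# Hedged sketch: scrape a page and persist records to MongoDB.
# Endpoint, selector, and collection names are assumptions.
import requests
from bs4 import BeautifulSoup
from pymongo import MongoClient

resp = requests.get("https://example.com/products", timeout=10)
soup = BeautifulSoup(resp.text, "html.parser")

records = [
    {"name": item.get_text(strip=True)}
    for item in soup.select("h2.product-name")
]

client = MongoClient("mongodb://localhost:27017")
if records:
    client["pricing"]["products"].insert_many(records)
```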
Posted 4 days ago
3.0 - 6.0 years
8 - 13 Lacs
Bengaluru
Work from Office
KPMG India is looking for an Azure Data Engineer - Assistant Manager to join our dynamic team and embark on a rewarding career journey.
- Liaising with coworkers and clients to elucidate the requirements for each task.
- Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed.
- Reformulating existing frameworks to optimize their functioning.
- Testing such structures to ensure that they are fit for use.
- Preparing raw data for manipulation by data scientists.
- Detecting and correcting errors in your work.
- Ensuring that your work remains backed up and readily accessible to relevant coworkers.
- Remaining up to date with industry standards and technological advancements that will improve the quality of your outputs.
Posted 4 days ago
3.0 - 4.0 years
17 - 18 Lacs
Bengaluru
Work from Office
KPMG India is looking for an Azure Data Engineer - Consultant to join our dynamic team and embark on a rewarding career journey.
- Assure that data is cleansed, mapped, transformed, and otherwise optimized for storage and use according to business and technical requirements.
- Solution design using Microsoft Azure services and other tools.
- The ability to automate tasks and deploy production-standard code, with unit testing, continuous integration, versioning, etc. (see the sketch below).
- Load transformed data into storage and reporting structures in destinations including the data warehouse, high-speed indexes, real-time reporting systems, and analytics applications.
- Build data pipelines to collectively bring together data.
- Other responsibilities include extracting data, troubleshooting, and maintaining the data warehouse.
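The unit-testing practice named above might look like this small pytest-style sketch; the transformation function and its expected behaviour are assumptions for illustration:

```python
# Hedged sketch of unit-testing a data transformation; the function
# cleanse_customers is hypothetical, not from the posting.
import pandas as pd

def cleanse_customers(df: pd.DataFrame) -> pd.DataFrame:
    """Trim whitespace and drop rows missing a customer key."""
    out = df.dropna(subset=["customer_id"]).copy()
    out["name"] = out["name"].str.strip()
    return out

def test_cleanse_customers_drops_missing_keys():
    raw = pd.DataFrame({"customer_id": [1, None], "name": [" Ada ", "Bob"]})
    result = cleanse_customers(raw)
    assert list(result["customer_id"]) == [1]
    assert result["name"].iloc[0] == "Ada"
```

Run with `pytest` in CI so each deployment gate exercises the transformation logic.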
Posted 4 days ago
2.0 - 5.0 years
4 - 8 Lacs
Hyderabad
Work from Office
Req ID: 327063. NTT DATA strives to hire exceptional, innovative, and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Python/PySpark/Apache Spark developer to join our team in Hyderabad, Telangana (IN-TG), India (IN).

At NTT DATA, we know that with the right people on board, anything is possible. The quality, integrity, and commitment of our employees are key factors in our company's growth, market presence, and our ability to help our clients stay a step ahead of the competition. By hiring the best people and helping them grow both professionally and personally, we ensure a bright future for NTT DATA and for the people who work here.

NTT DATA Services currently seeks a Python Developer to join our team in Hyderabad to design and build ETL solutions, with experience in data engineering and data modelling in large-scale batch and real-time environments.

Skills required: Python, PySpark, Apache Spark, Unix shell scripting, GCP, BigQuery, MongoDB, Kafka event streaming, API development, CI/CD. For Software Engineering 3: 6+ years. Mandatory: Apache Spark with Python (PySpark), GCP with BigQuery, and database experience (see the sketch below). Secondary mandate: Ab Initio ETL. Good to have: Unix shell scripting, Kafka event streaming.

We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com
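For the GCP/BigQuery side, a minimal batch-read sketch with the google-cloud-bigquery client; the project, dataset, and table are placeholders:

```python
# Hedged sketch of a BigQuery aggregation query; project and table
# identifiers are assumptions.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

query = """
    SELECT event_date, COUNT(*) AS events
    FROM `example-project.analytics.events`
    WHERE event_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY)
    GROUP BY event_date
    ORDER BY event_date
"""
for row in client.query(query).result():
    print(row.event_date, row.events)
```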
Posted 4 days ago
2.0 - 4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Associate Data Scientist - I. Invent Health values creativity and the application of cutting-edge technology in AI and ML to enhance data quality, engineering efficiency, and overall productivity in the Healthcare Analytics domain. As a part of the Data Science team, you will play a crucial role in our data-driven initiatives. You will work closely with our data science team to extract meaningful insights from large and complex healthcare datasets.

Responsibilities:
- Participating in end-to-end modelling projects, handling a large variety and volume of data.
- Executing research and analytics, including data source identification, processing, and model/algorithm development.
- Conducting exploratory data analysis, feature construction, and feature engineering.
- Performing feature engineering to create relevant features from raw data to enhance model performance and interpretability.
- Developing and evaluating advanced predictive models using Python and Spark MLlib.
- Focusing on specialty areas such as anomaly detection, time series forecasting, NLP, and generative AI (see the sketch below).
- Deploying models and monitoring their performance post-deployment.
- Conducting proof-of-concept research to explore new modelling techniques and methodologies.
- Evaluating the performance of data science models using appropriate metrics and techniques.
- Collaborating with cross-functional teams (e.g., clinicians, biostatisticians, engineers) to ensure the successful implementation of data science solutions.
- Continuously seeking to innovate and bring cutting-edge solutions to our customers.
- Developing high-quality data-driven solutions and collaborating effectively within a multidisciplinary team environment.
- Delivering presentations and reports on findings and translating complex concepts into actionable insights.

Requirements:
- 2-4 years of data science experience; previous experience in the healthcare domain preferred.
- Bachelor's in a quantitative field (Operations Research, Computer Science, Engineering, Applied Math, Statistics, Physics, Analytics, etc.); advanced courses in Business Analytics/Data Science/Data Analytics.
- Strong problem-solving and analytical skills.
- Proficiency in deep learning libraries/frameworks like Keras and TensorFlow is a plus.
- Strong understanding of machine learning algorithms and statistical modelling techniques.
- Experience handling large datasets and proficiency in data manipulation.
- Proficiency in PySpark is a plus; proficiency in a BI tool like Tableau is a plus.
- Ability to translate complex data science concepts into practical solutions.
- Excellent communication skills, both oral and written.
- Designing test strategies, plans, processes, methodologies, and supporting documentation for functional and non-functional areas.
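As an illustration of the anomaly-detection specialty named above, a small scikit-learn sketch; the synthetic claims data is fabricated for the example:

```python
# Illustrative anomaly detection with IsolationForest; the "claims"
# data here is synthetic, generated only for the demo.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
claims = rng.normal(loc=500, scale=100, size=(1000, 1))
claims[:10] = rng.normal(loc=5000, scale=200, size=(10, 1))  # injected outliers

model = IsolationForest(contamination=0.01, random_state=42)
labels = model.fit_predict(claims)  # -1 flags anomalies

print(f"Flagged {int((labels == -1).sum())} suspicious claim amounts")
```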
Posted 4 days ago
6.0 - 9.0 years
5 - 10 Lacs
Noida
Work from Office
Relevant experience and skills:

Must-haves:
- At least 6-9 years of work experience in US and overseas payroll.
- Understanding of customer invoicing and timesheet management.
- Quick learner with good presentation skills.
- Strong sense of urgency and results orientation.
- MS Office: advanced Excel and good PowerPoint.
- Acquainted with different client portals such as Wand, Fieldglass, Beeline, Coupa, and Ariba.

Good to have:
- Background in the IT staffing business.
- ERP working knowledge.
- QuickBooks.
Posted 4 days ago
5.0 - 9.0 years
12 - 17 Lacs
Noida
Work from Office
- Spark/PySpark: technical, hands-on data processing experience.
- Table design knowledge using Hive, similar to RDBMS knowledge.
- Database SQL knowledge for data retrieval and transformation queries such as joins (full, left, right), ranking, and group by (see the sketch below).
- Good communication skills.
- Additional skills: GitHub, Jenkins, and shell scripting would be an added advantage.

Mandatory Competencies:
- Big Data: PySpark, Spark, Hadoop, Hive
- DevOps/Configuration Mgmt: Jenkins; GitLab/GitHub/Bitbucket; basic Bash/shell script writing
- Behavioural: communication and collaboration
- Database: database programming (SQL)
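A small sketch of the ranking and group-by SQL work described above, run through PySpark; the toy table and columns are assumptions:

```python
# Illustrative Spark SQL window ranking over toy data standing in
# for a Hive table; schema and values are fabricated for the demo.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("ranking_demo").getOrCreate()

spark.createDataFrame(
    [(1, "north", 120.0), (2, "north", 250.0), (3, "south", 90.0)],
    ["customer_id", "region", "order_total"],
).createOrReplaceTempView("orders")

spark.sql("""
    SELECT customer_id, region, order_total,
           RANK() OVER (PARTITION BY region ORDER BY order_total DESC) AS rnk
    FROM orders
""").show()
```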
Posted 4 days ago
5.0 - 10.0 years
10 - 15 Lacs
Noida
Work from Office
We are seeking a Data Science Engineer to design, build, and optimize scalable data and machine learning systems. This role requires strong software engineering skills, a deep understanding of data science workflows, and the ability to work cross-functionally to translate business problems into production-level data solutions.

Key Responsibilities:
- Design, implement, and maintain data science pipelines from data ingestion to model deployment.
- Collaborate with data scientists to operationalize ML models and algorithms in production environments.
- Develop robust APIs and services for ML model inference and integration.
- Build and optimize large-scale data processing systems using Spark, Pandas, or similar tools.
- Ensure data quality and pipeline reliability through rigorous testing, validation, and monitoring.
- Work with cloud infrastructure (AWS) for scalable ML deployment.
- Manage model versioning, feature engineering workflows, and experiment tracking (see the sketch below).
- Optimize performance of models and pipelines for latency, cost, and throughput.

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field.
- 5+ years of experience in a data science, ML engineering, or software engineering role.
- Proficiency in Python (preferred) and SQL; knowledge of Java, Scala, or C++ is a plus.
- Experience with data science libraries like Scikit-learn, XGBoost, TensorFlow, or PyTorch.
- Familiarity with ML deployment tools such as MLflow, SageMaker, or Vertex AI.
- Solid understanding of data structures, algorithms, and software engineering best practices.
- Experience working with databases (SQL, NoSQL) and data lakes (e.g., Delta Lake, BigQuery).

Preferred Qualifications:
- Experience with containerization and orchestration (Docker, Kubernetes).
- Experience working in Agile or cross-functional teams.
- Familiarity with streaming data platforms (Kafka, Spark Streaming, Flink).

Soft Skills:
- Strong communication skills to bridge technical and business teams.
- Excellent problem-solving and analytical thinking.
- Self-motivated and capable of working independently or within a team.
- Passion for data and a curiosity-driven mindset.

Mandatory Competencies: Data Science and Machine Learning (AI/ML), database programming (SQL), TensorFlow, AWS (ECS), containerization (Docker, Kubernetes).
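The experiment-tracking responsibility could look like this minimal MLflow sketch; the model, synthetic data, and metric are illustrative, not from the posting:

```python
# Hedged sketch of MLflow experiment tracking around a simple model;
# data is synthetic and the run layout is an assumption.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=10, noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

with mlflow.start_run():
    model = Ridge(alpha=1.0).fit(X_tr, y_tr)
    mlflow.log_param("alpha", 1.0)
    mlflow.log_metric("r2", r2_score(y_te, model.predict(X_te)))
    mlflow.sklearn.log_model(model, "model")  # versioned artifact for deployment
```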
Posted 4 days ago
8.0 - 13.0 years
15 - 19 Lacs
Bengaluru
Work from Office
Job Description. Where you'll work: India (Remote).

Engineering at GoTo: We're the trailblazers of remote work technology. We build powerful, flexible work software that empowers everyone to live their best life, at work and beyond, and blaze even more trails along the way. There's ample room for growth, so you can blaze your own trail here too. When you join a GoTo product team, you'll take on a key role in this process and see your work used by millions of users worldwide.

Your Day to Day: As a Staff Data Engineer, you would be:
- Design and Develop Pipelines: Build robust, scalable, and efficient ETL/ELT data pipelines to process structured data from diverse sources.
- Big Data Processing: Develop and optimize large-scale data workflows using Apache Spark, with strong hands-on experience in building ETL pipelines.
- Cloud-Native Data Solutions: Architect and implement data solutions using AWS services such as S3, EMR, Lambda, and EKS.
- Data Governance: Manage and govern data using catalogs like Hive or Unity Catalog; ensure strong data lineage, access controls, and metadata management.
- Workflow Orchestration: Schedule, monitor, and orchestrate workflows using Apache Airflow or similar tools (see the sketch below).
- Data Quality & Monitoring: Implement quality checks, logging, monitoring, and alerting to ensure pipeline reliability and visibility.
- Cross-Functional Collaboration: Partner with analysts, data scientists, and business stakeholders to deliver high-quality data for applications and enable self-service BI.
- Compliance & Security: Uphold best practices in data governance, security, and compliance across the data ecosystem.
- Mentorship & Standards: Mentor junior engineers and help evolve engineering practices including CI/CD, testing, and documentation.

What We're Looking For: As a Staff Data Engineer, your background will look like:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 8+ years of experience in data engineering or software development, with a proven record of maintaining production-grade pipelines.
- Proficient in Python and SQL for data transformation and analytics.
- Strong expertise in Apache Spark, including data lake management, ACID transactions, schema enforcement/evolution, and time travel.
- In-depth knowledge of AWS services, especially S3, EMR, Lambda, and EKS, with a solid grasp of cloud architecture and security best practices.
- Solid data modeling skills (dimensional, normalized) and an understanding of data warehousing and lakehouse paradigms.
- Experience with BI tools like Tableau or Power BI.
- Familiarity with setting up data quality, monitoring, and observability frameworks.
- Excellent communication and collaboration skills, with the ability to thrive in an agile and multicultural team environment.

Nice to Have:
- Experience working on the Databricks platform.
- Knowledge of Delta or Apache Iceberg file formats.
- Passion for machine learning and AI; enthusiasm to explore and apply intelligent systems.

At GoTo, authenticity and an inclusive culture are key to our thriving workplace, where diverse perspectives drive innovation and growth. Our team of GoGetters is passionate about learning, exploring, and working together to achieve success while staying committed to delivering exceptional experiences for our customers. We take pride in supporting our employees with comprehensive benefits, wellness programs, and global opportunities for professional and personal development. By maintaining an inclusive environment, we empower our teams to do their best work, make a meaningful impact, and grow their career.
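The orchestration duty referenced above might look like this minimal Airflow DAG sketch; task names, callables, and the schedule are assumptions (Airflow 2.4+ `schedule` argument):

```python
# Hedged sketch of a two-task Airflow DAG; the pipeline steps are
# placeholders, not the actual GoTo workflows.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data from source systems")

def transform():
    print("run Spark job / dbt models")

with DAG(
    dag_id="daily_sales_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t1 >> t2  # extract runs before transform
```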
Posted 4 days ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Role Description: This is a contract role for a Python Back End Developer. The Python Back End Developer will be responsible for day-to-day tasks including back-end web development, software development, and object-oriented programming (OOP). This hybrid role is based in Hyderabad, with some work-from-home flexibility.

Qualifications:
- Proficiency in back-end web development and software development
- Strong understanding of object-oriented programming (OOP)
- Basic skills in front-end development
- Solid programming skills
- Experience with cloud platforms (GCP/AWS)
- Excellent problem-solving and analytical skills
- Ability to work both independently and in a team environment
- Experience with Python frameworks like Django or Flask is a plus (see the sketch below)
- Nice to have: experience with data pipelines (Airflow, Netflix Conductor, etc.) and Apache Spark/Beam, Kafka
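A tiny back-end sketch in Flask, one of the frameworks named in the qualifications; the routes and payload shape are hypothetical:

```python
# Hedged sketch of a minimal Flask back end; endpoints are illustrative.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/api/health")
def health():
    return jsonify(status="ok")

@app.route("/api/echo", methods=["POST"])
def echo():
    # Echo back whatever JSON the client sent
    payload = request.get_json(silent=True) or {}
    return jsonify(received=payload), 201

if __name__ == "__main__":
    app.run(debug=True)
```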
Posted 4 days ago
10.0 years
0 Lacs
Ponnani, Kerala, India
Remote
Join the Journey: MLOps Engineer at Gaaya Realtech (Dubai, UAE 🌍). Imagine building the technology that's reshaping real estate globally, backed by visionary architects and innovators at Gaaya Realtech. We don't just construct buildings; we're crafting smarter cities and intelligent spaces for tomorrow.

About Gaaya Realtech: We're a powerhouse at the intersection of construction, architecture, and advanced realty. Our drive? Pioneering innovation and technology in real estate, making every project not just a structure, but a story.

Our Client: Revolutionizing Global EdTech. You'll work directly with a funded, high-growth EdTech startup. Our mission tackles a worldwide challenge with scalable solutions, initially targeting the US and Canadian markets, all from the heart of Dubai. Think competitive packages, a collaborative work culture, and a journey that begins remotely before relocating to the dazzling UAE.

The Opportunity: MLOps Engineer. Your story starts here. Picture this: each day, you contribute to real solutions that accelerate business and education for a global audience. You'll be central to deploying, automating, and managing machine learning systems that scale from POC to world-class production.

Why You Belong Here:
- Impact at Scale: Your work powers products that touch users across continents.
- Smart, Collaborative Team: Join a diverse, talented group where ideas spark and innovation flourishes.
- Growth & Challenge: You'll dive into pioneering projects, sharpening your edge in MLOps, cloud, and automation.
- Future-Forward Environment: Join us remotely today and collaborate in one of the world's most inspiring cities (Dubai) tomorrow!

What We're Looking For:
- Experience: 2-10 years in industry, ideally 2-5.
- Skills: smart, solution-oriented, and collaborative; a fluent communicator eager for knowledge-sharing; passionate about deploying, scaling, and automating ML systems.
Posted 4 days ago
4.0 - 8.0 years
10 - 14 Lacs
Chennai
Work from Office
Role Description: Provides leadership for the overall architecture, design, development, and deployment of a full-stack, cloud-native data analytics platform.

- Designing and augmenting solution architecture for data ingestion, data preparation, data transformation, data load, ML and simulation modelling, Java BE and FE, state machine, API management, and intelligence consumption using data products, on cloud.
- Understand business requirements and help develop high-level and low-level data engineering and data processing documentation for the cloud-native architecture.
- Developing conceptual, logical, and physical target-state architecture, engineering, and operational specs.
- Work with the customer, users, technical architects, and application designers to define the solution requirements and structure for the platform.
- Model and design the application data structure, storage, and integration.
- Lead the database analysis, design, and build effort.
- Work with the application architects and designers to design the integration solution.
- Ensure that the database designs fulfill the requirements, including data volume, frequency needs, and long-term data growth.
- Able to perform data engineering tasks using Spark.
- Knowledge of developing efficient frameworks for development and testing using Sqoop/NiFi/Kafka/Spark/Streaming/WebHDFS/Python to enable seamless data ingestion processes on the Hadoop/BigQuery platforms.
- Enabling data governance and data discovery.
- Exposure to job monitoring frameworks along with validation automation.
- Exposure to handling structured, unstructured, and streaming data.

Technical Skills:
- Experience building a data platform on cloud (data lake, data warehouse environment, Databricks).
- Strong technical understanding of data modeling, design, and architecture principles and techniques across master data, transaction data, and derived/analytic data.
- Proven background of designing and implementing architectural solutions that solve strategic and tactical business needs.
- Deep knowledge of best practices through relevant experience across data-related disciplines and technologies, particularly for enterprise-wide data architectures, data management, data governance, and data warehousing.
- Highly competent in database design and data modeling.
- Strong data warehousing and business intelligence skills, including handling ELT and scalability issues for an enterprise-level data warehouse and creating ETLs/ELTs to handle data from various data sources and formats.
- Strong hands-on experience with programming languages like Python and Scala with Spark and Beam (see the sketch below).
- Solid hands-on and solution-architecting experience in cloud technologies: AWS, Azure, and GCP (GCP preferred).
- Hands-on working experience of data processing at scale with event-driven systems and message queues (Kafka/Flink/Spark Streaming).
- Hands-on working experience with GCP services like BigQuery, Dataproc, Pub/Sub, Dataflow, Cloud Composer, API Gateway, Data Lake, Bigtable, Spark, and Apache Beam, plus feature engineering/data processing for model development.
- Experience gathering and processing raw data at scale (including writing scripts, web scraping, calling APIs, writing SQL queries, etc.).
- Experience building data pipelines for structured/unstructured, real-time/batch, and event-based synchronous/asynchronous processing using MQ, Kafka, and stream processing.
- Hands-on working experience analyzing source system data and data flows, working with structured and unstructured data.
- Must be very strong in writing Spark SQL queries.
- Strong organizational skills, with the ability to work autonomously as well as lead a team.
- Pleasant personality with strong communication and interpersonal skills.

Qualifications:
- A bachelor's degree in computer science, computer engineering, or a related discipline is required to work as a technical lead.
- Certification in GCP would be a big plus.
- Individuals in this field can further display their leadership skills by completing the Project Management Professional certification offered by the Project Management Institute.
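As a minimal illustration of the Beam work named above, a pipeline sketch that runs locally on the DirectRunner; on GCP the same code would typically target Dataflow. The input lines are fabricated for the demo:

```python
# Hedged Apache Beam sketch: filter error events and count them.
# The in-memory input stands in for a real source such as Pub/Sub.
import apache_beam as beam

with beam.Pipeline() as p:
    (
        p
        | "Create" >> beam.Create(["error user=1", "ok user=2", "error user=3"])
        | "KeepErrors" >> beam.Filter(lambda line: line.startswith("error"))
        | "CountAll" >> beam.combiners.Count.Globally()
        | "Print" >> beam.Map(print)
    )
```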
Posted 4 days ago
4.0 - 6.0 years
9 - 13 Lacs
Chennai
Work from Office
Role Description: As a Senior Data Science and Modeling Specialist at Incedo, you will be responsible for developing and deploying predictive models and machine learning algorithms to support business decision-making. You will work with data scientists, data engineers, and business analysts to understand business requirements and develop data-driven solutions. You will be skilled in programming languages such as Python or R and have experience with data science tools such as TensorFlow or Keras (see the sketch below). You will be responsible for ensuring that models are accurate, efficient, and scalable.

Roles & Responsibilities:
- Developing and implementing machine learning models and algorithms to solve complex business problems
- Conducting data analysis and modeling using statistical and data analysis tools
- Collaborating with other teams to ensure the consistency and integrity of data
- Providing guidance and mentorship to junior data science and modeling specialists
- Presenting findings and recommendations to stakeholders

Technical Skills Requirements:
- Proficiency in statistical analysis techniques such as regression analysis, hypothesis testing, or time-series analysis.
- Knowledge of machine learning algorithms and techniques such as supervised learning, unsupervised learning, or reinforcement learning.
- Experience with data wrangling and data cleaning techniques using tools such as Python, R, or SQL.
- Understanding of big data technologies such as Hadoop, Spark, or Hive.
- Excellent communication skills, with the ability to communicate complex technical information to non-technical stakeholders in a clear and concise manner.
- Must understand the company's long-term vision and align with it. Provide leadership, guidance, and support to team members, ensuring the successful completion of tasks, and promoting a positive work environment that fosters collaboration and productivity, taking responsibility for the whole team.

Qualifications:
- 4-6 years of work experience in a relevant field
- B.Tech/B.E/M.Tech or MCA degree from a reputed university; a computer science background is preferred
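An illustrative Keras model in the TensorFlow stack this role mentions; the data is synthetic so the sketch stays self-contained:

```python
# Hedged sketch of a tiny Keras classifier; the toy target is fabricated.
import numpy as np
from tensorflow import keras

X = np.random.rand(200, 4).astype("float32")
y = (X.sum(axis=1) > 2.0).astype("float32")  # toy binary target

model = keras.Sequential([
    keras.Input(shape=(4,)),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, verbose=0)
print(model.evaluate(X, y, verbose=0))  # [loss, accuracy]
```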
Posted 4 days ago
4.0 - 6.0 years
6 - 10 Lacs
Gurugram
Work from Office
Role Description: As a Senior Big Data Platform Engineer at Incedo, you will be responsible for designing and implementing big data platforms to support large-scale data integration projects. You will work with data architects and data engineers to define the platform architecture and build the necessary infrastructure. You will be skilled in big data technologies such as Hadoop, Spark, and Kafka and have experience with cloud computing platforms such as AWS or Azure. You will be responsible for ensuring the performance, scalability, and security of the big data platform and troubleshooting any issues that arise.

Roles & Responsibilities:
- Designing, developing, and maintaining large-scale big data platforms using technologies like Hadoop, Spark, and Kafka (see the sketch below)
- Creating and managing data warehouses, data lakes, and data marts
- Implementing and optimizing ETL processes and data pipelines
- Developing and maintaining security and access controls
- Troubleshooting and resolving big data platform issues
- Collaborating with other teams to ensure the consistency and integrity of data

Technical Skills Requirements:
- Experience with big data processing technologies such as Apache Hadoop, Apache Spark, or Apache Kafka.
- Understanding of distributed computing concepts such as MapReduce, Spark RDDs, or Apache Flink data streams.
- Familiarity with big data storage solutions such as HDFS, Amazon S3, or Azure Data Lake Storage.
- Knowledge of big data processing frameworks such as Apache Hive, Apache Pig, or Apache Impala.
- Excellent communication skills, with the ability to communicate complex technical information to non-technical stakeholders in a clear and concise manner.
- Must understand the company's long-term vision and align with it. Provide leadership, guidance, and support to team members, ensuring the successful completion of tasks, and promoting a positive work environment that fosters collaboration and productivity, taking responsibility for the whole team.

Qualifications:
- 4-6 years of work experience in a relevant field
- B.Tech/B.E/M.Tech or MCA degree from a reputed university; a computer science background is preferred
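For the Kafka side of such a platform, a minimal kafka-python consumer sketch; the broker address and topic are placeholders:

```python
# Hedged sketch of a JSON event consumer with kafka-python;
# topic, broker, and event fields are assumptions.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "clickstream-events",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda m: json.loads(m.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    event = message.value
    print(event.get("user_id"), event.get("action"))
```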
Posted 4 days ago
7.0 - 9.0 years
11 - 16 Lacs
Gurugram
Work from Office
Role Description: As a Technical Lead - Data Warehousing Development at Incedo, you will be responsible for designing and developing data warehousing solutions. You should have experience with ETL tools such as Informatica, Talend, or DataStage and be proficient in SQL.

Roles & Responsibilities:
- Design and develop data warehousing solutions using tools like Hadoop, Spark, or Snowflake
- Write efficient and optimized ETL scripts
- Collaborate with cross-functional teams to develop and implement data warehousing features and enhancements
- Debug and troubleshoot complex data warehousing issues
- Ensure data security, availability, and scalability of production systems

Technical Skills Requirements:
- Proficiency in ETL (Extract, Transform, Load) processes and tools such as Informatica, Talend, or DataStage.
- Experience with data modeling and schema design for data warehousing applications.
- Knowledge of data warehouse technologies such as Amazon Redshift, Snowflake, or Oracle Exadata.
- Familiarity with business intelligence (BI) tools such as Tableau, Power BI, or QlikView.
- Excellent communication skills, with the ability to communicate complex technical information to non-technical stakeholders in a clear and concise manner.
- Must understand the company's long-term vision and align with it. Should be open to new ideas and willing to learn and develop new skills. Should also be able to work well under pressure and manage multiple tasks and priorities.

Qualifications:
- 7-9 years of work experience in a relevant field
- B.Tech/B.E/M.Tech or MCA degree from a reputed university; a computer science background is preferred
Posted 4 days ago
5.0 - 8.0 years
11 - 16 Lacs
Gurugram
Work from Office
Role Description: Senior Scala Data Engineer. A Scala Data Engineer needs to be able to understand existing code and help refactor and migrate it into a new environment.

Role and Responsibilities:
- Read existing Scala Spark code.
- Create unit tests for Scala Spark code.
- Enhance and write Scala Spark code.
- Proficient in working with S3 files in CSV and Parquet formats.
- Proficient in working with MongoDB.
- Build up environments independently to test assigned work; execute manual and automated tests.
- Experience with enterprise tools like Git, Azure, and TFS.
- Experience with JIRA or a similar defect-tracking tool.
- Engage and participate on an Agile team of world-class software developers.
- Apply independence and creativity to problem solving across project assignments.
- Effectively communicate with team members, project managers, and clients, as required.

Core Skills: Scala, Spark, AWS Glue, AWS Step Functions, Maven, Terraform

Technical Skills Requirements: The candidate must demonstrate proficiency in:
- Reading and writing Scala Spark code.
- Good programming knowledge using Scala and Python.
- SQL and BDD framework knowledge.
- Experience with the AWS stack, e.g., S3, Glue, Step Functions.
- Experience in Agile/Scrum development and the full SDLC from development to production deployment.
- Good communication skills.
Posted 4 days ago
3.0 - 5.0 years
9 - 13 Lacs
Bengaluru
Work from Office
Role Description: As a Software Engineer - Data Science and Modeling at Incedo, you will be responsible for developing and deploying predictive models and machine learning algorithms to support business decision-making. You will work with data scientists, data engineers, and business analysts to understand business requirements and develop data-driven solutions. You will be skilled in programming languages such as Python or R and have experience with data science tools such as TensorFlow or Keras. You will be responsible for ensuring that models are accurate, efficient, and scalable.

Roles & Responsibilities:
- Developing and implementing machine learning models and algorithms to solve complex business problems
- Conducting data analysis and modeling using statistical and data analysis tools
- Collaborating with other teams to ensure the consistency and integrity of data
- Providing guidance and mentorship to junior data science and modeling specialists
- Presenting findings and recommendations to stakeholders

Technical Skills Requirements:
- Proficiency in statistical analysis techniques such as regression analysis, hypothesis testing, or time-series analysis.
- Knowledge of machine learning algorithms and techniques such as supervised learning, unsupervised learning, or reinforcement learning.
- Experience with data wrangling and data cleaning techniques using tools such as Python, R, or SQL.
- Understanding of big data technologies such as Hadoop, Spark, or Hive.
- Excellent communication skills, with the ability to communicate complex technical information to non-technical stakeholders in a clear and concise manner.
- Must understand the company's long-term vision and align with it.

Qualifications:
- 3-5 years of work experience in a relevant field
- B.Tech/B.E/M.Tech or MCA degree from a reputed university; a computer science background is preferred
Posted 4 days ago
7.0 - 9.0 years
9 - 13 Lacs
Gurugram
Work from Office
Role Description: As a Technical Lead - Data Science and Modeling at Incedo, you will be responsible for developing and deploying predictive models and machine learning algorithms to support business decision-making. You will work with data scientists, data engineers, and business analysts to understand business requirements and develop data-driven solutions. You will be skilled in programming languages such as Python or R and have experience with data science tools such as TensorFlow or Keras. You will be responsible for ensuring that models are accurate, efficient, and scalable.

Roles & Responsibilities:
- Developing and implementing machine learning models and algorithms to solve complex business problems
- Conducting data analysis and modeling using statistical and data analysis tools
- Collaborating with other teams to ensure the consistency and integrity of data
- Providing guidance and mentorship to junior data science and modeling specialists
- Presenting findings and recommendations to stakeholders

Technical Skills Requirements:
- Proficiency in statistical analysis techniques such as regression analysis, hypothesis testing, or time-series analysis.
- Knowledge of machine learning algorithms and techniques such as supervised learning, unsupervised learning, or reinforcement learning.
- Experience with data wrangling and data cleaning techniques using tools such as Python, R, or SQL.
- Understanding of big data technologies such as Hadoop, Spark, or Hive.
- Excellent communication skills, with the ability to communicate complex technical information to non-technical stakeholders in a clear and concise manner.
- Must understand the company's long-term vision and align with it. Should be open to new ideas and willing to learn and develop new skills. Should also be able to work well under pressure and manage multiple tasks and priorities.

Qualifications:
- 7-9 years of work experience in a relevant field
- B.Tech/B.E/M.Tech or MCA degree from a reputed university; a computer science background is preferred
Posted 4 days ago