2.0 - 5.0 years
9 - 13 Lacs
Bengaluru
Work from Office
We are looking for a skilled Data Engineer with expertise in Python/PySpark to join our team at Bay Area Technology Solutions. The ideal candidate will have 2-5 years of experience in the IT Services & Consulting industry. Roles and Responsibilities: Design, develop, and implement data pipelines using Python/PySpark. Collaborate with cross-functional teams to identify and prioritize project requirements. Develop and maintain large-scale data systems and architectures. Ensure data quality, integrity, and security. Optimize data processing workflows for performance and efficiency. Troubleshoot and resolve technical issues related to data engineering projects. Job Requirements: Strong proficiency in the Python programming language. Experience with PySpark is required. Knowledge of big data technologies and tools is necessary. Excellent problem-solving skills and attention to detail are essential. Ability to work collaboratively in a team environment. Strong communication and interpersonal skills are needed.
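For orientation, a minimal sketch of the kind of PySpark batch pipeline this role describes: read, clean, aggregate, write. The bucket paths and column names are illustrative, not from the posting.

```python
# Minimal PySpark batch pipeline sketch; paths and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_pipeline").getOrCreate()

orders = (
    spark.read.option("header", True).csv("s3://example-bucket/raw/orders/")
    .withColumn("amount", F.col("amount").cast("double"))
    .dropna(subset=["order_id", "amount"])  # basic data-quality gate
)

daily_revenue = (
    orders.groupBy(F.to_date("order_ts").alias("order_date"))
    .agg(F.sum("amount").alias("revenue"), F.count("*").alias("orders"))
)

daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/daily_revenue/"
)
```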
Posted 2 days ago
2.0 - 4.0 years
4 - 6 Lacs
Mumbai, Hyderabad
Work from Office
Job Responsibilities: Collaborate with data scientists, software engineers, and business stakeholders to understand data requirements and design efficient data models. Develop, implement, and maintain robust and scalable data pipelines, ETL processes, and data integration solutions. Extract, transform, and load data from various sources, ensuring data quality, integrity, and consistency. Optimize data processing and storage systems to handle large volumes of structured and unstructured data efficiently. Perform data cleaning, normalization, and enrichment tasks to prepare datasets for analysis and modelling. Monitor data flows and processes, and identify and resolve data-related issues and bottlenecks. Contribute to the continuous improvement of data engineering practices and standards within the organization. Stay up-to-date with industry trends and emerging technologies in data engineering, artificial intelligence, and dynamic pricing. Candidate Profile: Strong passion for data engineering, artificial intelligence, and problem-solving. Solid understanding of data engineering concepts, data modeling, and data integration techniques. Proficiency in programming languages such as Python and SQL, plus web scraping. Understanding of databases such as NoSQL, relational, and in-memory databases, and technologies like MongoDB, Redis, and Apache Spark would be an added advantage. Knowledge of distributed computing frameworks and big data technologies (e.g., Hadoop, Spark) is a plus. Excellent analytical and problem-solving skills, with a keen eye for detail. Strong communication and collaboration skills, with the ability to work effectively in a team-oriented environment. Self-motivated, quick learner, and adaptable to changing priorities and technologies. (ref:hirist.tech)
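Since the posting pairs Python and web scraping with stores like MongoDB, here is a hedged extract-transform-load sketch; the endpoint, field names, and connection string are placeholders, not part of the role.

```python
# Hypothetical ETL sketch: fetch a JSON endpoint, normalize records,
# and upsert into MongoDB. URL and fields are illustrative.
import requests
from pymongo import MongoClient, UpdateOne

resp = requests.get("https://example.com/api/prices", timeout=30)
resp.raise_for_status()
records = resp.json()

# Transform: keep only required fields, normalize types.
cleaned = [
    {"sku": r["sku"], "price": float(r["price"]), "source": "example"}
    for r in records
    if r.get("sku") and r.get("price") is not None
]

client = MongoClient("mongodb://localhost:27017")
ops = [UpdateOne({"sku": doc["sku"]}, {"$set": doc}, upsert=True) for doc in cleaned]
if ops:
    client.pricing.prices.bulk_write(ops)  # idempotent upsert load
```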
Posted 2 days ago
3.0 - 6.0 years
8 - 13 Lacs
Bengaluru
Work from Office
KPMG India is looking for an Azure Data Engineer - Assistant Manager to join our dynamic team and embark on a rewarding career journey. Liaising with coworkers and clients to elucidate the requirements for each task. Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed. Reformulating existing frameworks to optimize their functioning. Testing such structures to ensure that they are fit for use. Preparing raw data for manipulation by data scientists. Detecting and correcting errors in your work. Ensuring that your work remains backed up and readily accessible to relevant coworkers. Remaining up-to-date with industry standards and technological advancements that will improve the quality of your outputs.
Posted 2 days ago
3.0 - 4.0 years
17 - 18 Lacs
Bengaluru
Work from Office
KPMG India is looking for an Azure Data Engineer - Consultant to join our dynamic team and embark on a rewarding career journey. Assure that data is cleansed, mapped, transformed, and otherwise optimised for storage and use according to business and technical requirements. Solution design using Microsoft Azure services and other tools. The ability to automate tasks and deploy production-standard code (with unit testing, continuous integration, versioning, etc.). Load transformed data into storage and reporting structures in destinations including data warehouses, high-speed indexes, real-time reporting systems, and analytics applications. Build data pipelines to collectively bring together data. Other responsibilities include extracting data, troubleshooting, and maintaining the data warehouse.
Posted 2 days ago
2.0 - 5.0 years
4 - 8 Lacs
Hyderabad
Work from Office
Req ID: 327063. NTT DATA strives to hire exceptional, innovative, and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Python/PySpark/Apache Spark developer to join our team in Hyderabad, Telangana (IN-TG), India (IN). At NTT DATA, we know that with the right people on board, anything is possible. The quality, integrity, and commitment of our employees are key factors in our company's growth, market presence, and our ability to help our clients stay a step ahead of the competition. By hiring the best people and helping them grow both professionally and personally, we ensure a bright future for NTT DATA and for the people who work here. NTT DATA Services currently seeks a Python Developer to join our team in Hyderabad to design and build ETL solutions, with experience in data engineering and data modelling at large scale in both batch and real-time environments. Skills required: Python, PySpark, Apache Spark, Unix shell scripting, GCP, BigQuery, MongoDB, Kafka event streaming, API development, CI/CD. For Software Engineering 3: 6+ years. Mandatory: Apache Spark with Python, PySpark, GCP with BigQuery, databases. Secondary mandate: Ab Initio ETL. Good to have: Unix shell scripting, Kafka event streaming. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com
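A brief sketch of the PySpark-with-BigQuery combination this posting lists, assuming the spark-bigquery connector is available on the cluster; the project, dataset, and table names are made up.

```python
# Hedged sketch: read a BigQuery table into PySpark.
# Assumes the spark-bigquery connector jar is on the classpath.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("bq_read").getOrCreate()

events = (
    spark.read.format("bigquery")
    .option("table", "my_project.analytics.events")  # hypothetical table
    .load()
)
events.filter("event_type = 'purchase'").show(10)
```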
Posted 2 days ago
2.0 - 4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Associate Data Scientist - I. Invent Health values creativity and the application of cutting-edge technology in AI and ML to enhance data quality, engineering efficiency, and overall productivity in the Healthcare Analytics domain. As a part of the Data Science team, you will play a crucial role in our data-driven initiatives. You will work closely with our data science team to extract meaningful insights from large and complex healthcare datasets. Responsibilities: Participating in end-to-end modelling projects, handling a large variety and volume of data. Executing research and analytics, including data source identification, processing, and model/algorithm development. Conducting exploratory data analysis, feature construction, and feature engineering. Performing feature engineering to create relevant features from raw data to enhance model performance and interpretability. Developing and evaluating advanced predictive models using Python and Spark MLlib. Focusing on specialty areas such as anomaly detection, time series forecasting, NLP, and generative AI. Deploying models and monitoring their performance post-deployment. Conducting proof-of-concept research to explore new modelling techniques and methodologies. Evaluating the performance of data science models using appropriate metrics and techniques. Collaborating with cross-functional teams (e.g., clinicians, biostatisticians, engineers) to ensure the successful implementation of data science solutions. Continuously seeking to innovate and bring cutting-edge solutions to our customers. Developing high-quality data-driven solutions and collaborating effectively within a multidisciplinary team environment. Delivering presentations and reports on findings and translating complex concepts into actionable insights. Requirements: 2-4 years of data science experience. Previous experience in the healthcare domain preferred. Bachelor's in a quantitative field (Operations Research, Computer Science, Engineering, Applied Math, Statistics, Physics, Analytics, etc.); advanced courses in Business Analytics/Data Science/Data Analytics. Strong problem-solving and analytical skills. Proficiency in deep learning libraries/frameworks like Keras and TensorFlow is a plus. Strong understanding of machine learning algorithms and statistical modelling techniques. Experience in handling large datasets and proficiency in data manipulation. Proficiency in PySpark is a plus. Proficiency in any BI tool like Tableau is a plus. Ability to translate complex data science concepts into practical solutions. Excellent communication skills, both oral and written. Designing test strategies, plans, processes, methodologies, and supporting documentation for functional and non-functional areas.
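For illustration only, a minimal sketch of one specialty area named above (anomaly detection), using scikit-learn's IsolationForest on synthetic data; the "claims" framing is a made-up example, not Invent Health's dataset.

```python
# Illustrative anomaly-detection sketch on synthetic claim amounts.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
claims = rng.normal(loc=1000, scale=150, size=(500, 1))    # typical claims
outliers = rng.uniform(5000, 10000, size=(10, 1))          # inflated claims
X = np.vstack([claims, outliers])

model = IsolationForest(contamination=0.02, random_state=42).fit(X)
labels = model.predict(X)  # -1 = anomaly, 1 = normal
print(f"flagged {int((labels == -1).sum())} suspicious records")
```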
Posted 2 days ago
6.0 - 9.0 years
5 - 10 Lacs
Noida
Work from Office
Relevant experience and skills: Must haves: At least 6-9 years of work experience in US and overseas payroll. Understanding of customer invoicing and timesheet management. Quick learner with strong presentation skills. Strong sense of urgency and results-orientation. MS Office: advanced Excel and good PowerPoint skills. Acquainted with different client portals such as Wand, Fieldglass, Beeline, Coupa, and Ariba. Good to have: Background in the IT staffing business. ERP working knowledge. QuickBooks.
Posted 2 days ago
5.0 - 9.0 years
12 - 17 Lacs
Noida
Work from Office
Spark/PySpark: hands-on technical data processing. Table design knowledge using Hive, similar to RDBMS knowledge. Database SQL knowledge for retrieval of data, including transformation queries such as joins (full, left, right), ranking, and group by. Good communication skills. Additional skills: GitHub, Jenkins, and shell scripting would be an added advantage. Mandatory Competencies: Big Data - PySpark; Big Data - Spark; Big Data - Hadoop; Big Data - Hive; DevOps/Configuration Mgmt - Jenkins; Behavioural - Communication and collaboration; Database - Database Programming - SQL; DevOps/Configuration Mgmt - GitLab, GitHub, Bitbucket; DevOps/Configuration Mgmt - Basic Bash/Shell script writing.
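A short PySpark sketch of the transformation queries the posting names (left join, group by, ranking over a window); the Hive table and column names are hypothetical.

```python
# Sketch of join / group-by / rank patterns against hypothetical Hive tables.
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.enableHiveSupport().getOrCreate()

orders = spark.table("sales.orders")        # hypothetical Hive tables
customers = spark.table("sales.customers")

# Left join to enrich orders with customer attributes.
enriched = orders.join(customers, on="customer_id", how="left")

# Group-by aggregate.
totals = enriched.groupBy("region").agg(F.sum("amount").alias("total"))

# Ranking within each region, keeping the top three orders.
w = Window.partitionBy("region").orderBy(F.desc("amount"))
top_orders = enriched.withColumn("rnk", F.rank().over(w)).filter("rnk <= 3")
```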
Posted 2 days ago
5.0 - 10.0 years
10 - 15 Lacs
Noida
Work from Office
We are seeking a Data Science Engineer to design, build, and optimize scalable data and machine learning systems. This role requires strong software engineering skills, a deep understanding of data science workflows, and the ability to work cross-functionally to translate business problems into production-level data solutions. Key Responsibilities: Design, implement, and maintain data science pipelines from data ingestion to model deployment. Collaborate with data scientists to operationalize ML models and algorithms in production environments. Develop robust APIs and services for ML model inference and integration. Build and optimize large-scale data processing systems using Spark, Pandas, or similar tools. Ensure data quality and pipeline reliability through rigorous testing, validation, and monitoring. Work with cloud infrastructure (AWS) for scalable ML deployment. Manage model versioning, feature engineering workflows, and experiment tracking. Optimize performance of models and pipelines for latency, cost, and throughput. Required Qualifications: Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field. 5+ years of experience in a data science, ML engineering, or software engineering role. Proficiency in Python (preferred) and SQL; knowledge of Java, Scala, or C++ is a plus. Experience with data science libraries like Scikit-learn, XGBoost, TensorFlow, or PyTorch. Familiarity with ML deployment tools such as MLflow, SageMaker, or Vertex AI. Solid understanding of data structures, algorithms, and software engineering best practices. Experience working with databases (SQL, NoSQL) and data lakes (e.g., Delta Lake, BigQuery). Preferred Qualifications: Experience with containerization and orchestration (Docker, Kubernetes). Experience working in Agile or cross-functional teams. Familiarity with streaming data platforms (Kafka, Spark Streaming, Flink). Soft Skills: Strong communication skills to bridge technical and business teams. Excellent problem-solving and analytical thinking. Self-motivated and capable of working independently or within a team. Passion for data and a curiosity-driven mindset. Mandatory Competencies: Data Science and Machine Learning - AI/ML; Database - Database Programming - SQL; Artificial Intelligence - AI Tools - TensorFlow; Cloud - AWS - ECS; DevOps/Configuration Mgmt - Containerization (Docker, Kubernetes).
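As a hedged illustration of the experiment tracking and model versioning mentioned above, a minimal MLflow sketch; the Ridge model and the logged metric are toy examples, not the team's actual stack.

```python
# Toy MLflow sketch: track a run, log a parameter and metric,
# and store the fitted model as a versioned artifact.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

X, y = make_regression(n_samples=200, n_features=5, noise=10, random_state=0)

with mlflow.start_run(run_name="ridge_baseline"):
    model = Ridge(alpha=1.0).fit(X, y)
    mlflow.log_param("alpha", 1.0)
    mlflow.log_metric("r2", r2_score(y, model.predict(X)))
    mlflow.sklearn.log_model(model, "model")  # versioned model artifact
```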
Posted 2 days ago
8.0 - 13.0 years
15 - 19 Lacs
Bengaluru
Work from Office
Job Description Where you'll work: India (Remote). Engineering at GoTo: We're the trailblazers of remote work technology. We build powerful, flexible work software that empowers everyone to live their best life, at work and beyond. And blaze even more trails along the way. There's ample room for growth so you can blaze your own trail here too. When you join a GoTo product team, you'll take on a key role in this process and see your work be used by millions of users worldwide. Your Day to Day: As a Staff Data Engineer, you would be: Design and Develop Pipelines: Build robust, scalable, and efficient ETL/ELT data pipelines to process structured data from diverse sources. Big Data Processing: Develop and optimize large-scale data workflows using Apache Spark, with strong hands-on experience in building ETL pipelines. Cloud-Native Data Solutions: Architect and implement data solutions using AWS services such as S3, EMR, Lambda, and EKS. Data Governance: Manage and govern data using catalogs like Hive or Unity Catalog; ensure strong data lineage, access controls, and metadata management. Workflow Orchestration: Schedule, monitor, and orchestrate workflows using Apache Airflow or similar tools. Data Quality & Monitoring: Implement quality checks, logging, monitoring, and alerting to ensure pipeline reliability and visibility. Cross-Functional Collaboration: Partner with analysts, data scientists, and business stakeholders to deliver high-quality data for applications and enable self-service BI. Compliance & Security: Uphold best practices in data governance, security, and compliance across the data ecosystem. Mentorship & Standards: Mentor junior engineers and help evolve engineering practices including CI/CD, testing, and documentation. What We're Looking For: As a Staff Data Engineer, your background will look like: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. 8+ years of experience in data engineering or software development, with a proven record of maintaining production-grade pipelines. Proficient in Python and SQL for data transformation and analytics. Strong expertise in Apache Spark, including data lake management, ACID transactions, schema enforcement/evolution, and time travel. In-depth knowledge of AWS services, especially S3, EMR, Lambda, and EKS, with a solid grasp of cloud architecture and security best practices. Solid data modeling skills (dimensional, normalized) and an understanding of data warehousing and lakehouse paradigms. Experience with BI tools like Tableau or Power BI. Familiar with setting up data quality, monitoring, and observability frameworks. Excellent communication and collaboration skills, with the ability to thrive in an agile and multicultural team environment. Nice to Have: Experience working on the Databricks Platform. Knowledge of Delta or Apache Iceberg file formats. Passion for Machine Learning and AI; enthusiasm to explore and apply intelligent systems. At GoTo, authenticity and inclusive culture are key to our thriving workplace, where diverse perspectives drive innovation and growth. Our team of GoGetters is passionate about learning, exploring, and working together to achieve success while staying committed to delivering exceptional experiences for our customers. We take pride in supporting our employees with comprehensive benefits, wellness programs, and global opportunities for professional and personal development.
By maintaining an inclusive environment, we empower our teams to do their best work, make a meaningful impact, and grow their careers.
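A minimal sketch of the Airflow orchestration this role describes: one extract task feeding one transform task. The DAG id, schedule, and task bodies are illustrative stubs, written against the Airflow 2.x API.

```python
# Minimal Airflow 2.x DAG sketch; task bodies are placeholders.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw files from S3")        # placeholder for real extraction

def transform():
    print("run the Spark transform job")   # placeholder for real transform

with DAG(
    dag_id="daily_etl",                    # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                     # 'schedule' is Airflow 2.4+ syntax
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task         # extract runs before transform
```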
Posted 2 days ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Role Description: This is a contract role for a Python Back End Developer. The Python Back End Developer will be responsible for day-to-day tasks including back-end web development, software development, and object-oriented programming (OOP). This hybrid role is based in Hyderabad, with some work-from-home flexibility. Qualifications: Proficiency in back-end web development and software development. Strong understanding of object-oriented programming (OOP). Basic skills in front-end development. Solid programming skills. Experience with cloud platforms (GCP/AWS). Excellent problem-solving and analytical skills. Ability to work both independently and in a team environment. Experience with Python frameworks like Django or Flask is a plus. Nice to have: experience with data pipelines (Airflow, Netflix Conductor, etc.) and with Apache Spark/Beam and Kafka.
Posted 2 days ago
10.0 years
0 Lacs
Ponnani, Kerala, India
Remote
Join the Journey: MLOps Engineer at Gaaya Realtech (Dubai, UAE 🌍) Imagine building the technology that's reshaping real estate globally, backed by visionary architects and innovators at Gaaya Realtech. We don't just construct buildings—we're crafting smarter cities and intelligent spaces for tomorrow. About Gaaya Realtech We're a powerhouse at the intersection of construction, architecture, and advanced realty. Our drive? Pioneering innovation and technology in real estate—making every project not just a structure, but a story. Our Client: Revolutionizing Global EdTech You'll work directly with a funded, high-growth EdTech startup. Our mission tackles a worldwide challenge with scalable solutions, initially targeting the US and Canadian markets—all from the heart of Dubai. Think competitive packages, a collaborative work culture, and a journey that begins remotely before relocating to the dazzling UAE. The Opportunity: MLOps Engineer. Your Story Starts Here Picture this: each day, you contribute to real solutions that accelerate business and education for a global audience. You'll be central to deploying, automating, and managing machine learning systems that scale from POC to world-class production. Why You Belong Here Impact at Scale: Your work powers products that touch users across continents. Smart, Collaborative Team: Join a diverse, talented group—where ideas spark and innovation flourishes. Growth & Challenge: You'll dive into pioneering projects, sharpening your edge in MLOps, cloud, and automation. Future-Forward Environment: Join us remotely today and collaborate in one of the world's most inspiring cities (Dubai) tomorrow! What We're Looking For Experience: 2-10 years in industry, ideally 2-5. Skills: Smart, solution-oriented, collaborative. Fluent communicator eager for knowledge-sharing. Passionate about deploying, scaling, and automating ML systems.
Posted 2 days ago
4.0 - 8.0 years
10 - 14 Lacs
Chennai
Work from Office
Role Description: Provides leadership for the overall architecture, design, development, and deployment of a full-stack cloud-native data analytics platform. Designing and augmenting solution architecture for data ingestion, data preparation, data transformation, data load, ML and simulation modelling, Java BE and FE, state machines, API management, and intelligence consumption using data products, on cloud. Understand business requirements and help in developing high-level and low-level data engineering and data processing documentation for the cloud-native architecture. Developing conceptual, logical, and physical target-state architecture, engineering, and operational specs. Work with the customer, users, technical architects, and application designers to define the solution requirements and structure for the platform. Model and design the application data structure, storage, and integration. Lead the database analysis, design, and build effort. Work with the application architects and designers to design the integration solution. Ensure that the database designs fulfill the requirements, including data volume, frequency needs, and long-term data growth. Able to perform data engineering tasks using Spark. Knowledge of developing efficient frameworks for development and testing using Sqoop/NiFi/Kafka/Spark Streaming/WebHDFS/Python to enable seamless data ingestion processes onto the Hadoop/BigQuery platforms. Enabling data governance and data discovery. Exposure to job monitoring frameworks along with validation automation. Exposure to handling structured, unstructured, and streaming data. Technical Skills: Experience with building data platforms on cloud (data lake, data warehouse environments, Databricks). Strong technical understanding of data modeling, design, and architecture principles and techniques across master data, transaction data, and derived/analytic data. Proven background of designing and implementing architectural solutions which solve strategic and tactical business needs. Deep knowledge of best practices through relevant experience across data-related disciplines and technologies, particularly for enterprise-wide data architectures, data management, data governance, and data warehousing. Highly competent with database design. Highly competent with data modeling. Strong data warehousing and business intelligence skills, including handling ELT and scalability issues for enterprise-level data warehouses and creating ETLs/ELTs to handle data from various data sources and various formats. Strong hands-on experience with programming languages like Python and Scala with Spark and Beam. Solid hands-on and solution-architecting experience in cloud technologies: AWS, Azure, and GCP (GCP preferred). Hands-on working experience of data processing at scale with event-driven systems and message queues (Kafka/Flink/Spark Streaming). Hands-on working experience with GCP services like BigQuery, Dataproc, Pub/Sub, Dataflow, Cloud Composer, API Gateway, data lakes, Bigtable, Spark, and Apache Beam, plus feature engineering/data processing to be used for model development. Experience gathering and processing raw data at scale (including writing scripts, web scraping, calling APIs, writing SQL queries, etc.). Experience building data pipelines for structured/unstructured, real-time/batch, and events/synchronous/asynchronous workloads using MQ, Kafka, and stream processing. Hands-on working experience in analyzing source system data and data flows, working with structured and unstructured data. Must be very strong in writing Spark SQL queries. Strong organizational skills, with the ability to work autonomously as well as lead a team. Pleasant personality and strong communication and interpersonal skills. Qualifications: A bachelor's degree in computer science, computer engineering, or a related discipline is required to work as a technical lead. Certification in GCP would be a big plus. Individuals in this field can further display their leadership skills by completing the Project Management Professional certification offered by the Project Management Institute.
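As a sketch of the event-driven, at-scale processing this role describes, a Spark Structured Streaming job reading from Kafka; the broker, topic, and bucket paths are placeholders, and the spark-sql-kafka package is assumed to be available.

```python
# Hedged sketch: Spark Structured Streaming from a Kafka topic to Parquet.
# Requires the spark-sql-kafka connector on the classpath.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("events_stream").getOrCreate()

stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "events")                     # placeholder topic
    .load()
    .selectExpr("CAST(value AS STRING) AS payload")
)

query = (
    stream.writeStream.format("parquet")
    .option("path", "gs://example-bucket/landing/events/")
    .option("checkpointLocation", "gs://example-bucket/checkpoints/events/")
    .trigger(processingTime="1 minute")   # micro-batch every minute
    .start()
)
query.awaitTermination()
```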
Posted 2 days ago
4.0 - 6.0 years
9 - 13 Lacs
Chennai
Work from Office
Role Description: As a Senior Data Science and Modeling Specialist at Incedo, you will be responsible for developing and deploying predictive models and machine learning algorithms to support business decision-making. You will work with data scientists, data engineers, and business analysts to understand business requirements and develop data-driven solutions. You will be skilled in programming languages such as Python or R and have experience in data science tools such as TensorFlow or Keras. You will be responsible for ensuring that models are accurate, efficient, and scalable. Roles & Responsibilities: Developing and implementing machine learning models and algorithms to solve complex business problems Conducting data analysis and modeling using statistical and data analysis tools Collaborating with other teams to ensure the consistency and integrity of data Providing guidance and mentorship to junior data science and modeling specialists Presenting findings and recommendations to stakeholders Technical Skills Requirements: Proficiency in statistical analysis techniques such as regression analysis, hypothesis testing, or time-series analysis. Knowledge of machine learning algorithms and techniques such as supervised learning, unsupervised learning, or reinforcement learning. Experience with data wrangling and data cleaning techniques using tools such as Python, R, or SQL. Understanding of big data technologies such as Hadoop, Spark, or Hive. Must have excellent communication skills and be able to communicate complex technical information to non-technical stakeholders in a clear and concise manner. Must understand the company's long-term vision and align with it. Provide leadership, guidance, and support to team members, ensuring the successful completion of tasks, and promoting a positive work environment that fosters collaboration and productivity, taking responsibility for the whole team. Qualifications: 4-6 years of work experience in a relevant field. B.Tech/B.E/M.Tech or MCA degree from a reputed university. Computer science background is preferred.
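An illustrative sketch of one statistical technique this posting lists (regression analysis), using statsmodels on synthetic data; nothing here reflects Incedo's actual work.

```python
# Toy ordinary-least-squares regression with statsmodels.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=100)  # known true relationship

X = sm.add_constant(x)        # add intercept term
result = sm.OLS(y, X).fit()
print(result.summary())       # coefficients, p-values, R-squared
```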
Posted 2 days ago
4.0 - 6.0 years
6 - 10 Lacs
Gurugram
Work from Office
Role Description: As a Senior Big Data Platform Engineer at Incedo, you will be responsible for designing and implementing big data platforms to support large-scale data integration projects. You will work with data architects and data engineers to define the platform architecture and build the necessary infrastructure. You will be skilled in big data technologies such as Hadoop, Spark, and Kafka and have experience in cloud computing platforms such as AWS or Azure. You will be responsible for ensuring the performance, scalability, and security of the big data platform and troubleshooting any issues that arise. Roles & Responsibilities: Designing, developing and maintaining large-scale big data platforms using technologies like Hadoop, Spark and Kafka Creating and managing data warehouses, data lakes and data marts Implementing and optimizing ETL processes and data pipelines Developing and maintaining security and access controls Troubleshooting and resolving big data platform issues Collaborating with other teams to ensure the consistency and integrity of data Technical Skills Requirements: Experience with big data processing technologies such as Apache Hadoop, Apache Spark, or Apache Kafka. Understanding of distributed computing concepts such as MapReduce, Spark RDDs, or Apache Flink data streams. Familiarity with big data storage solutions such as HDFS, Amazon S3, or Azure Data Lake Storage. Knowledge of big data processing frameworks such as Apache Hive, Apache Pig, or Apache Impala. Must have excellent communication skills and be able to communicate complex technical information to non-technical stakeholders in a clear and concise manner. Must understand the company's long-term vision and align with it. Provide leadership, guidance, and support to team members, ensuring the successful completion of tasks, and promoting a positive work environment that fosters collaboration and productivity, taking responsibility for the whole team. Qualifications: 4-6 years of work experience in a relevant field. B.Tech/B.E/M.Tech or MCA degree from a reputed university. Computer science background is preferred.
Posted 2 days ago
7.0 - 9.0 years
11 - 16 Lacs
Gurugram
Work from Office
Role Description: As a Technical Lead - Data Warehousing Development at Incedo, you will be responsible for designing and developing data warehousing solutions. You should have experience with ETL tools such as Informatica, Talend, or DataStage and be proficient in SQL. Roles & Responsibilities: Design and develop data warehousing solutions using tools like Hadoop, Spark, or Snowflake Write efficient and optimized ETL scripts Collaborate with cross-functional teams to develop and implement data warehousing features and enhancements Debug and troubleshoot complex data warehousing issues Ensure data security, availability, and scalability of production systems Technical Skills Requirements: Proficiency in ETL (Extract, Transform, Load) processes and tools such as Informatica, Talend, or DataStage. Experience with data modeling and schema design for data warehousing applications. Knowledge of data warehouse technologies such as Amazon Redshift, Snowflake, or Oracle Exadata. Familiarity with business intelligence (BI) tools such as Tableau, Power BI, or QlikView. Must have excellent communication skills and be able to communicate complex technical information to non-technical stakeholders in a clear and concise manner. Must understand the company's long-term vision and align with it. Should be open to new ideas and be willing to learn and develop new skills. Should also be able to work well under pressure and manage multiple tasks and priorities. Qualifications: 7-9 years of work experience in a relevant field. B.Tech/B.E/M.Tech or MCA degree from a reputed university. Computer science background is preferred.
Posted 2 days ago
5.0 - 8.0 years
11 - 16 Lacs
Gurugram
Work from Office
Role Description: Senior Scala Data Engineer. A Scala Data Engineer needs to be able to understand existing code, help refactor it, and migrate it into a new environment. Role and responsibilities: Read existing Scala Spark code. Create unit tests for Scala Spark code. Enhance and write Scala Spark code. Proficient in working with S3 files in CSV and Parquet formats. Proficient in working with MongoDB. Building up environments independently to test assigned work, and executing manual and automated tests. Experience with enterprise tools like Git, Azure, and TFS. Experience with JIRA or a similar defect-tracking tool. Engage and participate on an Agile team of world-class software developers. Apply independence and creativity to problem solving across project assignments. Effectively communicate with team members, project managers, and clients, as required. Core Skills: Scala, Spark, AWS Glue, AWS Step Functions, Maven, Terraform. Technical skills requirements: The candidate must demonstrate proficiency in reading and writing Scala Spark code. Good programming knowledge using Scala and Python. SQL and BDD framework knowledge. Experience in the AWS stack: S3, Glue, Step Functions. Experience in Agile/Scrum development. Full SDLC from development to production deployment. Good communication skills.
Posted 2 days ago
3.0 - 5.0 years
9 - 13 Lacs
Bengaluru
Work from Office
Role Description: As a Software Engineer - Data Science and Modeling at Incedo, you will be responsible for developing and deploying predictive models and machine learning algorithms to support business decision-making. You will work with data scientists, data engineers, and business analysts to understand business requirements and develop data-driven solutions. You will be skilled in programming languages such as Python or R and have experience in data science tools such as TensorFlow or Keras. You will be responsible for ensuring that models are accurate, efficient, and scalable. Roles & Responsibilities: Developing and implementing machine learning models and algorithms to solve complex business problems Conducting data analysis and modeling using statistical and data analysis tools Collaborating with other teams to ensure the consistency and integrity of data Providing guidance and mentorship to junior data science and modeling specialists Presenting findings and recommendations to stakeholders Technical Skills Requirements: Proficiency in statistical analysis techniques such as regression analysis, hypothesis testing, or time-series analysis. Knowledge of machine learning algorithms and techniques such as supervised learning, unsupervised learning, or reinforcement learning. Experience with data wrangling and data cleaning techniques using tools such as Python, R, or SQL. Understanding of big data technologies such as Hadoop, Spark, or Hive. Must have excellent communication skills and be able to communicate complex technical information to non-technical stakeholders in a clear and concise manner. Must understand the company's long-term vision and align with it. Qualifications: 3-5 years of work experience in a relevant field. B.Tech/B.E/M.Tech or MCA degree from a reputed university. Computer science background is preferred.
Posted 2 days ago
7.0 - 9.0 years
9 - 13 Lacs
Gurugram
Work from Office
Role Description: As a Technical Lead - Data Science and Modeling at Incedo, you will be responsible for developing and deploying predictive models and machine learning algorithms to support business decision-making. You will work with data scientists, data engineers, and business analysts to understand business requirements and develop data-driven solutions. You will be skilled in programming languages such as Python or R and have experience in data science tools such as TensorFlow or Keras. You will be responsible for ensuring that models are accurate, efficient, and scalable. Roles & Responsibilities: Developing and implementing machine learning models and algorithms to solve complex business problems Conducting data analysis and modeling using statistical and data analysis tools Collaborating with other teams to ensure the consistency and integrity of data Providing guidance and mentorship to junior data science and modeling specialists Presenting findings and recommendations to stakeholders Technical Skills Requirements: Proficiency in statistical analysis techniques such as regression analysis, hypothesis testing, or time-series analysis. Knowledge of machine learning algorithms and techniques such as supervised learning, unsupervised learning, or reinforcement learning. Experience with data wrangling and data cleaning techniques using tools such as Python, R, or SQL. Understanding of big data technologies such as Hadoop, Spark, or Hive. Must have excellent communication skills and be able to communicate complex technical information to non-technical stakeholders in a clear and concise manner. Must understand the company's long-term vision and align with it. Should be open to new ideas and be willing to learn and develop new skills. Should also be able to work well under pressure and manage multiple tasks and priorities. Qualifications: 7-9 years of work experience in a relevant field. B.Tech/B.E/M.Tech or MCA degree from a reputed university. Computer science background is preferred.
Posted 2 days ago
4.0 - 8.0 years
10 - 14 Lacs
Gurugram
Work from Office
The ideal candidate will have a strong background in data engineering and excellent problem-solving skills. Roles and Responsibilities: Design, develop, and implement large-scale data pipelines and architectures. Collaborate with cross-functional teams to identify and prioritize project requirements. Develop and maintain complex data systems and databases. Ensure data quality, integrity, and security. Optimize data processing workflows for improved performance and efficiency. Troubleshoot and resolve technical issues related to data engineering. Job Requirements: Strong knowledge of data engineering principles and practices. Experience with data modeling, database design, and data warehousing. Proficiency in programming languages such as Python, Java, or C++. Excellent problem-solving skills and attention to detail. Ability to work collaboratively in a team environment. Strong communication and interpersonal skills.
Posted 2 days ago
4.0 - 6.0 years
7 - 12 Lacs
Gurugram
Work from Office
Role Description: As a Senior Big Data Platform Engineer at Incedo, you will be responsible for designing and implementing big data platforms to support large-scale data integration projects. You will work with data architects and data engineers to define the platform architecture and build the necessary infrastructure. You will be skilled in big data technologies such as Hadoop, Spark, and Kafka and have experience in cloud computing platforms such as AWS or Azure. You will be responsible for ensuring the performance, scalability, and security of the big data platform and troubleshooting any issues that arise. Roles & Responsibilities: Designing, developing and maintaining large-scale big data platforms using technologies like Hadoop, Spark and Kafka Creating and managing data warehouses, data lakes and data marts Implementing and optimizing ETL processes and data pipelines Developing and maintaining security and access controls Troubleshooting and resolving big data platform issues Collaborating with other teams to ensure the consistency and integrity of data Technical Skills Requirements: Experience with big data processing technologies such as Apache Hadoop, Apache Spark, or Apache Kafka. Understanding of distributed computing concepts such as MapReduce, Spark RDDs, or Apache Flink data streams. Familiarity with big data storage solutions such as HDFS, Amazon S3, or Azure Data Lake Storage. Knowledge of big data processing frameworks such as Apache Hive, Apache Pig, or Apache Impala. Must have excellent communication skills and be able to communicate complex technical information to non-technical stakeholders in a clear and concise manner. Must understand the company's long-term vision and align with it. Provide leadership, guidance, and support to team members, ensuring the successful completion of tasks, and promoting a positive work environment that fosters collaboration and productivity, taking responsibility for the whole team. Qualifications: 4-6 years of work experience in a relevant field. B.Tech/B.E/M.Tech or MCA degree from a reputed university. Computer science background is preferred.
Posted 2 days ago
6.0 years
15 - 90 Lacs
Pune, Maharashtra, India
On-site
📌 Job Title: Technical Architect 📍 Location: Pune (Onsite/Hybrid) 🏢 Company: AdElement Media Solutions Pvt. Ltd. 🕒 Experience Required: 6+ Years 💼 Industry: AdTech / Digital Advertising Technology Salary - 80 LPA 🔧 About The Role We are looking for a Technical Architect with a strong engineering background and hands-on experience in designing and developing scalable products. The ideal candidate must have core experience in AdTech platforms, product architecture, and engineering leadership. 🛠 Key Responsibilities Design, architect, and oversee the implementation of scalable and high-performance AdTech products. Collaborate with product managers, developers, and stakeholders to define technical direction. Own the end-to-end architecture decisions and ensure alignment with business goals. Review code, mentor engineers, and enforce engineering best practices. Drive technical innovation and optimize system design for performance, scalability, and reliability. Lead architecture discussions for RTB systems, data pipelines, ad servers, or bidding platforms. ✅ Required Qualifications & Skills Bachelor's or Master's degree in Computer Science, Engineering, or a related field. Minimum 6+ years of hands-on experience in software development and system design. Proven experience in building scalable backend systems in high-traffic environments. Strong programming skills in Node.js, Java, or Python. In-depth understanding of the AdTech ecosystem, including DSPs, SSPs, RTB, DMPs, ad servers, etc. Expertise in cloud platforms (AWS preferred), CI/CD, and containerization (Docker, Kubernetes). Experience in handling data-intensive pipelines using tools like Kafka, Spark, or similar. Familiarity with performance tuning, monitoring tools, and high-availability systems. 💡 Good To Have Experience working in startups or fast-paced product teams. Knowledge of data privacy regulations (GDPR, CCPA) related to online advertising. Prior experience in leading cross-functional engineering teams. 📩 How To Apply If you're passionate about architecting high-performance AdTech solutions and ready to make an impact, we'd love to hear from you. Please send your resume. Skills: Java, Amazon Web Services (AWS), DSP, SSP, design, and product development
Posted 2 days ago
3.0 years
30 - 40 Lacs
Noida, Uttar Pradesh, India
On-site
About Us CLOUDSUFI, a Google Cloud Premier Partner, is a data science and product engineering organization building products and solutions for the technology and enterprise industries. We firmly believe in the power of data to transform businesses and make better decisions. We combine unmatched experience in business processes with cutting-edge infrastructure and cloud services. We partner with our customers to monetize their data and make enterprise data dance. Our Values: We are a passionate and empathetic team that prioritizes human values. Our purpose is to elevate the quality of lives for our family, customers, partners, and the community. Equal Opportunity Statement: CLOUDSUFI is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified candidates receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, and national origin status. We provide equal opportunities in employment, advancement, and all other areas of our workplace. Please explore more at https://www.cloudsufi.com/. Position Overview: Seeking an experienced Data Engineer to design, develop, and productionize graph database solutions using Neo4j for economic data analysis and modeling. This role requires expertise in graph database architecture, data pipeline development, and production system deployment. Key Responsibilities: Graph Database Development: Design and implement Neo4j graph database schemas for complex economic datasets. Develop efficient graph data models representing economic relationships, transactions, and market dynamics. Create and optimize Cypher queries for complex analytical workloads. Build graph-based data pipelines for real-time and batch processing. Data Engineering & Pipeline Development: Architect scalable data ingestion frameworks for structured and unstructured economic data. Develop ETL/ELT processes to transform relational and time-series data into graph formats. Implement data validation, quality checks, and monitoring systems. Build APIs and services for graph data access and manipulation. Production Systems & Operations: Deploy and maintain Neo4j clusters in production environments. Implement backup, disaster recovery, and high-availability solutions. Monitor database performance, optimize queries, and manage capacity planning. Establish CI/CD pipelines for graph database deployments. Economic Data Specialization: Model financial market relationships, economic indicators, and trading networks. Create graph representations of supply chains, market structures, and economic flows. Develop graph analytics for fraud detection, risk assessment, and market analysis. Collaborate with economists and analysts to translate business requirements into graph solutions. Required Qualifications - Technical Skills: Neo4j Expertise: 3+ years of hands-on experience with Neo4j database development. Graph Modeling: Strong understanding of graph theory and data modeling principles. Cypher Query Language: Advanced proficiency in writing complex Cypher queries. Programming: Python, Java, or Scala for data processing and application development. Data Pipeline Tools: Experience with Apache Kafka, Apache Spark, or similar frameworks. Cloud Platforms: AWS, GCP, or Azure with containerization (Docker, Kubernetes). Database & Infrastructure: Experience with graph database administration and performance tuning. Knowledge of distributed systems and database clustering. Understanding of data warehousing concepts and dimensional modeling. Familiarity with other databases (PostgreSQL, MongoDB, Elasticsearch). Economic Data Experience: Experience working with financial datasets, market data, or economic indicators. Understanding of financial data structures and regulatory requirements. Knowledge of data governance and compliance in financial services. Preferred Qualifications: Neo4j Certification: Neo4j Certified Professional or Graph Data Science certification. Advanced Degree: Master's in Computer Science, Economics, or a related field. Industry Experience: 5+ years in financial services, fintech, or economic research. Additional Skills: Machine learning on graphs, network analysis, time-series analysis. Technical Environment: Neo4j Enterprise Edition with APOC procedures; Apache Kafka for streaming data ingestion; Apache Spark for large-scale data processing; Docker and Kubernetes for containerized deployments; Git and Jenkins/GitLab CI for version control and deployment; monitoring tools: Prometheus, Grafana, ELK stack. Application Requirements: Portfolio demonstrating Neo4j graph database projects. Examples of production graph systems you've built. Experience with economic or financial data modeling preferred. Skills: Graph Databases, Neo4j, and Python
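A hedged sketch of the Cypher-plus-Python-driver work described above; the connection details, node labels, and relationship schema are hypothetical, not CLOUDSUFI's actual model.

```python
# Hypothetical graph query via the official neo4j Python driver:
# rank a company's counterparties by total transacted amount.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "secret"))

def top_counterparties(tx, company):
    # Company nodes linked by a weighted TRANSACTED_WITH relationship
    # (labels and properties are illustrative).
    result = tx.run(
        """
        MATCH (a:Company {name: $name})-[t:TRANSACTED_WITH]->(b:Company)
        RETURN b.name AS counterparty, sum(t.amount) AS total
        ORDER BY total DESC LIMIT 5
        """,
        name=company,
    )
    return [record.data() for record in result]

with driver.session() as session:
    # execute_read is the driver 5.x API (read_transaction in 4.x).
    print(session.execute_read(top_counterparties, "Acme Corp"))
driver.close()
```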
Posted 2 days ago
2.0 - 4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Software Engineering Roles – Multiple Levels Level 1: Job Title: Software Engineer | 2-4 Years Of Experience | 5 Positions Level 2: Job Title: Associate Software Engineer | 5-9 Years Of Experience | 3 Positions Level 3: Job Title: Staff Software Engineer | 12-15 Years Of Experience | 2 Positions Location: Chennai, Tamil Nadu, India Duration: FTE / Permanent Type: On-Site The Challenge: We are looking for skilled Software Engineers at multiple levels to join our team, where you'll play a key role in developing and maintaining high-performance, event-driven systems for real-time applications. You'll work closely with the team to implement microservices, optimize code performance, and contribute to the overall success of our technical projects. Tech Stack to Focus: • JAVA • Spring Boot • Microservices • Kafka • Hadoop • SQL and NoSQL Roles & Responsibilities: Component Development: Collaborate in developing and maintaining components of high-performance, real-time systems, following the guidance of senior team members. Microservices Implementation: Build microservices using Java, Python, or Go, adhering to established architectural patterns for scalability and resilience. Performance Optimization: Enhance code performance by focusing on efficient memory management, concurrency, and I/O operations to meet demanding performance standards. Database Management: Work with both SQL and NoSQL databases to create efficient data storage and retrieval solutions for high-volume environments. Real-Time Analytics: Assist in developing real-time analytics features, contributing to the creation of insightful visualizations for stakeholders. Monitoring & Alerting: Participate in developing monitoring and alerting solutions, with a focus on key performance indicators and system health metrics. Infrastructure as Code (IaC): Support the implementation of IaC practices, helping to create and maintain deployment scripts for consistent and reliable deployments. Container Orchestration: Contribute to container orchestration strategies, focusing on efficient resource utilization and auto-scaling. Caching & Data Access: Implement and optimize caching strategies and data access patterns to improve system responsiveness. Code Reviews: Engage in code reviews, offering constructive feedback and incorporating suggestions to enhance code quality. Production Support: Assist in troubleshooting and resolving production issues, including participating in on-call rotations as required. Technical Documentation: Contribute to technical documentation, ensuring that system designs and implementations are clearly documented. Proof-of-Concept Projects: Participate in proof-of-concept initiatives, researching and implementing new technologies under the guidance of senior engineers. Knowledge Sharing: Actively participate in team knowledge-sharing sessions, presenting on new technologies and best practices. Essential Skills & Requirements: Educational Background: Bachelor’s degree in Computer Science or a related field. Technical Proficiency: Strong skills in at least one major programming language (Java, Python, or Go), with a focus on writing clean, maintainable code. Microservices & Event-Driven Systems: Experience with microservices architecture and event-driven systems. Distributed Systems: Solid understanding of distributed systems concepts and associated challenges. Database Skills: Practical experience working with both SQL and NoSQL databases. 
Cloud & Containerization: Familiarity with cloud platforms (AWS, Azure, GCP) and containerization technologies like Docker. Big Data: Basic understanding of big data technologies such as Hadoop, Spark, or Kafka. Version Control & CI/CD: Experience with version control systems (preferably Git) and CI/CD pipelines. Problem-Solving: Strong problem-solving abilities and experience in debugging complex issues. Communication & Teamwork: Excellent communication skills and a proven ability to work effectively within a team. Continuous Learning: Eagerness to learn new technologies and adapt to changing methodologies. Agile Practices: Basic understanding of agile development practices
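A minimal sketch of the event-driven consumer pattern this stack implies, written with the kafka-python client in Python (one of the languages the posting allows); the topic, broker, and message fields are placeholders.

```python
# Hypothetical Kafka consumer for an event-driven order service.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "orders",                               # placeholder topic
    bootstrap_servers="broker:9092",        # placeholder broker
    group_id="order-service",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    order = message.value
    # Placeholder business logic; a real service would process idempotently
    # and emit downstream events after handling each record.
    print(f"processing order {order.get('id')} from partition {message.partition}")
```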
Posted 2 days ago
5.0 - 7.0 years
0 Lacs
Rajasthan, India
Remote
At GKM IT, we're passionate about building seamless digital experiences powered by robust and intelligent data systems. We're on the lookout for a Data Engineer - Senior II to architect and maintain high-performance data platforms that fuel decision-making and innovation. If you enjoy designing scalable pipelines, optimising data systems, and leading with technical excellence, you'll thrive in our fast-paced, outcome-driven culture. You'll take ownership of building reliable, secure, and scalable data infrastructure—from streaming pipelines to data lakes. Working closely with engineers, analysts, and business teams, you'll ensure that data is not just available, but meaningful and impactful across the organization. Requirements: 5 to 7 years of experience in data engineering. Architect and maintain scalable, secure, and reliable data platforms and pipelines. Design and implement data lake/data warehouse solutions such as Redshift, BigQuery, Snowflake, or Delta Lake. Build real-time and batch data pipelines using tools like Apache Airflow, Kafka, Spark, and DBT. Ensure data governance, lineage, quality, and observability. Collaborate with stakeholders to define data strategies, architecture, and KPIs. Lead code reviews and enforce best practices. Mentor junior and mid-level engineers. Optimize query performance, data storage, and infrastructure. Integrate CI/CD workflows for data deployment and automated testing. Evaluate and implement new tools and technologies as required. Demonstrate expert-level proficiency in Python and SQL. Possess deep knowledge of distributed systems and data processing frameworks. Be proficient in cloud platforms (AWS, GCP, or Azure), containerization, and CI/CD processes. Have experience with streaming platforms like Kafka or Kinesis and orchestration tools. Be highly skilled with Airflow, DBT, and data warehouse performance tuning. Exhibit strong leadership, communication, and mentoring skills. Benefits: We don't just hire employees—we invest in people. At GKM IT, we've designed a benefits experience that's thoughtful, supportive, and actually useful. Here's what you can look forward to: Top-Tier Work Setup: You'll be equipped with a premium MacBook and all the accessories you need. Great tools make great work. Flexible Schedules & Remote Support: Life isn't 9-to-5. Enjoy flexible working hours, emergency work-from-home days, and utility support that makes remote life easier. Quarterly Performance Bonuses: We don't believe in waiting a whole year to celebrate your success. Perform well, and you'll see it in your paycheck—quarterly. Learning is Funded Here: Conferences, courses, certifications—if it helps you grow, we've got your back. We even offer a dedicated educational allowance. Family-First Culture: Your loved ones matter to us too. From birthday and anniversary vouchers (Amazon, BookMyShow) to maternity and paternity leaves—we're here for life outside work. Celebrations & Gifting, The GKM IT Way: Onboarding hampers, festive goodies (Diwali, Holi, New Year), and company anniversary surprises—it's always celebration season here. Team Bonding Moments: We love food, and we love people. Quarterly lunches, dinners, and fun company retreats help us stay connected beyond the screen. Healthcare That Has You Covered: Enjoy comprehensive health insurance for you and your family—because peace of mind shouldn't be optional. Extra Rewards for Extra Effort: Weekend work doesn't go unnoticed, and great referrals don't go unrewarded. From incentives to bonuses—you'll feel appreciated.
Posted 2 days ago