Get alerts for new jobs matching your selected skills, preferred locations, and experience range. Manage Job Alerts
2.0 - 7.0 years
8 - 18 Lacs
pune
Hybrid
Role: Data Engineer II

A growing AdTech organization is seeking a Data Engineer II to join a small, highly influential Data Engineering team. In this role, you will evolve and optimize high-volume, low-latency data pipeline architecture and improve data flow and collection processes across multiple teams. The ideal candidate is an experienced data pipeline builder and data wrangler who thrives on optimizing systems and building scalable solutions from the ground up. You will support software engineers, product managers, business intelligence analysts, and data scientists on data initiatives while ensuring best practices for data delivery architecture are applied consistently across projects. This role is well-suited for self-starters eager to optimize or re-design modern data architectures to support next-generation products and initiatives.

Responsibilities:
- Design and maintain high-throughput data platform architecture handling hundreds of billions of daily events.
- Explore, refine, and assemble large, complex data sets aligned with business and product needs.
- Identify and implement internal process improvements, such as automation, data delivery optimization, and infrastructure re-design for scalability.
- Build infrastructure for optimal extraction, transformation, and loading (ETL) of data from diverse sources using Spark, EMR, Snowpark, Kafka, and related technologies.
- Collaborate with stakeholders across distributed teams to resolve data-related issues and support infrastructure requirements.
- Translate business requirements into clear technical solutions for both technical and non-technical audiences.

Qualifications:
- 2+ years of experience in a Data Engineer role.
- Bachelor's degree (or higher) in Computer Science or a related Engineering field.
- Proven experience building and optimizing big data pipelines, architectures, and datasets.
- Strong working knowledge of Databricks/Spark and associated APIs.
- Proficiency in programming/scripting with Python, Java, or Scala.
- Experience with relational databases, SQL authoring/optimization, and working across multiple database technologies.
- Hands-on experience with AWS cloud services such as EC2, EMR, and RDS.
- Familiarity with NoSQL data stores (e.g., Elasticsearch, Apache Druid).
- Experience with data pipeline and workflow management tools such as Airflow.
- Ability to perform root cause analysis on complex data and processes to solve business problems and identify improvements.
- Strong experience with unstructured and semi-structured data formats (JSON, Parquet, Iceberg, Avro, Protobuf).
- Deep knowledge of data transformation, data structures, metadata, dependency, and workload management.
- Demonstrated ability to process, manipulate, and extract insights from large datasets.
- Working knowledge of stream processing, message queuing, and highly scalable big data storage systems.
- Experience collaborating with cross-functional teams in fast-paced environments.

Preferred Skills:
- Experience with streaming systems such as Kafka, Spark Streaming, or Kafka Streams.
- Knowledge of Snowflake/Snowpark.
- Familiarity with DBT.
- Exposure to AdTech industry data and systems.

Thanks & Regards,
Gloria Dias
Research Associate | LH
Gloria.Dias@persolapac.com
persolindia.com
Pune, India
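The ETL responsibilities in the posting above (Spark, EMR, Snowpark, Kafka) boil down to an extract-transform-load shape. As a rough, stdlib-only Python sketch of that shape — every event field and name here is hypothetical, and a real pipeline would run on Spark rather than plain generators:

```python
import json

def extract(raw_lines):
    """Parse newline-delimited JSON events, skipping malformed records."""
    for line in raw_lines:
        try:
            yield json.loads(line)
        except json.JSONDecodeError:
            continue  # a real pipeline would route these to a dead-letter store

def transform(events):
    """Keep click events and project only the fields downstream consumers need."""
    for e in events:
        if e.get("type") == "click":
            yield {"user": e["user"], "ts": e["ts"]}

def load(rows):
    """Stand-in for a write to Parquet/Snowflake: just collect into a list."""
    return list(rows)

raw = ['{"type": "click", "user": "u1", "ts": 1}',
       'not json',
       '{"type": "view", "user": "u2", "ts": 2}']
result = load(transform(extract(raw)))
# result == [{"user": "u1", "ts": 1}]
```

Because each stage is a generator, records stream through one at a time — the same composability Spark gives at cluster scale.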
Posted 9 hours ago
2.0 - 6.0 years
3 - 7 Lacs
penukonda
Work from Office
Job Description: Customer Audit / Yard Audit / Port Audit

- Ensure mass-production vehicle quality through stringent audits.
- Ensure mass-production cars' appearance, function, dynamics, water tightness, and torque quality in KIN.
- Ensure yard-storage vehicle quality confirmation through stringent audits.
- Ensure yard-storage cars' appearance, function, parts condition (LTSM), and paint quality in KIN.
- Ensure port-parked cars' quality confirmation through stringent audits.
- Ensure port-storage cars' appearance, function, parts condition (LTSM), paint, battery status, and body and parts rust.
- Prepare the audit procedure, audit plan, and audit check sheet; track adherence of audit plan vs. actual.
- Inform and report the daily quality audit status to the manager.
- Report audit car status back to all stakeholder departments through meetings.
- Analyze reported issues and fix responsibility.
- Follow up on and validate countermeasures, and monitor their effectiveness.
- Initiate and coordinate campaigns based on issues.

Skills Required: customer audits, customer complaints, quality assurance, quality control, quality audit

Location: KIA India Private Limited, Penukonda, Andhra Pradesh, India

Posted On: June 19, 2025

Years of Experience: 3 to 10 years
Posted 1 day ago
3.0 - 7.0 years
0 Lacs
kochi, kerala
On-site
As a DataOps Engineer with 0-3 years of experience, you will be an integral part of our dynamic data engineering team, contributing to the development and maintenance of scalable and efficient data pipelines and workflows using modern data platforms and cloud-native tools.

Your primary responsibilities will include designing, building, and optimizing robust data pipelines using technologies such as Databricks, PySpark, and Kafka. You will also create ETL/ELT workflows for large-scale data processing, integrate data systems with orchestration engines like Camunda, and ensure data quality and observability practices are in place for pipeline reliability. Collaboration with data scientists, analysts, and business stakeholders will be essential as you work to provide actionable insights and support data-driven decision-making across the organization. Additionally, your role will involve working with real-time analytics solutions using tools like Apache Druid and assisting in maintaining CI/CD pipelines and DevOps practices for data systems.

To excel in this role, you should possess a solid understanding of Python programming, hands-on experience with PySpark, and familiarity with cloud-based data pipeline development. Proficiency in stream processing tools like Apache Kafka, working knowledge of SQL, and experience with data orchestration tools such as Camunda or Airflow are also required. Strong communication skills and the ability to work both independently and collaboratively in a team environment are vital for success in this position. While experience with Apache Druid, cloud platforms like Azure, and knowledge of DevOps practices and Agile software development environments are preferred, they are not mandatory.
We offer a comprehensive benefits package, including medical insurance for employees and their families, a hybrid work culture promoting work-life balance, a friendly working environment, and certification reimbursement based on project demands. Join us in driving data-driven innovation and growth within our organization.
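The real-time analytics side of this role (Kafka, Apache Druid) often reduces to keyed aggregation over time windows. A minimal Python sketch of tumbling-window counts, assuming in-memory `(timestamp, key)` events rather than a real Kafka topic — the event data is invented for illustration:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_secs):
    """Count events per key per fixed (tumbling) time window.

    events: iterable of (timestamp_secs, key) pairs, in any order.
    Returns {(window_start, key): count}.
    """
    counts = defaultdict(int)
    for ts, key in events:
        window_start = ts - (ts % window_secs)  # align to window boundary
        counts[(window_start, key)] += 1
    return dict(counts)

events = [(0, "a"), (3, "a"), (5, "b"), (12, "a")]
print(tumbling_window_counts(events, 10))
# {(0, 'a'): 2, (0, 'b'): 1, (10, 'a'): 1}
```

Streaming engines add the hard parts — late data, watermarks, state checkpointing — but the windowing arithmetic is the same.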
Posted 2 days ago
7.0 - 14.0 years
10 - 14 Lacs
pune
Work from Office
Hungry for challenges? Join a group with innovation at its heart and contribute to the automotive revolution! OPmobility is a world-leading provider of innovative solutions for a unique, safer and more sustainable mobility experience. Innovation-driven since its creation, the Group develops and produces intelligent exterior systems, customized complex modules, lighting systems, clean energy systems and electrification solutions for all mobility companies. With €11.4 billion in economic revenue in 2023, a global network of 152 plants and 40 R&D centers, OPmobility relies on its 40,300 employees to meet the challenges of transforming mobility.

Our ambition: provide automakers with cutting-edge equipment and solutions to develop tomorrow's clean and connected car.

We are seeking a results-driven Global Practice Lead for Digital Solutions to lead the strategy, governance and operation of an agile delivery capability. This role will be responsible for delivering scalable, cost-effective and high-quality digital solutions through a centralized delivery factory supporting global business groups. The ideal candidate is an expert in agile methodologies, with experience in technologies such as web apps, RPA, the Microsoft Power Platform and SharePoint. This role is pivotal in driving digital transformation and business innovation at scale, aligned with OPmobility's digital acceleration strategy.

Reporting to the Global IS Practice Director, the Global IS Practice Manager - Digital Solutions is responsible to:
- Set up and operate OPmobility's Digital Solutions practice (digital factory), a key part of the company's digital acceleration, which aims to deliver web apps, AI, RPA and low-code employee solutions with high value and low time to market.
- Manage the Regional Digital Solutions Leaders.
- Support the Regional Digital Solutions teams in the implementation and running of the delivery model.
- Promote Agile methodology in the company by using Digital Solutions as a showcase of our Agile capabilities.
- Manage the Digital Solutions practice budget and ensure its return on investment.

Key Responsibilities:
- Provide strategic leadership to define and lead the vision, strategy and roadmap for the digital solutions factory.
- Collaborate with global business group stakeholders to build the digital solutions pipeline and manage prioritization and cross-pollination strategy.
- Bring thought leadership on technology innovations and drive digital transformation initiatives aligned with OPmobility's digital acceleration vision.
- Establish and scale a multi-skilled, agile POD-based delivery model ensuring optimal resources to achieve higher velocity, quality and performance.
- Implement best practices in the end-to-end agile delivery lifecycle for product backlog, project execution, prioritization, sprint planning and value monitoring, using the right tools and ways of working.
- Ensure high-quality and timely delivery of digital solutions while managing the expectations of global stakeholders from business groups.
- Build a strategy to implement and maintain quality assurance processes and standards, ensuring digital solutions are robust and reliable.
- Own and manage the digital solutions factory budget, including forecasting, cost control and vendor/partner management.
- Ensure efficient and effective use of resources, and build a culture of reusable assets to improve productivity.
- Lead and mentor global and regional leadership, solutioning and delivery teams, fostering a collaborative and high-performing environment.
- Drive workforce planning, hiring, onboarding, training and capability building for a high-performance team, and promote a culture of innovation and continuous learning.
- Define and implement delivery governance frameworks, including metrics, KPIs and quality benchmarks.
- Ensure compliance with enterprise architecture, security and data privacy standards.
- Drive automation and DevOps practices across the factory for efficient CI/CD and code management.
- Be a strong contributor to accelerating the group's digital transformation by promoting agile methodology and lean startup development.

Required Experience and Profile:
- Bachelor's or Master's degree in Business Administration or Engineering, ideally in information systems.
- Fluent English; knowledge of a second language is a plus.
- At least 15 years of digital development experience, with at least 5 years running a delivery team working in Agile.
- Has built teams, and is able to manage remotely and orchestrate several delivery teams spread across different regions.
- Experience operating within global companies with matrix organizations (multi-regional and multi-division); an industrial/automotive background is a plus.

Key Behavioral Skills:
- Focus on value creation.
- Inspirational leader, open to innovation.
- Demonstrated ability to develop, coach and motivate people by challenging them and rewarding their efforts.
- Demonstrated capacity to build high-performing teams by defining clear roles and responsibilities, appointing the right people to the right positions, and attracting and retaining talent.
- Entrepreneurial mindset combined with strong business acumen.
- Excellent communication and pedagogical skills.
- Ability to develop effective relationships with different stakeholders across different operating divisions, functions and geographies.
- Ability to prioritize and stay focused in a fast-paced environment.
- Agile, hands-on, action-oriented.

As a responsible company, OPmobility pays particular attention to diversity and equality within its teams, and the Group commits to treating all job applications equally.
Posted 3 days ago
12.0 - 16.0 years
0 Lacs
hyderabad, telangana
On-site
As a key leader in the architecture team, you will define and evolve the architectural blueprint for complex distributed systems built using Java, Spring Boot, Apache Kafka, and cloud-native technologies. You will ensure that system designs align with enterprise architecture principles, business objectives, and performance/scalability requirements. Collaborating closely with engineering leads, DevOps, data engineering, product managers, and customer-facing teams, you will drive architectural decisions, mentor technical teams, and foster a culture of technical excellence and innovation. Your key responsibilities will include owning and evolving the overall system architecture for Java-based microservices and data-intensive applications. You will define and enforce architecture best practices, lead technical design sessions, and design solutions focusing on performance, scalability, security, and reliability in high-volume, multi-tenant environments. Additionally, you will collaborate with product and engineering teams to convert business requirements into scalable technical architectures and drive the use of DevSecOps, automated testing, and CI/CD to improve development velocity and code quality. Basic qualifications for this role include 12-15 years of hands-on experience in Java-based enterprise application development, with at least 4-5 years in an architectural leadership role. Deep expertise in microservices architecture, Spring Boot, RESTful services, and API design is required, along with a strong understanding of distributed systems design, event-driven architecture, and domain-driven design. Proficiency in technologies such as Kafka, Spark, Kubernetes, Docker, AWS ecosystem, MongoDB, SQL databases, and multithreaded programming is essential. 
Preferred qualifications include exposure to tools for system architecture and diagramming, experience leading architectural transformations, knowledge of Data Mesh, Data Governance, or Master Data Management concepts, and certification in AWS, Kubernetes, or Software Architecture. Experience in regulated environments with compliance requirements is a plus. Infor, a global leader in business cloud software products, focuses on industry-specific markets. With a commitment to Principle Based Management and eight Guiding Principles, Infor aims to create a culture that fosters innovation, improvement, and transformation while delivering long-term value to clients and supporters. To learn more about Infor, visit www.infor.com.
Posted 2 weeks ago
7.0 - 10.0 years
14 - 18 Lacs
Bengaluru, Mumbai (All Areas)
Work from Office
Job Summary: We are seeking a highly skilled and experienced Scala Developer with strong hands-on expertise in functional programming, RESTful API development, and building scalable microservices. The ideal candidate will have experience with Play Framework, Akka, or Lagom, and be comfortable working with both SQL and NoSQL databases in cloud-native, Agile environments.

Key Responsibilities:
- Design, develop, and deploy scalable backend services and APIs using Scala.
- Build and maintain microservices using Play Framework, Akka, or Lagom.
- Develop RESTful APIs and integrate with internal/external services.
- Handle asynchronous programming and stream processing, and ensure efficient concurrency.
- Optimize and refactor code for better performance, readability, and scalability.
- Collaborate with cross-functional teams including Product, UI/UX, DevOps, and QA.
- Work with databases such as PostgreSQL, MySQL, Cassandra, or MongoDB.
- Participate in code reviews, documentation, and mentoring team members.
- Build and manage CI/CD pipelines using Docker, Git, and relevant DevOps tools.
- Follow Agile/Scrum practices and contribute to sprint planning and retrospectives.

Must-Have Skills:
- Strong expertise in Scala and functional programming principles.
- Experience with Play Framework, Akka, or Lagom.
- Deep understanding of RESTful APIs, microservices architecture, and API integration.
- Proficiency with concurrency, asynchronous programming, and stream processing.
- Hands-on experience with SQL/NoSQL databases (PostgreSQL, MySQL, Cassandra, MongoDB).
- Familiarity with SBT or Maven as build tools.
- Experience with Git, Docker, and CI/CD workflows.
- Comfortable working in Agile/Scrum environments.

Good to Have:
- Experience with data processing frameworks like Apache Spark.
- Exposure to cloud environments (AWS, GCP, or Azure).
- Strong debugging, troubleshooting, and analytical skills.
Educational Qualification: Bachelor's degree in Computer Science, Engineering, or a related field.

Why Join Us?
- Opportunity to work on modern, high-impact backend systems.
- Collaborative and learning-driven environment.
- Be part of a growing technology team building solutions at scale.
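The concurrency and asynchronous-programming skills this role calls for would normally be exercised with Akka or Scala Futures. Purely to illustrate the shape — bounded concurrency over a stream of work — here is a small Python asyncio sketch; the doubling workload and all names are invented:

```python
import asyncio

async def fetch(item, sem):
    """Simulate a non-blocking downstream call under a concurrency limit."""
    async with sem:              # bounded concurrency, like a sized dispatcher/pool
        await asyncio.sleep(0)   # stand-in for real I/O
        return item * 2

async def process_stream(items, limit=4):
    """Fan out over the stream, never running more than `limit` calls at once."""
    sem = asyncio.Semaphore(limit)
    return await asyncio.gather(*(fetch(i, sem) for i in items))

results = asyncio.run(process_stream([1, 2, 3]))
# results == [2, 4, 6] -- gather preserves submission order
```

The semaphore is the backpressure knob: without it, an unbounded fan-out can overwhelm the downstream service, which is the failure mode stream-processing frameworks exist to prevent.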
Posted 1 month ago
2.0 - 6.0 years
0 Lacs
karnataka
On-site
NTT DATA is looking for a Databricks Developer to join their team in Bangalore, Karnataka, India. As a Databricks Developer, your responsibilities will include pushing data domains into a massive repository and building a large data lake, leveraging Databricks heavily.

To be considered for this role, you should have at least 3 years of experience in a Data Engineer or Software Engineer role. An undergraduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field is required, and a graduate degree is preferred. You should also have experience with data pipeline and workflow management tools, advanced working SQL knowledge, and familiarity with relational databases. Additionally, an understanding of data warehouse (DWH) systems, ELT and ETL patterns, and data models, and of transforming data into various models, is essential. You should be able to build processes supporting data transformation, data structures, metadata, dependency, and workload management. Experience with message queuing, stream processing, and highly scalable big data stores is also necessary. Preferred qualifications include experience with Azure cloud services such as ADLS, ADF, ADLA, and AAS. The role also requires a minimum of 2 years of experience in relevant skills.

NTT DATA is a trusted global innovator of business and technology services with a commitment to helping clients innovate, optimize, and transform for long-term success. They serve 75% of the Fortune Global 100 and have a diverse team of experts in more than 50 countries. As a Global Top Employer, NTT DATA offers services in business and technology consulting, data and artificial intelligence, industry solutions, and the development, implementation, and management of applications, infrastructure, and connectivity.
They are known for providing digital and AI infrastructure solutions and are part of the NTT Group, investing over $3.6 billion each year in R&D to support organizations and society in moving confidently into the digital future. Visit their website at us.nttdata.com for more information.
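The "dependency and workload management" this posting asks for is exactly what workflow tools like Airflow or ADF provide: tasks ordered by their upstream dependencies. A minimal sketch of that ordering using the standard library's graphlib — the task names are hypothetical, not from any real pipeline:

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline: each task maps to the set of tasks it depends on,
# the way an Airflow DAG wires upstream >> downstream.
dag = {
    "extract": set(),
    "clean":   {"extract"},
    "model":   {"clean"},
    "report":  {"clean", "model"},
}

# static_order() yields tasks so that every task appears after its dependencies
order = list(TopologicalSorter(dag).static_order())
# order == ['extract', 'clean', 'model', 'report']
```

Real orchestrators layer scheduling, retries, and parallel execution of independent tasks on top, but a topological sort of the dependency graph is the core ordering step.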
Posted 1 month ago
3.0 - 9.0 years
18 - 22 Lacs
Hyderabad
Work from Office
At Skillsoft, we are all about making work matter. We believe every team member has the potential to be AMAZING. We are bold, sharp, driven and, most of all, true. Join us in our quest to democratize learning and help individuals unleash their edge.

OVERVIEW: To succeed in this challenging journey, we have set up multiple co-located teams across the globe (Hyderabad, US, Europe), embracing the scaled agile framework and a microservices approach combined with the DevOps model. We have passionate engineers working full time on this new platform in Hyderabad, and it's only the beginning. You will get a chance to work with brilliant people and some of the best development and design teams, in addition to working with cutting-edge technologies such as React, Java/Node.js, Docker, Kubernetes, and AWS. We are looking for exceptional Java/Node-based full stack developers to join our team. You will work alongside the Architect and DevOps teams to form a fully autonomous development squad and be in charge of a part of the product.

OPPORTUNITY HIGHLIGHTS:

Technical leadership: As a Principal Software Engineer, you will be responsible for technical leadership, providing guidance and mentoring to other team members, and ensuring that projects are completed on time and to the highest standards.

Cutting-edge technology: Skillsoft is a technology-driven company that is constantly exploring new technologies to enhance the learning experience for its customers. As a Principal Software Engineer, you will have the opportunity to work with cutting-edge technology and help drive innovation.

Agile environment: Skillsoft follows agile methodologies, which means you will be part of a fast-paced, collaborative environment where you will work on multiple projects simultaneously.

Career growth: Skillsoft is committed to helping their employees grow their careers.
As a Principal Software Engineer, you will have access to a wide range of learning and development opportunities, including training programs, conferences, and mentorship.

Impactful work: Skillsoft's mission is to empower people through learning, and as a Principal Software Engineer, you will be a key contributor to achieving this mission. You will have the opportunity to work on products and features that have a significant impact on the learning and development of individuals and organizations worldwide. Overall, the Principal Software Engineer role at Skillsoft offers a challenging and rewarding opportunity for individuals who are passionate about technology, learning, and making a difference in the world.

SKILLS & QUALIFICATIONS:
- Minimum 9+ years of software engineering development experience developing cloud-based enterprise solutions.
- Proficient in programming languages (Java, JavaScript, HTML5, CSS).
- Proficient in JavaScript frameworks (Node.js, React, Redux, Angular, Express.js).
- Proficient with frameworks (Spring Boot, stream processing).
- Strong knowledge of REST APIs, web services, and SAML integrations.
- Proficient in working with databases, preferably Postgres.
- Experienced with DevOps tools (Docker, Kubernetes, Ansible, AWS).
- Experience with code versioning tools, preferably Git (GitHub, GitLab, etc.) and the feature-branch workflow.
- Working experience with Kafka and RabbitMQ (message queuing systems).
- Sound knowledge of design principles and design patterns.
- Strong problem-solving and analytical skills and an understanding of various data structures and algorithms.
- Must know how to code applications on Unix/Linux-based systems.
- Experience with build automation tools like Maven, Gradle, NPM, Webpack, and Grunt.
- Sound troubleshooting skills to address code bugs, performance issues, and environment issues that may arise.
- Good understanding of the common security concerns of high-volume, publicly exposed systems.
- Experience working in an Agile/Scrum environment.
- Strong analytical skills and the ability to understand complexities and how components connect and relate to each other.

OUR VALUES: WE ARE PASSIONATELY COMMITTED TO LEADERSHIP, LEARNING, AND SUCCESS. WE EMBRACE EVERY OPPORTUNITY TO SERVE OUR CUSTOMERS AND EACH OTHER AS: ONE TEAM, OPEN AND RESPECTFUL, CURIOUS, READY, TRUE.
Posted 1 month ago
6.0 - 10.0 years
12 - 15 Lacs
Bengaluru
Work from Office
We are looking for an experienced Key Account Manager (KAM) to manage and streamline supply chain (SC) operations for key clients. The role involves leading a large SAP consultant team, ensuring seamless delivery, managing client expectations, and supporting business growth through new project initiatives.
Posted 1 month ago
7.0 - 11.0 years
0 Lacs
maharashtra
On-site
As a Scala Developer at our company, you will play a crucial role in designing, building, and enhancing our clients' online platform to ensure optimal performance and reliability. Your responsibilities will include researching, proposing, and implementing cutting-edge technology solutions while adhering to industry best practices and standards. You will be accountable for the resilience and availability of various products and will collaborate closely with a diverse team to achieve collective goals.

To excel in this role, we are seeking a highly skilled Scala Developer with over 7 years of experience in crafting scalable and high-performance backend systems. Your expertise in functional programming, familiarity with contemporary data processing frameworks, and proficiency in working within cloud-native environments will be invaluable. You will design, create, and manage backend services and APIs using Scala, optimize existing codebases for enhanced performance, scalability, and reliability, and ensure the development of clean, maintainable, and well-documented code.

Collaboration is key in our team, and you will work closely with product managers, frontend developers, and QA engineers to deliver exceptional results. Your role will also involve conducting code reviews, sharing knowledge, and mentoring junior developers to foster a culture of continuous improvement. Experience with technologies such as Akka, Play Framework, and Kafka, as well as integration with SQL/NoSQL databases and external APIs, will be essential in driving our projects forward. Your hands-on experience with Scala and functional programming principles, coupled with your proficiency in RESTful APIs, microservices architecture, and API integration, will be critical in meeting the demands of the role.
A solid grasp of concurrency, asynchronous programming, and stream processing, along with familiarity with SQL/NoSQL databases and tools like SBT or Maven, will further enhance your contributions to our team. Exposure to Git, Docker, and CI/CD pipelines, as well as comfort in Agile/Scrum environments, will be advantageous. Moreover, your familiarity with Apache Spark, Kafka, or other big data tools, along with experience with cloud platforms like AWS, GCP, or Azure, and an understanding of DevOps practices, will position you as a valuable asset in our organization. Proficiency in testing frameworks such as ScalaTest, Specs2, or Mockito will round out your skill set and enable you to deliver high-quality solutions effectively.

In return, we offer a stimulating and innovative work environment where you will have ample opportunities for learning and professional growth. Join us in shaping the future of our clients' online platform and making a tangible impact in the digital realm.
Posted 1 month ago
12.0 - 16.0 years
0 Lacs
haryana
On-site
You will be joining a renowned global digital engineering firm as a Senior Solution Architect, reporting to the Director Consulting. Your key responsibility will be to craft innovative solutions for both new and existing clients, with a primary focus on leveraging data to fuel the architecture and strategy of Digital Experience Platforms (DXP). Your expertise will guide the development of solutions heavily anchored in CMS, CDP, CRM, loyalty, and analytics-intensive platforms, integrating ML and AI capabilities. The essence of your approach will be centered around leveraging data to create composable, insightful, and effective DXP solutions. In this role, you will be client-facing, sitting face to face with prospective customers to shape technical and commercially viable solutions. You will also lead by example as a mentor, challenge others to push their boundaries, and strive to improve your skillset in the ever-evolving landscape of Omnichannel solutions. Collaboration with cross-functional teams will be a key aspect of your daily work, as you strategize, problem-solve, and communicate effectively with internal and external team members. Your mastery of written language will allow you to deliver compelling technical proposals to both new and existing clients. Your day-to-day responsibilities will include discussing technical solutions with clients, contributing to digital transformation strategies, collaborating with various teams to shape solutions based on client needs, constructing technical architectures, articulating transitions from current to future states, sharing knowledge and thought leadership within the organization, participating in discovery of technical project requirements, and estimating project delivery efforts based on your recommendations. 
The ideal candidate for this position will possess 12+ years of experience in the design, development, and support of large-scale web applications, along with specific experience in cloud-native technologies, data architectures, customer-facing applications, client-facing technology consulting roles, and commerce platforms. A Bachelor's degree in a relevant field is required.

In addition to fulfilling work, Material offers a high-impact work environment with a strong company culture and benefits. As a global company working with best-in-class brands worldwide, Material values inclusion, interconnectedness, and amplifying impact through people, perspectives, and expertise. The company focuses on learning and making an impact, creating experiences that matter, creating new value, and making a difference in people's lives. Material offers professional development, mentorship, a hybrid work mode, health and family insurance, leaves, wellness programs, and counseling sessions.
Posted 1 month ago
6.0 - 10.0 years
0 Lacs
karnataka
On-site
In this role, you will contribute to the development of backend databases and frontend services as part of an Intelligent Asset Management team. You will be responsible for building cyber-secure, efficient applications that support IoT and Smart Data initiatives. This includes designing, developing, testing, and implementing APIs, microservices, and edge libraries for system communication interfaces.

Key Responsibilities:
- Design, develop, test, and implement APIs and microservices based on defined requirements.
- Build secure and scalable web applications and backend services.
- Develop edge libraries for system communication interfaces.
- Collaborate within a scrum-style team to deliver high-quality software solutions.
- Ensure integration with existing IT infrastructure, mainframe systems, and cloud services.
- Maintain and optimize relational databases and data pipelines.
- Participate in performance assessments and contribute to continuous improvement.

Primary Skills:
- Programming Languages: Proficiency in at least three of the following: Go, Java, Angular, PostgreSQL, Kafka, Docker, Kubernetes, S3 programming.
- API Development: RESTful API design and implementation.
- Microservices Architecture: Experience in building and deploying microservices.
- Containerization & Orchestration: Docker, Kubernetes.
- Database Management: PostgreSQL, RDBMS, data querying and processing.
- Stream Processing: Apache Kafka or similar technologies.

Secondary Skills:
- Web Development: Angular or other modern web frameworks.
- System Integration: Experience with mainframe operations, ICL VME, and system interfaces.
- IT Infrastructure & Virtualization: Understanding of cloud platforms, virtualization, and IT support systems.
- Software Development Lifecycle: Agile methodologies, scrum practices.
- Security & Compliance: Cybersecurity principles in application development.
- Reporting & Data Management: Experience with data storage, reporting tools, and performance monitoring.

Preferred Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Experience in IoT, Smart Data, or Asset Management domains.
- Familiarity with mainframe systems and enterprise integration.
- Certifications in relevant technologies (e.g., Kubernetes, Java, Go, Angular).
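The RESTful API skill above can be sketched concretely. The following is a minimal read-only endpoint using only Python's standard library (the posting's stack is Go/Java, and the asset registry, field names, and route are invented for illustration):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from threading import Thread

# Hypothetical in-memory asset registry; a real service would back this
# with PostgreSQL as listed in the posting's primary skills.
ASSETS = {"pump-01": {"status": "online", "site": "plant-a"}}

class AssetHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Routes like /assets/pump-01 return one asset as JSON.
        asset_id = self.path.rsplit("/", 1)[-1]
        asset = ASSETS.get(asset_id)
        if asset is None:
            self.send_response(404)
            self.end_headers()
            return
        body = json.dumps(asset).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence default per-request logging

def serve():
    """Start the server on an ephemeral port; returns (server, port)."""
    server = HTTPServer(("127.0.0.1", 0), AssetHandler)
    Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]
```

In a real microservice this handler logic would sit behind a router and a persistence layer; the sketch only shows the request/response contract.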
Posted 1 month ago
10.0 - 15.0 years
0 Lacs
chennai, tamil nadu
On-site
Are you a skilled Data Architect with a passion for tackling intricate data challenges from various structured and unstructured sources? Do you excel in crafting micro data lakes and spearheading data strategies at an enterprise level? If this sounds like you, we are eager to learn more about your expertise.

In this role, you will design and build tailored micro data lakes catered to the lending domain. Your tasks will include defining and executing enterprise data strategies encompassing modeling, lineage, and governance. You will architect robust data pipelines for both batch and real-time data ingestion, and devise strategies for extracting, transforming, and storing data from diverse sources such as APIs, PDFs, logs, and databases. You will also establish best practices for data quality, metadata management, and data lifecycle control. Hands-on involvement in implementing processes, strategies, and tools will be pivotal in creating innovative products, and collaboration with engineering and product teams to align data architecture with overarching business objectives will be a key aspect of the role.

To excel in this position, you should bring over 10 years of experience in data architecture and engineering. A deep understanding of both structured and unstructured data ecosystems is essential, along with practical experience in ETL, ELT, stream processing, querying, and data modeling. Proficiency in tools and languages such as Spark, Kafka, Airflow, SQL, Amundsen, Glue Catalog, and Python is a must, as is expertise in cloud-native data platforms like AWS, Azure, or GCP, together with a solid foundation in data governance, privacy, and compliance standards.

Exposure to the lending domain, ML pipelines, or AI integrations is advantageous, and a background in fintech, lending, or regulatory data environments is also beneficial. This role offers the chance to lead data-first transformation, develop products that drive AI adoption, and the autonomy to design, build, and scale modern data architecture. You will be part of a forward-thinking, collaborative, tech-driven culture with access to cutting-edge tools and technologies in the data ecosystem. If you are ready to shape the future of data with us, we encourage you to apply for this exciting opportunity based in Chennai.
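The extract-transform-store work described above can be sketched in miniature. A hedged example, assuming pipe-delimited log lines and a SQLite sink (both invented for illustration; the posting's real sources are APIs, PDFs, logs, and databases):

```python
import sqlite3

# Illustrative only: the log format and field names are assumptions,
# standing in for one of the posting's many ingestion sources.
RAW_LOGS = [
    "2024-05-01T10:00:00|loan-app|APPROVED|250000",
    "2024-05-01T10:05:00|loan-app|REJECTED|90000",
]

def extract(lines):
    """Parse raw pipe-delimited lines into typed records."""
    for line in lines:
        ts, source, status, amount = line.split("|")
        yield {"ts": ts, "source": source, "status": status, "amount": int(amount)}

def load(rows, conn):
    """Store transformed records so they can be queried downstream."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS loan_events "
        "(ts TEXT, source TEXT, status TEXT, amount INTEGER)"
    )
    conn.executemany(
        "INSERT INTO loan_events VALUES (:ts, :source, :status, :amount)", rows
    )
    conn.commit()

conn = sqlite3.connect(":memory:")
load(extract(RAW_LOGS), conn)
```

A production micro data lake would swap the in-memory SQLite for partitioned object storage and add schema and lineage metadata; the extract/transform/load split stays the same.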
Posted 1 month ago
7.0 - 11.0 years
0 Lacs
karnataka
On-site
The Senior Full Stack Software Engineer role entails responsibility for software development, maintenance, monitoring, and problem resolution for both front-end and back-end solutions built with .NET, Relativity, or other eDiscovery tools. The position involves participation in projects across the full SDLC, from inception to the maintenance phase, with a focus on analyzing, writing, building, and deploying high-quality software. You will create and maintain moderately to highly complex solutions addressing the informational and analytical needs of various groups, including data infrastructure, reporting, and applications.

Your responsibilities span all project lifecycle phases: requirements definition, solution design, application development, and system testing. You are expected to analyze end-user data needs and develop user-oriented solutions that interface with existing applications. The role also includes maintaining documentation for work processes and procedures, suggesting improvements, adhering to approved work changes, and providing backup support for projects. Effective partnership across internal business teams, team planning, supporting growth strategy, executing InfoSec compliance, participating in system upgrades, and training end users on business functionality are also integral. You will work with minimal supervision, making a range of established decisions, escalating to the Manager when necessary, and providing regular updates. Adaptability, quick learning, and a big-picture approach to project work are key attributes.

Minimum Education Requirements:
- Bachelor of Science in Computer Science or a related field, or comparable business/technical experience.

Minimum Experience Requirements:
- At least 7-10 years of application development experience encompassing programming, data management, collection, modeling, and interpretation across complex data sets.
- Proficiency in front-end technologies such as JavaScript, CSS3, and HTML5, and familiarity with third-party libraries such as React, Angular, jQuery, and LESS.
- Knowledge of server-side programming languages such as .NET, Java, Ruby, or Python.
- Familiarity with DBMS technologies including SQL Server, Oracle, MongoDB, and MySQL, and caching mechanisms such as Redis, Memcached, and Varnish.
- Ability to design, develop, and deploy full-stack web applications using both SQL and NoSQL databases; coach junior developers in the same; rapidly learn new tools, languages, and frameworks; and work with Enterprise Integration Patterns, SOA, microservices, stream processing, event-driven architecture, messaging protocols, and data engineering.
- Comfort with the software development lifecycle and testing strategies, working independently or as part of a team.

Technical Skills:
- Proficient in HTML5, CSS3, JavaScript (ES6+), modern front-end frameworks, state management libraries, server-side languages, RESTful API design and development, database design and management, caching mechanisms, authentication and authorization mechanisms such as OAuth 2.0 and JWT, Microsoft Windows Server infrastructure, distributed systems, version control systems, CI/CD pipelines, and containerization technologies such as Docker and Kubernetes.

Consilio's True North Values:
- Excellence: Making every client an advocate
- Passion: Doing because caring
- Collaboration: Winning through teamwork and communication
- Agility: Flexing, adapting, and embracing change
- People: Valuing, respecting, and investing in teammates
- Vision: Creating clarity of purpose and a clear path forward
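The caching mechanisms listed above (Redis, Memcached, Varnish) share one core idea: key-value entries that expire. A toy in-process sketch of that pattern in Python, as a stand-in for a real cache server rather than production code:

```python
import time

class TTLCache:
    """Minimal key-value cache with per-key expiry, mimicking the
    pattern services delegate to Redis/Memcached. Illustrative only:
    no eviction policy, no size bound, not thread-safe."""

    def __init__(self):
        self._store = {}

    def set(self, key, value, ttl_seconds):
        # Record the value together with its absolute expiry time.
        self._store[key] = (value, time.monotonic() + ttl_seconds)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # lazy eviction on read
            return default
        return value
```

Real cache servers add LRU eviction, memory limits, and network access on top of exactly this get/set-with-TTL contract.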
Posted 1 month ago
4.0 - 9.0 years
15 - 20 Lacs
Bengaluru, Mumbai (All Areas)
Work from Office
We’re hiring a Scala Developer with 4+ years of experience in building scalable, high-performance backend systems. Strong in functional programming, backend services, distributed systems, and cloud environments.
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
You will be responsible for building systems and APIs to collect, curate, and analyze data generated by biomedical detection dogs, devices, and patients. Immediate needs include developing APIs and backends to handle Electronic Health Record (EHR) data, time-series sensor streams, and sensor/hardware integrations via REST APIs. You will also work on data pipelines and analytics for physiological, behavioral, and neural signals; machine learning and statistical models for biomedical and detection-dog research; and web and embedded integrations connecting software to real-world devices.

To excel in this role, you should be familiar with domains such as signal processing, basic statistics, stream processing, online algorithms, databases (especially time-series databases like VictoriaMetrics, and SQL databases including PostgreSQL, SQLite, and DuckDB), computer vision, and machine learning. Proficiency in Python, C++, or Rust is essential: the stack is primarily Python, with some modules in Rust/C++ where necessary. Firmware development is done in C/C++ (or Rust), and if you work in C++/Rust you may need to expose a Python API using pybind11/PyO3.

Your responsibilities will include developing data pipelines for real-time and batch processing, building robust APIs and backends for devices, research tools, and data systems, handling data transformations, storage, and querying for structured and time-series datasets, evaluating and enhancing ML models and analytics, and collaborating with hardware and research teams to derive insights from messy real-world data. The focus is on data integrity and correctness rather than brute-force scaling. If you enjoy creating reliable software and working with complex real-world data, we look forward to discussing this opportunity with you.

Key Skills: backend development, computer vision, data transformations, databases, analytics, data querying, C, Python, C++, signal processing, data storage, statistical models, API development, Rust, data pipelines, firmware development, stream processing, machine learning
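The "online algorithms" skill called out above has a canonical example: Welford's single-pass mean and variance, which lets a physiological sensor stream be summarized without buffering it. A small sketch (the signal values in the test are invented):

```python
class OnlineStats:
    """Welford's online algorithm: numerically stable single-pass
    mean/variance, suited to unbounded sensor streams where storing
    all samples is not an option."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self._m2 = 0.0  # sum of squared deviations from the running mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        # Uses the *updated* mean, which is what keeps this stable.
        self._m2 += delta * (x - self.mean)

    @property
    def variance(self):
        """Population variance of everything seen so far."""
        return self._m2 / self.n if self.n else 0.0
```

The same update-as-you-go shape generalizes to online min/max, exponential moving averages, and streaming quantile estimators used in real-time pipelines.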
Posted 1 month ago
10.0 - 15.0 years
0 Lacs
chennai, tamil nadu
On-site
Are you a hands-on Data Architect who excels at tackling intricate data challenges within structured and unstructured sources? Are you passionate about crafting micro data lakes and spearheading enterprise-wide data strategies? If this resonates with you, we are eager to learn more about your expertise.

In this role, you will design and build tailored micro data lakes specific to the lending domain, and play a key role in defining and executing enterprise data strategies encompassing modeling, lineage, and governance. Your tasks will involve architecting and implementing robust data pipelines for both batch and real-time data ingestion, as well as devising strategies for extracting, transforming, and storing data from sources such as APIs, PDFs, logs, and databases. Establishing best practices for data quality, metadata management, and data lifecycle control is also a core responsibility. You will collaborate with engineering and product teams to align data architecture with business objectives, evaluate and integrate modern data platforms and tools such as Databricks, Spark, Kafka, Snowflake, AWS, GCP, and Azure, and mentor data engineers while promoting engineering excellence in data practices.

The ideal candidate has a minimum of 10 years of experience in data architecture and engineering and a deep understanding of structured and unstructured data ecosystems. Hands-on proficiency in ETL, ELT, stream processing, querying, and data modeling is essential, as is expertise in tools and languages such as Spark, Kafka, Airflow, SQL, Amundsen, Glue Catalog, and Python. Familiarity with cloud-native data platforms like AWS, Azure, or GCP is required, alongside a solid foundation in data governance, privacy, and compliance standards.

A strategic mindset coupled with the ability to execute hands-on tasks when necessary is highly valued. Exposure to the lending domain, ML pipelines, or AI integrations is advantageous, and a background in fintech, lending, or regulatory data environments is also beneficial. As part of our team, you will lead data-first transformation and develop products that drive AI adoption, with the autonomy to design, build, and scale modern data architecture within a forward-thinking, collaborative, tech-driven culture, and access to the latest tools and technologies in the data ecosystem.

Location: Chennai
Experience: 10-15 Years | Full-Time | Work From Office

If you are ready to shape the future of data alongside us, apply now and embark on this exciting journey!
Posted 1 month ago
4.0 - 9.0 years
2 - 3 Lacs
Sriperumbudur, Tambaram, Chennai
Work from Office
Role & responsibilities: Responsible for safely and efficiently operating, maintaining, and monitoring boiler systems that produce hot water for applications such as heating and industrial processes. Operators ensure the equipment runs smoothly, adhere to safety regulations, and perform routine maintenance to prevent breakdowns.

Preferred candidate profile
Posted 1 month ago
7.0 - 8.0 years
9 - 10 Lacs
Mumbai, New Delhi, Bengaluru
Work from Office
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)

Must-have skills: Gen AI, AWS data stack, Kinesis, open table formats, PySpark, stream processing, Kafka, MySQL, Python

MatchMove is looking for a Technical Lead - Data Platform. You will architect, implement, and scale an end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.

You will contribute to:
- Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services.
- Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark.
- Structuring and evolving data into open table formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services.
- Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases.
- Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment.
- Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM).
- Using Generative AI tools to enhance developer productivity, including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights.
- Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines.

Responsibilities:
- Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR.
- Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication.
- Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation.
- Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g., fraud engines) and human-readable interfaces (e.g., dashboards).
- Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations.
- Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership.
- Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform.

Requirements:
- At least 7 years of experience in data engineering.
- Deep hands-on experience with the AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum.
- Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
- Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation.
- Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale.
- Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions.
- Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments.
- Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains with strong engineering hygiene.
- Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders.

Brownie points:
- Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements.
- Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection.
- Familiarity with data contracts, data mesh patterns, and data-as-a-product principles.
- Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases.
- Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3.
- Experience building data platforms for ML/AI teams or integrating with model feature stores.

Engagement Model: Direct placement with client. This is a remote role.
Shift timings: 10 AM to 7 PM
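The partitioning strategies mentioned in the responsibilities above can be illustrated with a small sketch: deriving a Hive-style year/month/day partition path of the kind Glue catalogs and Athena prunes on. The bucket name and layout here are assumptions for illustration, not from the posting:

```python
from datetime import datetime, timezone

def partition_key(event_time_iso, base="s3://data-lake/events"):
    """Map an ISO-8601 event timestamp to a Hive-style partition path.
    Normalizing to UTC first keeps events from different source
    timezones in consistent partitions."""
    ts = datetime.fromisoformat(event_time_iso).astimezone(timezone.utc)
    return f"{base}/year={ts:%Y}/month={ts:%m}/day={ts:%d}/"
```

Queries filtered on `year`/`month`/`day` can then skip every object outside the matching prefixes, which is the main cost and latency lever in S3-backed lakes.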
Posted 1 month ago
3.0 - 7.0 years
2 - 6 Lacs
Gurugram
Work from Office
- Bachelor s or master s degree in data science, Computer Science, Statistics, or a related field. Minimum of 7 years of experience in data analytics or a related field. Proficiency in data analysis tools and programming languages Required Candidate profile Experience with data visualization tools (e.g., Tableau, Power BI) Knowledge of machine learning techniques and statistical analysis Knowledge on stream processing for near real-time analytics is must
Posted 1 month ago
9.0 - 14.0 years
3 - 7 Lacs
Noida
Work from Office
We are looking for a skilled Data Engineer with 9 to 15 years of experience in the field. The ideal candidate will have expertise in designing and developing data pipelines using Confluent Kafka, ksqlDB, and Apache Flink.

Roles and Responsibilities:
- Design and develop data pipelines for real-time and batch data ingestion and processing using Confluent Kafka, ksqlDB, and Apache Flink.
- Build and configure Kafka Connectors to ingest data from various sources, including databases, APIs, and message queues.
- Develop Flink applications for complex event processing, stream enrichment, and real-time analytics.
- Optimize ksqlDB queries for real-time data transformations, aggregations, and filtering.
- Implement data quality checks and monitoring to ensure data accuracy and reliability throughout the pipeline.
- Monitor and troubleshoot data pipeline performance, identifying bottlenecks and implementing optimizations.

Job requirements:
- Bachelor's degree or higher from a reputed university.
- 8 to 10 years of total experience, with a majority related to ETL/ELT big data and Kafka.
- Proficiency in developing Flink applications for stream processing and real-time analytics.
- Strong understanding of data streaming concepts and architectures.
- Extensive experience with Confluent Kafka, including Kafka Brokers, Producers, Consumers, and Schema
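The windowed aggregations that ksqlDB and Flink perform can be sketched in plain Python. This toy tumbling-window count groups (timestamp, key) events into fixed, non-overlapping windows; the event shapes and keys are invented for illustration:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds):
    """Count events per key per tumbling window. Each event is a
    (timestamp_seconds, key) pair; a window is identified by its start
    time, floor(ts / window) * window, exactly as in a
    'WINDOW TUMBLING (SIZE n SECONDS)' aggregation."""
    counts = defaultdict(int)
    for ts, key in events:
        window_start = ts - (ts % window_seconds)
        counts[(window_start, key)] += 1
    return dict(counts)
```

A real stream processor does the same bucketing continuously and incrementally, plus watermarking for late events; this batch version only shows the core windowing arithmetic.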
Posted 2 months ago
11.0 - 16.0 years
40 - 45 Lacs
Pune
Work from Office
Role Description
This role is for a Senior Business Functional Analyst for Group Architecture. The role will be instrumental in establishing and maintaining bank-wide data policies, principles, standards, and tool governance. The Senior Business Functional Analyst acts as a link between the business divisions and the data solution providers, aligning the target data architecture with enterprise data architecture principles and applying agreed best practices and patterns. Group Architecture partners with each division of the bank to ensure that architecture is defined, delivered, and managed in alignment with the bank's strategy and in accordance with the organization's architectural standards.

Your key responsibilities:
- Data Architecture: Work closely with stakeholders to understand their data needs, break business requirements into implementable building blocks, and design the solution's target architecture.
- AI/ML: Identify and support the creation of AI use cases focused on delivering the data architecture strategy and data governance tooling. Identify AI/ML use cases and architect pipelines that integrate data flows, data lineage, and data quality. Embed AI-powered data quality, detection, and metadata enrichment to accelerate data discoverability. Assist in defining and driving the data architecture standards and requirements for AI that need to be enabled and used.
- GCP Data Architecture & Migration: Strong working experience with GCP data architecture is a must (BigQuery, Dataplex, Cloud SQL, Dataflow, Apigee, Pub/Sub, ...), along with an appropriate GCP architecture-level certification. Experience in handling hybrid architectures and patterns addressing non-functional requirements such as data residency, compliance (e.g., GDPR), and security and access control. Experience in developing reusable components and reference architectures using Infrastructure-as-Code (IaC) platforms such as Terraform.
- Data Mesh: Proficiency in Data Mesh design strategies that embrace the decentralized nature of data ownership, with good domain knowledge to ensure that the data products developed are aligned with business goals and provide real value.
- Data Management Tooling: Assess tools and solutions providing data governance capabilities such as data catalogues, data modelling and design, metadata management, data quality and lineage, and fine-grained data access management. Assist in developing the medium- to long-term target state of technologies within the data governance domain.
- Collaboration: Collaborate with stakeholders, including business leaders, project managers, and development teams, to gather requirements and translate them into technical solutions.

Your skills and experience:
- Demonstrable experience in designing and deploying AI tooling architectures and use cases.
- Extensive experience in data architecture within Financial Services.
- Strong technical knowledge of data integration patterns, batch and stream processing, data lake / lakehouse / data warehouse / data mart architectures, caching patterns, and policy-based fine-grained data access.
- Proven experience with data management principles, data governance, data quality, data lineage, and data integration, with a focus on Data Mesh.
- Knowledge of data modelling concepts such as dimensional modelling and 3NF, and experience with systematic, structured review of data models to enforce conformance to standards.
- High-level understanding of data management solutions (e.g., Collibra, Informatica Data Governance).
- Proficiency in data modelling and experience with different data modelling tools.
- Very good understanding of streaming and non-streaming ETL and ELT approaches for data ingestion.
- Strong analytical and problem-solving skills, with the ability to identify complex business requirements and translate them into technical solutions.
Posted 2 months ago
7.0 - 12.0 years
20 - 35 Lacs
Dubai, Pune, Chennai
Hybrid
Job Title: Confluent CDC System Analyst

Role Overview: A leading bank in the UAE is seeking an experienced Confluent Change Data Capture (CDC) System Analyst / Tech Lead to implement real-time data streaming solutions. The role involves implementing robust CDC frameworks using Confluent Kafka, ensuring seamless data integration between core banking systems and analytics platforms. The ideal candidate will have deep expertise in event-driven architectures, CDC technologies, and cloud-based data solutions.

Key Responsibilities:
- Implement Confluent Kafka-based CDC solutions to support real-time data movement across banking systems.
- Implement event-driven and microservices-based data solutions for enhanced scalability, resilience, and performance.
- Integrate CDC pipelines with core banking applications, databases, and enterprise systems.
- Ensure data consistency, integrity, and security, adhering to banking compliance standards (e.g., GDPR, PCI-DSS).
- Lead the adoption of Kafka Connect, Kafka Streams, and Schema Registry for real-time data processing.
- Optimize data replication, transformation, and enrichment using CDC tools like Debezium, GoldenGate, or Qlik Replicate.
- Collaborate with the infrastructure team, data engineers, DevOps teams, and business stakeholders to align data streaming capabilities with business objectives.
- Provide technical leadership in troubleshooting, performance tuning, and capacity planning for CDC architectures.
- Stay current with emerging technologies and drive innovation in real-time banking data solutions.

Required Skills & Qualifications:
- Extensive experience with Confluent Kafka and Change Data Capture (CDC) solutions.
- Strong expertise in Kafka Connect, Kafka Streams, and Schema Registry.
- Hands-on experience with CDC tools such as Debezium, Oracle GoldenGate, or Qlik Replicate.
- Hands-on experience with IBM Analytics.
- Solid understanding of core banking systems, transactional databases, and financial data flows.
- Knowledge of cloud-based Kafka implementations (AWS MSK, Azure Event Hubs, or Confluent Cloud).
- Proficiency in SQL and NoSQL databases (e.g., Oracle, MySQL, PostgreSQL, MongoDB) with CDC configurations.
- Strong experience in event-driven architectures, microservices, and API integrations.
- Familiarity with security protocols, compliance, and data governance in banking environments.
- Excellent problem-solving, leadership, and stakeholder communication skills.
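The CDC flow described above ultimately boils down to applying an ordered stream of change events to a replica. A hedged sketch: the envelope fields (`op`, `before`, `after`) follow Debezium's change-event format, while the in-memory account table and its columns are invented for illustration:

```python
def apply_cdc_event(table, event):
    """Apply one Debezium-style change event to an in-memory replica,
    keyed by primary key 'id'. Op codes: 'c' create, 'r' snapshot read,
    'u' update, 'd' delete."""
    op = event["op"]
    if op in ("c", "r", "u"):
        # Upserts all carry the new row state in 'after'.
        row = event["after"]
        table[row["id"]] = row
    elif op == "d":
        # Deletes carry the old row state in 'before'; 'after' is null.
        table.pop(event["before"]["id"], None)
    return table
```

A real pipeline would consume these events from a Kafka topic in partition order and write to the analytics store transactionally, but the per-event logic is exactly this upsert/delete switch.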
Posted 2 months ago
8.0 - 13.0 years
5 - 10 Lacs
Hyderabad
Work from Office
6+ years of experience with Java Spark. Strong understanding of distributed computing, big data principles, and batch/stream processing. Proficiency in working with AWS services such as S3, EMR, Glue, Lambda, and Athena. Experience with Data Lake architectures and handling large volumes of structured and unstructured data. Familiarity with various data formats. Strong problem-solving and analytical skills. Excellent communication and collaboration abilities.

Design, develop, and optimize large-scale data processing pipelines using Java Spark. Build scalable solutions to manage data ingestion, transformation, and storage in AWS-based Data Lake environments. Collaborate with data architects and analysts to implement data models and workflows aligned with business requirements. Ensure performance tuning, fault tolerance, and reliability of distributed data processing systems.
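The batch aggregations a Java Spark job expresses (for example, `reduceByKey`) can be sketched over a single in-memory partition in plain Python, just to show the shape of the computation; the sales records in the test are invented:

```python
from functools import reduce
from itertools import groupby

def aggregate_by_key(records, key_fn, value_fn):
    """A reduceByKey-style aggregation: extract a key and a value from
    each record, then combine values per key. Spark runs this shuffle-
    and-combine across a cluster; here it is one local partition."""
    ordered = sorted(records, key=key_fn)
    return {
        k: reduce(lambda a, b: a + b, (value_fn(r) for r in grp))
        for k, grp in groupby(ordered, key=key_fn)
    }
```

The distributed version differs mainly in *where* the combine runs (map-side partial aggregation, then a shuffle), not in this per-key fold logic.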
Posted 2 months ago
8.0 - 13.0 years
5 - 10 Lacs
Bengaluru
Work from Office
6+ years of experience with Java Spark. Strong understanding of distributed computing, big data principles, and batch/stream processing. Proficiency in working with AWS services such as S3, EMR, Glue, Lambda, and Athena. Experience with Data Lake architectures and handling large volumes of structured and unstructured data. Familiarity with various data formats. Strong problem-solving and analytical skills. Excellent communication and collaboration abilities.

Design, develop, and optimize large-scale data processing pipelines using Java Spark. Build scalable solutions to manage data ingestion, transformation, and storage in AWS-based Data Lake environments. Collaborate with data architects and analysts to implement data models and workflows aligned with business requirements. Ensure performance tuning, fault tolerance, and reliability of distributed data processing systems.
Posted 2 months ago