1.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description
We are looking for a software engineer to join the OCI security & compliance platform team. The platform and its algorithms monitor and detect threats, data breaches, and other malicious activity using machine learning and data science technologies. These services help organizations maintain their security and compliance posture. This role provides a fantastic opportunity to build an analytics solution and a data lake by sourcing and curating data from various internal and external providers. We leverage Spark, Kafka, and machine learning technologies running on OCI. You'll work with product managers, designers, and engineers to build data-driven features. You must enjoy the excitement of agile development and interacting with other exceptional engineers.

Career Level - IC2

Responsibilities
- Develop a highly available and scalable platform that aggregates and analyzes streams of events with a small window of durability
- Design, deploy, and manage large-scale data systems and services built on OCI
- Develop, maintain, and tune threat detection algorithms
- Develop test beds and tools to help reduce noise and improve time to detect threats

Desired Skills and Experience:
- 1+ years of hands-on large-scale cloud application software development
- 1+ years of experience in cloud infrastructure security and risk assessment
- 1+ years of hands-on experience with three of the following technologies: Kafka, Redis, AWS, Kubernetes, REST APIs, Linux
- 1+ years of experience using and building highly available streaming data solutions such as Flink or Spark Streaming
- 1+ years of experience building applications on Oracle Cloud Infrastructure
- Critical thinking: ability to track down complex data and engineering issues and analyze data to solve problems
- Experience with development methodologies with short release cycles
- Excellent problem-solving and communication skills with both technical and non-technical audiences

About Us
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
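The responsibilities above center on aggregating event streams over small windows. As a rough, hypothetical sketch of that pattern (the posting names Spark and Kafka but specifies no actual pipeline; the broker address, topic, and field names below are invented), a Spark Structured Streaming job might look like this:

```python
# Hypothetical sketch: windowed aggregation of security events from Kafka
# with Spark Structured Streaming. Requires the spark-sql-kafka package.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json, window
from pyspark.sql.types import StructType, StringType, TimestampType

spark = SparkSession.builder.appName("threat-event-aggregator").getOrCreate()

schema = (StructType()
          .add("event_type", StringType())
          .add("source_ip", StringType())
          .add("event_time", TimestampType()))

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")  # placeholder
          .option("subscribe", "security-events")            # hypothetical topic
          .load()
          .select(from_json(col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

# Count events per type over a short tumbling window, with a bounded
# watermark mirroring a "small window of durability" for late data.
counts = (events
          .withWatermark("event_time", "5 minutes")
          .groupBy(window(col("event_time"), "1 minute"), col("event_type"))
          .count())

query = (counts.writeStream
         .outputMode("update")
         .format("console")   # a real job would write to a sink such as a data lake
         .start())
query.awaitTermination()
```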
Posted 3 days ago
16.0 years
0 Lacs
Delhi, India
Remote
Senior Data Modeller – Telecom Domain (TMF, Kafka) | AI Product Company
Locations: Gurugram, Noida, Hyderabad, Bengaluru (Remote/Hybrid options available)
Experience: 7–16 Years
Industry: Telecommunications, AI-Driven Solutions
Openings: 1
Compensation: Best in the Industry
Client: A fast-growing AI-driven product and solution company redefining the telecom space through cutting-edge data and digital platforms. Join our fast-paced, innovation-focused team and help shape the future of data architecture in telecommunications.

Job Summary
We're seeking a Senior Data Modeller with deep expertise in the telecom domain to design and standardize enterprise-wide data models. You'll work across key areas—Customer, Product, Billing, Network—aligning with TM Forum standards (SID, eTOM) and modern data architecture principles to enable advanced analytics, machine learning, and digital transformation initiatives.

Key Responsibilities
- Design logical and physical data models tailored to telecom use cases
- Align with TM Forum standards (SID, eTOM, ODA) and data mesh principles
- Build and optimize schemas (normalized, star, snowflake)
- Maintain data lineage, metadata, and version control
- Collaborate with engineering teams on Azure, Databricks, and Kafka integrations
- Implement data privacy and compliance tagging (GDPR), and ensure data quality

Required Skills & Experience
- 7+ years in data modelling, with 3+ years in the telecom domain
- Strong knowledge of TM Forum frameworks and telecom business processes
- Hands-on with tools like SSAS, dbt, Informatica
- Proficient in SQL, schema design, and metadata documentation
- Cloud expertise in Azure Synapse, Databricks, Snowflake
- Experience with telecom systems: CRM, Billing, Network, Campaign Management
- Familiarity with Data Mesh, Domain-Driven Design, and regulatory compliance

Preferred Qualifications
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field
- Excellent communication and stakeholder management skills
- Passion for scalable, high-impact data architecture

Contact: M 9341626895, bdm@intellisearchonline.net
Posted 3 days ago
5.0 - 10.0 years
11 - 19 Lacs
Hyderabad
Hybrid
Role & Responsibilities
About the Role: Our platform services engineering team is responsible for providing a reliable platform that abstracts AWS services and orchestrates multiple APIs as part of the Experian Software Services division. We are seeking a Software Engineer for our Platform Services team to help us continuously improve how we build, secure, and run our rapidly growing cloud platform. Much of our software development focuses on building APIs for in-house applications, abstracting AWS services, and eliminating work through automation. On the Platform Services team, you will have the opportunity to use your expertise in coding, system design thinking, and analytical skills to build shared services, tools, and APIs for the rest of the product development teams, abstracting the underlying stack complexities of the cloud.

What You'll Do Here
Build platform tools & services to enable product engineering teams to deploy and run their applications (data pipelines, analytical applications, microservices, etc.) on the cloud.

What You'll Need to Succeed
Must-have skills:
- Solid professional coding experience in Java / J2EE technologies.
- Strong working knowledge of object-oriented design principles and J2EE design patterns.
- Strong working knowledge of RESTful APIs, NoSQL databases, and MQ or Kafka.
- Strong working knowledge of automation testing, both unit and integration testing.
- Experience running containers (Docker/LXC) in a production environment using one of the container orchestration services (Kubernetes, Docker Swarm, AWS ECS, AWS EKS).
- Experience with a cloud platform such as AWS.
- Experience with continuous integration/delivery/deployment (CI/CD) practices and frameworks.
- Experience with end-to-end application architecture and system design.
- Experience with Agile development methodologies.
- Willingness to learn and implement new programming languages if needed.
- Good understanding of observability for the APIs and services developed, using tools like Prometheus, InfluxDB, Dynatrace, Grafana, Splunk, etc. (see the sketch after this posting).

Nice-to-have skills:
- Good understanding of Linux, networking, cloud design patterns, APIs, and security.
- Experience with Terraform, Packer, Ansible, Chef, Jenkins, or other similar tooling.

Preferred candidate profile
Company: Nisum Technologies. Experience: 5 to 12 years. Job location: Hyderabad. Job mode: Work from office, Hybrid. Job type: Permanent. Notice period: immediate to 30 days.
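The posting's stack is Java/J2EE, but the observability requirement it names (exposing API metrics to Prometheus) is language-agnostic. Here is a minimal sketch in Python using the prometheus_client library; the metric and endpoint names are invented for illustration:

```python
# Illustrative sketch (Python rather than the Java stack named above):
# exposing request counters and latency histograms via prometheus_client.
import time
from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("api_requests_total", "Total API requests", ["endpoint", "status"])
LATENCY = Histogram("api_request_seconds", "Request latency", ["endpoint"])

def handle_request(endpoint: str) -> None:
    """Simulated handler; a real service would wrap its routing layer."""
    with LATENCY.labels(endpoint=endpoint).time():  # records elapsed seconds
        time.sleep(0.05)  # stand-in for actual work
    REQUESTS.labels(endpoint=endpoint, status="200").inc()

if __name__ == "__main__":
    start_http_server(8000)  # Prometheus scrapes :8000/metrics
    while True:
        handle_request("/v1/deployments")  # hypothetical platform endpoint
```

A Java service would achieve the same with the Micrometer or Prometheus Java client; the scrape-endpoint pattern is identical.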
Posted 3 days ago
10.0 years
0 Lacs
Navi Mumbai, Maharashtra, India
On-site
Databricks Data Architect
Location: Mumbai (Onsite)
Salary: Negotiable
Immediate Joiners
Experience: 10+ Years

We are seeking an experienced Databricks Data Architect with a strong background in designing scalable data platforms in the manufacturing or energy sector. The ideal candidate will have over 10 years of experience in designing and implementing enterprise-grade data solutions, with strong proficiency in Azure Databricks and big data technologies.

Key Responsibilities:
- Architect and deliver scalable, cloud-native data solutions to support both real-time and batch processing needs.
- Work closely with business and technical stakeholders to understand business requirements and define data strategy, governance, and architecture standards.
- Ensure data quality, integrity, and security across platforms and systems.
- Define data models, data integration patterns, and governance frameworks to support analytics use cases.
- Collaborate with DevOps and Engineering teams to ensure robust CI/CD pipelines and deliver production-grade deployments.
- Define and enforce data architecture standards, frameworks, and best practices across data engineering and analytics teams.
- Implement data governance, security, and compliance measures, including data cataloguing, access controls, and regulatory adherence.
- Lead capacity planning and performance tuning efforts to optimize data processing and query performance.
- Create and maintain architecture documentation, including data flow diagrams, data models, entity-relationship diagrams, system interfaces, etc.
- Design clear and impactful visualizations to support key analytical objectives.

Required Skills and Experience:
- Bachelor's or Master's degree in Computer Science, Information Technology, Engineering, or a related field.
- Strong proficiency in Azure Databricks and big data technologies (Apache Spark, Kafka, Event Hub).
- Deep understanding of data modeling, data lakes, and batch and real-time/streaming data processing.
- Proven experience with high-volume data pipeline orchestration and ETL/ELT workflows.
- Experience designing and implementing data lakes, data warehouses, and lakehouse architectures.
- Proven experience in designing and implementing data visualization solutions for actionable insights.
- Strong understanding of data integration patterns, APIs, and message streaming (e.g., Event Hub, Kafka).
- Experience with metadata management and data quality frameworks.
- Excellent problem-solving skills and the ability to translate business needs into technical solutions.
- Experience with structured and unstructured data ingestion, transformation, and processing at scale.
- Excellent communication, documentation, and stakeholder management skills.

Preferred Qualifications:
- Familiarity with lakehouse architectures using Delta Lake.
- Knowledge of manufacturing/energy domain-specific standards and protocols.
- Experience with IoT data and time-series analysis.
- Knowledge of data governance, security, and compliance best practices.
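For illustration, one hop of the lakehouse flow described above (Bronze to Silver in a medallion layout) might look like the following on Databricks. This is a hedged sketch assuming an active `spark` session as provided by the Databricks runtime; the table names, checkpoint path, and quality rules are invented:

```python
# Minimal sketch of a Bronze -> Silver medallion hop on Databricks.
# `spark` is assumed to be the session the runtime provides.
from pyspark.sql import functions as F

bronze = spark.readStream.table("bronze.sensor_events")  # hypothetical table

silver = (bronze
          .dropDuplicates(["event_id"])
          .withColumn("event_ts", F.to_timestamp("event_ts"))
          .filter(F.col("event_ts").isNotNull()))  # basic quality gate

(silver.writeStream
       .option("checkpointLocation", "/mnt/checkpoints/sensor_silver")  # placeholder
       .trigger(availableNow=True)   # incremental batch-style run (Spark 3.3+)
       .toTable("silver.sensor_events"))
```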
Posted 3 days ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Description and Requirements
"At BMC trust is not just a word - it's a way of life!"
Hybrid

We are an award-winning, equal opportunity, culturally diverse, fun place to be. Giving back to the community drives us to be better every single day. Our work environment allows you to balance your priorities, because we know you will bring your best every day. We will champion your wins and shout them from the rooftops. Your peers will inspire, drive, support you, and make you laugh out loud! We help our customers free up time and space to become an Autonomous Digital Enterprise that conquers the opportunities ahead - and are relentless in the pursuit of innovation!

The IZOT product line includes BMC's Intelligent Z Optimization & Transformation products, which help the world's largest companies to monitor and manage their mainframe systems. The modernization of the mainframe is the beating heart of our product line, and we achieve this goal by developing products that improve the developer experience, mainframe integration, the speed of application development, the quality of the code, and the applications' security, while reducing operational costs and risks. We acquired several companies along the way, and we continue to grow, innovate, and perfect our solutions on an ongoing basis.

BMC is looking for a talented Java Product Developer to join our family, working on complex and distributed software: developing and debugging software products, implementing features, and assisting the firm in assuring product quality.

Here is how, through this exciting role, YOU will contribute to BMC's and your own success:
- Contribute as a member of the development team responsible for developing and debugging software products.
- Work on complex problems where analysis of situations or data requires an in-depth evaluation of several factors.
- Work across product lines, interacting with other BMC products.
- Independently design and implement features that address customer needs with a sense of ownership within given timelines.

As every BMC employee, you will be given the opportunity to learn, be included in global projects, challenge yourself, and be the innovator when it comes to solving everyday problems.

To ensure you're set up for success, you will bring the following skillset & experience:
- 6 years of experience in Java on different OS platforms, primarily Linux (others: Windows, z/OS)
- Experience with the Java language and Java backend programming: JSON, SQL, XML libraries
- Knowledge of front-end programming: Angular, CSS, JavaScript, HTML
- Knowledge of databases, mainly Derby
- Experience with build systems: make files, Maven
- Experience with GitHub, Jenkins, Apache Tomcat (configuration, architecture, etc.)
- Experience with Postman, PuTTY or another telnet client for Windows, and Eclipse
- Experience resolving security issues raised by dynamic/static/cryptographic/PenTest scans - an advantage
- Knowledge of technologies such as REST APIs, Spring Framework/Boot, ActiveMQ/Kafka/RabbitMQ or similar
- Experience in an Agile environment and with agile methodologies
- Good communication skills - both written and verbal in English
- BSc/BA in Computer Science, Engineering or a related field with proven field experience

CA-DNP

Our commitment to you!
BMC's culture is built around its people. We have 6000+ brilliant minds working together across the globe. You won't be known just by your employee number, but for your true authentic self. BMC lets you be YOU!

If after reading the above, you're unsure if you meet the qualifications of this role but are deeply excited about BMC and this team, we still encourage you to apply! We want to attract talents from diverse backgrounds and experience to ensure we face the world together with the best ideas!

BMC is committed to equal opportunity employment regardless of race, age, sex, creed, color, religion, citizenship status, sexual orientation, gender, gender expression, gender identity, national origin, disability, marital status, pregnancy, disabled veteran or status as a protected veteran. If you need a reasonable accommodation for any part of the application and hiring process, visit the accommodation request page. BMC Software maintains a strict policy of not requesting any form of payment in exchange for employment opportunities, upholding a fair and ethical hiring process.

At BMC we believe in pay transparency and have set the midpoint of the salary band for this role at 2,790,000 INR. Actual salaries depend on a wide range of factors that are considered in making compensation decisions, including but not limited to skill sets; experience and training; licensure and certifications; and other business and organizational needs. The salary listed is just one component of BMC's employee compensation package. Other rewards may include a variable plan and country-specific benefits. We are committed to ensuring that our employees are paid fairly and equitably, and that we are transparent about our compensation practices.

(Returnship@BMC) Had a break in your career? No worries. This role is eligible for candidates who have taken a break in their career and want to re-enter the workforce. If your expertise matches the above job, visit https://bmcrecruit.avature.net/returnship to learn more and how to apply.
Posted 3 days ago
7.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Responsibilities
- Successfully and independently lead large features: multi-person efforts that usually cross engineering team boundaries.
- Be responsible for the successful delivery of your projects, including coordination, planning, risk management, design, development, testing, rollout, and maintenance.
- Work closely with your manager to align the priorities of your team with the priorities of the entire organization.
- Write clean, concise, modular, and well-tested code.
- Lead high-quality, loosely coupled, reliable, and extensible engineering designs, soliciting feedback and building consensus.
- Provide meaningful feedback and guide other engineers to resolve open questions in their designs and code.
- Steward the long-term health and quality of the team's codebase and systems by starting necessary processes and removing outdated ones.
- Drive collaboration with teammates, partners, and stakeholders to advance team goals.
- Assist and teach engineers on an individual basis. Lift the skills and expertise of those around you. Cultivate learning and growth within the team.
- Exemplify and uphold software engineering standards within the team, and be an excellent source of insights and wisdom for it.
- Conduct engineering interviews. Participate in activities to attract and retain talent, including building community inside or outside of Kredivo Group.

Qualifications and Experience
Must have:
- At least 7 years of hands-on experience in developing, testing, and deploying large-scale applications in any language or stack.
- At least 2 years of experience in leading a team of engineers.
- Good experience with Golang. Able to use Go concurrency patterns and contribute to building reusable Go components.
- Strong experience in designing loosely coupled, reliable, and extensible distributed services.
- Great understanding of clean architecture, S.O.L.I.D principles, and event-driven architecture.
- Experience with message broker services like SQS, Kafka, etc.
- Strong data modeling experience in both relational and NoSQL databases.
- Good experience with cloud-based systems and infrastructure services like AWS, GCP, etc.
- Strong cross-team collaboration and communication (written and spoken) skills, and analytical skills with attention to detail.
- Self-driven with a passion for learning new things quickly, solving challenging problems, and the drive to get better with support from the manager.

Nice to have:
- A bachelor's degree in computer science, information technology, or equivalent education.
Posted 3 days ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Looking for Kafka Architects @ Hyderabad/Pune locations.
Experience: 10 to 13 years

Key Points:
- Cloudera Kafka distribution / OpenSearch Kafka
- Experience in APIs with Juniper
- High availability of Kafka clusters and disaster recovery (hot-hot environment)
- Implementation of Kafka in containers or in a cloud environment
- Extending/manipulating storage for Kafka clusters; replication between two Kafka clusters
- SSL certificates / authentication / authorization
- 5+ years of experience in designing Kafka environments and application integration
- Experience in monitoring tools like Splunk, Prometheus, Grafana; JMX metrics
- Experience in Kafka connectors and ksqlDB

Good to have:
- Product Grid block application testing
- Architect able to attend architecture forums and demonstrate pros/cons of different solutions
- Oracle database (RDBMS) / PostgreSQL database
- Undertake regular assessments of platform health and stability, create improvement plans, and ensure automation/lifecycle management is undertaken.

If interested, please share your updated profile to sravani@livecjobs.com. Ph No: 9390207054
Regards,
Sravani
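As a small illustration of the SSL certificates / authentication point above, a Python producer connecting to a TLS-secured cluster with confluent-kafka might be configured as follows; the broker names, certificate paths, and topic are placeholders, not details from this role:

```python
# Hedged sketch: producer configured for a TLS-secured Kafka cluster
# using confluent-kafka-python. All paths and addresses are placeholders.
from confluent_kafka import Producer

conf = {
    "bootstrap.servers": "kafka-a1:9093,kafka-a2:9093",
    "security.protocol": "SSL",
    "ssl.ca.location": "/etc/kafka/certs/ca.pem",
    "ssl.certificate.location": "/etc/kafka/certs/client.pem",
    "ssl.key.location": "/etc/kafka/certs/client.key",
    "acks": "all",              # wait for all in-sync replicas (durability)
    "enable.idempotence": True, # avoid duplicates on retry
}

producer = Producer(conf)
producer.produce("audit-events", key="srv-01", value=b'{"status":"ok"}')
producer.flush()
```

Replication between two clusters, as the posting mentions, is usually handled outside application code, for example with MirrorMaker 2 or Cloudera Streams Replication Manager.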
Posted 3 days ago
5.0 years
0 Lacs
Kondapur, Telangana, India
On-site
Responsibilities
- Architect, design, and develop scalable NestJS microservices.
- Develop and optimize database schemas and queries in PostgreSQL.
- Build and maintain modern, responsive frontend interfaces using React.js / Next.js / Angular (based on stack).
- Collaborate with product managers, designers, and other developers to deliver high-quality features.
- Implement unit and integration tests to ensure code quality.
- Lead code reviews, mentor junior developers, and guide best practices.
- Ensure security, performance, and scalability of the application.
- Work with containerized environments (Docker) and deployment pipelines (CI/CD).

Requirements
- 5+ years of professional experience as a Full Stack Developer.
- Strong proficiency in NestJS and the Node.js ecosystem.
- Solid experience with PostgreSQL (query optimization, schema design, etc.).
- Deep understanding of microservices architecture and message brokers (e.g., Kafka, RabbitMQ, NATS).
- Experience with frontend frameworks like React.js, Next.js, or Angular.
- Knowledge of RESTful APIs, GraphQL, and authentication methods (JWT, OAuth).
- Proficiency in using Docker and cloud platforms (AWS/GCP/Azure).
- Familiarity with testing frameworks (Jest, Mocha, etc.).
- Excellent problem-solving and communication skills.

Nice to Have
- Experience with event-driven architectures and CQRS.
- Familiarity with TypeORM/Prisma.
- Knowledge of Kubernetes or serverless functions.
- Exposure to DevOps practices and infrastructure as code (e.g., Terraform).
Posted 3 days ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
hackajob is collaborating with J.P. Morgan to connect them with exceptional tech professionals for this role.

We have an exciting and rewarding opportunity for you to take your software engineering career to the next level. As a Software Engineer III at JPMorgan Chase within Consumer & Community Banking, you serve as a seasoned member of an agile team to design and deliver trusted market-leading technology products in a secure, stable, and scalable way. You are responsible for carrying out critical technology solutions across multiple technical areas within various business functions in support of the firm's business objectives.

Job responsibilities
- Executes software solutions, design, development, and technical troubleshooting with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems
- Creates secure and high-quality production code and maintains algorithms that run synchronously with appropriate systems
- Produces architecture and design artifacts for complex applications while being accountable for ensuring design constraints are met by software code development
- Gathers, analyzes, synthesizes, and develops visualizations and reporting from large, diverse data sets in service of continuous improvement of software applications and systems
- Proactively identifies hidden problems and patterns in data and uses these insights to drive improvements to coding hygiene and system architecture
- Contributes to software engineering communities of practice and events that explore new and emerging technologies
- Adds to team culture of diversity, equity, inclusion, and respect

Required Qualifications, Capabilities, and Skills
- Formal training or certification on software engineering concepts and 3+ years of applied experience
- Hands-on practical experience in system design, application development, testing, and operational stability
- Proficient in coding in one or more languages
- Overall knowledge of the Software Development Life Cycle
- Strong hands-on experience in building cloud-native applications using Java, microservices using Spring Boot/Cloud, Kafka, SQL, RDBMS and NoSQL DBs like Cassandra, JUnit, Cucumber
- Expertise in AWS Cloud, Jenkins, Splunk, Dynatrace, Datadog, Bitbucket, JIRA, Agile methodology, Cloud Foundry, chaos testing, and performance testing using BlazeMeter
- Extensive experience as a Java full stack developer with Spring Boot-based microservices, AWS, database technologies, and React.js
- Solid understanding of agile methodologies and of CI/CD, application resiliency, and security
- Demonstrated knowledge of software applications and technical processes within a technical discipline (e.g., cloud, artificial intelligence, machine learning, mobile, etc.)

Preferred Qualifications, Capabilities, and Skills
- Familiarity with modern front-end technologies
- Exposure to cloud technologies
Posted 3 days ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Description
What we are looking for:
We are looking for a passionate and skilled Software Engineer to join our Platform Engineering team focused on combating misinformation at scale. You will help design, build, and maintain robust backend systems and data pipelines that support misinformation detection, reporting, and mitigation efforts.

What You'll Do:
- Build and maintain scalable and resilient backend systems using Python and/or JVM-based languages.
- Collaborate with cross-functional teams including data scientists, ML engineers, and product managers.
- Design and implement data processing pipelines using tools like Apache Spark and Kafka.
- Develop APIs and services that facilitate real-time misinformation flagging and tracking (see the sketch after this posting).
- Leverage Kubernetes and Docker for containerization and orchestration.
- Ensure code quality and maintainability through CI/CD processes using tools like GitHub Actions and Terraform.
- Use observability tools like Grafana and the ELK stack to monitor system health and debug issues proactively.
- Write unit/integration tests and perform code reviews.

What You'll Bring:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field.
- Strong programming experience in Python, Scala, Java, or Kotlin.
- Hands-on experience with CI/CD, GitHub, and infrastructure as code (Terraform).
- Knowledge of Docker, Kubernetes, and cloud platforms like AWS or Azure.
- Experience with data processing technologies such as Kafka, Spark, and ETL pipelines.
- Familiarity with observability tools such as Grafana, OpenSearch/Elasticsearch, and centralized logging.
- Good understanding of software architecture patterns and distributed systems.
- Ability to work in a fast-paced and collaborative environment.
- Experience working on misinformation detection or trust & safety platforms.
- Familiarity with Postgres or other relational databases.
- Exposure to key-value data stores like Redis.
- Interest or background in social impact technology or content moderation.

What We Offer:
- Flexible paid time off options for enhanced work-life balance.
- Comprehensive health insurance tailored for you.
- Employee assistance programs covering mental health, legal, financial, wellness, and behaviour areas to ensure your overall well-being.
- Complimentary Calm app subscription for you and your loved ones, because mental wellness matters.
- An energetic work environment with a hybrid work style, providing the balance you need.
- A family leave program that grows with your tenure at Meltwater.
- An inclusive community and ongoing professional development opportunities to elevate your career.

Where You'll Work: HITEC City, Hyderabad.
When You'll Join: As per the offer letter.

Our Story
At Meltwater, we believe that when you have the right people in the right environment, great things happen. Our best-in-class technology empowers our 27,000 customers around the world to make better business decisions through data. But we can't do that without our global team of developers, innovators, problem-solvers, and high-performers who embrace challenges and find new solutions for our customers. Our award-winning global culture drives everything we do and creates an environment where our employees can make an impact, learn every day, feel a sense of belonging, and celebrate each other's successes along the way. We are innovators at the core who see the potential in people, ideas and technologies. Together, we challenge ourselves to go big, be bold, and build best-in-class solutions for our customers. We're proud of our diverse team of 2,200+ employees in 50 locations across 25 countries around the world. No matter where you are, you'll work with people who care about your success and get the support you need to unlock new heights in your career. We are Meltwater. Inspired by innovation, powered by people.

Equal Employment Opportunity Statement
Meltwater is an Equal Opportunity Employer and Prohibits Discrimination and Harassment of Any Kind: At Meltwater, we are dedicated to fostering an inclusive and diverse workplace where every employee feels valued, respected, and empowered. We are committed to the principle of equal employment opportunity and strive to provide a work environment that is free from discrimination and harassment. All employment decisions at Meltwater are made based on business needs, job requirements, and individual qualifications, without regard to race, color, religion or belief, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, marital status, veteran status, or any other status protected by the applicable laws and regulations. Meltwater does not tolerate discrimination or harassment of any kind, and we actively promote a culture of respect, fairness, and inclusivity. We encourage applicants of all backgrounds, experiences, and abilities to apply and join us in our mission to drive innovation and make a positive impact in the world.
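As a hedged sketch of the real-time flagging pipeline this role describes, the consumer below reads content events from Kafka, scores them, and republishes flagged items. The topics and the scoring stub are hypothetical; a real system would call an ML service rather than a keyword check:

```python
# Illustrative flagging consumer using kafka-python; topics and the
# score() stub are invented for this sketch.
import json
from kafka import KafkaConsumer, KafkaProducer

consumer = KafkaConsumer(
    "content-events",                       # hypothetical input topic
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    group_id="misinfo-flagger",
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def score(text: str) -> float:
    """Placeholder for a model call; stands in for an ML scoring service."""
    return 0.9 if "unverified" in text.lower() else 0.1

for msg in consumer:
    s = score(msg.value.get("text", ""))
    if s > 0.8:  # threshold chosen arbitrarily for the sketch
        producer.send("flagged-content", {"id": msg.value.get("id"), "score": s})
```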
Posted 3 days ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Overview
TekWissen is a global workforce management provider operating throughout India and many other countries in the world. The client below is a global company with shared ideals and a deep sense of family. From our earliest days as a pioneer of modern transportation, we have sought to make the world a better place - one that benefits lives, communities and the planet.

Job Title: Test Specialist Senior
Location: Chennai
Work Type: Hybrid

Position Description
This is a full stack software engineer position based out of GTBC, Chennai, India. The position is part of the Industrial Systems IT org and involves product delivery for Finished Vehicle Logistics.

Skills Required
Spring Boot, Java full stack, BigQuery, GCP, Pub/Sub, Kafka, Angular

Experience Required
8+ years of software development experience with a strong handle on the software development process, including analysis, design, coding, unit & functional testing, problem/defect resolution, and planning. Experience in Spring Boot, Java, GCP BigQuery, Pub/Sub, Kafka, Angular.

Experience Preferred
Work closely with the product manager, product owners, and business stakeholders to understand the business needs, and articulate and capture business requirements and priorities. Develop and deliver quality software products that meet business needs.

Education Required
Bachelor's Degree

TekWissen® Group is an equal opportunity employer supporting workforce diversity.
Posted 3 days ago
130.0 years
7 - 8 Lacs
Hyderābād
On-site
Job Description
Senior Data Engineer - Manufacturing Data Products

The Opportunity
Based in Hyderabad, join a global healthcare biopharma company and be part of a 130-year legacy of success backed by ethical integrity, forward momentum, and an inspiring mission to achieve new milestones in global healthcare. Be part of an organisation driven by digital technology and data-backed approaches that support a diversified portfolio of prescription medicines, vaccines, and animal health products. Drive innovation and execution excellence. Join a team that is passionate about using data, analytics, and insights to drive decision-making and create custom software, allowing us to tackle some of the world's greatest health threats.

Our Technology Centres focus on creating a space where teams can come together to deliver business solutions that save and improve lives. An integral part of our company's IT operating model, Tech Centres are globally distributed locations where each IT division has employees to enable our digital transformation journey and drive business outcomes. These locations, in addition to the other sites, are essential to supporting our business and strategy. A focused group of leaders in each Tech Centre helps ensure we can manage and improve each location, from investing in the growth, success, and well-being of our people, to making sure colleagues from each IT division feel a sense of belonging, to managing critical emergencies. Together, we must leverage the strength of our team to collaborate globally to optimize connections and share best practices across the Tech Centres.

Role Overview:
The Senior Data Engineer will lead the design and development of scalable data pipelines and reusable data products, models, and assets that support external partner integration and broader divisional use in the organization. This role will be responsible for understanding business and data requirements, formulating appropriate solution architectures, building data structures and models supporting business goals, and generally providing technical leadership to the group. This role requires deep expertise in cloud-native data engineering, data contracts, and medallion architecture (Bronze/Silver/Gold layers). This role will work closely with cross-functional teams to implement a common data model, support data mesh principles, and enable AI/ML capabilities across the platform. The engineer must demonstrate fluency with cloud technologies, such as AWS, and big data environments. Collaboration and the ability to communicate solutions to a non-technical audience are key. Relevant knowledge of manufacturing processes and systems is a big benefit, including manufacturing execution, lab management, ERP, quality, and supply chain. Interest in mentoring and leading other colleagues is a plus.

What will you do in this role:
- Build sets of reusable data products to support multiple digital and analytical solutions across the division.
- Design and implement ingestion pipelines from external partners (CMOs/CTLs) using structured data contracts.
- Build and maintain medallion architecture (Bronze, Silver, Gold layers) to support real-time and batch data processing.
- Enable data contextualization, lineage, and observability across ingestion and transformation layers.
- Support FAIR data principles (Findable, Accessible, Interoperable, Reusable) through mesh architecture and localized data ownership.
- Integrate legacy systems (ERP, MES, LIMS, QMS) into a unified analytics platform.
- Implement scalable orchestration using tools like Apache Airflow or Dagster (see the sketch after this posting).
- Contribute to AI/ML model deployment pipelines and support AI-augmented analytics.
- Ensure compliance with data governance, security, and regulatory standards (e.g., GxP).
- Participate in DevSecOps practices and CI/CD pipelines for data deployments.
- Work with a suite of AWS cloud services and other modern cloud technologies and capabilities, and participate in the continuous push for new patterns and technology.

What you should have:
- 10+ years in data engineering or related IT roles.
- 5+ years in cloud-native data platforms (AWS preferred) and big data environments.
- Strong experience with SQL, Python, dbt, and data lakehouse architectures.
- Hands-on experience with AWS services (S3, Glue, IAM, DMS), Trino, and GitHub.
- Familiarity with data mesh, data contracts, and medallion architecture.
- Experience with orchestration tools (Apache Airflow, Dagster) and streaming data (Kafka, Kinesis).
- Understanding of manufacturing data domains (quality, supply chain, production).
- Significant working experience with data analysis and data modeling.
- Knowledge of master/reference data management and metadata frameworks.
- Strong collaboration and communication skills, especially across global teams.

Preferred Qualifications:
- Experience working with external manufacturing partners or in regulated industries.
- Exposure to AI/ML model integration and MLOps practices.
- Familiarity with data governance tools and observability platforms.

Who we are:
We are known as Merck & Co., Inc., Rahway, New Jersey, USA in the United States and Canada and MSD everywhere else. For more than a century, we have been inventing for life, bringing forward medicines and vaccines for many of the world's most challenging diseases. Today, our company continues to be at the forefront of research to deliver innovative health solutions and advance the prevention and treatment of diseases that threaten people and animals around the world.

Current Employees apply HERE
Current Contingent Workers apply HERE

Search Firm Representatives Please Read Carefully
Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs / resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails.

Employee Status: Regular
Relocation:
VISA Sponsorship:
Travel Requirements:
Flexible Work Arrangements: Hybrid
Shift:
Valid Driving License:
Hazardous Material(s):
Required Skills: Benefits Management, Business, Business Analysis, Business Data Analytics, Business Process Improvements, Data Analysis, Data Analytics, Data Development, Data Integration, Data Modeling, Governance Management, Management Process, Program Management, Stakeholder Relationship Management, Statistical Data Modeling, Strategic Planning, Strategic Sourcing, Workforce Planning
Preferred Skills:
Job Posting End Date: 09/10/2025
A job posting is effective until 11:59:59 PM on the day BEFORE the listed job posting end date. Please ensure you apply to a job posting no later than the day BEFORE the job posting end date.
Requisition ID: R350690
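For illustration, the orchestration pattern named in the responsibilities (Airflow driving contract-validated ingestion through medallion layers) could be sketched as a minimal DAG. The DAG id, schedule, and task bodies below are hypothetical stand-ins, not this team's actual pipeline:

```python
# Minimal Airflow 2.x sketch of partner ingestion orchestration.
# Task bodies are placeholders for contract validation and promotion logic.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest_partner_files(**_):
    # Placeholder: validate files against the data contract, land in Bronze.
    print("validate against data contract, land in Bronze")

def promote_to_silver(**_):
    # Placeholder: conform to the common data model, write Silver.
    print("conform to common data model, write Silver")

with DAG(
    dag_id="cmo_partner_ingestion",   # hypothetical
    start_date=datetime(2025, 1, 1),
    schedule="@hourly",               # `schedule` is the Airflow 2.4+ parameter
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="ingest_bronze", python_callable=ingest_partner_files)
    silver = PythonOperator(task_id="promote_silver", python_callable=promote_to_silver)
    ingest >> silver  # Silver promotion runs only after Bronze ingestion succeeds
```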
Posted 3 days ago
5.0 years
5 - 9 Lacs
Hyderābād
On-site
Project description
Our client is a global technology change and delivery organization comprising nearly 500 individuals located in Switzerland, Poland, Singapore and India, providing global records and document processing, archiving, and retrieval solutions to all business divisions, with a focus on supporting Legal, Regulatory, and Operational functions.

Responsibilities
- Collaborate closely with business stakeholders to analyze requirements and translate them into technical solutions.
- Design, develop, and implement cloud-native solutions in Microsoft Azure using Databricks, Kafka, and other relevant technologies.
- Build, deploy, and maintain CI/CD pipelines following DevOps best practices.
- Optimize infrastructure for performance, scalability, and cost-efficiency.
- Monitor systems proactively and implement automated alerting, logging, and self-healing workflows.
- Ensure compliance with security and governance standards across deployments.
- Continuously improve development workflows, tooling, and cloud environments.

Skills
Must have:
- Proven experience in DevOps practices and tools, including CI/CD, Infrastructure as Code (IaC), and monitoring; minimum 5+ years in this role.
- Strong proficiency in Microsoft Azure, including key services like Azure Data Lake, Azure Functions, and Azure DevOps.
- Hands-on expertise with Databricks (Spark, Delta Lake, and workspace management).
- Kafka experience, especially setting up producers/consumers, stream processing, and data pipelines.
- Solid coding/scripting skills in languages like Python, Bash, or PowerShell.
- Familiarity with Git, Terraform or ARM templates, and containerization technologies (Docker, Kubernetes).
- Passion for emerging technologies, problem-solving, and continuous learning.
- Excellent communication skills and a collaborative mindset.

Nice to have:
- Experience in an Agile framework.

Other Languages: English: C2 Proficient
Seniority: Senior
Location: Hyderabad, IN, India
Req. VR-111704 | DevOps | BCM Industry | 11/07/2025
Posted 3 days ago
4.0 years
0 Lacs
Hyderābād
On-site
Company Description
Experian is a global data and technology company, powering opportunities for people and businesses around the world. We help to redefine lending practices, uncover and prevent fraud, simplify healthcare, and create marketing solutions, all using our unique combination of data, analytics and software. We also assist millions of people to realise their financial goals and help them save time and money. We operate across a range of markets, from financial services to healthcare, automotive, agribusiness, insurance, and many more industry segments. We invest in people and new advanced technologies to unlock the power of data. As a FTSE 100 Index company listed on the London Stock Exchange (EXPN), we have a team of 22,500 people across 32 countries. Our corporate headquarters are in Dublin, Ireland. Learn more at experianplc.com

Job Description
We are looking for an enthusiastic Senior Software Engineer to work at the forefront of our cloud modernisation efforts.

Key Responsibilities
This hybrid role is based in Hyderabad and reports to the Director of Engineering.
- Work as a senior developer in an agile team to deliver high-quality software solutions within agreed timelines, aligned with business requirements and agile principles.
- Translate business requirements into clean, scalable code with minimal defects.
- Collaborate closely with cross-functional team members to design, develop, test, and release software.
- Contribute to development processes and practices, fostering a culture of continuous integration, delivery, and improvement.
- Provide clear and concise documentation for code, processes, and system architecture to support knowledge sharing and maintainability.

Qualifications
- Bachelor's degree in engineering.
- 4+ years of hands-on experience in software development.
- Proven experience building secure, high-volume, mission-critical web systems in regulated industries (finance/insurance).
- Experience translating business requirements into clean, scalable code using design patterns and security best practices.
- Strong individual contributor and effective team collaborator.

Technical Skills
- Proficient in .NET Core / .NET 6+ / .NET Framework, building APIs with ASP.NET and C#.
- Experienced in writing unit tests (NUnit), using mocking frameworks, and applying TDD; integration testing using BDD.
- Expertise in building REST/SOAP/gRPC APIs using microservices/SOA.
- Expertise in creating, maintaining, and reusing frameworks/libraries.
- Hands-on with AWS for deploying and managing applications.
- Clean code, clean architecture, SOLID principles, and design patterns.
- Expertise with SQL/NoSQL databases.
- Proficient with Docker, Kubernetes, and Git (Bitbucket, GitHub, GitLab) & CI/CD practices.
- Expertise in Agile teams (Scrum or Kanban).
- Familiar with static code analysis and vulnerability management.

Desirable Skills
- Working knowledge of GenAI tools for coding.
- AWS certifications or relevant public cloud certifications.
- Deploying & scaling services on Kubernetes, Amazon ECS/EKS.
- Familiarity with Domain-Driven Design (DDD) and Event-Driven Architecture (EDA).
- Event streaming/messaging tools (Kafka, EventBridge, Kinesis, RabbitMQ, ActiveMQ).
- Proficient in Infrastructure as Code (IaC) using Terraform, CloudFormation, or CDK.
- CI/CD tools like GitHub Actions, GitLab CI, or Jenkins.

Additional Information
Our uniqueness is that we celebrate yours. Experian's culture and people are important differentiators. We take our people agenda very seriously and focus on what matters: DEI, work/life balance, development, authenticity, collaboration, wellness, reward & recognition, volunteering... the list goes on. Experian's people-first approach is award-winning: World's Best Workplaces™ 2024 (Fortune Global Top 25), Great Place To Work™ in 24 countries, and Glassdoor Best Places to Work 2024, to name a few. Check out Experian Life on social or our Careers Site and Glassdoor to understand why.

Experian is proud to be an Equal Opportunity and Affirmative Action employer. Innovation is a critical part of Experian's DNA and practices, and our diverse workforce drives our success. Everyone can succeed at Experian and bring their whole self to work, irrespective of their gender, ethnicity, religion, color, sexuality, physical ability or age. If you have a disability or special need that requires accommodation, please let us know at the earliest opportunity.

Benefits
Experian cares about employees' work-life balance, health, safety and wellbeing. In support of this endeavor, we offer best-in-class family well-being benefits, enhanced medical benefits and paid time off.

Experian Careers - Creating a better tomorrow together
Find out what it's like to work for Experian by clicking here
Posted 3 days ago
7.0 years
0 Lacs
Hyderābād
On-site
About Us:
Please be aware we have noticed an increase in hiring scams potentially targeting Seismic candidates. Read our full statement on our Careers page.

Seismic is the global leader in AI-powered enablement, empowering go-to-market leaders to drive strategic growth and deliver exceptional customer experiences at scale. The Seismic Enablement Cloud™ is the only unified AI-powered platform that prepares customer-facing teams with the skills, content, tools, and insights needed to maximize every buyer interaction and strengthen client relationships. Trusted by more than 2,000 organizations worldwide, Seismic helps businesses achieve measurable outcomes and accelerate revenue growth. Seismic is headquartered in San Diego with offices across North America, Europe, Asia and Australia. Learn more at seismic.com.

Seismic is committed to building an inclusive workplace that ignites growth for our employees and creates a culture of belonging that allows all employees to be seen and valued for who they are. Learn more about DEI at Seismic here.

Overview:
Join us at Seismic, a cutting-edge technology company leading the way in the SaaS industry. We specialize in delivering modern, scalable, and multi-cloud solutions that empower businesses to succeed in today's digital era. Leveraging the latest advancements in technology, including generative AI, we are committed to driving innovation and transforming the way businesses operate. As we embark on an exciting journey of growth and expansion, we are seeking top engineering talent to join our Integrations team in Hyderabad, India.

As a Senior Engineer I, you will play a crucial role in developing and optimizing backend systems that power our web application, including content discovery, knowledge management, learning and coaching, meeting intelligence, and various AI capabilities, as well as the front-end developer support ecosystem, including the app exchange and core integrations developed and distributed by Seismic. You will collaborate with cross-functional teams to design, build, and maintain scalable, high-performance systems that deliver exceptional value to our customers. This position offers a unique opportunity to make a significant impact on our company's growth and success by contributing to the technical excellence and innovation of our software solutions.

Seismic AI:
Integrations are a critical product area in Seismic. We support a robust developer ecosystem with native integrations as well as a developer support and app exchange ecosystem. Our focus is to enable the Seismic value platform to be integrated with our customers' systems and processes, to make our customers more productive and efficient in their day-to-day tasks, and to drive more successful sales outcomes.

Why Join Us:
- Opportunity to be a key technical leader in a rapidly growing company and drive innovation in the SaaS industry.
- Work with cutting-edge technologies and be at the forefront of AI advancements.
- Competitive compensation package, including salary, bonus, and equity options.
- A supportive, inclusive work culture.
- Professional development opportunities and career growth potential in a dynamic and collaborative environment.

Who you are:
- Experience: 7+ years of experience in software engineering and a proven track record of building and scaling microservices and working with data retrieval systems.
- Technical expertise: Experience with C# and .NET, unit testing, object-oriented programming, and relational databases. Experience with Infrastructure as Code (Terraform, Pulumi, etc.), event-driven architectures with tools like Kafka, and feature management (LaunchDarkly) is good to have.
- Strong proficiency in frontend technologies such as HTML, CSS, JavaScript, and frameworks/libraries such as React, Angular, or Vue.js. Experience with TypeScript and state management libraries like Redux is a plus.
- Solid experience in writing and executing automated tests using frameworks such as Jest, Mocha, or Cypress.
- Understanding of web accessibility standards (WCAG) and experience implementing accessible designs.
- AI and data science basics: Experience integrating AI-driven features into frontend applications, or a basic understanding of AI and data science concepts, is a big plus.
- Cloud expertise: Experience with cloud platforms like AWS, Google Cloud Platform (GCP), or Microsoft Azure. Knowledge of cloud-native services for AI/ML, data storage, and processing. Experience deploying containerized applications into Kubernetes is a plus.
- SaaS knowledge: Extensive experience in SaaS application development and cloud technologies, with a deep understanding of modern distributed systems and cloud operational infrastructure.
- Product development: Experience collaborating with product management and design, with the ability to translate business requirements into technical solutions that drive successful delivery. Proven record of driving feature development from concept to launch.
- Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Fast-paced environment: Experience working in a fast-paced, dynamic environment, preferably in a SaaS or technology-driven company.

What you'll be doing:
- Distributed systems development: Design, develop, and maintain backend systems and services for AI, information extraction, or information retrieval functionality, ensuring high performance, scalability, and reliability.
- Frontend development: Design, develop, and maintain frontend solutions for Seismic-developed applications and the developer ecosystem, ensuring high performance, scalability, and reliability.
- Performance tuning: Monitor and optimize service performance, addressing bottlenecks and ensuring low-latency query responses.
- Technical leadership: Provide technical guidance and mentorship to junior engineers, promoting best practices in backend development.
- Collaboration: Work closely with cross-functional and geographically distributed teams, including product managers, frontend engineers, and UX designers, to deliver seamless and intuitive experiences.
- Continuous improvement: Stay updated with the latest trends and advancements in software and technologies, conducting research and experimentation to drive innovation.

If you are an individual with a disability and would like to request a reasonable accommodation as part of the application or recruiting process, please click here.

Headquartered in San Diego and with employees across the globe, Seismic is the global leader in sales enablement, backed by firms such as Permira, Ameriprise Financial, EDBI, Lightspeed Venture Partners, and T. Rowe Price. Seismic also expanded its team and product portfolio with the strategic acquisitions of SAVO, Percolate, Grapevine6, and Lessonly. Our board of directors is composed of several industry luminaries including John Thompson, former Chairman of the Board for Microsoft.

Seismic is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to gender, age, race, religion, or any other classification which is protected by applicable law.

Please note this job description is not designed to cover or contain a comprehensive listing of activities, duties or responsibilities that are required of the employee for this job. Duties, responsibilities and activities may change at any time with or without notice.
Posted 3 days ago
8.0 years
2 - 5 Lacs
Hyderābād
On-site
Position Overview:
ShyftLabs is seeking an experienced Databricks Architect to lead the design, development, and optimization of big data solutions using the Databricks Unified Analytics Platform. This role requires deep expertise in Apache Spark, SQL, Python, and cloud platforms (AWS/Azure/GCP). The ideal candidate will collaborate with cross-functional teams to architect scalable, high-performance data platforms and drive data-driven innovation.

ShyftLabs is a growing data product company that was founded in early 2020 and works primarily with Fortune 500 companies. We deliver digital solutions built to accelerate business growth across various industries by focusing on creating value through innovation.

Job Responsibilities
- Architect, design, and optimize big data and AI/ML solutions on the Databricks platform.
- Develop and implement highly scalable ETL pipelines for processing large datasets.
- Lead the adoption of Apache Spark for distributed data processing and real-time analytics.
- Define and enforce data governance, security policies, and compliance standards.
- Optimize data lakehouse architectures for performance, scalability, and cost-efficiency.
- Collaborate with data scientists, analysts, and engineers to enable AI/ML-driven insights.
- Oversee and troubleshoot Databricks clusters, jobs, and performance bottlenecks.
- Automate data workflows using CI/CD pipelines and infrastructure-as-code practices.
- Ensure data integrity, quality, and reliability across all data processes.

Basic Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
- 8+ years of hands-on experience in data engineering, with at least 5+ years as a Databricks architect working with Apache Spark.
- Proficiency in SQL, Python, or Scala for data processing and analytics.
- Extensive experience with cloud platforms (AWS, Azure, or GCP) for data engineering.
- Strong knowledge of ETL frameworks, data lakes, and Delta Lake architecture.
- Hands-on experience with CI/CD tools and DevOps best practices.
- Familiarity with data security, compliance, and governance best practices.
- Strong problem-solving and analytical skills in a fast-paced environment.

Preferred Qualifications:
- Databricks certifications (e.g., Databricks Certified Data Engineer, Spark Developer).
- Hands-on experience with MLflow, Feature Store, or Databricks SQL.
- Exposure to Kubernetes, Docker, and Terraform.
- Experience with streaming data architectures (Kafka, Kinesis, etc.).
- Strong understanding of business intelligence and reporting tools (Power BI, Tableau, Looker).
- Prior experience working with retail, e-commerce, or ad-tech data platforms.

We are proud to offer a competitive salary alongside a strong insurance package. We pride ourselves on the growth of our employees, offering extensive learning and development resources.
Posted 3 days ago
5.0 years
12 - 24 Lacs
Hyderābād
On-site
We are seeking a passionate and experienced DevOps Engineer to join our dynamic team. The ideal candidate will have a strong background in cloud platforms, CI/CD pipelines, automation, and containerization, with a proven ability to optimize infrastructure and deployment workflows. You will play a crucial role in ensuring the reliability, scalability, and security of our systems.
Key Responsibilities: Design, implement, and manage CI/CD pipelines using Jenkins, GitLab CI, or similar tools. Manage cloud infrastructure primarily on AWS (basic GCP exposure is a plus) using Infrastructure as Code (IaC) tools like Terraform, CloudFormation, and Atlantis. Deploy, manage, and troubleshoot containerized applications using Kubernetes and tools like Helm. Ensure observability using monitoring and logging tools such as Prometheus, Grafana, the ELK Stack, AWS CloudWatch, Loki, etc. Automate configuration management using Ansible and streamline deployment processes. Manage source control and branching strategies using GitHub, GitLab, Bitbucket, or similar platforms. Maintain secure systems and applications using tools like HashiCorp Vault, IAM, and WAF. Drive cloud cost optimization strategies (FinOps) by monitoring resource utilization and implementing savings measures. Troubleshoot and resolve infrastructure, network, and deployment issues. Collaborate closely with development, QA, and product teams to improve system reliability and development velocity. Participate in on-call rotations and incident management processes.
Technical Skills Required: Cloud Platforms: AWS (primary), GCP (basic). Infrastructure as Code: Terraform, CloudFormation, Atlantis. CI/CD Tools: Jenkins, GitLab CI. Containerization & Orchestration: Docker, Kubernetes, Helm. Configuration Management: Ansible. Monitoring & Logging: Prometheus, Grafana, Kibana, ELK, Loki, New Relic, CloudWatch. Scripting & Programming: shell scripting, Python (basic knowledge). Version Control: GitHub, GitLab, Bitbucket. Artifact Management: Nexus, JFrog Artifactory. API Gateway: Nginx, AWS API Gateway, Kong (optional). Database Knowledge: MySQL, MongoDB, PostgreSQL, Redis (for basic troubleshooting). Operating Systems: Linux (Ubuntu, CentOS, Amazon Linux).
Nice to Have: Knowledge of SIEM solutions and security tools. Experience with cost optimization tools like AWS Cost Explorer and Trusted Advisor. Exposure to message brokers like Kafka or ActiveMQ. Experience working in Agile/Scrum environments. Certifications: AWS Certified DevOps Engineer, AWS Solutions Architect (Associate), or similar.
Educational Qualification: Bachelor's degree in Computer Science, Information Technology, or a related field.
Soft Skills: Strong problem-solving and analytical thinking. Excellent collaboration and communication skills. Proactive, detail-oriented, and results-driven mindset. Ability to work independently and in a team environment.
Job Type: Full-time. Pay: ₹1,200,000.00 - ₹2,468,414.68 per year. Experience: AWS: 5 years (required); Terraform: 5 years (required). Work Location: In person
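A hedged sketch of the FinOps-style cost monitoring mentioned above, using boto3's Cost Explorer client; region and credentials are assumed to come from the environment, and the date range and $100 threshold are illustrative:

    import boto3

    # Pull month-to-date unblended cost grouped by service.
    ce = boto3.client("ce")
    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": "2025-06-01", "End": "2025-06-30"},  # illustrative dates
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
    )
    for group in resp["ResultsByTime"][0]["Groups"]:
        service = group["Keys"][0]
        amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
        if amount > 100:  # arbitrary threshold for flagging spend
            print(f"{service}: ${amount:,.2f}")

A report like this is typically scheduled (e.g., a daily Lambda or cron job) and fed into alerting rather than run by hand.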
Posted 3 days ago
5.0 years
5 - 8 Lacs
Hyderābād
On-site
Company Profile: Founded in 1976, CGI is among the largest independent IT and business consulting services firms in the world. With 94,000 consultants and professionals across the globe, CGI delivers an end-to-end portfolio of capabilities, from strategic IT and business consulting to systems integration, managed IT and business process services, and intellectual property solutions. CGI works with clients through a local relationship model complemented by a global delivery network that helps clients digitally transform their organizations and accelerate results. CGI Fiscal 2024 reported revenue is CA$14.68 billion and CGI shares are listed on the TSX (GIB.A) and the NYSE (GIB). Learn more at cgi.com.
Your future duties and responsibilities. Position: Senior Software Engineer. Experience: 5-10 years. Category: Software Development/Engineering. Shift Timings: 1:00 pm to 10:00 pm. Main location: Hyderabad. Work Type: Work from office. Skill: Spark (PySpark), Python, and SQL. Employment Type: Full Time. Position ID: J0625-0215.
Required qualifications to be successful in this role
Must-have skills: 5+ years of development experience with Spark (PySpark), Python, and SQL. Extensive knowledge of building data pipelines. Hands-on experience with Databricks development. Strong experience developing on Linux OS. Experience with scheduling and orchestration (e.g., Databricks Workflows, Airflow, Prefect, Control-M).
Good-to-have skills: Solid understanding of distributed systems, data structures, and design principles. Agile development methodologies (e.g., SAFe, Kanban, Scrum). Comfortable communicating with teams via showcases/demos. Play a key role in establishing and implementing migration patterns for the Data Lake Modernization project. Actively migrate use cases from our on-premises Data Lake to Databricks on GCP. Collaborate with Product Management and business partners to understand use case requirements and reporting. Adhere to internal development best practices/lifecycle (e.g., testing, code reviews, CI/CD, documentation). Document and showcase feature designs/workflows. Participate in team meetings and discussions around product development. Stay up to date on the latest industry trends and design patterns. 3+ years of experience with Git. 3+ years of experience with CI/CD (e.g., Azure Pipelines). Experience with streaming technologies such as Kafka and Spark. Experience building applications on Docker and Kubernetes. Cloud experience (e.g., Azure, Google).
Together, as owners, let's turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect and belonging. Here, you'll reach your full potential because… You are invited to be an owner from day 1 as we work together to bring our Dream to life. That's why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company's strategy and direction. Your work creates value. You'll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You'll shape your career by joining a company built to grow and last. You'll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons.
Come join our team—one of the largest IT and business consulting services firms in the world.
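To illustrate the scheduling and orchestration experience the role calls for, a minimal Airflow DAG sketch; the DAG name and the placeholder ingest step are hypothetical, and Airflow 2.4+ is assumed (older versions use schedule_interval instead of schedule):

    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def run_ingest():
        # Placeholder for a Spark/Databricks job-submission step.
        print("submitting ingest job")

    with DAG(
        dag_id="lake_modernization_ingest",  # hypothetical DAG name
        start_date=datetime(2025, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        ingest = PythonOperator(task_id="ingest", python_callable=run_ingest)

In practice the callable would submit a Databricks Workflows job or spark-submit command, with retries and alerting configured on the task.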
Posted 3 days ago
12.0 years
0 Lacs
Hyderābād
On-site
Overview: PepsiCo Data BI & Integration Platforms is seeking an experienced Cloud Platform Databricks SME, responsible for overseeing platform administration, security, new NPI tool integration, migrations, platform maintenance, and other platform administration activities on Azure/AWS. The ideal candidate will have hands-on experience with Azure/AWS services: Infrastructure as Code (IaC), platform provisioning and administration, cloud network design, cloud security principles, and automation.
Responsibilities: The Databricks Subject Matter Expert (SME) plays a pivotal role in administration, security best practices, platform sustain support, new tool adoption, cost optimization, and supporting new patterns/design solutions using the Databricks platform. Here's a breakdown of typical responsibilities:
Core Technical Responsibilities: Architect and optimize big data pipelines using Apache Spark, Delta Lake, and Databricks-native tools. Design scalable data ingestion and transformation workflows, including batch and streaming (e.g., Kafka, Spark Structured Streaming). Create integration guidelines to configure and integrate Databricks with existing security tools relevant to data access control. Implement data security and governance using Unity Catalog, access controls, and data classification techniques. Support migration of legacy systems to Databricks on cloud platforms like Azure, AWS, or GCP. Manage cloud platform operations with a focus on FinOps support, optimizing resource utilization, cost visibility, and governance across multi-cloud environments.
Collaboration & Advisory: Act as a technical advisor to data engineering and analytics teams, guiding best practices and performance tuning. Partner with architects and business stakeholders to align Databricks solutions with enterprise goals. Lead proof-of-concept (PoC) initiatives to demonstrate Databricks capabilities for specific use cases.
Strategic & Leadership Contributions: Mentor junior engineers and promote knowledge sharing across teams. Contribute to platform adoption strategies, including training, documentation, and internal evangelism. Stay current with Databricks innovations and recommend enhancements to existing architectures.
Specialized Expertise (Optional but Valuable): Machine learning and AI integration using MLflow, AutoML, or custom models. Cost optimization and workload sizing for large-scale data processing. Compliance and audit readiness for regulated industries.
Qualifications: Bachelor's degree in Computer Science. At least 12 years of experience in IT cloud infrastructure, architecture, and operations, including security, with at least 5 years in a platform admin role. Strong understanding of data security principles and best practices. Expertise in the Databricks platform, its security features, Unity Catalog, and data access control mechanisms. Experience with data classification and masking techniques. Strong understanding of cloud cost management, with hands-on experience in usage analytics, budgeting, and cost optimization strategies across multi-cloud platforms. Strong knowledge of cloud architecture, design, and deployment principles and practices, including microservices, serverless, containers, and DevOps. Deep expertise in Azure/AWS big data and analytics technologies, including Databricks, real-time data ingestion, data warehouses, serverless ETL, NoSQL databases, DevOps, Kubernetes, virtual machines, web/function apps, and monitoring and security tools.
Deep expertise in Azure/AWS networking and security fundamentals, including network endpoints and network security groups, firewalls, external/internal DNS, load balancers, virtual networks, and subnets. Proficient in scripting and automation tools such as PowerShell, Python, Terraform, and Ansible. Excellent problem-solving, analytical, and communication skills, with the ability to explain complex technical concepts to non-technical audiences. Certifications in Azure/AWS/Databricks platform administration, networking, and security are preferred. Strong self-organization, time management, and prioritization skills. A high level of attention to detail, excellent follow-through, and reliability. Strong collaboration, teamwork, and relationship-building skills across multiple levels and functions in the organization. Ability to listen, establish rapport, and build credibility as a strategic partner vertically within the business unit or function, as well as with leadership and functional teams. Strategic thinker focused on business-value results that utilize technical solutions. Strong communication skills in writing, speaking, and presenting. Able to work effectively in a multi-tasking environment. Fluent in English.
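A sketch of the Kafka plus Spark Structured Streaming pattern named above, assuming a Spark build with the Kafka connector on the classpath; the broker address, topic, and storage paths are placeholders:

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("events-stream").getOrCreate()

    # Read a Kafka topic as an unbounded stream.
    events = (
        spark.readStream.format("kafka")
             .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
             .option("subscribe", "telemetry")                  # placeholder topic
             .load()
    )
    # Kafka values arrive as bytes; cast to string before downstream parsing.
    parsed = events.select(F.col("value").cast("string").alias("payload"))
    query = (
        parsed.writeStream.format("delta")
              .option("checkpointLocation", "/mnt/chk/telemetry")  # placeholder path
              .start("/mnt/bronze/telemetry")                      # placeholder path
    )
    # query.awaitTermination()  # block in a real long-running job

The checkpoint location is what gives the stream exactly-once recovery semantics, which is why platform teams treat it as governed infrastructure rather than scratch space.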
Posted 3 days ago
5.0 years
4 - 6 Lacs
Hyderābād
On-site
Position Overview: ShyftLabs is seeking a skilled Databricks Engineer to support designing, developing, and optimizing big data solutions using the Databricks Unified Analytics Platform. This role requires strong expertise in Apache Spark, SQL, Python, and cloud platforms (AWS/Azure/GCP). The ideal candidate will collaborate with cross-functional teams to drive data-driven insights and ensure scalable, high-performance data architectures. ShyftLabs is a growing data product company, founded in early 2020, that works primarily with Fortune 500 companies. We deliver digital solutions built to help accelerate the growth of businesses in various industries by focusing on creating value through innovation.
Job Responsibilities: Design, implement, and optimize big data pipelines in Databricks. Develop scalable ETL workflows to process large datasets. Leverage Apache Spark for distributed data processing and real-time analytics. Implement data governance, security policies, and compliance standards. Optimize data lakehouse architectures for performance and cost-efficiency. Collaborate with data scientists, analysts, and engineers to enable advanced AI/ML workflows. Monitor and troubleshoot Databricks clusters, jobs, and performance bottlenecks. Automate workflows using CI/CD pipelines and infrastructure-as-code practices. Ensure data integrity, quality, and reliability in all pipelines.
Basic Qualifications: Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field. 5+ years of hands-on experience with Databricks and Apache Spark. Proficiency in SQL, Python, or Scala for data processing and analysis. Experience with cloud platforms (AWS, Azure, or GCP) for data engineering. Strong knowledge of ETL frameworks, data lakes, and Delta Lake architecture. Experience with CI/CD tools and DevOps best practices. Familiarity with data security, compliance, and governance best practices. Strong problem-solving and analytical skills with an ability to work in a fast-paced environment.
Preferred Qualifications: Databricks certifications (e.g., Databricks Certified Data Engineer, Spark Developer). Hands-on experience with MLflow, Feature Store, or Databricks SQL. Exposure to Kubernetes, Docker, and Terraform. Experience with streaming data architectures (Kafka, Kinesis, etc.). Strong understanding of business intelligence and reporting tools (Power BI, Tableau, Looker). Prior experience working with retail, e-commerce, or ad-tech data platforms.
We are proud to offer a competitive salary alongside a strong insurance package. We pride ourselves on the growth of our employees, offering extensive learning and development resources.
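For illustration, a minimal Delta Lake upsert of the kind such pipelines typically perform, assuming the delta-spark package is available (as on Databricks); the table paths and key column are hypothetical:

    from pyspark.sql import SparkSession
    from delta.tables import DeltaTable

    spark = SparkSession.builder.appName("customers-upsert").getOrCreate()

    updates = spark.read.parquet("/mnt/staging/customers")        # hypothetical staging data
    target = DeltaTable.forPath(spark, "/mnt/curated/customers")  # hypothetical Delta table

    # Upsert staged rows into the curated table, keyed on customer_id:
    # matched rows are updated in place, new rows are inserted.
    (target.alias("t")
           .merge(updates.alias("s"), "t.customer_id = s.customer_id")
           .whenMatchedUpdateAll()
           .whenNotMatchedInsertAll()
           .execute())

MERGE is the usual way Delta pipelines stay idempotent: re-running the same staging batch does not duplicate rows.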
Posted 3 days ago
1.0 years
4 - 7 Lacs
Hyderābād
On-site
Company Description: Experian is a global data and technology company, powering opportunities for people and businesses around the world. We help to redefine lending practices, uncover and prevent fraud, simplify healthcare, and create marketing solutions, all using our unique combination of data, analytics, and software. We also assist millions of people to realise their financial goals and help them save time and money. We operate across a range of markets, from financial services to healthcare, automotive, agribusiness, insurance, and many more industry segments. We invest in people and new advanced technologies to unlock the power of data. As a FTSE 100 Index company listed on the London Stock Exchange (EXPN), we have a team of 22,500 people across 32 countries. Our corporate headquarters are in Dublin, Ireland. Learn more at experianplc.com.
Job Description: We are looking for an enthusiastic Software Engineer to contribute to our cloud modernisation efforts.
Key Responsibilities: You will be #LI-hybrid based in Hyderabad and reporting to the Director of Engineering. Develop high-quality software in an agile team, ensuring delivery aligned with business requirements and agile principles. You will translate business requirements into clean, scalable, low-defect code. Collaborate with cross-functional teams to design, develop, test, and release software. Contribute to development practices that support continuous integration, delivery, and improvement. Integrate DevSecOps practices into the development lifecycle, using static code analysis and security scanning tools. Maintain clear documentation for code, processes, and system architecture to support knowledge sharing and maintainability.
Qualifications: 1–4 years of hands-on experience in software development. Degree in B.Sc. Computer Science, MCA, B.Tech in Engineering, or higher. Proven experience building secure, mission-critical, high-volume web systems in regulated industries (e.g., finance, insurance).
Required Technical Skills & Knowledge: Proficient in .NET 6+ / .NET Core / .NET Framework, building scalable REST/SOAP/RPC APIs using ASP.NET and C#. Expertise in writing unit and integration tests using xUnit/NUnit, with experience in TDD and BDD. Familiar with mocking frameworks, clean architecture, SOLID principles, and design patterns. Expertise in SQL/NoSQL databases. Knowledge of Docker; familiarity with Kubernetes.
Desirable & Useful Skills: Working knowledge of GenAI tools for coding. Familiarity with Python/Scala, Spark, or other big data frameworks. Agile environments (Scrum or Kanban). AWS or other public cloud certifications. Domain-Driven Design (DDD) and Event-Driven Architecture (EDA). Familiarity with event streaming/messaging tools (Kafka, EventBridge, Kinesis, RabbitMQ, ActiveMQ). Knowledge of Infrastructure as Code (IaC) using Terraform, CloudFormation, or CDK. CI/CD tools (GitHub Actions, GitLab CI, Jenkins). Exposure to static code analysis and vulnerability management.
Additional Information: Our uniqueness is that we celebrate yours. Experian's culture and people are important differentiators. We take our people agenda very seriously and focus on what matters: DEI, work/life balance, development, authenticity, collaboration, wellness, reward & recognition, volunteering... the list goes on. Experian's people-first approach is award-winning: World's Best Workplaces™ 2024 (Fortune Global Top 25), Great Place To Work™ in 24 countries, and Glassdoor Best Places to Work 2024, to name a few.
Check out Experian Life on social or our Careers Site and Glassdoor to understand why. Experian is proud to be an Equal Opportunity and Affirmative Action employer. Innovation is a critical part of Experian's DNA and practices, and our diverse workforce drives our success. Everyone can succeed at Experian and bring their whole self to work, irrespective of their gender, ethnicity, religion, color, sexuality, physical ability, or age. If you have a disability or special need that requires accommodation, please let us know at the earliest opportunity.
Benefits: Experian cares for employees' work-life balance, health, safety, and wellbeing. In support of this endeavor, we offer best-in-class family well-being benefits, enhanced medical benefits, and paid time off.
Experian Careers - Creating a better tomorrow together. Find out what it's like to work for Experian by clicking here.
Posted 3 days ago
10.0 years
5 - 9 Lacs
Gurgaon
On-site
Position Title: Enterprise Integration Architect. Position Type: Regular - Full-Time. Position Location: Gurgaon. Requisition ID: 37105.
Job Overview: We are seeking a highly experienced Senior Application Integration Architect to lead the design, development, and governance of enterprise integration solutions across cloud, hybrid, and on-premises environments. This role requires deep expertise in integration architecture patterns, API-led connectivity, event-driven architecture, API management, and iPaaS platforms to drive digital transformation initiatives and ensure scalable, secure, and maintainable integrations across our application landscape. You will collaborate closely with enterprise architects, product teams, infrastructure, and security stakeholders to shape the integration strategy, define best practices, and oversee execution.
Key Responsibilities: Define and implement enterprise integration architectures leveraging API-led and event-driven patterns to enable agility, reusability, and rapid delivery. Lead the design and governance of integration solutions, ensuring compliance with enterprise-wide architecture principles. Provide best practices for API management, EDA (event-driven architecture), and hybrid cloud integrations. Collaborate with business stakeholders, solution architects, and development teams to design and implement integration solutions. Define integration patterns, standards, and guidelines to improve performance, reusability, and security. Ensure smooth integration with SAP S/4HANA, legacy systems, third-party applications, and external APIs. Drive CI/CD automation, DevOps practices, and monitoring strategies for integration solutions. Troubleshoot complex integration issues and optimize solutions for better performance. Guide and mentor integration developers in implementing best practices and architectural guidelines.
Required Skills & Qualifications: 10+ years of experience in enterprise integration architecture and iPaaS solutions. 5+ years of hands-on experience with an iPaaS platform offering Cloud Integration, API Management, Event Mesh, and B2B Integration capabilities. Strong knowledge of integration architectures, patterns, middleware, APIs (REST/SOAP), microservices, event-driven architectures, and security best practices. Proficiency in API security, OAuth, JWT, SAML, and other authentication mechanisms. Hands-on experience with CI/CD pipelines, DevOps, and monitoring tools for integration solutions. Knowledge of Kafka, MQ, webhooks, and other event streaming technologies. Ability to lead architecture discussions, influence stakeholders, and align technology with business goals. Excellent communication, leadership, and documentation skills.
Soft Skills: Strong communication and stakeholder management skills. Strategic thinking with a passion for problem solving and innovation. Ability to work in a fast-paced, cross-functional environment. Proven leadership and mentoring capabilities.
McCain Foods is an equal opportunity employer. As a global family-owned company, we strive to be the employer of choice in the diverse communities around the world in which we live and work. We recognize that inclusion drives our creativity, resilience, and success and makes our business stronger. All qualified applicants will receive consideration for employment without regard to race, religion, color, national origin, sex, age, veteran status, disability, or any other protected characteristic under applicable law. McCain is an accessible employer.
If you require an accommodation throughout the recruitment process (including alternate formats of materials or accessible meeting rooms), please let us know and we will work with you to find appropriate solutions. Your privacy is important to us. By submitting personal data or information to us, you agree this will be handled in accordance with McCain's Global Privacy Policy and Global Employee Privacy Policy, as applicable. You can understand how your personal information is being handled here.
Job Family: Information Technology. Division: Global Digital Technology. Department: GDTC India Function. Location(s): IN - India : Haryana : Gurgaon. Company: McCain Foods (India) P Ltd
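A minimal sketch of the event-driven publishing pattern referenced above, using the kafka-python client; the broker address, topic name, and event payload are illustrative:

    import json
    from kafka import KafkaProducer

    # Publish a domain event so downstream integrations can react asynchronously.
    producer = KafkaProducer(
        bootstrap_servers="broker:9092",  # placeholder broker
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )
    event = {"event_type": "order.created", "order_id": "12345", "source": "erp"}
    producer.send("integration.orders", value=event)  # placeholder topic
    producer.flush()  # block until the event is actually delivered

Decoupling producers from consumers this way is the core of the event-driven integration style the role describes: the ERP emits facts, and any number of systems subscribe without point-to-point wiring.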
Posted 3 days ago
4.0 years
6 - 7 Lacs
Bengaluru
On-site
Line of Service: Advisory. Industry/Sector: Not Applicable. Specialism: SAP. Management Level: Senior Associate.
Job Description & Summary: At PwC, our people in business application consulting specialise in consulting services for a variety of business applications, helping clients optimise operational efficiency. These individuals analyse client needs, implement software solutions, and provide training and support for seamless integration and utilisation of business applications, enabling clients to achieve their strategic objectives. As a business application consulting generalist at PwC, you will provide consulting services for a wide range of business applications. You will leverage a broad understanding of various software solutions to assist clients in optimising operational efficiency through analysis, implementation, training, and support.
Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes, and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.
At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences, and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
Responsibilities:
· Developing Java applications. Design, develop, and implement Java-based applications that meet business requirements and provide optimal user experiences.
· Collaborating with team members. Work closely with stakeholders, project managers, and front-end and back-end developers to create effective software solutions.
· Debugging and troubleshooting. Identify and resolve technical issues that impact functionality and performance, ensuring seamless application development.
· Implementing unit testing. Conduct comprehensive unit testing to validate the functionality of each application module, adhering to software development lifecycle best practices.
· Writing optimized Java code. Ensure Java code quality by following best practices and utilizing object-oriented programming principles to build scalable applications.
· Utilizing frameworks. Leverage Java frameworks such as Spring, Hibernate, and Java EE to expedite development while maintaining high standards.
· Building and managing APIs. Develop and manage APIs to facilitate smooth communication between applications and web services.
· Conducting code reviews. Participate in code reviews to ensure code quality, maintain consistency, and share insights on design patterns.
· Using development tools. Utilize tools like Git, Maven, Docker, and AWS to manage project dependencies and streamline deployment.
Mandatory skill sets: · Java · Spring Boot · Microservices · NoSQL DB
Preferred skill sets: · Kafka · Docker · Kubernetes · Cloud (AWS/Azure)
Years of experience required: 4+ years. Education qualification: BE/B.Tech/MBA/MCA.
Education (if blank, degree and/or field of study not specified). Degrees/Field of Study required: Bachelor of Engineering, Master of Business Administration. Degrees/Field of Study preferred: Certifications (if blank, certifications not specified).
Required Skills: Java. Optional Skills: Accepting Feedback, Active Listening, Analytical Reasoning, Analytical Thinking, Application Software, Business Data Analytics, Business Management, Business Technology, Business Transformation, Communication, Creativity, Documentation Development, Embracing Change, Emotional Regulation, Empathy, Implementation Research, Implementation Support, Implementing Technology, Inclusion, Intellectual Curiosity, Learning Agility, Optimism, Performance Assessment, Performance Management Software {+ 16 more}.
Desired Languages (if blank, desired languages not specified). Travel Requirements. Available for Work Visa Sponsorship? Government Clearance Required? Job Posting End Date
Posted 3 days ago
10.0 years
0 Lacs
Bengaluru
On-site
Who we are: Money. A subject that's always on our mind. We go through a rollercoaster of emotions when it comes to money. That's why at Jupiter, our mission is to enable people to improve their relationship with money. We are a financial services platform that leverages technology to offer our customers seamless and user-friendly products related to banking, loans, and investments. So whether it's a savings account, credit cards, investments, or payments, it's all on Jupiter. We also help users cut through banking jargon, offer smart insights based on spending, and provide users with a range of new-age features to help them make sense of their finances.
Our journey so far: Jupiter was founded in 2019 by Jitendra Gupta. Best known as Citrus Pay's founder, Jiten found himself wondering why personal finance experiences were not customer-centric when everything else, like food ordering and entertainment, was. That's how our journey began in 2019, with a simple, powerful idea: to create a modern, customer-centric experience that empowers individuals when it comes to their money. From our humble beginnings, we've strived relentlessly to make managing money easier. When we launched in 2021, we had a waitlist of over 100,000 people eager to experience the world of Jupiter, and within 10 months we crossed 1 million users. Today, we have 2.7 million+ happy users on Jupiter. Our Insights feature tells you how much and where you've been spending, automatically, even on your other savings accounts! Our Edge CSB Bank RuPay credit card is getting a lot of love for its head-turning transparent design, not to mention it being a rewarding experience. Our No-Penalty SIP and Daily SIP at Rs 10 have helped scale our investments business, and we have over 100,000 users investing on our platform to fulfill their money aspirations. We've embraced cutting-edge technology, harnessed the power of data analytics, and assembled a diverse team of creative thinkers and industry experts who share our vision for a more accessible, transparent, and inclusive financial ecosystem. We're now looking for like-minded folks to join the crew.
Roles and responsibilities: Build, develop, and be hands-on. Create clear career paths for team members to help them grow with regular and deep mentoring; attract and retain top talent keeping long-term goals in mind. Drive strategy and vision across teams in a rapidly evolving environment. Work closely with diverse stakeholders across product, tech, and business teams; enable them by providing scalable and quality technology solutions. Facilitate platform solutions that span beyond the team and across the ecosystem. Own key engineering metrics within your area. Drive agility in execution and create a strong team culture of responsiveness through leading by example. Own decision-making around prioritization for better predictability.
What is needed for this role: 10+ years of experience, including managing a team. Deep understanding of technologies and architecture in a highly scalable and available environment. Advanced working experience in a public cloud such as AWS: the Kubernetes ecosystem, datastores such as Postgres/Redis/DynamoDB, and messaging systems such as Apache Kafka/SQS. Entrepreneurial mentality, hunger, and passion to build quality products along with good engineering practices. Excellent engineering leadership ability, including recruiting, inspiring, and retaining talent. Very good written and verbal communication skills.
Demonstrated judgment, cross-collaboration, and problem-solving skills.
Posted 3 days ago
5.0 years
6 - 9 Lacs
Bengaluru
On-site
As a member of the Support organization, your focus is to deliver post-sales support and solutions to the Oracle customer base while serving as an advocate for customer needs. This involves resolving post-sales non-technical customer inquiries via phone and electronic means, as well as technical questions regarding the use of and troubleshooting for our Electronic Support Services. A primary point of contact for customers, you are responsible for facilitating customer relationships with Support and providing advice and assistance to internal Oracle employees on diverse customer situations and escalated issues. Career Level - IC3
As a Sr. Support Engineer, you will be the technical interface to customers, Original Equipment Manufacturers (OEMs), and Value-Added Resellers (VARs) for resolution of problems related to the installation, recommended maintenance, and use of Oracle products. You should have an understanding of all Oracle products in your competencies and in-depth knowledge of several products and/or platforms. You should also be highly experienced in multiple platforms and able to complete assigned duties with minimal direction from management. In this position, you will routinely act independently while researching and developing solutions to customer issues.
RESPONSIBILITIES: Manage and resolve Service Requests logged by customers (internal and external) on Oracle products and contribute to proactive support activities according to the product support strategy and model. Provide expert-level troubleshooting and technical support for Oracle's Big Data Service (BDS), DFS, DIS, Data Catalog, and associated cloud services. Diagnose and resolve complex issues across the Hadoop ecosystem (e.g., HDFS, YARN, Spark, Hive, Impala, Sqoop, Oozie). Manage cluster configurations, upgrades, patches, and installations using tools like Ambari. Support real-time data processing frameworks (Kafka, Flink) and ETL pipelines (ODI, Informatica). Collaborate with OCI platform teams to support secure and scalable AI/ML data workflows. Engage in hands-on support for agentic frameworks (LangChain, Semantic Kernel, CrewAI) and RAG-based systems. Interact regularly with customers, build technical documentation, and contribute to knowledge sharing. Collaborate cross-functionally with product engineering, infrastructure, and cloud ops teams for holistic support delivery.
Qualifications: Bachelor's degree in Computer Science, Engineering, or a related technical field. 5+ years of proven experience supporting Oracle Big Data platforms, including Oracle's Big Data Service (BDS), DFS, DIS, Data Catalog, and Oracle Cloud Infrastructure (OCI). Strong expertise in the Hadoop ecosystem: HDFS, YARN, Spark, Hive, Impala, Sqoop, Oozie, Ranger, Kerberos. Experience in Linux OS administration, networking, TLS/SSL, and SSO integration. Experience with data integration tools (ODI/Informatica) and cloud data sources (Fusion Apps/BICC, Snowflake). Hands-on experience with LLMs, agentic frameworks (LangChain, Semantic Kernel, CrewAI), RAG pipelines, and vector databases (FAISS, Pinecone, Weaviate). Proficiency in Python and shell scripting.
Skills & Competencies: Deep understanding of Oracle's Big Data Service (BDS), Data Flow Service (DFS), Data Integration Service (DIS), and Data Catalog architecture and operations. Cluster administration using Ambari and troubleshooting across the Cloudera stack. Real-time processing using Kafka and Flink. AI/ML workflow support, including OCI Gen AI services and integration of agentic pipelines. Working knowledge of cloud services, networking, system-level security, and distributed architectures. Experience supporting multi-tier enterprise applications.
Personal Competencies: Strong customer focus with the ability to handle escalations and technical deep dives. Structured problem-solving mindset. Self-motivated with a continuous-learning attitude. Excellent communication, documentation, and global collaboration skills. Results-oriented with a passion for service quality and technical excellence.
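As an illustration of the Ambari-based cluster triage this role involves, a hedged sketch querying Ambari's REST API for service states; the host, cluster name, and credentials are placeholders, and the endpoint shape follows Ambari's /api/v1 conventions:

    import requests

    base = "http://ambari-host:8080/api/v1/clusters/prod"  # placeholder host and cluster
    resp = requests.get(
        f"{base}/services?fields=ServiceInfo/state",
        auth=("admin", "admin"),             # placeholder credentials
        headers={"X-Requested-By": "ambari"},
        timeout=10,
    )
    resp.raise_for_status()
    # Flag any service (HDFS, YARN, Hive, ...) that is not in the STARTED state.
    for item in resp.json().get("items", []):
        info = item["ServiceInfo"]
        if info.get("state") != "STARTED":
            print(f"{info['service_name']} is {info.get('state')}")

A support engineer would typically run a check like this before deeper log analysis, to separate whole-service outages from job-level failures.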
Posted 3 days ago