3.0 - 7.0 years
0 Lacs
Punjab
On-site
As a GCP Data Engineer in Australia, you will apply your Google Cloud Platform (GCP) experience across all aspects of data engineering. The role involves data migration projects from legacy SQL and Oracle systems, as well as designing and building ETL pipelines for data lake and data warehouse solutions on GCP. Expertise in GCP data and analytics services is central to the position: you will work with tools such as Cloud Dataflow, Cloud Dataprep, Apache Beam/Cloud Composer, BigQuery, Cloud Data Fusion, Cloud Pub/Sub, Cloud Storage, and Cloud Functions. You will also use the cloud-native GCP CLI/gsutil for operations, and scripting languages such as Python and SQL to improve data processing efficiency. Experience with data governance practices, metadata management, data masking, and encryption is essential; you will use GCP tools such as Cloud Data Catalog and Cloud KMS to ensure data security and compliance. Overall, the role requires a strong foundation in GCP technologies and a proactive approach to data engineering challenges in a dynamic environment.
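A minimal sketch of the kind of batch ETL pipeline described above, written with the Apache Beam Python SDK: read CSV files from Cloud Storage, parse rows, and load them into BigQuery. The project, bucket, table, and schema names are placeholders for illustration, not details from this posting.

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def parse_row(line):
    # Assumes a simple "id,name,amount" CSV layout (illustrative only).
    cols = line.split(",")
    return {"id": int(cols[0]), "name": cols[1], "amount": float(cols[2])}

options = PipelineOptions(
    runner="DataflowRunner",            # use "DirectRunner" for local testing
    project="example-project",          # placeholder project ID
    region="australia-southeast1",
    temp_location="gs://example-bucket/tmp",
)

with beam.Pipeline(options=options) as p:
    (
        p
        | "Read" >> beam.io.ReadFromText("gs://example-bucket/input/*.csv", skip_header_lines=1)
        | "Parse" >> beam.Map(parse_row)
        | "Load" >> beam.io.WriteToBigQuery(
            "example-project:analytics.transactions",
            schema="id:INTEGER,name:STRING,amount:FLOAT",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
        )
    )

Running the same pipeline on the DirectRunner first is a cheap way to validate the parsing and schema before paying for Dataflow workers.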
Posted 1 month ago
10.0 - 20.0 years
10 - 18 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
- Google Cloud Certification: Associate Cloud Engineer or Professional Cloud Architect/Engineer
- Hands-on experience with GCP services (Compute Engine, GKE, Cloud SQL, BigQuery, etc.)
- Strong command of Linux, shell scripting, and networking fundamentals
- Proficiency in Terraform, Cloud Build, Cloud Functions, or other GCP-native tools
- Experience with containers and orchestration – Docker, Kubernetes (GKE)
- Familiarity with monitoring/logging – Cloud Monitoring, Prometheus, Grafana
- Understanding of IAM, VPCs, firewall rules, service accounts, and Cloud Identity
- Excellent written and verbal communication skills
Posted 1 month ago
1.0 - 5.0 years
0 Lacs
Karnataka
On-site
As a hands-on engineer, you will help develop next-generation media server software as part of a core team dedicated to revolutionizing video sharing over the internet. Your role will involve contributing significantly to the development of server-side components, offering a strong learning opportunity in the video streaming space. You should have good hands-on experience with AWS, solid programming skills in C/C++ and Python, and knowledge of AWS services such as Lambda, EFS, auto-scaling, and load balancing. Experience building and provisioning Dockerized applications is highly preferable, along with a good understanding of the HTTP protocol. Familiarity with web servers (Apache, Nginx), databases (MySQL, Redis, MongoDB, Firebase), Python frameworks (Django, Flask), source control (Git), and REST APIs is expected, together with a strong understanding of memory management, file I/O, network I/O, concurrency, and multithreading. Your responsibilities will include working on scalable video deployments, extending the mobile application backend for customer-specific features, maintaining and extending existing components of the media server software, and fostering a multi-paradigm engineering culture within a cross-functional team. To excel in this role, you should bring strong coding skills and experience with Python and cloud functions, at least 1-2 years of experience with AWS services and GitHub, 6 to 12 months of experience with S3 or other storage/CDN services, exposure to NoSQL databases for developing mobile backends, and proficiency with Agile and Jira. A BS or equivalent in Computer Science or Engineering is preferred. If you are ready to take on this opportunity, please send your CV to careers@crunchmediaworks.com.
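As a sketch of the S3/CDN-style plumbing this role touches, the Python snippet below uploads a transcoded video segment to S3 with boto3 and returns a time-limited presigned playback URL. The bucket, key, and region names are assumptions for illustration only.

import boto3

s3 = boto3.client("s3", region_name="ap-south-1")  # placeholder region

def publish_segment(local_path: str, bucket: str, key: str, expires: int = 3600) -> str:
    # Upload the rendered segment; content type helps players and CDNs handle it correctly.
    s3.upload_file(local_path, bucket, key, ExtraArgs={"ContentType": "video/mp4"})
    # A presigned URL lets the mobile backend serve the object without making it public.
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": bucket, "Key": key},
        ExpiresIn=expires,
    )

if __name__ == "__main__":
    url = publish_segment("segment_0001.mp4", "media-segments-demo", "vod/stream42/segment_0001.mp4")
    print(url)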
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
About GlobalLogic
GlobalLogic, a leader in digital product engineering with over 30,000 employees, helps brands worldwide design and develop innovative products, platforms, and digital experiences. By integrating experience design, complex engineering, and data expertise, GlobalLogic helps clients envision possibilities and accelerate their transition into the digital businesses of tomorrow. Operating design studios and engineering centers globally, GlobalLogic extends its deep expertise to customers in industries such as communications, financial services, automotive, healthcare, technology, media, manufacturing, and semiconductor. GlobalLogic is a Hitachi Group Company.

Requirements

Leadership & Strategy
You will lead and mentor a team of cloud engineers, providing technical guidance and support for career development. You will define cloud architecture standards and best practices across the organization and collaborate with senior leadership to develop a cloud strategy and roadmap aligned with business objectives. Your responsibilities include driving technical decision-making for complex cloud infrastructure projects and establishing and maintaining cloud governance frameworks and operational procedures.

Leadership Experience
With a minimum of 3 years in technical leadership roles managing engineering teams, you should have a proven track record of successfully delivering large-scale cloud transformation projects. Experience in budget management and resource planning, along with strong presentation and communication skills for executive-level reporting, is essential for this role.

Certifications (Preferred)
Google Cloud Professional Cloud Architect, Google Cloud Professional Data Engineer, and additional relevant cloud or security certifications.

Technical Excellence
You should have over 10 years of experience designing and implementing enterprise-scale cloud solutions using GCP services. As a technical expert, you will architect and oversee the development of sophisticated cloud solutions using Python and advanced GCP services. You will lead the design and deployment of solutions built on Cloud Functions, Docker containers, Dataflow, and other GCP services. Additionally, you will design complex integrations with multiple data sources and systems, implement security best practices, and troubleshoot and resolve technical issues while establishing preventive measures.

Job Responsibilities

Technical Skills
Expert-level proficiency in Python, with experience in additional languages such as Java, Go, or Scala. Deep knowledge of GCP services such as Dataflow, Compute Engine, BigQuery, Cloud Functions, and others is required, along with advanced knowledge of Docker, Kubernetes, and container orchestration patterns, and experience in cloud security, infrastructure as code, and CI/CD practices.

Cross-functional Collaboration
Collaborating with C-level executives, senior architects, and product leadership to translate business requirements into technical solutions; leading cross-functional project teams; presenting technical recommendations to executive leadership; and establishing relationships with GCP technical account managers are key aspects of this role.

What We Offer
At GlobalLogic, we prioritize a culture of caring, continuous learning and development, interesting and meaningful work, balance and flexibility, and a high-trust organization. Join us to experience an inclusive culture, opportunities for growth and advancement, impactful projects, work-life balance, and a safe, reliable, and ethical global company.

About GlobalLogic
GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner known for creating innovative digital products and experiences since 2000. Collaborating with forward-thinking companies globally, GlobalLogic continues to transform businesses and redefine industries through intelligent products, platforms, and services.
Posted 1 month ago
5.0 - 7.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Job Title: Senior Data Engineer - Big Data, ETL & Java
Experience Level: 5+ Years
Employment Type: Full-time

About the Role
EXL is seeking a Senior Software Engineer with a strong foundation in Java, along with expertise in Big Data technologies and ETL development. In this role, you'll design and implement scalable, high-performance data and backend systems for clients in retail, media, and other data-driven industries. You'll work across cloud platforms such as AWS and GCP to build end-to-end data and application pipelines.

Key Responsibilities
- Design, develop, and maintain scalable data pipelines and ETL workflows using Apache Spark, Apache Airflow, and cloud platforms (AWS/GCP).
- Build and support Java-based backend components, services, or APIs as part of end-to-end data solutions.
- Work with large-scale datasets to support transformation, integration, and real-time analytics.
- Optimize Spark, SQL, and Java processes for performance, scalability, and reliability.
- Collaborate with cross-functional teams to understand business requirements and deliver robust solutions.
- Follow engineering best practices in coding, testing, version control, and deployment.

Required Qualifications
- 5+ years of hands-on experience in software or data engineering.
- Proven experience developing ETL pipelines using Java and Spark.
- Strong programming experience in Java (preferably with frameworks such as Spring or Spring Boot).
- Experience with Big Data tools including Apache Spark and Apache Airflow, and cloud services such as AWS EMR, Glue, S3, Lambda or GCP BigQuery, Dataflow, Cloud Functions.
- Proficiency in SQL and experience with performance tuning for large datasets.
- Familiarity with data modeling, warehousing, and distributed systems.
- Experience working in Agile development environments.
- Strong problem-solving skills and attention to detail.
- Excellent communication skills.

Preferred Qualifications
- Experience building and integrating RESTful APIs or microservices using Java.
- Exposure to data platforms like Snowflake, Databricks, or Kafka.
- Background in retail, merchandising, or media domains is a plus.
- Familiarity with CI/CD pipelines, DevOps tools, and cloud-based development workflows.
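To ground the Spark/Airflow orchestration named in the responsibilities above, here is a minimal Airflow DAG sketch in Python: a daily spark-submit transform followed by a BigQuery load. The DAG id, commands, dataset, and bucket are hypothetical placeholders, not EXL's actual pipeline.

from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_sales_etl",
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",   # cron expression (Airflow 2.4+ keyword; older versions use schedule_interval)
    catchup=False,
) as dag:
    transform = BashOperator(
        task_id="spark_transform",
        bash_command="spark-submit --master yarn jobs/transform_sales.py --date {{ ds }}",
    )
    load = BashOperator(
        task_id="bq_load",
        bash_command="bq load --source_format=PARQUET analytics.sales gs://demo-bucket/sales/{{ ds }}/*.parquet",
    )
    # Load only after the Spark transform for the same logical date succeeds.
    transform >> load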
Posted 1 month ago
10.0 - 18.0 years
12 - 22 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
- Google Cloud Certification: Associate Cloud Engineer or Professional Cloud Architect/Engineer
- Hands-on experience with GCP services (Compute Engine, GKE, Cloud SQL, BigQuery, etc.)
- Strong command of Linux, shell scripting, and networking fundamentals
- Proficiency in Terraform, Cloud Build, Cloud Functions, or other GCP-native tools
- Experience with containers and orchestration – Docker, Kubernetes (GKE)
- Familiarity with monitoring/logging – Cloud Monitoring, Prometheus, Grafana
- Understanding of IAM, VPCs, firewall rules, service accounts, and Cloud Identity
- Excellent written and verbal communication skills
Posted 1 month ago
1.0 - 4.0 years
3 - 6 Lacs
Hyderabad
Work from Office
Develop cross-platform apps using Flutter and Dart, integrate Firebase for authentication and Firestore for data, implement Razorpay QR and RazorpayX Escrow for automated payments, and build secure, scalable, real-time features.
Posted 1 month ago
3.0 - 8.0 years
14 - 20 Lacs
Noida
Work from Office
Location: Noida (Onsite)
Department: Engineering
Employment Type: Full-time

Role Overview
We are looking for a Senior App Developer with at least 3 years of experience building high-quality mobile applications using Flutter. You'll be responsible for designing and delivering scalable, performant, and responsive mobile apps for Android and iOS that support critical event-tech workflows. This is an onsite role based in Noida, ideal for someone who thrives in collaborative product-engineering environments and has a keen eye for user experience.

Key Responsibilities
- Build, maintain, and optimize mobile apps using Flutter & Dart for both Android and iOS platforms
- Implement state management solutions (Provider, Riverpod, Bloc, etc.) to manage shared app logic
- Integrate with REST APIs, Firebase services, and other backend platforms
- Ensure responsive UI across various screen sizes and mobile devices
- Manage authentication workflows including JWT-based login, session handling, and role-based access
- Handle app deployments via Play Console and App Store Connect, including signing, release builds, and updates
- Collaborate closely with product, design, and backend teams to deliver smooth, user-friendly features
- Write clean, maintainable, and well-documented code and participate in peer code reviews

Required Skills & Experience
- 3+ years of mobile app development experience, with at least 2 published Flutter apps
- Strong command of Flutter & Dart, along with state management using Provider
- Experience with Firebase services: Authentication, Firestore/Realtime Database, Cloud Functions, and Push Notifications
- Proficient in REST API integration, responsive UI/UX, and common mobile design patterns
- Familiar with JWT-based authentication flows and secure mobile app practices
- Hands-on experience with deployment processes for Android and iOS platforms
- Solid understanding of Git workflows and working in Agile/Scrum teams

Good to Have
- Experience with Riverpod, Bloc, or GetX for state management
- Knowledge of Clean Architecture or MVVM in Flutter
- Exposure to Firebase Analytics, Crashlytics, and Remote Config
- Familiarity with CI/CD tools (e.g., Codemagic, Bitrise) for automated builds and deployment
- Basic knowledge of native code (Swift/Kotlin) for platform-specific functionality
- Experience with app performance optimization using DevTools, isolates, etc.
- Understanding of unit and widget testing using Flutter's test framework
- Familiarity with design tools like Figma or Zeplin for UI handoff
- Experience integrating features like QR code rendering, PDF generation, or badge scanning systems

Annual CTC: INR 14 to 20 LPA
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
Maharashtra
On-site
As a member of the JM Financial team, you will be part of a culture that values recognition and rewards for the hard work and dedication of its employees. We believe that a motivated workforce is essential for the growth of our organization. Our management team acknowledges and appreciates the efforts of our personnel through promotions, bonuses, awards, and public recognition. By fostering an atmosphere of success, we celebrate achievements such as successful deals, good client ratings, and customer reviews. Nurturing talent is a key focus at JM Financial. We aim to prepare our employees for future leadership roles by creating succession plans and encouraging direct interactions with clients. Knowledge sharing and cross-functional interactions are integral to our business environment, fostering inclusivity and growth opportunities for our team members. Attracting and managing top talent is a priority for JM Financial. We have successfully built a diverse talent pool with expertise, new perspectives, and enthusiasm. Our strong brand presence in the market enables us to leverage the expertise of our business partners to attract the best talent. Trust is fundamental to our organization, binding our programs, people, and clients together. We prioritize transparency, two-way communication, and trust across all levels of the organization. Opportunities for growth and development are abundant at JM Financial. We believe in growing alongside our employees and providing them with opportunities to advance their careers. Our commitment to nurturing talent has led to the appointment of promising employees to leadership positions within the organization. With a focus on employee retention and a supportive environment for skill development, we aim to create a strong future leadership team. Emphasizing teamwork, we value both individual performance and collaborative group efforts. In a fast-paced corporate environment, teamwork is essential for achieving our common vision. By fostering open communication channels and facilitating information sharing, we ensure that every member of our team contributes to delivering value to our clients. As a Java Developer at JM Financial, your responsibilities will include designing, modeling, and building services to support new features and products. You will work on an integrated central platform to power various web applications, developing a robust backend framework and implementing features across different products using a combination of technologies. Researching and implementing new technologies to enhance our services will be a key part of your role. To excel in this position, you should have a BTech Degree in Computer Science or equivalent experience, with at least 3 years of experience building Java-based web applications in Linux/Unix environments. Proficiency in scripting languages such as JavaScript, Ruby, or Python, along with compiled languages like Java or C/C++, is required. Experience with Google Cloud Platform services, knowledge of design methodologies for backend services, and building scalable infrastructure are essential skills for this role. Our technology stack includes JavaScript, Angular, React, NextJS, HTML5/CSS3/Bootstrap, Windows/Linux/OSX Bash, Kookoo telephony, SMS Gupshup, Sendgrid, Optimizely, Mixpanel, Google Analytics, Firebase, Git, Bash, NPM, Browser Dev Console, NoSQL, Google Cloud Datastore, Google Cloud Platform (App Engine, PubSub, Cloud Functions, Bigtable, Cloud Endpoints). 
If you are passionate about technology and innovation, and thrive in a collaborative environment, we welcome you to join our team at JM Financial.
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
As an AI/ML Engineer, you will be responsible for identifying, defining, and delivering AI/ML and GenAI use cases in collaboration with business and technical stakeholders. Your role will involve designing, developing, and deploying models using Google Cloud's Vertex AI platform. You will fine-tune and evaluate Large Language Models (LLMs) for domain-specific applications and ensure responsible AI practices and governance in solution delivery. Collaboration with data engineers and architects is essential to ensure robust and scalable pipelines, and you will document workflows and experiments for reproducibility and handover readiness. Your expertise in supervised, unsupervised, and reinforcement learning will be applied to develop solutions using Vertex AI features including AutoML, Pipelines, Model Registry, and Generative AI Studio. In this role, you will work on GenAI workflows, including prompt engineering, fine-tuning, and model evaluation. Proficiency in Python is required for developing in ML frameworks such as TensorFlow, PyTorch, scikit-learn, and Hugging Face Transformers, and effective communication and collaboration across product, data, and business teams are crucial to project success. The ideal candidate has hands-on experience with Vertex AI on GCP for model training, deployment, endpoint management, and MLOps, and practical knowledge of PaLM, Gemini, or other LLMs via Vertex AI or open-source tools. Proficiency in Python for ML pipeline scripting, data preprocessing, and evaluation is necessary, along with expertise in ML/GenAI libraries such as scikit-learn, TensorFlow, PyTorch, Hugging Face, and LangChain. Experience with CI/CD for ML, containerization using Docker/Kubernetes, and familiarity with GCP services such as BigQuery, Cloud Functions, and Cloud Storage are advantageous, and knowledge of media datasets and real-world ML applications on OTT, DTH, and web platforms will be beneficial. Qualifications required for this position include a Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field, and at least 3 years of hands-on experience in ML/AI or GenAI projects; relevant certifications in ML, GCP, or GenAI technologies are considered a plus.
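A hedged sketch of the Vertex AI GenAI workflow mentioned above: calling a Gemini model for a domain-specific prompt from Python. The project ID, region, model name, and prompt are placeholders, and the vertexai SDK surface changes between releases, so treat the module paths as approximate rather than definitive.

import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="example-project", location="us-central1")  # placeholder project/region

model = GenerativeModel("gemini-1.5-flash")  # assumed model name; substitute the one provisioned for your project
response = model.generate_content(
    "Summarise the viewer drop-off pattern on an OTT catalogue page in two sentences."
)
print(response.text)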
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
Kochi, Kerala
On-site
Beinex is seeking a skilled and motivated Google Cloud Consultant to join our dynamic team. As a Google Cloud Consultant, you will play a pivotal role in helping our clients harness the power of Google Cloud technologies to drive innovation and transformation. If you are passionate about cloud solutions, client collaboration, and cutting-edge technology, we invite you to join our journey.

Responsibilities
- Collaborate with clients to understand their business objectives and technology needs, translating them into effective Google Cloud solutions
- Design, implement, and manage Google Cloud Platform (GCP) architectures, ensuring scalability, security, and performance
- Provide technical expertise and guidance to clients on GCP services, best practices, and cloud-native solutions, and adopt an Infrastructure as Code (IaC) approach to establish an advanced infrastructure for both internal and external stakeholders
- Conduct cloud assessments and create migration strategies for clients looking to transition their applications and workloads to GCP
- Work with cross-functional teams to plan, execute, and optimise cloud migrations, deployments, and upgrades
- Assist clients in optimising their GCP usage by analysing resource utilisation, recommending cost-saving measures, and enhancing overall efficiency
- Collaborate with development teams to integrate cloud-native technologies and solutions into application design and development processes
- Stay updated with the latest trends, features, and updates in the Google Cloud ecosystem and provide thought leadership to clients
- Troubleshoot and resolve technical issues related to GCP services and configurations
- Create and maintain documentation for GCP architectures, solutions, and best practices
- Conduct training sessions and workshops for clients to enhance their understanding of GCP technologies and usage

Key Skills Requirements
- Profound expertise in Google Cloud Platform services, including but not limited to Compute Engine, App Engine, Kubernetes Engine, Cloud Storage, BigQuery, Pub/Sub, Cloud Functions, VPC, IAM, and Cloud Security
- Strong understanding of GCP networking concepts, including VPC peering, firewall rules, VPN, and hybrid cloud configurations
- Experience with Infrastructure as Code (IaC) tools such as Terraform or Google Cloud Deployment Manager
- Hands-on experience with containerisation technologies like Docker and Kubernetes
- Proficiency in scripting languages such as Python and Bash
- Familiarity with cloud monitoring, logging, and observability tools and practices
- Knowledge of DevOps principles and practices, including CI/CD pipelines and automation
- Strong problem-solving skills and the ability to troubleshoot complex technical issues
- Excellent communication skills to interact effectively with clients, team members, and stakeholders
- Previous consulting or client-facing experience is a plus
- Relevant Google Cloud certifications are highly desirable

Perks: Careers at Beinex
- Comprehensive health plans
- Learning and development
- Workation and outdoor training
- Hybrid working environment
- On-site travel opportunity
- Beinex branded merchandise
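As a small illustration of the assessment work described above, the Python sketch below inventories a project's Cloud Storage buckets with their locations and storage classes, the kind of quick audit that feeds a migration or cost review. It assumes application-default credentials and a placeholder project ID.

from google.cloud import storage

client = storage.Client(project="client-project-id")  # placeholder project

# List every bucket with location and storage class; a useful first pass when
# sizing a migration or looking for cost-saving opportunities.
for bucket in client.list_buckets():
    print(f"{bucket.name:40s} {bucket.location:15s} {bucket.storage_class}")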
Posted 1 month ago
1.0 - 5.0 years
0 Lacs
Karnataka
On-site
Capgemini Invent is the digital innovation, consulting, and transformation brand of the Capgemini Group, a global business line that combines market-leading expertise in strategy, technology, data science, and creative design to help CxOs envision and build what's next for their businesses. For this role, you should have developed or worked on at least one Gen AI project and have experience implementing data pipelines with cloud providers such as AWS, Azure, or GCP. You should also be familiar with cloud storage, cloud databases, cloud data warehousing, and data lake solutions such as Snowflake, BigQuery, AWS Redshift, ADLS, and S3, and have a good understanding of cloud compute services, load balancing, identity management, authentication, and authorization in the cloud. Your profile should include good knowledge of infrastructure capacity sizing and costing of cloud services to drive optimized solution architecture, leading to the right balance of infrastructure investment versus performance and scaling, and you should be able to contribute to architectural choices using various cloud services and solution methodologies. Proficiency in programming with Python is required, along with expertise in cloud DevOps practices such as infrastructure as code, CI/CD components, and automated deployments on the cloud; an understanding of networking, security, design principles, and best practices in the cloud is also important. At Capgemini, we value flexible work arrangements to support a healthy work-life balance. You will have opportunities for career growth through various career growth programs and diverse professions tailored to help you explore a world of opportunities, and you can equip yourself with valuable certifications in the latest technologies such as Generative AI. Capgemini is a global business and technology transformation partner with a heritage of over 55 years and a diverse team of 340,000 members in more than 50 countries, working together to accelerate the dual transition to a digital and sustainable world while creating tangible impact for enterprises and society. Trusted by clients to unlock the value of technology, Capgemini delivers end-to-end services and solutions, from strategy and design to engineering, fueled by market-leading capabilities in AI, cloud, and data, combined with deep industry expertise and a strong partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.
Posted 1 month ago
0.0 years
0 Lacs
Pune, Maharashtra, India
On-site
- Work with the team in the capacity of GCP Data Engineer on day-to-day activities
- Solve problems at hand with clarity and speed
- Train and coach other team members
- Ability to turn work around quickly
- Work with data analysts and architects to help them solve specific issues with tooling/processes
- Design, build, and operationalize large-scale enterprise data solutions and applications using one or more GCP data and analytics services in combination with third-party tools - Python/Java/React.js, Airflow, ETL skills, GCP services (BigQuery, Dataflow, Cloud SQL, Cloud Functions, data lake)
- Design and build production data pipelines from ingestion to consumption within a big data architecture
- GCP BigQuery modeling and performance tuning techniques
- RDBMS and NoSQL database experience
- Knowledge of orchestrating workloads on the cloud
- Implement data warehouse and big/small data designs and data lake solutions with strong data quality capabilities
- Understanding and knowledge of deployment strategies and CI/CD.
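Illustrating the BigQuery modeling and tuning work listed above, here is a small Python example that runs a parameterized, date-bounded aggregation; on partitioned tables the date filter keeps the scanned bytes, and therefore cost, down. Project, dataset, and column names are placeholders.

from google.cloud import bigquery

client = bigquery.Client(project="example-project")  # placeholder project

query = """
    SELECT order_date, SUM(amount) AS total_amount
    FROM `example-project.sales.orders`
    WHERE order_date BETWEEN @start AND @end   -- prunes partitions, limits bytes scanned
    GROUP BY order_date
    ORDER BY order_date
"""
job = client.query(
    query,
    job_config=bigquery.QueryJobConfig(
        query_parameters=[
            bigquery.ScalarQueryParameter("start", "DATE", "2024-01-01"),
            bigquery.ScalarQueryParameter("end", "DATE", "2024-01-31"),
        ]
    ),
)
for row in job.result():
    print(row.order_date, row.total_amount)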
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
As a Cloud Engineering Team Leader at GlobalLogic, you will provide technical guidance and career development support to a team of cloud engineers. You will define cloud architecture standards and best practices across the organization, collaborating with senior leadership to develop a cloud strategy aligned with business objectives. Your role will involve driving technical decision-making for complex cloud infrastructure projects and establishing and maintaining cloud governance frameworks and operational procedures. Coming from technical leadership roles managing engineering teams, you should have a proven track record of successfully delivering large-scale cloud transformation projects; experience in budget management and resource planning, as well as strong presentation and communication skills for executive-level reporting, is essential. Preferred certifications include Google Cloud Professional Cloud Architect, Google Cloud Professional Data Engineer, and additional relevant cloud or security certifications. You will leverage your 10+ years of experience designing and implementing enterprise-scale cloud solutions using GCP services to architect sophisticated cloud solutions using Python and advanced GCP services. Leading the design and deployment of solutions built on Cloud Functions, Docker containers, Dataflow, and other GCP services will be part of your responsibilities, as will ensuring optimal performance and scalability of complex integrations with multiple data sources and systems, implementing security best practices and compliance frameworks, and troubleshooting and resolving technical issues. Your technical skills will include expert-level proficiency in Python with experience in additional languages, deep expertise with GCP services such as Dataflow, Compute Engine, BigQuery, Cloud Functions, and others, advanced knowledge of Docker, Kubernetes, and container orchestration patterns, extensive experience in cloud security, proficiency in Infrastructure as Code tools such as Terraform and Cloud Deployment Manager, and CI/CD experience with advanced deployment pipelines and GitOps practices. As part of the GlobalLogic team, you will benefit from a culture of caring, continuous learning and development opportunities, interesting and meaningful work, balance and flexibility in work arrangements, and a high-trust organization. You will have the chance to work on impactful projects, engage with collaborative teammates and supportive leaders, and contribute to shaping cutting-edge solutions in the digital engineering domain.
Posted 1 month ago
2.0 - 6.0 years
0 Lacs
Ahmedabad, Gujarat
On-site
You are an experienced Senior MEAN Stack Developer with 2-4 years of hands-on experience designing, developing, and maintaining scalable web applications. Your expertise lies in MongoDB, Express.js, Angular, and Node.js (the MEAN stack), with strong problem-solving abilities and leadership skills.

Your responsibilities will include designing, developing, and deploying full-stack web applications using the MEAN stack. You will architect and optimize scalable, high-performance web applications, develop RESTful APIs and GraphQL services for seamless integration with frontend applications, and implement authentication and authorization mechanisms such as JWT, OAuth, and role-based access control. Additionally, you will optimize database queries and performance in MongoDB using Mongoose. In this role, you will mentor and guide junior developers, conduct code reviews and technical discussions, and integrate third-party APIs, cloud services, and DevOps solutions for automation and deployment. You will implement CI/CD pipelines, ensure best practices for software development and deployment, troubleshoot complex issues, debug applications, and improve code quality while staying updated with emerging technologies and contributing to continuous improvement of development.

Skills & Qualifications:
- 3-5 years of experience in MEAN stack development
- Strong proficiency in Angular 15+ and frontend optimization techniques
- Advanced knowledge of Node.js and Express.js, including asynchronous programming and event-driven architecture
- Expertise in MongoDB, MySQL & PostgreSQL
- Experience in building microservices-based architectures
- Proficiency in Docker, Kubernetes, CI/CD pipelines
- Proficiency in Git, GitHub, or GitLab for version control
- Experience with message queues (Redis, RabbitMQ, Kafka)
- Understanding of WebSockets, real-time data processing, caching strategies
- Hands-on experience in unit testing, integration testing, TDD
- Strong analytical and debugging skills
- Experience in performance optimization
- Excellent communication and leadership skills

Additional Skills:
- Experience with GraphQL API development
- Familiarity with AWS, Azure, Google Cloud Platform
- Knowledge of serverless architecture and cloud functions
- Knowledge of Next.js, React.js
- Experience with Angular Universal (server-side rendering, SSR)
- Knowledge of Nginx, PM2, load balancing strategies
- Exposure to AI/ML-based applications using Node.js
- Use of AI tools like ChatGPT

(ref:hirist.tech)
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
Hyderabad, Telangana
On-site
NTT DATA is a global company striving to hire exceptional, innovative, and passionate individuals to grow with the organization. If you wish to be part of an inclusive, adaptable, and forward-thinking team, then this opportunity is for you. We are currently looking for a GCP BigQuery Developer to join our team in Hyderabad, Telangana (IN-TG), India (IN). As a Senior Application Developer in GCP, you should have mandatory skills in ETL, Google Cloud Platform BigQuery, SQL, and Linux; experience with Cloud Run and Cloud Functions would be desirable. We are seeking a senior ETL development professional with strong hands-on experience in Linux and SQL. While optional, experience with GCP BigQuery, or at minimum a solid conceptual understanding of it, is preferred. About NTT DATA: NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are dedicated to helping clients innovate, optimize, and transform for long-term success. As a Global Top Employer, we have a diverse team of experts in over 50 countries and a robust partner ecosystem. Our services range from business and technology consulting to data and artificial intelligence solutions, industry-specific services, and the development, implementation, and management of applications, infrastructure, and connectivity. NTT DATA is at the forefront of digital and AI infrastructure globally and is part of the NTT Group, which invests over $3.6 billion annually in R&D to facilitate a confident and sustainable transition into the digital future. Visit us at us.nttdata.com.
Posted 1 month ago
10.0 - 17.0 years
10 - 20 Lacs
Pune, Chennai, Bengaluru
Hybrid
Job Description:

Cloud Infrastructure & Deployment
- Design and implement secure, scalable, and highly available cloud infrastructure on GCP.
- Provision and manage compute, storage, network, and database services.
- Automate infrastructure using Infrastructure as Code (IaC) tools such as Terraform or Deployment Manager.

Architecture & Design
- Translate business requirements into scalable cloud solutions.
- Recommend GCP services aligned with application needs and cost optimization.
- Participate in high-level architecture and solution design discussions.

DevOps & Automation
- Build and maintain CI/CD pipelines (e.g., using Cloud Build, Jenkins, GitLab CI).
- Integrate monitoring, logging, and alerting (e.g., Stackdriver / Cloud Operations Suite).
- Enable autoscaling, load balancing, and zero-downtime deployments.

Security & Compliance
- Ensure compliance with security standards and best practices.

Migration & Optimization
- Support cloud migration projects from on-premise or other cloud providers to GCP.
- Optimize performance, reliability, and cost of GCP workloads.

Documentation & Support
- Maintain technical documentation and architecture diagrams.
- Provide L2/L3 support for GCP-based services and incidents.

Required Skills and Qualifications:
- Google Cloud Certification: Associate Cloud Engineer or Professional Cloud Architect/Engineer
- Hands-on experience with GCP services (Compute Engine, GKE, Cloud SQL, BigQuery, etc.)
- Strong command of Linux, shell scripting, and networking fundamentals
- Proficiency in Terraform, Cloud Build, Cloud Functions, or other GCP-native tools
- Experience with containers and orchestration – Docker, Kubernetes (GKE)
- Familiarity with monitoring/logging – Cloud Monitoring, Prometheus, Grafana
- Understanding of IAM, VPCs, firewall rules, service accounts, and Cloud Identity
- Excellent written and verbal communication skills
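As a small illustration of the provisioning and L2/L3 support work described above, the Python sketch below lists Compute Engine instances in one zone with their machine types and status, the kind of check an operations runbook might script. The project and zone are placeholders, and it assumes the google-cloud-compute client library plus application-default credentials.

from google.cloud import compute_v1

def list_instances(project: str, zone: str) -> None:
    # Print each VM's name, machine type, and status for a quick health sweep.
    client = compute_v1.InstancesClient()
    for instance in client.list(project=project, zone=zone):
        machine_type = instance.machine_type.rsplit("/", 1)[-1]  # keep only the short name
        print(f"{instance.name:30s} {machine_type:20s} {instance.status}")

if __name__ == "__main__":
    list_instances(project="demo-project", zone="asia-south1-a")  # placeholders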
Posted 1 month ago
4.0 - 9.0 years
7 - 17 Lacs
Mumbai, Mumbai Suburban, Mumbai (All Areas)
Hybrid
Job Title: GCP Data Engineer - Senior Associate / Manager
Experience: 4 to 11 years
Location: Mumbai
Notice period: Immediate to 30 days

Job Description
Designing, building, and deploying cloud solutions for enterprise applications, with expertise in cloud platform engineering.
- Expertise in application migration projects, including optimizing technical reliability and improving application performance
- Good understanding of cloud security frameworks and cloud security standards
- Solid knowledge and extensive experience of GCP and its cloud services
- Experience with GCP services such as Compute Engine, Dataproc, Dataflow, BigQuery, Secret Manager, Kubernetes Engine, etc.
- Experience with Google storage products such as Cloud Storage, Persistent Disk, Nearline, Coldline, and Cloud Filestore
- Experience with database products such as Datastore, Cloud SQL, Cloud Spanner, and Cloud Bigtable
- Experience implementing containers using cloud-native container orchestrators in GCP
- Strong cloud programming skills, with experience in API and Cloud Functions development using Python
- Hands-on experience with enterprise config and DevOps tools including Ansible, Bitbucket, Git, Jira, and Confluence
- Strong knowledge of cloud security practices and Cloud IAM policy preparation for GCP
- Knowledge and experience in API development, AI/ML, data lakes, data analytics, and cloud monitoring tools like Stackdriver
- Ability to participate in fast-paced DevOps and system engineering teams within Scrum agile processes
- Understanding of data modelling and data warehousing concepts
- Understand the current application infrastructure and suggest changes to it
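To make the Python Cloud Functions development mentioned above concrete, here is a minimal sketch of an HTTP-triggered function that lands a JSON event in Cloud Storage. The bucket name and payload shape are assumptions for illustration; it uses the functions-framework and google-cloud-storage libraries, with error handling kept deliberately minimal.

import json
import functions_framework
from google.cloud import storage

@functions_framework.http
def ingest_event(request):
    # Parse the incoming JSON body (empty dict if the body is missing or malformed).
    payload = request.get_json(silent=True) or {}
    bucket = storage.Client().bucket("demo-landing-bucket")  # placeholder bucket
    blob = bucket.blob(f"events/{payload.get('id', 'unknown')}.json")
    blob.upload_from_string(json.dumps(payload), content_type="application/json")
    return {"status": "stored", "object": blob.name}, 200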
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
Hyderabad, Telangana
On-site
You will be joining NTT DATA as a GCP BigQuery Developer in Hyderabad, Telangana, India. As a Sr. Application Developer for GCP, your primary responsibilities will draw on your ETL experience and expertise in Google Cloud Platform BigQuery, SQL, and Linux. In addition to these mandatory skills, experience with Cloud Run and Cloud Functions would be beneficial for this role. We are looking for a Senior ETL Developer with a strong hands-on background in Linux and SQL; while hands-on experience with GCP BigQuery is preferred, a solid conceptual understanding is required at a minimum. NTT DATA is a trusted global innovator providing business and technology services to 75% of the Fortune Global 100 companies. As a Global Top Employer, we have experts in over 50 countries and a robust partner ecosystem. Our services cover consulting, data and artificial intelligence, industry solutions, as well as application, infrastructure, and connectivity development, implementation, and management. Join us to be part of our commitment to helping clients innovate, optimize, and transform for long-term success. Visit us at us.nttdata.com to learn more about our contributions to digital and AI infrastructure worldwide.
Posted 1 month ago
1.0 - 4.0 years
2 - 3 Lacs
Bengaluru
Remote
We are hiring a Full Stack Developer with strong exposure to AI tools, APIs, and product development workflows. This role is for someone who can independently design, build, and deploy full-stack applications and also integrate AI-powered components such as RSVP agents, recommendation systems, conversational flows, and automation tools.

Responsibilities
- Build and maintain full-stack web apps using React, Node.js, Python
- Integrate AI/ML APIs like OpenAI, Cohere, LangChain, Pinecone, etc.
- Architect intelligent features using vector databases, RAG pipelines, and custom agent flows
- Work on both frontend + backend and own deployment, testing & CI/CD
- Collaborate closely with product & automation teams

Must-Have Skills
- Strong proficiency in JavaScript/TypeScript, Python, Node.js
- Comfortable with NoSQL, PostgreSQL, Firebase, or Supabase
- Experience with API integrations, automation, and microservices
- Understanding of AI agentic flows, embeddings, and webhooks
- Experience deploying products on Vercel, Render, or similar

Good to Have
- Familiarity with tools like LangChain, LlamaIndex, or OpenAI Assistants
- Working knowledge of Next.js, Tailwind CSS, and prompt engineering

Work Culture
- Fully remote
- Flat structure
- Fast execution environment
- Product-first mindset
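A hedged sketch of the embedding step behind the RAG pipelines mentioned above: embed a user query with the OpenAI Python client, then (elsewhere) search a vector store with the resulting vector. The model name and the RSVP-style query are placeholder assumptions, and client libraries of this kind evolve quickly.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def embed(text: str) -> list[float]:
    # One embedding per call keeps the example simple; batching is usually cheaper.
    response = client.embeddings.create(
        model="text-embedding-3-small",   # assumed model name
        input=text,
    )
    return response.data[0].embedding

query_vector = embed("Which RSVPs are still unconfirmed for Friday's event?")
print(len(query_vector))   # vector dimensionality, used to configure the vector index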
Posted 1 month ago
4.0 - 8.0 years
0 Lacs
Karnataka
On-site
As an iOS Developer at our B2C company, you will be responsible for building the iOS app from scratch. You will architect and develop new flows and features on our iOS app, ensuring the performance, quality, and responsiveness of the application. Your contributions will be vital in designing, architecting, and developing apps that are elegant, efficient, secure, highly available, and maintainable. Working with the team, you will define, design, and ship new features, identifying and correcting bottlenecks and fixing bugs along the way. To excel in this role, you must possess strong iOS fundamentals and have experience with offline storage, threading, and performance tuning. Proficiency in Objective-C/Swift programming, Xcode, and the iOS SDK is crucial. You should also be well-versed in RESTful APIs for connecting iOS applications to back-end services and have expertise in iOS UI design principles, patterns, and best practices. Knowledge of Cloud Firestore, Cloud Functions, Cloud Messaging APIs, and push notifications is essential. You should demonstrate strong ownership, with the ability to own problems end to end, enthusiasm and dedication to building a product used by millions of users, and the problem-solving ability and determination to achieve results. In return, we offer a high pace of learning, the opportunity to build a product from scratch, high autonomy and ownership, and the chance to work with a great and ambitious team on something that truly matters. You will receive a top-of-market salary, meaningful ESOP ownership, and benefits including health insurance, paid sick time, paid time off, and provident fund. This is a full-time, permanent position with day-shift hours from Monday to Friday and a yearly bonus. A Bachelor's degree is preferred, and you should have at least 4 years of experience in iOS development. The work location is in person.
Posted 1 month ago
12.0 - 14.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Summary:
We are seeking an experienced and highly motivated Delivery Lead to spearhead the successful implementation of Google Contact Center AI (CCAI) solutions, with a strong emphasis on CCAI Agent Assist and Dialogflow (ES/CX). This role is critical in bridging the gap between solution design and technical execution. The Delivery Lead will be responsible for leading project teams, managing client relationships, and ensuring the on-time, on-budget, and high-quality delivery of complex conversational AI and agent augmentation projects. You will act as the primary point of contact for project stakeholders, proactively identifying and mitigating risks, and ensuring that strategic objectives are met through technical excellence.

Key Responsibilities:

Project Leadership & Management (60%):
- Lead the full lifecycle of CCAI projects from initiation and planning through execution, monitoring, control, and closure.
- Develop and manage comprehensive project plans, including scope definition, detailed timelines, resource allocation, and budget tracking.
- Serve as the primary client contact for project delivery, establishing strong relationships, managing expectations, and providing regular progress updates.
- Lead and motivate diverse project teams (Solution Architects, NLU Specialists, Engineers, QA Analysts), fostering a collaborative and high-performing environment.
- Proactively identify, assess, and mitigate project risks and issues, implementing contingency plans to ensure successful outcomes.
- Manage project scope changes effectively, ensuring proper documentation and communication to all stakeholders.
- Conduct regular internal and external project review meetings, preparing and presenting status reports to senior management and clients.
- Ensure projects adhere to defined quality standards, best practices, and governance frameworks (e.g., Agile/Scrum).

Technical Oversight & Quality Assurance (30%):
- Understand and validate the technical solution architecture for CCAI Agent Assist and Dialogflow, ensuring it aligns with client requirements and business objectives.
- Provide technical guidance and oversight to the engineering and development teams, ensuring adherence to design specifications and best practices for NLU and conversational AI.
- Specifically oversee the implementation of Dialogflow agents (intents, entities, flows, fulfillment logic) and CCAI Agent Assist features (real-time knowledge base integration, smart reply suggestions, sentiment analysis, script nudges).
- Ensure seamless integration of CCAI solutions with existing contact center platforms (e.g., Genesys, Twilio, Salesforce Service Cloud, Zendesk) and enterprise systems.
- Work closely with QA to define comprehensive testing strategies (unit, integration, UAT, performance) for conversational AI flows and agent assistance capabilities.
- Facilitate technical problem-solving during project execution, collaborating with architects and engineers to overcome complex challenges.
- Ensure solutions are built for scalability, security, reliability, and maintainability.

Stakeholder Management & Communication (10%):
- Translate technical concepts and project updates into clear, concise language for non-technical stakeholders and business leadership.
- Negotiate and resolve conflicts effectively, maintaining positive client relationships.
- Collaborate with pre-sales teams to refine project scope and estimates during the planning phase.
- Facilitate knowledge transfer and training for client teams post-deployment, ensuring successful adoption and ongoing support.

Required Qualifications:
- Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field.
- 12+ years of experience in technical project management, delivery leadership, or a similar client-facing role.
- 3-5+ years of demonstrable experience leading the delivery of Google Cloud-based AI solutions, with specific hands-on project experience involving Google CCAI Agent Assist (critical) and Google Dialogflow ES and/or CX (critical).
- Strong understanding of conversational AI principles, NLU, and contact center operations.
- Proven experience managing complex projects with cross-functional technical teams.
- Familiarity with core Google Cloud Platform (GCP) services relevant to AI deployments (e.g., Cloud Functions, BigQuery, Pub/Sub).
- Experience with Agile/Scrum methodologies and tools (e.g., Jira, Confluence).
- Exceptional leadership, communication, interpersonal, and presentation skills (both written and verbal).
- Strong analytical, problem-solving, and negotiation abilities.
- Proven ability to manage multiple projects concurrently and adapt to changing priorities.

Preferred Qualifications:
- Master's degree or PMP/Agile certification (CSM, PMI-ACP).
- Google Cloud Certification (e.g., Professional Cloud Architect, Professional Collaboration Engineer).
- Hands-on experience with contact center platforms beyond CCAI (e.g., Genesys, Avaya, Cisco, Five9).
- Experience with other Google AI services (e.g., Speech-to-Text, Text-to-Speech, Vertex AI, Gemini models) and understanding of their integration potential.
- Technical background in software development (e.g., Python, Node.js) to understand implementation complexities.
- Experience in pre-sales activities, including solution scoping and effort estimation.
- Understanding of data privacy and security best practices in a contact center environment.
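As a concrete touchpoint for the Dialogflow delivery work described above, the Python sketch below sends a single detect-intent request to a Dialogflow ES agent, the sort of call a delivery team scripts during UAT to verify intent matching and fulfillment text. The project ID, session ID, and sample utterance are placeholders.

from google.cloud import dialogflow_v2 as dialogflow

def detect_intent(project_id: str, session_id: str, text: str, language_code: str = "en-US"):
    client = dialogflow.SessionsClient()
    session = client.session_path(project_id, session_id)
    query_input = dialogflow.QueryInput(
        text=dialogflow.TextInput(text=text, language_code=language_code)
    )
    response = client.detect_intent(request={"session": session, "query_input": query_input})
    result = response.query_result
    return result.intent.display_name, result.fulfillment_text

# Placeholder project and session identifiers; the utterance is a sample test case.
intent, reply = detect_intent("my-gcp-project", "uat-session-001", "I want to change my billing address")
print(intent, "->", reply)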
Posted 1 month ago
7.0 - 12.0 years
27 - 35 Lacs
Bengaluru
Work from Office
Job Overview
We are hiring a seasoned Site Reliability Engineer with strong experience in building and operating scalable systems on Google Cloud Platform (GCP). You will be responsible for ensuring system availability, performance, and security in a complex microservices ecosystem, while collaborating cross-functionally to improve infrastructure reliability and developer velocity.

Key Responsibilities
- Design and maintain highly available, fault-tolerant systems on GCP using SRE best practices.
- Implement SLIs/SLOs, monitor error budgets, and lead post-incident reviews with RCA documentation.
- Automate infrastructure provisioning (Terraform/Deployment Manager) and CI/CD workflows.
- Operate and optimize Kubernetes (GKE) clusters including autoscaling, resource tuning, and HPA policies.
- Integrate observability across microservices using Prometheus, Grafana, Stackdriver, and OpenTelemetry.
- Manage and fine-tune databases (MySQL/Postgres/BigQuery/Firestore) for performance and cost.
- Improve API reliability and performance through Apigee (proxy tuning, quota/policy handling, caching).
- Drive container best practices including image optimization, vulnerability scanning, and registry hygiene.
- Participate in on-call rotations, capacity planning, and infrastructure cost reviews.

Must-Have Skills
- Minimum 8 years of total experience, with at least 3 years in SRE, DevOps, or Platform Engineering roles.
- Strong expertise in GCP services (GKE, IAM, Cloud Run, Cloud Functions, Pub/Sub, VPC, Monitoring).
- Advanced Kubernetes knowledge: pod orchestration, secrets management, liveness/readiness probes.
- Experience in writing automation tools/scripts in Python, Bash, or Go.
- Solid understanding of incident response frameworks and runbook development.
- CI/CD expertise with GitHub Actions, Cloud Build, or similar tools.

Good to Have
- Apigee hands-on experience: API proxy lifecycle, policies, debugging, and analytics.
- Database optimization: index tuning, slow query analysis, horizontal/vertical sharding.
- Distributed monitoring and tracing: familiarity with Jaeger, Zipkin, or GCP Trace.
- Service Mesh (Istio/Linkerd) and secure workload identity configurations.
- Exposure to BCP/DR planning, infrastructure threat modeling, and compliance (ISO/SOC2).

Educational & Certification Requirements
- B.Tech / M.Tech / MCA in Computer Science or equivalent.
- GCP Professional Cl
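To illustrate the error-budget bookkeeping behind the SLO responsibilities above, here is a tiny Python calculation for a 99.9% availability target over a 30-day window; the 12-minute downtime figure is a made-up example, not a real measurement.

# Back-of-the-envelope error-budget math for a 99.9% availability SLO.
slo_target = 0.999
window_minutes = 30 * 24 * 60            # 43,200 minutes in a 30-day window

error_budget_minutes = window_minutes * (1 - slo_target)
print(f"Allowed downtime: {error_budget_minutes:.1f} minutes")   # ~43.2 minutes

# If monitoring shows 12 minutes of downtime so far this window:
consumed = 12 / error_budget_minutes
print(f"Budget consumed: {consumed:.0%}")                        # ~28%

Teams typically gate risky releases once consumption crosses an agreed threshold of the remaining budget.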
Posted 1 month ago
10.0 - 15.0 years
12 - 22 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
Job Description:

Cloud Infrastructure & Deployment
- Design and implement secure, scalable, and highly available cloud infrastructure on GCP.
- Provision and manage compute, storage, network, and database services.
- Automate infrastructure using Infrastructure as Code (IaC) tools such as Terraform or Deployment Manager.

Architecture & Design
- Translate business requirements into scalable cloud solutions.
- Recommend GCP services aligned with application needs and cost optimization.
- Participate in high-level architecture and solution design discussions.

DevOps & Automation
- Build and maintain CI/CD pipelines (e.g., using Cloud Build, Jenkins, GitLab CI).
- Integrate monitoring, logging, and alerting (e.g., Stackdriver / Cloud Operations Suite).
- Enable autoscaling, load balancing, and zero-downtime deployments.

Security & Compliance
- Ensure compliance with security standards and best practices.

Migration & Optimization
- Support cloud migration projects from on-premise or other cloud providers to GCP.
- Optimize performance, reliability, and cost of GCP workloads.

Documentation & Support
- Maintain technical documentation and architecture diagrams.
- Provide L2/L3 support for GCP-based services and incidents.

Required Skills and Qualifications:
- Google Cloud Certification: Associate Cloud Engineer or Professional Cloud Architect/Engineer
- Hands-on experience with GCP services (Compute Engine, GKE, Cloud SQL, BigQuery, etc.)
- Strong command of Linux, shell scripting, and networking fundamentals
- Proficiency in Terraform, Cloud Build, Cloud Functions, or other GCP-native tools
- Experience with containers and orchestration – Docker, Kubernetes (GKE)
- Familiarity with monitoring/logging – Cloud Monitoring, Prometheus, Grafana
- Understanding of IAM, VPCs, firewall rules, service accounts, and Cloud Identity
- Excellent written and verbal communication skills
Posted 1 month ago
10.0 - 12.0 years
12 - 13 Lacs
Pune
Remote
Job Description:

Cloud Infrastructure & Deployment
- Design and implement secure, scalable, and highly available cloud infrastructure on GCP.
- Provision and manage compute, storage, network, and database services.
- Automate infrastructure using Infrastructure as Code (IaC) tools such as Terraform or Deployment Manager.

Architecture & Design
- Translate business requirements into scalable cloud solutions.
- Recommend GCP services aligned with application needs and cost optimization.
- Participate in high-level architecture and solution design discussions.
Posted 1 month ago