8.0 - 10.0 years
10 - 12 Lacs
Bengaluru
Work from Office
Senior Data Engineer (Databricks, PySpark, SQL, Cloud Data Platforms, Data Pipelines)

Job Summary: Synechron is seeking a highly skilled and experienced Data Engineer to join our innovative analytics team in Bangalore. The primary purpose of this role is to design, develop, and maintain scalable data pipelines and architectures that empower data-driven decision making and advanced analytics initiatives. As a critical contributor within our data ecosystem, you will enable the organization to harness large, complex datasets efficiently, supporting strategic business objectives and ensuring high standards of data quality, security, and performance. Your expertise will directly contribute to building robust, efficient, and secure data solutions that drive business value across multiple domains.

Required Software & Tools:
- Databricks Platform (hands-on experience with Databricks notebooks, clusters, and workflows)
- PySpark (proficient in developing and optimizing Spark jobs)
- SQL (advanced proficiency in writing and optimizing complex SQL queries)
- Data orchestration tools such as Apache Airflow or similar (experience scheduling and managing data workflows)
- Cloud data platforms (experience with cloud environments such as AWS, Azure, or Google Cloud)
- Data warehousing solutions (Snowflake highly preferred)

Preferred Software & Tools:
- Kafka or other streaming frameworks (e.g., Confluent, MQTT)
- CI/CD tools for data pipelines (e.g., Jenkins, GitLab CI)
- DevOps practices for data workflows

Programming Languages: Python (expert level); familiarity with other languages like Java or Scala is advantageous.

Overall Responsibilities:
- Architect, develop, and maintain scalable, resilient data pipelines and architectures supporting business analytics, reporting, and data science use cases.
- Collaborate closely with data scientists, analysts, and cross-functional teams to gather requirements and deliver optimized data solutions aligned with organizational goals.
- Ensure data quality, consistency, and security across all data workflows, adhering to best practices and compliance standards.
- Optimize data processes for enhanced performance, reliability, and cost efficiency.
- Integrate data from multiple sources, including cloud data services and streaming platforms, ensuring seamless data flow and transformation.
- Lead efforts in performance tuning and troubleshooting data pipelines to resolve bottlenecks and improve throughput.
- Stay up-to-date with emerging data engineering technologies and contribute to continuous improvement initiatives within the team.

Technical Skills (By Category):
- Programming Languages: Essential: Python, SQL. Preferred: Scala, Java.
- Databases/Data Management: Essential: data modeling, ETL/ELT processes, data warehousing (Snowflake experience highly preferred). Preferred: NoSQL databases, Hadoop ecosystem.
- Cloud Technologies: Essential: experience with cloud data services (AWS, Azure, GCP) and deployment of data pipelines in cloud environments. Preferred: cloud-native data tools and architecture design.
- Frameworks and Libraries: Essential: PySpark, Spark SQL, Kafka, Airflow. Preferred: streaming frameworks, TensorFlow (for data prep).
- Development Tools and Methodologies: Essential: version control (Git), CI/CD pipelines, Agile methodologies. Preferred: DevOps practices in data engineering, containerization (Docker, Kubernetes).
- Security Protocols: familiarity with data security, encryption standards, and compliance best practices.

Experience:
- Minimum of 8 years of professional experience in Data Engineering or related roles
- Proven track record of designing and deploying large-scale data pipelines using Databricks, PySpark, and SQL
- Practical experience in data modeling, data warehousing, and ETL/ELT workflows
- Experience working with cloud data platforms and streaming data frameworks such as Kafka or equivalent
- Demonstrated ability to work with cross-functional teams, translating business needs into technical solutions
- Experience with data orchestration and automation tools is highly valued
- Prior experience implementing CI/CD pipelines or DevOps practices for data workflows (preferred)

Day-to-Day Activities:
- Design, develop, and troubleshoot data pipelines for ingestion, transformation, and storage of large datasets
- Collaborate with data scientists and analysts to understand data requirements and optimize existing pipelines
- Automate data workflows and improve pipeline efficiency through performance tuning and best practices
- Conduct data quality audits and ensure data security protocols are followed
- Manage and monitor data workflows, troubleshoot failures, and implement fixes proactively
- Contribute to documentation, code reviews, and knowledge sharing within the team
- Stay informed of evolving data engineering tools, techniques, and industry best practices, incorporating them into daily work processes

Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field
- Relevant certifications such as Databricks Certified Data Engineer, AWS Certified Data Analytics, or equivalent (preferred)
- Continuous learning through courses, workshops, or industry conferences on data engineering and cloud technologies

Professional Competencies:
- Strong analytical and problem-solving skills with a focus on scalable solutions
- Excellent communication skills to effectively collaborate with technical and non-technical stakeholders
- Ability to prioritize tasks, manage time effectively, and deliver within tight deadlines
- Demonstrated leadership in guiding team members and driving project success
- Adaptability to evolving technological landscapes and innovative thinking
- Commitment to data privacy, security, and ethical handling of information
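The data-quality-audit duties this posting describes can be sketched in miniature. Below is a pure-Python illustration of a null-rate quality gate of the kind a pipeline might run before loading a batch; all names and the 10% threshold are illustrative assumptions, not taken from the posting (in Databricks, the same checks would be expressed over Spark DataFrames rather than lists of dicts).

```python
# Sketch of a data-quality gate run before loading a batch into the
# warehouse. Pure Python for illustration; names and threshold are made up.

MAX_NULL_RATE = 0.1  # fail the batch if >10% of a required column is null

def check_null_rates(rows, required_columns):
    """Return a dict of column -> null rate for the required columns."""
    if not rows:
        return {col: 0.0 for col in required_columns}
    rates = {}
    for col in required_columns:
        nulls = sum(1 for r in rows if r.get(col) is None)
        rates[col] = nulls / len(rows)
    return rates

def batch_passes(rows, required_columns, max_rate=MAX_NULL_RATE):
    """True when every required column's null rate is within tolerance."""
    rates = check_null_rates(rows, required_columns)
    return all(rate <= max_rate for rate in rates.values())

batch = [
    {"order_id": 1, "amount": 120.0},
    {"order_id": 2, "amount": None},
    {"order_id": 3, "amount": 75.5},
]
print(batch_passes(batch, ["order_id"]))  # order_id fully populated
print(batch_passes(batch, ["amount"]))    # one null in three rows exceeds 10%
```

In a real pipeline a failing gate would typically quarantine the batch and raise an alert rather than just return False.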
Posted 1 day ago
2.0 years
1 Lacs
Coimbatore, Tamil Nadu, India
On-site
We are an early-stage startup using AI to revolutionize the recruitment landscape. We are transforming the recruitment process by adopting an AI-driven approach and candidate-centricity. Our AI platform empowers candidates to refine their interview skills and improve success rates with intelligent feedback from AI-powered mock interviews. It enables recruiters to conduct interviews more efficiently and at a lower cost with an AI Interview assistant that facilitates smarter interviews, offering deeper insights for better decision-making.

Job Description: Woyage.AI is seeking a Software Engineer QA (Stipend Only Initially) to quality-test our AI-powered recruitment platform with automation. The ideal candidate will work with senior members of the engineering team to test manually and implement automated tests for the web, API, and mobile platforms in a very agile, fun, and exciting environment. This position reports directly to the Chief Technology Officer.

Roles & Responsibilities:
- Write and maintain test plans, test strategies, test cycles, and test cases for functional, regression, performance, and integration testing.
- Design, develop, and execute automated test scripts for Web and API.
- Partner with product, UI/UX, and engineering teams to drive QA initiatives from planning through product release.
- Effectively use tools like spreadsheets for test-case management, Jira for bug tracking, and Confluence for documentation.

Job Requirements:
- Bachelor's degree in Computer Science or equivalent coursework.
- 2+ years of experience in automation QA engineering testing API, web, and mobile applications.
- Knowledge/experience in automation test frameworks: PyTest, Cypress, SuperTest, or similar.
- Knowledge/experience in Python/JavaScript.
- Knowledge/experience in test tools such as Postman.
- Knowledge/experience in testing methodologies for all types of testing.
- Knowledge/experience in Scrum/Agile.
- Knowledge/experience in collaboration and development tools (Git, Slack, Confluence, Jira).
- Knowledge/experience in cloud (AWS, GCP) and AI services is a plus.

Type: Full time. 6-month stipend, after which the role will transition into a full-time position based on both organizational performance and individual contributions. The timeline for this decision will depend on revenue growth or the successful completion of the next funding round. In person, 5 days, Coimbatore facility.

Compensation: Stipend of Rs 10,000/month for the initial 6 months. Equity (stocks) will be assigned after 6 months based on individual performance. Full-time compensation will be provided after generating revenue or securing funding through pre-seed or seed rounds, which are expected to happen between 6 and 9 months.
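The API-test automation this role calls for can be sketched as a minimal PyTest-style schema check. No live endpoint is called here: the canned dict stands in for a JSON body an interview-feedback API might return, and every field name is hypothetical.

```python
# PyTest-style sketch of an API response-schema check. The response body is
# canned (no network call); all field names are hypothetical examples.

REQUIRED_FIELDS = {"candidate_id": int, "score": float, "feedback": str}

def validate_response(body: dict) -> list:
    """Return a list of validation errors; an empty list means the body passes."""
    errors = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in body:
            errors.append(f"missing field: {field}")
        elif not isinstance(body[field], expected_type):
            errors.append(f"wrong type for {field}")
    return errors

def test_valid_body():
    body = {"candidate_id": 42, "score": 8.5, "feedback": "Strong answers"}
    assert validate_response(body) == []

def test_missing_field():
    body = {"candidate_id": 42, "score": 8.5}
    assert validate_response(body) == ["missing field: feedback"]

# The test functions run under `pytest`, or directly:
test_valid_body()
test_missing_field()
print("all checks passed")
```

In a real suite the canned body would be replaced by a request to the API under test, with the same schema assertions applied to the parsed response.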
Posted 1 day ago
0.0 - 1.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Company Description: Nxcar is transforming used car transactions to be as transparent and delightful as buying a new car. We offer free listings for individuals and dealers, used car loans, extended warranty, RC transfer, and other used car-specific services. We inspect and digitize vehicle inventory at Nxcar premium dealerships and provide a comprehensive CarScope so that you can buy a pre-owned vehicle with confidence. Nxcar is building a next-generation vehicle listing platform with integrated AI models to provide real-time insights and recommendations to customers for each vehicle as they browse.

Job Overview: We are seeking an enthusiastic AI Application Engineer to join our innovative team in the newly created department, AI Studio. This role is crucial for building and integrating advanced AI/ML solutions into our existing systems. We are looking for a self-motivated individual with hands-on experience in developing AI/ML models that deliver actionable outputs. You should be comfortable experimenting with new technologies and methodologies in a fast-paced environment. The role comes with high growth opportunities, offering the potential for career advancement as you contribute to building cutting-edge AI solutions.

Key Responsibilities:
- Integrate new technologies and AI-driven solutions to enhance our products and services.
- Work closely with cross-functional teams to implement AI models that are scalable, reliable, and efficient.
- Continuously learn and apply the latest AI/ML trends and tools, experimenting with new ideas and technologies.
- Write clean, efficient, and maintainable code for machine learning models and algorithms.
- Test and evaluate model performance, fine-tuning as necessary for optimal outcomes.
- Contribute to the creation of the AI Studio department, bringing in fresh ideas and collaborating on team initiatives.
- Provide insights and guidance based on data analysis to improve the model-building process.
- Develop and deploy AI/ML models that address real-world problems and deliver measurable outcomes.
- Build LLM-based projects from scratch.

Skills & Qualifications:
- 0-1 years of hands-on experience in AI/ML model development and deployment.
- Graduation from a Tier 1 engineering college.
- Proficiency in Python, R, or other relevant programming languages.
- Experience with popular AI/ML frameworks like TensorFlow, PyTorch, Scikit-learn, or similar.
- Strong problem-solving skills with the ability to understand business requirements and translate them into technical solutions.
- Basics of LLMs, LangChain, LangGraph, and similar agentic AI tools, plus familiarity with Flask/FastAPI.
- Ability to work in a fast-paced environment and willingness to learn and adapt to new technologies.
- Strong coding skills and experience in software engineering best practices.
- Familiarity with cloud platforms (AWS, GCP, Azure) for deploying AI models is a plus.
- A proactive mindset with a passion for innovation and creating impactful solutions.

Location: Gurgaon, Sector 42 (in office)
Posted 1 day ago
6.0 years
0 Lacs
Greater Kolkata Area
On-site
Description

Job Functions:
- Salesforce security and compliance expert for customers and prospects.
- Understand our business and the problems we are trying to solve, deeply, when it comes to our core security services.
- Support the sales and pre-sales teams in responding to customer risk and security questionnaires and queries.
- Build customer trust through managing and hosting in-person customer/prospect security meetings.
- Be the Salesforce field expert for the Salesforce trust story covering security, architecture, reliability, performance, privacy, and compliance.
- Interface with Product Management and Security teams to ensure all the latest security features and capabilities are properly represented in customer responses.
- Collaborate with the Salesforce Legal, Privacy, and other teams on customer-specific contract requirements.
- Interface with Salesforce security engineering and product management teams.
- Ensure teams are aware of gaps in our security/compliance capabilities that are impacting customers and prospects.
- Ensure field sales, services, and partner teams are consistently enabled with the latest and best positioning around Salesforce security and compliance.
- Gather customer security/compliance requests, and liaise with Salesforce product managers to maintain a security product roadmap.
- Provide input and assist in developing compliance-related documentation: white papers, standard questionnaires, security best practices, etc.
- Develop SME capabilities for selected Salesforce services and work with the product teams and global SMEs within the team to stay updated on the latest developments.
- Support drafting white papers and security collateral.

Desired Qualifications:
- Bachelor's degree with 6+ years of experience in information security, governance, and compliance.
- Experience with cloud platforms like AWS, GCP, and Azure, with an understanding of their architectural and security nuances.
- Excellent cross-functional collaboration and communication skills across product, security, marketing, field sales, and more.
- Excellent communication and presentation skills.

Desired Skills and Experience:
- Familiarity with one or more security and regulatory frameworks: NIST 800-53, NIST Cybersecurity Framework, PCI-DSS, ISO 27001, ISO 27017, ISO 27018, CSA, Monetary Authority of Singapore (MAS) Outsourcing Guidelines and TRM, personal data protection laws in Singapore, Malaysia, Thailand, Indonesia, Vietnam, etc., BNM Outsourcing Guidelines and Risk Management in IT (RMiT), etc.
- Managed one or more compliance certifications/audits, either as an auditor or responder (PCI-DSS, ISO 27001, SOC-1/2, IRAP/ISMS, MTCS, etc.).
- Experience with completing customer security/compliance questionnaires.
- Familiarity with Data Protection Laws in Australia.
- Experience interpreting the intent of specific customer questions and mapping them to industry-standard controls.
- Familiarity with public cloud architectures, security practices, and compliance documentation.
- Experience working in the Financial Services, Insurance, Banking, Superannuation, or Telecommunication services industries.
- Strong team player.

About Salesforce: Salesforce, the Customer Success Platform and world's #1 CRM, empowers companies to connect with their customers in a whole new way. We are the fastest growing of the top 10 enterprise software companies, the World's Most Innovative Company according to Forbes, and one of Fortune's 100 Best Companies to Work For, six years running. The growth, innovation, and Aloha spirit of Salesforce are driven by our incredible employees who thrive on delivering success for our customers while also finding time to give back through our 1/1/1 model, which leverages 1% of our time, equity, and product to improve communities around the world. Salesforce is a team sport, and we play to win. Join us!
Posted 1 day ago
10.0 years
0 Lacs
India
Remote
Position: Senior Java Developer
Experience: 10+ Years
Location: Remote

Description: We are looking for an experienced Java Developer with a strong background in designing and building scalable backend systems and APIs. The ideal candidate should have deep expertise in core Java, Spring Boot, microservices, and cloud platforms.

Key Skills:
- Expertise in core Java, Java 11+
- Strong experience with Spring Boot, REST APIs, microservices
- Proficient in SQL/NoSQL, JPA/Hibernate
- Experience with CI/CD, version control (Git), and testing frameworks
- Familiarity with Docker, Kubernetes, and cloud platforms (AWS/GCP/Azure)
- Good understanding of design patterns and scalable architecture

Responsibilities:
- Design and develop robust, scalable Java applications
- Collaborate with cross-functional teams on architecture and design
- Write clean, testable, maintainable code
- Optimize performance and troubleshoot issues
- Participate in agile development and code reviews
Posted 1 day ago
6.0 - 11.0 years
18 - 22 Lacs
Chennai, Trivandrum
Work from Office
Job Description:

Key Responsibilities:
- Design and develop machine learning and deep learning systems using frameworks like TensorFlow, PyTorch, and Scikit-learn.
- Implement real-time systems for forecasting, optimization, and personalized recommendations.
- Research and apply advanced AI/ML techniques, including Generative AI models (e.g., GPT, LLaMA) and NLP architectures.
- Lead end-to-end machine learning projects, from problem framing to deployment and monitoring.
- Manage model lifecycle activities, including training, testing, retraining, and fine-tuning for optimal performance.
- Collaborate with analytics teams, practice leads, subject matter experts, and customer teams to understand business objectives and technical requirements.
- Conduct statistical analysis to address business challenges and drive innovation in AI/ML applications.
- Deploy machine learning applications on cloud platforms (e.g., GCP, Azure, AWS) and ensure seamless integration with existing systems.
- Monitor system performance, address data drift, and ensure scalability and adaptability to evolving requirements.

Qualifications:
- Minimum of 7 years' experience for Principal Data Scientist roles (3+ years for Data Scientist roles), with proven expertise in AI/ML, particularly in Generative AI, NLP, and computer vision.
- Proficient in Python, Java, and R.
- Strong expertise in TensorFlow, PyTorch, Scikit-learn, Django, Flask, NumPy, and Pandas.
- Comprehensive knowledge of data structures, SQL, multi-threading, and APIs (RESTful, OData).
- Hands-on experience with Tableau and Power BI for data visualization.
- Strong understanding of statistics, probability, algorithms, and optimization techniques.
- Experience deploying ML applications on GCP, Azure, or AWS.
- Familiarity with ERP/SAP integration, post-production support, and lifecycle management of ML systems.
- Industry experience in retail, telecom, or supply chain optimization is a plus.
- Exceptional problem-solving skills, strong leadership capabilities, and excellent communication skills.
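The data-drift monitoring this role mentions can be sketched with a simple mean-shift heuristic: flag drift when a feature's recent mean moves more than a set number of baseline standard deviations. This is an illustrative toy (names, data, and the 2-sigma threshold are assumptions); production systems typically use statistical tests such as Kolmogorov-Smirnov or the Population Stability Index.

```python
import statistics

# Toy drift check: compare a feature's recent mean against its training
# baseline and flag drift when the shift exceeds max_sigma baseline
# standard deviations. Threshold and data are illustrative only.

def drifted(baseline, recent, max_sigma=2.0):
    """True when the recent mean shifts beyond max_sigma baseline stdevs."""
    base_mean = statistics.mean(baseline)
    base_sd = statistics.stdev(baseline)
    shift = abs(statistics.mean(recent) - base_mean)
    return shift > max_sigma * base_sd

baseline = [10.0, 11.0, 9.5, 10.5, 10.2, 9.8]
print(drifted(baseline, [10.1, 10.3, 9.9]))   # small shift, within tolerance
print(drifted(baseline, [14.0, 15.2, 14.8]))  # large shift, flagged
```

A monitoring job would run a check like this per feature on each scoring batch and trigger retraining or an alert when drift is flagged.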
Posted 1 day ago
5.0 - 8.0 years
7 - 10 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
We are looking for a highly skilled Java Engineer to join our dynamic team supporting AMD onsite in Hyderabad. This role is ideal for someone with strong backend development experience and a deep understanding of Java technologies, microservices, cloud, and CI/CD pipelines.

Responsibilities:
- Lead multidimensional projects involving cross-functional teams.
- Provide architectural input and lead code reviews.
- Resolve complex engineering issues and collaborate across teams.
- Design and develop software using Agile methodology.
- Contribute to detailed design documents and best practices.
- Mentor junior engineers and uplift technical standards.
- Apply secure coding practices and integrate cloud-based solutions.

Primary Skills:
- Java (v11+)
- Spring Boot, Spring MVC, Spring Data, Spring Security
- Microservices & REST APIs
- Docker & Kubernetes
- CI/CD (Jenkins, GitHub Actions, GitLab CI/CD)
- SQL & RDBMS (PostgreSQL, MySQL, Oracle)
- Cloud Platforms: AWS, GCP, or Azure
- Messaging Systems: Kafka or RabbitMQ (plus)
- Financial Systems Integration: Anaplan, Oracle Financials (plus)

Qualifications:
- 5+ years of backend development experience
- Strong problem-solving and communication skills
- Bachelor's/Master's in Computer Science or equivalent
Posted 1 day ago
12.0 - 13.0 years
14 - 19 Lacs
Bengaluru
Work from Office
Required Skills:
- Proven experience as a Node.js Developer or similar role.
- Strong proficiency in JavaScript/TypeScript and experience with modern JS frameworks (React, Angular, etc.).
- Solid understanding of web application architecture and RESTful APIs.
- Experience with databases like MongoDB, MySQL, PostgreSQL, etc.
- Familiarity with cloud platforms such as AWS, Azure, or Google Cloud.
- Excellent communication and leadership skills.
- Experience with containerization and orchestration tools (Docker, Kubernetes).
- Knowledge of CI/CD pipelines and automated testing.
- Bachelor's degree in Computer Science, Engineering, or a related field (Master's preferred).
Posted 1 day ago
5.0 - 8.0 years
0 Lacs
Chennai
Work from Office
Overview: TekWissen is a global workforce management provider operating throughout India and many other countries in the world. The client below is a global company with shared ideals and a deep sense of family. From our earliest days as a pioneer of modern transportation, we have sought to make the world a better place, one that benefits lives, communities and the planet.

Job Title: Software Engineer Senior
Location: Chennai
Work Type: Onsite

Position Description: As a Software Engineer on our team, you will be instrumental in developing and maintaining key features for our applications. You'll be involved in all stages of the software development lifecycle, from design and implementation to testing and deployment.

Responsibilities:
- Develop and Maintain Application Features: Implement new features and maintain existing functionality for both the front-end and back-end of our applications.
- Front-End Development: Build user interfaces using React or Angular, ensuring a seamless and engaging user experience.
- Back-End Development: Design, develop, and maintain robust and scalable back-end services using [Backend Tech - e.g., Node.js, Python/Django, Java/Spring, React].
- Cloud Deployment: Deploy and manage applications on Google Cloud Platform (GCP), leveraging services like [GCP Tech - e.g., App Engine, Cloud Functions, Kubernetes].
- Performance Optimization: Identify and address performance bottlenecks to ensure optimal speed and scalability of our applications.
- Code Reviews: Participate in code reviews to maintain code quality and share knowledge with team members.
- Unit Testing: Write and maintain unit tests to ensure the reliability and correctness of our code.
- SDLC Participation: Actively participate in all phases of the software development lifecycle, including requirements gathering, design, implementation, testing, and deployment.
- Collaboration: Work closely with product managers, designers, and other engineers to deliver high-quality software that meets user needs.

Skills Required: Python, GCP, Angular, DevOps
Skills Preferred: API, Tekton, Terraform
Experience Required: 5+ years of professional software development experience
Education Required: Bachelor's Degree

TekWissen Group is an equal opportunity employer supporting workforce diversity.
Posted 1 day ago
6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Title/Position: Data QA Engineer
Job Location: Pune
Employment Type: Full Time
Shift Time: UK Shift

Job Overview: We are seeking a detail-oriented and highly motivated Data Quality Assurance Engineer to join our dynamic team. As a QA Engineer, you will be responsible for designing and executing data validation strategies, identifying anomalies, and collaborating with cross-functional teams to uphold data quality standards across the organization and deliver high-quality software solutions.

Responsibilities:
- Develop and execute test plans, test cases, and automation scripts for ETL pipelines and data validation.
- Perform SQL-based data validation to ensure data integrity, consistency, and correctness.
- Work closely with data engineers to test and validate Data Hub implementations.
- Automate data quality tests using Python and integrate them into CI/CD pipelines.
- Debug and troubleshoot data-related issues across different environments.
- Ensure data security, compliance, and governance requirements are met.
- Collaborate with stakeholders to define and improve testing strategies for big data solutions.

Requirements:
- 6+ years of experience in QA, with a focus on data testing, ETL testing, and data validation.
- Strong proficiency in SQL for data validation, transformation testing, and debugging.
- Proficiency in Python is an added advantage.
- Experience with ETL testing and data solutions.
- Experience with cloud platforms (AWS, Azure, or GCP) is a plus.
- Strong problem-solving skills and attention to detail.
- Excellent communication skills and ability to work across functional teams.

Company Profile: Stratacent is an IT consulting and services firm, headquartered in Jersey City, NJ, with two global delivery centres in the New York City and New Delhi areas, plus offices in London, Canada, and Pune, India. We are a leading IT services provider focusing on Financial Services, Insurance, Healthcare, and Life Sciences. We help our customers in their digital transformation journey and provide services/solutions around cloud infrastructure, data and analytics, automation, application development, and ITSM. We have partnerships with SAS, Automation Anywhere, Snowflake, Azure, AWS, and GCP. URL: http://stratacent.com

Employee Benefits:
• Group Medical Insurance
• Cab facility
• Meals/snacks
• Continuous Learning Program

Stratacent India Private Limited is an equal opportunity employer and will not discriminate against any employee or applicant for employment on the basis of race, color, creed, religion, age, sex, national origin, ancestry, handicap, or any other factors.
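SQL-based data validation of the kind this role describes can be sketched with the standard-library sqlite3 module standing in for a real warehouse: a row-count and amount-sum reconciliation between a source and a target table after an ETL load. Table and column names are illustrative assumptions.

```python
import sqlite3

# Sketch of SQL-based ETL validation, with in-memory SQLite standing in
# for a warehouse. Table and column names are illustrative.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE src_orders (id INTEGER PRIMARY KEY, amount REAL);
    CREATE TABLE tgt_orders (id INTEGER PRIMARY KEY, amount REAL);
    INSERT INTO src_orders VALUES (1, 10.0), (2, 20.5), (3, 30.0);
    INSERT INTO tgt_orders VALUES (1, 10.0), (2, 20.5), (3, 30.0);
""")

def reconcile(conn, src, tgt):
    """Compare row counts and amount totals between source and target tables."""
    src_count, src_sum = conn.execute(
        f"SELECT COUNT(*), ROUND(SUM(amount), 2) FROM {src}").fetchone()
    tgt_count, tgt_sum = conn.execute(
        f"SELECT COUNT(*), ROUND(SUM(amount), 2) FROM {tgt}").fetchone()
    return src_count == tgt_count and src_sum == tgt_sum

print(reconcile(conn, "src_orders", "tgt_orders"))  # counts and sums match

# Simulate a dropped row in the target -- the kind of ETL defect the check catches
conn.execute("DELETE FROM tgt_orders WHERE id = 3")
print(reconcile(conn, "src_orders", "tgt_orders"))  # mismatch now detected
```

In CI, checks like this would run as assertions after each pipeline stage, with per-column checksums added for stronger guarantees than count-and-sum alone.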
Posted 1 day ago
5.0 years
20 - 27 Lacs
Chennai, Tamil Nadu, India
On-site
Industry: Information Technology | Database & Infrastructure Services

We are a fast-scaling managed services provider helping enterprises in finance, retail, and digital-native sectors keep mission-critical data available, secure, and high-performing. Our on-site engineering team in India safeguards petabytes of transactional data and drives continuous optimisation across hybrid environments built on open-source technologies.

Role & Responsibilities:
- Administer and optimise PostgreSQL clusters across development, staging, and production workloads.
- Design, implement, and automate backup, recovery, and disaster-recovery strategies with point-in-time restore.
- Tune queries, indexes, and configuration parameters to achieve sub-second response times and minimise resource consumption.
- Configure and monitor logical and streaming replication, high availability, and failover architectures.
- Harden databases with role-based security, encryption, and regular patching aligned to compliance standards.
- Collaborate with DevOps to integrate CI/CD, observability, and capacity planning into release pipelines.

Skills & Qualifications

Must-Have:
- 5+ years of PostgreSQL administration in production.
- Expertise in query tuning, indexing, and vacuum strategies.
- Proficiency with Linux shell scripting and automation tools.
- Hands-on experience with replication, HA, and disaster recovery.

Preferred:
- Exposure to cloud-hosted PostgreSQL (AWS RDS, GCP Cloud SQL).
- Knowledge of Ansible, Python, or Kubernetes for infrastructure automation.

Benefits & Culture Highlights:
- Engineer-led culture that values technical depth, peer learning, and continuous improvement.
- Access to enterprise-grade lab environments and funded certifications on PostgreSQL and cloud platforms.
- Competitive salary, health insurance, and clear growth paths into architecture and SRE roles.

Work Location: On-site, India.
Skills: postgresql,shell scripting,vacuum strategies,dba,linux shell scripting,python,disaster recovery,automation tools,cloud-hosted postgresql,indexing,query tuning,replication,ansible,high availability,kubernetes,postgresql administration
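The streaming-replication and point-in-time-recovery duties in this role map to a handful of primary-side settings. A minimal postgresql.conf sketch follows; the values and the archive path are illustrative starting points, not tuned recommendations.

```ini
# postgresql.conf on the primary -- illustrative starting values only
wal_level = replica              # WAL detail sufficient for streaming replication
max_wal_senders = 5              # concurrent standby/backup connections
wal_keep_size = '1GB'            # retain WAL so standbys can catch up
hot_standby = on                 # allow read-only queries on standbys

# Continuous archiving for point-in-time recovery (PITR)
archive_mode = on
archive_command = 'test ! -f /archive/%f && cp %p /archive/%f'
```

A standby would then be seeded with pg_basebackup and pointed at the primary via primary_conninfo; recovery to a timestamp combines a base backup with replayed WAL from the archive.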
Posted 1 day ago
0 years
0 Lacs
Delhi, India
On-site
Flexing It is a freelance consulting marketplace that connects freelancers and independent consultants with organisations seeking independent talent. Flexing It has partnered with our client, a leading consulting firm, looking to engage an Enterprise/Data Architect.

Key Responsibilities:
1. Enterprise Architecture Analysis:
   a. Assess current enterprise architecture across infrastructure, data, applications, and analytics layers.
   b. Apply TOGAF, MACH, or equivalent frameworks to evaluate and redesign architecture for scalability, agility, and compliance.
   c. Assess legacy systems and define modernization roadmaps aligned with digital transformation goals.
2. Data Architecture & Governance:
   a. Analyze current data management practices including ingestion, storage, processing, and governance.
   b. Design a robust, scalable, and compliant target data architecture with a clear classification hierarchy and lifecycle governance.
   c. Define metadata management, data lineage, and master data management (MDM) strategies.
3. IT-OT Integration:
   a. Evaluate integration mechanisms between IT and OT systems (e.g., SAP, SDMS, SCADA, DCS).
   b. Recommend frameworks for secure, real-time data exchange and analytics enablement.
   c. Ensure alignment with OT cybersecurity standards.
4. TOGAF-Based Analysis:
   a. Use TOGAF principles to benchmark and align architecture with global best practices.
   b. Identify gaps and improvement opportunities in the current architecture.
5. Design of "To-Be" Architecture:
   a. Develop a future-ready architecture blueprint.
   b. Ensure compatibility with cloud adoption strategies, cybersecurity frameworks, and business objectives.
   c. Incorporate MACH architecture principles (Microservices, API-first, Cloud-native, Headless) for modular, scalable, and composable enterprise systems.
6. Stakeholder Engagement:
   a. Collaborate with divisional IT teams, data center teams, and business units to gather requirements and validate architectural decisions.
   b. Lead and participate in workshops and whiteboarding sessions to co-create solutions.

Skills Required:
1. Architecture & Frameworks:
   a. Strong understanding of TOGAF, Zachman, or FEAF frameworks.
   b. Experience designing MACH-based architectures (Microservices, API-first, Cloud-native, Headless).
   c. Familiarity with event-driven architecture (EDA) and domain-driven design (DDD).
2. Cloud & Infrastructure:
   a. Experience with cloud platforms (AWS, Azure, GCP) and hybrid hosting models.
   b. Knowledge of containerization (Docker, Kubernetes), CI/CD pipelines, and DevSecOps practices.
   c. Infrastructure modernization including edge computing and IoT integration.
3. Data & Integration:
   a. Expertise in data architecture, data lakes, data mesh, and data fabric concepts.
   b. Proficiency in ETL/ELT tools, data virtualization, and real-time streaming platforms (Kafka, MQTT).
   c. Strong understanding of data governance, privacy laws (e.g., DPDP Act, GDPR), and compliance frameworks.
4. Security & Compliance:
   a. Familiarity with cybersecurity standards (ISO/IEC 27001, NIST, IEC 62443).
   b. Experience implementing zero trust architecture and identity & access management (IAM).
5. Tools & Platforms:
   a. Experience with enterprise architecture tools (e.g., ArchiMate, Sparx EA, LeanIX).
   b. Familiarity with integration platforms (MuleSoft, Dell Boomi, Apache Camel).
   c. Exposure to ERP systems (SAP, Oracle), CRM systems (Oracle, Salesforce, etc.), OT platforms, and industrial protocols.
6. Soft Skills:
   a. Ability to translate business needs into technical architecture.
   b. Strong documentation skills.
Posted 1 day ago
5.0 - 7.0 years
14 - 24 Lacs
Bengaluru
Work from Office
Data Visualization Software Developer Engineer (5-8 Years Experience) Role Overview: We are looking for a skilled Data Visualization Software Developer Engineer with 6-8 years of experience in developing interactive dashboards and data-driven solutions using Looker and LookerML. The ideal candidate will have expertise in Google Cloud Platform (GCP) and BigQuery and a strong understanding of data visualization best practices. Experience in the media domain (OTT, DTH, Web) will be a plus. Key Responsibilities: Work with BigQuery to create efficient data models and queries for visualization. Develop LookML models, explores, and derived tables to support business intelligence needs. Optimize dashboard performance by implementing best practices in data aggregation and visualization. Collaborate with data engineers, analysts, and business teams to understand requirements and translate them into actionable insights. Implement security and governance policies within Looker to ensure data integrity and controlled access. Leverage Google Cloud Platform (GCP) services to build scalable and reliable data solutions. Maintain documentation and provide training to stakeholders on using Looker dashboards effectively. Troubleshoot and resolve issues related to dashboard performance, data accuracy, and visualization constraints. 
- Maintain and optimize existing Looker dashboards and reports to ensure continuity and alignment with business KPIs.
- Understand, audit, and enhance existing LookerML models to ensure data integrity and performance.
- Build new dashboards and data visualizations based on business requirements and stakeholder input.
- Collaborate with data engineers to define and validate data pipelines required for dashboard development and ensure the timely availability of clean, structured data.
- Document existing and new Looker assets and processes to support knowledge transfer, scalability, and maintenance.
- Support the transition/handover process by acquiring detailed knowledge of legacy implementations and ensuring a smooth takeover.

Required Skills & Experience:
- 6-8 years of experience in data visualization and business intelligence using Looker and LookerML.
- Strong proficiency in writing and optimizing SQL queries, especially for BigQuery.
- Experience in Google Cloud Platform (GCP), particularly with BigQuery and related data services.
- Solid understanding of data modeling, ETL processes, and database structures.
- Familiarity with data governance, security, and access controls in Looker.
- Strong analytical skills and the ability to translate business requirements into technical solutions.
- Excellent communication and collaboration skills.
- Expertise in Looker and LookerML, including Explore creation, Views, and derived tables.
- Strong SQL skills for data exploration, transformation, and validation.
- Experience in BI solution lifecycle management (build, test, deploy, maintain).
- Excellent documentation and stakeholder communication skills for handovers and ongoing alignment.
- Strong data visualization and storytelling abilities, focusing on user-centric design and clarity.

Preferred Qualifications:
- Experience working in the media industry (OTT, DTH, Web) and handling large-scale media datasets.
- Knowledge of other BI tools like Tableau, Power BI, or Data Studio is a plus.
- Experience with Python or scripting languages for automation and data processing.
- Understanding of machine learning or predictive analytics is an advantage.
Posted 1 day ago
3.0 - 6.0 years
14 - 30 Lacs
Delhi, India
On-site
Industry & Sector: A fast-growing services provider in the enterprise data analytics and business-intelligence sector, we deliver high-throughput data pipelines, warehouses, and BI insights that power critical decisions for global BFSI, retail, and healthcare clients. Our on-site engineering team in India ensures the reliability, accuracy, and performance of every dataset that reaches production.

Role & Responsibilities:
- Design, execute, and maintain end-to-end functional, regression, and performance test suites for ETL workflows across multiple databases and file systems.
- Validate source-to-target mappings, data transformations, and incremental loads to guarantee 100% data integrity and reconciliation.
- Develop SQL queries, Unix shell scripts, and automated jobs to drive repeatable test execution, logging, and reporting.
- Identify, document, and triage defects using JIRA/HP ALM, partnering with data engineers to resolve root causes quickly.
- Create reusable test data sets and environment configurations that accelerate Continuous Integration/Continuous Deployment (CI/CD) cycles.
- Contribute to test strategy, coverage metrics, and best-practice playbooks while mentoring junior testers on ETL quality standards.

Skills & Qualifications
Must-Have:
- 3-6 years of hands-on ETL testing experience in data warehouse or big-data environments.
- Advanced SQL for complex joins, aggregations, and data profiling.
- Exposure to leading ETL tools such as Informatica, DataStage, or Talend.
- Proficiency in Unix/Linux command-line and shell scripting for job orchestration.
- Solid understanding of SDLC, STLC, and Agile ceremonies; experience with JIRA or HP ALM.
Preferred:
- Automation with Python, Selenium, or Apache Airflow for data pipelines.
- Knowledge of cloud data platforms (AWS Redshift, Azure Synapse, or GCP BigQuery).
- Performance testing of large datasets and familiarity with BI tools like Tableau or Power BI.
Benefits & Culture Highlights:
- Merit-based growth path with dedicated ETL automation upskilling programs.
- Collaborative, process-mature environment that values quality engineering over quick fixes.
- Comprehensive health cover, on-site cafeteria, and generous leave policy to support work-life balance.

Workplace Type: On-site | Location: India | Title Used Internally: ETL Test Engineer

Skills: agile methodologies, AWS Redshift, JIRA, HP ALM, DataStage, Apache Airflow, test automation, Power BI, Selenium, advanced SQL, data warehouse, Unix/Linux, Azure Synapse, STLC, GCP BigQuery, shell scripting, SQL, performance testing, Agile, Python, SDLC, Tableau, defect tracking, Informatica, ETL testing, dimensional modeling, Talend
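The source-to-target reconciliation work this role describes can be illustrated with a small, hypothetical Python sketch. It uses an in-memory SQLite database as a stand-in for real source and warehouse tables; the table names, columns, and checks are invented for this example, and a real suite would run the same queries against the production ETL target:

```python
import sqlite3

# Hypothetical source and target tables standing in for warehouse data.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE src (id INTEGER, amount REAL);
CREATE TABLE tgt (id INTEGER, amount REAL);
INSERT INTO src VALUES (1, 10.0), (2, 20.5), (3, 5.0);
INSERT INTO tgt VALUES (1, 10.0), (2, 20.5), (3, 5.0);
""")

def reconcile(conn):
    """Compare row counts and summed amounts between source and target."""
    src_count, src_sum = conn.execute(
        "SELECT COUNT(*), SUM(amount) FROM src").fetchone()
    tgt_count, tgt_sum = conn.execute(
        "SELECT COUNT(*), SUM(amount) FROM tgt").fetchone()
    return {"rows_match": src_count == tgt_count,
            "sums_match": src_sum == tgt_sum}

result = reconcile(conn)
print(result)  # {'rows_match': True, 'sums_match': True}
```

Count-and-sum checks like this catch dropped or duplicated rows cheaply; full-fidelity validation would add row-level minus/except queries between the two tables.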
Posted 1 day ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Role: Sr. LLM Engineer, Gen AI
Location: Hyderabad/Gurgaon | Full-time

About the Role
Turing is looking for people with LLM experience to join us in solving business problems for our Fortune 500 customers. You will be a key member of the Turing GenAI delivery organization and part of a GenAI project, working with a team of other Turing engineers across different skill sets. The Turing GenAI delivery organization has previously implemented industry-leading multi-agent LLM systems, RAG systems, and open-source LLM deployments for major enterprises.

Required skills
- 5+ years of professional experience in building machine learning models and systems
- 1+ years of hands-on experience with how LLMs work and generative AI (LLM) techniques, particularly prompt engineering, RAG, and agents
- Experience in driving the engineering team toward a technical roadmap
- Expert proficiency in Python, LangChain/LangGraph, and SQL is a must
- Understanding of cloud services, including Azure, GCP, or AWS
- Excellent communication skills to effectively collaborate with business SMEs

Roles & Responsibilities
- Develop and optimize LLM-based solutions: Lead the design, training, fine-tuning, and deployment of large language models, leveraging techniques like prompt engineering, retrieval-augmented generation (RAG), and agent-based architectures.
- Codebase ownership: Maintain high-quality, efficient code in Python (using frameworks like LangChain/LangGraph) and SQL, focusing on reusable components, scalability, and performance best practices.
- Cloud integration: Aid in deployment of GenAI applications on cloud platforms (Azure, GCP, or AWS), optimizing resource usage and ensuring robust CI/CD processes.
- Cross-functional collaboration: Work closely with product owners, data scientists, and business SMEs to define project requirements, translate technical details, and deliver impactful AI products.
- Mentoring and guidance: Provide technical leadership and knowledge-sharing to the engineering team, fostering best practices in machine learning and large language model development.
- Continuous innovation: Stay abreast of the latest advancements in LLM research and generative AI, proposing and experimenting with emerging techniques to drive ongoing improvements in model performance.
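The retrieval step of a RAG pipeline can be sketched without any LLM at all. The toy below ranks documents by naive keyword overlap and assembles a grounded prompt; every function name and document here is illustrative, and in practice a framework like LangChain/LangGraph with a vector store and embedding model would replace these pieces:

```python
def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query (toy retriever)."""
    q_terms = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_terms & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Assemble a grounded prompt; an LLM call would consume this string."""
    ctx = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{ctx}\n\nQuestion: {query}"

docs = [
    "RAG grounds model answers in retrieved context.",
    "Agents route tasks across multiple tools.",
    "Prompt engineering shapes model behaviour.",
]
top = retrieve("how does RAG use retrieved context", docs)
print(build_prompt("how does RAG use retrieved context", top))
```

The structure (retrieve, then stuff context into the prompt) is the whole RAG idea; production systems only swap keyword overlap for embedding similarity.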
Posted 1 day ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Title: Sr. Backend Developer
Location: Gurgaon
Experience: 5+ Years
CTC: Up to 20 LPA
Industry: Tech Product

Job Description: We are looking for a talented and motivated Backend Developer to join our dynamic team. The ideal candidate will have a strong foundation in Node.js, cloud-native development, and AI/LLM integration. In this role, you will design scalable backend systems, deploy LLM models and AI agents (Agnos/LangGraph), and collaborate across teams to build secure, high-performance applications in a cloud-native environment.

Key Responsibilities:
- Build scalable backend services and RESTful APIs using Node.js, Express.js, and NestJS
- Architect and manage cloud-native systems on AWS and GCP
- Integrate AI/LLM agents using Agnos, LangGraph, or related tools
- Set up and manage CI/CD pipelines (AWS CodePipeline, Lambda)
- Implement real-time communication using Socket.IO, Kafka, RabbitMQ, SQS
- Ensure API security, access control, and multi-tenant system design
- Collaborate with frontend teams working on React.js
- Guide and mentor team members technically

Required Skills & Qualifications:
- 5+ years of backend development experience with interactive applications
- Strong command over Node.js, Express.js, NestJS
- Experience working with frontend technologies like React.js
- Advanced knowledge of AWS, GCP, and cloud cost/security optimization
- Familiarity with GraphQL, AWS AppSync, Elasticsearch
- Experience deploying LLMs and using Agnos/LangGraph
- Practical understanding of CI/CD pipelines and automation
- Expertise in real-time systems like Kafka, RabbitMQ, Socket.IO, SQS
Posted 1 day ago
4.0 - 7.0 years
4 - 9 Lacs
Mumbai
Work from Office
Choosing Capgemini means choosing a company where you will be empowered to shape your career in the way you'd like, where you'll be supported and inspired by a collaborative community of colleagues around the world, and where you'll be able to reimagine what's possible. Join us and help the world's leading organizations unlock the value of technology and build a more sustainable, more inclusive world.

Responsibilities:
- Design, configure, and manage NetApp storage solutions, including ONTAP, AFF, and FAS series.
- Implement and maintain storage replication, backup, and disaster recovery strategies using SnapMirror, SnapVault, and MetroCluster.
- Perform storage provisioning, troubleshooting, and performance tuning.
- Work with NAS (CIFS/NFS) and SAN (FC/iSCSI) technologies for enterprise environments.
- Support NetApp integration with cloud providers (AWS, Azure, Google Cloud).
- Automate storage management tasks using PowerShell, Python, or Ansible.
- Collaborate with IT teams to ensure high availability, security, and efficiency of storage environments.
- Monitor and resolve storage-related incidents and performance issues.

Required Skills & Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Experience in NetApp storage administration.
- Expertise in NetApp ONTAP, SnapMirror, SnapVault, MetroCluster, and Active IQ.
- Hands-on experience with SAN/NAS protocols (NFS, CIFS, iSCSI, FC).
- Knowledge of cloud-based storage solutions (AWS FSx, Azure NetApp Files, Google Cloud).
- Familiarity with automation tools like PowerShell, Ansible, and Python.
- Strong troubleshooting and problem-solving skills.
- NetApp certifications (NCDA, NCIE, or equivalent) are a plus.

- Grade Specific: NetApp Storage Admin

Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society.
It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong heritage of over 55 years, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions, leveraging strengths from strategy and design to engineering, all fuelled by its market-leading capabilities in AI, cloud and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.
Posted 1 day ago
2.0 - 4.0 years
4 - 7 Lacs
Bengaluru
Work from Office
Strong working experience in Python-based Django and Flask frameworks. Experience in developing microservices-based design and architecture.
- Strong programming knowledge in JavaScript, HTML5, Python, RESTful APIs, gRPC APIs.
- Programming experience and object-oriented concepts in Python.
- Knowledge of Python libraries like NumPy, Pandas, Open3D, OpenCV, Matplotlib.
- Knowledge of MySQL/Postgres/MSSQL databases.
- Knowledge of 3D geometry.
- Knowledge of SSO/OpenID Connect/OAuth authentication protocols.
- Working experience with version control systems like GitHub/BitBucket/GitLab.
- Familiarity with continuous integration and continuous deployment (CI/CD) pipelines.
- Basic knowledge of image processing.
- Knowledge of data analysis and data science.
- Strong communication skills.
- Very good analytical and logical thinking from different perspectives.
- Ability to handle challenges and resolve blockers.
- Good team player, proactive in giving new ideas/suggestions/solutions, and constructive in analysing team members' ideas.
- Interested in working in a fast-paced, Agile software development team.

Good to Have:
- Knowledge of other programming languages like C, C++.
- Knowledge of the basics of machine learning.
- Exposure to NoSQL databases.
- Knowledge of GCP/AWS/Azure cloud.

Works in the area of Software Engineering, which encompasses the development, maintenance and optimization of software solutions/applications.
1. Applies scientific methods to analyse and solve software engineering problems.
2. He/she is responsible for the development and application of software engineering practice and knowledge, in research, design, development and maintenance.
3. His/her work requires the exercise of original thought and judgement and the ability to supervise the technical and administrative work of other software engineers.
4. The software engineer builds skills and expertise of his/her software engineering discipline to reach standard software engineer skills expectations for the applicable role, as defined in Professional Communities.
5. The software engineer collaborates and acts as team player with other software engineers and stakeholders.

- Grade Specific: Has more than a year of relevant work experience. Solid understanding of programming concepts, software design and software development principles. Consistently works to direction with minimal supervision, producing accurate and reliable results. Individuals are expected to be able to work on a range of tasks and problems, demonstrating their ability to apply their skills and knowledge. Organises own time to deliver against tasks set by others with a mid-term horizon. Works co-operatively with others to achieve team goals, has a direct and positive impact on project performance, and makes decisions based on their understanding of the situation, not just the rules.
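The 3D-geometry knowledge this posting asks for comes down to a handful of vector operations that libraries like NumPy and Open3D provide out of the box; a pure-Python sketch of the basics, for illustration only:

```python
import math

def dot(a, b):
    """Dot product of two 3D vectors."""
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    """Cross product, e.g. for computing surface normals in mesh processing."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def angle_between(a, b):
    """Angle in radians between two vectors, via the dot-product identity."""
    na = math.sqrt(dot(a, a))
    nb = math.sqrt(dot(b, b))
    return math.acos(dot(a, b) / (na * nb))

x_axis, y_axis = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
print(cross(x_axis, y_axis))                     # (0.0, 0.0, 1.0)
print(round(angle_between(x_axis, y_axis), 4))   # 1.5708 (i.e. pi/2)
```

In real point-cloud or mesh work these would be vectorized NumPy/Open3D calls, but the underlying math is exactly this.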
Posted 1 day ago
7.0 - 12.0 years
6 - 10 Lacs
Mumbai
Work from Office
Choosing Capgemini means choosing a company where you will be empowered to shape your career in the way you'd like, where you'll be supported and inspired by a collaborative community of colleagues around the world, and where you'll be able to reimagine what's possible. Join us and help the world's leading organizations unlock the value of technology and build a more sustainable, more inclusive world.

Your Role
As a SAP HANA on RISE Consultant with 7 to 12 years of experience, you will be responsible for guiding customers through their digital transformation journey by leveraging SAP's cloud-based RISE platform. You will support the migration, deployment, and optimization of SAP S/4HANA systems on RISE, ensuring high availability, scalability, and performance in a cloud environment.
- Lead or support SAP S/4HANA migrations to RISE with SAP (public/private cloud)
- Collaborate with BASIS, infrastructure, and cloud teams for system setup and operations
- Ensure system performance, security, and compliance in cloud-hosted environments
- Provide technical guidance on upgrades, patches, and integration with other SAP modules

Your Profile
- 2+ years of relevant experience in SAP HANA and cloud-based SAP landscapes
- Hands-on experience with RISE with SAP, including provisioning and migration
- Strong knowledge of SAP S/4HANA architecture and cloud infrastructure (AWS, Azure, GCP)
- Familiarity with SAP BTP, SAP Cloud ALM, and SAP Activate methodology
- Excellent problem-solving and communication skills

WHAT YOU'LL LOVE ABOUT WORKING HERE
- We value flexibility and support your work-life balance.
- Enjoy remote work options tailored to your lifestyle.
- Benefit from flexible working hours to suit your personal needs.
- Advance your career with structured growth programs.
- Access certifications in SAP and leading cloud platforms like AWS and Azure.
- Stay ahead in your field with continuous learning opportunities.
Posted 1 day ago
7.0 - 11.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Cloud Infrastructure Architects define, design and deliver a comprehensive and coherent transformation implementation across Business, Information, Systems and Technology through strong technical Cloud and Infrastructure expertise. They design the entire Cloud- and Infrastructure-based IT lifecycle to deliver business change, which may be enabled by Cloud.

- Grade Specific: Managing Cloud Infrastructure Architect
- Design, deliver and manage complete cloud infrastructure architecture solutions.
- Demonstrate leadership of topics in the architect community and show a passion for technology and business acumen.
- Work as a stream lead at CIO/CTO level for an internal or external client.
- Lead Capgemini operations relating to market development and/or service delivery excellence.
- Are seen as a role model in their (local) community.
- Certification: preferably Capgemini Architects certification level 2 or above, relevant cloud and infrastructure certifications, IAF and/or industry certifications such as TOGAF 9 or equivalent.

Skills (competencies): Agile (Software Development Framework), Analytical Thinking, AWS Architecture, Business Acumen, Capgemini Integrated Architecture Framework (IAF), Change Management, Cloud Architecture, Coaching, Collaboration, Commercial Awareness, DevOps, Google Cloud Platform (GCP), Influencing, Innovation, Knowledge Management, Managing Difficult Conversations, Network Architecture, Risk Assessment, Risk Management, SAFe, Stakeholder Management, Storage Architecture, Storytelling, Strategic Planning, Strategic Thinking, Sustainability Awareness, Technical Governance, Verbal Communication, Written Communication
Posted 1 day ago
7.0 - 10.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Works in the area of Software Engineering, which encompasses the development, maintenance and optimization of software solutions/applications.
1. Applies scientific methods to analyse and solve software engineering problems.
2. He/she is responsible for the development and application of software engineering practice and knowledge, in research, design, development and maintenance.
3. His/her work requires the exercise of original thought and judgement and the ability to supervise the technical and administrative work of other software engineers.
4. The software engineer builds skills and expertise of his/her software engineering discipline to reach standard software engineer skills expectations for the applicable role, as defined in Professional Communities.
5. The software engineer collaborates and acts as team player with other software engineers and stakeholders.

- Grade Specific: Is highly respected, experienced and trusted. Masters all phases of the software development lifecycle and applies innovation and industrialization. Shows a clear dedication and commitment to business objectives and responsibilities and to the group as a whole. Operates with no supervision in highly complex environments and takes responsibility for a substantial aspect of Capgemini's activity. Is able to manage difficult and complex situations calmly and professionally. Considers 'the bigger picture' when making decisions and demonstrates a clear understanding of commercial and negotiating principles in less-easy situations. Focuses on developing long-term partnerships with clients. Demonstrates leadership that balances business, technical and people objectives. Plays a significant part in the recruitment and development of people.

Skills (competencies): Verbal Communication
Posted 1 day ago
9.0 - 13.0 years
13 - 18 Lacs
Hyderabad
Work from Office
This role involves the development and application of engineering practice and knowledge in defining, configuring and deploying industrial digital technologies (including, but not limited to, PLM and MES) for managing continuity of information across the engineering enterprise, including design, industrialization and the manufacturing supply chain, and for managing manufacturing data.

- Grade Specific: Focus on Digital Continuity & Manufacturing. Fully competent in own area. Acts as a key contributor in a more complex/critical environment. Proactively acts to understand and anticipate client needs. Manages costs and profitability for a work area. Manages own agenda to meet agreed targets. Develops plans for projects in own area. Looks beyond the immediate problem to the wider implications. Acts as a facilitator and coach, and moves teams forward.
Posted 1 day ago
7.0 - 10.0 years
10 - 14 Lacs
Bengaluru
Work from Office
The engineer is expected to help set up automation and CI/CD pipelines across some of the new frameworks being set up by the Blueprints & Continuous Assurance squad. Our squad is working on multiple streams to improve the cloud security posture for the bank.

Required skills:
- Strong hands-on experience with and understanding of the GCP cloud.
- Strong experience with automation and familiarity with one or more scripting languages like Python, Go, etc.
- Knowledge of and experience with an infrastructure-as-code language like Terraform (preferred), CloudFormation, etc.
- Ability to quickly learn the frameworks and tech stack used and contribute towards the goals of the squad.

Works in the area of Software Engineering, which encompasses the development, maintenance and optimization of software solutions/applications.
1. Applies scientific methods to analyse and solve software engineering problems.
2. He/she is responsible for the development and application of software engineering practice and knowledge, in research, design, development and maintenance.
3. His/her work requires the exercise of original thought and judgement and the ability to supervise the technical and administrative work of other software engineers.
4. The software engineer builds skills and expertise of his/her software engineering discipline to reach standard software engineer skills expectations for the applicable role, as defined in Professional Communities.
5. The software engineer collaborates and acts as team player with other software engineers and stakeholders.

- Grade Specific: Is highly respected, experienced and trusted. Masters all phases of the software development lifecycle and applies innovation and industrialization. Shows a clear dedication and commitment to business objectives and responsibilities and to the group as a whole. Operates with no supervision in highly complex environments and takes responsibility for a substantial aspect of Capgemini's activity.
Is able to manage difficult and complex situations calmly and professionally. Considers 'the bigger picture' when making decisions and demonstrates a clear understanding of commercial and negotiating principles in less-easy situations. Focuses on developing long-term partnerships with clients. Demonstrates leadership that balances business, technical and people objectives. Plays a significant part in the recruitment and development of people.

Skills (competencies): Verbal Communication
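The kind of cloud-security automation this squad describes can be sketched as a policy check over Terraform plan output. The plan structure below is a simplified, hypothetical stand-in for `terraform show -json` output, and `find_public_buckets` is an invented helper name; the resource type `google_storage_bucket_iam_member` and the `allUsers` member are real GCP/Terraform concepts:

```python
def find_public_buckets(plan: dict) -> list[str]:
    """Flag GCS bucket IAM bindings that grant access to allUsers."""
    findings = []
    resources = (plan.get("planned_values", {})
                     .get("root_module", {})
                     .get("resources", []))
    for res in resources:
        if res.get("type") != "google_storage_bucket_iam_member":
            continue
        if res.get("values", {}).get("member") == "allUsers":
            findings.append(res.get("name", "<unnamed>"))
    return findings

# Simplified sample plan: one public binding, one group-scoped binding.
sample_plan = {
    "planned_values": {"root_module": {"resources": [
        {"type": "google_storage_bucket_iam_member", "name": "public_logs",
         "values": {"member": "allUsers", "role": "roles/storage.objectViewer"}},
        {"type": "google_storage_bucket_iam_member", "name": "team_only",
         "values": {"member": "group:team@example.com", "role": "roles/storage.admin"}},
    ]}}
}
print(find_public_buckets(sample_plan))  # ['public_logs']
```

Wired into CI, a check like this can fail the pipeline before a public bucket ever reaches the cloud, which is the "continuous assurance" idea in miniature.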
Posted 1 day ago
4.0 - 6.0 years
3 - 7 Lacs
Chennai
Work from Office
Choosing Capgemini means choosing a company where you will be empowered to shape your career in the way you'd like, where you'll be supported and inspired by a collaborative community of colleagues around the world, and where you'll be able to reimagine what's possible. Join us and help the world's leading organizations unlock the value of technology and build a more sustainable, more inclusive world.

- Experience with building cloud-vendor-agnostic SaaS products.
- Experience in Java and Spring Boot microservices, deployed as containers in a Kubernetes ecosystem.
- In-depth understanding of microservices architectures; technological familiarity with public/private/hybrid cloud, OpenStack, GCE, Kubernetes, AWS.
- Deep understanding of building APIs/services:
  - that are built on top of MQs such as RabbitMQ, Kafka, NATS, etc.;
  - that use caches like Redis or Memcached to improve the performance of the platform;
  - that scale to millions of users in a cloud environment such as private cloud, GCP, AWS, Azure, etc.
- Good to have: OAuth, OpenID, and SAML-based authentication experience.
Posted 1 day ago
6.0 - 11.0 years
30 - 45 Lacs
Bengaluru
Hybrid
Key Skills: Java, Python, Core Java, Kubernetes, Docker, GCP, AWS Cloud, Azure, MS SQL Server

Roles and Responsibilities:
- Design and build robust, scalable, and secure backend systems using Java, Python, or similar technologies.
- Contribute to system architecture, code reviews, and software design best practices.
- Collaborate with cross-functional teams including product, QA, DevOps, and frontend engineers.
- Work with containerization and orchestration tools like Docker and Kubernetes.
- Build and deploy cloud-native applications on AWS, GCP, or Azure.
- Drive CI/CD implementation and DevOps automation.
- Mentor junior engineers and foster a strong engineering culture.
- Continuously identify performance improvements and bottlenecks.

Skills Required
Must-Have:
- 4+ years of hands-on experience in software engineering.
- Strong coding skills in Java, Python, or a similar backend language.
- Solid understanding of Core Java concepts and software design patterns.
- Experience with microservices, REST APIs, and distributed system design.
- Proficiency in Docker, Kubernetes, and cloud platforms like AWS, GCP, or Azure.
- Familiarity with SQL databases (e.g., MS SQL Server).
- Strong communication and problem-solving skills.
Nice-to-Have:
- Frontend experience using React/Angular/Vue.
- Exposure to CI/CD tools and DevOps culture.
- Familiarity with data engineering or ML pipelines.
- Experience working in a fast-paced startup environment.

Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field, or equivalent practical experience.
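A distributed-system staple behind the "distributed system design" requirement is retrying transient failures with exponential backoff. A minimal sketch follows; the helper name, delays, and the simulated flaky call are all illustrative:

```python
import time

def with_retries(fn, attempts=3, base_delay=0.01, sleep=time.sleep):
    """Retry a flaky call with exponential backoff; re-raise once exhausted."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            sleep(base_delay * (2 ** attempt))  # 0.01s, 0.02s, ...

# Simulated dependency that fails twice before succeeding.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(with_retries(flaky))  # ok  (succeeds on the third attempt)
```

Production variants add jitter to the delay and retry only on error types known to be transient, so that permanent failures surface immediately.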
Posted 1 day ago