5.0 - 9.0 years
0 Lacs
chennai, tamil nadu
On-site
You are a highly motivated and skilled Python Data Science Professional looking to join our dynamic Production Management AIOps team. Your role will involve leveraging data science techniques and Python to develop innovative solutions that optimize production processes, enhance predictive capabilities, and integrate AI/ML models into operational workflows. As a passionate data scientist, you should have a strong understanding of Python's data science ecosystem, experience in building and deploying machine learning models, and a desire to work with large datasets in a fast-paced, collaborative environment.

Your responsibilities will include:
- Analyzing large, complex datasets to identify trends, patterns, and anomalies.
- Developing and implementing machine learning models using Python libraries like scikit-learn, TensorFlow, and PyTorch to address business challenges.
- Evaluating and refining model performance to ensure accuracy and reliability.
- Designing and implementing data pipelines to collect, process, and transform data from various sources.
- Creating clear and compelling visualizations to communicate data insights to technical and non-technical audiences.
- Participating actively in agile development processes, collaborating with other data scientists, engineers, and product managers.
- Staying up to date with the latest advancements in data science, machine learning, and AI.

Qualifications:

Technical Skills:
- 5+ years of proven experience as a Data Scientist with a strong focus on Python.
- Proficiency in Python and essential data science libraries such as pandas, NumPy, scikit-learn, TensorFlow, and PyTorch.
- Experience with statistical modeling, machine learning algorithms, and data mining techniques.
- Strong data visualization skills using libraries like Matplotlib, Seaborn, or Plotly.
- Knowledge of data engineering tools and techniques such as SQL, Spark, and cloud-based data warehousing solutions.
- Familiarity with version control systems like Git and agile development methodologies.

Soft Skills:
- Excellent problem-solving and analytical skills.
- Strong communication and interpersonal skills.
- Ability to work independently and as part of a team.
- Ability to manage multiple tasks and prioritize effectively.
- Passion for data science and a desire to learn and grow.

Education:
- Bachelor's or Master's degree in Data Science, Computer Science, Statistics, or a related field.

This is a full-time position in the Technology job family group under Systems & Engineering. If you require a reasonable accommodation due to a disability when using our search tools or applying for a career opportunity, please review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster for more information.
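By way of illustration, the model-building work this posting describes might look like the minimal scikit-learn sketch below; the data, feature names, and labels are synthetic stand-ins, not anything from Citi's actual systems.

```python
# Minimal sketch: train and evaluate a classifier on synthetic
# "production metrics" data. All data here is invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 4))                 # e.g. latency, error rate, load, queue depth
y = (X[:, 0] + 0.5 * X[:, 1] > 1).astype(int)  # synthetic "incident" label

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# "Evaluating and refining model performance": precision/recall matter
# more than raw accuracy when incidents are rare.
print(classification_report(y_test, model.predict(X_test)))
```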
Posted 1 day ago
3.0 - 7.0 years
0 Lacs
nagpur, maharashtra
On-site
You will be responsible for designing, building, testing, integrating, managing, and optimizing high-volume data from various sources to meet project requirements. Your role will involve leveraging PySpark/Spark and SQL to transform and process large datasets. You should be able to participate in all phases of the Software Development Life Cycle (SDLC), including requirements gathering, design, development, testing, deployment, maintenance, and support.

To excel in this role, you should have experience building data integration workflows using Databricks, with knowledge of Delta Lake, Unity Catalog, Lakehouse Architecture, Table Triggers, Delta Live Pipelines, and Databricks Runtime. A solid understanding of Spark and PySpark for big data processing is essential, as is strong expertise in advanced SQL queries (particularly analytical functions) and the ability to write and optimize highly efficient SQL. Experience working in an AWS environment is also essential for this position.

Key skills for this role include Databricks, Unity Catalog, Table Triggers, data engineering, lakehouse architecture, Delta Live Pipelines, AWS, SQL, Databricks Runtime, Spark, database concepts, Delta Lake, and PySpark.
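As a rough sketch of the PySpark and analytical-function work described above, assuming a Databricks-style environment with hypothetical table and column names:

```python
# Sketch of a PySpark transformation with an analytical (window) function.
# Table and column names are hypothetical; on Databricks, `spark` is
# provided by the runtime and the builder line can be dropped.
from pyspark.sql import SparkSession, functions as F, Window

spark = SparkSession.builder.appName("orders-demo").getOrCreate()

orders = spark.read.table("raw.orders")  # hypothetical source table

# Rank each customer's orders by amount and keep their top 3.
w = Window.partitionBy("customer_id").orderBy(F.col("amount").desc())
top_orders = (
    orders
    .withColumn("rank", F.row_number().over(w))
    .filter(F.col("rank") <= 3)
)

# Write the result as a Delta table (the default format on Databricks).
top_orders.write.format("delta").mode("overwrite").saveAsTable("curated.top_orders")
```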
Posted 1 day ago
4.0 - 8.0 years
0 Lacs
pune, maharashtra
On-site
As an experienced Big Data Developer, you will be responsible for designing, developing, and optimizing big data solutions using Java and Apache Spark. Your expertise in API development, data engineering, and big data tools will be crucial in collaborating with data scientists and analysts to deliver efficient solutions. You will play a key role in optimizing big data workflows for performance and scalability while ensuring data quality, integrity, and security.

Your key responsibilities will include:
- Designing, developing, and maintaining big data solutions using Java and Apache Spark.
- Developing and managing APIs for data ingestion, processing, and extraction.
- Collaborating with stakeholders to understand data requirements and deliver efficient solutions.
- Optimizing big data workflows for performance and scalability.
- Implementing data engineering best practices for data quality, integrity, and security.
- Working with big data tools and technologies such as Hadoop, HDFS, Hive, HBase, Kafka, and others as needed.
- Troubleshooting and resolving issues related to data processing and workflows.
- Staying updated with the latest trends and advancements in big data technologies and tools.

To qualify for this role, you should have:
- A Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- 4-8 years of experience in big data development and data engineering.
- Proficiency in Java and experience with Apache Spark.
- Experience in API development and integration.
- A strong understanding of data engineering principles and big data concepts.
- Familiarity with big data tools such as Hadoop, HDFS, Hive, HBase, and Kafka.
- Experience with SQL and NoSQL databases.
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration skills.

Preferred qualifications include experience with cloud platforms, containerization technologies like Docker and Kubernetes, continuous integration and deployment pipelines, and familiarity with machine learning frameworks and libraries.

In return, you will receive a competitive salary and benefits package, along with the opportunity to work with cutting-edge technologies in a dynamic and collaborative environment. Professional growth and development opportunities will also be available to support your career advancement.
Posted 1 day ago
3.0 - 7.0 years
0 Lacs
pune, maharashtra
On-site
As a Power BI & PowerApps Developer at AiEdge Analytics, you will play a crucial role in creating, developing, and deploying interactive Power BI dashboards, integrating data sources, managing ETL pipelines, and designing PowerApps applications. Your responsibilities will include collaborating with cross-functional teams, ensuring data accuracy and security, and implementing analytics solutions to meet business needs.

To excel in this role, you should have knowledge of Computer Science and Software Development, experience in back-end web development and programming, proficiency in object-oriented programming (OOP), strong analytical and problem-solving skills, and the ability to work effectively in a team environment. A Bachelor's degree in Computer Science, Information Technology, or a related field is required. Experience with Microsoft Power BI and PowerApps would be beneficial.

Join our passionate team of certified experts at AiEdge Analytics, where we leverage over 15 years of industry experience in Microsoft Technologies for Data Engineering, Data Analytics, and Business Intelligence to transform raw data into valuable insights. By offering services such as interactive Power BI dashboards, data engineering, ETL pipeline integration, advanced analytics, and tailored BI solutions, we empower businesses to make informed decisions at optimized costs. With a focus on Lean Product Development, strong data security, and continuous client collaboration, we deliver maximum value and drive business growth. Come be a part of our team and unlock the true potential of data to make a positive impact.
Posted 1 day ago
5.0 - 9.0 years
0 Lacs
maharashtra
On-site
You are a strategic thinker passionate about driving solutions in the data domain. You have found the right team.

As a Data Domain Modeler in the Transformation & Innovation team, you will lead the design and implementation of end-to-end data models, starting from raw data through to the semantic layer that makes our data more accessible and understandable for different personas ranging from finance users, data analysts, automation, quantitative research, and machine learning teams. As part of an influential and data-centric team focused on data accessibility, you will design new data models for domains such as headcount, contractors, financials, forecasting models, markets, and macro-economic scenarios. You will also represent the data domains in the overall information architecture strategy to optimize data models for end-user consumption, identify data homogenization opportunities, and optimize data pipelines in our data lakehouse. You will lead the engagement and partner with product owners, business users (both technical and non-technical), data providers, and technology teams across the entire finance function to design and deliver data products.

Job Responsibilities:
- Work on some of the most complex and highly visible data problems in finance, at the intersection of finance and technology.
- Design and build a new cloud-based data lakehouse for the P&A community, used by everyone from analysts to the CFO for day-to-day reporting.
- Work on a wide range of data sets and use cases to support different Planning & Analysis processes, and personally lead and drive their design.
- Create solutions for key data challenges and implement innovative technology-based solutions at the bank, such as an enterprise data catalog and AI-enabled conversational analytics.
- Partner with other high-performing teams within JPM to inspire innovation and champion change throughout the bank.

Required qualifications, capabilities, and skills:
- Strong analytical and problem-solving skills with attention to detail, to formulate effective data models that address users' consumption pain points and to lead their delivery.
- A curious mind that digs deep into the business and data to understand the context: an inquisitive, analytical mindset that challenges the status quo and strives for excellence.
- 5+ years of relevant experience designing and implementing data models and analytic solutions using dimensional and relational data models.
- A hands-on, flexible approach to creating solutions aligned with the tools and skills of the client user, and strong communication skills to present data products and educate data consumers.
- Strong knowledge of and experience using SQL and Python for data analysis, data engineering, and transformation to answer business questions.
- Experience with ETL/ELT processes and architecture to move data across pipelines in a data lake.
- Experience building analytics dashboards or models suited for interactive dashboard consumption.
- Experience with cloud-based data lake platforms such as AWS, Azure, or Google Cloud.
- Bachelor's degree in computer science, data science, information systems, business analytics, or a related discipline.
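To make the dimensional-modeling idea concrete, here is a tiny, hypothetical sketch of consuming a star schema (a fact table joined to a dimension, then aggregated); the tables, columns, and figures are invented for illustration only.

```python
# Sketch: consuming a star schema — join a fact table to a dimension,
# then aggregate for reporting. All tables and values are illustrative.
import pandas as pd

fact_headcount = pd.DataFrame({
    "dept_key":  [1, 1, 2, 2, 2],
    "month":     ["2024-01"] * 5,
    "headcount": [10, 12, 7, 8, 9],
})
dim_department = pd.DataFrame({
    "dept_key":  [1, 2],
    "dept_name": ["Markets", "Planning & Analysis"],
})

report = (
    fact_headcount
    .merge(dim_department, on="dept_key", how="left")   # fact -> dimension lookup
    .groupby(["dept_name", "month"], as_index=False)["headcount"]
    .sum()
)
print(report)
```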
Posted 1 day ago
8.0 - 15.0 years
0 Lacs
hyderabad, telangana
On-site
As a Senior Data Engineering Manager at Amgen, you will play a pivotal role in leading the end-to-end data strategy and execution for regulatory product submissions, lifecycle management, and compliance reporting within the biotech or pharmaceutical domain. Your primary responsibilities will revolve around ensuring the timely and accurate delivery of regulatory data assets across global markets by collaborating with cross-functional Regulatory Integrated Product Teams (IPT).

Your key responsibilities will include:
- Leading the engineering strategy for regulatory operations, encompassing data ingestion, transformation, integration, and delivery across regulatory systems.
- Serving as the data engineering Subject Matter Expert (SME) within the Integrated Product Team to facilitate regulatory submissions, agency interactions, and lifecycle updates.
- Collaborating with departments such as global regulatory affairs, clinical, CMC, quality, safety, and IT to translate submission data requirements into data engineering solutions.
- Overseeing the development of data pipelines, models, and metadata frameworks that adhere to submission data standards.
- Enabling integration and reporting across regulatory information management systems and other relevant platforms.
- Implementing data governance, lineage, validation, and audit trails to ensure regulatory compliance.
- Guiding the development of automation solutions, dashboards, and analytics to enhance visibility into submission timelines and regulatory KPIs.
- Ensuring interoperability between regulatory data platforms and enterprise data lakes for cross-functional reporting and insights.
- Driving innovation by evaluating emerging technologies in data engineering and AI for regulatory intelligence.
- Leading and mentoring a team of data engineers and analysts to foster a culture of excellence and innovation.
- Implementing Agile methodologies to enhance team velocity and project delivery.

The ideal candidate for this role should possess:
- 12+ years of experience in data engineering, with at least 3 years in a managerial capacity, preferably within the biotech or pharmaceutical industry.
- Proven experience supporting regulatory functions and familiarity with ETL/ELT tools and cloud-based data platforms.
- A deep understanding of regulatory standards, data compliance, and submission processes.
- Strong project management, communication, and leadership skills.
- The ability to translate technical capabilities into business outcomes and work effectively in cross-functional environments.

While not mandatory, prior experience in integrated product teams or regulatory transformation programs, knowledge of Regulatory Information Management Systems, and familiarity with Agile methodologies are considered advantageous. In addition to technical expertise, soft skills such as analytical thinking, communication, teamwork, and self-motivation are highly valued in this role. A degree in Computer Science or a related field, along with relevant certifications, is preferred. Amgen is an equal opportunity employer committed to diversity and inclusion in the workplace.
Posted 1 day ago
4.0 - 8.0 years
0 Lacs
karnataka
On-site
You will be an integral part of Annalect Global and Omnicom Group, one of the largest media and advertising agency holding companies in the world. Our agency brands are consistently recognized as being among the world's creative best. Annalect India plays a key role in providing stellar products and services in the areas of Creative Services, Technology, Marketing Science, Market Research, Business Support Services, Media Services, and Consulting & Advisory Services. We are currently looking for talented professionals like you to join our team.

Your responsibilities will include gathering and evaluating data requirements; designing, building, and managing ETL pipelines using Python; serving as the primary point of contact for data engineering tasks; owning the data QA process; prioritizing and managing engineering tasks; contributing to project planning; and creating technical documentation. You will also drive conversations with the team, clients, and business stakeholders.

To qualify for this role, you should have at least 3-5 years of hands-on experience in data engineering or data management, preferably within data-driven domains. You should be able to design, develop, and maintain scalable Python- and SQL-based ETL pipelines; work with cloud data warehouses like Google BigQuery; collaborate with analysts and business teams; manage multiple data workflows and projects; and maintain positive client and vendor relationships. Excellent written and verbal communication skills are essential.
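A minimal sketch of the Python ETL work this posting describes, assuming hypothetical file, project, and table names and that GCP credentials are already configured; the load step uses the standard google-cloud-bigquery client.

```python
# Sketch of a small ETL step: extract a CSV, transform with pandas,
# load into BigQuery. Project, dataset, and file names are placeholders.
import pandas as pd
from google.cloud import bigquery  # pip install google-cloud-bigquery

# Extract
df = pd.read_csv("daily_spend.csv", parse_dates=["date"])

# Transform: basic cleaning plus a derived column
df = df.dropna(subset=["campaign_id"])
df["cost_per_click"] = df["spend"] / df["clicks"].where(df["clicks"] > 0)

# Load (requires GCP credentials configured in the environment)
client = bigquery.Client()
job = client.load_table_from_dataframe(df, "my-project.marketing.daily_spend")
job.result()  # block until the load job finishes
```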
Posted 1 day ago
5.0 - 9.0 years
0 Lacs
pune, maharashtra
On-site
You will be a vital member of the Data & Analytics team at Sonny's Enterprises in Pune, India. As the SAP Data Analyst, your role will focus on integrating Finance, Operations, and Technology by drawing on your deep SAP expertise, FP&A alignment, and advanced Power BI skills.

Your responsibilities will include translating FP&A and operational reporting needs into SAP data models, analyzing key financial KPIs, understanding SAP FICO processes, mapping costs across SAP modules, and ensuring accurate financial reporting using organizational structures. You will collaborate with cross-functional teams to gather business requirements, design dashboards using Power BI, and support stakeholders in acting on insights from SAP ECC or S/4HANA systems.

In terms of Power BI development, you will design star schemas, develop DAX measures and KPIs, implement security models, and create scalable BI assets aligned with enterprise data strategies. Collaboration with ETL and data engineering teams will involve building data pipelines; data cleansing, transformation, and validation; and validating Power BI outputs against SAP source systems for data integrity.

Your qualifications should include a Bachelor's degree in a related field, 5+ years of experience in data analysis or business intelligence, strong SAP expertise, Power BI skills, SQL proficiency, and experience in manufacturing, finance, or supply chain environments. Desirable competencies for this role include strong analytical and problem-solving skills, a proactive and collaborative mindset, the ability to drive continuous improvement, and excellent communication and documentation skills. Sonny's Enterprises is an equal opportunity employer committed to diversity and inclusion.
Posted 1 day ago
15.0 - 19.0 years
0 Lacs
noida, uttar pradesh
On-site
As a Senior AI Architect at Dailoqa, you will play a pivotal role in shaping, designing, and delivering agentic AI solutions that drive real-world business value. A key aspect of your role will be collaborating with stakeholders to identify high-impact agentic AI use cases, define success metrics, and determine data requirements tailored to Financial Services clients. You will be responsible for architecting and overseeing the implementation of end-to-end agentic AI solutions aligned with Dailoqa's strategic objectives and client needs.

In this role, you will lead and mentor cross-functional teams in the development and deployment of scalable agentic AI applications and infrastructures. Working closely with business stakeholders, you will translate complex requirements into actionable AI architecture and technical roadmaps. You will evaluate, recommend, and integrate advanced AI/ML platforms, frameworks, and technologies that enable agentic AI capabilities, and develop and enforce AI governance frameworks, best practices, and ethical standards to ensure compliance with industry regulations and responsible AI principles. Optimizing AI models for performance, scalability, and efficiency by leveraging cloud-native and distributed computing resources will be crucial, as will staying ahead of emerging trends in agentic AI, machine learning, and data science and applying new insights to enhance solution quality and business impact. Additionally, you will provide technical leadership, mentorship, and code review for junior and peer team members, and participate in the hiring, onboarding, and development of AI talent to foster a culture of innovation and excellence.

To qualify for this role, you should have 15+ years of total experience, including 8+ years in machine learning and data science and more recent experience (4-5 years) applying generative AI models to practical, end-to-end technology solutions and AI consultancy. Knowledge of basic algorithms, object-oriented and functional design principles, and best-practice patterns is essential, as is experience implementing GenAI, NLP, computer vision, or other AI frameworks and technologies. Proficiency with LLMs, implementing RAG and other prompting strategies, Azure OpenAI, off-the-shelf platform-native AI tools and models, ML pipeline orchestration tools, Python, REST API development, NoSQL database design, and RDBMS design and optimization is necessary, along with experience in data engineering on aligned hyperscale platforms such as Databricks, Synapse, and Fivetran.

Moreover, you should hold a Master's or Ph.D. in Computer Science, Data Science, or a related field, along with extensive experience with modern AI frameworks, cloud platforms, and big data technologies. A strong background in designing and implementing AI solutions for enterprise-level applications, a proven ability to lead and mentor technical teams, excellent communication skills for explaining complex AI concepts to both technical and non-technical audiences, and a deep understanding of AI ethics and responsible AI practices are also required qualifications for this role at Dailoqa.
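To make the RAG requirement concrete, here is a deliberately simplified sketch of the retrieval step only; the `embed` function is a hypothetical stand-in for a real embedding model, and the documents are invented.

```python
# Minimal sketch of the retrieval step in a RAG pipeline: embed documents,
# embed the query, rank by cosine similarity, and assemble a prompt.
# `embed` is a hypothetical stand-in for any embedding model or API.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Stand-in embedding: a real system would call an embedding model here.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=128)
    return v / np.linalg.norm(v)

docs = ["KYC onboarding policy ...", "Loan approval workflow ...", "Card dispute process ..."]
doc_vecs = np.stack([embed(d) for d in docs])

query = "How are card disputes handled?"
scores = doc_vecs @ embed(query)           # cosine similarity (vectors are unit-norm)
top = docs[int(np.argmax(scores))]

prompt = f"Answer using only this context:\n{top}\n\nQuestion: {query}"
print(prompt)  # this prompt would then be sent to an LLM
```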
Posted 1 day ago
3.0 - 10.0 years
0 Lacs
indore, madhya pradesh
On-site
As a Data Engineering Lead at Ascentt, you will play a crucial role in overseeing and guiding the organization's data engineering function. Your deep technical expertise in big data and data engineering, combined with strong leadership skills, will drive innovation, best practices, and excellence across our data engineering teams. You will have the opportunity to shape the strategic direction of the data engineering practice, define best practices, and lead the design of scalable, enterprise-wide data solutions and platforms.

Your key responsibilities will include establishing and evolving the strategic direction for the data engineering practice, defining and implementing best practices and standards, overseeing multiple data engineering teams and projects, mentoring and developing talent, collaborating with other practice leads and senior leadership, staying abreast of emerging trends and technologies, driving the selection and optimization of data engineering tools, establishing KPIs, and representing the data engineering practice in executive-level meetings.

To qualify for this role, you should have at least 10 years of experience in data engineering roles, with a minimum of 3 years in leadership positions. You should possess deep expertise in big data technologies, cloud platforms, and data integration tools, along with programming skills in Scala and Python (PySpark) and solid Spark knowledge. A strong understanding of data governance, security, and compliance requirements and experience leading large-scale data engineering initiatives are essential. Excellent communication skills, leadership abilities, and familiarity with Agile and DevOps methodologies are also required. Preferred qualifications include certifications such as Snowflake SnowPro or Databricks Data Engineer Professional, experience with scripting languages for automation, and industry experience in manufacturing, oil & gas, or automotive.

Ascentt offers innovative projects, a collaborative environment, and opportunities for career growth and professional development within the company. If you are a passionate builder looking to shape the future of industrial intelligence through data engineering leadership, we invite you to join our team at Ascentt.
Posted 1 day ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
As an individual contributor at Adobe, you will have the opportunity to leverage your expertise in data science engineering to drive growth and optimize customer experiences through personalization. The Customer Analytics & GTM team is dedicated to using data to deliver exceptional experiences, and this role plays a crucial part in driving data engineering for large-scale initiatives across various strategic projects.

In this role, you will be responsible for building fault-tolerant, scalable data pipelines using cloud-based tools, developing analytical capabilities using cutting-edge technologies, and optimizing data pipelines through innovative solutions. You will work on end-to-end data pipelines that run machine learning models on a production platform and contribute to data engineering and data science frameworks, tools, and processes.

The ideal candidate will possess a Bachelor's degree in computer science or equivalent, along with 5-8 years of proven experience as a data engineer. You should have expertise in distributed data technologies like Hadoop, Hive, Presto, and Spark, as well as experience with cloud-based technologies such as Databricks, S3, Azure Blob Storage, and AWS EMR. Proficiency in streaming data ingestion and transformation using Kafka and Kinesis, together with outstanding SQL skills, is essential for this role. Additionally, hands-on experience in Python/PySpark/Scala, familiarity with CI/CD tools and open-source orchestration tools, and excellent communication skills for collaboration with data scientists and machine learning engineers are required. Knowledge of the Elastic Stack, graph databases, and ML algorithms and frameworks is highly desirable. If you are passionate about contributing to the open-source community, or have experience with data governance tools, collaboration tools, or Adobe tools, that would be considered a plus.

This is a high-visibility opportunity to work on critical missions and collaborate with global teams to deliver exceptional results in a collaborative environment. Join Adobe for an exciting career where you can make a significant impact, enjoy great benefits, and maintain an outstanding work-life balance. Apply now and become part of a team that values innovation, collaboration, and the empowerment to drive transformative digital experiences.
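Given the emphasis on streaming ingestion, a rough sketch of reading from Kafka with Spark Structured Streaming might look like the following; the broker, topic, and storage paths are placeholders, and the job assumes the Spark Kafka connector package is available.

```python
# Sketch: streaming ingestion from Kafka with Spark Structured Streaming.
# Broker address, topic name, and paths are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("clickstream").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "clickstream")
    .load()
)

# Kafka delivers key/value as binary; decode and keep the event timestamp.
parsed = events.select(
    F.col("value").cast("string").alias("payload"),
    F.col("timestamp"),
)

query = (
    parsed.writeStream
    .format("parquet")
    .option("path", "s3://bucket/clickstream/")        # placeholder sink
    .option("checkpointLocation", "s3://bucket/chk/")  # required for fault tolerance
    .start()
)
query.awaitTermination()
```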
Posted 1 day ago
2.0 - 6.0 years
0 Lacs
indore, madhya pradesh
On-site
About TaskUs: TaskUs is a provider of outsourced digital services and next-generation customer experience to fast-growing technology companies, helping its clients represent, protect, and grow their brands. Leveraging a cloud-based infrastructure, TaskUs serves clients in the fastest-growing sectors, including social media, e-commerce, gaming, streaming media, food delivery, ride-sharing, HiTech, FinTech, and HealthTech. The People First culture at TaskUs has enabled the company to expand its workforce to approximately 45,000 employees globally. Presently, TaskUs has a presence in twenty-three locations across twelve countries, including the Philippines, India, and the United States. It all began with one remarkably good idea to create a different breed of Business Processing Outsourcing (BPO)! TaskUs understands that achieving growth for partners requires a culture of constant motion, exploring new technologies, being ready to handle any challenge at a moment's notice, and mastering consistency in an ever-changing world.

What We Offer: At TaskUs, we prioritize our employees' well-being by offering competitive industry salaries and comprehensive benefits packages. Our commitment to a People First culture is reflected in the various departments we have established, including Total Rewards, Wellness, HR, and Diversity. We take pride in our inclusive environment and positive impact on the community. Moreover, we actively encourage internal mobility and professional growth at all stages of an employee's career within TaskUs. Join our team today and experience firsthand our dedication to supporting People First.

Job Description: We are seeking a Data Scientist with deep expertise in modern AI/ML technologies to join our innovative team. This role combines cutting-edge research in machine learning, deep learning, and generative AI with practical full-stack cloud development skills. You will be responsible for architecting and implementing end-to-end AI solutions, from data engineering pipelines to production-ready applications leveraging the latest in agentic AI and large language models.

Key Responsibilities:

AI/ML Development & Research:
- Design, develop, and deploy advanced machine learning and deep learning models for complex business problems
- Implement and optimize Large Language Models (LLMs) and Generative AI solutions
- Build agentic AI systems with autonomous decision-making capabilities
- Conduct research on emerging AI technologies and their practical applications
- Perform model evaluation, validation, and continuous improvement

Cloud Infrastructure & Full-Stack Development:
- Architect and implement scalable cloud-native ML/AI solutions on AWS, Azure, or GCP
- Develop full-stack applications integrating AI models with modern web technologies
- Build and maintain ML pipelines using cloud services (SageMaker, ML Engine, etc.)
- Implement CI/CD pipelines for ML model deployment and monitoring
- Design and optimize cloud infrastructure for high-performance computing workloads

Data Engineering & Database Management:
- Design and implement data pipelines for large-scale data processing
- Work with both SQL and NoSQL databases (PostgreSQL, MongoDB, Cassandra, etc.)
- Optimize database performance for ML workloads and real-time applications
- Implement data governance and quality assurance frameworks
- Handle streaming data processing and real-time analytics

Leadership & Collaboration:
- Mentor junior data scientists and guide technical decision-making
- Collaborate with cross-functional teams including product, engineering, and business stakeholders
- Present findings and recommendations to technical and non-technical audiences
- Lead proof-of-concept projects and innovation initiatives

Required Qualifications:

Education & Experience:
- Master's or PhD in Computer Science, Data Science, Statistics, Mathematics, or a related field
- 5+ years of hands-on experience in data science and machine learning
- 3+ years of experience with deep learning frameworks and neural networks
- 2+ years of experience with cloud platforms and full-stack development

Technical Skills - Core AI/ML:
- Machine Learning: Scikit-learn, XGBoost, LightGBM, advanced ML algorithms
- Deep Learning: TensorFlow, PyTorch, Keras, CNN, RNN, LSTM, Transformers
- Large Language Models: GPT, BERT, T5, fine-tuning, prompt engineering
- Generative AI: Stable Diffusion, DALL-E, text-to-image, text generation
- Agentic AI: Multi-agent systems, reinforcement learning, autonomous agents

Technical Skills - Development & Infrastructure:
- Programming: Python (expert), R, Java/Scala, JavaScript/TypeScript
- Cloud Platforms: AWS (SageMaker, EC2, S3, Lambda), Azure ML, or Google Cloud AI
- Databases: SQL (PostgreSQL, MySQL), NoSQL (MongoDB, Cassandra, DynamoDB)
- Full-Stack Development: React/Vue.js, Node.js, FastAPI, Flask, Docker, Kubernetes
- MLOps: MLflow, Kubeflow, model versioning, A/B testing frameworks
- Big Data: Spark, Hadoop, Kafka, streaming data processing

Preferred Qualifications:
- Experience with vector databases and embeddings (Pinecone, Weaviate, Chroma)
- Knowledge of LangChain, LlamaIndex, or similar LLM frameworks
- Experience with model compression and edge deployment
- Familiarity with distributed computing and parallel processing
- Experience with computer vision and NLP applications
- Knowledge of federated learning and privacy-preserving ML
- Experience with quantum machine learning
- Expertise in MLOps and production ML system design

Key Competencies:

Technical Excellence:
- Strong mathematical foundation in statistics, linear algebra, and optimization
- Ability to implement algorithms from research papers
- Experience with model interpretability and explainable AI
- Knowledge of ethical AI and bias detection/mitigation

Problem-Solving & Innovation:
- Strong analytical and critical thinking skills
- Ability to translate business requirements into technical solutions
- Creative approach to solving complex, ambiguous problems
- Experience with rapid prototyping and experimentation

Communication & Leadership:
- Excellent written and verbal communication skills
- Ability to explain complex technical concepts to diverse audiences
- Strong project management and organizational skills
- Experience mentoring and leading technical teams

DEI: At TaskUs, we believe that innovation and higher performance are brought by people from all walks of life. We welcome applicants of different backgrounds, demographics, and circumstances. Inclusive and equitable practices are our responsibility as a business. TaskUs is committed to providing equal access to opportunities. If you need reasonable accommodations in any part of the hiring process, please let us know.

We invite you to explore all TaskUs career opportunities and apply through the provided URL: [TaskUs Careers](https://www.taskus.com/careers/). TaskUs is proud to be an equal opportunity workplace and an affirmative action employer. We celebrate and support diversity; we are committed to creating an inclusive environment for all employees. TaskUs's People First culture thrives on it, for the benefit of our employees, our clients, our services, and our community.
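As a hedged illustration of the "REST APIs + ML models" requirement in the posting above, here is a minimal FastAPI serving sketch; the model file, route, and feature layout are hypothetical, not TaskUs's actual stack.

```python
# Sketch: serving a trained model behind a REST endpoint with FastAPI.
# The model file and feature layout are hypothetical.
import joblib
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # e.g. a fitted scikit-learn estimator

class Features(BaseModel):
    values: list[float]  # feature vector, in training-time column order

@app.post("/predict")
def predict(features: Features) -> dict:
    X = np.asarray(features.values).reshape(1, -1)  # single-row batch
    return {"prediction": float(model.predict(X)[0])}

# Run with: uvicorn app:app --reload   (then POST JSON to /predict)
```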
Posted 1 day ago
5.0 - 9.0 years
0 Lacs
thiruvananthapuram, kerala
On-site
You are an experienced Lead Data Engineer with strong expertise in Google BigQuery and working knowledge of Looker. Your main responsibility is to support a data modernization engagement by leading efforts to revamp data architecture, optimize data pipelines, and support reporting needs. Additionally, you will mentor an internal client developer to ensure a smooth transition post-engagement.

For data architecture and engineering, you will assess and enhance existing data models, pipelines, and architecture in BigQuery. Your focus will be on optimizing data flows and storage for performance, scalability, and cost-efficiency, while implementing best practices in data engineering and governance. You will also be responsible for data process and structure improvement, reviewing and enhancing data collection and transformation processes to ensure high-quality, consistent, and reliable data availability.

For BI support (dashboards and reports), your tasks include enhancing and maintaining existing reports and dashboards using Looker, as well as designing and building new reports to meet evolving business requirements. Stakeholder collaboration is crucial: you will work closely with business and technical stakeholders to gather data and reporting needs, translating business requirements into technical deliverables. Mentoring and knowledge transfer is another key aspect of your role; you will guide and mentor the client's internal developer, ensuring a sufficient handover for long-term maintainability after project completion.

Your must-have skills include strong experience with Google BigQuery; proficiency in building and managing data pipelines and ETL/ELT workflows; solid SQL skills and experience working with large datasets; experience with Looker, version control (e.g., Git), and CI/CD for data solutions; and the ability to work in Agile environments and with remote teams. Good-to-have skills include exposure to GCP services beyond BigQuery (e.g., Dataflow, Cloud Functions). Soft skills such as excellent communication and interpersonal skills, a proven ability to work independently in a client-facing role, and experience mentoring or coaching team members are highly valued. Finally, you are expected to adhere to Information Security Management policies and procedures in this role.
Posted 1 day ago
8.0 - 12.0 years
0 Lacs
karnataka
On-site
The responsibilities for this role include:
- Maintaining and enhancing the existing MS Azure AI infrastructure.
- Providing appropriate documentation and status reporting according to SCRUM.
- Architecting, configuring, and deploying bots within Azure VNets and landing zones, ensuring scalability and security.
- Designing and managing private endpoints to ensure secure communication between bots and cloud services.
- Implementing Azure security best practices for bot deployment.
- Optimizing bot performance using Azure resources such as Application Gateway, Load Balancers, and Traffic Manager.
- Collaborating with developers and DevOps teams to integrate bots into enterprise environments.
- Troubleshooting and resolving network connectivity, authentication, and deployment issues.
- Monitoring bot deployments using Azure Monitor, Application Insights, and Log Analytics.

The required skill sets for this position include:
- Agile SCRUM certification or equivalent (Professional Agile Leadership, Professional Scrum Developer, or Professional Scrum Master from Scrum.org).
- Microsoft Azure AI Most Valuable Professional certification.
- AI technical areas of expertise recognized by Microsoft, such as Azure Machine Learning, Microsoft Cognitive Services, and Microsoft Bot Framework.
- Microsoft Neural Machine Translation expertise.
- DevOps on HP private cloud.
- Advanced expertise in HP cybersecurity policies.
- MS Azure Conversational Language Understanding expertise.

The ideal candidate for this role should have 8-11 years of experience, with a primary skill set in Data Engineering. Additionally, proficiency in Azure Log Analytics, Azure Data Factory, and the Agile framework (SCRUM) would be an added advantage.
Posted 1 day ago
0.0 - 3.0 years
0 Lacs
indore, madhya pradesh
On-site
As a Python Developer (Fresher/Junior), your responsibilities will include writing clean, scalable, and efficient Python code. You will work on back-end components, APIs, and data processing tasks, integrating front-end elements with server-side logic. Collaboration with cross-functional teams to define and develop features will be a crucial aspect of your role, along with debugging and fixing software defects. It is essential to have a strong understanding of Python basics and OOP, familiarity with frameworks like Django or Flask, good knowledge of SQL and databases (MySQL/PostgreSQL), and a problem-solving mindset with a willingness to learn. A minimal sketch of this kind of back-end work follows these descriptions.

For the position of MERN Stack Developer (Fresher/Junior), you will be responsible for building and maintaining scalable web applications using MongoDB, Express.js, React.js, and Node.js. Your tasks will involve writing clean, maintainable, and reusable code, developing RESTful APIs and integrating them with front-end interfaces, and participating in code reviews while contributing to project planning. Knowledge of JavaScript (ES6), HTML5, and CSS3, familiarity with React and Node.js basics, an understanding of REST APIs and asynchronous programming, and version control with Git are essential requirements for this role.

As a QA Engineer, Manual & Automation (Fresher/Junior), you will develop and execute test cases (manual and automated); identify, record, and document bugs thoroughly; collaborate with developers to understand features and resolve issues; and create test plans while participating in sprint activities. A basic understanding of testing concepts, the SDLC, and the STLC; exposure to test automation tools (Selenium, JUnit, TestNG, etc.); familiarity with bug tracking tools (JIRA, Bugzilla); and strong attention to detail and analytical skills are necessary qualifications for this position.

In the role of Data Engineer (Fresher/Junior), your responsibilities will involve building data pipelines and workflows using ETL tools, assisting in data cleaning, processing, and transformation tasks, working with large datasets from multiple sources, and supporting data analysis and reporting needs. Knowledge of SQL and database management, an understanding of data warehousing concepts and tools, and familiarity with Python or any scripting language for data manipulation are desired; exposure to cloud platforms like AWS, Azure, or GCP is a bonus.

For the position of Java Developer (Fresher/Junior), you will be tasked with designing, coding, and testing Java-based applications, developing and consuming RESTful APIs, debugging and resolving technical issues in applications, and collaborating with team members while participating in code reviews. A strong foundation in Core Java and OOP, familiarity with Java frameworks like Spring Boot or Hibernate, an understanding of SQL and relational databases, and good problem-solving and communication skills are essential requirements for this role.
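As referenced in the Python Developer description above, here is a minimal Flask sketch of that kind of back-end/API work; the route, payload shape, and in-memory store are invented for illustration.

```python
# Minimal Flask sketch of a small back-end API. A real application
# would persist to MySQL/PostgreSQL instead of an in-memory list.
from flask import Flask, jsonify, request

app = Flask(__name__)
todos = []  # in-memory store for demonstration only

@app.get("/todos")
def list_todos():
    return jsonify(todos)

@app.post("/todos")
def add_todo():
    item = request.get_json()  # e.g. {"task": "write tests"}
    todos.append(item)
    return jsonify(item), 201

if __name__ == "__main__":
    app.run(debug=True)
```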
Posted 1 day ago
5.0 - 9.0 years
0 Lacs
noida, uttar pradesh
On-site
Seeking an experienced and motivated Data Engineering Lead to join our growing team. The ideal candidate will have a minimum of 5 years of hands-on experience in data engineering, with a strong focus on Spark. As a Data Engineering Lead, you will play a pivotal role in designing, developing, and maintaining our data infrastructure, as well as leading a team of data engineers to deliver high-quality results.
Posted 1 day ago
2.0 - 6.0 years
0 Lacs
haryana
On-site
As part of Majid Al Futtaim, you will be joining a team dedicated to creating great moments for everyone, every day. Majid Al Futtaim is a prominent shopping mall, residential communities, retail, and leisure pioneer operating across the Middle East, Africa, and Asia, welcoming more than 560 million visitors annually. Over the last twenty years, we have significantly influenced the consumer landscape in the region, revolutionizing shopping, living, and entertainment experiences, all while upholding a robust sustainability commitment. Notably, we are proud to operate the world's largest mall to achieve LEED Gold EBOM certification. Our workforce comprises over 45,000 team members from 100 nationalities across 17 international markets. At Majid Al Futtaim, customer-centricity is our core value, reflected in the endeavors of each team member. If you are bold, passionate, and collaborative, Majid Al Futtaim is the ideal destination for you. Join us in our mission to make a difference and be part of a diverse and inclusive environment where great opportunities await.
Posted 2 days ago
10.0 - 14.0 years
0 Lacs
thiruvananthapuram, kerala
On-site
As a Data Engineer at Guidehouse, you will have the opportunity to lead and execute data engineering projects, ensuring timely delivery and high quality. You will be responsible for building and optimizing data architectures for operational and analytical purposes, collaborating with cross-functional teams to gather and define data requirements. Additionally, you will implement data quality, data governance, and data security practices while managing and optimizing cloud-based data platforms such as Azure and AWS.

Your role will involve developing and maintaining Python/PySpark libraries for data ingestion, processing, and integration with both internal and external data sources. You will design and optimize scalable data pipelines using Azure Data Factory and Spark (Databricks), working closely with stakeholders to address data-related technical issues and support their data infrastructure needs. As a mentor to junior data engineers, you will guide best practices in data engineering and evaluate and integrate new technologies and tools to improve the data infrastructure. Ensuring compliance with data privacy regulations and monitoring performance across the data ecosystem will also be key responsibilities in this role.

To qualify for this position, you should have a Bachelor's or Master's degree in computer science, information systems, statistics, math, engineering, or a related discipline. A minimum of 10 years of hands-on experience in data engineering and cloud services is required, along with experience leading and mentoring team members. Proficiency in Azure Data Factory, Databricks, Python, and PySpark is essential, as is familiarity with modern data storage concepts such as the data lake and lakehouse. Experience with other cloud services, such as AWS, and data processing technologies will be advantageous, along with the ability to enhance, develop, and resolve defects in ETL processes using cloud services. Strong communication skills, problem-solving abilities, and a self-starter mindset are desirable traits for this role. Experience across different cloud providers, programming, and DevOps would be considered nice to have.

Guidehouse offers a comprehensive total rewards package, including competitive compensation and a flexible benefits package, to create a diverse and supportive workplace environment. Guidehouse is an Equal Opportunity Employer and will consider qualified applicants with criminal histories in accordance with applicable law. If you need accommodation during the application process, please contact Guidehouse Recruiting. Be cautious of unauthorized correspondence related to job opportunities and report any suspicious activity to Guidehouse's Ethics Hotline. Your privacy and security are important to us.
Posted 2 days ago
8.0 - 12.0 years
0 Lacs
hyderabad, telangana
On-site
As a Senior Business Intelligence Partner at Randstad Sourceright, you will play a crucial role in partnering with regional and global stakeholders to provide business solutions through data-driven insights and recommendations. Your primary responsibility will be to transform complex data into compelling narratives that drive strategic business decisions. You will work closely with operational teams, data engineering, BI support teams, internal teams, and RSR clients to deliver insightful analytics and consulting services across all operating companies within RSR.

In this role, you will oversee the end-to-end delivery of regional and global accounts with a focus on data analytics. This includes collaborating with data engineering to provide usable datasets, creating meaningful insights and visualizations within the BI solution (DOMO), and communicating effectively with stakeholders. Your expertise in translating data into actionable insights will be crucial in mitigating risks, identifying operational and revenue-generating opportunities, and providing innovative business solutions.

Consulting and partnering will constitute the largest portion of your responsibilities (60%). You will leverage data analytics to extract actionable insights from both external client and internal RSR data, supporting strategic decision-making processes. Additionally, you will collaborate with global teams to standardize data practices, share analytics best practices, and mentor junior team members. Your ability to communicate complex insights through data visualization and storytelling will be essential in building and maintaining client relationships.

Implementation of BI solutions will also be a key focus area (30%). You will lead the implementation workstream, deliver a reporting suite for clients, and ensure adherence to contractual conditions around reporting, financial elements, SLAs, and KPIs. Your role will involve developing and maintaining a comprehensive quality management program, overseeing the design and build of visualizations, and ensuring reporting accuracy and timeliness through continuous process improvement.

Maintaining standards (10%) will involve working within agreed parameters aligned with global BI production methods, leveraging an agile work environment for innovation, organizing stakeholder feedback, promoting available resources for operations training, and ensuring data governance principles are met.

To qualify for this role, you should have a Bachelor's degree (preferred in EMEA, mandatory in APAC) and a minimum of 8 years of experience in data analytics and consulting in a delivery environment. Proficiency in data visualization tools such as DOMO, Tableau, Spotfire, and Power BI, along with familiarity with GCP, is required, as are strong SQL skills and knowledge of data governance principles. Experience in project management, talent acquisition, and people analytics will be advantageous, and prior exposure to advanced analytics, data science, AI, and ML is preferred. Your technical skills should span data visualization, data manipulation, and analytics, paired with client-facing communication; excellent presentation and storytelling abilities will be essential in delivering solutions to senior leadership and client stakeholders.

If you are looking to join a dynamic team and make a significant impact through data-driven insights and recommendations, this role as a Senior Business Intelligence Partner at Randstad Sourceright offers a challenging and rewarding opportunity to excel in the field of business intelligence and analytics.
Posted 2 days ago
4.0 - 8.0 years
0 Lacs
pune, maharashtra
On-site
As a Software Development Engineer (Data Engineering) in Enterprise Data Solution (EDS) at Mastercard, you will have the opportunity to contribute to building high-performance data pipelines that load data into the company's Data Warehouse. The Data Warehouse plays a crucial role in providing analytical capabilities to various business users, helping them address their business challenges through data-driven insights. You will be an integral part of a growing organization, collaborating with experienced engineers to tackle complex problems.

Your responsibilities will include participating in medium-to-large data engineering projects, integrating new sources of real-time, streaming, batch, and API-based data into the platform, and supporting business stakeholders in leveraging data-driven insights for growth and transformation. You will build and maintain data processing workflows, ensure reliable integrations with internal systems and third-party APIs, and assist data analysts in deriving meaningful insights from complex datasets. Collaborating with cross-functional agile teams, you will drive projects through the full development cycle while promoting data engineering best practices within the team.

To excel in this role, you should hold at least a Bachelor's degree in Computer Science, Computer Engineering, or a related field, or possess equivalent work experience. You must have prior experience with Data Warehouse projects within a product- or service-based organization, along with expertise in data engineering and in implementing end-to-end Data Warehouse projects in a Big Data environment. Proficiency in working with databases like Oracle and Netezza, strong SQL knowledge, and experience building data pipelines using Spark with Scala/Python/Java on Hadoop are essential. Familiarity with NiFi and Agile methodologies is advantageous. Strong analytical skills are necessary for debugging production issues, providing root cause analysis, and implementing mitigation plans. Effective communication, relationship-building, collaboration, and organizational skills are essential for this role. You should be detail-oriented, proactive, and able to work independently under pressure, demonstrating a high level of initiative and self-motivation. The ability to quickly learn and adopt new technologies, conduct proofs of concept (POCs), and work effectively in diverse, geographically distributed project teams is key to success in this role.

In addition to your technical responsibilities, as a Mastercard employee you are expected to uphold corporate security responsibilities. This includes complying with security policies, maintaining the confidentiality and integrity of accessed information, reporting any security violations or breaches, and completing mandatory security trainings in accordance with Mastercard's guidelines.
Posted 2 days ago
5.0 - 9.0 years
0 Lacs
hyderabad, telangana
On-site
The Data Product Owner will lead the development of innovative data products supporting Scientific Core Data Products. These products are built from source data drawn from electronic lab notebook technologies in domains such as in vitro small- and large-molecule screens, next-generation sequencing, proteomics, and digital pathology source systems. Collaborating with scientists and technical stakeholders, the Data Product Owner will manage the full data product lifecycle, from gathering requirements to product delivery and continuous improvement. Working closely with a dedicated scrum team of data engineers, the Data Product Owner will deliver high-impact data products that empower cutting-edge scientific research and innovation.

The Data Product Owner will bridge the gap between scientists and product teams to ensure that outcomes align with business needs and value, acting as a master coordinator, driver, and translator. By gathering and prioritizing requirements from scientists, technical stakeholders, and cross-functional teams, the Data Product Owner will define the "what" and "why" of product development. They will work closely with scientists and data engineers to ensure data products meet both scientific and technical requirements, collaborating with senior leaders and cross-functional teams to ensure alignment with a product strategy supported by business cases and industry standards. Overseeing the agile product development process, the Data Product Owner will ensure smooth product delivery from initial conception through ongoing iterations, including mitigating risks and issues, providing regular product updates, and partnering with the Scrum Master and delivery team for successful sprints. By managing backlog prioritization and balancing short-term deliverables with long-term goals, the Data Product Owner will drive continuous improvement in data products post-launch, monitoring user satisfaction, adoption, and scientific outcomes.

The ideal candidate for this role should have a Master's degree in computer science, bioinformatics, data science, or a related scientific field, with 5+ years of product management experience, preferably in bioinformatics and/or the life sciences. They should possess a solid understanding of Agile methodologies, data operations (DataOps), data warehousing, and data engineering principles, as well as familiarity with cloud environments such as AWS and Snowflake. Strong communication, collaboration, and leadership skills are essential for aligning teams around shared goals and driving successful product delivery in a fast-paced, complex environment.
Posted 2 days ago
0.0 - 4.0 years
0 Lacs
madurai, tamil nadu
On-site
The ideal candidate will have hands-on experience working with Azure tools and technologies. You will receive mentorship from industry experts in data engineering and analysis, along with networking opportunities with professionals in the field. This position provides a pathway to securing a placement with us! We welcome applications from freshers, college students, and career-gap candidates. There should be no arrears before enrolling, and the overall percentage does not matter. Strong analytical and problem-solving skills are essential, along with a passion for data and a desire to learn.
Posted 2 days ago
4.0 - 8.0 years
0 Lacs
haryana
On-site
We are seeking a skilled Python AI/ML Engineer with a strong passion for developing AI/ML solutions and contributing to ML pipelines. The ideal candidate will have in-depth knowledge of traditional and deep learning concepts, hands-on programming skills, experience in enterprise-grade software engineering, and a good understanding of MLOps practices.

As a Python AI/ML Engineer, your responsibilities will include designing, developing, and deploying scalable machine learning models for tasks such as classification, regression, NLP, and generation. You will build and optimize data transformation workflows using Python and pandas, lead AI/ML project pipelines from data ingestion to model deployment and monitoring, and implement model observability and drift monitoring. Additionally, you will develop REST APIs and integrate ML models with production systems using frameworks like FastAPI. Participation in code reviews, writing unit/integration tests, and ensuring high code quality will be essential, as will collaborating with cross-functional teams including Data Engineers, DevOps, and Product Managers.

To excel in this position, you must have advanced proficiency in Python and its libraries, such as pandas, NumPy, scikit-learn, and TensorFlow/PyTorch. A strong understanding of asynchronous programming, FastAPI, and concurrency is required, as are a solid grasp of traditional ML concepts, experience with deep learning, and familiarity with MLOps tools. Experience with REST API development, integration testing, CI/CD practices, containerization tools like Docker, and cloud-based ML deployment will be beneficial. The ability to perform data transformation and aggregation tasks using Python/pandas is essential; a brief sketch of this kind of work follows below.

Experience in GenAI, work with LLMs, and exposure to tools like MLflow, Kubeflow, Airflow, or similar MLOps platforms are considered advantageous, and prior contributions to open-source ML tools or GitHub repositories are a plus. If you have a strong background in Python, AI/ML, deep learning, and MLOps practices and are enthusiastic about developing innovative solutions, we encourage you to apply for this position and be part of our dynamic team.
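The pandas transformation and aggregation work mentioned above could look like this deliberately small sketch; the dataset and columns are invented for illustration.

```python
# Sketch of a pandas transformation/aggregation workflow: derive a flag,
# then aggregate per user. All data here is made up.
import pandas as pd

events = pd.DataFrame({
    "user_id": [1, 1, 2, 2, 3],
    "event":   ["view", "buy", "view", "view", "buy"],
    "amount":  [0.0, 25.0, 0.0, 0.0, 40.0],
})

summary = (
    events
    .assign(is_purchase=events["event"].eq("buy"))
    .groupby("user_id")
    .agg(events=("event", "count"),
         purchases=("is_purchase", "sum"),
         revenue=("amount", "sum"))
    .reset_index()
)
print(summary)
```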
Posted 2 days ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
Genpact is a global professional services and solutions firm focused on delivering outcomes that shape the future. With over 125,000 employees in more than 30 countries, we are driven by curiosity, agility, and the desire to create lasting value for our clients. Our purpose is to pursue a world that works better for people by serving and transforming leading enterprises, including Fortune Global 500 companies. We specialize in deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.
We are currently seeking applications for the position of Manager, Analytics. As part of the Analytics/Innovation HUB team, you will be involved in building in-house analytics solutions for CPGR companies, including shared services, finance, supply chain, and commercial groups.
Responsibilities:
- Building strategic relationships with clients and identifying key growth opportunities
- Operating in a PMO environment; monitoring, controlling, and reporting the status of services/projects to clients
- Identifying areas of significant customer business impact and providing strategic direction and thought leadership
- Owning projects end to end, including the design, development, and delivery of analytics solutions such as dashboards, data engineering, and data science activities
- Creating project plans, timelines, and estimates
- Managing the team, monitoring team hours, and driving project tasks and milestones
- Overseeing day-to-day management of projects, communications, and priorities
- Conducting workshops with key client stakeholders to generate ideas for solving business problems
- Demonstrating the value of analytics and building strong working relationships with customers
- Guiding and assisting sales teams in identifying and promoting solutions for new business growth
- Collaborating with consulting, transformation, and practice teams to deliver complex, large client engagements that implement innovative business and technology solutions
- Leading and participating in project discussions to bring best practices for problem-solving
Minimum qualifications:
- Bachelor's degree in Engineering (Computer Science/Information Science) or Master's degree (e.g., MBA in Data Analytics/Business Analytics)
- Good Excel and presentation skills
- Strong presentation and storytelling skills
- Personal drive and a positive work ethic to deliver results within tight deadlines and in demanding situations
- Attention to detail and quick response time
- Good communication and negotiation skills, with the ability to communicate clearly and precisely at various levels
- Interpersonal skills; a team builder; self-driven, with strategic thinking abilities
Preferred qualifications:
- Experience in working on and managing SCM and Finance projects would be an added advantage
This is a full-time position located in Bangalore, India. The educational requirement is a Master's degree or equivalent. The job was posted on April 29, 2025, at 4:41:12 AM, with an unposting date of October 26, 2025, at 12:41:12 AM. The primary skill set required for this job is Operations, and the job category is Full Time.
Posted 2 days ago
3.0 - 7.0 years
0 Lacs
maharashtra
On-site
You will be joining as an expert application support engineer to expand the global support capacity for the data and analytics platform used by the research and trading teams. Your role will involve collaborating closely with data engineers, data scientists, external data vendors, and trading teams to promptly resolve issues related to data quality, data integration, model pipelines, and analytics applications.
Your expertise should include proficiency in Python and SQL, familiarity with data engineering, and hands-on experience with AWS data and analytics services or similar offerings from other cloud vendors. You should also possess strong problem-solving skills, excellent interpersonal abilities, and the capacity to organize and prioritize your work efficiently.
Your primary responsibilities will include managing incidents and users on the data and analytics platform, developing and maintaining the Data Quality framework (including anomaly detection), implementing Python and SQL hotfixes, collaborating with data engineers on complex issues, and deploying diagnostic tools and automating operational processes.
In this role, you will collaborate closely with data scientists, data engineers, and platform engineers in a dynamic commercial environment, and your support will be vital in helping research analysts and traders resolve issues effectively. Key competencies for this role include exceptional problem-solving skills and the ability to communicate clearly with a diverse range of customers across business units and technology teams. You will report to the Head of DSE Engineering in Mumbai, who in turn reports to the Global Head of Cloud and Data Engineering.
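Since the role centres on a Data Quality framework with anomaly detection, a minimal sketch of one common approach is shown below: flagging values that deviate sharply from a column's mean. The function name, data, and threshold are illustrative assumptions, not part of any actual platform.

import pandas as pd

def flag_outliers(df: pd.DataFrame, column: str, z_threshold: float) -> pd.DataFrame:
    # Compute z-scores against the column's mean and population standard deviation.
    series = df[column]
    z_scores = (series - series.mean()) / series.std(ddof=0)
    # Keep only rows whose values deviate more than z_threshold standard deviations.
    return df[z_scores.abs() > z_threshold]

# Example: daily feed volumes where the last value is an obvious anomaly.
feed = pd.DataFrame({"volume": [100, 102, 98, 101, 99, 500]})
print(flag_outliers(feed, "volume", z_threshold=2.0))  # flags the 500 row

A real framework would typically add per-source thresholds, scheduled runs, and alerting on top of checks like this one.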
Posted 2 days ago
The data engineering job market in India is flourishing, with high demand for professionals who can manage and optimize large volumes of data. Data engineering roles are critical in helping organizations make informed decisions and derive valuable insights from their data.
The average salary range for data engineering professionals in India varies based on experience levels. Entry-level positions typically start around ₹4-6 lakhs per annum, while experienced data engineers can earn upwards of ₹15-20 lakhs per annum.
In the field of data engineering, a typical career path may progress as follows:
- Junior Data Engineer
- Data Engineer
- Senior Data Engineer
- Tech Lead
In addition to data engineering expertise, professionals in this field are often expected to have skills in:
- Data modeling
- ETL processes (a minimal sketch follows this list)
- Database management
- Programming languages like Python, Java, or Scala
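As a hedged illustration of what a basic ETL step can look like in practice, the following Python sketch extracts rows from a CSV file, transforms them with Pandas, and loads the result into a SQLite table. The file name, column names, table name, and conversion rate are illustrative assumptions, not references to any specific system.

import sqlite3
import pandas as pd

# Extract: read raw records from a CSV file (file name is illustrative).
raw = pd.read_csv("sales_raw.csv")

# Transform: normalise column names, drop incomplete rows, add a derived column.
raw.columns = [c.strip().lower() for c in raw.columns]
clean = raw.dropna(subset=["order_id", "amount"]).copy()
clean["amount_inr"] = clean["amount"] * 83.0  # assumed USD-to-INR rate, for illustration

# Load: write the cleaned records into a SQLite table.
with sqlite3.connect("warehouse.db") as conn:
    clean.to_sql("sales_clean", conn, if_exists="replace", index=False)

Production pipelines layer scheduling, incremental loads, and data quality checks on top of this extract-transform-load core, but the shape of the work is the same.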
As you explore data engineering jobs in India, remember to hone your skills, stay updated on the latest technologies, and prepare thoroughly for interviews. With the right mindset and preparation, you can confidently apply for and excel in data engineering roles in the country. Good luck!