1.0 - 5.0 years
2 - 5 Lacs
Bengaluru
Work from Office
PLC/DCS/Drives: Emerson Automation Solutions-IP (PAC Systems, GMR/TMR Systems, 90-30 Series, Micro/Nano PLC), DeltaV, Siemens, and ABB 800xA with PM864 controllers and Control Builder programming. Adept at managing any PLC/DCS.
Control Software: Proficy suite, iFIX, GMR – GEIP, ABB 800xA Control Builder, Siemens TIA Portal, Aveva Plant SCADA (with ASM standards), Schneider Unity Pro, IA Ignition SCADA, PTC VT SCADA.
Communication Protocols: Modbus, Profinet, IEC 61850, DNP, HART, OPC, etc.
Field Instruments and PLC Hardware: Proficient in the analysis of field instruments such as pressure/temperature/flow/level switches and transmitters, fire and gas detection systems, and limit switches; installation support and testing; PLC IO wiring support and testing; configuring field instruments for the PLC IOs, developing logic for them, and testing; FAT/SAT support; review and rectification of customer queries during FAT/SAT.
Documentation: Proficient in SCDs, C&Es, instrument and cable lists, IO tag databases, and documentation for control systems, instruments, and valves.
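As a rough illustration of the Modbus item in the protocol list above, here is a minimal polling sketch. It assumes the pymodbus 3.x library, a hypothetical PLC IP address, and an arbitrary holding-register address and slave ID; none of these values come from the posting itself.

```python
# Minimal Modbus TCP polling sketch (assumes pymodbus 3.x; all addresses are hypothetical).
from pymodbus.client import ModbusTcpClient

PLC_HOST = "192.168.1.10"   # hypothetical PLC address
PRESSURE_REG = 100          # hypothetical holding register for a pressure transmitter

client = ModbusTcpClient(PLC_HOST, port=502)
if client.connect():
    # Read two holding registers starting at PRESSURE_REG from slave unit 1.
    result = client.read_holding_registers(address=PRESSURE_REG, count=2, slave=1)
    if not result.isError():
        print(f"Raw register values: {result.registers}")
    client.close()
```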
Posted 6 days ago
5.0 - 9.0 years
15 - 22 Lacs
Chennai
Work from Office
We are looking for a skilled and motivated Senior Data Engineer to join our data integration and analytics team. The ideal candidate will have hands-on experience with Informatica IICS, AWS Redshift, Python scripting, and Unix/Linux systems. You will be responsible for building and maintaining scalable ETL pipelines to support business intelligence and analytics needs. We value individuals who are passionate about continuous learning, problem-solving, and enabling data-driven decision-making.
Years of Experience: Minimum 5 years, with 3+ years of hands-on experience in Informatica IICS (Cloud Data Integration, Application Integration).
Primary Skills: Informatica IICS, AWS (especially Redshift)
Secondary Skills: Python, Unix/Linux
Role Description: As a Senior Data Engineer, you will lead the design, development, and management of scalable data platforms and pipelines. This role demands a strong technical foundation in data architecture, big data technologies, and database systems (both SQL and NoSQL), along with the ability to collaborate across functional teams to deliver robust, secure, and high-performing data solutions.
Key Responsibilities:
Design, develop, and maintain end-to-end data pipelines and infrastructure.
Translate business and functional requirements into scalable, well-documented technical solutions.
Build and manage data flows across structured and unstructured data sources, including streaming and batch integrations.
Ensure data integrity and quality through automated validations, unit testing, and comprehensive documentation.
Optimize data processing performance and manage large datasets efficiently.
Collaborate closely with stakeholders and project teams to align data solutions with business objectives.
Implement and maintain security and privacy protocols to ensure safe data handling.
Set up development environments and configure tools and services.
Mentor junior data engineers and contribute to continuous improvement and automation initiatives.
Coordinate with QA and UAT teams during testing and release phases.
Role Requirements:
Strong proficiency in SQL, including procedures, performance tuning, and analytical functions.
Solid understanding of data warehousing concepts, including dimensional modeling and slowly changing dimensions (SCDs).
Hands-on experience with scripting languages (Shell/PowerShell).
Proficiency in data profiling, validation, and testing practices.
Excellent problem-solving, communication (written and verbal), and documentation skills.
Exposure to Agile methodologies and CI/CD practices.
Additional Requirements:
Overall 5+ years of experience, with 3+ years of hands-on experience in Informatica IICS (Cloud Data Integration, Application Integration).
Strong proficiency in AWS Redshift and writing complex SQL queries.
Solid programming experience in Python for scripting, data wrangling, and automation.
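As a loose illustration of the Python-plus-Redshift scripting this role describes, the sketch below stages a file from S3 into Redshift with a COPY command and runs a simple validation query. The cluster endpoint, table names, bucket, and IAM role are placeholders, and psycopg2 is only one of several client libraries that could be used.

```python
# Minimal Redshift load-and-validate sketch (placeholder credentials and object names).
import psycopg2

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",  # hypothetical endpoint
    port=5439, dbname="analytics", user="etl_user", password="***",
)
conn.autocommit = True
with conn.cursor() as cur:
    # Stage raw data from S3 into a landing table (COPY is Redshift's bulk-load path).
    cur.execute("""
        COPY staging.orders
        FROM 's3://example-bucket/orders/2024-06-01/'
        IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
        FORMAT AS PARQUET;
    """)
    # Basic validation: confirm the load produced rows before downstream transforms run.
    cur.execute("SELECT COUNT(*) FROM staging.orders;")
    row_count, = cur.fetchone()
    print(f"Loaded {row_count} rows into staging.orders")
conn.close()
```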
Posted 3 weeks ago
1.0 - 4.0 years
3 - 7 Lacs
Bengaluru
Work from Office
Job Title: Reconciliation Production Analyst, NCT
Location: Bangalore, India
Role Description: The role requires the individual to manage cash publishing and reconciliation (for cash, custody, and intersystem positions) for a set of portfolios. The cash publishing process ensures correct cash projections reach the front office for their investment decisions. Cash publishing is a sensitive process requiring transaction-based research on a list of portfolios where errors will most likely result in financial/operational risk, hence it is important to fill the position with adequate time to train the person and avoid any impact on the business. Good understanding of reconciliation across various product classes such as equity, bonds, etc., including end-to-end investigation of discrepancies between us and external parties.
What we'll offer you:
100% reimbursement under childcare assistance benefit (gender neutral)
Sponsorship for industry-relevant certifications and education
Accident and term life insurance
Your key responsibilities:
Conduct cash publishing and reconciliation on the breaks
Timely follow-ups on open breaks
Securities & OTC reconciliation
Good team player
Preparing daily MIS
Your skills and experience:
Reconciliations on cash and positions
Hands-on experience with TLM, Aladdin, SCD
Should be able to understand the accounting vs. investment book of records
Experience/Qualifications: Bachelor's degree with 1-4 years of experience
How we'll support you
About us and our teams: Please visit our company website for further information: https://www.db.com/company/company.htm
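As a simplified picture of what a cash/position break looks like in this kind of reconciliation work, the sketch below compares an internal book of records against a custodian feed using pandas. The column names, instruments, and quantities are illustrative assumptions; the team's actual tooling, per the posting, is TLM/Aladdin based.

```python
# Toy position reconciliation: internal book of records vs. custodian feed (illustrative data).
import pandas as pd

internal = pd.DataFrame({
    "portfolio": ["P1", "P1", "P2"],
    "isin": ["US0378331005", "DE0007164600", "US0378331005"],
    "quantity": [1000, 500, 250],
})
custodian = pd.DataFrame({
    "portfolio": ["P1", "P1", "P2"],
    "isin": ["US0378331005", "DE0007164600", "US0378331005"],
    "quantity": [1000, 480, 250],
})

# Outer-join on portfolio + instrument, then flag any quantity difference as a break.
recon = internal.merge(custodian, on=["portfolio", "isin"],
                       how="outer", suffixes=("_internal", "_custodian"))
recon["diff"] = recon["quantity_internal"].fillna(0) - recon["quantity_custodian"].fillna(0)
breaks = recon[recon["diff"] != 0]
print(breaks)   # these rows would be assigned out for investigation
```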
Posted 3 weeks ago
0.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose - the relentless pursuit of a world that works better for people - we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.
Inviting applications for the role of Lead Consultant - Snowflake Data Engineer (Python + Cloud)! In this role, the Snowflake Data Engineer is responsible for providing technical direction and leading a group of one or more developers to address a goal.
Job Description:
Experience in the IT industry
Working experience building productionized data ingestion and processing pipelines in Snowflake
Strong understanding of Snowflake architecture
Well versed in data warehousing concepts
Expertise and excellent understanding of Snowflake features and integration of Snowflake with other data processing tools
Able to create data pipelines for ETL/ELT
Excellent presentation and communication skills, both written and verbal
Ability to problem-solve and architect in an environment with unclear requirements
Able to create high-level and low-level design documents based on requirements
Hands-on experience in configuring, troubleshooting, testing, and managing data platforms, on premises or in the cloud
Awareness of data visualisation tools and methodologies
Work independently on business problems and generate meaningful insights
Good to have some experience/knowledge of Snowpark, Streamlit, or GenAI, but not mandatory
Should have experience implementing Snowflake best practices
Snowflake SnowPro Core Certification is a must
Roles and Responsibilities:
Requirement gathering, creating design documents, providing solutions to customers, working with offshore teams, etc.
Writing SQL queries against Snowflake and developing scripts to extract, load, and transform data.
Hands-on experience with Snowflake utilities such as SnowSQL, bulk copy, Snowpipe, Tasks, Streams, Time Travel, Cloning, Optimizer, Metadata Manager, data sharing, stored procedures and UDFs, and Snowsight.
Experience with the Snowflake cloud data warehouse and AWS S3 buckets or Azure Blob Storage containers for integrating data from multiple source systems.
Should have some experience with AWS services (S3, Glue, Lambda) or Azure services (Blob Storage, ADLS Gen2, ADF).
Should have good experience in Python/PySpark integration with Snowflake and cloud (AWS/Azure), with the ability to leverage cloud services for data processing and storage.
Proficiency in the Python programming language, including knowledge of data types, variables, functions, loops, conditionals, and other Python-specific concepts.
Knowledge of ETL (Extract, Transform, Load) processes and tools, and ability to design and develop efficient ETL jobs using Python and PySpark.
Should have some experience with Snowflake RBAC and data security.
Should have good experience implementing CDC or SCD Type 2.
Should have good experience implementing Snowflake best practices.
In-depth understanding of data warehouse and ETL concepts and data modelling.
Experience in requirement gathering, analysis, designing, development, and deployment.
Should have experience building data ingestion pipelines.
Optimize and tune data pipelines for performance and scalability.
Able to communicate with clients and lead a team.
Proficiency in working with Airflow or other workflow management tools for scheduling and managing ETL jobs.
Good to have experience in deployment using CI/CD tools and experience with repositories like Azure Repos, GitHub, etc.
Qualifications we seek in you!
Minimum qualifications: B.E./Master's in Computer Science, Information Technology, or Computer Engineering, or any equivalent degree, with good IT experience relevant to the Snowflake Data Engineer role.
Skill Matrix: Snowflake, Python/PySpark, AWS/Azure, ETL concepts, and Data Warehousing concepts
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. For more information, visit our website. Follow us on Twitter, Facebook, LinkedIn, and YouTube.
Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
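As a rough sketch of the Python-to-Snowflake integration this Genpact role calls for, the snippet below uses the snowflake-connector-python package to stage a file and load it with COPY INTO. The account locator, warehouse, and table names are placeholders; in a production pipeline this kind of load would more often be handled by Snowpipe or an orchestrated Task.

```python
# Minimal Snowflake load sketch using snowflake-connector-python (placeholder identifiers).
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345.us-east-1",   # hypothetical account locator
    user="ETL_USER",
    password="***",
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="STAGING",
)
try:
    cur = conn.cursor()
    # Push a local file to the table's internal stage, then bulk-load it.
    cur.execute("PUT file://./orders_2024_06_01.csv @%ORDERS_RAW")
    cur.execute("""
        COPY INTO ORDERS_RAW
        FROM @%ORDERS_RAW
        FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
    """)
    cur.execute("SELECT COUNT(*) FROM ORDERS_RAW")
    print("Rows loaded:", cur.fetchone()[0])
finally:
    conn.close()
```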
Posted 3 weeks ago
0.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Inviting applications for the role of Lead Consultant - Data Engineer!
Responsibilities:
Design, document, and implement data pipelines to feed data models for subsequent consumption in Snowflake, using dbt and Airflow.
Ensure correctness and completeness of the data being transformed via engineering pipelines for end consumption in analytical dashboards.
Actively monitor and triage technical challenges in critical situations that require immediate resolution.
Evaluate viable technical solutions and share MVPs or PoCs in support of the research.
Develop relationships with external stakeholders to maintain awareness of data and security issues and trends.
Review work from other tech team members and provide feedback for growth.
Implement data performance and data security policies that align with governance objectives and regulatory requirements.
Effectively mentor and develop your team members.
Profile:
You have experience in data warehousing, data modeling, and building data engineering pipelines.
You are well versed in data engineering methods, such as ETL and ELT techniques, through scripting and/or tooling.
You are good at analyzing performance bottlenecks and providing enhancement recommendations; you have a passion for customer service and a desire to learn and grow as a professional and a technologist.
Strong analytical skills related to working with structured, semi-structured, and unstructured datasets.
Collaborating with product owners to identify requirements, define desired outcomes, and deliver trusted results.
Building processes supporting data transformation, data structures, metadata, dependency, and workload management.
In this role, SQL is heavily used; an ideal candidate must have hands-on experience with SQL database design, plus Python.
Demonstrably deep understanding of SQL (level: advanced) and analytical data warehouses (Snowflake preferred).
Demonstrated ability to write new code that is well documented and stored in a version control system (we use GitHub and Bitbucket).
Extremely talented in applying SCD, CDC, and DQ/DV frameworks.
Familiar with Jira and Confluence.
Must have exposure to technologies such as dbt, Apache Airflow, and Snowflake.
Desire to continually keep up with advancements in data engineering practices.
Qualifications we seek in you!
Minimum qualifications / Essential education: Bachelor's degree or equivalent combination of education and experience; a Bachelor's degree in information science, data management, computer science, or a related field is preferred.
Essential experience and job requirements:
IT experience with a major focus on data warehouse/database-related projects
Must have exposure to technologies such as dbt, Apache Airflow, and Snowflake
Experience with other data platforms: Oracle, SQL Server, MDM, etc.
Expertise in writing SQL and database objects (stored procedures, functions, and views); hands-on experience in ETL/ELT, data security, SQL performance optimization, and job orchestration tools and technologies, e.g., dbt, APIs, Apache Airflow, etc.
Experience in data modeling and relational database design
Well versed in applying SCD, CDC, and DQ/DV frameworks
Demonstrated ability to write new code that is well documented and stored in a version control system (we use GitHub and Bitbucket)
Good to have experience with cloud platforms such as AWS, Azure, GCP, and Snowflake
Good to have strong programming/scripting skills (Python, PowerShell, etc.)
Experience working with agile methodologies (Scrum, Kanban) and Meta Scrum with cross-functional teams (Product Owners, Scrum Masters, Architects, and data SMEs)
Excellent written and oral communication and presentation skills to present architecture, features, and solution recommendations to global functional product portfolio technical leaders (Finance, HR, Marketing, Legal, Risk, IT), product owners, and functional area teams across levels, as well as to Global Data Product Portfolio Management teams (Enterprise Data Model, Data Catalog, Master Data Management)
Preferred Qualifications: Knowledge of AWS cloud and Python is a plus.
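As one hedged way to picture the dbt-plus-Airflow orchestration this posting references, here is a minimal DAG that runs a dbt build on a schedule. It assumes Airflow 2.4 or newer with dbt invoked through its CLI via a BashOperator; the project path, target, and schedule are placeholders rather than anything prescribed by the role.

```python
# Minimal Airflow DAG that orchestrates a dbt run (hypothetical paths and schedule; Airflow 2.4+).
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="dbt_snowflake_daily",
    start_date=datetime(2024, 1, 1),
    schedule="0 6 * * *",   # build the models every morning
    catchup=False,
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="cd /opt/dbt/analytics_project && dbt run --target prod",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="cd /opt/dbt/analytics_project && dbt test --target prod",
    )
    dbt_run >> dbt_test   # only test models after they have been built
```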
Posted 1 month ago
7.0 - 10.0 years
10 - 16 Lacs
Bengaluru
Hybrid
Job Description: The candidate should be a result-oriented, self-starting, and self-motivated professional who can manage multiple time-sensitive assignments, with expert data modeling skills including conceptual, logical, and physical model design. The candidate should be able to participate in the development of Business Intelligence solutions to meet the needs of the commercial organizations (Finance, CRM, Sales, Sales Operations, Marketing).
Evaluate business requirements, conduct POCs, and proactively call out data anomalies within the source system.
Perform end-to-end data validation for a solution, ensuring quality data delivery.
Develop and performance-tune complex SQL (PL/SQL and/or T-SQL) queries and stored procedures.
Should be well versed with project tracking tools like Jira and GitHub.
Able to automate batch processes using a scripting language such as Perl, Unix shell, or Python.
Experience with source systems like Salesforce (SOQL), NetSuite, or any other CRM is a plus.
Candidate should be able to create strategic designs and map business requirements to system/technical requirements.
Candidate should be able to communicate, translate, and simplify business requirements to ensure buy-in from all stakeholders.
Requirements:
7-10 years of experience with database development using packages, stored procedures, and triggers in SQL and PL/SQL
7-10 years of experience in dimensional modeling, Star and Snowflake schema design, and maintaining dimensions using Slowly Changing Dimensions (SCD)
7+ years of experience in SQL performance tuning and optimization techniques
Ability to work on ad-hoc requests and on simultaneous projects or tasks
Effective and positive communication and teamworking skills
Posted 1 month ago
1.0 - 2.0 years
4 - 9 Lacs
Pune, Chennai, Bengaluru
Work from Office
Job Title: Data Engineer
Experience: 12 to 20 months
Work Mode: Work from Office
Locations: Bangalore, Chennai, Kolkata, Pune, Gurgaon
Role Overview: We are seeking a driven and hands-on Data Engineer with 12 to 20 months of experience to support modern data pipeline development and transformation initiatives. The role requires solid technical skills in SQL, Python, and PySpark, with exposure to cloud platforms such as Azure Databricks or GCP. As a Data Engineer at Tredence, you will work on ingesting, processing, and modeling large-scale data, implementing scalable data pipelines, and applying foundational data warehousing principles. This role also includes direct collaboration with cross-functional teams and client stakeholders.
Key Responsibilities:
Develop robust and scalable data pipelines using PySpark in cloud platforms like Azure Databricks or GCP Dataflow.
Write optimized SQL queries for data transformation, analysis, and validation.
Implement and support data warehouse models and principles, including fact and dimension modeling, Star and Snowflake schemas, Slowly Changing Dimensions (SCD), Change Data Capture (CDC), and Medallion Architecture.
Monitor, troubleshoot, and improve pipeline performance and data quality.
Work with teams across analytics, business, and IT functions to deliver data-driven solutions.
Communicate technical updates and contribute to sprint-level delivery.
Mandatory Skills:
Strong hands-on experience with SQL and Python
Working knowledge of PySpark for data transformation
Exposure to at least one cloud platform: Azure Databricks or GCP
Good understanding of data engineering and warehousing fundamentals
Excellent debugging and problem-solving skills
Strong written and verbal communication skills
Preferred Skills:
Experience working with Databricks Community Edition or the enterprise version
Familiarity with data orchestration tools like Airflow or Azure Data Factory
Exposure to CI/CD processes and version control (e.g., Git)
Understanding of Agile/Scrum methodology and collaborative development
Basic knowledge of handling structured and semi-structured data (JSON, Parquet, etc.)
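To give a feel for the PySpark pipeline work and Medallion-style layering named above, here is a small bronze-to-silver sketch. The paths, column names, and use of Delta format are illustrative, Databricks-style assumptions, not Tredence's actual project structure.

```python
# Minimal bronze -> silver PySpark sketch (hypothetical paths and schema; Delta assumed available).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze_to_silver").getOrCreate()

# Bronze: raw events landed as JSON, kept as-is.
bronze = spark.read.json("/mnt/lake/bronze/orders/")

# Silver: typed, de-duplicated, and lightly conformed records.
silver = (
    bronze
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("amount", F.col("amount").cast("double"))
    .dropDuplicates(["order_id"])
    .filter(F.col("order_id").isNotNull())
)

silver.write.format("delta").mode("overwrite").save("/mnt/lake/silver/orders/")
```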
Posted 1 month ago
8.0 - 11.0 years
35 - 37 Lacs
Kolkata, Ahmedabad, Bengaluru
Work from Office
Dear Candidate,
We are hiring a Data Warehouse Architect to design scalable, high-performance data warehouse solutions for analytics and reporting. Perfect for engineers experienced with large-scale data systems.
Key Responsibilities:
Design and maintain enterprise data warehouse architecture
Optimize ETL/ELT pipelines and data modeling (star/snowflake schemas)
Ensure data quality, security, and performance
Work with BI teams to support analytics and reporting needs
Required Skills & Qualifications:
Proficiency with SQL and data warehousing tools (Snowflake, Redshift, BigQuery, etc.)
Experience with ETL frameworks (Informatica, Apache NiFi, dbt, etc.)
Strong understanding of dimensional modeling and OLAP
Bonus: Knowledge of cloud data platforms and orchestration tools (Airflow)
Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you.
Kandi Srinivasa
Delivery Manager
Integra Technologies
Posted 1 month ago
7.0 - 12.0 years
0 - 2 Lacs
Pune, Ahmedabad, Gurugram
Work from Office
Urgent Hiring: Azure Data Engineer (Strong PySpark + SCD II/III Expert)
Work Mode: Remote
The client interview will focus on PySpark and SCD II/III.
Key Must-Haves:
Very strong hands-on PySpark coding
Practical experience implementing Slowly Changing Dimensions (SCD) Type II and Type III
Strong expertise in Azure Data Engineering (ADF, Databricks, Data Lake, Synapse)
Proficiency in SQL and Python for scripting and transformation
Strong understanding of data warehousing concepts and ETL pipelines
Good to Have:
Experience with Microsoft Fabric
Familiarity with Power BI
Domain knowledge in Finance, Procurement, and Human Capital
Note: This role is highly technical. The client will focus interviews on PySpark coding and SCD Type II/III implementation. Only share profiles that are hands-on and experienced in these areas.
Share strong, relevant profiles to: b.simrana@ekloudservices.com
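Since the interview centres on PySpark and SCD Type II, here is one minimal, hedged sketch of a Type II upsert using Delta Lake's MERGE on a Databricks-style setup. The table paths, business key, tracked attribute, and flag/date columns are assumptions for illustration only; a Type III variant would instead keep a "previous value" column on the same row rather than inserting a new one.

```python
# Minimal SCD Type II sketch with Delta Lake MERGE (hypothetical table, key, and columns).
from pyspark.sql import SparkSession, functions as F
from delta.tables import DeltaTable

spark = SparkSession.builder.appName("scd2_demo").getOrCreate()

updates = spark.read.parquet("/mnt/lake/staging/customers/")      # incoming snapshot
dim = DeltaTable.forPath(spark, "/mnt/lake/gold/dim_customer/")   # existing dimension

# Step 1: close out current rows whose tracked attribute changed.
(dim.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id AND t.is_current = true")
    .whenMatchedUpdate(
        condition="t.address <> s.address",
        set={"is_current": "false", "end_date": "current_date()"},
    )
    .execute())

# Step 2: insert a fresh 'current' row for every incoming record without an open row
# (brand-new customers plus those whose previous row was just closed).
new_rows = (updates
            .withColumn("start_date", F.current_date())
            .withColumn("end_date", F.lit(None).cast("date"))
            .withColumn("is_current", F.lit(True)))
(dim.alias("t")
    .merge(new_rows.alias("s"), "t.customer_id = s.customer_id AND t.is_current = true")
    .whenNotMatchedInsertAll()
    .execute())
```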
Posted 1 month ago
1.0 - 4.0 years
5 - 9 Lacs
Bengaluru
Work from Office
About The Role:
Job Title: Operations Analyst, NCT
Location: Bangalore, India
Role Description: The Analyst/Sr. Analyst will be responsible for completing day-to-day activities per standards and ensuring accurate and timely delivery of assigned production duties. The candidate needs to ensure adherence to all cut-off times and the quality of processing as maintained in SLAs, and should ensure that all queries and first-level escalations related to routine activities are responded to within the pre-specified time frames. Should take responsibility and act as backup for peers in their absence, and share best practices with the team.
What we'll offer you: As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
Best-in-class leave policy
Gender-neutral parental leave
100% reimbursement under childcare assistance benefit (gender neutral)
Sponsorship for industry-relevant certifications and education
Employee Assistance Program for you and your family members
Comprehensive hospitalization insurance for you and your dependents
Accident and term life insurance
Complimentary health screening for those 35 yrs. and above
Your key responsibilities:
Manage daily reconciliation of securities/cash internal book of records vs. custodian books.
Basic knowledge of daily uploads of feeds and their maintenance.
Investigating margin differences and tax-related differences.
Research and booking of dummy forexes.
Manage cash reconciliation between Aladdin and the custodian feeds at trade and currency level.
Identify the cause and assign the cash/position break to the correct team for further investigation and resolution.
Perform primary investigation of the cash/position breaks on Aladdin; escalate all issues properly and in time, to the appropriate level, to avoid any adverse impact on the business.
Responsible for understanding clients' needs from a technical and operational perspective.
Ensure support for managing internal projects/initiatives, and timely response to all front office/internal queries.
Ensure strict adherence to all internal and external process guidelines, including compliance and legal.
Ensure the candidate has assisted in creating proper backups through adequate cross-training within the department.
Your skills and experience:
Experience in handling cash and position reconciliation (preferred).
Knowledge of the trade life cycle.
Preferred knowledge of financial products like debt, equity, derivatives, etc.
Functional skills:
Working knowledge of SSR/TLM/SCD/Aladdin reconciliation tools and Cognos reporting.
Basic knowledge of the reconciliation process and understanding of various (ledger and statement) feeds/SWIFTs.
Experience with bank custody and FOBO reconciliation.
Knowledge of the trade life cycle of various financial products will be an advantage.
Working knowledge of the SSR/TLM reconciliation tool.
Attention to detail.
Skills:
Needs to be a self-starter with significant ability to undertake initiatives.
Strong interpersonal and good negotiation skills are required.
Follow-through skills, effective communication skills, fluency in Microsoft Office, ability to confidently handle internal clients, and a futuristic and innovative approach will be expected.
Ability and willingness to work night shifts is a must.
Education/Certification/Qualification: Graduates with good academic records. Any certifications in securities, such as NCFM modules, will be good but not compulsory.
How we'll support you:
Training and development to help you excel in your career.
Coaching and support from experts in your team.
A culture of continuous learning to aid progression.
A range of flexible benefits that you can tailor to suit your needs.
About us and our teams: Please visit our company website for further information: https://www.db.com/company/company.htm
Posted 2 months ago
5.0 - 10.0 years
12 - 22 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Job Title:
=======
Senior MS BI Developer
Onsite Location:
=============
Dubai, UAE; Doha, Qatar; Riyadh, Saudi Arabia
Onsite Monthly Salary:
==============
10k AED - 15k AED, full tax-free salary, depending on experience
Gulf work permit will be sponsored by our client
Project duration:
=============
2 years, extendable
Desired Experience Level Needed:
===========================
5 - 10 years
Qualification:
==========
B.Tech / M.Tech / MCA / M.Sc or equivalent
Experience Needed:
===============
Overall: 5 or more years of total IT experience
Solid 3+ years of experience as an MS BI Developer with the Microsoft stack / MS DWH Engineer
Job Responsibilities:
================
- Design and develop DWH data flows
- Able to build SCD-1 / SCD-2 / SCD-3 dimensions
- Build cubes
- Maintain SSAS / DWH data
- Design Microsoft DWH & its ETL packages
- Able to code T-SQL
- Able to create orchestrations
- Able to design batch jobs / orchestration runs
- Familiarity with data models
- Able to develop MDM (Master Data Management)
Experience:
================
- Experience as a DWH developer with Microsoft DWH data flows and cubes
- Exposure and experience with Azure services, including Azure Data Factory
- Sound knowledge of BI practices and visualization tools such as Power BI / SSRS / QlikView
- Collecting/gathering data from multiple source systems
- Creating automated data pipelines
- Configuring Azure resources and services
Skills:
================
- Microsoft SSIS / SSAS / SSRS
- Informatica
- Azure Data Factory
- Spark
- SQL
Nice to have:
==========
- Any onsite experience is an added advantage, but not mandatory
- Microsoft certifications are an added advantage
Business Vertical:
==============
- Banking / Investment Banking
- Capital Markets
- Securities / Stock Market Trading
- Bonds / Forex Trading
- Credit Risk
- Payment Cards Industry (VISA / MasterCard / Amex)
Job Code:
======
MSBI_DEVP_0525
No. of positions:
============
05
Email:
=====
spectrumconsulting1977@gmail.com
If you are interested, please email your CV as an attachment with the job reference code [ MSBI_DEVP_0525 ] as the subject.
Posted 2 months ago
9 - 11 years
37 - 40 Lacs
Ahmedabad, Bengaluru, Mumbai (All Areas)
Work from Office
Dear Candidate,
We are hiring a Data Engineer to build scalable data pipelines and infrastructure to power analytics and machine learning. Ideal for those passionate about data integrity, automation, and performance.
Key Responsibilities:
Design ETL/ELT pipelines using tools like Airflow or dbt
Build data lakes and warehouses (BigQuery, Redshift, Snowflake)
Automate data quality checks and monitoring
Collaborate with analysts, data scientists, and backend teams
Optimize data flows for performance and cost
Required Skills & Qualifications:
Proficiency in SQL, Python, and distributed systems (e.g., Spark)
Experience with cloud data platforms (AWS, GCP, or Azure)
Strong understanding of data modeling and warehousing principles
Bonus: Experience with Kafka, Parquet/Avro, or real-time streaming
Soft Skills:
Strong troubleshooting and problem-solving skills
Ability to work independently and in a team
Excellent communication and documentation skills
Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you.
Kandi Srinivasa
Delivery Manager
Integra Technologies
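As a loose illustration of the "automate data quality checks" responsibility in this posting, here is a small PySpark check that could run after a pipeline load. The table path, columns, and rules are illustrative assumptions; in practice a dedicated framework such as dbt tests or Great Expectations often fills this role.

```python
# Minimal post-load data quality checks (hypothetical table path, columns, and rules).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq_checks").getOrCreate()
df = spark.read.parquet("/data/warehouse/fact_sales/")

failures = []

# Check 1: the load should not be empty.
if df.count() == 0:
    failures.append("fact_sales is empty")

# Check 2: the primary key must be non-null and unique.
if df.filter(F.col("sale_id").isNull()).count() > 0:
    failures.append("null sale_id values found")
if df.count() != df.select("sale_id").distinct().count():
    failures.append("duplicate sale_id values found")

# Check 3: amounts should be non-negative.
if df.filter(F.col("amount") < 0).count() > 0:
    failures.append("negative amounts found")

if failures:
    raise ValueError("Data quality checks failed: " + "; ".join(failures))
print("All data quality checks passed")
```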
Posted 2 months ago
5 - 8 years
14 - 18 Lacs
Hyderabad
Work from Office
Role & responsibilities:
Plan, develop, and coordinate test activities, including the creation and execution of test plans and test cases.
Perform debugging, defect tracking, test analysis, and documentation.
Understand the business functionality and application technology under test.
Collaborate with on-site teams and other stream areas during release cycles.
Utilize ESG QA tools, methodologies, and processes.
Ensure low bug rates and high code quality during releases.
Manage build deployments to QA and flag risks/issues proactively.
Skills Required:
Experience with SQL and ETL testing, schema validation, and SCD types.
Strong knowledge of data warehouse/BI testing and cloud-based services (Azure).
Expertise in writing complex SQL queries and validating data during migration.
Proficient in UFT, TFS, Microsoft tools, and peripheral technologies (SAP, PeopleSoft, Aderant).
Strong communication, estimation, and project delivery skills.
Team leadership, remote collaboration, and quality focus.
Interested candidates can share their resume at sarvani.j@ifinglobalgroup.com
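As a rough sketch of the ETL-testing work this QA role describes (row-count and schema validation between source and target during migration), the snippet below compares two tables through generic DB-API connections. The connection objects and table names are placeholders, and the information-schema query style may differ slightly by database and driver.

```python
# Illustrative source-vs-target validation for ETL testing (placeholder connections and tables).

def row_count(conn, table):
    """Return the number of rows in a table via a DB-API connection."""
    cur = conn.cursor()
    cur.execute(f"SELECT COUNT(*) FROM {table}")
    return cur.fetchone()[0]

def column_names(conn, table):
    """Return the ordered column names of a table from the information schema."""
    cur = conn.cursor()
    # NOTE: direct interpolation is for illustration only; real tests should whitelist
    # table names or use the driver's parameter style.
    cur.execute(
        f"SELECT column_name FROM information_schema.columns "
        f"WHERE table_name = '{table}' ORDER BY ordinal_position"
    )
    return [row[0] for row in cur.fetchall()]

def validate_migration(src_conn, tgt_conn, src_table, tgt_table):
    """Basic ETL test: schema shape and row counts must match after migration."""
    assert column_names(src_conn, src_table) == column_names(tgt_conn, tgt_table), \
        "schema mismatch between source and target"
    src_rows, tgt_rows = row_count(src_conn, src_table), row_count(tgt_conn, tgt_table)
    assert src_rows == tgt_rows, f"row count mismatch: {src_rows} vs {tgt_rows}"
    print(f"{tgt_table}: schema and row count validated ({tgt_rows} rows)")
```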
Posted 2 months ago