5.0 years
0 Lacs
Andhra Pradesh, India
On-site
JD

Key Responsibilities
Lead the end-to-end migration of legacy data warehouses (e.g., Teradata, Oracle, SQL Server, Netezza, Redshift) to Snowflake.
Assess current data architecture and define migration strategy, roadmap, and timelines.
Develop ELT/ETL pipelines using tools such as dbt, Apache Airflow, Matillion, Talend, Informatica, etc.
Optimize Snowflake configurations, including clustering, caching, and resource management for performance and cost efficiency.
Implement security best practices, including role-based access, masking, and data encryption.
Collaborate with data engineering, analytics, and business teams to ensure accurate and efficient data transfer.
Create and maintain technical documentation, including migration plans, test scripts, and rollback procedures.
Support validation, testing, and go-live activities.

Required Skills & Experience
5+ years in data engineering or data platform roles, with at least 2+ years in Snowflake migration projects.
Hands-on experience in migrating large datasets from legacy data warehouses to Snowflake.
Proficient in SQL, Python, and Snowflake scripting (SnowSQL, stored procedures, UDFs).
Experience with data migration tools and frameworks (e.g., AWS SCT, Azure Data Factory, Fivetran, etc.).
Strong knowledge of cloud platforms (AWS, Azure, or GCP).
Familiarity with DevOps practices, CI/CD for data pipelines, and version control (Git).
Excellent problem-solving and communication skills.

Preferred Qualifications
Snowflake certification(s): SnowPro Core or Advanced Architect.
Experience with real-time data ingestion (e.g., Kafka, Kinesis, Pub/Sub).
Background in data governance, data quality, and compliance (GDPR, HIPAA).
Prior experience in Agile/Scrum delivery environments.
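The validation and go-live work described above typically begins with simple reconciliation checks between the legacy warehouse and Snowflake. The following is a minimal, illustrative Python sketch of a row-count check using the snowflake-connector-python package; the table names, counts, and connection details are hypothetical and not part of the posting.

```python
# Illustrative sketch only: reconcile row counts between a legacy source and
# Snowflake after a table migration. Table names and credentials are hypothetical.
import snowflake.connector  # pip install snowflake-connector-python

def snowflake_row_count(conn, table):
    cur = conn.cursor()
    try:
        cur.execute(f"SELECT COUNT(*) FROM {table}")
        return cur.fetchone()[0]
    finally:
        cur.close()

def validate_migration(conn, legacy_counts):
    """Compare Snowflake counts against counts captured from the legacy warehouse."""
    failures = []
    for table, legacy_count in legacy_counts.items():
        sf_count = snowflake_row_count(conn, table)
        if sf_count != legacy_count:
            failures.append(f"{table}: legacy={legacy_count}, snowflake={sf_count}")
    return failures

if __name__ == "__main__":
    conn = snowflake.connector.connect(
        account="example_account", user="migration_svc", password="***",
        warehouse="MIGRATION_WH", database="ANALYTICS", schema="PUBLIC",
    )
    # Legacy counts would normally be exported from Teradata/Oracle beforehand.
    issues = validate_migration(conn, {"ORDERS": 1_250_000, "CUSTOMERS": 84_210})
    print("OK" if not issues else "\n".join(issues))
    conn.close()
```

In practice the same pattern extends to column-level aggregates and checksums for each migrated table.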
Posted 2 days ago
0 years
0 Lacs
Andhra Pradesh, India
On-site
BI: Tableau Desktop, Tableau Reports and Dashboard Design, data visualization and analysis, Tableau Server, Tableau Reader; Cognos Report Studio, Query Studio, and Cognos Connection are a plus.
Languages: SQL, PL/SQL, T-SQL, SQL Plus; SAS Base is a plus.
Perform complex MS Excel operations: pivot tables and filter operations on the underlying data.
Knowledge of reporting tools like Qlik Sense and QlikView, and statistical tools like Advanced Excel (VLOOKUP, charts, dashboard design), Visual Basic using Visual Studio, and MS Access is a plus.
Critical thinking, analysis, and good interpersonal and communication skills.
Ability to adapt to and learn new technologies and quickly become proficient with them.
Data mining experience.
Blended data from multiple sources such as flat files, Excel, Oracle, and the Tableau Server environment.
Used cloud sources like Amazon AWS Redshift, Snowflake, Google Drive, MS Excel, and Oracle.
Posted 2 days ago
3.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
About The Role
Grade Level (for internal use): 09

The Team
Are you ready to dive into the world of data and uncover insights that shape global commodity markets? We're looking for a passionate BI Developer to join our Business Intelligence team within the Commodity Insights division at S&P Global. At S&P Global, we are on a mission to harness the power of data to unlock insights that propel our business forward. We believe in innovation, collaboration, and the relentless pursuit of excellence. Join our dynamic team and be a part of a culture that celebrates creativity and encourages you to push the boundaries of what’s possible.

Key Responsibilities
Unlocking the Power of Data: Collaborate on the end-to-end data journey, helping collect, cleanse, and transform diverse data sources into actionable insights that shape business strategies for functional leaders. Work alongside senior BI professionals to build powerful ETL processes, ensuring data quality, consistency, and accessibility.
Crafting Visual Storytelling: Develop eye-catching, impactful dashboards and reports that tell the story of commodity trends, prices, and global market dynamics. Bring data to life for stakeholders across the company, including executive teams, analysts, and developers, by helping to create visually compelling and interactive reporting tools. Mentor and train users on dashboard usage for efficient utilization of insights.
Becoming a Data Detective: Dive deep into commodities data to uncover trends, patterns, and hidden insights that influence critical decisions in real time. Demonstrate strong analytical skills to swiftly grasp business needs and translate them into actionable insights. Collaborate with stakeholders to define key metrics and KPIs and contribute to data-driven decisions that impact the organization’s direction.
Engaging with Strategic Minds: Work together with cross-functional teams within business operations to turn complex business challenges into innovative data solutions. Gather, refine, and translate business requirements into insightful reports and dashboards that push our BI team to new heights. Provide ongoing support to cross-functional teams, addressing issues and adapting to changing business processes.

Basic Qualifications
3+ years of professional experience in BI projects, focusing on dashboard development using Power BI or similar tools and deploying them on their respective online platforms for easy access.
Proficiency in working with various databases such as Redshift, Oracle, and Databricks, using SQL for data manipulation, and implementing ETL processes for BI dashboards.
Ability to identify meaningful patterns and trends in data to provide valuable insights for business decision-making.
Skilled in requirement gathering and developing BI solutions.
Candidates with a strong background/proficiency in Power BI and Power Platform tools such as Power Automate/Apps, and intermediate to advanced proficiency in Python, are preferred.
Essential understanding of data modeling techniques tailored to problem statements.
Familiarity with cloud platforms (e.g., Azure, AWS) and data warehousing.
Exposure to GenAI concepts and tools such as ChatGPT.
Experience with Agile project implementation methods.
Excellent written and verbal communication skills.
Must be able to self-start and succeed in a fast-paced environment.

Additional/Preferred Qualifications
Knowledge of Generative AI, Microsoft Copilot, and Microsoft Fabric is a plus.
Ability to write complex SQL queries or enhance the performance of existing ETL pipelines is a must.
Familiarity with Azure DevOps will be an added advantage.

Shift Timings: 1 PM - 10 PM IST (flexibility required)

About S&P Global Commodity Insights
At S&P Global Commodity Insights, our complete view of global energy and commodities markets enables our customers to make decisions with conviction and create long-term, sustainable value. We’re a trusted connector that brings together thought leaders, market participants, governments, and regulators to co-create solutions that lead to progress. Vital to navigating Energy Transition, S&P Global Commodity Insights’ coverage includes oil and gas, power, chemicals, metals, agriculture and shipping. S&P Global Commodity Insights is a division of S&P Global (NYSE: SPGI). S&P Global is the world’s foremost provider of credit ratings, benchmarks, analytics and workflow solutions in the global capital, commodity and automotive markets. With every one of our offerings, we help many of the world’s leading organizations navigate the economic landscape so they can plan for tomorrow, today. For more information, visit http://www.spglobal.com/commodity-insights.

What’s In It For You?

Our Purpose
Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People
We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values
Integrity, Discovery, Partnership
At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits
We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global.

Our Benefits Include
Health & Wellness: Health care coverage designed for the mind and body.
Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring And Opportunity At S&P Global At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Recruitment Fraud Alert If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here. Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf 20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.1 - Middle Professional Tier I (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning) Job ID: 312656 Posted On: 2025-06-26 Location: Hyderabad, Telangana, India
Posted 2 days ago
2.0 years
10 Lacs
Chennai
Remote
Seeking a Sigma Developer to build dashboards, optimize SQL, integrate with JS frameworks, connect to cloud warehouses, ensure BI security, and support CI/CD. Must excel in Sigma, data modeling, and cross-team collaboration for data-driven insights. Required Candidate profile Bachelor’s in CS/Data field, 2+ yrs in Sigma/BI tools, SQL expert, experience with embedding, cloud warehouses (Snowflake/BigQuery), data modeling, BI security, and building responsive dashboards
Posted 2 days ago
3.0 - 6.0 years
40 - 45 Lacs
Kochi, Kolkata, Bhubaneswar
Work from Office
We are seeking experienced Data Engineers with over 3 years of experience to join our team at Intuit, through Cognizant. The selected candidates will be responsible for developing and maintaining scalable data pipelines, managing data warehousing solutions, and working with advanced cloud environments. The role requires strong technical proficiency and the ability to work onsite in Bangalore.

Key Responsibilities:
Design, build, and maintain data pipelines to ingest, process, and analyze large datasets using PySpark.
Work on Data Warehouse and Data Lake solutions to manage structured and unstructured data.
Develop and optimize complex SQL queries for data extraction and reporting.
Leverage AWS cloud services such as S3, EC2, EMR, Athena, and Redshift for data storage, processing, and analytics.
Collaborate with cross-functional teams to ensure the successful delivery of data solutions that meet business needs.
Monitor data pipelines and troubleshoot any issues related to data integrity or system performance.

Required Skills:
3 years of experience in data engineering or related fields.
In-depth knowledge of Data Warehouses and Data Lakes.
Proven experience in building data pipelines using PySpark.
Strong expertise in SQL for data manipulation and extraction.
Familiarity with AWS cloud services, including S3, EC2, EMR, Athena, Redshift, and other cloud computing platforms.

Preferred Skills:
Python programming experience is a plus.
Experience working in Agile environments with tools like JIRA and GitHub.
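For context on the PySpark pipeline work this role describes, below is a minimal, illustrative sketch of an ingest-transform-write job on AWS; the bucket names, columns, and filter logic are hypothetical and not taken from the posting.

```python
# Illustrative PySpark sketch: read raw JSON from S3, apply simple cleanup rules,
# and write partitioned Parquet back to S3. All object names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_daily_pipeline").getOrCreate()

# Raw landing zone (assumes an EMR/Glue environment where s3:// paths resolve).
raw = spark.read.json("s3://example-raw-bucket/orders/2025/06/")

cleaned = (
    raw.dropDuplicates(["order_id"])                      # remove replayed events
       .withColumn("order_date", F.to_date("order_ts"))   # derive a partition column
       .filter(F.col("amount") > 0)                       # drop obviously bad rows
)

(cleaned.write
        .mode("overwrite")
        .partitionBy("order_date")
        .parquet("s3://example-curated-bucket/orders/"))
```

On EMR or Glue a job like this would normally be parameterized and scheduled rather than run ad hoc.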
Posted 2 days ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
Overview
As a member of the Platform engineering team, you will be the key techno-functional expert leading and overseeing PepsiCo's Platforms & operations, driving a strong vision for how Platform engineering can proactively create a positive impact on the business. You'll be an empowered leader of a team of Platform engineers who build Platform products for platform optimization and cost optimization, build tools for Platform Ops and Data Ops on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company. As leader of the Platform engineering team, you will help manage the platform Governance team that builds frameworks to guardrail the platforms of very large and complex data applications in public cloud environments, and you will directly impact the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. You will work closely with process owners, product owners and business users. You'll be working in a hybrid environment with in-house, on-premises data sources as well as cloud and remote systems.

Responsibilities
Active contributor to cost optimization of platforms and services.
Manage and scale Azure Data Platforms to support new product launches and drive Platform Stability and Observability across data products.
Build and own the automation and monitoring frameworks that capture metrics and operational KPIs for Data Platforms for cost and performance.
Responsible for implementing best practices around systems integration, security, performance and Platform management.
Empower the business by creating value through the increased adoption of data, data science and the business intelligence landscape.
Collaborate with internal clients (data science and product teams) to drive solutioning and POC discussions.
Evolve the architectural capabilities and maturity of the data platform by engaging with enterprise architects and strategic internal and external partners.
Develop and optimize procedures to productionalize data science models.
Define and manage SLAs for Platforms and processes running in production.
Support large-scale experimentation done by data scientists.
Prototype new approaches and build solutions at scale.
Research state-of-the-art methodologies.
Create documentation for learnings and knowledge transfer.
Create and audit reusable packages or libraries.

Qualifications
10+ years of overall technology experience that includes at least 4+ years of hands-on software development, program management, and advanced analytics.
4+ years of experience with Power BI, Tableau, Data Warehousing, and Data Analytics tools.
4+ years of experience in platform optimization and performance tuning.
Experience in managing multiple teams and coordinating with different stakeholders to implement the vision of the team.
Fluent with Azure cloud services; Azure certification is a plus.
Experience with integration of multi-cloud services with on-premises technologies.
Experience with data modeling, data warehousing, and building semantic models.
Proficient in DAX queries, Copilot and AI skills.
Experience building/operating highly available, distributed systems for data visualization.
Experience with at least one MPP database technology such as Redshift, Synapse or Snowflake.
Experience with version control systems like GitHub and deployment & CI tools.
Knowledge of Azure Data Factory, Azure Databricks.
Experience with statistical/ML techniques is a plus.
Experience with building solutions in the retail or supply chain space is a plus.
Understanding of metadata management, data lineage, and data glossaries is a plus.
Working knowledge of agile development, including DevOps and DataOps concepts.
Familiarity with Augmented Analytics tools (such as ThoughtSpot, Tellius) is a plus.
Posted 2 days ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
Overview
As an Analyst, Data Modeling, your focus will be to partner with D&A Data Foundation team members to create data models for global projects. This includes independently analyzing project data needs, identifying data storage and integration needs/issues, and driving opportunities for data model reuse, satisfying project requirements. The role will advocate Enterprise Architecture, Data Design, and D&A standards and best practices. You will be performing all aspects of data modeling, working closely with the Data Governance, Data Engineering and Data Architecture teams. As a member of the data modeling team, you will create data models for very large and complex data applications in public cloud environments, directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. The primary responsibilities of this role are to work with data product owners, data management owners, and data engineering teams to create physical and logical data models with an extensible philosophy to support future, unknown use cases with minimal rework. You'll be working in a hybrid environment with in-house, on-premises data sources as well as cloud and remote systems. You will establish data design patterns that will drive flexible, scalable, and efficient data models to maximize value and reuse.

Responsibilities
Complete conceptual, logical and physical data models for any supported platform, including SQL Data Warehouse, EMR, Spark, Databricks, Snowflake, Azure Synapse or other cloud data warehousing technologies.
Govern data design/modeling: documentation of metadata (business definitions of entities and attributes) and construction of database objects, for baseline and investment-funded projects, as assigned.
Provide and/or support data analysis, requirements gathering, solution development, and design reviews for enhancements to, or new, applications/reporting.
Support assigned project contractors (both on- and off-shore), orienting new contractors to standards, best practices, and tools.
Contribute to project cost estimates, working with senior members of the team to evaluate the size and complexity of the changes or new development.
Ensure physical and logical data models are designed with an extensible philosophy to support future, unknown use cases with minimal rework.
Develop a deep understanding of the business domain and enterprise technology inventory to craft a solution roadmap that achieves business objectives and maximizes reuse.
Partner with IT, data engineering and other teams to ensure the enterprise data model incorporates key dimensions needed for proper management: business and financial policies, security, local-market regulatory rules, consumer privacy by design principles (PII management), all linked across fundamental identity foundations.
Drive collaborative reviews of design, code, data, and security features implementation performed by data engineers to drive data product development.
Assist with data planning, sourcing, collection, profiling, and transformation.
Create source-to-target mappings for ETL and BI developers.
Show expertise for data at all levels: low-latency, relational, and unstructured data stores; analytical and data lakes; data streaming (consumption/production); data in transit.
Develop reusable data models based on cloud-centric, code-first approaches to data management and cleansing.
Partner with the Data Governance team to standardize their classification of unstructured data into standard structures for data discovery and action by business customers and stakeholders.
Support data lineage and mapping of source system data to canonical data stores for research, analysis and productization.

Qualifications
4+ years of overall technology experience that includes at least 2+ years of data modeling and systems architecture.
Around 2+ years of experience with Data Lake infrastructure, Data Warehousing, and Data Analytics tools.
2+ years of experience developing enterprise data models.
Experience in building solutions in the retail or supply chain space.
Expertise in data modeling tools (ER/Studio, Erwin, IDM/ARDM models).
Experience with integration of multi-cloud services (Azure) with on-premises technologies.
Experience with data profiling and data quality tools like Apache Griffin, Deequ, and Great Expectations.
Experience building/operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets.
Experience with at least one MPP database technology such as Redshift, Synapse, Teradata or Snowflake.
Experience with version control systems like GitHub and deployment & CI tools.
Experience with Azure Data Factory, Databricks and Azure Machine Learning is a plus.
Experience with metadata management, data lineage, and data glossaries is a plus.
Working knowledge of agile development, including DevOps and DataOps concepts.
Familiarity with business intelligence tools (such as Power BI).
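The source-to-target mappings mentioned above are easier to review and reuse when kept as versioned artifacts. Below is a small, purely illustrative Python sketch of one way to represent such a mapping; the source fields, targets, and transformation rules are hypothetical.

```python
# Illustrative sketch: a source-to-target mapping (STM) kept as code so it can be
# versioned, reviewed, and rendered for ETL/BI developers. All names are hypothetical.
SOURCE_TO_TARGET = [
    # (source table.column,     target table.column,           transformation rule)
    ("erp.orders.ord_no",       "dw.fact_sales.order_id",      "trim and cast to varchar(20)"),
    ("erp.orders.ord_dt",       "dw.fact_sales.order_date",    "cast to date"),
    ("erp.customers.cust_no",   "dw.dim_customer.customer_id", "surrogate key lookup"),
]

def as_table(stm):
    """Render the mapping as a plain-text table for design reviews."""
    header = f"{'SOURCE':35} {'TARGET':35} RULE"
    rows = [f"{s:35} {t:35} {r}" for s, t, r in stm]
    return "\n".join([header] + rows)

if __name__ == "__main__":
    print(as_table(SOURCE_TO_TARGET))
```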
Posted 2 days ago
10.0 - 13.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Years of Experience: 10-13 years
Location: Navi Mumbai (local or relocating candidates)
Notice Period: Immediate joiners preferred, or 30 days maximum
Budget: 60 LPA

Job Description:
We are looking for an experienced AI/ML Lead to spearhead the design, development, and deployment of intelligent solutions across cloud environments. The ideal candidate will have a strong background in deep learning, big data processing, and AI integrations, with proven experience in applying OCR and prompt engineering in real-world enterprise use cases. This role requires both technical depth and team leadership to deliver scalable, production-grade models and collaborate across business functions.

Key Responsibilities:
• Lead AI/ML solution design and delivery across cloud platforms (Azure ML, AWS SageMaker).
• Build, train, and deploy advanced models using deep learning frameworks like TensorFlow and PyTorch.
• Apply OCR techniques (with CNNs) for document image understanding and automation.
• Drive prompt engineering for AI integrations (e.g., Copilot, OpenAI, IBM Watson).
• Optimize model pipelines for performance and scalability within Azure Synapse and cloud-based data warehouses.
• Collaborate across departments to apply AI in business intelligence and enterprise data architecture use cases.
• Manage and mentor a team of junior AI engineers and analysts.
• Work on structured and unstructured data pipelines using tools like Databricks, Spark, and Dask.
• Utilize and integrate Amazon EC2, ECR, S3, and Redshift, and manage model training and deployment in cloud environments.

Must-Have Skills:
• Azure ML, AWS (SageMaker, CodeGuru, EC2, ECR, S3)
• OCR with CNN
• Deep Learning: TensorFlow, PyTorch
• Prompt Engineering with OpenAI/Copilot
• Big Data: Databricks, Spark, Dask
• Business Intelligence & AI Integrations
• Team Leadership & Client Communication
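To make the "OCR with CNN" requirement concrete, here is a tiny, illustrative PyTorch sketch of a character-classification network of the kind used as a building block in OCR pipelines; the layer sizes, input shape, and class count are assumptions, not details from the posting.

```python
# Illustrative sketch: a small CNN that classifies 32x32 grayscale character crops,
# a typical building block inside an OCR pipeline. Hyperparameters are hypothetical.
import torch
import torch.nn as nn

class CharCNN(nn.Module):
    def __init__(self, num_classes: int = 36):  # e.g., A-Z plus 0-9
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Two 2x poolings reduce 32x32 inputs to 8x8 feature maps with 64 channels.
        self.classifier = nn.Linear(64 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

if __name__ == "__main__":
    model = CharCNN()
    dummy_batch = torch.randn(4, 1, 32, 32)   # four grayscale character crops
    print(model(dummy_batch).shape)           # torch.Size([4, 36])
```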
Posted 2 days ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Lead Data Engineer

Primary Skills
Athena, SNS, SQS, CloudWatch, Kinesis, Redshift

Job requirements
Monitor/track multiple sources for issues being reported: ServiceNow, Slack, email, calls.
Create and track all incidents/production issues in ServiceNow with the right categorization.
For all ServiceNow incidents, analyze the inputs from the helpdesk and assign them to the right application queue.
Manage the queue assignments of incidents based on status and change of context.
Perform actions based on SOPs to analyze/resolve the issue.
Identify the root cause or source of the issue as far as possible to make L3 analysis easier.
Create a summary of actions taken and possible next steps, and hand over to the L3 team by changing the assignment.
Communicate with the end user on updates and gather additional data and feedback.
Validate the fix provided by L3, update incidents as required, and involve L4/L5 as required.
Report the trend and pattern of issues to feed product backlogs, and create SOPs on identified issues and resolutions for future consumption.
Provide on-call support for any critical issues during the weekend.
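Because the role leans on CloudWatch and SNS for monitoring and escalation, here is a short, illustrative boto3 sketch of publishing a custom pipeline metric and notifying an on-call topic when it breaches a threshold; the topic ARN, metric name, threshold, and region are hypothetical.

```python
# Illustrative sketch (hypothetical names): push a custom CloudWatch metric for
# pipeline lag and notify the on-call SNS topic when it exceeds a threshold.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
sns = boto3.client("sns", region_name="us-east-1")

LAG_MINUTES = 47          # would normally be computed from the pipeline's last successful run
THRESHOLD = 30
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:data-ops-alerts"  # hypothetical ARN

# Record the metric so it can be graphed and alarmed on in CloudWatch.
cloudwatch.put_metric_data(
    Namespace="DataOps/Pipelines",
    MetricData=[{"MetricName": "IngestionLagMinutes", "Value": LAG_MINUTES, "Unit": "Count"}],
)

# Escalate to the on-call channel when the lag breaches the agreed threshold.
if LAG_MINUTES > THRESHOLD:
    sns.publish(
        TopicArn=TOPIC_ARN,
        Subject="Ingestion lag breach",
        Message=f"Ingestion lag is {LAG_MINUTES} min (threshold {THRESHOLD}).",
    )
```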
Posted 2 days ago
0 years
0 Lacs
India
Remote
About Us Our leading SaaS-based Global Employment Platform™ enables clients to expand into over 180 countries quickly and efficiently, without the complexities of establishing local entities. At G-P, we’re dedicated to breaking down barriers to global business and creating opportunities for everyone, everywhere. Our diverse, remote-first teams are essential to our success. We empower our Dream Team members with flexibility and resources, fostering an environment where innovation thrives and every contribution is valued and celebrated. The work you do here will positively impact lives around the world. We stand by our promise: Opportunity Made Possible. In addition to competitive compensation and benefits, we invite you to join us in expanding your skills and helping to reshape the future of work. At G-P, we assist organizations in building exceptional global teams in days, not months—streamlining the hiring, onboarding, and management process to unlock growth potential for all. About The Position As a Senior Engineering Manager at Globalization Partners, you will be responsible for both technical leadership and people management. This includes contributing to architectural discussions, decisions, and execution, as well as managing and developing a team of Data Engineers (of different experience levels). What You Can Expect To Do Own the strategic direction and execution of initiatives across our Data Platform, aligning technical vision with business goals. Guide teams through architectural decisions, delivery planning, and execution of complex programs that advance our platform capabilities. Lead and grow high-performing engineering teams responsible for the full data and analytics stack—from ingestion (ETL and Streaming) through transformation, storage, and consumption—ensuring quality, reliability, and performance at scale. Partner cross-functionally with product managers, architects, engineering leaders, and stakeholders from Cloud Engineering and other business domains to shape product and platform capabilities, translating business needs into actionable engineering plans. Drive delivery excellence by setting clear expectations, removing blockers, and ensuring engineering teams are progressing efficiently towards milestones while maintaining technical integrity. Ensure adoption and consistency of platform standards and best practices, including shared components, reusable libraries, and scalable data patterns. Support technical leadership across teams by fostering a strong culture of engineering excellence, security, and operational efficiency. Guide technical leads in maintaining high standards in architecture, development, and testing. Contribute to strategic planning, including the evolution of the data platform roadmap, migration strategies, and long-term technology investments aligned with company goals. Champion agile methodologies and DevOps practices, driving continuous improvement in team collaboration, delivery cycles, and operational maturity. Mentor and develop engineering talent, creating an environment where individuals can thrive through coaching, feedback, and growth opportunities. Promote a culture of innovation, accountability, and psychological safety. Challenge the Data Platform Quality and Performance by building/monitoring quality KPI and building a quality-first culture What We Are Looking For Proven experience leading geographically distributed engineering teams in the design and delivery of complex data and analytics platforms. 
Strong technical foundation with hands-on experience in modern data architectures, handling structured and unstructured data, and programming in Python—capable of guiding teams and reviewing design and code at a high level when necessary. Proficiency in SQL and relational database technologies, with the ability to guide data modeling and performance optimization discussions. In-depth understanding of ETL processes and data integration strategies, with practical experience overseeing data ingestion (batch and streaming), transformation, and quality assurance initiatives. Familiarity with commercial data platforms (e.g., Databricks, Snowflake) and cloud-native data warehouses (e.g., Redshift, BigQuery), including trade-offs and best practices in enterprise environments. Working knowledge of data governance and cataloging solutions, such as Atlan, Alation, Informatica, or Collibra, and experience supporting enterprise data stewardship efforts. Deep understanding of data quality, experience in building quality processes, and usage of tools like Monte Carlo. Understanding of machine learning and AI workloads, including the orchestration of data pipelines for model training and deployment in both batch and streaming contexts. Strong analytical and problem-solving skills, with the ability to drive root-cause analysis, evaluate architectural trade-offs, and support decision-making in ambiguous or fast-changing environments. Exceptional communication skills, with a track record of clear and effective collaboration across technical and non-technical stakeholders. Fluent in English, both verbal and written, with the ability to influence at all levels of the organization. Bachelor’s degree in Computer Science or a related field; advanced degrees or equivalent professional experience are a plus. We will consider for employment all qualified applicants who meet the inherent requirements for the position. Please note that background checks are required, and this may include criminal record checks. G-P. Global Made Possible. G-P is a proud Equal Opportunity Employer, and we are committed to building and maintaining a diverse, equitable and inclusive culture that celebrates authenticity. We prohibit discrimination and harassment against employees or applicants on the basis of race, color, creed, religion, national origin, ancestry, citizenship status, age, sex or gender (including pregnancy, childbirth, and pregnancy-related conditions), gender identity or expression (including transgender status), sexual orientation, marital status, military service and veteran status, physical or mental disability, genetic information, or any other legally protected status. G-P also is committed to providing reasonable accommodations to individuals with disabilities. If you need an accommodation due to a disability during the interview process, please contact us at careers@g-p.com.
Posted 2 days ago
9.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About McDonald’s:
One of the world’s largest employers with locations in more than 100 countries, McDonald’s Corporation has corporate opportunities in Hyderabad. Our global offices serve as dynamic innovation and operations hubs, designed to expand McDonald's global talent base and in-house expertise. Our new office in Hyderabad will bring together knowledge across business, technology, analytics, and AI, accelerating our ability to deliver impactful solutions for the business and our customers across the globe.

Position Summary:
We are seeking an experienced Data Architect to design, implement, and optimize scalable data solutions on Amazon Web Services (AWS) and/or Google Cloud Platform (GCP). The ideal candidate will lead the development of enterprise-grade data architectures that support analytics, machine learning, and business intelligence initiatives while ensuring security, performance, and cost optimization.

Who we are looking for:

Primary Responsibilities:
Architecture & Design:
Design and implement comprehensive data architectures using AWS or GCP services.
Develop data models, schemas, and integration patterns for structured and unstructured data.
Create solution blueprints, technical documentation, architectural diagrams, and best practice guidelines.
Implement data governance frameworks and ensure compliance with security standards.
Design disaster recovery and business continuity strategies for data systems.

Technical Leadership:
Lead cross-functional teams in implementing data solutions and migrations.
Provide technical guidance on cloud data services selection and optimization.
Collaborate with stakeholders to translate business requirements into technical solutions.
Drive adoption of cloud-native data technologies and modern data practices.

Platform Implementation:
Implement data pipelines using cloud-native services (AWS Glue, Google Dataflow, etc.).
Configure and optimize data lakes and data warehouses (S3/Redshift, GCS/BigQuery).
Set up real-time streaming data processing solutions (Kafka, Airflow, Pub/Sub).
Implement automated data quality monitoring and validation processes.
Establish CI/CD pipelines for data infrastructure deployment.

Performance & Optimization:
Monitor and optimize data pipeline performance and cost efficiency.
Implement data partitioning, indexing, and compression strategies.
Conduct capacity planning and scaling recommendations.
Troubleshoot complex data processing issues and performance bottlenecks.
Establish monitoring, alerting, and logging for data systems.

Skills:
Bachelor’s degree in Computer Science, Data Engineering, or related field.
9+ years of experience in data architecture and engineering.
5+ years of hands-on experience with AWS or GCP data services.
Experience with large-scale data processing and analytics platforms.
AWS Redshift, S3, Glue, EMR, Kinesis, Lambda.
AWS Data Pipeline, Step Functions, CloudFormation.
BigQuery, Cloud Storage, Dataflow, Dataproc, Pub/Sub.
GCP Cloud Functions, Cloud Composer, Deployment Manager.
IAM, VPC, and security configurations.
SQL and NoSQL databases.
Big data technologies (Spark, Hadoop, Kafka).
Programming languages (Python, Java, SQL).
Data modeling and ETL/ELT processes.
Infrastructure as Code (Terraform, CloudFormation).
Container technologies (Docker, Kubernetes).
Data warehousing concepts and dimensional modeling.
Experience with modern data architecture patterns.
Real-time and batch data processing architectures.
Data governance, lineage, and quality frameworks.
Business intelligence and visualization tools.
Machine learning pipeline integration.
Strong communication and presentation abilities.
Leadership and team collaboration skills.
Problem-solving and analytical thinking.
Customer-focused mindset with business acumen.

Preferred Qualifications:
Master’s degree in a relevant field.
Cloud certifications (AWS Solutions Architect, GCP Professional Data Engineer).
Experience with multiple cloud platforms.
Knowledge of data privacy regulations (GDPR, CCPA).

Work location: Hyderabad, India
Work pattern: Full time role.
Work mode: Hybrid.

Additional Information:
McDonald’s is committed to providing qualified individuals with disabilities with reasonable accommodations to perform the essential functions of their jobs. McDonald’s provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to sex, sex stereotyping, pregnancy (including pregnancy, childbirth, and medical conditions related to pregnancy, childbirth, or breastfeeding), race, color, religion, ancestry or national origin, age, disability status, medical condition, marital status, sexual orientation, gender, gender identity, gender expression, transgender status, protected military or veteran status, citizenship status, genetic information, or any other characteristic protected by federal, state or local laws. This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation and training. McDonald’s Capability Center India Private Limited (“McDonald’s in India”) is a proud equal opportunity employer and is committed to hiring a diverse workforce and sustaining an inclusive culture.
At McDonald’s in India, employment decisions are based on merit, job requirements, and business needs, and all qualified candidates are considered for employment. McDonald’s in India does not discriminate based on race, religion, colour, age, gender, marital status, nationality, ethnic origin, sexual orientation, political affiliation, veteran status, disability status, medical history, parental status, genetic information, or any other basis protected under state or local laws. Nothing in this job posting or description should be construed as an offer or guarantee of employment.
Posted 2 days ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About McDonald’s:
One of the world’s largest employers with locations in more than 100 countries, McDonald’s Corporation has corporate opportunities in Hyderabad. Our global offices serve as dynamic innovation and operations hubs, designed to expand McDonald's global talent base and in-house expertise. Our new office in Hyderabad will bring together knowledge across business, technology, analytics, and AI, accelerating our ability to deliver impactful solutions for the business and our customers across the globe.

Position Summary:
Senior Manager, Integrated Test Lead – Data Product Engineering & Delivery (Sr Manager, Technology Testing)
Lead comprehensive testing strategy and execution for complex data engineering pipelines and product delivery initiatives. Drive quality assurance across integrated systems, data workflows, and customer-facing applications while coordinating cross-functional testing efforts.

Who we are looking for:

Primary Responsibilities:
Test Strategy & Leadership:
Design and implement end-to-end testing frameworks for data pipelines, ETL/ELT processes, and analytics platforms.
Ensure test coverage across ETL/ELT, data transformation, lineage and consumption layers.
Develop integrated testing strategies spanning multiple systems, APIs, and data sources.
Establish testing standards, methodologies, and best practices across the organization.

Data Engineering Testing:
Create comprehensive test suites for data ingestion, transformation, and output validation.
Design data quality checks, schema validation, and performance testing for large-scale datasets.
Implement automated testing for streaming and batch data processing workflows.
Validate data integrity across multiple environments and systems and against business rules.

Cross-Functional Coordination:
Collaborate with data engineers, software developers, product managers, and DevOps teams.
Coordinate testing activities across multiple product streams and release cycles.
Manage testing dependencies and critical path items in complex delivery timelines.

Quality Assurance & Process Improvement:
Establish metrics and KPIs for testing effectiveness and product quality to drive continuous improvement in testing processes and tooling.
Lead root cause analysis for production issues and testing gaps.

Technical Leadership:
Mentor junior QA engineers and promote testing best practices.
Evaluate and implement new testing tools and technologies.
Design scalable testing infrastructure and CI/CD integration.

Skills:
10+ years in software testing with 3+ years in leadership roles.
8+ years of experience testing data engineering systems, ETL pipelines, or analytics platforms.
Proven track record with complex, multi-system integration testing.
Experience in agile/scrum environments with rapid delivery cycles.
Strong SQL experience with major databases (Redshift, BigQuery, etc.).
Experience with cloud platforms (AWS, GCP) and their data services.
Knowledge of data pipeline tools (Apache Airflow, Kafka, Confluent, Spark, dbt, etc.).
Proficiency in data warehousing, data architecture, reporting and analytics applications.
Scripting languages (Python, Java, bash) for test automation.
API testing tools and methodologies.
CI/CD/CT tools and practices.
Strong project management and organizational skills.
Excellent verbal and written communication abilities.
Experience managing multiple priorities and competing deadlines.

Work location: Hyderabad, India
Work pattern: Full time role.
Work mode: Hybrid.
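As an illustration of the automated data quality and schema validation work described above, here is a small pytest-style sketch; the expected schema, stand-in data, and checks are hypothetical rather than anything specified in the posting.

```python
# Illustrative pytest-style sketch of data quality checks a pipeline test suite
# might run against an output table. Schema and sample rows are hypothetical.
import pandas as pd

EXPECTED_SCHEMA = {"order_id": "int64", "order_date": "datetime64[ns]", "amount": "float64"}

def load_output() -> pd.DataFrame:
    # In a real suite this would query Redshift/BigQuery; here we use a stand-in frame.
    return pd.DataFrame({
        "order_id": [1, 2, 3],
        "order_date": pd.to_datetime(["2025-06-01", "2025-06-02", "2025-06-02"]),
        "amount": [10.0, 25.5, 7.2],
    })

def test_schema_matches_contract():
    df = load_output()
    assert {col: str(dtype) for col, dtype in df.dtypes.items()} == EXPECTED_SCHEMA

def test_no_null_keys_and_positive_amounts():
    df = load_output()
    assert df["order_id"].notna().all()
    assert (df["amount"] > 0).all()
```

In a real suite, load_output would read from the warehouse under test, and the same checks would run inside the CI/CD pipeline after each load.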
Posted 2 days ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
AWS Data Engineer:

Primary skillsets: AWS, PySpark, SQL, Databricks, Python
Secondary skillset: any ETL tool, GitHub, DevOps (CI/CD)
Experience: 3-4 years
Degree in computer science, engineering, or similar fields

Mandatory Skill Set:
Python, PySpark, SQL, and AWS, with experience designing, developing, testing and supporting data pipelines and applications.
3+ years of working experience in data integration and pipeline development.
3+ years of experience with AWS Cloud on data integration with a mix of Apache Spark, Glue, Kafka, Kinesis, and Lambda in S3, Redshift, RDS, and MongoDB/DynamoDB ecosystems.
Databricks and Redshift experience is a major plus.
3+ years of experience using SQL in the development of data warehouse projects/applications (Oracle & SQL Server).
Strong real-life experience in Python development, especially in PySpark in an AWS Cloud environment.
Strong SQL and NoSQL databases like MySQL, Postgres, DynamoDB, Elasticsearch.
Workflow management tools like Airflow.
AWS cloud services: RDS, AWS Lambda, AWS Glue, AWS Athena, EMR (equivalent tools in the GCP stack will also suffice).

Good to Have: Snowflake, Palantir Foundry
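Since the posting pairs PySpark pipeline development with a workflow manager like Airflow, here is a brief, illustrative Airflow DAG sketch; the DAG id, schedule, and task logic are hypothetical placeholders, not the team's actual pipeline.

```python
# Illustrative Airflow 2.x sketch: a small daily DAG with an extract step followed
# by a transform step. Task bodies are placeholders for the real pipeline logic.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull increments from RDS / Kinesis into s3://example-raw-bucket/")

def transform():
    print("submit a Glue/EMR PySpark job to build curated tables in Redshift")

with DAG(
    dag_id="daily_orders_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task   # run transform only after extract succeeds
```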
Posted 2 days ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Description
Are you passionate about data? Does the prospect of dealing with massive volumes of data excite you? Do you want to create the next-generation tools for intuitive data access for transportation operations? We are looking for a Business Intelligence Engineer to help set up and deliver robust, structured reporting, analytics and models for the RBS Cost to Serve team. You will be a key contributor to shaping our strategic Defect Elimination program by equipping the program teams with the key analytics and insights. You will have an eye for detail, proficient-to-advanced SQL/DW/Python skills, and a knack for solving challenging data and reporting problems. The role requires you to feel comfortable working with and clearly communicating with other functional teams, regionally and globally. The position will be based in Bangalore/Chennai/HYD. You will be reporting to a Sr Program Manager, Cost to Serve Analytics & Insights, working closely with her (larger) project team, including Finance. The ideal candidate will be comfortable in a fast-paced, dynamic environment; will be a creative and an analytical problem solver with the opportunity to fulfill the Amazon motto to “Work Hard. Have Fun. Make History”.

Key job responsibilities
Analysis of business requirements and translation into technical requirements; with the support of senior colleagues, integration into a working, stable and scalable system.
Independent realization of requirements for Business Intelligence and custom software development products.
Creation of test cases and guidance of business stakeholders within the testing process.
Presentation of solutions and implemented features within weekly sync-ups with business stakeholders.
Ownership of maintenance and error handling of deployed solutions.
Focus on project delivery.

About The Team
The RBS Cost to Serve team aims to identify and eliminate waste, negative experiences, and non-value activities across the end-to-end remit of supply chain and dependent work streams that slow down resolution for our stakeholders. The primary objective is to reduce Cost to Serve for Amazon and enable “Free Cash Flow” by optimizing the cost-per-shipped-unit economics across the supply chain systems through Defect Elimination. Our program will support establishing the end-to-end supply chain checkpoints on how the inventory moves inside Amazon to identify gaps and broken processes/policies, to eliminate root causes of systemic difficulties rather than merely addressing symptoms, on behalf of our customers. This team will partner with internal/external stakeholders to establish the Cost to Serve charter based on opportunity size and own specific unique initiatives that are beyond the existing team’s program scope.

Basic Qualifications
3+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
Experience with data visualization using Tableau, QuickSight, or similar tools.
Experience with data modeling, warehousing and building ETL pipelines.
Experience in statistical analysis packages such as R, SAS and Matlab.
Experience using SQL to pull data from a database or data warehouse and scripting experience (Python) to process data for modeling.

Preferred Qualifications
Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift.
Experience in data mining, ETL, etc. and using databases in a business environment with large-scale, complex datasets.

Our inclusive culture empowers Amazonians to deliver the best results for our customers.
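The "SQL to pull data, Python to process it" requirement above might look something like the following illustrative sketch against Redshift; the cluster endpoint, schema, table, and query are hypothetical and not drawn from the posting.

```python
# Illustrative sketch (hypothetical tables and credentials): pull a defect aggregate
# from Redshift with SQL, then reshape it in Python for a reporting view.
import pandas as pd
import redshift_connector  # pip install redshift-connector

QUERY = """
    SELECT marketplace, defect_type, COUNT(*) AS defects
    FROM c2s.shipment_defects
    WHERE snapshot_date >= DATEADD(day, -28, GETDATE())
    GROUP BY marketplace, defect_type
"""

conn = redshift_connector.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    database="analytics", user="bie_user", password="***",
)
df = pd.read_sql(QUERY, conn)
conn.close()

# Pivot into a marketplace x defect_type matrix for the weekly dashboard.
summary = df.pivot_table(index="marketplace", columns="defect_type",
                         values="defects", fill_value=0)
print(summary)
```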
If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - ADCI - HYD 15 SEZ - E55 Job ID: A2999431
Posted 2 days ago
2.0 years
0 Lacs
India
Remote
Welcome to Veradigm! Our Mission is to be the most trusted provider of innovative solutions that empower all stakeholders across the healthcare continuum to deliver world-class outcomes. Our Vision is a Connected Community of Health that spans continents and borders. With the largest community of clients in healthcare, Veradigm is able to deliver an integrated platform of clinical, financial, connectivity and information solutions to facilitate enhanced collaboration and exchange of critical patient information.

Veradigm Life
Veradigm is here to transform health, insightfully. Veradigm delivers a unique combination of point-of-care clinical and financial solutions, a commitment to open interoperability, a large and diverse healthcare provider footprint, along with industry proven expert insights. We are dedicated to simplifying the complicated healthcare system with next-generation technology and solutions, transforming healthcare from the point-of-patient care to everyday life. For more information, please explore Veradigm.com.

Job Description For Software Engineer
Job Title: Software Engineer

Job Responsibilities
What will your job look like:
The primary purpose of this role is to perform specification, design, coding, testing, and documentation in the areas of development and maintenance. Responsible for creating low-level designs for complex software modules and subsystems. Provide technical guidance to the team, ensuring the successful implementation of advanced software solutions. The ideal candidate will excel at translating business requirements into detailed and comprehensive functional requirements, thereby significantly contributing to the success of our projects.

An Ideal Candidate Will Have
2+ years of experience as a software engineer.
SQL database experience (Redshift, PostgreSQL, MySQL, Snowflake or similar). Key areas include understanding database design principles, writing efficient queries, and utilizing advanced features. Specific items include database design, data manipulation (CRUD operations), querying data (SELECT statements with various clauses like WHERE, GROUP BY, ORDER BY, JOINs), data modeling, and understanding database concepts like primary and foreign keys.
Excellent programming skills in ADF (Azure Data Factory pipelines), including data movements, data transformations, authentication, and control activities.
Excellent programming skills in Python, Java, C#, C++, or a similar language.
At least 1 year working as a software developer on large distributed systems and client-server architectures.
2 years of Python development using frameworks like Flask, Django, Jinja, SQLAlchemy.
Experience building and deploying applications using Amazon Web Services or similar cloud infrastructure.
Software development in the life sciences industry preferred.
Validated software development in a regulated environment preferred.
Development/testing of ETL.
Experience with Apache HTTP, NGINX, Tomcat, or Jetty.
Experience with standard build tools and version control systems (e.g., Git, Jenkins).
Broad understanding of internet protocols and network programming.

Benefits
Veradigm believes in empowering our associates with the tools and flexibility to bring the best version of themselves to work. Through our generous benefits package with an emphasis on work/life balance, we give our employees the opportunity to allow their careers to flourish.
Quarterly Company-Wide Recharge Days
Flexible Work Environment (Remote/Hybrid Options)
Peer-based incentive “Cheer” awards
“All in to Win” bonus Program
Tuition Reimbursement Program

To know more about the benefits and culture at Veradigm, please visit the links mentioned below:
https://veradigm.com/about-veradigm/careers/benefits/
https://veradigm.com/about-veradigm/careers/culture/

Veradigm is proud to be an equal opportunity workplace dedicated to pursuing and hiring a diverse and inclusive workforce.

Thank you for reviewing this opportunity! Does this look like a great match for your skill set? If so, please scroll down and tell us more about yourself!
Posted 2 days ago
7.0 years
20 - 28 Lacs
Hyderabad, Telangana, India
Remote
Role: Data Engineering Lead
Experience: 7-10 Years
Location: Hyderabad
We need immediate joiners only (max. 15 days).
This is a work-from-office role, 5 days a week (no hybrid/remote opportunities).
We are looking for candidates with strong experience in data architecture.

About the company:
We provide companies with innovative technology solutions for everyday business problems. Our passion is to help clients become intelligent, information-driven organizations, where fact-based decision-making is embedded into daily operations, which leads to better processes and outcomes. Our team combines strategic consulting services with growth-enabling technologies to evaluate risk, manage data, and leverage AI and automated processes more effectively, with deep, Big Four consulting experience in business transformation and efficient processes.

Job Description:
We are looking for an accomplished and dynamic Data Engineering Lead to join our team and drive the design, development, and delivery of cutting-edge data solutions. This role requires a balance of strong technical expertise, strategic leadership, and a consulting mindset. As the Lead Data Engineer, you will oversee the design and development of robust data pipelines and systems, manage and mentor a team of 5 to 7 engineers, and play a critical role in architecting innovative solutions tailored to client needs. You will lead by example, fostering a culture of accountability, ownership, and continuous improvement while delivering impactful, scalable data solutions in a fast-paced, consulting environment.

Key Responsibilities:

Client Collaboration
Act as the primary point of contact for US-based clients, ensuring alignment on project goals, timelines, and deliverables.
Engage with stakeholders to understand requirements and ensure alignment throughout the project lifecycle.
Present technical concepts and designs to both technical and non-technical audiences.
Communicate effectively with stakeholders to ensure alignment on project goals, timelines, and deliverables.
Set realistic expectations with clients and proactively address concerns or risks.

Data Solution Design and Development
Architect, design, and implement end-to-end data pipelines and systems that handle large-scale, complex datasets.
Ensure optimal system architecture for performance, scalability, and reliability.
Evaluate and integrate new technologies to enhance existing solutions.
Implement best practices in ETL/ELT processes, data integration, and data warehousing.

Project Leadership and Delivery
Lead technical project execution, ensuring timelines and deliverables are met with high quality.
Collaborate with cross-functional teams to align business goals with technical solutions.
Act as the primary point of contact for clients, translating business requirements into actionable technical strategies.

Team Leadership and Development
Manage, mentor, and grow a team of 5 to 7 data engineers; ensure timely follow-ups on action items and maintain seamless communication across time zones.
Conduct code reviews and validations, and provide feedback to ensure adherence to technical standards.
Provide technical guidance and foster an environment of continuous learning, innovation, and collaboration.
Support collaboration and alignment between the client and delivery teams.

Optimization and Performance Tuning
Be hands-on in developing, testing, and documenting data pipelines and solutions as needed.
Analyze and optimize existing data workflows for performance and cost efficiency.
Troubleshoot and resolve complex technical issues within data systems.

Adaptability and Innovation
Embrace a consulting mindset with the ability to quickly learn and adopt new tools, technologies, and frameworks.
Identify opportunities for innovation and implement cutting-edge technologies in data engineering.
Exhibit a "figure it out" attitude, taking ownership and accountability for challenges and solutions.

Learning and Adaptability
Stay updated with emerging data technologies, frameworks, and tools.
Actively explore and integrate new technologies to improve existing workflows and solutions.

Internal Initiatives and Eminence Building
Drive internal initiatives to improve processes, frameworks, and methodologies.
Contribute to the organization’s eminence by developing thought leadership, sharing best practices, and participating in knowledge-sharing activities.

Qualifications

Education
Bachelor’s or master’s degree in computer science, Data Engineering, or a related field.
Certifications in cloud platforms such as Snowflake SnowPro or Data Engineer are a plus.

Experience
8+ years of experience in data engineering with hands-on expertise in data pipeline development, architecture, and system optimization.
Demonstrated success in managing global teams, especially across US and India time zones.
Proven track record in leading data engineering teams and managing end-to-end project delivery.
Strong background in data warehousing and familiarity with tools such as Matillion, dbt, Striim, etc.

Technical Skills
Lead the design, development, and deployment of scalable data architectures, pipelines, and processes tailored to client needs.
Expertise in programming languages such as Python, Scala, or Java.
Proficiency in designing and delivering data pipelines in Cloud Data Warehouses (e.g., Snowflake, Redshift), using various ETL/ELT tools such as Matillion, dbt, Striim, etc.
Solid understanding of database systems (relational and NoSQL) and data modeling techniques.
Hands-on experience of 2+ years in designing and developing data integration solutions using Matillion and/or dbt.
Strong knowledge of data engineering and integration frameworks.
Expertise in architecting data solutions.
Successfully implemented at least two end-to-end projects with multiple transformation layers.
Good grasp of coding standards, with the ability to define standards and testing strategies for projects.
Proficiency in working with cloud platforms (AWS, Azure, GCP) and associated data services.
Enthusiastic about working in Agile methodology.
Possess a comprehensive understanding of the DevOps process, including GitHub integration and CI/CD pipelines.

Soft Skills
Exceptional problem-solving and analytical skills.
Strong communication and interpersonal skills to manage client relationships and team dynamics.
Ability to thrive in a consulting environment, quickly adapting to new challenges and domains.
Ability to handle ambiguity and proactively take ownership of challenges.
Demonstrated accountability, ownership, and a proactive approach to solving problems.

Why Join Us?
Be at the forefront of data innovation and lead impactful projects.
Work with a collaborative and forward-thinking team.
Opportunity to mentor and develop talent in the data engineering space.
Competitive compensation and benefits package.
Skills: gcp,data modeling techniques,performance tuning,devops process,azure,data pipeline development,scala,matillion,data architecture,data integration solutions (matillion, dbt, striim),ci/cd,dbt,cloud data warehouses,data engineering,python,github integration,cloud platforms (aws, azure, gcp),aws,ci/cd pipelines,database systems (relational and nosql),data warehousing,data integration,sql, nosql & data modeling,programming languages (python, scala, java),agile methodology,java,etl/elt processes,relational and nosql databases,system optimization,data modeling,devops processes (github integration, ci/cd)
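For illustration only, here is a minimal Python sketch of the layered ELT pattern this role centers on: promoting a staging table into a mart table in a cloud warehouse with an incremental MERGE. The connection URL, schema, and table/column names are hypothetical, and in practice a tool such as Matillion or dbt would typically own this step.

```python
"""Minimal sketch of a layered ELT step (staging -> mart) run from Python.

Assumptions: the snowflake-sqlalchemy dialect is installed, and the
connection URL, schema, and table/column names are illustrative only.
"""
from sqlalchemy import create_engine, text

# Hypothetical connection URL; in practice this would come from a secrets store.
engine = create_engine("snowflake://user:password@account/analytics/raw")

MERGE_SQL = """
MERGE INTO mart.orders AS tgt
USING staging.orders_stg AS src
    ON tgt.order_id = src.order_id
WHEN MATCHED THEN UPDATE SET
    tgt.status     = src.status,
    tgt.updated_at = src.updated_at
WHEN NOT MATCHED THEN INSERT (order_id, customer_id, status, updated_at)
    VALUES (src.order_id, src.customer_id, src.status, src.updated_at)
"""

def run_incremental_load() -> None:
    # engine.begin() opens a transaction and commits on success.
    with engine.begin() as conn:
        conn.execute(text(MERGE_SQL))

if __name__ == "__main__":
    run_incremental_load()
```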
Posted 2 days ago
4.0 years
0 Lacs
India
Remote
Welcome to Veradigm! Our Mission is to be the most trusted provider of innovative solutions that empower all stakeholders across the healthcare continuum to deliver world-class outcomes. Our Vision is a Connected Community of Health that spans continents and borders. With the largest community of clients in healthcare, Veradigm is able to deliver an integrated platform of clinical, financial, connectivity and information solutions to facilitate enhanced collaboration and exchange of critical patient information.
Veradigm Life
Veradigm is here to transform health, insightfully. Veradigm delivers a unique combination of point-of-care clinical and financial solutions, a commitment to open interoperability, a large and diverse healthcare provider footprint, along with industry-proven expert insights. We are dedicated to simplifying the complicated healthcare system with next-generation technology and solutions, transforming healthcare from the point of patient care to everyday life. For more information, please explore Veradigm.com.
Job Title: Sr Software Engineer
Job Responsibilities
What will your job look like: The primary purpose of this role is to perform specification, design, coding, testing, and documentation in the areas of development and maintenance. You will be responsible for creating low-level designs for complex software modules and subsystems, and for providing technical guidance to the team, ensuring the successful implementation of advanced software solutions. The ideal candidate will excel at translating business requirements into detailed and comprehensive functional requirements, thereby significantly contributing to the success of our projects.
An Ideal Candidate Will Have
4+ years of experience as a software engineer. SQL database experience (Redshift, PostgreSQL, MySQL, Snowflake, or similar). Key areas include understanding database design principles, writing efficient queries, and utilizing advanced features. Specific items include database design, data manipulation (CRUD operations), querying data (SELECT statements with clauses such as WHERE, GROUP BY, ORDER BY, and JOINs), data modeling, and understanding database concepts like primary and foreign keys. Excellent programming skills in ADF (Azure Data Factory pipelines), including data movement, data transformation, authentication, and control activities. Excellent programming skills in Python, Java, C#, C++, or a similar language. At least 1 year working as a software developer on large distributed systems and client-server architectures. 3+ years of Python development using frameworks such as Flask, Django, Jinja, and SQLAlchemy. Experience building and deploying applications using Amazon Web Services or similar cloud infrastructure. Software development in the life sciences industry preferred. Validated software development in a regulated environment preferred. Development and testing of ETL processes. Experience with Apache HTTP, NGINX, Tomcat, or Jetty. Experience with standard build tools and version control systems (e.g., Git, Jenkins). Broad understanding of internet protocols and network programming.
Benefits
Veradigm believes in empowering our associates with the tools and flexibility to bring the best version of themselves to work. Through our generous benefits package with an emphasis on work/life balance, we give our employees the opportunity to allow their careers to flourish. 
Quarterly Company-Wide Recharge Days
Flexible Work Environment (Remote/Hybrid Options)
Peer-based incentive "Cheer" awards
"All in to Win" bonus Program
Tuition Reimbursement Program
To know more about the benefits and culture at Veradigm, please visit the links mentioned below:
https://veradigm.com/about-veradigm/careers/benefits/
https://veradigm.com/about-veradigm/careers/culture/
Veradigm is proud to be an equal opportunity workplace dedicated to pursuing and hiring a diverse and inclusive workforce. Thank you for reviewing this opportunity! Does this look like a great match for your skill set? If so, please scroll down and tell us more about yourself!
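As a rough illustration of the SQL querying skills listed above (SELECT with JOIN, WHERE, GROUP BY, and ORDER BY) driven from Python, here is a hedged sketch using psycopg2 against a PostgreSQL- or Redshift-style database; the connection details and table/column names are hypothetical.

```python
"""Minimal sketch of an analytical query (JOIN, WHERE, GROUP BY, ORDER BY)
run from Python with psycopg2.

Assumptions: psycopg2 is installed; connection details, schema, and
table/column names are illustrative only.
"""
import psycopg2

QUERY = """
SELECT c.region,
       COUNT(o.order_id)   AS order_count,
       SUM(o.total_amount) AS revenue
FROM   orders o
JOIN   customers c ON c.customer_id = o.customer_id
WHERE  o.order_date >= %s
GROUP  BY c.region
ORDER  BY revenue DESC;
"""

def regional_revenue(since: str):
    # %s placeholders keep the query parameterized (no string interpolation).
    with psycopg2.connect(host="localhost", dbname="analytics",
                          user="report", password="secret") as conn:
        with conn.cursor() as cur:
            cur.execute(QUERY, (since,))
            return cur.fetchall()

if __name__ == "__main__":
    for region, orders, revenue in regional_revenue("2024-01-01"):
        print(region, orders, revenue)
```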
Posted 2 days ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Role: As a Data Engineer, you'll build and maintain data pipelines and architectures. Responsibilities include optimizing databases and ETL processes, using Python or SQL, and collaborating with data teams for informed decision-making.
Why Choose Ideas2IT: Ideas2IT has all the good attributes of a product startup and a services company. Since we launch our products, you will have ample opportunities to learn and contribute. However, single-product companies stagnate in the technologies they use. In our multiple product initiatives and customer-facing projects, you will have the opportunity to work on various technologies. AGI is going to change the world. Big companies like Microsoft are betting heavily on this. We are following suit. As a Data Engineer, you will focus exclusively on engineering data pipelines for complex products.
What's in it for you? You will work on diverse technology challenges like: a robust distributed platform to manage a self-healing swarm of bots on unreliable network/compute; large-scale cloud-native applications; a document comprehension engine leveraging RNNs and the latest OCR techniques; a completely data-driven low-code platform. You will leverage cutting-edge technologies like Blockchain, IoT, and Data Science as you work on projects for leading Silicon Valley startups. Your role does not start or end with just Java development; you will enjoy the freedom to share your suggestions on the choice of tech stacks across the length of the project. If there is a certain technology you would like to explore, you can do your own technical PoCs. Work in a culture that values capability over experience and continuous learning as a core tenet.
Here's what you'll bring: Proficiency in SQL and experience with database technologies (e.g., MySQL, PostgreSQL, SQL Server). Experience in any one of the cloud environments – AWS, Azure. Experience with data modeling, data warehousing, and building ETL pipelines. Experience building large-scale data pipelines and data-centric applications using any distributed storage platform. Experience with data processing tools like Pandas and PySpark. Experience with cloud services like S3, Lambda, SQS, Redshift, Azure Data Factory, ADLS, Function Apps, etc. Expertise in one or more high-level languages (Python/Scala). Ability to handle large-scale structured and unstructured data from internal and third-party sources. Ability to collaborate with analytics and business teams to improve data models that feed business intelligence tools, increase data accessibility, and foster data-driven decision-making across the organization. Experience with data visualization tools like Power BI and Tableau. Experience with containerization technologies like Docker and Kubernetes.
About Us: Ideas2IT stands at the intersection of Technology, Business, and Product Engineering, offering high-caliber Product Development services. Initially conceived as a CTO consulting firm, we've evolved into thought leaders in cutting-edge technologies such as Generative AI, assisting our clients in embracing innovation. Our forte lies in applying technology to address business needs, demonstrated by our track record of developing AI-driven solutions for industry giants like Facebook, Bloomberg, Siemens, Roche, and others. Harnessing our product-centric approach, we've incubated several AI-based startups—including Pipecandy, Element5, IdeaRx, and Carefi.in—that have flourished into successful ventures backed by venture capital. 
With fourteen years of remarkable growth behind us, we're steadfast in pursuing ambitious objectives. P.S. We're all about diversity, and our doors are wide open to everyone. Join us in celebrating the awesomeness of differences!
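Purely as an illustration of the pipeline work described in this posting, the sketch below shows a small PySpark job that reads raw files from object storage, aggregates them, and writes curated Parquet. The paths and column names are hypothetical.

```python
"""Minimal PySpark sketch: read raw data from cloud storage, apply a
transformation, and write curated output for downstream analytics.

Assumptions: paths, column names, and the S3 bucket are illustrative only.
"""
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_rollup").getOrCreate()

# Read raw, semi-structured input (could equally be ADLS or local files).
raw = spark.read.json("s3a://example-bucket/raw/orders/")

daily = (
    raw.withColumn("order_date", F.to_date("order_ts"))
       .groupBy("order_date", "country")
       .agg(F.count("*").alias("orders"),
            F.sum("amount").alias("revenue"))
)

# Partitioned Parquet output for BI tools or warehouse loads.
(daily.write.mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3a://example-bucket/curated/orders_daily/"))
```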
Posted 2 days ago
5.0 years
4 - 8 Lacs
Hyderābād
Remote
Overview:
Job Title: Senior DevOps Engineer
Location: Bangalore / Hyderabad / Chennai / Coimbatore
Position: Full-time
Department: Annalect Engineering
Position Overview: Annalect is currently seeking a Senior DevOps Engineer to join our technology team remotely. We are passionate about building distributed back-end systems in a modular and reusable way. We're looking for people who have a shared passion for data and a desire to build cool, maintainable, and high-quality applications to use this data. In this role you will participate in shaping our technical architecture and the design and development of software products, collaborate with back-end developers from other tracks, and research and evaluate new technical solutions.
Responsibilities:
Key Responsibilities: Build and maintain cloud infrastructure through Terraform IaC. Cloud networking and orchestration with AWS (EKS, ECS, VPC, S3, ALB, NLB). Improve and automate processes and procedures. Construct CI/CD pipelines. Monitor and handle incident response for the infrastructure, platforms, and core engineering services. Troubleshoot infrastructure, network, and application issues. Help identify and troubleshoot problems within the environment.
Qualifications:
Required Skills: 5+ years of DevOps experience. 5+ years of hands-on experience administering cloud technologies on AWS, especially IAM, VPC, Lambda, EKS, EC2, S3, ECS, CloudFront, ALB, API Gateway, RDS, CodeBuild, SSM, Secrets Manager, etc. Experience with microservices, containers (Docker), and container orchestration (Kubernetes). Demonstrable experience using Terraform to provision and configure infrastructure. Scripting ability - PowerShell, Python, Bash, etc. Comfortable working with Linux/Unix-based operating systems (Ubuntu preferred). Familiarity with software development, CI/CD, and DevOps tools (Bitbucket, Jenkins, GitLab, CodeBuild, CodePipeline). Knowledge of writing Infrastructure as Code (IaC) using Terraform. Experience with microservices, containers (Docker), container orchestration (Kubernetes), serverless computing (AWS Lambda), and distributed/scalable systems. Possesses a problem-solving attitude. Creative, self-motivated, a quick study, and willing to develop new skills.
Additional Skills: Familiarity with working with data and databases (SQL, MySQL, PostgreSQL, Amazon Aurora, Redis, Amazon Redshift, Google BigQuery). Knowledge of database administration. Experience with continuous deployment/continuous delivery (Jenkins, Bamboo). AWS/GCP/Azure certification is a plus. Experience in Python coding is welcome. Passion for data-driven software; all of our tools are built on top of data and require working with data. Knowledge of IaaS/PaaS architecture with a good understanding of infrastructure and web application security. Experience with logging/monitoring (CloudWatch, Datadog, Loggly, ELK). Passion for writing good documentation and creating architecture diagrams.
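To give a flavour of the Python scripting side of this role, here is an illustrative boto3 sketch that reports EC2 instances missing a required tag; the tag key and region are assumptions, not part of the posting.

```python
"""Illustrative boto3 sketch of a small operational automation task:
report EC2 instances that are missing a required cost-allocation tag.

Assumptions: AWS credentials are available in the environment; the tag key
and region are illustrative only.
"""
import boto3

REQUIRED_TAG = "team"

def untagged_instances(region: str = "us-east-1") -> list[str]:
    ec2 = boto3.client("ec2", region_name=region)
    missing = []
    # Paginate so the scan works in accounts with many instances.
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                if REQUIRED_TAG not in tags:
                    missing.append(instance["InstanceId"])
    return missing

if __name__ == "__main__":
    for instance_id in untagged_instances():
        print(f"Missing '{REQUIRED_TAG}' tag: {instance_id}")
```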
Posted 2 days ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
We are seeking a highly skilled and experienced Data Architect with expertise in designing and building data platforms in cloud environments. The ideal candidate will have a strong background in either AWS Data Engineering or Azure Data Engineering, along with proficiency in distributed data processing systems like Spark. Additionally, proficiency in SQL, data modeling, building data warehouses, and knowledge of ingestion tools and data governance are essential for this role. The Data Architect will also need experience with orchestration tools such as Airflow or Dagster and proficiency in Python, with knowledge of Pandas being beneficial.
Here's what you'll bring: Experience in designing and building data platforms in any cloud. Strong expertise in either AWS Data Engineering or Azure Data Engineering. Develop and optimize data processing pipelines using distributed systems like Spark. Create and maintain data models to support efficient storage and retrieval. Build and optimize data warehouses for analytical and reporting purposes, utilizing technologies such as Postgres, Redshift, Snowflake, etc. Knowledge of ingestion tools such as Apache Kafka, Apache NiFi, AWS Glue, or Azure Data Factory. Establish and enforce data governance policies and procedures to ensure data quality and security. Utilize orchestration tools like Airflow or Dagster to schedule and manage data workflows. Develop scripts and applications in Python to automate tasks and processes. Collaborate with stakeholders to gather requirements and translate them into technical specifications. Communicate technical solutions effectively to clients and stakeholders. Familiarity with multiple cloud ecosystems such as AWS, Azure, and Google Cloud Platform (GCP). Experience with containerization and orchestration technologies like Docker and Kubernetes. Knowledge of machine learning and data science concepts. Experience with data visualization tools such as Tableau or Power BI. Understanding of DevOps principles and practices.
Why Choose Ideas2IT: Ideas2IT has all the good attributes of a product startup and a services company. Since we launch our products, you will have ample opportunities to learn and contribute. However, single-product companies stagnate in the technologies they use. In our multiple product initiatives and customer-facing projects, you will have the opportunity to work on various technologies. AGI is going to change the world. Big companies like Microsoft are betting heavily on this. We are following suit.
What's in it for you? You will get to work on impactful products instead of back-office applications for the likes of customers like Facebook, Siemens, Roche, and more. You will get to work on interesting projects like the Cloud AI platform for personalized cancer treatment. Opportunity to continuously learn newer technologies. Freedom to bring your ideas to the table and make a difference, instead of being a small cog in a big wheel. Showcase your talent in Shark Tanks and Hackathons conducted in the company.
About Us: Ideas2IT stands at the intersection of Technology, Business, and Product Engineering, offering high-caliber Product Development services. Initially conceived as a CTO consulting firm, we've evolved into thought leaders in cutting-edge technologies such as Generative AI, assisting our clients in embracing innovation. 
Our forte lies in applying technology to address business needs, demonstrated by our track record of developing AI-driven solutions for industry giants like Facebook, Bloomberg, Siemens, Roche, and others. Harnessing our product-centric approach, we've incubated several AI-based startups—including Pipecandy, Element5, IdeaRx, and Carefi. in—that have flourished into successful ventures backed by venture capital. With fourteen years of remarkable growth behind us, we're steadfast in pursuing ambitious objectives. P.S. We're all about diversity, and our doors are wide open to everyone. Join us in celebrating the awesomeness of differences!
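As a hedged illustration of the orchestration requirement in this posting, the sketch below shows a minimal Airflow DAG wiring an ingest, transform, and validate step; the DAG id, schedule, and task bodies are placeholders, and Airflow 2.x is assumed.

```python
"""Minimal Airflow sketch of a daily ingest -> transform -> validate pipeline.

Assumptions: Airflow 2.x; DAG id, schedule, and task logic are illustrative only.
"""
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest(**_):
    print("pull from source systems into the landing zone")

def transform(**_):
    print("run warehouse transformations (e.g., a Spark or dbt job)")

def validate(**_):
    print("row counts, null checks, freshness checks")

with DAG(
    dag_id="daily_warehouse_load",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_ingest = PythonOperator(task_id="ingest", python_callable=ingest)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_validate = PythonOperator(task_id="validate", python_callable=validate)

    # Linear dependency chain: ingest, then transform, then validate.
    t_ingest >> t_transform >> t_validate
```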
Posted 2 days ago
5.0 years
0 Lacs
Mumbai Metropolitan Region
Remote
Company Description Forbes Advisor is a new initiative for consumers under the Forbes Marketplace umbrella that provides journalist- and expert-written insights, news and reviews on all things personal finance, health, business, and everyday life decisions. We do this by providing consumers with the knowledge and research they need to make informed decisions they can feel confident in, so they can get back to doing the things they care about most. The Data Research Engineering Team is a brand new team with the purpose of managing data from acquisition to presentation, collaborating with other teams while also operating independently. Their responsibilities include acquiring and integrating data, processing and transforming it, managing databases, ensuring data quality, visualizing data, automating processes, working with relevant technologies, and ensuring data governance and compliance. They play a crucial role in enabling data-driven decision-making and meeting the organization's data needs. A typical day in the life of a Database Engineer/Developer will involve designing, developing, and maintaining a robust and secure database infrastructure to efficiently manage company data. They collaborate with cross-functional teams to understand data requirements and migrate data from spreadsheets or other sources to relational databases or cloud-based solutions like Google BigQuery and AWS. They develop import workflows and scripts to automate data import processes, optimize database performance, ensure data integrity, and implement data security measures. Their creativity in problem-solving and continuous learning mindset contribute to improving data engineering processes. Proficiency in SQL, database design principles, and familiarity with Python programming are key qualifications for this role. Job Description Key Responsibilities Design, build, and maintain scalable and secure relational and cloud-based database systems. Migrate data from spreadsheets or third-party sources into databases (PostgreSQL, MySQL, BigQuery). Create and maintain automated workflows and scripts for reliable, consistent data ingestion. Optimize query performance and indexing to improve data retrieval efficiency. Implement access controls, encryption, and data security best practices to ensure compliance. Monitor database health and troubleshoot issues proactively using appropriate tools. Collaborate with full-stack developers and data researchers to align data architecture with application needs. Uphold data quality through validation rules, constraints, and referential integrity checks. Keep up-to-date with emerging technologies and propose improvements to data workflows. Leverage tools like Python (Pandas, SQLAlchemy, PyDrive), and version control (Git). Support Agile development practices and CI/CD pipelines where applicable. Required Skills And Experience Strong SQL skills and understanding of database design principles (normalization, indexing, relational integrity). Experience with relational databases such as PostgreSQL or MySQL. Working knowledge of Python, including data manipulation and scripting (e.g., using Pandas, SQLAlchemy). Experience with data migration and ETL processes, including integrating data from spreadsheets or external sources. Understanding of data security best practices, including access control, encryption, and compliance. Ability to write and maintain import workflows and scripts to automate data ingestion and transformation. 
Experience with cloud-based databases, such as Google BigQuery or AWS RDS. Familiarity with cloud services (e.g., AWS Lambda, GCP Dataflow) and serverless data processing. Exposure to data warehousing tools like Snowflake or Redshift. Experience using monitoring tools such as Prometheus, Grafana, or the ELK Stack. Good analytical and problem-solving skills, with strong attention to detail. Team collaboration skills, especially with developers and analysts, and the ability to work independently. Proficiency with version control systems (e.g., Git). Strong communication skills, written and verbal.
Preferred / Nice-to-Have Skills: Bachelor's degree in Computer Science, Information Systems, or a related field. Experience working with APIs for data ingestion and third-party system integration. Familiarity with CI/CD pipelines (e.g., GitHub Actions, Jenkins). Python experience using modules such as gspread, PyDrive, PySpark, or object-oriented design patterns. Experience in Agile/Scrum teams or working with product development cycles. Experience using Tableau and Tableau Prep for data visualization and transformation.
Why Join Us: Monthly long weekends (every third Friday off). Wellness reimbursement to support your health and balance. Paid parental leave. Remote-first with flexibility and trust. Work with a world-class data and marketing team inside a globally recognized brand.
Qualifications: 5+ years of experience in Database Engineering.
Additional Information
Perks: Day off on the 3rd Friday of every month (one long weekend each month). Monthly Wellness Reimbursement Program to promote health and well-being. Monthly Office Commutation Reimbursement Program. Paid paternity and maternity leaves.
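For illustration, here is a minimal Python sketch of the spreadsheet-to-database migration workflow this posting describes, using Pandas and SQLAlchemy; the file path, table name, and connection URL are hypothetical.

```python
"""Minimal sketch of a spreadsheet-to-database import: load a CSV export,
apply light validation, and write it to PostgreSQL.

Assumptions: file path, table name, and connection URL are illustrative only.
"""
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql+psycopg2://loader:secret@localhost/research")

def import_sheet(csv_path: str, table: str) -> int:
    df = pd.read_csv(csv_path)

    # Basic quality gates before anything touches the database.
    df = df.drop_duplicates()
    df = df.dropna(subset=["id"])              # reject rows without a key column
    df["loaded_at"] = pd.Timestamp.now(tz="UTC")

    # append keeps existing rows; chunksize bounds memory for large files.
    df.to_sql(table, engine, if_exists="append", index=False, chunksize=5_000)
    return len(df)

if __name__ == "__main__":
    rows = import_sheet("exports/research_panel.csv", "research_panel")
    print(f"loaded {rows} rows")
```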
Posted 2 days ago
500.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Summary
Guidewire is searching for a unique individual who is ambitious, curious, and hungry for a rare chance to transform a 500-year-old industry from the inside out. Through our data listening capabilities, we collect more data (and more important data) than any other company in our market. We seek ways to make sense of it, showcase it, and transform it into insight that feeds billions of decision points every year across pricing, portfolio management, underwriting, claims management, and risk transfer. At Guidewire, a combination of good working conditions, an excellent market opportunity, a rational and meritocratic company culture, quality software products, and a long history of careful hiring has allowed us to create an enviable work environment. Guidewire Analytics helps insurers and other financial institutions to model new and evolving risks such as cyber. By combining internet-scale data listening, adaptive machine learning, and insurance risk modeling, Guidewire Analytics insights help P&C customers face new risks, take advantage of new opportunities, and develop new products.
Responsibilities
Development: Develop robust, scalable, and efficient data pipelines. Manage platform solutions to support data engineering needs and ensure seamless integration and performance. Write clean, efficient, and maintainable code.
Data Management and Optimization: Ensure data quality, integrity, and security across all data pipelines. Optimize data processing workflows for performance and cost-efficiency. Develop and maintain comprehensive documentation for data pipelines and related processes.
Innovation and Continuous Improvement: Stay current with emerging technologies and industry trends in big data and cloud computing. Propose and implement innovative solutions to improve data processing and analytics capabilities. Continuously evaluate and improve existing data infrastructure and processes.
Qualifications
Bachelor's or Master's degree in Computer Science, Engineering, or a related field. 2+ years of experience in software engineering with a focus on data engineering and building data platforms. Strong programming experience using Python or Java. Experience with big data technologies such as Apache Spark, Amazon EMR, Apache Iceberg, Amazon Redshift, or similar. Experience with RDBMS (Postgres, MySQL, etc.) and NoSQL (MongoDB, DynamoDB, etc.) databases. Experience with AWS cloud services (e.g., Lambda, S3, Athena, Glue) or comparable cloud technologies. Experience with CI/CD. Experience working with event-driven and serverless architectures. Experience with platform solutions and containerization technologies (e.g., Docker, Kubernetes). Excellent problem-solving skills and the ability to work in a fast-paced, dynamic environment. Strong communication skills, both written and verbal.
About Guidewire
Guidewire is the platform P&C insurers trust to engage, innovate, and grow efficiently. We combine digital, core, analytics, and AI to deliver our platform as a cloud service. More than 540 insurers in 40 countries, from new ventures to the largest and most complex in the world, run on Guidewire. As a partner to our customers, we continually evolve to enable their success. We are proud of our unparalleled implementation track record with 1600+ successful projects, supported by the largest R&D team and partner ecosystem in the industry. Our Marketplace provides hundreds of applications that accelerate integration, localization, and innovation. 
For more information, please visit www.guidewire.com and follow us on Twitter: @Guidewire_PandC. Guidewire Software, Inc. is proud to be an equal opportunity and affirmative action employer. We are committed to an inclusive workplace, and believe that a diversity of perspectives, abilities, and cultures is a key to our success. Qualified applicants will receive consideration without regard to race, color, ancestry, religion, sex, national origin, citizenship, marital status, age, sexual orientation, gender identity, gender expression, veteran status, or disability. All offers are contingent upon passing a criminal history and other background checks where it's applicable to the position.
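As an illustrative sketch of the event-driven, serverless pattern mentioned in the qualifications, here is a small AWS Lambda handler that forwards S3 object-created events to an SQS queue; the queue URL and bucket layout are hypothetical.

```python
"""Illustrative AWS Lambda handler for an event-driven ingestion step:
react to new objects landing in S3 and fan them out to a processing queue.

Assumptions: the destination queue URL and bucket layout are illustrative only.
"""
import json
import urllib.parse

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/ingest-events"  # hypothetical

def lambda_handler(event, context):
    """Forward each S3 object notification to a downstream processing queue."""
    records = event.get("Records", [])
    for record in records:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        sqs.send_message(
            QueueUrl=QUEUE_URL,
            MessageBody=json.dumps({"bucket": bucket, "key": key}),
        )
    return {"processed": len(records)}
```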
Posted 2 days ago
0 years
7 - 9 Lacs
Calcutta
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate
Job Description & Summary: A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge. Creating business intelligence from data requires an understanding of the business, the data, and the technology used to store and analyse that data. Using our Rapid Business Intelligence Solutions, data visualisation and integrated reporting dashboards, we can deliver agile, highly interactive reporting and analytics that help our clients to more effectively run their business and understand what business questions can be answered and how to unlock the answers.
Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
Job Description & Summary: A career within….
Responsibilities: Design and build data pipelines and data lakes to automate ingestion of structured and unstructured data and provide fast, optimized, and robust end-to-end solutions. Knowledge of data lake and data warehouse concepts. Experience working with AWS big data technologies. Improve the data quality and reliability of data pipelines through monitoring, validation, and failure detection. 
Deploy and configure components to production environments.
Technology: Redshift, S3, AWS Glue, Lambda, SQL, PySpark
Mandatory skill sets: AWS Data Engineer
Preferred skill sets: AWS Data Engineer
Years of experience required: 4-8
Education qualification: BTech/MBA/MCA
Degrees/Field of Study required: Master of Business Administration, Bachelor of Technology
Degrees/Field of Study preferred: (not specified)
Certifications: (not specified)
Required Skills: AWS DevOps, PySpark
Optional Skills: Accepting Feedback, Active Listening, Analytical Reasoning, Analytical Thinking, Application Software, Business Data Analytics, Business Management, Business Technology, Business Transformation, Communication, Creativity, Documentation Development, Embracing Change, Emotional Regulation, Empathy, Implementation Research, Implementation Support, Implementing Technology, Inclusion, Intellectual Curiosity, Learning Agility, Optimism, Performance Assessment, Performance Management Software {+ 16 more}
Desired Languages: (not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date
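For context only, here is a rough skeleton of an AWS Glue (PySpark) job of the kind listed in the technology stack above: read a catalogued table, filter it, and write Parquet back to S3. The database, table, and bucket names are placeholders.

```python
"""Rough skeleton of an AWS Glue (PySpark) job: read a Data Catalog table,
drop bad rows, and write Parquet to a curated S3 location.

Assumptions: runs inside the Glue runtime; database, table, and bucket
names are illustrative only.
"""
import sys

from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Source table registered in the Glue Data Catalog (e.g., by a crawler).
source = glue_context.create_dynamic_frame.from_catalog(
    database="raw_zone", table_name="transactions"
)

# Drop obviously bad rows before they reach the curated zone.
df = source.toDF().filter(F.col("amount").isNotNull())
curated = DynamicFrame.fromDF(df, glue_context, "curated")

glue_context.write_dynamic_frame.from_options(
    frame=curated,
    connection_type="s3",
    connection_options={"path": "s3://example-curated/transactions/"},
    format="parquet",
)
job.commit()
```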
Posted 2 days ago
4.0 - 6.0 years
6 - 8 Lacs
Indore
On-site
Job Overview: We are seeking an experienced Data Engineer with 4 to 6 years of experience to design, build, and maintain scalable data infrastructure and pipelines. You'll work with cross-functional teams to ensure reliable data flow from various sources to analytics platforms, enabling data-driven decision making across the organization.
Key Responsibilities:
Data Pipeline Development: Design and implement robust ETL/ELT pipelines using tools like Apache Airflow, Spark, or cloud-native solutions. Build real-time and batch processing systems to handle high-volume data streams. Optimize data workflows for performance, reliability, and cost-effectiveness.
Infrastructure & Architecture: Develop and maintain data warehouses and data lakes using platforms like Snowflake, Redshift, BigQuery, or Databricks. Implement data modeling best practices, including dimensional modeling and schema design. Architect scalable solutions on cloud platforms (AWS, GCP, Azure).
Data Quality & Governance: Implement data quality checks, monitoring, and alerting systems. Establish data lineage tracking and metadata management. Ensure compliance with data privacy regulations and security standards.
Collaboration & Support: Partner with data scientists, analysts, and business stakeholders to understand requirements. Provide technical guidance on data architecture decisions. Mentor junior engineers and contribute to team knowledge sharing.
Required Qualifications:
Technical Skills: 4-6 years of experience in data engineering or a related field. Proficiency in Python, SQL, and at least one other programming language (Java, Scala, Go). Strong experience with big data technologies (Spark, Kafka, Hadoop ecosystem). Hands-on experience with cloud platforms and their data services. Knowledge of containerization (Docker, Kubernetes) and infrastructure as code.
Job Types: Full-time, Permanent
Pay: ₹600,000.00 - ₹800,000.00 per year
Benefits: Health insurance, Provident Fund
Schedule: Day shift, Monday to Friday
Work Location: In person
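As a hedged illustration of the data quality checks listed under Data Quality & Governance, the sketch below validates a batch with row-count, null-ratio, and duplicate checks and raises on failure so an orchestrator can alert; the column names and thresholds are assumptions.

```python
"""Minimal sketch of batch data quality checks: validate a frame before it
is published, and raise so the orchestrator can alert and stop the load.

Assumptions: the frame, column names, and thresholds are illustrative only.
"""
import pandas as pd

def validate_batch(df: pd.DataFrame, min_rows: int = 1_000) -> None:
    errors = []

    if len(df) < min_rows:
        errors.append(f"row count {len(df)} below threshold {min_rows}")

    null_ratio = df["customer_id"].isna().mean()
    if null_ratio > 0.01:
        errors.append(f"customer_id null ratio {null_ratio:.2%} exceeds 1%")

    if df["order_id"].duplicated().any():
        errors.append("duplicate order_id values found")

    if errors:
        # Failing loudly lets the scheduler or monitoring trigger an alert.
        raise ValueError("; ".join(errors))

if __name__ == "__main__":
    sample = pd.DataFrame({"order_id": range(1_500),
                           "customer_id": [1] * 1_500})
    validate_batch(sample)
    print("batch passed validation")
```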
Posted 2 days ago