5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description
At Ford Motor Credit Company, we are going to support indirect lending for Ford Credit Bank through existing lending platforms and by integrating new Ford Credit Bank data into the Enterprise Data Warehouse (GCP BigQuery) for data insights and analytics. This role is for an ETL/Data Engineer who can integrate Ford Credit Bank data from the existing North America lending platforms into the Enterprise Data Warehouse (GCP BigQuery) to enable critical regulatory reporting, operational analytics, and risk analytics. You will be responsible for deep-dive analysis of current-state Receivables and Originations data in the data warehouse, for impact analysis related to Ford Credit Bank, and for providing solutions for implementation. You will design the transformation and modernization on GCP, land data from source applications, integrate it into analytical domains, and build data marts and products in GCP. Experience with large-scale solutions and operationalizing data warehouses, data lakes, and analytics platforms on Google Cloud Platform, Mainframe, and IBM DataStage is required. We are looking for candidates with a broad set of analytical and technology skills across these areas who can demonstrate an ability to design the right data warehouse solutions.
Responsibilities
Develop and modify existing data pipelines on Mainframe (JCL, COBOL), IBM DataStage, and BigQuery to integrate Ford Credit Bank data into the Enterprise Data Warehouse (GCP BigQuery) and support production deployment.
Use APIs for data processing, as required.
Implement the architecture provided by the data architecture team.
Use Fiserv bank features and mainframe data sets to enable the bank's data strategy.
Be proactive and implement design plans.
Use DB2 for performing bank integrations.
Prepare test plans and execute them within EDW/Data Factory (end to end, from ingestion to integration to marts) to support use cases.
Design and build production data engineering solutions that deliver reusable patterns using Mainframe JCL, DataStage, and Autosys.
Design and build production data engineering solutions that deliver reusable patterns using Google Cloud Platform (GCP) services: BigQuery, Dataflow, Dataform, Astronomer, Data Fusion, Dataproc, Cloud Composer/Airflow, Cloud SQL, Compute Engine, Cloud Functions, Cloud Run, Artifact Registry, GCP APIs, Cloud Build, and App Engine, plus real-time data streaming platforms such as Apache Kafka, GCP Pub/Sub, and Qlik Replicate.
Collaborate with stakeholders and cross-functional teams to gather and define data requirements, ensuring alignment with business objectives.
Design and implement batch, real-time streaming, scalable, and fault-tolerant solutions for data ingestion, processing, and storage.
Perform necessary data mapping, impact analysis for changes, root cause analysis, and data lineage activities, and document information flows.
Develop and maintain documentation for data engineering processes, standards, and best practices, ensuring knowledge transfer and ease of system maintenance.
Implement an enterprise data governance model and actively promote data protection, sharing, reuse, quality, and standards to ensure the integrity and confidentiality of data.
Work in an agile product team to deliver code frequently using Test Driven Development (TDD), continuous integration, and continuous deployment (CI/CD).
Optimize data workflows for performance, reliability, and cost-effectiveness on the GCP infrastructure.
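Purely as an illustration of the ingestion pattern these responsibilities describe (landing source extracts into the GCP BigQuery warehouse), here is a minimal sketch using the google-cloud-bigquery Python client. The project, bucket, and table names are assumptions, not Ford Credit specifics.

```python
# Minimal sketch (not the actual implementation): loading a source extract that has
# already been staged to Cloud Storage into a BigQuery staging table.
# Project, bucket, and table names below are illustrative assumptions.
from google.cloud import bigquery

client = bigquery.Client(project="my-edw-project")  # assumed project id

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,         # extracts often land as CSV/ORC
    skip_leading_rows=1,
    autodetect=True,                                  # or supply an explicit schema
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
)

load_job = client.load_table_from_uri(
    "gs://my-landing-bucket/receivables/2024-06-30/*.csv",  # assumed landing path
    "my-edw-project.staging.receivables_raw",               # assumed staging table
    job_config=job_config,
)
load_job.result()  # wait for completion; raises on failure
table = client.get_table("my-edw-project.staging.receivables_raw")
print(f"Loaded table now has {table.num_rows} rows")
```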
Continuously enhance your FMCC domain knowledge, stay current on the latest data engineering practices, and contribute to the company's technical direction while maintaining a customer-centric approach.
Qualifications
Successfully designed and implemented data warehouses and ETL processes for over 5 years, delivering high-quality data solutions.
Exposure to the Fiserv banking solution is desired.
8+ years of complex BigQuery SQL development experience, plus Mainframe (JCL, COBOL), gszutil, and DataStage job development.
Experienced with Mainframe, DataStage, and Autosys.
Experienced with Mainframe file formats, COBOL copybooks, ORC formats, JCL scripts, and related technologies to manage legacy data ingestion.
Design, develop, and maintain ETL processes using IBM DataStage to extract, transform, and load data from Mainframe systems and other sources such as SQL, Oracle, Postgres, AS400, and MF DB2 into the data warehouse.
Develop and modify batch scripts and workflows using Autosys to schedule and automate ETL jobs.
Experienced cloud engineer with 5+ years of GCP expertise, specializing in managing cloud infrastructure and applications in production-scale solutions.
Experience with Cloud Build and App Engine, storage including Cloud Storage, and DevOps tools such as Tekton, GitHub, Terraform, and Docker.
Expert in designing, optimizing, and troubleshooting complex data pipelines.
Experience developing with microservice architecture on a container orchestration framework.
Experience in designing pipelines and architectures for data processing.
Passion and self-motivation to develop, experiment with, and implement state-of-the-art data engineering methods and techniques.
Self-directed; works independently with minimal supervision and adapts to ambiguous environments.
Evidence of a proactive problem-solving mindset and willingness to take the initiative.
Strong prioritization, collaboration, and coordination skills, and the ability to simplify and communicate complex ideas to cross-functional teams and all levels of management.
Proven ability to juggle multiple responsibilities and competing demands while maintaining a high level of productivity.
Desired:
Professional certification in GCP (e.g., Professional Data Engineer).
Master's degree in computer science, software engineering, information systems, data engineering, or a related field.
Data engineering or development experience gained in a regulated financial environment.
Experience with Teradata to GCP migrations is a plus.
Posted 1 week ago
10.0 years
0 Lacs
India
On-site
The Impact
The Director, Data Engineering will lead the development and implementation of a comprehensive data strategy that aligns with the organization's business goals and enables data-driven decision making.
You will:
Build and manage a team of talented data managers and engineers with the ability to not only keep up with, but also pioneer in, this space.
Collaborate with and influence leadership to directly impact company strategy and direction.
Develop new techniques and data pipelines that will enable various insights for internal and external customers.
Develop deep partnerships with client implementation teams, engineering, and product teams to deliver on major cross-functional measurements and testing.
Communicate effectively to all levels of the organization, including executives.
Partner successfully with teams of dramatically varying backgrounds, from the highly technical to the highly creative.
Design a data engineering roadmap and execute the vision behind it.
Hire, lead, and mentor a world-class data team.
Partner with other business areas to co-author and co-drive strategies on our shared roadmap.
Oversee the movement of large amounts of data into our data lake.
Establish a customer-centric approach and synthesize customer needs.
Own end-to-end pipelines and destinations for the transfer and storage of all data.
Manage third-party resources and critical data integration vendors.
Promote a culture that drives autonomy, responsibility, perfection, and mastery.
Maintain and optimize software and cloud expenses to meet the financial goals of the company.
Provide technical leadership to the team in the design and architecture of data products, and drive change across process, practices, and technology within the organization.
Work with engineering managers and functional leads to set direction and ambitious goals for the Engineering department.
Ensure data quality, security, and accessibility across the organization.
About you:
10+ years of experience in data engineering.
5+ years of experience leading data teams of 30+ resources, including selection of talent and planning/allocating resources across multiple geographies and functions.
5+ years of experience with GCP tools and technologies, specifically Google BigQuery, Cloud Composer, Dataflow, Dataform, etc.
Experience creating large-scale data engineering pipelines and data-based decision-making and quantitative analysis tools and software.
Hands-on experience with code version control systems (Git).
Experience with CI/CD, data architectures, pipelines, quality, and code management.
Experience with complex, high-volume, multi-dimensional data based on unstructured, structured, and streaming datasets.
Experience with SQL and NoSQL databases.
Experience creating, testing, and supporting production software and systems.
Proven track record of identifying and resolving performance bottlenecks in production systems.
Experience designing and developing data lake, data warehouse, ETL, and task orchestration systems.
Strong leadership, communication, time management, and interpersonal skills.
Proven architectural skills in data engineering.
Experience leading teams developing production-grade data pipelines on large datasets.
Experience designing large data lakes and lakehouses, managing data flows that integrate information from various sources into a common pool, and implementing data pipelines based on the ETL model.
Experience with common data languages (e.g., Python, Scala) and data warehouses (e.g., Redshift, BigQuery, Snowflake, Databricks).
Extensive experience with cloud tools and technologies; GCP preferred.
Experience managing real-time data pipelines.
Successful track record of demonstrated thought leadership and cross-functional influence and partnership within an agile/waterfall development environment.
Experience in regulated industries or with compliance frameworks (e.g., SOC 2, ISO 27001).
Write to sanish@careerxperts.com to get connected!
Posted 1 week ago
5.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Introduction
A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe. You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio, including Software and Red Hat. Curiosity and a constant quest for knowledge serve as the foundation to success in IBM Consulting. In your role, you'll be encouraged to challenge the norm, investigate ideas outside of your role, and come up with creative solutions resulting in groundbreaking impact for a wide network of clients. Our culture of evolution and empathy centers on long-term career growth and development opportunities in an environment that embraces your unique skills and experience.
Your Role And Responsibilities
In your role, you will be responsible for:
Skilled in multiple GCP services - GCS, BigQuery, Cloud SQL, Dataflow, Pub/Sub, Cloud Run, Workflow, Composer, Error Reporting, Log Explorer, etc.
Must have Python and SQL work experience; proactive, collaborative, and able to respond to critical situations.
Ability to analyse data for functional business requirements and interface directly with the customer.
Preferred Education
Master's Degree
Required Technical And Professional Expertise
5 to 7 years of relevant experience working as a technical analyst with BigQuery on the GCP platform.
Skilled in multiple GCP services - GCS, Cloud SQL, Dataflow, Pub/Sub, Cloud Run, Workflow, Composer, Error Reporting, Log Explorer.
Ambitious individual who can work under their own direction towards agreed targets/goals, with a creative approach to work.
You love collaborative environments that use agile methodologies to encourage creative design thinking and find innovative ways to develop with cutting-edge technologies.
End-to-end functional knowledge of the data pipeline/transformation implementation the candidate has done; should understand the purpose/KPIs for which the data transformation was done.
Preferred Technical And Professional Experience
Experience with AEM core technologies: OSGi services, Apache Sling, Granite framework, Java Content Repository API, Java 8+, localization.
Familiarity with build tools such as Jenkins and Maven; knowledge of version control tools, especially Git; knowledge of patterns and good practices to design and develop quality, clean code; knowledge of HTML, CSS, JavaScript, and jQuery.
Familiarity with task management, bug tracking, and collaboration tools like JIRA and Confluence.
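To illustrate the Python-plus-SQL analysis skill this posting asks for, a minimal hedged sketch of a parameterized BigQuery query run from Python is shown below; the dataset, table, and column names are assumptions.

```python
# Illustrative only: running a parameterized BigQuery query from Python to answer a
# functional business question. Table and column names are assumed, not real.
import datetime
from google.cloud import bigquery

client = bigquery.Client()

query = """
    SELECT region, COUNT(*) AS orders, SUM(amount) AS revenue
    FROM `my-project.sales.orders`          -- assumed table
    WHERE order_date >= @start_date
    GROUP BY region
    ORDER BY revenue DESC
"""
job_config = bigquery.QueryJobConfig(
    query_parameters=[
        bigquery.ScalarQueryParameter("start_date", "DATE", datetime.date(2024, 1, 1))
    ]
)

# Iterating the job waits for results and yields Row objects
for row in client.query(query, job_config=job_config):
    print(row.region, row.orders, row.revenue)
```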
Posted 1 week ago
5.0 years
0 Lacs
Greater Chennai Area
On-site
About The Role
We are looking for a Data Engineer with 5+ years of experience in building modern data pipelines and ETL solutions using SQL, GCP-native services such as BigQuery and Dataflow, and potentially other data processing tools.
Requirements
In-depth knowledge of data warehousing concepts and databases such as BigQuery, Oracle, Teradata, DB2 and PostgreSQL.
Development of data integration processes, potentially involving a micro-services architecture for data flow.
Develops data integration modules and implements data transformations.
Participates in reviews of data models, ETL designs and code.
Assists in the deployment of data pipelines and related components to Deve.
Posted 1 week ago
8.0 years
0 Lacs
Hyderābād
Remote
Why We Work at Dun & Bradstreet
Dun & Bradstreet unlocks the power of data through analytics, creating a better tomorrow. Each day, we are finding new ways to strengthen our award-winning culture and accelerate creativity, innovation and growth. Our 6,000+ global team members are passionate about what we do. We are dedicated to helping clients turn uncertainty into confidence, risk into opportunity and potential into prosperity. Bold and diverse thinkers are always welcome. Come join us! Learn more at dnb.com/careers.
Develop, maintain, and analyze datasets from diverse sources, including mobile and web, government agencies, web crawls, social media, and proprietary datasets, to create insights for our clients, power our platform, and create an innovative market understanding. Create designs and share ideas for creating and improving data pipelines and tools. This role will support maintaining our existing data pipelines and building new pipelines for increased customer insights.
Key Responsibilities:
Collaborate with cross-functional teams to identify and design requirements for advanced systems with respect to processing, analyzing, searching, visualizing, developing, and testing vast datasets to ensure data accuracy.
Implement business requirements by collaborating with stakeholders.
Become familiar with existing application code and achieve a complete understanding of how the applications function.
Maintain data quality by writing validation tests.
Understand a variety of unique data sources.
Create and maintain data documentation, including processing systems and flow diagrams.
Help maintain existing systems, including troubleshooting and resolving alerts.
Expected to meet critical project deadlines.
Excellent organizational, analytical, and decision-making skills.
Excellent verbal, written, and interpersonal communication skills.
Capable of working collaboratively and independently.
Share ideas across teams to spread awareness and use of frameworks and tooling.
Show an ownership mindset in everything you do; be a problem solver, be curious and be inspired to take action, be proactive, and seek ways to collaborate and connect with people and teams in support of driving success.
Maintain a continuous growth mindset; keep learning through social experiences and relationships with stakeholders, experts, colleagues and mentors, and widen and broaden your competencies through structured courses and programs.
Key Skills:
8+ years of experience in data analysis, visualization, and manipulation.
Extensive experience working with GCP services, including BigQuery, Dataflow, Pub/Sub, Cloud Storage, Cloud Run, Cloud Functions and related technologies.
Extensive experience with SQL and relational databases, including optimization and design.
Experience with Amazon Web Services (EC2, RDS, S3, Redshift, EMR, and more).
Experience with OS-level scripting (bash, sed, awk, grep, etc.).
Experience in AdTech, web cookies, and online advertising technologies is a plus.
Testable and efficient Python coding for data processing and analysis.
Familiarity with parallelization of applications on a single machine and across a network of machines.
Expertise in containerized infrastructure and CI/CD systems, including Cloud Build, Docker, Kubernetes, Harness, and GitHub Actions.
Experience with version control tools such as Git, GitHub, and Bitbucket.
Experience with Agile project management tools such as Jira and Confluence.
Experience with object-oriented programming; functional programming a plus.
Analytic tools and ETL/ELT/data pipeline frameworks a plus.
Experience with data visualization tools like Looker, Tableau, or Power BI.
Experience working with global remote teams.
Knowledge of data transformation processes.
Google Cloud certification a plus.
Proficiency in Microsoft Office Suite.
Fluency in English and languages relevant to the team.
This position is internally titled Senior Software Engineer.
All Dun & Bradstreet job postings can be found at https://www.dnb.com/about-us/careers-and-people/joblistings.html and https://jobs.lever.co/dnb. Official communication from Dun & Bradstreet will come from an email address ending in @dnb.com.
Notice to Applicants: Please be advised that this job posting page is hosted and powered by Lever. Your use of this page is subject to Lever's Privacy Notice and Cookie Policy, which governs the processing of visitor data on this platform.
Posted 1 week ago
6.0 years
0 Lacs
Hyderābād
On-site
Job description
Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.
HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.
We are currently seeking an experienced professional to join our team in the role of Marketing Title.
In this role, you will:
Design, develop, and maintain scalable ETL/ELT pipelines using PySpark and Python.
Build and manage real-time data ingestion and streaming pipelines using Apache Kafka.
Develop and optimize data workflows and batch processes on GCP using services like BigQuery, Dataflow, Pub/Sub, and Cloud Composer.
Implement data quality checks, error handling, and monitoring across pipelines.
Collaborate with data scientists, analysts, and business teams to translate requirements into technical solutions.
Ensure best practices in code quality, pipeline reliability, and data governance.
Maintain thorough documentation of processes, tools, and infrastructure.
Requirements
To be successful in this role, you should meet the following requirements:
6+ years of experience in data engineering roles.
Strong programming skills in Python and PySpark.
Solid experience working with Kafka for real-time data processing.
Proven hands-on experience with GCP data tools and architecture.
Familiarity with CI/CD, version control (Git), and workflow orchestration tools (Airflow/Composer).
Strong analytical and problem-solving skills with attention to detail.
Excellent communication and team collaboration skills.
You’ll achieve more when you join HSBC. www.hsbc.com/careers
HSBC is committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.
Issued by – HSBC Software Development India
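As an illustration of the Kafka-based streaming ingestion described above, here is a minimal PySpark Structured Streaming sketch; the broker, topic, schema, and sink paths are assumptions, not HSBC specifics, and the Kafka source additionally requires the spark-sql-kafka connector package.

```python
# Hedged sketch: read JSON events from a Kafka topic and append them to a landing zone.
# Broker address, topic name, and schema are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("kafka-ingest-sketch").getOrCreate()

schema = StructType([
    StructField("txn_id", StringType()),
    StructField("account", StringType()),
    StructField("amount", DoubleType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # assumed broker
    .option("subscribe", "payments")                     # assumed topic
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

query = (
    events.writeStream.format("parquet")                 # a simple landing sink for the sketch
    .option("path", "/tmp/landing/payments")
    .option("checkpointLocation", "/tmp/checkpoints/payments")
    .outputMode("append")
    .start()
)
query.awaitTermination()
```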
Posted 1 week ago
3.0 - 5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Skills and Qualifications:
• Overall 3-5 years of hands-on experience as a Data Engineer, with at least 2-3 years of direct Azure/AWS/GCP Data Engineering experience.
• Strong SQL and Python development skills are mandatory.
• Solid experience in data engineering, working with distributed architectures, ETL/ELT, and big data technologies.
• Demonstrated knowledge and experience with Google Cloud BigQuery is a must.
• Experience with DataProc and Dataflow is highly preferred.
• Strong understanding of serverless data warehousing on GCP and familiarity with DWBI modeling frameworks.
• Extensive experience in SQL across various database platforms.
• Experience in data mapping and data modeling.
• Familiarity with data analytics tools and best practices.
• Hands-on experience with one or more programming/scripting languages such as Python, JavaScript, Java, R, or UNIX Shell.
• Practical experience with Google Cloud services including but not limited to:
o BigQuery, BigTable
o Cloud Dataflow, Cloud Dataproc
o Cloud Storage, Pub/Sub
o Cloud Functions, Cloud Composer
o Cloud Spanner, Cloud SQL
• Knowledge of modern data mining, cloud computing, and data management tools (such as Hadoop, HDFS, and Spark).
• Familiarity with GCP tools like Looker, Airflow DAGs, Data Studio, App Maker, etc.
• Hands-on experience implementing enterprise-wide cloud data lake and data warehouse solutions on GCP.
• GCP Data Engineer Certification is highly preferred
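To illustrate the Dataflow/Pub/Sub/BigQuery combination listed above, a minimal hedged sketch of a streaming Apache Beam pipeline (the SDK behind Cloud Dataflow) is shown below; the subscription, table, and schema are assumptions.

```python
# Hedged sketch of a streaming Beam pipeline: Pub/Sub -> parse JSON -> BigQuery.
# Subscription, table, and field names are illustrative assumptions.
import json
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Add --runner=DataflowRunner plus project/region/temp_location options to run on Dataflow.
options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadPubSub" >> beam.io.ReadFromPubSub(
            subscription="projects/my-proj/subscriptions/events-sub")   # assumed subscription
        | "Parse" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
        | "WriteBQ" >> beam.io.WriteToBigQuery(
            "my-proj:analytics.events",                                  # assumed table
            schema="event_id:STRING,user_id:STRING,ts:TIMESTAMP",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
        )
    )
```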
Posted 1 week ago
5.0 - 10.0 years
10 - 20 Lacs
Chennai
Work from Office
Notice period: Immediate to 15 days
Profile source: Anywhere in India
Timings: 1:00pm-10:00pm
Work Mode: WFO (Mon-Fri)
Job Summary:
We are looking for an experienced and highly skilled Senior Data Engineer to lead the design and development of our data infrastructure and pipelines. As a key member of the Data & Analytics team, you will play a pivotal role in scaling our data ecosystem, driving data engineering best practices, and mentoring junior engineers. This role is ideal for someone who thrives on solving complex data challenges and building systems that power business intelligence, analytics, and advanced data products.
Key Responsibilities:
Design and build robust, scalable, and secure data pipelines.
Lead the complete lifecycle of ETL/ELT processes, encompassing data intake, transformation, and storage, including the concept of SCD Type 2.
Collaborate with data scientists, analysts, backend and product teams to define data requirements and deliver impactful data solutions.
Maintain and oversee the data infrastructure, including cloud storage, processing frameworks, and orchestration tools.
Build logical and physical data models using any data modeling tool.
Champion data governance practices, focusing on data quality, lineage tracking, and cataloging.
Guarantee adherence of data systems to privacy regulations and organizational policies.
Guide junior engineers, conduct code reviews, and foster knowledge sharing and technical best practices within the team.
Required Skills & Qualifications:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Minimum of 5 years of practical experience in a data engineering or comparable role.
Demonstrated expertise in SQL and Python (or similar languages such as Scala/Java).
Extensive experience with data pipeline orchestration tools (e.g., Airflow, dbt, Prefect).
Proficiency in cloud data platforms, including AWS (Redshift, S3, Glue), GCP (BigQuery, Dataflow), or Azure (Data Factory, Synapse).
Familiarity with big data technologies (e.g., Spark, Kafka, Hive) and related data tooling.
Solid grasp of data warehousing principles, data modeling techniques, and performance optimization (e.g., Erwin Data Modeler, MySQL Workbench).
Exceptional problem-solving abilities coupled with a proactive and team-oriented approach.
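Since the responsibilities above call out SCD Type 2 handling, here is a hedged sketch of one common way to implement it as a BigQuery script run from Python; the dataset, table, and column names (dim_customer, segment, and so on) are illustrative assumptions, not this employer's actual model.

```python
# Hedged SCD Type 2 sketch: expire changed dimension rows, then insert fresh current
# versions for changed and brand-new keys. All names are assumed for illustration.
from google.cloud import bigquery

client = bigquery.Client()

scd2_script = """
-- 1) expire the current version of rows whose tracked attribute changed
UPDATE `dw.dim_customer` d
SET is_current = FALSE, valid_to = CURRENT_DATE()
WHERE d.is_current
  AND EXISTS (SELECT 1 FROM `staging.customer` s
              WHERE s.customer_id = d.customer_id AND s.segment != d.segment);

-- 2) insert a fresh current version for changed and brand-new customers
INSERT INTO `dw.dim_customer` (customer_id, segment, valid_from, valid_to, is_current)
SELECT s.customer_id, s.segment, CURRENT_DATE(), DATE '9999-12-31', TRUE
FROM `staging.customer` s
LEFT JOIN `dw.dim_customer` d
  ON d.customer_id = s.customer_id AND d.is_current
WHERE d.customer_id IS NULL OR s.segment != d.segment;
"""

client.query(scd2_script).result()  # runs both statements as one BigQuery script
```

Running the expiry step first means the insert's join no longer finds a current row for changed keys, so they (and brand-new keys) get a new current version while unchanged keys are left alone.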
Posted 1 week ago
5.0 - 8.0 years
5 - 15 Lacs
Chennai
Work from Office
Notice period: Immediate to 15 days
Profile source: Tamil Nadu
Timings: 1:00pm-10:00pm (IST)
Work Mode: WFO (Mon-Fri)
About the Role
We are looking for an experienced and highly skilled Senior Data Engineer to lead the design and development of our data infrastructure and pipelines. As a key member of the Data & Analytics team, you will play a pivotal role in scaling our data ecosystem, driving data engineering best practices, and mentoring junior engineers. This role is ideal for someone who thrives on solving complex data challenges and building systems that power business intelligence, analytics, and advanced data products.
Key Responsibilities
Design and build robust, scalable, and secure data pipelines.
Lead the complete lifecycle of ETL/ELT processes, encompassing data intake, transformation, and storage.
Collaborate with data scientists, analysts, and product teams to define data requirements and deliver impactful data solutions.
Maintain and oversee the data infrastructure, including cloud storage, processing frameworks, and orchestration tools.
Champion data governance practices, focusing on data quality, lineage tracking, and cataloging.
Guarantee adherence of data systems to privacy regulations and organizational policies.
Guide junior engineers, conduct code reviews, and foster knowledge sharing and technical best practices within the team.
Required Qualifications & Skills:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Minimum of 5 years of practical experience in a data engineering or comparable role.
Demonstrated expertise in SQL and Python (or similar languages such as Scala/Java).
Extensive experience with data pipeline orchestration tools (e.g., Airflow, dbt, Prefect).
Proficiency in cloud data platforms, including AWS (Redshift, S3, Glue), GCP (BigQuery, Dataflow), or Azure (Data Factory, Synapse).
Familiarity with big data technologies (e.g., Spark, Kafka, Hive) and contemporary data stack tools.
Solid grasp of data warehousing principles, data modeling techniques, and performance optimization.
Exceptional problem-solving abilities coupled with a proactive and team-oriented approach.
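To illustrate the orchestration-tool requirement above (Airflow/dbt/Prefect), here is a minimal Airflow DAG sketch chaining extract, transform, and load tasks; the DAG id, schedule, and task bodies are placeholders rather than a real pipeline.

```python
# Hedged sketch of pipeline orchestration with Airflow 2.x: a daily DAG running
# extract -> transform -> load in sequence. Names and task bodies are illustrative.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull data from the source system")

def transform():
    print("apply business rules / SCD handling")

def load():
    print("load the curated data into the warehouse")

with DAG(
    dag_id="daily_sales_etl",            # assumed name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load   # define task dependencies
```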
Posted 1 week ago
5.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Equifax is where you can power your possible. If you want to achieve your true potential, chart new paths, develop new skills, collaborate with bright minds, and make a meaningful impact, we want to hear from you.
Equifax is seeking creative, high-energy and driven software engineers with hands-on development skills to work on a variety of meaningful projects. Our software engineering positions provide you the opportunity to join a team of talented engineers working with leading-edge technology. You are ideal for this position if you are a forward-thinking, committed, and enthusiastic software engineer who is passionate about technology.
What You’ll Do
Design, develop, and operate high-scale applications across the full engineering stack.
Design, develop, test, deploy, maintain, and improve software.
Apply modern software development practices (serverless computing, microservices architecture, CI/CD, infrastructure-as-code, etc.).
Work across teams to integrate our systems with existing internal systems, Data Fabric, CSA Toolset.
Participate in technology roadmap and architecture discussions to turn business requirements and vision into reality.
Participate in a tight-knit, globally distributed engineering team.
Triage product or system issues and debug/track/resolve by analyzing the sources of issues and the impact on network or service operations and quality.
Manage sole project priorities, deadlines, and deliverables.
Research, create, and develop software applications to extend and improve on Equifax Solutions.
Collaborate on scalability issues involving access to data and information.
Actively participate in Sprint planning, Sprint Retrospectives, and other team activities.
What Experience You Need
Bachelor's degree or equivalent experience
5+ years of software engineering experience
5+ years experience writing, debugging, and troubleshooting code in mainstream Java, SpringBoot, TypeScript/JavaScript, HTML, CSS
5+ years experience with Cloud technology: GCP, AWS, or Azure
5+ years experience designing and developing cloud-native solutions
5+ years experience designing and developing microservices using Java, SpringBoot, GCP SDKs, GKE/Kubernetes
5+ years experience deploying and releasing software using Jenkins CI/CD pipelines, with an understanding of infrastructure-as-code concepts, Helm Charts, and Terraform constructs
What could set you apart
Self-starter that identifies/responds to priority shifts with minimal supervision.
Experience designing and developing big data processing solutions using Dataflow/Apache Beam, Bigtable, BigQuery, PubSub, GCS, Composer/Airflow, and others
UI development (e.g. HTML, JavaScript, Angular and Bootstrap)
Experience with backend technologies such as JAVA/J2EE, SpringBoot, SOA and Microservices
Source code control management systems (e.g. SVN/Git, GitHub) and build tools like Maven & Gradle
Agile environments (e.g. Scrum, XP)
Relational databases (e.g. SQL Server, MySQL)
Atlassian tooling (e.g. JIRA, Confluence, and GitHub)
Developing with modern JDK (v1.7+)
Automated Testing: JUnit, Selenium, LoadRunner, SoapUI
We offer a hybrid work setting, comprehensive compensation and healthcare packages, attractive paid time off, and organizational growth potential through our online learning platform with guided career tracks. Are you ready to power your possible? Apply today, and get started on a path toward an exciting new career at Equifax, where you can make a difference!
Who is Equifax?
At Equifax, we believe knowledge drives progress.
As a global data, analytics and technology company, we play an essential role in the global economy by helping employers, employees, financial institutions and government agencies make critical decisions with greater confidence. We work to help create seamless and positive experiences during life’s pivotal moments: applying for jobs or a mortgage, financing an education or buying a car. Our impact is real and to accomplish our goals we focus on nurturing our people for career advancement and their learning and development, supporting our next generation of leaders, maintaining an inclusive and diverse work environment, and regularly engaging and recognizing our employees. Regardless of location or role, the individual and collective work of our employees makes a difference and we are looking for talented team players to join us as we help people live their financial best. Equifax is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Position Overview
Job Title: Technology Service Analyst
Location: Pune, India
Role Description
We are looking for a motivated Operation Services Specialist to join our HRIT Production Engineering team. This role requires a professional with a wide variety of strengths and capabilities across multiple technologies and disciplines. You are expected to continually drive automation in all day-to-day production support activities. You'll work in a collaborative, thought-provoking environment that encourages diversity of thought and creative solutions, and requires a high level of accountability to deliver technical solutions and continuous improvements in the overall support strategy. You'll manage all aspects of the HRIT systems and work closely with geographically diverse support and development teams to continue to build a market-leading support organization for Corporate Functions Technology (IT).
What We’ll Offer You
As part of our flexible scheme, here are just some of the benefits that you’ll enjoy:
Best in class leave policy
Gender neutral parental leaves
100% reimbursement under childcare assistance benefit (gender neutral)
Sponsorship for industry-relevant certifications and education
Employee Assistance Program for you and your family members
Comprehensive Hospitalization Insurance for you and your dependents
Accident and Term Life Insurance
Complimentary health screening for 35 yrs. and above
Your Key Responsibilities
Support Production Service operations, covering two shifts a day and weekend coverage for the maintenance schedule.
Develop a good understanding of the activities required to execute production management functions.
Support Service Operations teams in providing an optimum service level to the business lines supported.
Support the resolution of incidents and problems within the team; assist with the resolution of complex incidents; ensure that the right problem-solving techniques and processes are applied.
Undertake specific functions within the relevant production management process as identified for the specific production management area.
Participate in regular meetings with stakeholders, prepare and document meetings, track progress, and communicate to stakeholders.
Collect, interpret and respond to changes in production data, as appropriate.
Track the implementation of resolution tasks.
Provide regular and reliable reporting of relevant data to meet management requirements.
Understand thoroughly the end-to-end application support process and escalation procedures; become fully conversant with all support tools that will be used to provide effective support in the relevant area (i.e. service operations).
Maintain an end-to-end view of the application and infrastructure landscape.
Provide input and contribute to Production Management related audits.
Engage with other Service Operations groups to understand business requirements.
Support the collection, analysis and production of metrics on process data for KPIs to identify improvements.
Work with Release Management and Transition Management on application configuration changes.
Update the run book and KEDB as and when required.
Participate in all BCP and component failure tests based on the run books.
Understand the flow of data through the application infrastructure. It is critical to understand the dataflow so as to best provide operational support.
Event monitoring and management via a 24x7 workbench that is both monitoring and regularly probing the service environment, and acting on instruction of a run book.
Drive knowledge management across the supported applications and ensure full compliance.
Drive continual service improvements.
Work with team members to identify areas of focus where training may improve team performance and incident resolution.
Your Skills And Experience
Skills You'll Need
Extensive IT experience (preferably within large corporate environments and controlled production environments, including financial services technology in a client-facing function) and experience with people/team management and matrix management
Engineering background in Computer Science
An SRE mindset and exposure to implementing SRE, DevOps, and agile methods
Knowledge of databases (Oracle/MSSQL etc.), including working experience of writing Structured Query Language (SQL) scripts and queries
Experience working on UNIX/Linux, Solaris, Java J2EE, PERL, Python, New Relic, PowerShell scripts, Bash, and Ansible
Knowledge of cloud-based technologies (preferably Google Cloud Platform) and of application performance tools (such as New Relic, Splunk, AppDynamics, Grafana)
Skills That Will Help You Excel
Articulate and experienced communicator able to communicate with a broad range of stakeholders and virtual teams
Strong analytical and problem-solving skills
Bachelor's degree in Computer Science or an IT-related discipline (or equivalent work experience or diploma)
ITIL Foundation Certificate
GCP certification is a plus
How We’ll Support You
Training and development to help you excel in your career
Coaching and support from experts in your team
A culture of continuous learning to aid progression
A range of flexible benefits that you can tailor to suit your needs
About Us And Our Teams
Please visit our company website for further information: https://www.db.com/company/company.htm
We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
Posted 1 week ago
4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
About the job
Are you looking for a new career challenge? With LTIMindtree, are you ready to embark on a data-driven career? Working for a global leading manufacturing client, providing an engaging product experience through best-in-class PIM implementation and building rich, relevant, and trusted product information across channels and digital touchpoints so their end customers can make an informed purchase decision, will surely be a fulfilling experience.
Location: Pan India
Email: Shravya.Gone@ltimindtree.com Woonna.Sowmya@ltimindtree.com
Job Requirement
Overall, more than 5 years of experience in data projects.
Good knowledge of GCP, BigQuery, SQL, Python, and Dataflow.
Has worked on implementation projects building data pipelines, transformation logic, and data models.
Responsibility:
• GCP Data Engineer, belonging to Data Management Engineering.
• Education: Bachelor of Engineering in any discipline or equivalent.
Desired Candidate Profile – Technology Engineering Expertise
• 4 years of experience in implementing data solutions using GCP, BigQuery, and SQL programming.
• Proficient in dealing with the data access layer: RDBMS, NoSQL.
• Experience in implementing and deploying big data applications with GCP Big Data Services.
• Good to have SQL skills.
• Able to deal with a diverse set of stakeholders.
• Proficient in articulation, communication, and presentation.
• High integrity.
• Problem-solving skills and a learning attitude.
• Team player.
Key Responsibilities
• Implement data solutions using GCP; needs to be familiar with programming in SQL and Python.
• Ensure clarity on NFRs and implement these requirements.
• Work with the Client Technical Manager by understanding the customer's landscape and IT priorities.
• Lead performance engineering and capacity planning exercises for databases.
Technology Engineering Expertise
• 4 years of experience in implementing data pipelines for data analytics solutions.
• Experience in solutions using Google Cloud Dataflow, Apache Beam, and Java programming.
• Proficient in dealing with the data access layer: RDBMS, NoSQL.
• Experience in implementing and deploying big data applications with GCP Big Data Services.
• Good to have SQL skills.
• Experience with different development methodologies: RUP, Scrum, XP.
Soft Skills
• Able to deal with a diverse set of stakeholders.
• Proficient in articulation, communication, and presentation.
• High integrity.
• Problem-solving skills and a learning attitude.
• Team player.
Mandatory Skills: GCP, BigQuery, Python, SQL
Why join us?
Work in industry-leading implementations for Tier-1 clients. Accelerated career growth and global exposure. Collaborative, inclusive work environment rooted in innovation. Exposure to a best-in-class automation framework. Innovation-first culture: we embrace automation, AI insights and clean data.
Know someone who fits this perfectly? Tag them – let's connect the right talent with the right opportunity. DM or email to know more.
Let's build something great together
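As a small illustration of the "data pipelines, transformation logic, and data models" requirement, here is a hedged sketch that rebuilds a date-partitioned BigQuery reporting table from a staging table via the Python client; all dataset, table, and column names are assumptions.

```python
# Hedged illustration: build a partitioned, clustered reporting table (a simple data-model
# artifact) from staged order lines. Every name here is assumed for the sketch.
from google.cloud import bigquery

client = bigquery.Client()

ddl = """
CREATE OR REPLACE TABLE `analytics.daily_product_sales`
PARTITION BY sale_date
CLUSTER BY product_id AS
SELECT
  DATE(order_ts)        AS sale_date,
  product_id,
  SUM(quantity)         AS units_sold,
  SUM(quantity * price) AS revenue
FROM `staging.order_lines`
GROUP BY sale_date, product_id;
"""

client.query(ddl).result()  # rebuild the reporting table in place
print("reporting table rebuilt")
```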
Posted 1 week ago
4.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
About the job
Are you looking for a new career challenge? With LTIMindtree, are you ready to embark on a data-driven career? Working for a global leading manufacturing client, providing an engaging product experience through best-in-class PIM implementation and building rich, relevant, and trusted product information across channels and digital touchpoints so their end customers can make an informed purchase decision, will surely be a fulfilling experience.
Location: Pan India
Email: Shravya.Gone@ltimindtree.com Woonna.Sowmya@ltimindtree.com
Job Requirement
Overall, more than 5 years of experience in data projects.
Good knowledge of GCP, BigQuery, SQL, Python, and Dataflow.
Has worked on implementation projects building data pipelines, transformation logic, and data models.
Responsibility:
• GCP Data Engineer, belonging to Data Management Engineering.
• Education: Bachelor of Engineering in any discipline or equivalent.
Desired Candidate Profile – Technology Engineering Expertise
• 4 years of experience in implementing data solutions using GCP, BigQuery, and SQL programming.
• Proficient in dealing with the data access layer: RDBMS, NoSQL.
• Experience in implementing and deploying big data applications with GCP Big Data Services.
• Good to have SQL skills.
• Able to deal with a diverse set of stakeholders.
• Proficient in articulation, communication, and presentation.
• High integrity.
• Problem-solving skills and a learning attitude.
• Team player.
Key Responsibilities
• Implement data solutions using GCP; needs to be familiar with programming in SQL and Python.
• Ensure clarity on NFRs and implement these requirements.
• Work with the Client Technical Manager by understanding the customer's landscape and IT priorities.
• Lead performance engineering and capacity planning exercises for databases.
Technology Engineering Expertise
• 4 years of experience in implementing data pipelines for data analytics solutions.
• Experience in solutions using Google Cloud Dataflow, Apache Beam, and Java programming.
• Proficient in dealing with the data access layer: RDBMS, NoSQL.
• Experience in implementing and deploying big data applications with GCP Big Data Services.
• Good to have SQL skills.
• Experience with different development methodologies: RUP, Scrum, XP.
Soft Skills
• Able to deal with a diverse set of stakeholders.
• Proficient in articulation, communication, and presentation.
• High integrity.
• Problem-solving skills and a learning attitude.
• Team player.
Mandatory Skills: GCP, BigQuery, Python, SQL
Why join us?
Work in industry-leading implementations for Tier-1 clients. Accelerated career growth and global exposure. Collaborative, inclusive work environment rooted in innovation. Exposure to a best-in-class automation framework. Innovation-first culture: we embrace automation, AI insights and clean data.
Know someone who fits this perfectly? Tag them – let's connect the right talent with the right opportunity. DM or email to know more.
Let's build something great together
Posted 1 week ago
4.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
About the job
Are you looking for a new career challenge? With LTIMindtree, are you ready to embark on a data-driven career? Working for a global leading manufacturing client, providing an engaging product experience through best-in-class PIM implementation and building rich, relevant, and trusted product information across channels and digital touchpoints so their end customers can make an informed purchase decision, will surely be a fulfilling experience.
Location: Pan India
Email: Shravya.Gone@ltimindtree.com Woonna.Sowmya@ltimindtree.com
Job Requirement
Overall, more than 5 years of experience in data projects.
Good knowledge of GCP, BigQuery, SQL, Python, and Dataflow.
Has worked on implementation projects building data pipelines, transformation logic, and data models.
Responsibility:
• GCP Data Engineer, belonging to Data Management Engineering.
• Education: Bachelor of Engineering in any discipline or equivalent.
Desired Candidate Profile – Technology Engineering Expertise
• 4 years of experience in implementing data solutions using GCP, BigQuery, and SQL programming.
• Proficient in dealing with the data access layer: RDBMS, NoSQL.
• Experience in implementing and deploying big data applications with GCP Big Data Services.
• Good to have SQL skills.
• Able to deal with a diverse set of stakeholders.
• Proficient in articulation, communication, and presentation.
• High integrity.
• Problem-solving skills and a learning attitude.
• Team player.
Key Responsibilities
• Implement data solutions using GCP; needs to be familiar with programming in SQL and Python.
• Ensure clarity on NFRs and implement these requirements.
• Work with the Client Technical Manager by understanding the customer's landscape and IT priorities.
• Lead performance engineering and capacity planning exercises for databases.
Technology Engineering Expertise
• 4 years of experience in implementing data pipelines for data analytics solutions.
• Experience in solutions using Google Cloud Dataflow, Apache Beam, and Java programming.
• Proficient in dealing with the data access layer: RDBMS, NoSQL.
• Experience in implementing and deploying big data applications with GCP Big Data Services.
• Good to have SQL skills.
• Experience with different development methodologies: RUP, Scrum, XP.
Soft Skills
• Able to deal with a diverse set of stakeholders.
• Proficient in articulation, communication, and presentation.
• High integrity.
• Problem-solving skills and a learning attitude.
• Team player.
Mandatory Skills: GCP, BigQuery, Python, SQL
Why join us?
Work in industry-leading implementations for Tier-1 clients. Accelerated career growth and global exposure. Collaborative, inclusive work environment rooted in innovation. Exposure to a best-in-class automation framework. Innovation-first culture: we embrace automation, AI insights and clean data.
Know someone who fits this perfectly? Tag them – let's connect the right talent with the right opportunity. DM or email to know more.
Let's build something great together
Posted 1 week ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About the job
Are you looking for a new career challenge? With LTIMindtree, are you ready to embark on a data-driven career? Working for a global leading manufacturing client, providing an engaging product experience through best-in-class PIM implementation and building rich, relevant, and trusted product information across channels and digital touchpoints so their end customers can make an informed purchase decision, will surely be a fulfilling experience.
Location: Pan India
Email: Shravya.Gone@ltimindtree.com Woonna.Sowmya@ltimindtree.com
Job Requirement
Overall, more than 5 years of experience in data projects.
Good knowledge of GCP, BigQuery, SQL, Python, and Dataflow.
Has worked on implementation projects building data pipelines, transformation logic, and data models.
Responsibility:
• GCP Data Engineer, belonging to Data Management Engineering.
• Education: Bachelor of Engineering in any discipline or equivalent.
Desired Candidate Profile – Technology Engineering Expertise
• 4 years of experience in implementing data solutions using GCP, BigQuery, and SQL programming.
• Proficient in dealing with the data access layer: RDBMS, NoSQL.
• Experience in implementing and deploying big data applications with GCP Big Data Services.
• Good to have SQL skills.
• Able to deal with a diverse set of stakeholders.
• Proficient in articulation, communication, and presentation.
• High integrity.
• Problem-solving skills and a learning attitude.
• Team player.
Key Responsibilities
• Implement data solutions using GCP; needs to be familiar with programming in SQL and Python.
• Ensure clarity on NFRs and implement these requirements.
• Work with the Client Technical Manager by understanding the customer's landscape and IT priorities.
• Lead performance engineering and capacity planning exercises for databases.
Technology Engineering Expertise
• 4 years of experience in implementing data pipelines for data analytics solutions.
• Experience in solutions using Google Cloud Dataflow, Apache Beam, and Java programming.
• Proficient in dealing with the data access layer: RDBMS, NoSQL.
• Experience in implementing and deploying big data applications with GCP Big Data Services.
• Good to have SQL skills.
• Experience with different development methodologies: RUP, Scrum, XP.
Soft Skills
• Able to deal with a diverse set of stakeholders.
• Proficient in articulation, communication, and presentation.
• High integrity.
• Problem-solving skills and a learning attitude.
• Team player.
Mandatory Skills: GCP, BigQuery, Python, SQL
Why join us?
Work in industry-leading implementations for Tier-1 clients. Accelerated career growth and global exposure. Collaborative, inclusive work environment rooted in innovation. Exposure to a best-in-class automation framework. Innovation-first culture: we embrace automation, AI insights and clean data.
Know someone who fits this perfectly? Tag them – let's connect the right talent with the right opportunity. DM or email to know more.
Let's build something great together
Posted 1 week ago
6.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Microsoft is a company where passionate innovators come to collaborate, envision what can be, and take their careers further. This is a world of more possibilities, more innovation, more openness, and sky-is-the-limit thinking in a cloud-enabled world.
Microsoft’s Azure Data engineering team is leading the transformation of analytics in the world of data with products like databases, data integration, big data analytics, messaging & real-time analytics, and business intelligence. The products in our portfolio include Microsoft Fabric, Azure SQL DB, Azure Cosmos DB, Azure PostgreSQL, Azure Data Factory, Azure Synapse Analytics, Azure Service Bus, Azure Event Grid, and Power BI. Our mission is to build the data platform for the age of AI, powering a new class of data-first applications and driving a data culture.
Within Azure Data, the data integration team builds data gravity on the Microsoft Cloud. Massive volumes of data are generated – not just from transactional systems of record, but also from the world around us. Our data integration products – Azure Data Factory and Power Query – make it easy for customers to bring in, clean, shape, and join data, to extract intelligence.
We’re the team that developed the Mashup Engine (M) and Power Query. We already ship monthly to millions of users across Excel, Power/Pro BI, Flow, and PowerApps; but in many ways we’re just getting started. We’re building new services, experiences, and engine capabilities that will broaden the reach of our technologies to several new areas – data “intelligence”, large-scale data analytics, and automated data integration workflows. We plan to use example-based interaction, machine learning, and innovative visualization to make data access and transformation even more intuitive for non-technical users.
We do not just value differences or different perspectives. We seek them out and invite them in so we can tap into the collective power of everyone in the company. As a result, our customers are better served.
Responsibilities
Engine layer: designing and implementing components for dataflow orchestration, distributed querying, query translation, connecting to external data sources, and script parsing/interpretation.
Service layer: designing and implementing infrastructure for a containerized, microservices-based, high-throughput architecture.
UI layer: designing and implementing performant, engaging web user interfaces for data visualization/exploration/transformation/connectivity and dataflow management.
Embody our culture and values.
Qualifications
Required/Minimum Qualifications
Bachelor's Degree in Computer Science or a related technical discipline AND 6+ years of technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python, OR equivalent experience.
Experience in data integration, migrations, or ELT/ETL tooling is mandatory.
Other Requirements
Ability to meet Microsoft, customer and/or government security screening requirements is required for this role. These requirements include, but are not limited to, the following specialized security screenings: Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter.
Preferred/Additional Qualifications
BS degree in Computer Science
Engine role: familiarity with data access technologies (e.g. ODBC, JDBC, OLEDB, ADO.Net, OData), query languages (e.g. T-SQL, Spark SQL, Hive, MDX, DAX), query generation/optimization, OLAP
UI role: familiarity with JavaScript, TypeScript, CSS, React, Redux, webpack
Service role: familiarity with micro-service architectures, Docker, Service Fabric, Azure blobs/tables/databases, high-throughput services
Full-stack role: a mix of the qualifications for the UX/service/backend roles
Equal Opportunity Employer (EOP)
#azdat #azuredata #microsoftfabric #dataintegration
Microsoft is an equal opportunity employer. Consistent with applicable law, all qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
Posted 1 week ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Microsoft is a company where passionate innovators come to collaborate, envision what can be and take their careers further. This is a world of more possibilities, more innovation, more openness, and the sky is the limit thinking in a cloud-enabled world. Microsoft’s Azure Data engineering team is leading the transformation of analytics in the world of data with products like databases, data integration, big data analytics, messaging & real-time analytics, and business intelligence. The products our portfolio include Microsoft Fabric, Azure SQL DB, Azure Cosmos DB, Azure PostgreSQL, Azure Data Factory, Azure Synapse Analytics, Azure Service Bus, Azure Event Grid, and Power BI. Our mission is to build the data platform for the age of AI, powering a new class of data-first applications and driving a data culture. Within Azure Data, the data integration team builds data gravity on the Microsoft Cloud. Massive volumes of data are generated – not just from transactional systems of record, but also from the world around us. Our data integration products – Azure Data Factory and Power Query make it easy for customers to bring in, clean, shape, and join data, to extract intelligence. Microsoft’s Azure Data engineering team is leading the transformation of analytics in the world of data with products like databases, data integration, big data analytics, messaging & real-time analytics, and business intelligence. The products our portfolio include Microsoft Fabric, Azure SQL DB, Azure Cosmos DB, Azure PostgreSQL, Azure Data Factory, Azure Synapse Analytics, Azure Service Bus, Azure Event Grid, and Power BI. Our mission is to build the data platform for the age of AI, powering a new class of data-first applications and driving a data culture. We’re the team that developed the Mashup Engine (M) and Power Query. We already ship monthly to millions of users across Excel, Power/Pro BI, Flow, and PowerApps; but in many ways we’re just getting started. We’re building new services, experiences, and engine capabilities that will broaden the reach of our technologies to several new areas – data “intelligence”, large-scale data analytics, and automated data integration workflows. We plan to use example-based interaction, machine learning, and innovative visualization to make data access and transformation even more intuitive for non-technical users. We do not just value differences or different perspectives. We seek them out and invite them in so we can tap into the collective power of everyone in the company. As a result, our customers are better served. Responsibilities Engine layer: designing and implementing components for dataflow orchestration, distributed querying, query translation, connecting to external data sources, and script parsing/interpretation Service layer: designing and implementing infrastructure for a containerized, micro services based, high throughput architecture UI layer: designing and implementing performant, engaging web user interfaces for data visualization/exploration/transformation/connectivity and dataflow management Embody our culture and values Qualifications Required/Minimum Qualifications Bachelor's Degree in Computer Science, or related technical discipline AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python ○ OR equivalent experience. 
Experience in data integration, migrations, ELT, or ETL tooling is mandatory.
Other Requirements
Ability to meet Microsoft, customer, and/or government security screening requirements is required for this role. These requirements include, but are not limited to, the following specialized security screenings: Microsoft Cloud Background Check: this position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter.
Preferred/Additional Qualifications
BS degree in Computer Science
Engine role: familiarity with data access technologies (e.g. ODBC, JDBC, OLEDB, ADO.NET, OData), query languages (e.g. T-SQL, Spark SQL, Hive, MDX, DAX), query generation/optimization, and OLAP
UI role: familiarity with JavaScript, TypeScript, CSS, React, Redux, and webpack
Service role: familiarity with microservice architectures, Docker, Service Fabric, Azure blobs/tables/databases, and high-throughput services
Full-stack role: a mix of the qualifications for the UX/service/backend roles
Equal Opportunity Employer
#azdat #azuredata #microsoftfabric #dataintegration
Microsoft is an equal opportunity employer. Consistent with applicable law, all qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
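As a hedged illustration of the data-access side of the engine role above, here is a minimal Python sketch using pyodbc to push a T-SQL aggregation down to an external source over ODBC. The driver name, server, database, credentials, and the Orders table are placeholder assumptions, not details from the posting.

```python
import pyodbc  # ODBC data access from Python

# Hypothetical connection string -- substitute your own driver, server,
# database, and credentials.
conn_str = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=example-server.database.windows.net;"
    "DATABASE=SalesDb;"
    "UID=report_user;PWD=<password>;"
)

conn = pyodbc.connect(conn_str)
cursor = conn.cursor()

# A simple T-SQL aggregation, the kind of query an engine-layer component
# might generate and push down to the external source.
cursor.execute(
    "SELECT region, SUM(amount) AS total FROM dbo.Orders GROUP BY region"
)
for region, total in cursor.fetchall():
    print(region, total)

conn.close()
```

The same open-connection, push-query-down, stream-results-back pattern carries over to JDBC- or OLEDB-style connectors.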
Posted 1 week ago
4.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
About the job
Are you looking for a new career challenge? With LTIMindtree, are you ready to embark on a data-driven career? Working for a global leading manufacturing client – providing an engaging product experience through a best-in-class PIM implementation and building rich, relevant, and trusted product information across channels and digital touchpoints so their end customers can make an informed purchase decision – will surely be a fulfilling experience.
Location: Pan India
Email: Shravya.Gone@ltimindtree.com Woonna.Sowmya@ltimindtree.com
Job Requirement
Overall, more than 5 years of experience in data projects
Good knowledge of GCP: BigQuery, SQL, Python, and Dataflow skills
Has worked on implementation projects building data pipelines, transformation logic, and data models
Responsibility: GCP Data Engineer (belongs to Data Management – Engineering)
Education: Bachelor of Engineering in any discipline or equivalent
Desired Candidate Profile
Technology/Engineering Expertise
• 4 years of experience in implementing data solutions using GCP BigQuery and SQL programming
• Proficient in dealing with the data access layer (RDBMS, NoSQL)
• Experience in implementing and deploying big data applications with GCP big data services
• Good to have SQL skills
• Able to deal with a diverse set of stakeholders
• Proficient in articulation, communication, and presentation
• High integrity
• Problem-solving skills and a learning attitude
• Team player
Key Responsibilities
• Implement data solutions using GCP; needs to be familiar with programming in SQL and Python
• Ensure clarity on NFRs and implement these requirements
• Work with the Client Technical Manager to understand the customer's landscape and their IT priorities
• Lead performance engineering and capacity planning exercises for databases
Technology/Engineering Expertise
• 4 years of experience in implementing data pipelines for data analytics solutions
• Experience in solutions using Google Cloud Dataflow, Apache Beam, and Java programming
• Proficient in dealing with the data access layer (RDBMS, NoSQL)
• Experience in implementing and deploying big data applications with GCP big data services
• Good to have SQL skills
• Experience with different development methodologies (RUP, Scrum, XP)
Soft Skills
• Able to deal with a diverse set of stakeholders
• Proficient in articulation, communication, and presentation
• High integrity
• Problem-solving skills and a learning attitude
• Team player
Mandatory Skills: GCP, BigQuery, Python, SQL
Why join us?
Work on industry-leading implementations for Tier-1 clients
Accelerated career growth and global exposure
Collaborative, inclusive work environment rooted in innovation
Exposure to a best-in-class automation framework
Innovation-first culture: we embrace automation, AI insights, and clean data
Know someone who fits this perfectly? Tag them – let’s connect the right talent with the right opportunity. DM or email to know more. Let’s build something great together.
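To make the "GCP BigQuery, SQL, Python, Dataflow" requirement concrete, here is a minimal, hypothetical Apache Beam (Python SDK) pipeline of the kind a Dataflow job might run: it reads a CSV landing file from Cloud Storage, applies a small transformation, and appends rows to a BigQuery table. The project, region, bucket, dataset, and table names are illustrative assumptions only.

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Hypothetical project, bucket, and table names -- replace with real ones.
options = PipelineOptions(
    runner="DataflowRunner",          # use "DirectRunner" for local testing
    project="my-gcp-project",
    region="asia-south1",
    temp_location="gs://my-bucket/tmp",
)

def parse_line(line: str) -> dict:
    """Turn 'product_id,channel,price' CSV rows into BigQuery-ready dicts."""
    product_id, channel, price = line.split(",")
    return {"product_id": product_id, "channel": channel, "price": float(price)}

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadCsv" >> beam.io.ReadFromText(
            "gs://my-bucket/landing/products.csv", skip_header_lines=1)
        | "Parse" >> beam.Map(parse_line)
        | "WriteToBQ" >> beam.io.WriteToBigQuery(
            "my-gcp-project:pim.product_prices",
            schema="product_id:STRING,channel:STRING,price:FLOAT",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
        )
    )
```

Switching the runner to DirectRunner lets the same pipeline be exercised locally before it is submitted to Dataflow.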
Posted 1 week ago
3.0 - 4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description
Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.
HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.
We are currently seeking an experienced professional to join our team in the role of Senior Software Engineer.
In this role, you will:
Be a part of the multi-skill POD which is responsible for feature as well as technical delivery.
Be responsible for development of the API, database, microservice, or dataflow job, as the case may be, with a focus on high-quality code.
Participate in refinement sessions, understand the change/story, perform as-is analysis, and come up with the technical implementation.
Perform code reviews, raise change requests, present in CAB for approval, and proceed with the release.
Collaborate with the test team to get the code thoroughly tested.
Requirements
To be successful in this role, you should meet the following requirements:
3-4 years of overall development experience using Java and Spring Boot as key technologies.
Working knowledge of APIs and knowledge of microservices architecture.
Strong hands-on experience in core Java (Java 8/11/17) and Spring Boot.
Should be aware of Spring Boot configuration, exception handling, Spring profiles, and Spring Batch.
Should be aware of different logging mechanisms.
Strong hands-on experience in REST APIs, JUnit, Mockito, etc.
Should be familiar with different code quality tools.
Should have working knowledge of different microservice design patterns.
The concept of API gateways should be clear.
Good to have knowledge of Google Cloud Platform and associated services.
You’ll achieve more when you join HSBC. www.hsbc.com/careers
HSBC is committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.
Issued by – HSBC Software Development India
Posted 1 week ago
4.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
About the job
Are you looking for a new career challenge? With LTIMindtree, are you ready to embark on a data-driven career? Working for a global leading manufacturing client – providing an engaging product experience through a best-in-class PIM implementation and building rich, relevant, and trusted product information across channels and digital touchpoints so their end customers can make an informed purchase decision – will surely be a fulfilling experience.
Location: Pan India
Email: Shravya.Gone@ltimindtree.com Woonna.Sowmya@ltimindtree.com
Job Requirement
Overall, more than 5 years of experience in data projects
Good knowledge of GCP: BigQuery, SQL, Python, and Dataflow skills
Has worked on implementation projects building data pipelines, transformation logic, and data models
Responsibility: GCP Data Engineer (belongs to Data Management – Engineering)
Education: Bachelor of Engineering in any discipline or equivalent
Desired Candidate Profile
Technology/Engineering Expertise
• 4 years of experience in implementing data solutions using GCP BigQuery and SQL programming
• Proficient in dealing with the data access layer (RDBMS, NoSQL)
• Experience in implementing and deploying big data applications with GCP big data services
• Good to have SQL skills
• Able to deal with a diverse set of stakeholders
• Proficient in articulation, communication, and presentation
• High integrity
• Problem-solving skills and a learning attitude
• Team player
Key Responsibilities
• Implement data solutions using GCP; needs to be familiar with programming in SQL and Python
• Ensure clarity on NFRs and implement these requirements
• Work with the Client Technical Manager to understand the customer's landscape and their IT priorities
• Lead performance engineering and capacity planning exercises for databases
Technology/Engineering Expertise
• 4 years of experience in implementing data pipelines for data analytics solutions
• Experience in solutions using Google Cloud Dataflow, Apache Beam, and Java programming
• Proficient in dealing with the data access layer (RDBMS, NoSQL)
• Experience in implementing and deploying big data applications with GCP big data services
• Good to have SQL skills
• Experience with different development methodologies (RUP, Scrum, XP)
Soft Skills
• Able to deal with a diverse set of stakeholders
• Proficient in articulation, communication, and presentation
• High integrity
• Problem-solving skills and a learning attitude
• Team player
Mandatory Skills: GCP, BigQuery, Python, SQL
Why join us?
Work on industry-leading implementations for Tier-1 clients
Accelerated career growth and global exposure
Collaborative, inclusive work environment rooted in innovation
Exposure to a best-in-class automation framework
Innovation-first culture: we embrace automation, AI insights, and clean data
Know someone who fits this perfectly? Tag them – let’s connect the right talent with the right opportunity. DM or email to know more. Let’s build something great together.
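Complementing the pipeline skills listed above, here is a small, hedged sketch of the SQL-plus-Python side of the role: running a parameterized query from the google-cloud-bigquery client. The project, dataset, table, and column names are made up for illustration.

```python
from google.cloud import bigquery

# Hypothetical project/dataset/table names -- adjust to your environment.
client = bigquery.Client(project="my-gcp-project")

sql = """
    SELECT channel, COUNT(*) AS products, AVG(price) AS avg_price
    FROM `my-gcp-project.pim.product_prices`
    WHERE channel = @channel
    GROUP BY channel
"""

# Bind the filter value as a query parameter instead of string formatting.
job_config = bigquery.QueryJobConfig(
    query_parameters=[bigquery.ScalarQueryParameter("channel", "STRING", "web")]
)

for row in client.query(sql, job_config=job_config).result():
    print(row.channel, row.products, row.avg_price)
```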
Posted 1 week ago
0 years
0 Lacs
Kolkata, West Bengal, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.
Business Consultant P&C (Property & Casualty – Personal and Commercial Insurance)
Candidates should have experience working in Property & Casualty lines (both Personal and Commercial Insurance) and should be familiar with one or more functional processes – PC, BC, CC (preferably Guidewire/Duckcreek).
LOBs – Lines of Business (Personal and Commercial Lines):
Must have: Property, Auto, General Liability
Good to have – Casualty Lines: Professional Liability, Directors & Officers, Errors & Omissions, EPL, etc.; Inland Marine, Cargo; Workers Compensation; Umbrella, Excess Liability
Roles and Responsibilities:
Worked on multiple business transformation, upgrade, and modernization programs.
Requirements gathering and elicitation – writing BRDs and FSDs.
Conducting JAD sessions and workshops to capture requirements and working closely with the Product Owner.
Work with the client to define the most optimal future-state operational process and related product configuration.
Define scope by providing innovative solutions and challenging all new client requirements and change requests, while ensuring that the client gets the required business value.
Elaborate and deliver clearly defined requirement documents with relevant dataflow and process flow diagrams.
Work closely with the product design and development team to analyse and extract functional enhancements.
Provide product consultancy and assist the client with acceptance criteria gathering and support throughout the project life cycle.
Product Experience/Other Skills:
Product knowledge – Guidewire, Duckcreek, Exigent, Majesco (preferably Guidewire/Duckcreek).
Strong skills in stakeholder management and communication.
Should have end-to-end process knowledge in the P&C insurance domain.
Should be ready to work in flexible shifts (a good amount of overlap with US/UK hours).
Good organizational and time management skills required.
Should have good written and verbal communication skills in English.
Industry certifications AINS 21 – Property and Liability Insurance Principles, AINS 22 – Personal Insurance, AINS 23 – Commercial Insurance, and AINS 24 – General Insurance for IT and Support Professionals will be an added advantage.
Additional experience in Life or another insurance domain is an added advantage.
We expect you to work effectively as a team member and build good relationships with the client. You will have the opportunity to expand your domain knowledge and skills and will be able to collaborate frequently with other EY professionals with a wide variety of expertise.
EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 1 week ago
2.0 years
0 Lacs
India
On-site
Hi, please find the job description below.
Job Title: GCP Data Modeler
Duration: Full Time
Location: Hybrid
Locations: Hyderabad, Chennai, Bengaluru, Pune, Nagpur
Job Description: Experience with GCP, BigQuery, Dataflow, LookML, Looker, SQL, and Python
Job Description: Senior Data Modeler with Expertise in GCP and Looker
Overview: We are seeking a highly skilled and experienced Data Modeler to join our data and analytics team. The ideal candidate will have deep expertise in data modeling, particularly with Google Cloud Platform (GCP), and a strong background in managing complex data projects. This role involves designing scalable data models, optimizing workflows, and ensuring seamless data integration to support strategic business decisions.
Key Responsibilities:
Data Modeling: Design, develop, and maintain conceptual, logical, and physical data models to support data warehousing and analytics needs. Ensure data models are scalable, efficient, and aligned with business requirements.
Database Design: Create and optimize database schemas, tables, views, indexes, and other database objects in Google BigQuery. Implement best practices for database design to ensure data integrity and performance.
ETL Processes: Design and implement ETL (Extract, Transform, Load) processes to integrate data from various source systems into BigQuery. Use tools like Google Cloud Dataflow, Apache Beam, or other ETL tools to automate data pipelines.
Data Integration: Work closely with data engineers to ensure seamless integration and consistency of data across different platforms. Integrate data from on-premises systems, third-party applications, and other cloud services into GCP.
Data Governance: Implement data governance practices to ensure data quality, consistency, and security. Define and enforce data standards, naming conventions, and documentation.
Performance Optimization: Optimize data storage, processing, and retrieval to ensure high performance and scalability. Use partitioning, clustering, and other optimization techniques in BigQuery.
Collaboration: Collaborate with business stakeholders, data scientists, and analysts to understand data requirements and translate them into effective data models. Provide technical guidance and mentorship to junior team members.
Data Visualization: Work with data visualization tools like Looker, Looker Studio, or Tableau to create interactive dashboards and reports. Develop LookML models in Looker to enable efficient data querying and visualization.
Documentation: Document data models, ETL processes, and data integration workflows. Maintain up-to-date documentation to facilitate knowledge sharing and onboarding of new team members.
Required Expertise:
Looker: 2-5+ years of strong proficiency in Looker, including LookML, dashboard creation, and report development.
BigQuery: 5+ years of extensive experience with Google BigQuery, including data warehousing, SQL querying, and performance optimization.
SQL & Python: 10+ years of advanced SQL and Python skills for data manipulation, querying, and modeling.
ETL: 10+ years of hands-on experience with ETL processes and tools for data integration from various source systems.
Cloud Services: Familiarity with Google Cloud Platform (GCP) services, particularly BigQuery, Cloud Storage, and Dataflow.
Data Modeling Techniques: Proficiency in various data modeling techniques such as star schema, snowflake schema, normalized and denormalized models, and dimensional modeling. Knowledge of data modeling frameworks, including Data Mesh, Data Vault, Medallion architecture, and methodologies by Kimball and Inmon, is highly advantageous.
Problem-Solving: Excellent problem-solving skills and the ability to work on complex, ambiguous projects.
Communication: Strong communication and collaboration skills, with the ability to work effectively in a team environment.
Project Delivery: Proven track record of delivering successful data projects and driving business value through data insights.
Preferred Qualifications:
Education: Bachelor's or Master's degree in Data Science, Computer Science, Information Systems, or a related field.
Certifications: Google Cloud certification relevant to data modeling or data engineering.
Visualization Tools: Experience with other data visualization tools such as Looker, Looker Studio, and Tableau.
Programming: Familiarity with programming languages such as Python for data manipulation and analysis.
Data Warehousing: Knowledge of data warehousing concepts and best practices.
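As a small, hedged example of the "partitioning, clustering, and other optimization techniques in BigQuery" responsibility described above, the sketch below uses the google-cloud-bigquery Python client to create a date-partitioned, clustered fact table for a star-schema model. The project, dataset, table, and column names are assumptions made for illustration.

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")  # hypothetical project id

# A denormalized fact table for a star schema, partitioned by date and
# clustered on common filter columns to reduce scan costs in BigQuery.
schema = [
    bigquery.SchemaField("order_id", "STRING", mode="REQUIRED"),
    bigquery.SchemaField("customer_id", "STRING"),
    bigquery.SchemaField("region", "STRING"),
    bigquery.SchemaField("order_date", "DATE", mode="REQUIRED"),
    bigquery.SchemaField("amount", "NUMERIC"),
]

table = bigquery.Table("my-gcp-project.analytics.fact_orders", schema=schema)
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY, field="order_date"
)
table.clustering_fields = ["region", "customer_id"]

created = client.create_table(table, exists_ok=True)
print(f"Created {created.full_table_id}, partitioned on order_date")
```

Queries that filter on order_date and region then prune partitions and clustered blocks instead of scanning the whole table.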
Posted 1 week ago
0 years
0 Lacs
Trivandrum, Kerala, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Business Consultant P&C (Property & Casualty - Personal and Commercial Insurance) Candidate should have experience in working in Property & Casualty lines (both Personal and Commercial Insurance), should be familiar with anyone or more functional process – PC, BC, CC. (Preferred Guidewire/Duckcree LOBS Line of Business (Personal and Commercial Lines): must have Property Auto General Liability Good to have - Casualty Lines Professional Liability, Directors & Officers, Errors & Omissions, EPL, etc Inland Marine, Cargo Workers Compensation Umbrella, Excess Liability Roles and Responsibilities: Worked on multiple Business transformation, upgrade and modernization programs. Requirements Gathering, Elicitation –writing BRDs, FSDs. Conducting JAD sessions and Workshops to capture requirements and working close with Product Owner. Work with the client to define the most optimal future state operational process and related product configuration. Define scope by providing innovative solutions and challenging all new client requirements and change requests but simultaneously ensuring that client gets the required business value. Elaborate and deliver clearly defined requirement documents with relevant dataflow and process flow diagrams. Work closely with product design development team to analyse and extract functional enhancements. Provide product consultancy and assist the client with acceptance criteria gathering and support throughout the project life cycle. Product Experience/Other Skills: Product Knowledge – Guidewire, Duckcreek, Exigent, Majesco. (Preferred Guidewire/Duckcreek) Strong skills in stakeholder management and communication. Should have end to end processes in P&C insurance domain. Should be ready to work in flexible shifts (a good amount of overlap with US/UK hours). Good organizational and time management skills required. Should have good written and verbal communication skills in English. Industry certifications AINS 21 - Property and Liability Insurance Principles, AINS 22 - Personal Insurance, AINS 23 - Commercial Insurance and AINS 24 - General Insurance for IT and Support Professionals will be added advantage. Additional experience in Life or other insurance domain is added advantage. We expect you to work effectively as a team member and build good relationships with the client. You will have the opportunity to expand your domain knowledge and skills and will be able to collaborate frequently with other EY professionals with a wide variety of expertise. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 1 week ago
3.0 - 6.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Introduction: A Career at HARMAN Automotive
We’re a global, multi-disciplinary team that’s putting the innovative power of technology to work and transforming tomorrow. At HARMAN Automotive, we give you the keys to fast-track your career.
Engineer audio systems and integrated technology platforms that augment the driving experience
Combine ingenuity, in-depth research, and a spirit of collaboration with design and engineering excellence
Advance in-vehicle infotainment, safety, efficiency, and enjoyment
About The Role
We're seeking an experienced Cloud Platform and Data Engineering Specialist with expertise in GCP (Google Cloud Platform) or Azure to join our team. The ideal candidate will have a strong background in cloud computing, data engineering, and DevOps.
What You Will Do
Cloud Platform Management: Manage and optimize cloud infrastructure (GCP), ensuring scalability, security, and performance.
Data Engineering: Design and implement data pipelines, data warehousing, and data processing solutions.
Kubernetes and GKE: Develop and deploy applications using Kubernetes and Google Kubernetes Engine (GKE).
Python Development: Develop and maintain scripts and applications using Python.
What You Need To Be Successful
Experience: 3-6 years of experience in cloud computing, data engineering, and DevOps.
Technical Skills: Strong understanding of GCP (Google Cloud Platform) or Azure. Experience with Kubernetes and GKE. Proficiency in the Python programming language (8/10). Basic understanding of data engineering and DevOps practices.
Soft Skills: Excellent problem-solving skills and attention to detail. Strong communication and collaboration skills.
Bonus Points if You Have
GCP: Experience with GCP services, including Compute Engine, Storage, and BigQuery.
Data Engineering: Experience with data engineering tools such as Apache Beam, Dataflow, or BigQuery.
DevOps: Experience with DevOps tools such as Jenkins, GitLab CI/CD, or Cloud Build.
What Makes You Eligible
GCP Expertise: Strong expertise in GCP is preferred; Azure experience may be considered as an alternative.
Python Proficiency: Proficiency in the Python programming language is essential.
Kubernetes and GKE: Experience with Kubernetes and GKE is required.
What We Offer
Competitive salary and benefits package
Opportunities for professional growth and development
Collaborative and dynamic work environment
Access to cutting-edge technologies and tools
Recognition and rewards for outstanding performance through BeBrilliant
Chance to work with a renowned German OEM
You are expected to work all 5 days a week.
You Belong Here
HARMAN is committed to making every employee feel welcomed, valued, and empowered. No matter what role you play, we encourage you to share your ideas, voice your distinct perspective, and bring your whole self with you – all within a support-minded culture that celebrates what makes each of us unique. We also recognize that learning is a lifelong pursuit and want you to flourish. We proudly offer added opportunities for training, development, and continuing education, further empowering you to live the career you want.
About HARMAN: Where Innovation Unleashes Next-Level Technology
Ever since the 1920s, we’ve been amplifying the sense of sound. Today, that legacy endures, with integrated technology platforms that make the world smarter, safer, and more connected.
Across automotive, lifestyle, and digital transformation solutions, we create innovative technologies that turn ordinary moments into extraordinary experiences. Our renowned automotive and lifestyle solutions can be found everywhere, from the music we play in our cars and homes to venues that feature today’s most sought-after performers, while our digital transformation solutions serve humanity by addressing the world’s ever-evolving needs and demands. Marketing our award-winning portfolio under 16 iconic brands, such as JBL, Mark Levinson, and Revel, we set ourselves apart by exceeding the highest engineering and design standards for our customers, our partners and each other. If you’re ready to innovate and do work that makes a lasting impact, join our talent community today!
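For the Kubernetes/GKE plus Python combination this role calls for, here is a minimal sketch using the official Kubernetes Python client to scale a data-ingestion deployment. It assumes cluster credentials are already configured locally (for example via gcloud container clusters get-credentials), and the deployment and namespace names are hypothetical.

```python
from kubernetes import client, config

# Assumes kubectl is already authenticated against the GKE cluster.
config.load_kube_config()

apps = client.AppsV1Api()

# Hypothetical deployment/namespace names -- adjust to your workload.
deployment = "telemetry-ingest"
namespace = "data-pipelines"

# Scale the data-ingestion deployment to handle a peak load window.
apps.patch_namespaced_deployment_scale(
    name=deployment,
    namespace=namespace,
    body={"spec": {"replicas": 4}},
)

# Report the current rollout status (replica counts may lag the patch).
status = apps.read_namespaced_deployment(deployment, namespace).status
print(f"{deployment}: {status.ready_replicas}/{status.replicas} replicas ready")
```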
Posted 1 week ago
2.0 - 3.0 years
0 Lacs
Khairatabad, Telangana, India
On-site
Job Summary
Synechron is seeking a highly motivated and skilled Senior Cloud Data Engineer – GCP to join our cloud solutions team. In this role, you will collaborate closely with clients and internal stakeholders to design, implement, and manage scalable, secure, and high-performance cloud-based data solutions on Google Cloud Platform (GCP). You will leverage your technical expertise to ensure the integrity, security, and efficiency of cloud data architectures, enabling the organization to derive maximum value from cloud data assets. This role contributes directly to our mission of delivering innovative digital transformation solutions and supports the organization’s strategic objectives of scalable and sustainable cloud infrastructure.
Software Requirements
Required Skills:
Proficiency with Google Cloud Platform (GCP) services (Compute Engine, Cloud Storage, BigQuery, Cloud Pub/Sub, Dataflow, etc.)
Basic scripting skills with Python, Bash, or similar languages
Familiarity with virtualization and cloud networking concepts
Understanding of cloud security best practices and compliance standards
Experience with infrastructure-as-code tools (e.g., Terraform, Deployment Manager)
Strong knowledge of data management, data pipelines, and ETL processes
Preferred Skills:
Experience with other cloud platforms (AWS, Azure)
Knowledge of SQL and NoSQL databases
Familiarity with containerization (Docker, GKE)
Experience with data visualization tools
Overall Responsibilities
Design, implement, and operate cloud data solutions that are secure, scalable, and optimized for performance
Collaborate with clients and internal teams to identify infrastructure and data architecture requirements
Manage and monitor cloud infrastructure and ensure operational reliability
Resolve technical issues related to cloud data workflows and storage solutions
Participate in project planning, timelines, and technical documentation
Contribute to best practices and continuous improvement initiatives within the organization
Educate and support clients in adopting cloud data services and best practices
Technical Skills (By Category)
Programming Languages – Essential: Python, Bash scripts; Preferred: SQL, Java, or other data processing languages
Databases & Data Management – Essential: BigQuery, Cloud SQL, Cloud Spanner, Cloud Storage; Preferred: NoSQL databases like Firestore, MongoDB
Cloud Technologies – Essential: Google Cloud Platform core services (Compute, Storage, BigQuery, Dataflow, Pub/Sub); Preferred: cloud monitoring, logging, and security tools
Frameworks & Libraries – Essential: data pipeline frameworks, Cloud SDKs, APIs; Preferred: Apache Beam, Data Studio
Development Tools & Methodologies – Essential: infrastructure as code (Terraform, Deployment Manager); Preferred: CI/CD tools (Jenkins, Cloud Build)
Security Protocols – Essential: IAM policies, data encryption, network security best practices; Preferred: compliance frameworks such as GDPR, HIPAA
Experience Requirements
2-3 years of experience in cloud data engineering, cloud infrastructure, or related roles
Hands-on experience with GCP is preferred; experience with AWS or Azure is a plus
Background in designing and managing cloud data pipelines, storage, and security solutions
Proven ability to deliver scalable data solutions in cloud environments
Experience working with cross-functional teams on cloud deployments
Alternative experience pathways: academic projects, certifications, or relevant internships demonstrating cloud data skills
Day-to-Day Activities
Develop and deploy cloud data pipelines, databases, and analytics solutions
Collaborate with clients and team members to plan and implement infrastructure architecture
Perform routine monitoring, maintenance, and performance tuning of cloud data systems
Troubleshoot technical issues affecting data workflows and resolve performance bottlenecks
Document system configurations, processes, and best practices
Engage in continuous learning on new cloud features and data management tools
Participate in project meetings, code reviews, and knowledge-sharing sessions
Qualifications
Bachelor’s or Master’s degree in computer science, engineering, information technology, or a related field
Relevant certifications (e.g., Google Cloud Professional Data Engineer, Cloud Architect) are preferred
Training in cloud security, data management, or infrastructure design is advantageous
Commitment to professional development and staying updated with emerging cloud technologies
Professional Competencies
Critical thinking and problem-solving skills to resolve complex cloud architecture challenges
Ability to work collaboratively with multidisciplinary teams and clients
Strong communication skills for technical documentation and stakeholder engagement
Adaptability to evolving cloud technologies and project priorities
Organized, with a focus on quality and detail-oriented delivery
Proactive learner with a passion for innovation in cloud data solutions
Ability to manage multiple tasks effectively and prioritize in a fast-paced environment
SYNECHRON’S DIVERSITY & INCLUSION STATEMENT
Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative ‘Same Difference’ is committed to fostering an inclusive culture – promoting equality, diversity and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, race, ethnicities, religion, age, marital status, gender, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more. All employment decisions at Synechron are based on business needs, job requirements and individual qualifications, without regard to the applicant’s gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.
Candidate Application Notice
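As a brief, illustrative example of the Cloud Pub/Sub and data-pipeline skills listed above, the following Python snippet publishes a JSON event to a Pub/Sub topic with the google-cloud-pubsub client; a downstream Dataflow job or push subscription could then land the record in BigQuery. The project id, topic name, and event fields are placeholder assumptions.

```python
import json
from google.cloud import pubsub_v1

# Hypothetical project and topic names -- replace with your own.
project_id = "my-gcp-project"
topic_id = "raw-events"

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(project_id, topic_id)

event = {"order_id": "A-1001", "amount": 249.99, "currency": "USD"}

# Publish a JSON payload; extra keyword arguments become message attributes
# that subscribers can use for routing or filtering.
future = publisher.publish(
    topic_path,
    data=json.dumps(event).encode("utf-8"),
    source="order-service",
)
print("Published message id:", future.result())
```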
Posted 1 week ago