8 - 13 years
25 - 30 Lacs
Bengaluru
Hybrid
Overall 8+ years of solid experience in data projects. Design, develop, and maintain robust ETL/ELT pipelines for data ingestion, transformation, and storage. Proficient in SQL, including complex joins, subqueries, functions, and stored procedures; able to perform SQL tuning and query optimization without support. Design, develop, and maintain ETL pipelines using Databricks and PySpark to extract, transform, and load data from various sources. Good working experience with Delta tables, deduplication, and merging on terabyte-scale datasets. Optimize and fine-tune existing ETL workflows for performance and scalability. Excellent knowledge of dimensional modelling and data warehousing. Experience working with large datasets. Experience with batch and real-time data processing (good to have). Implement data validation and quality checks, and ensure adherence to security and compliance standards. Ability to develop reliable, secure, compliant data processing systems. Work closely with cross-functional teams to support data analytics, reporting, and business intelligence initiatives. Should be self-driven and able to work independently without support.
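The Delta table deduplication and merge work called out above typically follows a pattern like the sketch below. This is a minimal illustration only, not part of the posting: the paths, table, and column names (`customer_id`, `updated_at`, `bronze_updates`) are hypothetical, and it assumes a Databricks/Delta Lake runtime.

```python
from pyspark.sql import SparkSession, Window, functions as F
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

# Deduplicate the incoming batch: keep the latest record per business key.
updates = spark.read.format("delta").load("/mnt/bronze/bronze_updates")
w = Window.partitionBy("customer_id").orderBy(F.col("updated_at").desc())
latest = (updates
          .withColumn("rn", F.row_number().over(w))
          .filter("rn = 1")
          .drop("rn"))

# Upsert (merge) the deduplicated batch into the target Delta table.
target = DeltaTable.forPath(spark, "/mnt/silver/customers")
(target.alias("t")
 .merge(latest.alias("s"), "t.customer_id = s.customer_id")
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())
```

Deduplicating the batch before the merge keeps the upsert deterministic when the same key appears more than once in a single load.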
Posted 1 month ago
1 - 5 years
6 - 11 Lacs
Pune
Work from Office
About The Role
Job Title: Data Engineer for Private Bank One Data Platform on Google Cloud
Corporate Title: Associate
Location: Pune, India

Role Description
As part of one of the internationally staffed agile teams of the Private Bank One Data Platform, you are part of the "TDI PB Germany Enterprise & Data" division. The focus here is on the development, design, and provision of different solutions in the field of data warehousing, reporting, and analytics for the Private Bank, to ensure that the necessary data is provided for operational and analytical purposes. The PB One Data Platform is the new strategic data platform of the Private Bank and uses the Google Cloud Platform as its basis. With Google as a close partner, we are following Deutsche Bank's cloud strategy with the aim of transferring or rebuilding a significant share of today's on-prem applications on the Google Cloud Platform.

What we'll offer you
As part of our flexible scheme, here are just some of the benefits that you'll enjoy: Best-in-class leave policy. Gender-neutral parental leave. 100% reimbursement under the childcare assistance benefit (gender neutral). Sponsorship for industry-relevant certifications and education. Employee Assistance Program for you and your family members. Comprehensive hospitalization insurance for you and your dependents. Accident and term life insurance. Complimentary health screening for 35 yrs. and above.

Your key responsibilities
Work within software development applications as a Data Engineer to provide fast and reliable data solutions for warehousing, reporting, and Customer and Business Intelligence solutions. Partner with Service/Backend Engineers to integrate data provided by legacy IT solutions into the databases you design and make it accessible to the services consuming that data. Focus on the design and setup of databases, data models, data transformations (ETL), and critical online banking business processes in the context of Customer Intelligence, Financial Reporting, and performance controlling. Contribute to data harmonization as well as data cleansing. Bring a passion for constantly learning and applying new technologies and programming languages in a constantly evolving environment. Build solutions that are highly scalable and can be operated flawlessly under high-load scenarios. Together with your team, you will run and develop your application self-sufficiently. You'll collaborate with Product Owners as well as team members on the design and implementation of data analytics solutions and act as support during the conception of products and solutions. When you see a process running with high manual effort, you'll fix it to run automated, optimizing not only our operating model but also giving yourself more time for development.

Your skills and experience
Mandatory Skills: Hands-on development work building scalable data engineering pipelines and other data engineering/modelling work using Java/Python. Excellent knowledge of SQL and NoSQL databases. Experience working in a fast-paced and Agile work environment. Working knowledge of public cloud environments.
Preferred Skills: Experience in Dataflow (Apache Beam)/Cloud Functions/Cloud Run. Knowledge of workflow management tools such as Apache Airflow/Composer. Demonstrated ability to write clear code that is well documented and stored in a version control system (GitHub). Knowledge of GCS buckets, Google Pub/Sub, and BigQuery. Knowledge of ETL processes in the Data Warehouse/Data Lake environment and how to automate them (a minimal Cloud Composer sketch follows this listing).
Nice to have: Knowledge of provisioning cloud resources using Terraform. Knowledge of shell scripting. Experience with Git, CI/CD pipelines, Docker, and Kubernetes. Knowledge of Google Cloud Monitoring & Alerting. Knowledge of Cloud Run, Dataform, and Cloud Spanner. Knowledge of the Data Vault 2.0 data warehouse solution. Knowledge of New Relic. Excellent analytical and conceptual thinking. Excellent communication skills, strong independence and initiative, and the ability to work in agile delivery teams. Experience working with distributed teams (especially Germany + India).

How we'll support you
Training and development to help you excel in your career. Coaching and support from experts in your team. A culture of continuous learning to aid progression. A range of flexible benefits that you can tailor to suit your needs.

About us and our teams
Please visit our company website for further information: https://www.db.com/company/company.htm We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative, and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
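For the Composer/Airflow and BigQuery skills listed under Preferred Skills, a scheduled load from a GCS bucket into BigQuery might look roughly like the sketch below. The project, bucket, dataset, and file names are hypothetical, and it assumes the Google provider package for Airflow is installed; this is an illustration, not the platform's actual pipeline.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator

# Hypothetical bucket, project, and dataset names used purely for illustration.
with DAG(
    dag_id="daily_gcs_to_bigquery_load",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    load_to_bq = GCSToBigQueryOperator(
        task_id="load_gcs_to_bigquery",
        bucket="example-landing-bucket",
        source_objects=["daily/accounts_{{ ds_nodash }}.csv"],
        destination_project_dataset_table="example-project.staging.accounts",
        source_format="CSV",
        skip_leading_rows=1,
        write_disposition="WRITE_TRUNCATE",
    )
```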
Posted 1 month ago
2 - 5 years
2 - 6 Lacs
Bengaluru
Work from Office
Req ID: 318492
NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Snowflake, Python, Airflow Engineer to join our team in Bangalore, Karnataka (IN-KA), India (IN).

Job Duties
Team Overview
The Controls Engineering, Measurement and Analytics (CEMA) department is responsible for Cyber Risk and Control assessment, management, monitoring, and reporting capabilities across Technology, resulting in risk reduction and better oversight of the technology risk landscape of the firm. Our work is always client focused; our engineers are problem-solvers and innovators. We seek exceptional technologists to help deliver solutions on our user-facing applications, data stores, and reporting and metric platforms while being cloud-centric, leveraging multi-tier architectures, and aligned with our DevOps and Agile strategies. We are in the process of modernizing our technology stack across multiple platforms with the goal of building scalable, front-to-back assessment, measurement and monitoring systems using the latest cloud, web, and data technologies. We are looking for someone with a systematic problem-solving approach, coupled with a sense of ownership and drive. The successful candidate will be able to influence and collaborate globally. They should be a strong team player, have an entrepreneurial approach, push innovative ideas while appropriately considering risk, and adapt in a fast-paced, changing environment.

Role Summary
As an ETL / Data Engineer, you will be a member of the CEDAR / C3 Data Warehouse team, with a focus on sourcing and storing data from various technology platforms across the firm into a centralized data platform used to build various reporting and analytics solutions for the Technology Risk functions within Morgan Stanley. In this role you will be primarily responsible for the development of data pipelines, database views, and stored procedures, in addition to performing technical data analysis and monitoring and tuning queries and data loads. You will work closely with data providers, data analysts, data developers, and data analytics teams to facilitate the implementation of client-specific business requirements and requests.

KEY RESPONSIBILITIES:
To develop ETLs, stored procedures, triggers, and views on our existing DB2-based Data Warehouse and on our new Snowflake-based Data Warehouse (a minimal Snowflake upsert sketch follows this listing). To perform data profiling and technical analysis on source system data to ensure that source system data can be integrated and represented properly in our models. To monitor the performance of queries and data loads and perform tuning as necessary. To provide assistance and guidance during the QA & UAT phases to quickly confirm the validity of potential issues and to determine the root cause and best resolution of verified issues.

Minimum Skills Required
Bachelor's degree in Computer Science, Software Engineering, Information Technology, or a related field required. At least 5 years of experience in data development and solutions in highly complex data environments with large data volumes. At least 5 years of experience developing complex ETLs with Informatica PowerCenter. At least 5 years of SQL/PLSQL experience with the ability to write ad-hoc and complex queries to perform data analysis. At least 5 years of experience developing complex stored procedures, triggers, MQTs, and views on IBM DB2.
Experience with performance tuning DB2 tables, queries, and stored procedures. An understanding of E-R data models (conceptual, logical, and physical). Strong understanding of advanced data warehouse concepts (factless fact tables, temporal/bi-temporal models, etc.). Experience with Python a plus. Experience with developing data transformations using dbt a plus. Experience with Snowflake a plus. Experience with Airflow a plus. Experience with using Spark (PySpark) for data loading and complex transformations a plus. Strong analytical skills, including a thorough understanding of how to interpret customer business requirements and translate them into technical designs and solutions. Strong communication skills, both verbal and written. Capable of collaborating effectively.

About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com

NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here. If you'd like more information on your EEO rights under the law, please click here. For Pay Transparency information, please click here.

Job Segment: Cloud, Data Warehouse, Database, Computer Science, Quality Assurance, Technology
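The Snowflake-side responsibilities above (developing ETLs and tuning data loads) often reduce to incremental upserts from a staging schema. The following is a hedged sketch using the Snowflake Python connector; the account details and the table and column names are hypothetical, not taken from the posting.

```python
import snowflake.connector

# Hypothetical connection details and object names, for illustration only.
conn = snowflake.connector.connect(
    account="example_account",
    user="etl_user",
    password="********",
    warehouse="ETL_WH",
    database="RISK_DW",
    schema="STAGING",
)

# Incremental upsert from a staging table into the core model.
merge_sql = """
MERGE INTO CORE.CONTROL_FINDINGS AS t
USING STAGING.CONTROL_FINDINGS_DELTA AS s
  ON t.FINDING_ID = s.FINDING_ID
WHEN MATCHED THEN UPDATE SET STATUS = s.STATUS, UPDATED_AT = s.UPDATED_AT
WHEN NOT MATCHED THEN
  INSERT (FINDING_ID, STATUS, UPDATED_AT) VALUES (s.FINDING_ID, s.STATUS, s.UPDATED_AT)
"""

cur = conn.cursor()
try:
    cur.execute(merge_sql)
    print("rows affected:", cur.rowcount)
finally:
    cur.close()
    conn.close()
```

In practice a job like this would be wrapped in an Airflow task and parameterized per load window rather than run as a standalone script.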
Posted 1 month ago
2 - 5 years
2 - 6 Lacs
Bengaluru
Work from Office
Req ID: 318488
NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Snowflake, Python, Airflow Engineer to join our team in Bangalore, Karnataka (IN-KA), India (IN).

Job Duties
Team Overview
The Controls Engineering, Measurement and Analytics (CEMA) department is responsible for Cyber Risk and Control assessment, management, monitoring, and reporting capabilities across Technology, resulting in risk reduction and better oversight of the technology risk landscape of the firm. Our work is always client focused; our engineers are problem-solvers and innovators. We seek exceptional technologists to help deliver solutions on our user-facing applications, data stores, and reporting and metric platforms while being cloud-centric, leveraging multi-tier architectures, and aligned with our DevOps and Agile strategies. We are in the process of modernizing our technology stack across multiple platforms with the goal of building scalable, front-to-back assessment, measurement and monitoring systems using the latest cloud, web, and data technologies. We are looking for someone with a systematic problem-solving approach, coupled with a sense of ownership and drive. The successful candidate will be able to influence and collaborate globally. They should be a strong team player, have an entrepreneurial approach, push innovative ideas while appropriately considering risk, and adapt in a fast-paced, changing environment.

Role Summary
As an ETL / Data Engineer, you will be a member of the CEDAR / C3 Data Warehouse team, with a focus on sourcing and storing data from various technology platforms across the firm into a centralized data platform used to build various reporting and analytics solutions for the Technology Risk functions within Morgan Stanley. In this role you will be primarily responsible for the development of data pipelines, database views, and stored procedures, in addition to performing technical data analysis and monitoring and tuning queries and data loads. You will work closely with data providers, data analysts, data developers, and data analytics teams to facilitate the implementation of client-specific business requirements and requests.

KEY RESPONSIBILITIES:
To develop ETLs, stored procedures, triggers, and views on our existing DB2-based Data Warehouse and on our new Snowflake-based Data Warehouse. To perform data profiling and technical analysis on source system data to ensure that source system data can be integrated and represented properly in our models. To monitor the performance of queries and data loads and perform tuning as necessary. To provide assistance and guidance during the QA & UAT phases to quickly confirm the validity of potential issues and to determine the root cause and best resolution of verified issues.

Minimum Skills Required
Bachelor's degree in Computer Science, Software Engineering, Information Technology, or a related field required. At least 5 years of experience in data development and solutions in highly complex data environments with large data volumes. At least 5 years of experience developing complex ETLs with Informatica PowerCenter. At least 5 years of SQL/PLSQL experience with the ability to write ad-hoc and complex queries to perform data analysis. At least 5 years of experience developing complex stored procedures, triggers, MQTs, and views on IBM DB2.
Experience with performance tuning DB2 tables, queries, and stored procedures. An understanding of E-R data models (conceptual, logical, and physical). Strong understanding of advanced data warehouse concepts (factless fact tables, temporal/bi-temporal models, etc.). Experience with Python a plus. Experience with developing data transformations using dbt a plus. Experience with Snowflake a plus. Experience with Airflow a plus. Experience with using Spark (PySpark) for data loading and complex transformations a plus. Strong analytical skills, including a thorough understanding of how to interpret customer business requirements and translate them into technical designs and solutions. Strong communication skills, both verbal and written. Capable of collaborating effectively.

About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com

NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here. If you'd like more information on your EEO rights under the law, please click here. For Pay Transparency information, please click here.

Job Segment: Cloud, Data Warehouse, Computer Science, Database, SQL, Technology
Posted 1 month ago
1 - 4 years
4 - 7 Lacs
Hyderabad
Work from Office
ABOUT AMGEN
Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what’s known today.

ABOUT THE ROLE
Role Description
We are seeking an MDM Associate Data Engineer with 2-5 years of experience to support and enhance our enterprise MDM (Master Data Management) platforms using Informatica/Reltio. This role is critical in delivering high-quality master data solutions across the organization, utilizing modern tools like Databricks and AWS to drive insights and ensure data reliability. The ideal candidate will have strong SQL and data profiling skills and experience working with cross-functional teams in a pharma environment. To succeed in this role, the candidate must have strong data engineering experience along with MDM knowledge; candidates with only MDM experience are not eligible for this role. The candidate must have data engineering experience with technologies such as SQL, Python, PySpark, Databricks, and AWS, along with knowledge of MDM (Master Data Management).

Roles & Responsibilities
Analyze and manage customer master data using Reltio or Informatica MDM solutions. Perform advanced SQL queries and data analysis to validate and ensure master data integrity (a minimal profiling sketch follows this listing). Leverage Python, PySpark, and Databricks for scalable data processing and automation. Collaborate with business and data engineering teams for continuous improvement in MDM solutions. Implement data stewardship processes and workflows, including approval and DCR mechanisms. Utilize AWS cloud services for data storage and compute processes related to MDM. Contribute to metadata and data modeling activities. Track and manage data issues using tools such as JIRA and document processes in Confluence. Apply Life Sciences/Pharma industry context to ensure data standards and compliance.

Basic Qualifications and Experience
Master’s degree with 1-3 years of experience in Business, Engineering, IT or a related field OR Bachelor’s degree with 2-5 years of experience in Business, Engineering, IT or a related field OR Diploma with 6-8 years of experience in Business, Engineering, IT or a related field.

Functional Skills:
Must-Have Skills: Advanced SQL expertise and data wrangling. Strong experience in Python and PySpark for data transformation workflows. Strong experience with Databricks and AWS architecture. Must have knowledge of MDM, data governance, stewardship, and profiling practices. In addition to the above, candidates having experience with Informatica or Reltio MDM platforms will be preferred.
Good-to-Have Skills: Experience with IDQ, data modeling and approval workflow/DCR. Background in Life Sciences/Pharma industries. Familiarity with project tools like JIRA and Confluence. Strong grip on data engineering concepts.
Professional Certifications: Any ETL certification (e.g. Informatica). Any Data Analysis certification (SQL, Python, Databricks). Any cloud certification (AWS or Azure).
Soft Skills: Strong analytical abilities to assess and improve master data processes and solutions. Excellent verbal and written communication skills, with the ability to convey complex data concepts clearly to technical and non-technical stakeholders.
Effective problem-solving skills to address data-related issues and implement scalable solutions. Ability to work effectively with global, virtual teams.

EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
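For the SQL/PySpark master-data validation work described in this posting, a basic profiling pass might resemble the following sketch. It assumes a Databricks/Spark environment, and the table and column names (`mdm.customer_master`, `customer_id`, `country_code`) are hypothetical placeholders, not from the posting.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical customer master extract; column names are illustrative only.
customers = spark.read.table("mdm.customer_master")

# Basic profiling: row count and null counts for key attributes.
profile = customers.agg(
    F.count("*").alias("row_count"),
    F.sum(F.col("customer_id").isNull().cast("int")).alias("null_customer_id"),
    F.sum(F.col("country_code").isNull().cast("int")).alias("null_country_code"),
)
profile.show()

# Duplicate business keys are a typical master-data integrity check.
duplicates = (customers.groupBy("customer_id")
              .count()
              .filter("count > 1"))
print("duplicate business keys:", duplicates.count())
```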
Posted 1 month ago
- 2 years
3 - 5 Lacs
Hyderabad
Work from Office
ABOUT AMGEN
Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what’s known today.

ABOUT THE ROLE
Role Description
We are seeking an MDM Associate Data Engineer with 2-5 years of experience to support and enhance our enterprise MDM (Master Data Management) platforms using Informatica/Reltio. This role is critical in delivering high-quality master data solutions across the organization, utilizing modern tools like Databricks and AWS to drive insights and ensure data reliability. The ideal candidate will have strong SQL and data profiling skills and experience working with cross-functional teams in a pharma environment. To succeed in this role, the candidate must have strong data engineering experience along with MDM knowledge; candidates with only MDM experience are not eligible for this role. The candidate must have data engineering experience with technologies such as SQL, Python, PySpark, Databricks, and AWS, along with knowledge of MDM (Master Data Management).

Roles & Responsibilities
Analyze and manage customer master data using Reltio or Informatica MDM solutions. Perform advanced SQL queries and data analysis to validate and ensure master data integrity. Leverage Python, PySpark, and Databricks for scalable data processing and automation. Collaborate with business and data engineering teams for continuous improvement in MDM solutions. Implement data stewardship processes and workflows, including approval and DCR mechanisms. Utilize AWS cloud services for data storage and compute processes related to MDM. Contribute to metadata and data modeling activities. Track and manage data issues using tools such as JIRA and document processes in Confluence. Apply Life Sciences/Pharma industry context to ensure data standards and compliance.

Basic Qualifications and Experience
Master’s degree with 1-3 years of experience in Business, Engineering, IT or a related field OR Bachelor’s degree with 2-5 years of experience in Business, Engineering, IT or a related field OR Diploma with 6-8 years of experience in Business, Engineering, IT or a related field.

Functional Skills:
Must-Have Skills: Advanced SQL expertise and data wrangling. Strong experience in Python and PySpark for data transformation workflows. Strong experience with Databricks and AWS architecture. Must have knowledge of MDM, data governance, stewardship, and profiling practices. In addition to the above, candidates having experience with Informatica or Reltio MDM platforms will be preferred.
Good-to-Have Skills: Experience with IDQ, data modeling and approval workflow/DCR. Background in Life Sciences/Pharma industries. Familiarity with project tools like JIRA and Confluence. Strong grip on data engineering concepts.
Professional Certifications: Any ETL certification (e.g. Informatica). Any Data Analysis certification (SQL, Python, Databricks). Any cloud certification (AWS or Azure).
Soft Skills: Strong analytical abilities to assess and improve master data processes and solutions. Excellent verbal and written communication skills, with the ability to convey complex data concepts clearly to technical and non-technical stakeholders.
Effective problem-solving skills to address data-related issues and implement scalable solutions. Ability to work effectively with global, virtual teams.

EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
Posted 1 month ago
5 - 10 years
15 - 30 Lacs
Chennai
Remote
Design, develop, and maintain data solutions focused on importing, processing, and transforming client CMS data for AI systems. The role covers pipeline optimisation, collaboration, code development, API integration, and cloud data management. Experience in Python, Node.js, and PHP.
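A minimal sketch of the kind of CMS import described above, in Python. The endpoint, pagination parameters, and field names are hypothetical; a real integration would follow the client CMS's actual API and authentication scheme.

```python
import requests

# Hypothetical CMS endpoint and field names, for illustration only.
BASE_URL = "https://cms.example.com/api/v2/articles"


def fetch_articles(page_size: int = 100):
    """Page through the CMS API and yield lightly normalized records."""
    page = 1
    while True:
        resp = requests.get(
            BASE_URL, params={"page": page, "per_page": page_size}, timeout=30
        )
        resp.raise_for_status()
        items = resp.json().get("items", [])
        if not items:
            break
        for item in items:
            yield {
                "id": item.get("id"),
                "title": (item.get("title") or "").strip(),
                "body": item.get("body", ""),
            }
        page += 1


if __name__ == "__main__":
    records = list(fetch_articles())
    print(f"fetched {len(records)} CMS records")
```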
Posted 1 month ago
3 - 8 years
3 - 8 Lacs
Hyderabad
Work from Office
Name of Organization: Jarus Technologies (India) Pvt. Ltd.
Organization Website: www.jarustech.com
Position: Senior Software Engineer - Data Warehouse
Domain Knowledge: Insurance (Mandatory)
Job Type: Permanent
Location: Hyderabad - IDA Cherlapally, ECIL and Divyasree Trinity, Hi-Tech City
Experience: 3+ years
Education: B.E. / B.Tech. / M.C.A.
Resource Availability: Immediately or within a maximum period of 30 days.
Technical Skills:
• Strong knowledge of data warehousing concepts and technologies.
• Proficiency in SQL and other database languages.
• Experience with ETL tools (e.g., Informatica, Talend, SSIS).
• Familiarity with data modelling techniques.
• Experience in building dimensional data modelling objects, dimensions, and facts.
• Experience with cloud-based data warehouse platforms (e.g., AWS Redshift, Azure Synapse, Google BigQuery).
• Familiar with optimizing SQL queries and improving ETL processes for better performance.
• Knowledge of data transformation, cleansing, and validation techniques.
• Experience with incremental loads, change data capture (CDC), and data scheduling (a minimal incremental-load sketch follows this listing).
• Comfortable with version control systems like Git.
• Familiar with BI tools like Power BI for visualization and reporting.
Responsibilities:
• Design, develop and maintain data warehouse systems and ETL (Extract, Transform, Load) processes.
• Develop and optimize data models and schemas to support business needs.
• Design and implement data warehouse architectures, including physical and logical designs.
• Design and develop dimensions, facts and bridges.
• Ensure data quality and integrity throughout the ETL process.
• Design and implement relational and multidimensional database structures.
• Understand data structures and fundamental design principles of data warehouses.
• Analyze and modify data structures to adapt them to business needs.
• Identify and resolve data quality issues and data warehouse problems.
• Debug ETL processes and data warehouse queries.
Communication Skills:
• Good communication skills to interact with customers.
• Ability to understand requirements for implementing an insurance warehouse system.
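As referenced in the incremental-load bullet above, a watermark-driven CDC upsert is a common pattern for this kind of warehouse work. The sketch below is illustrative only: the staging and dimension table names and columns are hypothetical, `conn` can be any DB-API connection to the warehouse, and both the `?` parameter style and the MERGE syntax vary by database engine.

```python
from datetime import datetime


def incremental_load(conn, last_watermark: datetime) -> datetime:
    """Pull rows changed since the last run and upsert them into a dimension.

    Hypothetical objects: stg_policy (staging), dw.dim_policy (target),
    updated_at (CDC watermark column).
    """
    cur = conn.cursor()

    # 1. Select only rows changed since the previous run (CDC via watermark).
    cur.execute(
        "SELECT policy_id, status, premium, updated_at "
        "FROM stg_policy WHERE updated_at > ?",
        (last_watermark,),
    )
    changed = cur.fetchall()

    # 2. Upsert each changed row (simplified; production jobs batch this).
    for policy_id, status, premium, _updated_at in changed:
        cur.execute(
            "MERGE INTO dw.dim_policy AS t "
            "USING (SELECT ? AS policy_id, ? AS status, ? AS premium) AS s "
            "ON t.policy_id = s.policy_id "
            "WHEN MATCHED THEN UPDATE SET status = s.status, premium = s.premium "
            "WHEN NOT MATCHED THEN INSERT (policy_id, status, premium) "
            "VALUES (s.policy_id, s.status, s.premium);",
            (policy_id, status, premium),
        )
    conn.commit()

    # 3. Advance the watermark to the newest change seen in this batch.
    return max((row[3] for row in changed), default=last_watermark)
```

The next scheduled run then passes the returned watermark back in, so each load only touches rows that changed since the previous one.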
Posted 1 month ago
2 - 5 years
4 - 8 Lacs
Pune
Work from Office
About The Role
The candidate must possess in-depth functional knowledge of the process area and apply it to operational scenarios to provide effective solutions. He/she must be able to identify discrepancies and propose optimal solutions using a logical, systematic, and sequential methodology. It is vital to be open-minded towards inputs and views from team members and to effectively lead, control, and motivate groups towards company objectives. Additionally, he/she must be self-directed, proactive, and seize every opportunity to meet internal and external customer needs and achieve customer satisfaction by effectively auditing processes, implementing best practices and process improvements, and utilizing the frameworks and tools available. Goals and thoughts must be clearly and concisely articulated and conveyed, verbally and in writing, to clients, colleagues, subordinates, and supervisors.

Process Manager Role and Responsibilities:
Process mapping and identifying non-value-add steps / friction points in the process. Discover, monitor, and improve processes by extracting and analysing knowledge from event logs in a Process Mining/Celonis tool. Work alongside both technical and non-technical stakeholders to understand business challenges, help design process mining initiatives, and prioritize requests. Act as the customer's key contact and guide them through revealing process trends, inefficiencies, and bottlenecks in the business process. Support validation of data (counts and values between source systems and Celonis). Work on process insights by creating KPIs and actions, identifying process inefficiencies, and understanding root causes. Develop workflows to monitor processes, detect anomalies, and turn those insights into real-time automated preventive or corrective actions using Action Engine, Action Flows, and other capabilities.

Technical and Functional Skills:
Bachelor's degree in Computer Science with 3+ years of work experience in data analytics, data mining, and data transformation. Very proficient in Celonis; should be able to build, manage, and extract value from Celonis models for various use cases: adding or modifying data sources, creating automated alerts, Action Engine, Transformation Center, Celonis ML Workbench. Experience in SQL/PQL scripting and knowledge of data mining; should be able to apply complex queries (e.g. joins, unions, window functions) to build transformations. Knowledge of process improvement techniques/tools and process mining/analytics. Basic knowledge of Python scripting (NumPy, Pandas, Seaborn, Matplotlib, scikit-learn, etc.). Experience in BI tools (e.g., Tableau, Power BI) is nice to have. Strong communication and presentation skills. Understanding of business processes.
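Outside Celonis, the bottleneck analysis described above can be prototyped on a raw event log with Pandas. This is a hedged sketch, not the role's actual tooling: the file and column names (`case_id`, `activity`, `timestamp`) are hypothetical placeholders for whatever the source system exports.

```python
import pandas as pd

# Hypothetical event log with the three classic process-mining columns.
log = pd.read_csv("event_log.csv", parse_dates=["timestamp"])  # case_id, activity, timestamp
log = log.sort_values(["case_id", "timestamp"])

# Throughput time per case: last event minus first event.
case_durations = log.groupby("case_id")["timestamp"].agg(["min", "max"])
case_durations["throughput_hours"] = (
    (case_durations["max"] - case_durations["min"]).dt.total_seconds() / 3600
)

# Waiting time before each activity: gap since the previous event in the same case,
# averaged per activity to surface bottlenecks / friction points.
log["wait_hours"] = (
    log.groupby("case_id")["timestamp"].diff().dt.total_seconds() / 3600
)
bottlenecks = (log.groupby("activity")["wait_hours"]
               .mean()
               .sort_values(ascending=False))

print(case_durations["throughput_hours"].describe())
print(bottlenecks.head(10))
```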
Posted 1 month ago
3 - 8 years
7 - 11 Lacs
Hyderabad
Work from Office
The Impact You Will Have in This Role:
We are seeking a skilled Talend Developer with expertise in Power BI development and SQL Server to join our dynamic team. The ideal candidate will be responsible for designing, developing, and maintaining ETL processes using Talend, creating insightful data visualizations with Power BI, and writing stored procedures and queries on MS SQL Server databases (a minimal SQL Server sketch follows this listing).

What You'll Do:
Design, develop, and maintain ETL processes using Talend to extract, transform, and load data from various sources. Create and maintain data visualizations and dashboards using Power BI to provide actionable insights to stakeholders. Write high-performance queries on SQL Server databases, ensuring data integrity, performance, and security. Collaborate with cross-functional teams to gather requirements, design solutions, and implement data integration and reporting solutions. Troubleshoot and resolve issues related to ETL processes, data visualizations, and database performance. Collaborate with other team members and analysts through the delivery cycle. Participate in an Agile delivery team that builds high-quality and scalable work products. Support production releases and maintenance windows, working with the Operations team.

Qualifications:
Bachelor's degree in Computer Science, Information Technology, or a related field.

Talents Needed for Success:
Minimum 3 years of experience writing ETL processes. Proven experience as a Talend Developer, with a strong understanding of ETL processes and data integration. Proficiency in Power BI development, including creating dashboards, reports, and data models. Expertise in SQL Server, including database design, optimization, and performance tuning. Strong understanding of agile processes (Kanban and Scrum) and a working knowledge of JIRA is required. Strong analytical and problem-solving skills, with the ability to work independently and as part of a team. Excellent communication and interpersonal skills, with the ability to collaborate effectively with stakeholders at all levels.

Additional Qualifications Needed for Success:
Talend Expertise: Proficiency in using Talend Studio for data integration, data quality, and file manipulation. This includes designing and developing ETL processes, creating and managing Talend jobs, and using Talend components for data transformation and integration.
Data Integration Knowledge in Talend: Understanding of data integration concepts and best practices. This includes experience with data extraction, transformation, and loading (ETL) processes, as well as knowledge of data warehousing and data modeling.
Database Skills: Proficiency in working with various databases, including MS SQL and/or Oracle databases. This includes writing complex SQL queries, understanding database schemas, and performing data migrations.
Version Control and Collaboration: Experience with version control systems (e.g., Git) and collaboration tools (e.g., Jira, Confluence). This is important for managing code changes, collaborating with team members, and tracking project progress.
Job Scheduling and Automation: Experience with job scheduling and automation tools. This includes setting up and managing Talend jobs using schedulers like Talend Administration Center (TAC), Autosys, or third-party tools to automate ETL workflows.
Data Visualization: Ability to create visually appealing and insightful reports and dashboards.
This involves selecting appropriate visualizations, designing layouts, and using custom visuals when necessary in Power BI.
Power Query: Expertise in using Power Query for data transformation and preparation. This involves cleaning, merging, and shaping data from various sources.
Expertise in scripting languages such as Python and Shell/Batch programming is a plus.
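For the SQL Server side of this role, calling a load procedure and running a reporting query from Python is a common supporting task alongside Talend jobs. The sketch below uses pyodbc under stated assumptions: the connection string, stored procedure, table, and column names are all hypothetical.

```python
import pyodbc

# Hypothetical connection string and object names; adjust to the actual
# SQL Server environment and installed ODBC driver.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sqlserver.example.com;DATABASE=ReportingDW;Trusted_Connection=yes;"
)
cur = conn.cursor()

# Call a (hypothetical) stored procedure that loads a fact table for one business date.
cur.execute("{CALL dbo.usp_load_fact_sales (?)}", "2024-01-31")
conn.commit()

# A typical parameterized reporting query against the loaded fact table.
cur.execute(
    "SELECT TOP (100) customer_id, SUM(amount) AS total_amount "
    "FROM dbo.fact_sales WHERE business_date = ? "
    "GROUP BY customer_id ORDER BY total_amount DESC",
    "2024-01-31",
)
for customer_id, total_amount in cur.fetchall():
    print(customer_id, total_amount)

cur.close()
conn.close()
```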
Posted 1 month ago