4.0 years
18 - 22 Lacs
Bengaluru, Karnataka, India
On-site
This role is for one of Weekday's clients.

Salary range: Rs 18,00,000 - Rs 22,00,000 (i.e., INR 18-22 LPA)
Min experience: 4 years
Location: Bengaluru
Job type: Full-time

We are seeking a skilled and detail-oriented Data Modeller with 4-6 years of experience to join our growing data engineering team. In this role, you will play a critical part in designing, implementing, and optimizing robust data models that support business intelligence, analytics, and operational data needs. You will collaborate with cross-functional teams to understand business requirements and convert them into scalable and efficient data solutions, primarily leveraging Amazon Redshift and Erwin Data Modeler.

Key Responsibilities:
• Design and implement conceptual, logical, and physical data models that support business processes and reporting needs.
• Develop data models optimized for Amazon Redshift, ensuring performance, scalability, and data integrity.
• Work closely with business analysts, data engineers, and stakeholders to translate business requirements into data structures.
• Use Erwin Data Modeler to create and maintain data models and metadata repositories.
• Collaborate with ETL developers to ensure efficient data ingestion and transformation pipelines that align with the data model.
• Apply normalization, denormalization, and indexing strategies to optimize data performance and access.
• Perform data profiling and source-system analysis to validate assumptions and model accuracy.
• Create and maintain detailed documentation, including data dictionaries, entity-relationship diagrams (ERDs), and data lineage information.
• Drive consistency and standardization across all data models, ensuring alignment with enterprise data architecture and governance policies.
• Identify opportunities to improve data quality, model efficiency, and pipeline performance.

Required Skills and Qualifications:
• 4-6 years of hands-on experience in data modeling, including conceptual, logical, and physical modeling.
• Strong expertise in Amazon Redshift and Redshift-specific modeling best practices.
• Proficiency with Erwin Data Modeler or similar modeling tools.
• Strong knowledge of SQL, with experience writing complex queries and performance tuning.
• Solid understanding of ETL processes and experience working alongside ETL engineers to integrate data from multiple sources.
• Familiarity with dimensional modeling, data warehousing principles, and star/snowflake schemas.
• Experience with metadata management, data governance, and maintaining modeling standards.
• Ability to work independently and collaboratively in a fast-paced, data-driven environment.
• Strong analytical and communication skills, with the ability to present technical concepts to non-technical stakeholders.

Preferred Qualifications:
• Experience working in a cloud-native data environment (AWS preferred).
• Exposure to other data modeling tools and cloud data warehouses is a plus.
• Familiarity with data catalog tools, data lineage tracing, and data quality frameworks.
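For illustration, the Redshift-specific physical modeling this role calls for often comes down to choosing distribution and sort keys when materializing a dimensional design. Below is a minimal, hypothetical sketch (cluster endpoint, credentials, and all table/column names are invented) using the open-source redshift_connector driver; the DDL itself is standard Redshift syntax.

```python
import redshift_connector

# Hypothetical connection details; replace with real cluster values.
conn = redshift_connector.connect(
    host="example-cluster.abc123.ap-south-1.redshift.amazonaws.com",
    database="analytics",
    user="modeler",
    password="********",
)

# A star-schema fact table: DISTKEY co-locates rows joined on customer_id,
# and SORTKEY speeds range-restricted scans on order_date.
ddl = """
CREATE TABLE IF NOT EXISTS fact_orders (
    order_id    BIGINT,
    customer_id BIGINT,
    order_date  DATE,
    amount      DECIMAL(12,2)
)
DISTSTYLE KEY
DISTKEY (customer_id)
SORTKEY (order_date);
"""

cursor = conn.cursor()
cursor.execute(ddl)
conn.commit()
```

Redshift has no conventional indexes, so distribution and sort keys carry most of the optimization burden that indexing strategies would carry elsewhere; joins on the distribution key avoid cross-node data shuffling.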
Posted 2 weeks ago
4.0 - 6.0 years
12 - 18 Lacs
Chennai, Bengaluru
Work from Office
Key Skills: Python, SQL, PySpark, Databricks, AWS, Data Pipelines, Data Integration, Airflow, Delta Lake, Redshift, S3, Data Security, Cloud Platforms, Life Sciences.

Roles & Responsibilities:
• Develop and maintain robust, scalable data pipelines for ingesting, transforming, and optimizing large datasets from diverse sources.
• Integrate multi-source data into performant, query-optimized formats such as Delta Lake, Redshift, and S3.
• Tune data processing jobs and storage layers to ensure cost efficiency and high throughput.
• Automate data workflows for ingestion, transformation, and reporting using orchestration tools such as Airflow and the Databricks APIs.
• Implement data validation and quality checks to ensure reliable and accurate data.
• Manage and optimize AWS and Databricks infrastructure to support scalable data operations.
• Lead cloud platform migrations and upgrades, transitioning legacy systems to modern, cloud-native solutions.
• Enforce security best practices (e.g., IAM, data encryption) and ensure compliance with regulatory standards.
• Collaborate with cross-functional teams, including data scientists, analysts, and business stakeholders, to deliver data solutions.

Experience Requirements:
• 4-6 years of hands-on experience in data engineering, with expertise in Python, SQL, PySpark, Databricks, and AWS.
• Strong background in designing and building data pipelines and optimizing data storage and processing.
• Proficiency with cloud services such as AWS (S3, Redshift, Lambda) for building scalable data solutions.
• Hands-on experience with containerized environments and orchestration tools like Airflow for automating data workflows.
• Expertise in data migration strategies and transitioning legacy data systems to modern cloud platforms.
• Experience with performance tuning, cost optimization, and lifecycle management of cloud data solutions.
• Familiarity with regulatory compliance (GDPR, HIPAA) and security practices (IAM, encryption).
• Experience in the Life Sciences or Pharma domain is highly preferred, with an understanding of industry-specific data requirements.
• Strong problem-solving abilities, with a focus on delivering high-quality data solutions that meet business needs.

Education: Any graduation.
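To make the pipeline responsibilities above concrete, here is a minimal, hypothetical PySpark stage that lands cleaned data in Delta Lake. It assumes a Spark cluster with Delta Lake available (e.g., a Databricks runtime); the bucket paths and column names are invented.

```python
from pyspark.sql import SparkSession, functions as F

# Assumes Delta Lake is on the cluster classpath (e.g., Databricks).
spark = SparkSession.builder.appName("ingest-orders").getOrCreate()

# Hypothetical raw source landed by an upstream extract.
raw = (spark.read
       .option("header", "true")
       .csv("s3://example-bucket/raw/orders/"))

cleaned = (raw
           .withColumn("order_ts", F.to_timestamp("order_ts"))
           .withColumn("order_date", F.to_date("order_ts"))
           .dropDuplicates(["order_id"]))

# Query-optimized, partitioned Delta layout for downstream consumers.
(cleaned.write
 .format("delta")
 .mode("append")
 .partitionBy("order_date")
 .save("s3://example-bucket/curated/orders/"))
```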
Posted 2 weeks ago
2.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Greetings from Infosys BPM Ltd.

We are hiring for Test Automation using Java and Selenium (with knowledge of testing processes and SQL), ETL DB Testing, and ETL Testing Automation. Please walk in for an interview on 14th & 15th July 2025 at our Chennai location.

Note: Please carry a copy of this email to the venue, and make sure you register your application before attending the walk-in. Use the link below to apply and register your application, and mention your Candidate ID on top of your resume.

https://career.infosys.com/jobdesc?jobReferenceCode=PROGEN-HRODIRECT-217871

Interview details:
• Interview dates: 14th & 15th July 2025
• Interview time: 10 AM to 1 PM
• Interview venue: TP 1/1, Central Avenue Techno Park, SEZ, Mahindra World City, Paranur, Tamil Nadu
• Work from office; rotational shifts
• A minimum of 2 years of project experience is mandatory

Job Description: Test Automation using Java and Selenium
• Java, Selenium automation, SQL, testing concepts, and Agile, with knowledge of the testing process
• Tools: Jira, ALM, IntelliJ
• Functional testing: UI test automation using Selenium and Java
• Financial domain experience

Job Description: ETL DB Testing
• Strong experience in ETL testing, data warehousing, and business intelligence
• Strong proficiency in SQL
• Experience with ETL tools (e.g., Informatica, Talend, AWS Glue, Azure Data Factory)
• Solid understanding of data warehousing concepts, database systems, and quality assurance
• Experience with test planning, test case development, and test execution
• Experience writing complex SQL queries and using SQL tools is a must, with exposure to various data analytical functions
• Familiarity with defect tracking tools (e.g., Jira)
• Experience with cloud platforms such as AWS, Azure, or GCP is a plus
• Experience with Python or other scripting languages for test automation is a plus
• Experience with data quality tools is a plus
• Experience testing large datasets
• Experience in agile development is a must
• Understanding of Oracle Database and UNIX/VMC systems is a must

Job Description: ETL Testing Automation
• Strong experience in ETL testing and automation
• Strong proficiency in SQL and experience with relational databases (e.g., Oracle, MySQL, PostgreSQL, SQL Server)
• Experience with ETL tools and technologies (e.g., Informatica, Talend, DataStage, Apache Spark)
• Hands-on experience in developing and maintaining test automation frameworks (an illustrative sketch follows this posting)
• Proficiency in at least one programming language (e.g., Python, Java)
• Experience with test automation tools (e.g., Selenium, PyTest, JUnit)
• Strong understanding of data warehousing concepts and methodologies
• Experience with CI/CD pipelines and version control systems (e.g., Git)
• Experience with cloud-based data warehouses such as Snowflake, Redshift, or BigQuery is a plus
• Experience with data quality tools is a plus

REGISTRATION PROCESS:
The Candidate ID and SHL Test (AMCAT ID) are mandatory to attend the interview. Please follow the instructions below to complete the registration. (Candidates without registration and assessment will not be allowed to interview.)

Candidate ID registration:
STEP 1: Visit https://career.infosys.com/joblist
STEP 2: Click on "Register", provide the required details, and submit.
STEP 3: Once submitted, your Candidate ID (100XXXXXXXX) will be generated.
STEP 4: The Candidate ID will be shared to your registered email ID.

SHL Test (AMCAT ID) registration:
This assessment is proctored; candidates are evaluated on basic analytics, English comprehension, and Writex (email writing).
STEP 1: Visit: https://apc01.safelinks.protection.outlook.com/?url=https%3A%2F%2Fautologin-talentcentral.shl.com%2F%3Flink%3Dhttps%3A%2F%2Famcatglobal.aspiringminds.com%2F%3Fdata%3DJTdCJTIybG9naW4lMjIlM0ElN0IlMjJsYW5ndWFnZSUyMiUzQSUyMmVuLVVTJTIyJTJDJTIyaXNBdXRvbG9naW4lMjIlM0ExJTJDJTIycGFydG5lcklkJTIyJTNBJTIyNDE4MjQlMjIlMkMlMjJhdXRoa2V5JTIyJTNBJTIyWm1abFpUazFPV1JsTnpJeU1HVTFObU5qWWpRNU5HWTFOVEU1Wm1JeE16TSUzRCUyMiUyQyUyMnVzZXJuYW1lJTIyJTNBJTIydXNlcm5hbWVfc3E5QmgxSWI5NEVmQkkzN2UlMjIlMkMlMjJwYXNzd29yZCUyMiUzQSUyMnBhc3N3b3JkJTIyJTJDJTIycmV0dXJuVXJsJTIyJTNBJTIyJTIyJTdEJTJDJTIycmVnaW9uJTIyJTNBJTIyVVMlMjIlN0Q%3D%26apn%3Dcom.shl.talentcentral%26ibi%3Dcom.shl.talentcentral%26isi%3D1551117793%26efr%3D1&data=05%7C02%7Comar.muqtar%40infosys.com%7Ca7ffe71a4fe4404f3dac08dca01c0bb3%7C63ce7d592f3e42cda8ccbe764cff5eb6%7C0%7C0%7C638561289526257677%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C0%7C%7C%7C&sdata=s28G3ArC9nR5S7J4j%2FV1ZujEnmYCbysbYke41r5svPw%3D&reserved=0
STEP 2: Click on "Start new test" and follow the instructions to complete the assessment.
STEP 3: Once completed, make a note of the AMCAT ID (access your AMCAT ID by clicking the three dots at the top-right corner of the screen).

NOTE: During registration, you'll be asked to provide the following information:
• Personal details: name, email address, mobile number, PAN number.
• Availability: acknowledgement of work schedule preferences (shifts, work from office, rotational weekends, 24/7 availability, transport boundary) and reason for career change.
• Employment details: current notice period and total annual compensation (CTC), in the format "390000 - 4 LPA" (example).
• Candidate information: 10-digit Candidate ID starting with 100XXXXXXX, gender, source (e.g., vendor name, Naukri/LinkedIn/Foundit, or direct), and location.
• Interview mode: walk-in.

Assessment guidelines:
• Attempt all questions in the SHL Assessment app.
• The assessment is proctored, so choose a quiet environment.
• Use a headset or Bluetooth headphones for clear communication.
• A passing score is required for further interview rounds.
• Five or more tab toggles, multiple faces detected, a face not detected, or any other malpractice will result in rejection.
• Once you've finished, submit the assessment and make a note of the 15-digit AMCAT ID used for the assessment.

Documents to carry:
• A note of your Candidate ID and AMCAT ID, along with your registered email ID.
• Two sets of your updated resume/CV (hard copy).
• Original government-issued ID proof, which is mandatory for security clearance.
• Personal headphones or a Bluetooth headset for the interview.
• Please do not carry laptops or cameras to the venue, as these are not allowed due to security restrictions.

Regards,
Infosys BPM Recruitment Team
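As an illustrative aside for candidates, the ETL test automation these roles describe frequently reduces to reconciling source and target systems from a test harness. A minimal, hypothetical PyTest sketch follows; the connection URLs and table names are invented, and the Redshift dialect assumes the sqlalchemy-redshift package is installed.

```python
import pytest
import sqlalchemy as sa

# Hypothetical connection URLs; real projects would pull these from config.
SOURCE_URL = "oracle+oracledb://user:pass@source-db/ORCL"
TARGET_URL = "redshift+psycopg2://user:pass@target-cluster/analytics"

@pytest.fixture(scope="module")
def engines():
    return sa.create_engine(SOURCE_URL), sa.create_engine(TARGET_URL)

def test_row_counts_match(engines):
    """The loaded target table should contain every source row."""
    src, tgt = engines
    with src.connect() as s, tgt.connect() as t:
        src_count = s.execute(sa.text("SELECT COUNT(*) FROM orders")).scalar()
        tgt_count = t.execute(sa.text("SELECT COUNT(*) FROM stg_orders")).scalar()
    assert src_count == tgt_count
```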
Posted 2 weeks ago
2.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Greetings from Infosys BPM Ltd. This is an exclusive women's walk-in drive.

We are hiring for WalkMe, ETL Testing + Python Programming, Automation Testing with Java/Selenium/BDD/Cucumber, Test Automation using Java and Selenium (with knowledge of testing processes and SQL), ETL DB Testing, and ETL Testing Automation. Please walk in for an interview on 16th July 2025 at our Pune location.

Note: Please carry a copy of this email to the venue, and make sure you register your application before attending the walk-in. Use the link below to apply and register your application, and mention your Candidate ID on top of your resume.

https://career.infosys.com/jobdesc?jobReferenceCode=PROGEN-HRODIRECT-217822

Interview details:
• Interview date: 16th July 2025
• Interview time: 10 AM to 1 PM
• Interview venue: Infosys BPM Limited, Plot No. 1, Building B1, Ground Floor, Hinjewadi Rajiv Gandhi Infotech Park, Hinjewadi Phase 1, Pune, Maharashtra - 411057
• Work from office
• A minimum of 2 years of project experience is mandatory

Job Description: WalkMe
• Design, develop, and deploy WalkMe solutions to enhance user experience and drive digital adoption
• Experience in task-based documentation, training, and content strategy
• Experience working in a multi-disciplinary team with geographically distributed co-workers
• Working knowledge of technologies such as CSS and JavaScript
• Project management and/or Jira experience
• Experience developing in-app guidance using tools such as WalkMe
• Strong experience in technical writing, instructional video, or guided learning in a software company

Job Description: ETL Testing + Python Programming
• Experience in data migration testing (ETL testing), both manual and automated, with Python programming
• Strong at writing complex SQL for data migration validations
• Work experience with the Agile Scrum methodology
• Functional testing: UI test automation using Selenium and Java
• Financial domain experience
• AWS knowledge is good to have

Job Description: Automation Testing with Java, Selenium, BDD, Cucumber
• Hands-on experience in automation; Java, Selenium, BDD, and Cucumber expertise is mandatory
• Banking and financial domain experience is good to have
• Automation talent with TOSCA skills and payment domain skills is preferable

Job Description: Test Automation using Java and Selenium
• Java, Selenium automation, SQL, testing concepts, and Agile, with knowledge of the testing process
• Tools: Jira, ALM, IntelliJ
• Functional testing: UI test automation using Selenium and Java
• Financial domain experience

Job Description: ETL DB Testing
• Strong experience in ETL testing, data warehousing, and business intelligence
• Strong proficiency in SQL
• Experience with ETL tools (e.g., Informatica, Talend, AWS Glue, Azure Data Factory)
• Solid understanding of data warehousing concepts, database systems, and quality assurance
• Experience with test planning, test case development, and test execution
• Experience writing complex SQL queries and using SQL tools is a must, with exposure to various data analytical functions
• Familiarity with defect tracking tools (e.g., Jira)
• Experience with cloud platforms such as AWS, Azure, or GCP is a plus
• Experience with Python or other scripting languages for test automation is a plus
• Experience with data quality tools is a plus
• Experience testing large datasets
• Experience in agile development is a must
• Understanding of Oracle Database and UNIX/VMC systems is a must

Job Description: ETL Testing Automation
• Strong experience in ETL testing and automation
• Strong proficiency in SQL and experience with relational databases (e.g., Oracle, MySQL, PostgreSQL, SQL Server)
• Experience with ETL tools and technologies (e.g., Informatica, Talend, DataStage, Apache Spark)
• Hands-on experience in developing and maintaining test automation frameworks
• Proficiency in at least one programming language (e.g., Python, Java)
• Experience with test automation tools (e.g., Selenium, PyTest, JUnit)
• Strong understanding of data warehousing concepts and methodologies
• Experience with CI/CD pipelines and version control systems (e.g., Git)
• Experience with cloud-based data warehouses such as Snowflake, Redshift, or BigQuery is a plus
• Experience with data quality tools is a plus

REGISTRATION PROCESS:
The Candidate ID and SHL Test (AMCAT ID) are mandatory to attend the interview. Please follow the instructions below to complete the registration. (Candidates without registration and assessment will not be allowed to interview.)

Candidate ID registration:
STEP 1: Visit https://career.infosys.com/joblist
STEP 2: Click on "Register", provide the required details, and submit.
STEP 3: Once submitted, your Candidate ID (100XXXXXXXX) will be generated.
STEP 4: The Candidate ID will be shared to your registered email ID.

SHL Test (AMCAT ID) registration:
This assessment is proctored; candidates are evaluated on basic analytics, English comprehension, and Writex (email writing).
STEP 1: Visit: https://apc01.safelinks.protection.outlook.com/?url=https%3A%2F%2Fautologin-talentcentral.shl.com%2F%3Flink%3Dhttps%3A%2F%2Famcatglobal.aspiringminds.com%2F%3Fdata%3DJTdCJTIybG9naW4lMjIlM0ElN0IlMjJsYW5ndWFnZSUyMiUzQSUyMmVuLVVTJTIyJTJDJTIyaXNBdXRvbG9naW4lMjIlM0ExJTJDJTIycGFydG5lcklkJTIyJTNBJTIyNDE4MjQlMjIlMkMlMjJhdXRoa2V5JTIyJTNBJTIyWm1abFpUazFPV1JsTnpJeU1HVTFObU5qWWpRNU5HWTFOVEU1Wm1JeE16TSUzRCUyMiUyQyUyMnVzZXJuYW1lJTIyJTNBJTIydXNlcm5hbWVfc3E5QmgxSWI5NEVmQkkzN2UlMjIlMkMlMjJwYXNzd29yZCUyMiUzQSUyMnBhc3N3b3JkJTIyJTJDJTIycmV0dXJuVXJsJTIyJTNBJTIyJTIyJTdEJTJDJTIycmVnaW9uJTIyJTNBJTIyVVMlMjIlN0Q%3D%26apn%3Dcom.shl.talentcentral%26ibi%3Dcom.shl.talentcentral%26isi%3D1551117793%26efr%3D1&data=05%7C02%7Comar.muqtar%40infosys.com%7Ca7ffe71a4fe4404f3dac08dca01c0bb3%7C63ce7d592f3e42cda8ccbe764cff5eb6%7C0%7C0%7C638561289526257677%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C0%7C%7C%7C&sdata=s28G3ArC9nR5S7J4j%2FV1ZujEnmYCbysbYke41r5svPw%3D&reserved=0
STEP 2: Click on "Start new test" and follow the instructions to complete the assessment.
STEP 3: Once completed, make a note of the AMCAT ID (access your AMCAT ID by clicking the three dots at the top-right corner of the screen).

NOTE: During registration, you'll be asked to provide the following information:
• Personal details: name, email address, mobile number, PAN number.
• Availability: acknowledgement of work schedule preferences (shifts, work from office, rotational weekends, 24/7 availability, transport boundary) and reason for career change.
• Employment details: current notice period and total annual compensation (CTC), in the format "390000 - 4 LPA" (example).
• Candidate information: 10-digit Candidate ID starting with 100XXXXXXX, gender, source (e.g., vendor name, Naukri/LinkedIn/Foundit, or direct), and location.
• Interview mode: walk-in.

Assessment guidelines:
• Attempt all questions in the SHL Assessment app.
• The assessment is proctored, so choose a quiet environment.
• Use a headset or Bluetooth headphones for clear communication.
• A passing score is required for further interview rounds.
• Five or more tab toggles, multiple faces detected, a face not detected, or any other malpractice will result in rejection.
• Once you've finished, submit the assessment and make a note of the 15-digit AMCAT ID used for the assessment.

Documents to carry:
• A note of your Candidate ID and AMCAT ID, along with your registered email ID.
• Two sets of your updated resume/CV (hard copy).
• Original government-issued ID proof, which is mandatory for security clearance.
• Personal headphones or a Bluetooth headset for the interview.
• Please do not carry laptops or cameras to the venue, as these are not allowed due to security restrictions.

Regards,
Infosys BPM Recruitment Team
Posted 2 weeks ago
2.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Greetings from Infosys BPM Ltd.

We are hiring for Test Automation using Java and Selenium (with knowledge of testing processes and SQL), ETL DB Testing, and ETL Testing Automation. Please walk in for an interview on 16th July 2025 at our Chennai location.

Note: Please carry a copy of this email to the venue, and make sure you register your application before attending the walk-in. Use the link below to apply and register your application, and mention your Candidate ID on top of your resume.

https://career.infosys.com/jobdesc?jobReferenceCode=PROGEN-HRODIRECT-217871

Interview details:
• Interview date: 16th July 2025
• Interview time: 10 AM to 1 PM
• Interview venue: TP 1/1, Central Avenue Techno Park, SEZ, Mahindra World City, Paranur, Tamil Nadu
• Work from office; rotational shifts
• A minimum of 2 years of project experience is mandatory

Job Description: Test Automation using Java and Selenium
• Java, Selenium automation, SQL, testing concepts, and Agile, with knowledge of the testing process
• Tools: Jira, ALM, IntelliJ
• Functional testing: UI test automation using Selenium and Java
• Financial domain experience

Job Description: ETL DB Testing
• Strong experience in ETL testing, data warehousing, and business intelligence
• Strong proficiency in SQL
• Experience with ETL tools (e.g., Informatica, Talend, AWS Glue, Azure Data Factory)
• Solid understanding of data warehousing concepts, database systems, and quality assurance
• Experience with test planning, test case development, and test execution
• Experience writing complex SQL queries and using SQL tools is a must, with exposure to various data analytical functions
• Familiarity with defect tracking tools (e.g., Jira)
• Experience with cloud platforms such as AWS, Azure, or GCP is a plus
• Experience with Python or other scripting languages for test automation is a plus
• Experience with data quality tools is a plus
• Experience testing large datasets
• Experience in agile development is a must
• Understanding of Oracle Database and UNIX/VMC systems is a must

Job Description: ETL Testing Automation
• Strong experience in ETL testing and automation
• Strong proficiency in SQL and experience with relational databases (e.g., Oracle, MySQL, PostgreSQL, SQL Server)
• Experience with ETL tools and technologies (e.g., Informatica, Talend, DataStage, Apache Spark)
• Hands-on experience in developing and maintaining test automation frameworks
• Proficiency in at least one programming language (e.g., Python, Java)
• Experience with test automation tools (e.g., Selenium, PyTest, JUnit)
• Strong understanding of data warehousing concepts and methodologies
• Experience with CI/CD pipelines and version control systems (e.g., Git)
• Experience with cloud-based data warehouses such as Snowflake, Redshift, or BigQuery is a plus
• Experience with data quality tools is a plus

REGISTRATION PROCESS:
The Candidate ID and SHL Test (AMCAT ID) are mandatory to attend the interview. Please follow the instructions below to complete the registration. (Candidates without registration and assessment will not be allowed to interview.)

Candidate ID registration:
STEP 1: Visit https://career.infosys.com/joblist
STEP 2: Click on "Register", provide the required details, and submit.
STEP 3: Once submitted, your Candidate ID (100XXXXXXXX) will be generated.
STEP 4: The Candidate ID will be shared to your registered email ID.

SHL Test (AMCAT ID) registration:
This assessment is proctored; candidates are evaluated on basic analytics, English comprehension, and Writex (email writing).
STEP 1: Visit: https://apc01.safelinks.protection.outlook.com/?url=https%3A%2F%2Fautologin-talentcentral.shl.com%2F%3Flink%3Dhttps%3A%2F%2Famcatglobal.aspiringminds.com%2F%3Fdata%3DJTdCJTIybG9naW4lMjIlM0ElN0IlMjJsYW5ndWFnZSUyMiUzQSUyMmVuLVVTJTIyJTJDJTIyaXNBdXRvbG9naW4lMjIlM0ExJTJDJTIycGFydG5lcklkJTIyJTNBJTIyNDE4MjQlMjIlMkMlMjJhdXRoa2V5JTIyJTNBJTIyWm1abFpUazFPV1JsTnpJeU1HVTFObU5qWWpRNU5HWTFOVEU1Wm1JeE16TSUzRCUyMiUyQyUyMnVzZXJuYW1lJTIyJTNBJTIydXNlcm5hbWVfc3E5QmgxSWI5NEVmQkkzN2UlMjIlMkMlMjJwYXNzd29yZCUyMiUzQSUyMnBhc3N3b3JkJTIyJTJDJTIycmV0dXJuVXJsJTIyJTNBJTIyJTIyJTdEJTJDJTIycmVnaW9uJTIyJTNBJTIyVVMlMjIlN0Q%3D%26apn%3Dcom.shl.talentcentral%26ibi%3Dcom.shl.talentcentral%26isi%3D1551117793%26efr%3D1&data=05%7C02%7Comar.muqtar%40infosys.com%7Ca7ffe71a4fe4404f3dac08dca01c0bb3%7C63ce7d592f3e42cda8ccbe764cff5eb6%7C0%7C0%7C638561289526257677%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C0%7C%7C%7C&sdata=s28G3ArC9nR5S7J4j%2FV1ZujEnmYCbysbYke41r5svPw%3D&reserved=0
STEP 2: Click on "Start new test" and follow the instructions to complete the assessment.
STEP 3: Once completed, make a note of the AMCAT ID (access your AMCAT ID by clicking the three dots at the top-right corner of the screen).

NOTE: During registration, you'll be asked to provide the following information:
• Personal details: name, email address, mobile number, PAN number.
• Availability: acknowledgement of work schedule preferences (shifts, work from office, rotational weekends, 24/7 availability, transport boundary) and reason for career change.
• Employment details: current notice period and total annual compensation (CTC), in the format "390000 - 4 LPA" (example).
• Candidate information: 10-digit Candidate ID starting with 100XXXXXXX, gender, source (e.g., vendor name, Naukri/LinkedIn/Foundit, or direct), and location.
• Interview mode: walk-in.

Assessment guidelines:
• Attempt all questions in the SHL Assessment app.
• The assessment is proctored, so choose a quiet environment.
• Use a headset or Bluetooth headphones for clear communication.
• A passing score is required for further interview rounds.
• Five or more tab toggles, multiple faces detected, a face not detected, or any other malpractice will result in rejection.
• Once you've finished, submit the assessment and make a note of the 15-digit AMCAT ID used for the assessment.

Documents to carry:
• A note of your Candidate ID and AMCAT ID, along with your registered email ID.
• Two sets of your updated resume/CV (hard copy).
• Original government-issued ID proof, which is mandatory for security clearance.
• Personal headphones or a Bluetooth headset for the interview.
• Please do not carry laptops or cameras to the venue, as these are not allowed due to security restrictions.

Regards,
Infosys BPM Recruitment Team
Posted 2 weeks ago
8.0 - 12.0 years
0 Lacs
Hyderabad, Telangana
On-site
As a Software Analysis Associate Advisor (Sr. BI Analyst and Visualization Expert) at Evernorth Health Services, a division of The Cigna Group, you will play a crucial role in creating pharmacy, care, and benefits solutions that improve health and increase vitality for millions of people. Your primary responsibility will be to design, develop, and maintain BI reports using Cognos and other BI tools. You will ensure that BI solutions are optimized for performance and scalability, and develop materialized views to support complex data aggregation and reporting requirements.

Your role will involve conducting in-depth data analysis to generate business insights and support strategic decision-making. You will identify trends, patterns, and anomalies in data and provide actionable recommendations. Collaborating closely with business stakeholders, you will gather requirements and translate them into BI solutions. Additionally, you will provide training and support to end users on BI tools and reporting capabilities.

Ensuring data accuracy and integrity in all BI outputs will be a key focus area. You will participate in data quality and governance initiatives to maintain reliable data sources. Staying current with the latest BI technologies and trends, you will continuously improve BI processes and methodologies, and you will work with distributed requirements and technical stakeholders to complete shared design and development tasks.

To excel in this role, you must have extensive experience with BI tools, particularly Cognos, and proficiency in SQL and other data querying languages. Strong data visualization skills and experience with tools like Cognos, Tableau, or Power BI are preferred. You should also have experience creating and managing materialized views in data warehouse and data lake environments, a solid understanding of OOP, design patterns, and JSON data structures, as well as familiarity with AWS, Redshift, and CI/CD practices.

With a minimum of 8 years of experience and a college degree in a related technical/business area, you will be recognized internally as the go-to person for the most complex software engineering assignments. Proven experience with the architecture, design, and development of large-scale enterprise application solutions, along with industry certifications in BI or data analysis, will be valuable assets in this role.

Evernorth is an Equal Opportunity Employer that actively encourages and supports diversity, equity, and inclusion efforts across the organization. Join us in our mission to make the prediction, prevention, and treatment of illness and disease more accessible to diverse client populations.
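For illustration, the materialized-view responsibility mentioned above usually takes the following shape. CREATE/REFRESH MATERIALIZED VIEW is standard Amazon Redshift SQL; the surrounding Python, the connection object, and all table/column names are hypothetical.

```python
# Minimal illustration of the materialized-view pattern described above.
# `conn` is any open DB-API connection to Redshift; names are invented.
MV_DDL = """
CREATE MATERIALIZED VIEW mv_daily_claims AS
SELECT claim_date,
       plan_id,
       COUNT(*)    AS claim_count,
       SUM(amount) AS total_amount
FROM   claims
GROUP  BY claim_date, plan_id;
"""

def refresh_reporting_layer(conn):
    cur = conn.cursor()
    # Re-run the stored query so BI reports read precomputed aggregates.
    cur.execute("REFRESH MATERIALIZED VIEW mv_daily_claims;")
    conn.commit()
```

Because the aggregate is precomputed and refreshed on a schedule, BI reports read it directly instead of re-scanning the underlying fact table on every query.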
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Hyderabad, Telangana
On-site
You will be responsible for developing and modifying programs using Python, AWS Glue/Redshift, and PySpark. Your role will involve writing effective, scalable code and identifying areas for program modification. You must have a strong understanding of AWS cloud technologies such as CloudWatch, Lambda, DynamoDB, API Gateway, and S3, along with experience creating APIs from scratch and integrating with third-party APIs. This is a full-time position based in Hyderabad, Chennai, or Bangalore, and the ideal candidate should have a notice period of no more than 15 days.
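As a minimal sketch of the Lambda-plus-API-Gateway pattern this role describes: the bucket name, key layout, and handler wiring below are hypothetical, and the response shape follows API Gateway's proxy integration.

```python
import json
import boto3

s3 = boto3.client("s3")  # created once, reused across warm invocations

def handler(event, context):
    """API Gateway (proxy integration) -> Lambda -> S3 lookup."""
    record_id = (event.get("queryStringParameters") or {}).get("id", "")
    # Hypothetical bucket and key layout.
    obj = s3.get_object(
        Bucket="example-data-bucket",
        Key=f"records/{record_id}.json",
    )
    body = json.loads(obj["Body"].read())
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(body),
    }
```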
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Pune, Maharashtra
On-site
The ideal candidate for this position should have advanced proficiency in Python, with a solid understanding of classes and inheritance. The candidate should also be well versed in EMR, Athena, Redshift, AWS Glue, IAM roles, CloudFormation (CFT is optional), Apache Airflow, Git, SQL, PySpark, OpenMetadata, and data lakehouse architectures. Experience with metadata management is highly desirable, particularly with AWS services such as S3.

The candidate should possess the following key skills:
• Creating ETL pipelines
• Deploying code on EMR
• Querying in Athena
• Creating Airflow DAGs for scheduling ETL pipelines (an illustrative sketch follows this posting)
• Knowledge of AWS Lambda and the ability to create Lambda functions

This is an individual-contributor role; the candidate is expected to manage client communication autonomously and proactively resolve technical issues without external assistance.
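A minimal Airflow DAG of the kind listed above might look like the sketch below, assuming Airflow 2.4+ (where the schedule parameter replaced schedule_interval); the DAG ID, task breakdown, and callable are hypothetical.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def run_etl_step(**context):
    # Placeholder for real extract/transform logic (e.g., submit an EMR step).
    print("running ETL step for", context["ds"])

with DAG(
    dag_id="daily_etl_pipeline",     # hypothetical name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=run_etl_step)
    transform = PythonOperator(task_id="transform", python_callable=run_etl_step)

    extract >> transform  # transform runs only after extract succeeds
```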
Posted 2 weeks ago
3.0 - 8.0 years
0 Lacs
Karnataka
On-site
We are looking for a skilled R Shiny programmer to create interactive reports that transform clinical trial data into actionable clinical insights. As an R Shiny programmer, you will design, develop, deploy, and optimize user-friendly web applications for analyzing and visualizing clinical data.

Responsibilities:
• Design, develop, test, and deploy interactive R Shiny web applications.
• Collaborate with data scientists, bioinformatics programmers, analysts, and stakeholders to understand application requirements and translate them into intuitive R Shiny applications.
• Translate complex data analysis and visualization tasks into clear, user-friendly interfaces.
• Write clean, efficient R code; conduct code reviews; and validate R programs.
• Integrate R Shiny applications with AWS services such as AWS Redshift.
• Implement unit tests to ensure quality and performance; benchmark and optimize application performance.
• Address inconsistencies in data and resolve analytical or reporting problems as they arise.
• Perform other duties as assigned.

Qualifications:
• Bachelor's degree in computer science, data science, or a related field, with 3 to 8 years of relevant experience.
• Proven expertise in building R Shiny applications.
• Strong proficiency in R programming, including data manipulation, statistical analysis, and data visualization.
• Experience using SQL.
• Understanding of user interface (UI) and user experience (UX) principles.
• Experience with gathering requirements, using RStudio and version control software, managing programming code, and working with POSIT Workbench, Connect, and/or Package Manager is preferred.
• Ability to manage multiple tasks and to work independently and in a team environment.
• Ability to communicate technical concepts effectively in written and oral formats.
• Experience with R Markdown, continuous integration/continuous delivery (CI/CD) pipelines, and AWS cloud computing services such as Redshift, EC2, S3, and CloudWatch.

Required education: BE/MTech/MCA in a computer-related field. A satisfactory background check is mandatory for this role.
Posted 2 weeks ago
6.0 - 12.0 years
0 Lacs
Karnataka
On-site
Your role as a Supervisor at Koch Global Services India (KGSI) will involve being part of a global team dedicated to creating new solutions and enhancing existing ones for Koch Industries. With over 120,000 employees worldwide, Koch Industries is a privately held organization engaged in manufacturing, trading, and investments. KGSI is being established in India to expand its IT operations and serve as an innovation hub within the IT function. This position offers the chance to join at the inception of KGSI and play a pivotal role in its development over the coming years. You will collaborate closely with international colleagues, gaining valuable global exposure.

In this role, you will lead a team responsible for developing innovative solutions for KGS and its customers, overseeing the performance and growth of data engineers at KGSI and ensuring the delivery of application solutions. Close collaboration with global counterparts will be essential for enterprise-wide delivery success. Your responsibilities will include mentoring team members, providing feedback, and coaching them for professional growth; understanding individual career aspirations; addressing challenges; and facilitating relevant training opportunities. Ensuring compensation aligns with Koch's philosophy and maintaining effective communication with HR will also be key. Timely delivery of projects is crucial, and you will be responsible for identifying and addressing delays proactively. By fostering knowledge sharing and best practices within the team, you will contribute to the overall success of KGSI. Staying current on market trends and on talent acquisition and retention strategies will be vital, and your ability to lead by example, communicate effectively, and solve problems collaboratively will be essential in driving team success.

To qualify for this role, you should hold a bachelor's or master's degree in computer science or information technology, with a minimum of 12 years of IT experience, including leadership roles in integration teams. A solid background in data engineering, AWS cloud migration, and team management is required, along with strong communication skills, customer focus, and a proactive mindset toward innovation. Experience with AWS Lambda, Glue, ETL projects, Python, SQL, and BI tools will be advantageous. Familiarity with manufacturing business processes and exposure to Scrum Master practices would be a plus.

Join Koch Global Services (KGS) to be part of a dynamic team that creates solutions to support business functions worldwide. With a global presence in India, Mexico, Poland, and the United States, KGS empowers employees to make a significant impact on a global scale.
Posted 2 weeks ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Data Engineer (Aerospace/Aviation Background)

Key Responsibilities:
• Lead and manage end-to-end data engineering projects, collaborating with cross-functional teams including analytics, product, and engineering.
• Design and maintain scalable ETL/ELT pipelines using Redshift, SQL, and AWS services (e.g., S3, Glue, Lambda).
• Optimize Redshift clusters and SQL queries for performance and cost efficiency.
• Serve as the domain expert for data modeling, architecture, and warehousing best practices.
• Proactively identify and resolve bottlenecks and data quality issues.
• Mentor junior engineers and enforce coding and architectural standards.
• Own the data lifecycle, from ingestion and transformation to validation and delivery for reporting.

Qualifications:
• 5+ years of experience in data engineering or a related field.
• Proven expertise in AWS Redshift, advanced SQL, and modern data pipeline tools.
• Hands-on experience with data lakes, data warehousing, and distributed systems.
• Strong understanding of data governance, security, and performance tuning.
• Demonstrated ability to lead projects independently and drive them to completion.
• Excellent problem-solving, communication, and stakeholder management skills.
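For illustration, one canonical step in such a Redshift-centered ELT pipeline is bulk-loading staged S3 files with the COPY command. The sketch below is hypothetical (the bucket, table, and IAM role ARN are invented); COPY ... FORMAT AS PARQUET is standard Redshift syntax, and conn stands for any open DB-API connection to the cluster.

```python
# Minimal COPY-based load step; `conn` is an open connection to Redshift.
# Bucket, table, and IAM role are hypothetical placeholders.
COPY_SQL = """
COPY stg_flight_events
FROM 's3://example-bucket/staged/flight_events/'
IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-load'
FORMAT AS PARQUET;
"""

def load_staged_events(conn):
    cur = conn.cursor()
    cur.execute(COPY_SQL)  # Redshift parallelizes the load across slices
    conn.commit()
```

COPY is usually preferred over row-by-row INSERTs because the cluster ingests the staged files in parallel, which is where much of the cost and performance headroom in a Redshift pipeline lives.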
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Hyderabad, Telangana
On-site
Tezo is a new-generation Digital & AI solutions provider with a history of creating remarkable outcomes for our customers. We deliver exceptional experiences using cutting-edge analytics, data proficiency, technology, and digital excellence.

Job Overview:
The AWS Architect with data engineering skills will be responsible for designing, implementing, and managing scalable, robust, and secure cloud infrastructure and data solutions on AWS. This role requires a deep understanding of AWS services, data engineering best practices, and the ability to translate business requirements into effective technical solutions.

Key Responsibilities:

Architecture design:
• Design and architect scalable, reliable, and secure AWS cloud infrastructure.
• Develop and maintain architecture diagrams, documentation, and standards.

Data engineering:
• Design and implement ETL pipelines using AWS services such as Glue, Lambda, and Step Functions.
• Build and manage data lakes and data warehouses using AWS services like S3, Redshift, and Athena.
• Ensure data quality, governance, and security across all data platforms.

AWS services management:
• Utilize a wide range of AWS services (EC2, S3, RDS, Lambda, DynamoDB, etc.) to support various workloads and applications.
• Implement and manage CI/CD pipelines using AWS CodePipeline, CodeBuild, and CodeDeploy.
• Monitor and optimize the performance, cost, and security of AWS resources.

Collaboration and communication:
• Work closely with cross-functional teams, including software developers, data scientists, and business stakeholders.
• Provide technical guidance and mentorship to team members on best practices in AWS and data engineering.

Security and compliance:
• Ensure that all cloud solutions follow security best practices and comply with industry standards and regulations.
• Implement and manage IAM policies, roles, and access controls.

Innovation and improvement:
• Stay up to date with the latest AWS services, features, and best practices.
• Continuously evaluate and improve existing systems, processes, and architectures.
Posted 2 weeks ago
0 years
0 Lacs
Pune/Pimpri-Chinchwad Area
On-site
Software Engineer – Integration (Cloud)

Skills:
To be successful in this role as a cloud-focused Integration "Software Engineer – OSS Platform Engineering", you should possess the following skill sets:
• Deep expertise in cloud platforms (AWS, Azure, or GCP), infrastructure design, and cost optimization.
• Expertise in containerization and orchestration using Docker and Kubernetes (deployments, service mesh, etc.).
• Hands-on expertise with platform engineering and productization (for consumption by other apps as tenants) of open-source monitoring/logging tools (Prometheus, Grafana, ELK, and similar) and cloud-native equivalents.
• Strong knowledge of, and demonstrable hands-on experience with, middleware technologies (Kafka, API gateways, etc.) and data engineering tools/frameworks such as Apache Spark, Airflow, Flink, and the Hadoop ecosystem.

Some other highly valued skills include:
• Expertise building ELT pipelines and cloud/storage integrations, including data lake/warehouse integrations (Redshift, BigQuery, Snowflake, etc.).
• Solid understanding of DevOps tooling, GitOps, CI/CD, configuration management, Jenkins, build pipelines, and source control systems.
• Working knowledge of cloud infrastructure services: compute, storage, networking, hybrid connectivity, monitoring/logging, security, and IAM.
• SRE experience, including building and defining KPIs (SLIs/SLOs) using open-source tooling such as ELK and Prometheus, and various other instrumentation, telemetry, and log analytics tools.

You may be assessed on key critical skills relevant for success in this role, such as risk and controls, change and transformation, business acumen, strategic thinking, and digital and technology skills, as well as job-specific technical skills. This role is based in our Pune office.
Posted 2 weeks ago
0.0 - 3.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Job Information:
• Company: Yubi
• Date opened: 07/10/2025
• Job type: Full time
• Work experience: 1-3 years
• Industry: Technology
• Location: Bangalore, Karnataka, India - 560076

About Us:
Yubi stands for ubiquitous. But Yubi will also stand for transparency, collaboration, and the power of possibility. From being a disruptor in India's debt market to marching towards global corporate markets, and from one product to one holistic product suite with seven products, Yubi is the place to unleash potential. Freedom, not fear. Avenues, not roadblocks. Opportunity, not obstacles.

Job Description: Data Engineer 2

Position summary:
As a Data Engineer, you will be part of a highly talented data engineering team, responsible for developing reusable capabilities and tools to automate various types of data processing pipelines. You will contribute to different stages of data engineering, including data acquisition, ingestion, processing, pipeline monitoring, and data validation. Your contribution will be crucial in keeping various data ingestion and processing pipelines running successfully, and in ensuring that the data points available in the data lake are up to date, valid, and usable.

Technology experience:
• 3+ years of experience in data engineering.
• Comfortable and hands-on with Python programming.
• Strong experience working with RDBMS and NoSQL systems.
• Strong experience with the AWS ecosystem, with hands-on experience in components such as Airflow, EMR, Redshift, S3, Athena, and PySpark.
• Strong experience developing REST APIs with Python, using frameworks such as Flask or FastAPI.
• Prior experience with crawling libraries such as BeautifulSoup in Python is desirable.
• Proven ability to work with SQL, including writing complex queries to retrieve key metrics.
• Skilled in connecting to, exploring, and understanding upstream data.
• Experience working with various data lake storage formats, with the ability to choose among them based on the use case.

Responsibilities:
• Design and build scalable data pipelines that can handle large volumes of data.
• Develop ETL/ELT pipelines, extracting data from upstream sources and syncing it to the data lake in Parquet, Iceberg, or Delta formats.
• Keep data pipelines optimized and running successfully to ensure business continuity.
• Collaborate with cross-functional teams to source all the data required for business use cases.
• Stay up to date with emerging data technologies and trends to continuously improve our data infrastructure and architecture.
• Follow best practices in data querying and manipulation to ensure data integrity.
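Since the posting calls out REST API development with Flask or FastAPI, here is a minimal, hypothetical FastAPI sketch of the kind of service a data engineering team might expose; the endpoint, model, and in-memory state are all invented for illustration.

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class DatasetStatus(BaseModel):
    name: str
    last_synced: str
    row_count: int

# Hypothetical in-memory stand-in for pipeline state kept in a real store.
_STATUS = {
    "orders": DatasetStatus(name="orders", last_synced="2025-07-01", row_count=12345),
}

@app.get("/datasets/{name}", response_model=DatasetStatus)
def get_dataset_status(name: str):
    """Expose pipeline freshness so downstream teams can check sync state."""
    if name not in _STATUS:
        raise HTTPException(status_code=404, detail="unknown dataset")
    return _STATUS[name]
```

Assuming the file is named app.py, it could be served locally with `uvicorn app:app`.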
Posted 2 weeks ago
0.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Basic qualifications:
• 3+ years of analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
• Experience with data visualization using Tableau, QuickSight, or similar tools.
• Experience with data modeling, warehousing, and building ETL pipelines.
• Experience with statistical analysis packages such as R, SAS, and MATLAB.
• Experience using SQL to pull data from a database or data warehouse, and scripting experience (Python) to process data for modeling.

Our Customer Experience and Business Trends (CXBT) team is seeking a skilled and motivated Business Intelligence Engineer (BIE) to analyze and deliver insights that help us better serve customers. Our team within the CXBT organization is called Benchmarking, Economics, Analytics, and Measurement (BEAM). BEAM is a central team consisting of economists, analysts (business intelligence), and measurement scientists (data scientists). Our mission is to drive customer experience (CX) improvement through science modeling and quantitative data analytics. Our core functional skills include data collection, science modeling, insights reporting, and automation.

The right candidate is passionate about understanding customer needs, perceptions, and experiences; dives deep into complex problems; and continuously strives to deliver deeper insights. The person in this role will innovate, build new methodologies to generate insights, and make recommendations that drive actions directly impacting our current and future customers. A successful candidate will possess excellent analytical skills and the ability to work collaboratively to influence business leaders at all levels, including senior management.

Key job responsibilities:
• Own, design, develop, document, and manage scalable solutions for new and ongoing analyses, metrics, reports, and dashboards to support business needs.
• Identify new data sources and invent new methodologies and approaches to understand and improve customer experiences.
• Drive efforts to simplify, automate, and standardize processes across the team to increase efficiency, expand scope, and amplify impact on customer experience.
• Articulate assumptions, methodologies, results, and implications.
• Present analyses to both technical and non-technical stakeholders, ensuring clarity and understanding.

About the team:
Customer Experience and Business Trends (CXBT) is an organization made up of a diverse suite of functions dedicated to deeply understanding and improving customer experience, globally. We are a team of builders that develops products, services, ideas, and various ways of leveraging data to influence product and service offerings, for almost every business at Amazon and for every customer (e.g., consumers, developers, sellers/brands, employees, investors, streamers, gamers). Our approach is based on determining the customer need, along with problem solving, and we work backwards from there. We use technical and non-technical approaches and stay aware of industry and business trends. We are a global team made up of a diverse set of profiles, skills, and backgrounds, including product managers, computer vision experts, solutions architects, data scientists, business intelligence engineers, business analysts, risk managers, and more.

Preferred qualifications:
• Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift.
• Experience in data mining, ETL, etc., and using databases in a business environment with large-scale, complex datasets.

Our inclusive culture empowers Amazonians to deliver the best results for our customers.
If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Posted 2 weeks ago
10.0 - 17.0 years
0 Lacs
Hyderabad, Telangana
On-site
We have an exciting opportunity for an ETL Data Architect position with an AI/ML-driven SaaS product company in Hyderabad. As an ETL Data Architect, you will play a crucial role in designing and implementing a robust data access layer that provides consistent data access over the underlying heterogeneous storage layer. You will also develop and enforce data governance policies to ensure data security, quality, and compliance across all systems.

In this role, you will lead the architecture and design of data solutions that leverage the latest tech stack and AWS cloud services. Collaboration with product managers, tech leads, and cross-functional teams will be essential to align data strategy with business objectives. Additionally, you will oversee performance optimization, scalability, and reliability of data systems while guiding and mentoring team members on data architecture, design, and problem solving.

The ideal candidate should have at least 10 years of experience in data-related roles, with a minimum of 5 years in a senior leadership position overseeing data architecture and infrastructure, and a deep background in designing and implementing enterprise-level data infrastructure, preferably in a SaaS environment. Extensive knowledge of data architecture principles, data governance frameworks, security protocols, and performance optimization techniques is essential. Hands-on experience with AWS services such as RDS, Redshift, S3, Glue, and DocumentDB, as well as other technologies such as MongoDB and Snowflake, is highly desirable. Familiarity with big data technologies (e.g., Hadoop, Spark) and modern data warehousing solutions is a plus. Proficiency in at least one programming language (e.g., Node.js, Java, Golang, Python) is a must.

Excellent communication skills are crucial, including the ability to translate complex technical concepts for non-technical stakeholders, along with proven leadership experience spanning team management and cross-functional collaboration. A bachelor's degree in computer science, information systems, or a related field is necessary; a master's degree is preferred.

Preferred qualifications include experience with Generative AI and Large Language Models (LLMs) and their applications in data solutions, as well as familiarity with financial back-office operations and the FinTech domain. The role calls for staying current on emerging trends in data technology, particularly AI/ML applications in finance.

Industry: IT Services and IT Consulting
Posted 2 weeks ago
4.0 years
0 Lacs
Greater Kolkata Area
On-site
Roles and Responsibilities:
• Build PySpark-based data ingestion pipelines for Adobe Experience Platform.
• Design Redshift schemas and optimize data loading.
• Create performant SQL for reporting and validation.
• Support data model implementation and migration across sandboxes.
• Troubleshoot ingestion and configuration issues in AEP.
• Contribute to internal innovation and present complex use cases.

Skills and Qualifications:
• 4+ years of experience with PySpark and Redshift.
• Hands-on experience with AWS Glue, Lambda, Kafka, and Athena.
• Proficiency in SQL performance tuning and analytics.
• Familiarity with REST APIs, data modeling, and ingestion tools.
• Experience with ETL tools such as Talend and Informatica.
• Ability to work independently across multiple engagements.
• Strong documentation and communication capabilities.
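For illustration, a Glue-based ingestion job of the kind this role involves typically starts from Glue's standard PySpark scaffolding. The sketch below uses the standard awsglue APIs but only runs inside the Glue runtime; the catalog database, table name, and output path are hypothetical.

```python
import sys

from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

# Standard Glue job scaffolding; executes only inside the Glue runtime.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Hypothetical catalog table staged for ingestion.
events = glue_context.create_dynamic_frame.from_catalog(
    database="raw_zone", table_name="web_events"
)
deduped = events.toDF().dropDuplicates(["event_id"])

# Write query-friendly Parquet back to S3 for downstream loads.
deduped.write.mode("overwrite").parquet("s3://example-bucket/curated/web_events/")

job.commit()
```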
Posted 2 weeks ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description:
As a Data Engineer, you will be part of a Data and Analytics (D&A) team responsible for building data pipelines that enable us to make informed decisions across the entire organization. This is a great opportunity to make a real impact on the course of the company, which makes data-based decisions as part of its Data + Analytics strategy.

The Data Engineer is responsible for the design, development, testing, and implementation of automated data pipelines for the Enterprise Data Warehouse, hosted in the cloud. The Data Engineer works closely with Business Intelligence / Analytics teams and business users to understand requirements, translate them into technical designs, develop data pipelines, and implement solutions in the Enterprise Data Warehouse (Redshift).

Primary responsibilities include:
• Analyze existing stored procedures, and create new ones, involving complex data models and business rules.
• Build data pipelines using ETL transformation tools such as Informatica or AWS Glue.
• Actively participate in all phases of the project cycle, from ideation to post-implementation stabilization.
• Work with business and technical peers to define how best to meet requirements, balancing speed and robustness.
• Build high-quality, maintainable SQL OLAP/analytic functions following established patterns and coding practices (see the sketch after this posting).
• Analyze technical data to establish solutions that achieve complex data transformations.
• Participate in and perform testing to ensure data quality and integrity via unit, integration, regression, and UAT testing.
• Create and maintain process design, data model, and operations documentation.
• Assist in the maintenance of the codebase, unit tests, and related technical design docs and configurations.
• Engage and collaborate with stakeholders via the Agile process, identifying and mitigating risks and issues as needed.
• Maintain software velocity and quality for deliverables, holding oneself accountable to commitments.

Job requirements (minimum competencies required for job performance):
• Experience in PL/SQL scripting and query optimization, required.
• Experience with AWS (Amazon Web Services) Redshift, Oracle, or PostgreSQL, preferred.
• Experience with Informatica PowerCenter and/or Informatica Cloud / IDMC, preferred.
• Experience in data model design, dimensional data modeling, and complex stored procedure development, required.
• Strong analytical skills, synthesizing information with attention to detail and accuracy to establish patterns and solutions.
• Experience with AWS services, e.g., S3, PySpark, Glue, Redshift, Lambda, preferred.
• Experience with data lakehouse platforms, e.g., Databricks, Snowflake, preferred.
• Experience in scripting languages, e.g., Python, Scala, Java, Unix shell, Bash, preferred.
• Experience operating in Agile and Waterfall development methodologies, preferred.
• Experience building data visualization solutions using BI platforms, e.g., Tableau, Power BI, Qlik, preferred.
• Capable of balancing technology ideals and business objectives, evaluating options and implications.
• Strong written and verbal communication skills.
• Manages and prioritizes work effectively with minimal supervision, seeking and offering help as needed to achieve goals.
• Adaptable to change and able to work independently and as part of a team.
• Applies curiosity and creativity to solve problems, seeking opportunities and overcoming challenges with resourcefulness.
• High bias for action in meeting commitments and deadlines; effectively sees, communicates, and mitigates risks and issues.
• Active participant in the development community; seeks and offers guidance, coaching, and professional development.
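Following up on the analytic-functions bullet above, here is a minimal PySpark sketch of the window-function pattern it refers to; the data and column names are invented, and the logic mirrors SQL's ROW_NUMBER() OVER (PARTITION BY ... ORDER BY ...).

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("analytic-fn-demo").getOrCreate()

# Hypothetical orders data; in the warehouse this would come from a table.
orders = spark.createDataFrame(
    [("c1", "2025-01-05", 120.0),
     ("c1", "2025-02-10", 80.0),
     ("c2", "2025-01-20", 200.0)],
    ["customer_id", "order_date", "amount"],
)

# Equivalent of: ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY order_date DESC)
w = Window.partitionBy("customer_id").orderBy(F.col("order_date").desc())

latest = (orders
          .withColumn("rn", F.row_number().over(w))
          .filter("rn = 1")   # keep each customer's most recent order
          .drop("rn"))

latest.show()
```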
Posted 2 weeks ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Summary Position Summary Data Analytics Engineer – CL4 Role Overview : As a Data Analytics Engineer , you will actively engage in your engineering craft, taking a hands-on approach to multiple high-visibility projects. Your expertise will be pivotal in delivering solutions that delight customers and users, while also driving tangible value for Deloitte's business investments. You will leverage your extensive engineering craftmanship across multiple programming languages and modern frameworks, consistently demonstrating your strong track record in delivering high-quality, outcome-focused solutions. The ideal candidate will be a dependable team player, collaborating with cross-functional teams to design, develop, and deploy advanced software solutions. Key Responsibilities : Outcome-Driven Accountability: Embrace and drive a culture of accountability for customer and business outcomes. Develop engineering solutions that solve complex problems with valuable outcomes, ensuring high-quality, lean designs and implementations. Technical Leadership and Advocacy: Serve as the technical advocate for products, ensuring code integrity, feasibility, and alignment with business and customer goals. Lead requirement analysis, component design, development, unit testing, integrations, and support. Engineering Craftsmanship: Maintain accountability for the integrity of code design, implementation, quality, data, and ongoing maintenance and operations. Stay hands-on, self-driven, and continuously learn new approaches, languages, and frameworks. Create technical specifications, and write high-quality, supportable, scalable code ensuring all quality KPIs are met or exceeded. Demonstrate collaborative skills to work effectively with diverse teams. Customer-Centric Engineering: Develop lean engineering solutions through rapid, inexpensive experimentation to solve customer needs. Engage with customers and product teams before, during, and after delivery to ensure the right solution is delivered at the right time. Incremental and Iterative Delivery: Adopt a mindset that favors action and evidence over extensive planning. Utilize a learning-forward approach to navigate complexity and uncertainty, delivering lean, supportable, and maintainable solutions. Cross-Functional Collaboration and Integration: Work collaboratively with empowered, cross-functional teams including product management, experience, and delivery. Integrate diverse perspectives to make well-informed decisions that balance feasibility, viability, usability, and value. Foster a collaborative environment that enhances team synergy and innovation. Advanced Technical Proficiency: Possess deep expertise in modern software engineering practices and principles, including Agile methodologies and DevSecOps to deliver daily product deployments using full automation from code check-in to production with all quality checks through SDLC lifecycle. Strive to be a role model, leveraging these techniques to optimize solutioning and product delivery. Demonstrate understanding of the full lifecycle product development, focusing on continuous improvement, and learning. Domain Expertise: Quickly acquire domain-specific knowledge relevant to the business or product. Translate business/user needs, architectures, and data designs into technical specifications and code. Be a valuable, flexible, and dedicated team member, supportive of teammates, and focused on quality and tech debt payoff. 
Effective Communication and Influence: Exhibit exceptional communication skills, capable of articulating complex technical concepts clearly and compellingly. Inspire and influence teammates and product teams through well-structured arguments and trade-offs supported by evidence. Create coherent narratives that align technical solutions with business objectives. Engagement and Collaborative Co-Creation: Engage and collaborate with product engineering teams at all organizational levels, including customers as needed. Build and maintain constructive relationships, fostering a culture of co-creation and shared momentum towards achieving product goals. Align diverse perspectives and drive consensus to create feasible solutions. The team: US Deloitte Technology Product Engineering has modernized software and product delivery, creating a scalable, cost-effective model that focuses on value and outcomes and leverages a progressive and responsive talent structure. As Deloitte’s primary internal development team, Product Engineering delivers innovative digital solutions to businesses, service lines, and internal operations with proven bottom-line results and outcomes. It helps power Deloitte’s success. It is the engine that drives Deloitte, serving many of the world’s largest, most respected companies. We develop and deploy cutting-edge internal and go-to-market solutions that help Deloitte operate effectively and lead in the market. Our reputation is built on a tradition of delivering with excellence. Key Qualifications: A bachelor’s degree in computer science, software engineering, or a related discipline. An advanced degree (e.g., MS) is preferred but not required. Experience is the most relevant factor. Strong data engineering foundation with a deep understanding of data structures, algorithms, code instrumentation, etc. 5+ years of proven experience with data ETL and ELT tools (such as ADF, Alteryx, cloud-native tools) and data warehousing tools (such as SAP HANA, Snowflake, ADLS, Amazon Redshift, Google Cloud BigQuery). 5+ years of experience with cloud-native engineering, using FaaS/PaaS/microservices on cloud hyperscalers like Azure, AWS, and GCP. Strong understanding of methodologies & tools like XP, Lean, SAFe, DevSecOps, SRE, ADO, GitHub, SonarQube, etc. Strong preference will be given to candidates with experience in AI/ML and GenAI. Excellent interpersonal and organizational skills, with the ability to handle diverse situations, complex projects, and changing priorities, behaving with passion, empathy, and care. How You Will Grow: At Deloitte, our professional development plans focus on helping people at every level of their career to identify and use their strengths to do their best work every day and excel in everything they do. Our purpose: Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Our people and culture: Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges.
This makes Deloitte one of the most rewarding places to work. Professional development: At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India. Benefits To Help You Thrive: At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially—and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment, and/or other criteria. Learn more about what working at Deloitte can mean for you. Recruiting tips: From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Requisition code: 306372
Posted 2 weeks ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Position Summary: Data Analytics Engineer – CL3 Role Overview: As a Data Analytics Engineer, you will actively engage in your engineering craft, taking a hands-on approach to multiple high-visibility projects. Your expertise will be pivotal in delivering solutions that delight customers and users, while also driving tangible value for Deloitte's business investments. You will leverage your extensive engineering craftsmanship across multiple programming languages and modern frameworks, consistently demonstrating your strong track record in delivering high-quality, outcome-focused solutions. The ideal candidate will be a dependable team player, collaborating with cross-functional teams to design, develop, and deploy advanced software solutions. Key Responsibilities: Outcome-Driven Accountability: Embrace and drive a culture of accountability for customer and business outcomes. Develop engineering solutions that solve complex problems with valuable outcomes, ensuring high-quality, lean designs and implementations. Technical Leadership and Advocacy: Serve as the technical advocate for products, ensuring code integrity, feasibility, and alignment with business and customer goals. Lead requirement analysis, component design, development, unit testing, integrations, and support. Engineering Craftsmanship: Maintain accountability for the integrity of code design, implementation, quality, data, and ongoing maintenance and operations. Stay hands-on, self-driven, and continuously learn new approaches, languages, and frameworks. Create technical specifications and write high-quality, supportable, scalable code, ensuring all quality KPIs are met or exceeded. Demonstrate collaborative skills to work effectively with diverse teams. Customer-Centric Engineering: Develop lean engineering solutions through rapid, inexpensive experimentation to solve customer needs. Engage with customers and product teams before, during, and after delivery to ensure the right solution is delivered at the right time. Incremental and Iterative Delivery: Adopt a mindset that favors action and evidence over extensive planning. Utilize a learning-forward approach to navigate complexity and uncertainty, delivering lean, supportable, and maintainable solutions. Cross-Functional Collaboration and Integration: Work collaboratively with empowered, cross-functional teams including product management, experience, and delivery. Integrate diverse perspectives to make well-informed decisions that balance feasibility, viability, usability, and value. Foster a collaborative environment that enhances team synergy and innovation. Advanced Technical Proficiency: Possess deep expertise in modern software engineering practices and principles, including Agile methodologies and DevSecOps, to deliver daily product deployments using full automation from code check-in to production, with all quality checks applied throughout the SDLC. Strive to be a role model, leveraging these techniques to optimize solutioning and product delivery. Demonstrate understanding of the full product development lifecycle, focusing on continuous improvement and learning. Domain Expertise: Quickly acquire domain-specific knowledge relevant to the business or product. Translate business/user needs, architectures, and data designs into technical specifications and code. Be a valuable, flexible, and dedicated team member, supportive of teammates, and focused on quality and tech debt payoff.
Effective Communication and Influence: Exhibit exceptional communication skills, capable of articulating complex technical concepts clearly and compellingly. Inspire and influence teammates and product teams through well-structured arguments and trade-offs supported by evidence. Create coherent narratives that align technical solutions with business objectives. Engagement and Collaborative Co-Creation: Engage and collaborate with product engineering teams at all organizational levels, including customers as needed. Build and maintain constructive relationships, fostering a culture of co-creation and shared momentum towards achieving product goals. Align diverse perspectives and drive consensus to create feasible solutions. The team: US Deloitte Technology Product Engineering has modernized software and product delivery, creating a scalable, cost-effective model that focuses on value and outcomes and leverages a progressive and responsive talent structure. As Deloitte’s primary internal development team, Product Engineering delivers innovative digital solutions to businesses, service lines, and internal operations with proven bottom-line results and outcomes. It helps power Deloitte’s success. It is the engine that drives Deloitte, serving many of the world’s largest, most respected companies. We develop and deploy cutting-edge internal and go-to-market solutions that help Deloitte operate effectively and lead in the market. Our reputation is built on a tradition of delivering with excellence. Key Qualifications: A bachelor’s degree in computer science, software engineering, or a related discipline. An advanced degree (e.g., MS) is preferred but not required. Experience is the most relevant factor. Strong data engineering foundation with a deep understanding of data structures, algorithms, code instrumentation, etc. 3+ years of proven experience with data ETL and ELT tools (such as ADF, Alteryx, cloud-native tools) and data warehousing tools (such as SAP HANA, Snowflake, ADLS, Amazon Redshift, Google Cloud BigQuery). 3+ years of experience with cloud-native engineering, using FaaS/PaaS/microservices on cloud hyperscalers like Azure, AWS, and GCP. Strong understanding of methodologies & tools like XP, Lean, SAFe, DevSecOps, SRE, ADO, GitHub, SonarQube, etc. Strong preference will be given to candidates with experience in AI/ML and GenAI. Excellent interpersonal and organizational skills, with the ability to handle diverse situations, complex projects, and changing priorities, behaving with passion, empathy, and care. How You Will Grow: At Deloitte, our professional development plans focus on helping people at every level of their career to identify and use their strengths to do their best work every day and excel in everything they do. Our purpose: Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Our people and culture: Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges.
This makes Deloitte one of the most rewarding places to work. Professional development: At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India. Benefits To Help You Thrive: At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially—and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment, and/or other criteria. Learn more about what working at Deloitte can mean for you. Recruiting tips: From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Requisition code: 306373
Posted 2 weeks ago
3.0 - 5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Engineer Introduction to role: Are you ready to make a significant impact in the world of biopharmaceuticals? AstraZeneca, a global leader in innovation-driven prescription medicines, is seeking a dedicated Data Engineer to join our Commercial IT Data Analytics & AI (DAAI) team. With operations in over 100 countries and headquarters in the United Kingdom, AstraZeneca offers a unique workplace culture that fosters innovation and collaboration. As a Data Engineer, you will play a crucial role in supporting and enhancing our data platforms built on AWS services. Your expertise in ETL, Data Warehousing, Databricks, and AWS applications will be vital in ensuring business continuity and driving efficiency. Are you up for the challenge? Accountabilities Monitor and maintain the health and performance of production systems and applications. Provide timely incident response, troubleshooting, and resolution for technical issues raised by users or monitoring tools. Perform root cause analysis for recurring issues and implement preventive measures. Investigate data anomalies, troubleshoot failures, and coordinate with relevant teams for resolution. Collaborate with development and infrastructure teams to support deployments and configuration changes. Maintain and update technical documentation, standard operating procedures, and knowledge bases. Ensure adherence to service-level agreements (SLAs) and minimize downtime or service disruptions. Manage user access, permissions, and security-related requests as per organizational policies. Participate in on-call rotations and provide after-hours support as needed. Communicate effectively with collaborators, providing status updates and post-incident reports. Proactively identify opportunities for automation and process improvement in support activities. Support data migration, upgrades, and transitions as required. Support business continuity and disaster recovery exercises as required. Essential Skills/Experience Education Background: B.E/B.Tech/MCA/MSc/BSc Overall Years of Experience: 3 to 5 years of experience Solid experience with SQL, data warehousing, and building ETL pipelines. Hands-on experience with AWS services, including EMR, EC2, S3, Athena, RDS, Databricks, and Redshift. Skilled in working with columnar databases such as Redshift, Cassandra, or BigQuery. Good understanding of ETL processes and data warehousing concepts. Familiarity with scheduling tools (especially Airflow is a plus). Able to write complex SQL queries for data extraction, transformation, and reporting (an illustrative Airflow-plus-SQL sketch follows this posting). Excellent communication skills and ability to work well with both technical and non-technical teams. Strong analytical and troubleshooting skills in complex data environments. Desirable Skills/Experience Experience with Databricks or Snowflake Proficient in scripting and programming languages such as Shell Scripting and Python Familiar with CI/CD using Bamboo Proficient in version control systems, including Bitbucket and GitHub Preferably experienced with release management processes Significant prior experience in an IT environment within the pharmaceutical or healthcare industry At AstraZeneca, we are committed to driving exciting transformation on our journey to becoming a digital and data-led enterprise. Our work connects across the entire business to power each function, influencing patient outcomes and improving lives. By unleashing the power of our latest innovations in data, machine learning, and technology, we turn complex information into life-changing insights.
Join us to work alongside leading experts in our specialist communities, where your contributions are recognized from the top. Ready to take the next step? Apply now to become part of our dynamic team! Date Posted 09-Jul-2025 Closing Date 13-Jul-2025 AstraZeneca embraces diversity and equality of opportunity. We are committed to building an inclusive and diverse team representing all backgrounds, with as wide a range of perspectives as possible, and harnessing industry-leading skills. We believe that the more inclusive we are, the better our work will be. We welcome and consider applications to join our team from all qualified candidates, regardless of their characteristics. We comply with all applicable laws and regulations on non-discrimination in employment (and recruitment), as well as work authorization and employment eligibility verification requirements.
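For illustration only, here is a minimal sketch of the kind of scheduled data check described in the skills list above: an Airflow DAG (assuming Airflow 2.4+ and psycopg2) that runs a SQL validation against Redshift. The table name, host, credentials, and DAG name are hypothetical assumptions, not details from the posting.

from datetime import datetime, timedelta

import psycopg2
from airflow import DAG
from airflow.operators.python import PythonOperator

QUALITY_SQL = """
    SELECT COUNT(*) FROM reporting.orders   -- hypothetical table
    WHERE order_id IS NULL;
"""

def check_null_order_ids(**_):
    # Real deployments would pull credentials from an Airflow Connection
    # or a secrets backend rather than hard-coding them (placeholders here).
    conn = psycopg2.connect(host="redshift.example.com", port=5439,
                            dbname="dwh", user="etl_user", password="***")
    try:
        with conn.cursor() as cur:
            cur.execute(QUALITY_SQL)
            null_count = cur.fetchone()[0]
        if null_count > 0:
            # Failing the task surfaces the incident to on-call support.
            raise ValueError(f"{null_count} orders have a NULL order_id")
    finally:
        conn.close()

with DAG(
    dag_id="daily_dwh_null_check",          # hypothetical DAG name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args={"retries": 1, "retry_delay": timedelta(minutes=5)},
) as dag:
    PythonOperator(task_id="check_null_order_ids",
                   python_callable=check_null_order_ids)

Raising an exception from the task is what ties the sketch back to the posting's incident-response duties: a failed run shows up in monitoring and triggers the on-call workflow.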
Posted 2 weeks ago
6.0 - 11.0 years
20 - 35 Lacs
Hyderabad
Hybrid
Greetings from Astrosoft Technologies! We are currently seeking a highly skilled and talented Senior AWS Data Engineer for our Hyderabad office. Join Astrosoft Technologies (https://www.astrosofttech.com/), a global award-winning leader in Data, Cloud, AI/ML, and Digital Innovation, founded in 2004 and headquartered in Florida, USA, with its corporate office in Hyderabad, India. We are looking for highly skilled professionals with strong cloud engineering and data pipeline expertise to be part of our fast-growing IT team in Hyderabad. If you're passionate about delivering scalable, real-time data solutions and have a strong foundation in AWS and big data technologies, we want to hear from you. Apply here: karthik.jangam@astrosofttech.com Role: Senior AWS Data Engineer Location: Gachibowli, Hyderabad (Vasavi Sky City) Work Mode: Hybrid (Work from Office Tue to Thu | WFH Mon & Fri) Job Type: Full-Time Shift: 12:30 PM to 9:30 PM IST Experience Required: 7+ years Key Responsibilities: Design and develop scalable data pipelines using Kafka, Kinesis, Spark, and Flink (a minimal sketch follows this posting). Strong experience in AWS services: S3, Glue, EMR, DMS, SNS, SQS, MWAA (Airflow). Proficiency in Python, Java, or Scala (Python preferred). Infrastructure automation with Terraform. Experience with ETL tools (ODI is a plus). Work with Oracle, Redshift, advanced SQL tuning, and physical DB optimization. Implement monitoring tools (CloudWatch, Splunk, Datadog) and SRE best practices. Collaborate with cross-functional teams to translate business needs into technical solutions. Desired Candidate Profile: 7+ years total experience; 4+ years in AWS Data Engineering. Hands-on, solution-driven mindset with strong critical thinking. AWS Certification preferred. Excellent communication and stakeholder engagement skills. Immediate joiners preferred. Why Join Astrosoft? H1B Sponsorship (based on performance/project), Daily Lunch & Dinner provided, Group Health Insurance, Skill Certifications & Learning Support, Competitive Leave Policy, Work in a collaborative, innovation-driven environment. Thanks & Regards, Karthik Kumar, HR TAG Lead-India, Astrosoft Technologies, Unit 1810, Level 18, Vasavi Sky City, Gachibowli, Hyderabad, Telangana 500081. Contact: +91-8712229084 Email: karthik.jangam@astrosofttech.com Winner Telangana - Best Employer Brand Award - 2024
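As a hedged illustration of the Kafka-plus-Spark pipelines named above, here is a minimal PySpark Structured Streaming sketch that lands Kafka events in S3 as partitioned Parquet. It assumes the spark-sql-kafka connector is on the classpath; the broker address, topic, and S3 paths are placeholder assumptions.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka_ingest_sketch").getOrCreate()

# Read a stream of raw events from a (hypothetical) Kafka topic.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker-1.example.com:9092")  # assumption
    .option("subscribe", "clickstream")                              # assumption
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers key/value as binary; keep the payload and stamp a date column
# so downstream consumers can prune partitions.
parsed = events.select(
    F.col("value").cast("string").alias("payload"),
    F.col("timestamp").alias("event_ts"),
).withColumn("ingest_date", F.to_date("event_ts"))

# Write micro-batches to the raw zone, with a checkpoint for exactly-once sinks.
query = (
    parsed.writeStream.format("parquet")
    .option("path", "s3://example-bucket/raw/clickstream/")              # assumption
    .option("checkpointLocation", "s3://example-bucket/checkpoints/clickstream/")
    .partitionBy("ingest_date")
    .trigger(processingTime="1 minute")
    .start()
)
query.awaitTermination()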
Posted 2 weeks ago
6.0 - 11.0 years
5 - 15 Lacs
Mumbai, Navi Mumbai
Work from Office
Job Title: SQL Developer Location: Mumbai Duration: Full-time Job Description We are seeking an experienced SQL Developer with a minimum of 6 years in SQL development and a strong command of Amazon Redshift. The ideal candidate will be responsible for designing, developing, and optimizing complex SQL queries and data models to support our data warehousing and reporting infrastructure. This role is based at our Mumbai (Airoli) office and requires hands-on expertise in working with large datasets and cloud-based data warehouses. Responsibilities Design, develop, and maintain complex SQL queries, procedures, functions, and views. Optimize queries for performance across large datasets, particularly within Amazon Redshift (see the illustrative sketch below). Work closely with data engineers, analysts, and business stakeholders to understand data needs and deliver reliable solutions. Create and maintain documentation related to database structures, data flows, and processes. Troubleshoot and resolve issues related to data accuracy, performance, and integration. Ensure data security, integrity, and compliance with organizational policies. Participate in code reviews and support continuous improvement of development practices. Requirements Minimum 6 years of experience in SQL development. Strong hands-on experience with Amazon Redshift is mandatory. Proficiency in writing and optimizing complex SQL queries. Experience with ETL processes and working with large-scale data environments. Familiarity with data modeling techniques and best practices. Strong problem-solving skills and attention to detail. Excellent communication and collaboration skills. Bachelor's degree in Computer Science, Information Systems, or related field (preferred).
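As a hedged illustration of the Redshift-specific optimization this role calls for, the sketch below creates a fact table with a distribution key and sort key, then runs a query shaped to exploit them. All table, column, host, and credential values are hypothetical placeholders.

import psycopg2

DDL = """
CREATE TABLE IF NOT EXISTS sales_fact (
    sale_id     BIGINT         NOT NULL,
    customer_id BIGINT         NOT NULL,
    sale_date   DATE           NOT NULL,
    amount      NUMERIC(12, 2)
)
DISTSTYLE KEY
DISTKEY (customer_id)   -- co-locates joins on customer_id on the same slice
SORTKEY (sale_date);    -- lets date-range filters skip blocks via zone maps
"""

TUNED_QUERY = """
SELECT customer_id, SUM(amount) AS total_amount
FROM sales_fact
WHERE sale_date >= DATE '2025-01-01'  -- leading sort-key filter prunes scans
GROUP BY customer_id;
"""

# Redshift speaks the Postgres wire protocol, so psycopg2 works for this sketch.
conn = psycopg2.connect(host="redshift.example.com", port=5439,
                        dbname="dwh", user="dev", password="***")
with conn, conn.cursor() as cur:
    cur.execute(DDL)
    cur.execute(TUNED_QUERY)
    rows = cur.fetchall()
conn.close()

The design choice being illustrated: distributing on the join key avoids network redistribution at query time, and filtering on the leading sort key lets Redshift skip whole blocks rather than scanning the table.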
Posted 2 weeks ago
0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Job Description Are You Ready to Make It Happen at Mondelēz International? Join our Mission to Lead the Future of Snacking. Make It With Pride. You will provide technical contributions to the data science process. In this role, you are the internally recognized expert in data, building infrastructure and data pipelines/retrieval mechanisms to support our data needs. How You Will Contribute You will: Operationalize and automate activities for efficiency and timely production of data visuals. Assist in providing accessibility, retrievability, security, and protection of data in an ethical manner. Search for new data sources and assess their accuracy. Build and maintain the transports/data pipelines and retrieve applicable data sets for specific use cases. Understand data and metadata to support consistency of information retrieval, combination, analysis, pattern recognition, and interpretation. Validate information from multiple sources and assess issues that might prevent the organization from making maximum use of its information assets. What You Will Bring A desire to drive your future and accelerate your career and the following experience and knowledge: Extensive experience in data engineering in a large, complex business with multiple systems such as SAP, internal and external data, etc., and experience setting up, testing, and maintaining new systems. Experience with a wide variety of languages and tools (e.g., scripting languages) to retrieve, merge, and combine data. Ability to simplify complex problems and communicate to a broad audience. In This Role As a Senior Data Engineer, you will have the opportunity to design and build scalable, secure, and cost-effective cloud-based data solutions. You will develop and maintain data pipelines to extract, transform, and load data into data warehouses or data lakes, ensuring data quality and validation processes to maintain data accuracy and integrity. You will ensure efficient data storage and retrieval for optimal performance, and collaborate closely with data teams, product owners, and other stakeholders to stay updated with the latest cloud technologies and best practices. Role & Responsibilities: Design and Build: Develop and implement scalable, secure, and cost-effective cloud-based data solutions. Manage Data Pipelines: Develop and maintain data pipelines to extract, transform, and load data into data warehouses or data lakes. Ensure Data Quality: Implement data quality and validation processes to ensure data accuracy and integrity. Optimize Data Storage: Ensure efficient data storage and retrieval for optimal performance. Collaborate and Innovate: Work closely with data teams and product owners, and stay updated with the latest cloud technologies and best practices. Technical Requirements: Programming: Python, PySpark, Go/Java. Database: SQL, PL/SQL. ETL & Integration: DBT, Databricks + DLT, AecorSoft, Talend, Informatica/Pentaho/Ab-Initio, Fivetran. Data Warehousing: SCD, Schema Types, Data Mart. Visualization: Databricks Notebook, Power BI (Optional), Tableau (Optional), Looker. GCP Cloud Services: BigQuery, GCS, Cloud Function, Pub/Sub, Dataflow, Dataproc, Dataplex. AWS Cloud Services: S3, Redshift, Lambda, Glue, CloudWatch, EMR, SNS, Kinesis. Azure Cloud Services: Azure Data Lake Gen2, Azure Databricks, Azure Synapse Analytics, Azure Data Factory, Azure Stream Analytics. Supporting Technologies: Graph Database/Neo4j, Erwin, Collibra, Ataccama DQ, Kafka, Airflow. (A minimal PySpark sketch of such a pipeline follows.)
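To make the extract-transform-load responsibilities above concrete, here is a minimal, hedged PySpark batch sketch. The source path, column names, dedupe key, and lake locations are assumptions chosen purely for illustration.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_daily_load").getOrCreate()

# Extract: raw CSV drop from an upstream system (hypothetical location).
orders = spark.read.option("header", True).csv("s3://example-bucket/raw/orders/")

# Transform: enforce types, apply a basic validity gate, and deduplicate on the
# business key so re-runs of the job stay idempotent.
cleaned = (
    orders.dropDuplicates(["order_id"])
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("order_id").isNotNull())
    .withColumn("load_date", F.current_date())
)

# Load: partitioned Parquet in the curated zone of the lake (hypothetical path).
(cleaned.write.mode("overwrite")
    .partitionBy("load_date")
    .parquet("s3://example-bucket/curated/orders/"))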
Soft Skills: Problem-Solving: The ability to identify and solve complex data-related challenges. Communication: Effective communication skills to collaborate with Product Owners, analysts, and stakeholders. Analytical Thinking: The capacity to analyze data and draw meaningful insights. Attention to Detail: Meticulousness in data preparation and pipeline development. Adaptability: The ability to stay updated with emerging technologies and trends in the data engineering field. Within-country relocation support is available, and for candidates voluntarily moving internationally, some minimal support is offered through our Volunteer International Transfer Policy. Business Unit Summary At Mondelēz International, our purpose is to empower people to snack right by offering the right snack, for the right moment, made the right way. That means delivering a broad range of delicious, high-quality snacks that nourish life's moments, made with sustainable ingredients and packaging that consumers can feel good about. We have a rich portfolio of strong brands globally and locally, including many household names such as Oreo, belVita and LU biscuits; Cadbury Dairy Milk, Milka and Toblerone chocolate; Sour Patch Kids candy and Trident gum. We are proud to hold the top position globally in biscuits, chocolate and candy and the second top position in gum. Our 80,000 makers and bakers are located in more than 80 countries and we sell our products in over 150 countries around the world. Our people are energized for growth and critical to us living our purpose and values. We are a diverse community that can make things happen—and happen fast. Mondelēz International is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, gender, sexual orientation or preference, gender identity, national origin, disability status, protected veteran status, or any other characteristic protected by law. Job Type: Regular | Data Science | Analytics & Data Science
Posted 2 weeks ago
5.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Your work days are brighter here. At Workday, it all began with a conversation over breakfast. When our founders met at a sunny California diner, they came up with an idea to revolutionize the enterprise software market. And when we began to rise, one thing that really set us apart was our culture. A culture which was driven by our value of putting our people first. And ever since, the happiness, development, and contribution of every Workmate is central to who we are. Our Workmates believe a healthy employee-centric, collaborative culture is the essential mix of ingredients for success in business. That’s why we look after our people, communities and the planet while still being profitable. Feel encouraged to shine, however that manifests: you don’t need to hide who you are. You can feel the energy and the passion; it's what makes us unique. Inspired to make a brighter work day for all and transform with us to the next stage of our growth journey? Bring your brightest version of you and have a brighter work day here. At Workday, we value our candidates’ privacy and data security. Workday will never ask candidates to apply to jobs through websites that are not Workday Careers. Please be aware of sites that may ask you to input your data in connection with a job posting that appears to be from Workday but is not. In addition, Workday will never ask candidates to pay a recruiting fee, or pay for consulting or coaching services, in order to apply for a job at Workday. About The Team The Enterprise Data & AI Technologies and Architecture (EDATA) organization is a dynamic and evolving team that is spearheading Workday’s growth through trusted data excellence, innovation, and architectural thought leadership. Equipped with an array of skills in data science, engineering, and analytics, this team orchestrates the flow of data across our growing company while ensuring data accessibility, accuracy, and security. With a relentless focus on innovation and efficiency, Workmates in EDATA enable the transformation of complex data sets into actionable insights that fuel strategic decisions and position Workday at the forefront of the technology industry. EDATA is a global team distributed across the U.S., India, and Canada. About The Role Join a pioneering organization at the forefront of technological advancement, dedicated to leveraging data-driven insights to transform industries and drive innovation.
We are seeking a highly skilled and motivated Data Quality Engineer to join our dynamic team. The ideal candidate is someone who loves to learn, is detail-oriented, and has exceptional critical thinking and analytical skills. As a Data Quality Engineer, you will play a critical role in ensuring the accuracy, consistency, and completeness of our data across the enterprise data platform. You will be responsible for designing, developing, and implementing data quality processes, standards, and best practices across various data sources and systems to identify and resolve data issues. This role offers an exciting opportunity to learn and to collaborate with cross-functional teams, including data engineers, data scientists, and business analysts, to drive data quality improvements and enhance decision-making capabilities. Responsibilities The incumbent will be responsible for (but not limited to) the following: Design and automate data quality checks; resolve issues and improve data pipelines with engineering and product teams (a minimal sketch of such a check appears below). Collaborate with stakeholders to define data quality requirements and best practices. Develop test automation strategies and integrate checks into CI/CD pipelines. Monitor data quality metrics, identify root causes, and drive continuous improvements. Provide guidance on data quality standards across projects. Work with Data Ops to address production issues and document quality processes. About You Basic Qualifications 5+ years of experience as a Data Quality Engineer in data quality management or data governance. Good understanding of data management concepts, including data profiling, data cleansing, and data integration. Proficiency in SQL for data querying and manipulation. Ability to develop and execute automated data quality tests using SQL, Python (PySpark), and data quality frameworks. Hands-on experience with cloud platforms (AWS/GCP), data warehouses (Snowflake, Databricks, Redshift), and integration tools (SnapLogic, dbt, Talend, etc.). Exposure to data quality and monitoring tools (e.g., Acceldata, Tricentis) and CI/CD or DevOps practices is a plus. Other Qualifications Proven ability to prioritize and manage multiple tasks in a fast-paced environment. Certification in relevant technologies or data management disciplines is a plus. Analytical mindset with the ability to think strategically and make data-driven decisions. If you are a results-driven individual with a passion for data and analytics and a proven track record in data quality assurance, we invite you to apply for this exciting opportunity. Join our team and contribute to the success of our data-driven initiatives. Our Approach to Flexible Work With Flex Work, we’re combining the best of both worlds: in-person time and remote. Our approach enables our teams to deepen connections, maintain a strong community, and do their best work. We know that flexibility can take shape in many ways, so rather than a number of required days in-office each week, we simply spend at least half (50%) of our time each quarter in the office or in the field with our customers, prospects, and partners (depending on role). This means you'll have the freedom to create a flexible schedule that caters to your business, team, and personal needs, while being intentional to make the most of time spent together. Those in our remote "home office" roles also have the opportunity to come together in our offices for important moments that matter.
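As a hedged sketch of the automated data quality checks described above, the snippet below runs completeness, uniqueness, and validity checks in PySpark and fails the run when any check trips. The table location, column names, and validation rules are illustrative assumptions.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq_checks_sketch").getOrCreate()
df = spark.read.parquet("s3://example-bucket/curated/customers/")  # hypothetical

total = df.count()
checks = {
    # Completeness: the business key must never be null.
    "null_customer_id": df.filter(F.col("customer_id").isNull()).count(),
    # Uniqueness: no duplicate business keys.
    "duplicate_customer_id": total - df.select("customer_id").distinct().count(),
    # Validity: email must match a simple pattern.
    "bad_email": df.filter(~F.col("email").rlike(r"^[^@\s]+@[^@\s]+$")).count(),
}

failed = {name: count for name, count in checks.items() if count > 0}
if failed:
    # Raising here fails the pipeline run in CI/CD and alerts Data Ops.
    raise AssertionError(f"Data quality checks failed: {failed}")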
Are you being referred to one of our roles? If so, ask your connection at Workday about our Employee Referral process!
Posted 2 weeks ago