7.0 - 12.0 years
32 - 40 Lacs
Kolkata, Mumbai, New Delhi
Work from Office
The Customer Excellence Advisory Lead (CEAL) team aims to enable customers to fully leverage their data by offering top-tier architectural guidance and design. As part of the Oracle Analytics Service Excellence organization, our team includes Solution Architects who specialize in Oracle Analytics Cloud, Oracle Analytics Server, and Fusion Data Intelligence. Our main goal is to ensure the successful adoption of Oracle Analytics. We engage with customers and partners globally, building trust in Oracle Analytics. We also collaborate with Product Management to enhance product offerings and share our insights through blogs, webinars, and demonstrations. The candidate will collaborate with strategic FDI customers and partners, guiding them towards an optimized implementation and crafting a go-live plan focused on achieving high usage. Responsibilities: Proactively recognize customer requirements, uncover unaddressed needs, and develop potential solutions across various customer groups. Assist in shaping intricate product and program strategies based on customer interactions, and effectively implement solutions and projects for customers that scale to complex, multi-enterprise environments. Collaborate with customers and/or internal stakeholders to communicate the strategy, synchronize the timeline for solution implementation, provide updates, and adjust plans promptly as objectives evolve. Prepare for complex product- or solution-related inquiries or challenges that customers may present. Gather and convey detailed product insights driven by customer needs and requirements. Promote understanding of customer complexities and the value propositions of various programs (e.g., by speaking at events, team meetings, and product reviews) to key internal stakeholders.
Primary Skills: Must possess over 7 years of experience with OBIA and Oracle Analytics. Must have robust knowledge of Analytics RPD design, development, and deployment. Should possess a strong understanding of BI/data warehouse analysis, design, development, and testing. Extensive experience in data analysis, data profiling, data quality, data modeling, and data integration. Proficient in crafting complex queries and stored procedures using Oracle SQL and Oracle PL/SQL. Skilled in developing visualizations and user-friendly workbooks. Previous experience in developing solutions that incorporate AI and ML using Analytics. Experienced in enhancing report performance.
Desirable Skills: Experience with Fusion Applications (ERP/HCM/SCM/CX). Ability to design and develop ETL interfaces, packages, load plans, user functions, variables, and sequences in ODI to support both batch and real-time data integrations. Experience working with multiple cloud platforms. Certified in FDI, OAC, and ADW.
Posted 1 week ago
2.0 - 3.0 years
7 - 8 Lacs
Hyderabad
Work from Office
We are seeking a detail-oriented data annotator to join the Oracle Analytics team. As a member of our data operations team, you will be responsible for creating and annotating natural language and structured data to facilitate the development of new AI solutions for Oracle's Analytics product portfolio. You will work with applied science and data science teams to design and evaluate new AI capabilities, test systems ahead of deployment, and support customers with implementing and customizing AI solutions for industry verticals or specific use cases. The ideal candidate will have in-depth familiarity with Oracle's analytics products (Fusion Data Intelligence, Oracle Analytics Cloud), SQL, Python, and data visualization, as well as experience in subject matter areas related to ERP, Finance, Supply Chain Management, or Human Capital Management. Knowledge of data collection management practices and quality control processes is a strong plus. Career Level - IC2. Key Responsibilities: Collaborate with applied scientists and product managers to gather data requirements. Help develop and improve data annotation guidelines and quality control mechanisms. Produce and deliver data meeting predefined quality targets and timelines. Help test and evaluate AI solutions. Partner with customers and solution architects to improve usability of AI products. Required Qualifications: Bachelor's degree in Business Administration, Information Systems, Data Science, Linguistics, or a related field. 2-3 years of industry experience in data operations, data annotation, business analytics, or a related role. Experience with Oracle Analytics tools and dashboards, SQL, and general task management tools (e.g., Jira). Excellent communication and collaboration abilities. Attention to detail, sense of ownership, and a customer-focused mindset. Software engineering skills (SQL, Python). Preferred Qualifications: Knowledge of machine learning concepts and how annotated data supports model development. Previous experience with large-scale data collection/annotation and data quality control processes.
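For illustration only, here is a minimal sketch of the kind of annotation quality-control mechanism this role describes, assuming two annotators label the same records; the file name, column names, and the 90% agreement target are hypothetical.

```python
import pandas as pd

QUALITY_TARGET = 0.90  # hypothetical predefined agreement target

def agreement_report(path: str) -> float:
    """Share of records on which both annotators agree, plus a re-review queue."""
    df = pd.read_csv(path)  # expects columns: record_id, label_a, label_b
    agreement = (df["label_a"] == df["label_b"]).mean()
    to_review = df[df["label_a"] != df["label_b"]]["record_id"].tolist()
    print(f"Agreement: {agreement:.2%}; {len(to_review)} records flagged for re-review")
    return agreement

if __name__ == "__main__":
    if agreement_report("annotations.csv") < QUALITY_TARGET:
        print("Below target - revisit the annotation guidelines before delivery.")
```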
Posted 1 week ago
4.0 - 7.0 years
7 - 12 Lacs
Gurugram
Work from Office
Specialism: Data, Analytics & AI. Management Level: Senior Associate. Summary: In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage. Responsibilities: Design, develop, and maintain scalable data pipelines using Azure data services such as Azure Data Factory and Apache Spark. Implement efficient Extract, Transform, Load (ETL) processes to move and transform data across various sources. Design, develop, and maintain data solutions using Azure Synapse Analytics. Implement data ingestion, transformation, and extraction processes using Azure Synapse Pipelines. Knowledge of data warehousing concepts. Utilize Azure SQL Database, Azure Blob Storage, Azure Data Lake Storage, and other Azure data services to store and retrieve data. Performance optimization and troubleshooting capabilities. Advanced SQL knowledge, capable of writing optimized queries for faster data workflows. Proven work experience in Spark, Python, SQL, and any RDBMS. Experience in designing solutions for multiple large data warehouses, with a good understanding of cluster and parallel architecture as well as high-scale or distributed RDBMS. Must be extremely well versed in handling large data volumes and in using different tools to derive the required solution. Mandatory skill sets: Azure Databricks, Azure Data Factory (ADF), or Azure Synapse Analytics, along with Python and SQL expertise. Preferred skill sets: Experience in Delta Lake, Power BI, or Azure DevOps. Knowledge of Databricks will be a plus. Knowledge of Spark, Scala, or other distributed processing frameworks. Exposure to BI tools like Power BI, Tableau, or Looker. Familiarity with data security and compliance in the cloud. Experience in leading a development team. Years of experience required: 4-7 years. Education qualification: B.Tech/MBA/MCA. Degrees/Field of Study required: Master of Business Administration, Bachelor of Technology. Required Skills: Azure Synapse Analytics, Databricks Platform, Accepting Feedback, Active Listening, Analytical Thinking, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Communication, Competitive Advantage, Continuous Process Improvement, Creativity, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Embracing Change, Emotional Regulation, Empathy, Inclusion, Industry Trend Analysis, and more.
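As a hedged illustration of the kind of Azure/Spark pipeline step described above, the following minimal PySpark sketch reads a raw CSV drop from ADLS Gen2, applies basic cleansing, and writes a curated Parquet output; the storage paths, container names, and columns are assumptions, and in practice such a step would typically be orchestrated by ADF or Synapse Pipelines.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

raw = spark.read.option("header", True).csv(
    "abfss://landing@examplestorage.dfs.core.windows.net/orders/"
)

curated = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_date"))
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount") > 0)
)

# Write to the curated zone; downstream Synapse / Power BI reads from here.
curated.write.mode("overwrite").parquet(
    "abfss://curated@examplestorage.dfs.core.windows.net/orders/"
)
```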
Posted 1 week ago
4.0 - 7.0 years
7 - 12 Lacs
Gurugram
Work from Office
Specialism: Data, Analytics & AI. Management Level: Senior Associate. Summary: In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage. Responsibilities: Design, develop, and maintain scalable data pipelines using Azure data services such as Azure Data Factory and Apache Spark. Implement efficient Extract, Transform, Load (ETL) processes to move and transform data across various sources. Design, develop, and maintain data solutions using Azure Synapse Analytics. Implement data ingestion, transformation, and extraction processes using Azure Synapse Pipelines. Knowledge of data warehousing concepts. Utilize Azure SQL Database, Azure Blob Storage, Azure Data Lake Storage, and other Azure data services to store and retrieve data. Performance optimization and troubleshooting capabilities. Advanced SQL knowledge, capable of writing optimized queries for faster data workflows. Proven work experience in Spark, Python, SQL, and any RDBMS. Experience in designing solutions for multiple large data warehouses, with a good understanding of cluster and parallel architecture as well as high-scale or distributed RDBMS. Must be extremely well versed in handling large data volumes and in using different tools to derive the required solution. Mandatory skill sets: Azure Databricks, Azure Data Factory (ADF), or Azure Synapse Analytics, along with Python and SQL expertise. Preferred skill sets: Experience in Delta Lake, Power BI, or Azure DevOps. Knowledge of Databricks will be a plus. Knowledge of Spark, Scala, or other distributed processing frameworks. Exposure to BI tools like Power BI, Tableau, or Looker. Familiarity with data security and compliance in the cloud. Experience in leading a development team. Years of experience required: 4-7 years. Education qualification: B.Tech/MBA/MCA. Degrees/Field of Study required: Bachelor of Technology, Master of Business Administration. Required Skills: Azure Synapse Analytics, Databricks Platform, Accepting Feedback, Active Listening, Analytical Thinking, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Communication, Competitive Advantage, Continuous Process Improvement, Creativity, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Embracing Change, Emotional Regulation, Empathy, Inclusion, Industry Trend Analysis, and more.
Posted 1 week ago
4.0 - 5.0 years
7 - 11 Lacs
Bengaluru
Work from Office
Job Summary: This role will lead the development and implementation of advanced data science models and algorithms. You will work with stakeholders to understand requirements and deliver solutions. Your role involves driving best practices in data science, ensuring data quality, and mentoring junior scientists. Job Description - Essential Responsibilities: Lead the development and implementation of advanced data science models. Collaborate with stakeholders to understand requirements. Drive best practices in data science. Ensure data quality and integrity in all processes. Mentor and guide junior data scientists. Stay updated with the latest trends in data science. Minimum Qualifications: Minimum of 5 years of relevant work experience and a Bachelor's degree or equivalent experience. Preferred Qualifications: Strong analytical and problem-solving mindset, with a degree in a quantitative field such as Data Science, Computer Science, Engineering, Statistics, Mathematics, Economics, or a related discipline. 4+ years of experience in Product Data Science, including A/B testing and experimentation, advanced analytics, and applied machine learning. Proven track record of partnering with Product teams in domains such as Payments, eCommerce, or SaaS to shape product strategy, inform roadmaps, and deliver impactful data-driven insights. Advanced proficiency in SQL and Python, with expertise in working with large-scale, complex datasets to derive actionable insights. Deep experience analyzing high-volume, high-dimensional real-world data, with a focus on scalability and business impact. Hands-on experience with modern data tools and platforms such as Jupyter Notebooks, BigQuery, Teradata, Hadoop, or Hive is highly desirable. Exceptional communication and data storytelling skills, with the ability to distill complex findings for both technical and non-technical audiences and influence cross-functional decision-making. Highly motivated and self-directed, with a strong passion for experimentation, product thinking, and navigating ambiguity in a fast-paced environment. Subsidiary: PayPal. Travel Percent: 0. For the majority of employees, PayPal's balanced hybrid work model offers 3 days in the office for effective in-person collaboration and 2 days at your choice of either the PayPal office or your home workspace, ensuring that you equally have the benefits and conveniences of both locations. Our Benefits: We have great benefits including a flexible work environment, employee share options, health and life insurance, and more. To learn more about our benefits, please visit https://www.paypalbenefits.com. Who We Are: Click Here to learn more about our culture and community. Commitment to Diversity and Inclusion: PayPal provides equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, pregnancy, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state, or local law. In addition, PayPal will provide reasonable accommodations for qualified individuals with disabilities. If you are unable to submit an application because of incompatible assistive technology or a disability, please contact us at talentaccommodations@paypal.com.
Belonging at PayPal: Our employees are central to advancing our mission, and we strive to create an environment where everyone can do their best work with a sense of purpose and belonging. Belonging at PayPal means creating a workplace with a sense of acceptance and security where all employees feel included and valued. We are proud to have a diverse workforce reflective of the merchants, consumers, and communities that we serve, and we continue to take tangible actions to cultivate inclusivity and belonging at PayPal. For general consideration of your skills, please join our Talent Community. We know the confidence gap and imposter syndrome can get in the way of meeting spectacular candidates. Please don't hesitate to apply.
Posted 1 week ago
8.0 - 12.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Must have solid working knowledge of AWS database & data services and the Power BI stack. Must have experience in gathering requirements, modeling data, and designing & supporting high-performance big data backend and data visualization systems. Experience in utilizing methodologies and platform stacks like MapReduce, Spark, streaming solutions (e.g., Kafka, Kinesis), ETL systems (e.g., Glue, Firehose), storage (e.g., S3), warehouse stacks (e.g., Redshift, DynamoDB), and equivalent open-source stacks. Experience in designing and implementing solutions using visualization technologies like Power BI and QuickSight. Experience in maintaining and continuously grooming the product backlog, the release pipeline, and the product roadmap. Capture problem statements and opportunities raised by customers as demand items, epics, and stories. Lead database physical design sessions with the engineers in the team. Lead quality assurance and load testing of the solution to ensure customer experience is maintained. Support data governance and data quality (cleansing) efforts. Skills: AWS Database, Big Data, Data Services, Power BI Stack.
Posted 1 week ago
3.0 - 5.0 years
10 - 15 Lacs
Bengaluru
Work from Office
Roles and Responsibility - Position: AWS Serverless Developer. Role mission: We are seeking a talented AWS Serverless Developer for backend development to join our dynamic team. The ideal candidate will have strong experience in developing and deploying serverless APIs/data pipelines using AWS technologies, with PySpark/Python. Main responsibilities: 1. Design, develop, and maintain serverless data pipelines using AWS Lambda, AWS Glue, IAM Roles & Policies, RDS, and other AWS services and tools, ensuring they are scalable and efficient. 2. Build audit and logging processes to capture any failures. 3. Knowledge of data quality and data governance in handling critical data. 4. Knowledge of CI/CD and Git repositories for code versioning. 5. Work on development of new features, including unit, integration, and performance testing. 6. Write clean, maintainable, and efficient code. 7. Troubleshoot issues and perform bug fixes to ensure application reliability. 8. Collaborate closely with the Lead Developer, Business Analyst, and Application Architect to align on project goals. 9. Maintain documentation for applications, architecture, and processes to ensure clarity and compliance. 10. Work with a distributed team following Agile methodologies to ensure timely delivery of projects. Experience and Skills: 1. 3-5 years of experience in software development, with a focus on AWS Lambda, IAM Roles & Policies, RDS, and other AWS services and tools. 2. Proficiency in either PySpark or Python. 3. Strong experience with CloudFormation templates. 4. Manage version control using tools like Git. 5. Hands-on experience with AWS Aurora and/or Redshift. 6. Experience in data engineering topics such as AWS Glue and data pipeline development. 7. Experience in API data interfacing over API Gateway would be an added value. 8. Strong analytical, problem-solving, and debugging skills. 9. Excellent organizational skills with the ability to prioritize tasks based on project needs.
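For illustration, here is a minimal sketch (not the team's actual code) of a serverless pipeline step with the audit logging this posting mentions: an AWS Lambda handler triggered by an S3 upload that records success or failure in a DynamoDB audit table. The bucket, table name, and event shape assume a standard S3 trigger; all names are hypothetical.

```python
import csv
import io
import logging

import boto3

logger = logging.getLogger()
logger.setLevel(logging.INFO)

s3 = boto3.client("s3")
audit_table = boto3.resource("dynamodb").Table("pipeline_audit")  # hypothetical audit table

def handler(event, context):
    record = event["Records"][0]["s3"]
    bucket, key = record["bucket"]["name"], record["object"]["key"]
    try:
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
        rows = list(csv.DictReader(io.StringIO(body)))
        # ... transform rows and load them into RDS / the target store here ...
        audit_table.put_item(Item={"object_key": key, "status": "SUCCESS", "rows": len(rows)})
        return {"status": "ok", "rows": len(rows)}
    except Exception:
        # Audit-and-log pattern: record the failure, then let Lambda surface the error.
        logger.exception("Failed to process %s", key)
        audit_table.put_item(Item={"object_key": key, "status": "FAILED"})
        raise
```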
Posted 1 week ago
5.0 - 10.0 years
3 - 7 Lacs
Hyderabad
Work from Office
We are seeking an experienced Data Modeler with a strong background in real estate, investment management, and master data management. The ideal candidate will be responsible for designing, implementing, and maintaining data models that support our business objectives. This role requires a deep understanding of data architecture, data integration, and database optimization. Key Responsibilities: Design and Develop Data Models: Create conceptual, logical, and physical data models to support business requirements in the real estate and investment management domains. Master Data Management (MDM): Develop and manage master data solutions to ensure data consistency, accuracy, and reliability across the organization. Data Integration: Integrate data from various sources, ensuring consistency and accuracy across systems. Data Mapping: Map data elements to business requirements and create detailed data mapping documents. Collaboration: Work closely with data analysts, database administrators, and business stakeholders to understand data needs and deliver solutions. Documentation: Maintain comprehensive documentation of data models, data flows, and data dictionaries. Data Governance: Ensure data models comply with data governance and security policies. Qualifications: Experience: 12+ years overall, with a minimum of 5 years of experience in data modeling, focused on real estate, investment management, and master data management. Technical Skills: Proficiency in SQL, data modeling tools (e.g., ER/Studio, ERwin), and database management systems (e.g., Oracle, SQL Server). Domain Expertise: In-depth knowledge of real estate and investment management processes and data requirements. MDM Expertise: Strong experience in master data management, including data governance, data quality, and data stewardship. Analytical Skills: Strong analytical and problem-solving skills. Communication: Excellent verbal and written communication skills. Preferred Skills: Experience with data warehousing and business intelligence tools. Familiarity with cloud-based data solutions (e.g., AWS, Azure). Knowledge of data governance frameworks and best practices.
Posted 1 week ago
7.0 - 10.0 years
12 - 17 Lacs
Bengaluru
Work from Office
Working at Atlassian: Atlassians can choose where they work, whether in an office, from home, or a combination of the two. That way, Atlassians have more control over supporting their family, personal goals, and other priorities. We can hire people in any country where we have a legal entity. Interviews and onboarding are conducted virtually, a part of being a distributed-first company. Responsibilities: Be a trusted technology partner in driving alignment to the company's cross-channel vision and strategy, supporting Channel KPIs and delivering the highest-value capabilities. Drive, develop, and maintain numerous project deliverables that support the program throughout the change management cycle, including launch timelines, training delivery, and adoption. Drive standardization, process consistency, and data quality across business processes. Build and maintain effective working relationships with the SalesTech Product Management team and business stakeholders. Lead, guide, and support team members, and help streamline work processes to improve efficiency and consistency. Qualifications: 7-10 years of solution, design, and development experience in building solutions on the Experience Cloud/Salesforce platform. 12+ years of overall work experience in building or supporting internal systems. Experience in building and supporting the future technology vision for Atlassian's RevOps strategy in the areas of Lead Generation, Opportunity Management, Quote-to-Cash, and Analytics & Insights. Benefits & Perks: Atlassian offers a wide range of perks and benefits designed to support you and your family and to help you engage with your local community. Our offerings include health and wellbeing resources, paid volunteer days, and so much more.
Posted 1 week ago
2.0 - 7.0 years
4 - 7 Lacs
Hyderabad
Work from Office
Data Engineer - Position Overview. Role Summary: We are searching for a talented and motivated Data Engineer to join our team. The ideal candidate will have expertise in data modeling, analytical thinking, and developing ETL processes using Python. In this role, you will be pivotal in transforming raw data from landing tables into reliable, curated master tables, ensuring accuracy, accessibility, and integrity within our Snowflake data platform. Main Responsibilities: Design, Develop, and Maintain ETL Processes: Build and maintain scalable ETL pipelines in Python to extract, transform, and load data into Snowflake master tables. Automate data mastering, manage incremental updates, and ensure consistency between landing and master tables. Data Modeling: Create and optimize logical and physical data models in Snowflake for efficient querying and reporting. Translate business needs into well-structured data models, defining tables, keys, relationships, and constraints. Analytical Thinking and Problem Solving: Analyze complex datasets, identify trends, and work with analysts and stakeholders to resolve data challenges. Investigate data quality issues and design robust solutions aligned with business goals. Data Quality and Governance: Implement routines for data validation, cleansing, and error handling to ensure accuracy and reliability in Snowflake. Support the creation and application of data governance standards. Automation and Optimization: Seek automation opportunities for data engineering tasks, enhance ETL processes for performance, and scale systems as data volumes grow within Snowflake. Documentation and Communication: Maintain thorough documentation of data flows, models, transformation logic, and pipeline configurations. Clearly communicate technical concepts to all stakeholders. Collaboration: Work closely with data scientists, analysts, and engineers to deliver integrated data solutions, contributing to cross-functional projects with your data engineering expertise. Required Qualifications: Bachelor's or Master's degree in Computer Science, IT, Engineering, Mathematics, or a related field. At least 2 years of experience as a Data Engineer or in a similar role. Strong Python skills, including experience developing ETL pipelines and automation scripts. Solid understanding of relational and dimensional data modeling. Experience with Snowflake for SQL, schema design, and managing pipelines. Proficient in SQL for querying and data analysis in Snowflake. Strong analytical and problem-solving skills. Familiarity with data warehousing and best practices. Knowledge of data quality, cleansing, and validation techniques. Experience with version control systems like Git and collaborative workflows. Excellent communication, both verbal and written. Preferred Qualifications: In-depth knowledge of Snowflake features like Snowpipe, Streams, Tasks, and Time Travel. Experience with cloud platforms such as AWS, Azure, or Google Cloud. Familiarity with workflow orchestration tools like Apache Airflow or Luigi. Understanding of big data tools like Spark, Hadoop, or distributed databases. Experience with CI/CD pipelines in data engineering. Background in streaming data and real-time processing. Experience deploying data pipelines in production. Sample Responsibilities in Practice: Develop automated ETL pipelines in Python to ingest daily CSVs into a Snowflake landing table, validate data, and merge clean records into a master table, handling duplicates and change tracking (a minimal sketch of this pattern appears after this listing).
Design scalable data models in Snowflake to support business intelligence reporting, ensuring both integrity and query performance. Collaborate with business analysts to adapt data models and pipelines to evolving needs. Monitor pipeline performance and troubleshoot inconsistencies, documenting causes and solutions. Key Skills and Competencies: Technical Skills: Python (including pandas, SQLAlchemy); Snowflake SQL and management; schema design; ETL process development. Analytical Thinking: Ability to translate business requirements into technical solutions; strong troubleshooting skills. Collaboration and Communication: Effective team player; clear technical documentation. Adaptability: Willingness to adopt new technologies and proactively improve processes. Our Data Environment: Our organization manages diverse data sources, including transactional systems, third-party APIs, and unstructured data. We are dedicated to building a top-tier Snowflake data infrastructure for analytics, reporting, and machine learning. In this role, you will influence our data architecture, implement modern data engineering practices, and contribute to a culture driven by data.
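Below is a minimal sketch of the landing-to-master pattern described in the sample responsibility above, assuming a Snowflake stage, landing table, and master table already exist; all object names, columns, and credentials are hypothetical placeholders.

```python
import snowflake.connector

MERGE_SQL = """
MERGE INTO curated.customer_master AS m
USING (
    SELECT customer_id, name, email, updated_at
    FROM landing.customer_landing
    QUALIFY ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY updated_at DESC) = 1
) AS l
ON m.customer_id = l.customer_id
WHEN MATCHED AND l.updated_at > m.updated_at THEN
    UPDATE SET name = l.name, email = l.email, updated_at = l.updated_at
WHEN NOT MATCHED THEN
    INSERT (customer_id, name, email, updated_at)
    VALUES (l.customer_id, l.name, l.email, l.updated_at)
"""

conn = snowflake.connector.connect(
    account="xy12345", user="etl_user", password="***",
    warehouse="ETL_WH", database="ANALYTICS",
)
try:
    cur = conn.cursor()
    # Load the day's CSV drop from a stage into the landing table.
    cur.execute(
        "COPY INTO landing.customer_landing FROM @daily_stage "
        "FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)"
    )
    # Deduplicate and upsert the latest version of each record into the master table.
    cur.execute(MERGE_SQL)
finally:
    conn.close()
```

In a real pipeline, change tracking would more likely use effective-date columns or Snowflake Streams rather than updating records in place.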
Posted 1 week ago
5.0 - 10.0 years
9 - 14 Lacs
Mohali, Pune
Work from Office
About the Role: We're looking for a Senior Azure Data Engineer with deep expertise in Databricks and modern Azure data engineering tools. You'll design, build, and optimize scalable data pipelines and solutions, leveraging advanced features of Databricks and Azure to deliver high-quality, secure, and efficient data platforms. Key Skills & Expertise: Strong hands-on experience with Databricks, including: Databricks Notebooks (Python, SQL); Unity Catalog for governance & security; advanced Databricks performance optimization techniques; best practices for cost and compute efficiency. Proficient in Azure Data Factory (ADF) for orchestrating ETL workflows. Excellent programming in Python, with advanced PySpark skills. Solid understanding of Apache Spark internals & tuning. Expertise in SQL: writing complex queries, optimizing joins, working with large datasets. Familiar with data warehousing principles & modeling techniques (e.g., star/snowflake schemas). Knowledge of Azure data services: Data Lake Storage, Synapse Analytics, SQL Database. Experience implementing data governance, access control, and data lineage. What You'll Do: Design and implement robust, scalable, and efficient data pipelines using Databricks and ADF. Leverage Unity Catalog to secure and govern sensitive data. Optimize Databricks jobs & queries for speed, cost, and scalability. Build and maintain Delta Lake tables and data models suitable for analytics and BI. Collaborate with stakeholders to define data needs, provide solutions, and deliver business value. Automate manual workflows, improve reliability, and ensure data quality. Troubleshoot and monitor pipelines to guarantee uptime and data accuracy. Mentor junior engineers and foster best practices in Databricks & Azure data engineering. Preferred Background: 5+ years in data engineering with a focus on Azure. Demonstrated ability to work with large-scale distributed systems. Strong communication and teamwork skills. Certifications in Databricks and/or Azure Data Engineering are a plus.
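As an illustration of the "build and maintain Delta Lake tables" responsibility, here is a minimal sketch of an idempotent Delta upsert, intended for a Databricks notebook where `spark` is already defined; the table names, columns, and paths are assumptions.

```python
from delta.tables import DeltaTable
from pyspark.sql import functions as F

# Incremental batch of source records (hypothetical raw path and schema).
updates = (
    spark.read.parquet("abfss://raw@examplelake.dfs.core.windows.net/orders/2024-06-01/")
         .withColumn("ingested_at", F.current_timestamp())
)

target = DeltaTable.forName(spark, "analytics.orders")  # Unity Catalog table

(
    target.alias("t")
          .merge(updates.alias("s"), "t.order_id = s.order_id")
          .whenMatchedUpdateAll()
          .whenNotMatchedInsertAll()
          .execute()
)

# Periodic layout optimization for BI query performance.
spark.sql("OPTIMIZE analytics.orders ZORDER BY (customer_id)")
```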
Posted 1 week ago
5.0 - 10.0 years
15 - 25 Lacs
Pune, Bengaluru, Mumbai (All Areas)
Work from Office
Role & responsibilities: Develop, test, and deploy robust dashboards and reports in Power BI using SAP HANA and Snowflake datasets. Basic Qualifications: Excellent verbal and written communication skills. 5+ years of experience working with Power BI with SAP HANA and Snowflake datasets. 5+ years of hands-on experience in developing moderate to complex ETL data pipelines is a plus. 5+ years of hands-on experience with the ability to resolve complex SQL query performance issues. 5+ years of ETL Python development experience; experience parallelizing pipelines is a plus. Demonstrated ability to troubleshoot complex query, pipeline, and data quality issues. Call: 9584022831 | Email: Mayank@axiomsoftwaresolutions.com
Posted 1 week ago
3.0 - 8.0 years
7 - 11 Lacs
Telangana
Work from Office
We have immediate openings on Alteryx for a Contract-to-Hire role with multiple clients. Design and implement operational data quality controls. Build a user-friendly interface, visual workflows, inferential rules, third-party data, and other capabilities to drive improvements in measuring the quality of data in various data sources within the bank. Qualifications / Required Skills: Minimum 3 years of related experience with Alteryx; certification preferred. Minimum 7 years of experience in the IT industry, with a focus on data. Understanding of developing Alteryx macros. Understanding of Alteryx application creation. Should have experience with database data connections. Knowledge of workflow performance. Has developed or worked on at least 30 workflows. Core certified in Alteryx. Experience in scheduling and triggering workflows in Gallery. Experience in error handling techniques. If you are interested, please share your updated profile with the details below: Current CTC, Expected CTC, Notice Period, Total Experience, Relevant Experience.
Posted 1 week ago
2.0 - 7.0 years
22 - 27 Lacs
Bengaluru
Work from Office
Amazon strives to be Earth's most customer-centric company, where people can find and discover virtually anything they want to buy online. By giving customers more of what they want - low prices, vast selection, and convenience - Amazon continues to grow and evolve as a world-class e-commerce platform. Do you have solid analytical thinking and metrics-driven decision making, and want to solve problems with solutions that will meet the growing worldwide need? Then SmartCommerce is the team for you. We are looking for a top-notch Business Intelligence Engineer to be part of our analytics team. The ideal candidate will be curious, have attention to detail, be energized by a challenging entrepreneurial environment, and be comfortable thinking big while also diving deep. Are you a smart, hungry, flexible, and world-class analytics professional excited by the challenge of launching a new business initiative for Amazon? The SmartCommerce team is looking for a Business Intelligence Engineer to be part of a new team being built from the ground up. They will be primarily working on our product SmartBiz. SmartBiz by Amazon is a one-stop shop for Indian sellers to fulfill their online selling needs. Whether a small business, an entrepreneur, or a neighborhood store, a seller can now create their own e-commerce store within minutes and start showcasing and selling their products online. 1. Responsible for designing, building, and maintaining complex data solutions for Amazon's SmartCommerce businesses. 2. Actively participates in the code review process, design discussions, team planning, and operational excellence, and constructively identifies problems and proposes solutions. 3. Makes appropriate trade-offs, reuses where possible, and is judicious about introducing dependencies. 4. Makes efficient use of resources (e.g., system hardware, data storage, query optimization, AWS infrastructure, etc.). 5. Asks the right questions when the data model and requirements are not well defined and comes up with designs which are scalable, maintainable, and efficient. 6. Makes enhancements that improve the team's data architecture, making it better and easier to maintain (e.g., data auditing solutions, automating ad-hoc or manual operation steps). 7. Owns the data quality of important datasets and any new changes/enhancements. Requirements: 2+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc. Experience with data visualization using Tableau, QuickSight, or similar tools. Experience with one or more industry analytics visualization tools (e.g., Excel, Tableau, QuickSight, MicroStrategy, Power BI) and statistical methods (e.g., t-test, chi-squared). Experience with a scripting language (e.g., Python, Java, or R). Master's degree or advanced technical degree. Knowledge of data modeling and data pipeline design. Experience with statistical analysis and correlation analysis.
Posted 1 week ago
5.0 - 10.0 years
2 - 5 Lacs
Chennai
Work from Office
Job Title: Data Engineer. Experience: 5-10 years. Location: Remote. Responsibilities: Design, build, and maintain core data infrastructure pieces that allow Aircall to support our many data use cases. Enhance the data stack, lineage monitoring, and alerting to prevent incidents and improve data quality. Implement best practices for data management, storage, and security to ensure data integrity and compliance with regulations. Own the core company data pipeline, responsible for converting business needs to efficient & reliable data pipelines. Participate in code reviews to ensure code quality and share knowledge. Lead efforts to evaluate and integrate new technologies and tools to enhance our data infrastructure. Define and manage evolving data models and data schemas. Manage SLAs for data sets that power our company metrics. Mentor junior members of the team, providing guidance and support in their professional development. Collaborate with data scientists, analysts, and other stakeholders to drive efficiencies for their work, supporting complex data processing, storage, and orchestration. A little more about you: Bachelor's degree or higher in Computer Science, Engineering, or a related field. 5+ years of experience in data engineering, with a strong focus on designing and building data pipelines and infrastructure. Proficient in SQL and Python, with the ability to translate complexity into efficient code. Experience with data workflow development and management tools (dbt, Airflow). Solid understanding of distributed computing principles and experience with cloud-based data platforms such as AWS, GCP, or Azure. Strong analytical and problem-solving skills, with the ability to effectively troubleshoot complex data issues. Excellent communication and collaboration skills, with the ability to work effectively in a cross-functional team environment. Experience with data tooling, data governance, business intelligence, and data privacy is a plus.
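For illustration, here is a minimal Airflow DAG of the shape this role would own, assuming Airflow 2.x; the task bodies and names are hypothetical placeholders rather than an actual production pipeline.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw events from source systems")

def transform():
    print("clean, deduplicate, and model the events")

def load():
    print("publish curated tables to the warehouse")

with DAG(
    dag_id="core_data_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> transform_task >> load_task
```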
Posted 1 week ago
7.0 - 10.0 years
22 - 30 Lacs
Hyderabad
Hybrid
Work Mode: Hybrid (3 days WFO & 2 days WFH). Job Description - Key Responsibilities: Defining and implementing data quality standards: Collaborating with stakeholders to establish clear data quality rules and metrics, such as accuracy, completeness, consistency, and timeliness. Data profiling and analysis: Examining data to understand its structure, identify anomalies, and assess its quality against defined standards. Developing and implementing data quality checks: Creating automated processes and scripts to monitor data quality, detect errors, and trigger alerts when thresholds are breached. Troubleshooting and resolving data quality issues: Investigating data errors, identifying root causes, and working with relevant teams to implement corrective actions. Data validation and testing: Designing and executing test cases to ensure data accuracy and consistency throughout various data pipelines and systems. Data quality automation: Developing and maintaining automated tools and scripts to streamline data quality processes and improve efficiency. Collaboration and communication: Working with data engineers, business analysts, and other stakeholders to ensure data quality requirements are met and data quality issues are addressed effectively. Data governance: Participating in the development and implementation of data governance policies and procedures to ensure data quality and compliance. Required Skills and Qualifications: Strong understanding of data quality principles and methodologies. Proficiency in SQL and relational databases. Experience with data quality tools and frameworks. Experience with data profiling, validation, and testing techniques. Familiarity with data warehousing and cloud platforms (AWS, Azure, GCP) is a plus. Programming skills in languages like Python or R. Strong analytical and problem-solving skills. Excellent communication and collaboration skills. Only candidates with a notice period of one month or less will be considered.
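Here is a minimal sketch of the threshold-based checks and alerting described above, assuming a pandas-readable daily extract; the rules, column names, and 0.99 target are illustrative, and `notify()` stands in for a real alerting channel (e-mail, Slack, a monitoring tool).

```python
import pandas as pd

RULES = {
    "completeness_customer_id": lambda df: df["customer_id"].notna().mean(),
    "validity_amount_positive": lambda df: (df["amount"] > 0).mean(),
    "uniqueness_order_id": lambda df: df["order_id"].is_unique * 1.0,
}
THRESHOLD = 0.99  # hypothetical quality target

def notify(message: str) -> None:
    print(f"ALERT: {message}")  # placeholder for a real alerting channel

def run_checks(df: pd.DataFrame) -> dict:
    """Score each rule and raise an alert whenever a score breaches the threshold."""
    results = {name: float(rule(df)) for name, rule in RULES.items()}
    for name, score in results.items():
        if score < THRESHOLD:
            notify(f"{name} = {score:.3f} breached threshold {THRESHOLD}")
    return results

if __name__ == "__main__":
    run_checks(pd.read_csv("daily_orders_extract.csv"))
```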
Posted 1 week ago
5.0 - 10.0 years
4 - 8 Lacs
Pune
Work from Office
Project Role: Data Engineer. Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems. Must-have skills: Databricks Unified Data Analytics Platform. Good-to-have skills: NA. Minimum 3 year(s) of experience is required. Educational Qualification: standard 15 years of full-time education. Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs. Your role will also include monitoring data workflows and troubleshooting any issues that arise, ensuring that data is accessible and reliable for stakeholders. Roles & Responsibilities: Expected to perform independently and become an SME. Required active participation/contribution in team discussions. Contribute to providing solutions to work-related problems. Develop and optimize data pipelines to enhance data processing efficiency. Collaborate with data scientists and analysts to understand data needs and provide necessary support. Professional & Technical Skills: Must-have skills: Proficiency in Databricks Unified Data Analytics Platform. Strong understanding of data modeling and database design principles. Experience with ETL tools and data integration techniques. Familiarity with cloud platforms and services related to data storage and processing. Knowledge of programming languages such as Python or Scala for data manipulation. Additional Information: The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform. This position is based at our Pune office. A standard 15 years of full-time education is required.
Posted 1 week ago
4.0 - 8.0 years
0 - 0 Lacs
Pune
Hybrid
So, what’s the role all about? Within Actimize, the AI and Analytics Team is developing the next generation advanced analytical cloud platform that will harness the power of data to provide maximum accuracy for our clients’ Financial Crime programs. As part of the PaaS/SaaS development group, you will be responsible for developing this platform for Actimize cloud-based solutions and to work with cutting edge cloud technologies. How will you make an impact? NICE Actimize is the largest and broadest provider of financial crime, risk and compliance solutions for regional and global financial institutions & has been consistently ranked as number one in the space At NICE Actimize, we recognize that every employee’s contributions are integral to our company’s growth and success. To find and acquire the best and brightest talent around the globe, we offer a challenging work environment, competitive compensation, and benefits, and rewarding career opportunities. Come share, grow and learn with us – you’ll be challenged, you’ll have fun and you’ll be part of a fast growing, highly respected organization. This new SaaS platform will enable our customers (some of the biggest financial institutes around the world) to create solutions on the platform to fight financial crime. Have you got what it takes? Design, implement, and maintain real-time and batch data pipelines for fraud detection systems. Automate data ingestion from transactional systems, third-party fraud intelligence feeds, and behavioral analytics platforms. Ensure high data quality, lineage, and traceability to support audit and compliance requirements. Collaborate with fraud analysts and data scientists to deploy and monitor machine learning models in production. Monitor pipeline performance and implement alerting for anomalies or failures. Ensure data security and compliance with financial regulations Qualifications: Bachelor’s or master’s degree in computer science, Data Engineering, or a related field. 4-6 years of experience in DataOps role, preferably in fraud or risk domains. Strong programming skills in Python and SQL. Knowledge of financial fraud patterns, transaction monitoring, and behavioral analytics. Familiarity with fraud detection systems, rules engines, or anomaly detection frameworks. Experience with AWS cloud platforms Understanding of data governance, encryption, and secure data handling practices. Experience with fraud analytics tools or platforms like Actimize What’s in it for you? Join an ever-growing, market disrupting, global company where the teams – comprised of the best of the best – work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NiCE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NiCEr! Enjoy NiCE-FLEX! At NiCE, we work according to the NiCE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere. Requisition ID: 7822 Reporting into: Director Role Type: Tech Manager
Posted 1 week ago
5.0 - 8.0 years
11 - 21 Lacs
Pune
Work from Office
This role is accountable for developing, expanding, and optimizing Data Management Architecture, Design & Implementation under Singtel Data Platform & Management. Design, develop, and implement data governance and management solutions, data quality, privacy, protection, and associated control technology solutions as per industry best practice. Review, evaluate, and implement Data Management standards, primarily Data Classification and Data Retention, across systems. Design, develop, and implement automated data discovery rules to identify the presence of PII attributes. Drive development, optimization, testing, and tooling to improve overall data control management (security, data privacy, protection, data quality). Review, analyze, benchmark, and approve solution designs from product companies, internal teams, and vendors. Ensure that proposed solutions are aligned and conform to the data landscape, big data architecture guidelines, and roadmap.
SECTION B: KEY RESPONSIBILITIES AND RESULTS: 1. Design and implement data management standards such as Catalog Management, Data Quality, Data Classification, and Data Retention. 2. Drive BAU processes, testing, and tooling to improve data security, privacy, and protection. 3. Identify, design, and implement internal process improvements: automating manual processes, and controlling and optimizing data technology service delivery. 4. Implement and support Data Management Technology solutions throughout the lifecycle, including user onboarding, upgrades, fixes, access management, etc.
SECTION C: QUALIFICATIONS / EXPERIENCE / KNOWLEDGE REQUIRED: Education and Qualifications: Diploma in Data Analytics, Data Engineering, IT, Computer Science, Software Engineering, or equivalent. Work Experience: Exposure to Data Management and Big Data concepts; knowledge and experience in Data Management, Data Integration, and Data Quality products. Technical Skills: Informatica CDGC, Collibra, Alation, Informatica Data Quality, Data Privacy Management, Azure Databricks.
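To illustrate the automated data discovery responsibility, here is a minimal sketch that flags columns likely to contain PII by pattern-matching sampled values; the patterns, sample size, and hit-rate threshold are assumptions, and a production implementation would normally run through the catalog and data-quality tooling listed above rather than ad hoc scripts.

```python
import re

import pandas as pd

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s-]{7,14}"),
}

def discover_pii(df: pd.DataFrame, sample_size: int = 100, hit_rate: float = 0.6) -> dict:
    """Return {column: pii_type} for text columns whose sampled values match a pattern."""
    findings = {}
    for col in df.select_dtypes(include="object").columns:
        sample = [str(v).strip() for v in df[col].dropna().head(sample_size)]
        if not sample:
            continue
        for pii_type, pattern in PII_PATTERNS.items():
            hits = sum(bool(pattern.fullmatch(v)) for v in sample)
            if hits / len(sample) >= hit_rate:
                findings[col] = pii_type
                break
    return findings

# Example: discover_pii(pd.read_csv("customer_extract.csv")) might return {"email_addr": "email"}
```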
Posted 1 week ago
5.0 - 10.0 years
7 - 12 Lacs
Noida
Work from Office
Responsibilities: Data Exploration and Insights: Conduct continuous data exploration and analysis to identify opportunities for enhancing data matching logic, including fuzzy logic, and improving overall data quality within the SCI solution. This includes working with large datasets from various sources, including Excel files and databases. Data Quality Improvement: Perform various analyses specifically aimed at improving data quality within the SCI system. This will involve identifying data quality issues, proposing solutions, and implementing improvements. Weekly Playback and Collaboration: Participate in weekly playback sessions, using Jupyter Notebook to demonstrate data insights and analysis. Incorporate new explorations and analyses based on feedback from the working group and prioritized tasks. Project Scaling and Support: Contribute to the scaling of the SCI project by supporting data acquisition, cleansing, and validation processes for new markets. This includes prerequisites for batch ingestion and post-batch-ingestion analysis and validation of SCI records. Data Analysis and Validation: Perform thorough data analysis and validation of SCI records after batch ingestion. Proactively identify insights and implement solutions to improve data quality. Stakeholder Collaboration: Coordinate with business stakeholders to facilitate the manual validation of records flagged for manual intervention. Communicate findings and recommendations clearly and effectively. Technical Requirements: 5+ years of experience as a Data Scientist. Strong proficiency in Python and SQL. Extensive experience using Jupyter Notebook for data analysis and visualization. Working knowledge of data matching techniques, including fuzzy logic. Experience working with large datasets from various sources (Excel, databases, etc.). Solid understanding of data quality principles and methodologies. Skills: SQL, Machine Learning, Data Analysis, Jupyter Notebook, Data Cleansing, Fuzzy Logic, Python, Data Quality Improvement, Data Validation, Data Acquisition, Communication and Collaboration, Problem-solving and Analytical Skills. Preferred Qualifications: Experience with specific data quality tools and techniques. Familiarity with cloud computing platforms (e.g., AWS, Azure, GCP). Experience with data visualization tools (e.g., Tableau, Power BI). Knowledge of statistical modeling and machine learning algorithms.
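For illustration only, here is a standard-library sketch of the kind of fuzzy matching logic referenced above; the normalization steps and the 0.85 cut-off are assumptions for the example, not the SCI solution's actual rules or thresholds.

```python
from difflib import SequenceMatcher

def normalize(name: str) -> str:
    """Cheap canonicalization before comparison."""
    return " ".join(name.lower().replace(".", " ").replace(",", " ").split())

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

def candidate_matches(record: str, master: list[str], threshold: float = 0.85):
    """Master records similar enough to be routed for (manual) validation."""
    scored = [(m, similarity(record, m)) for m in master]
    return sorted((s for s in scored if s[1] >= threshold), key=lambda x: -x[1])

print(candidate_matches("Acme Corp.", ["ACME Corporation", "Acme Corp", "Apex Ltd"]))
```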
Posted 1 week ago
5.0 - 7.0 years
7 - 9 Lacs
Gurugram
Work from Office
Department : Platform Engineering Summary : We are seeking a highly skilled Data Engineer with expertise in ontology development and knowledge graph implementation. This role will be pivotal in shaping our data infrastructure and ensuring the accurate representation and integration of complex data sets. You will leverage industry best practices, including the Basic Formal Ontology (BFO) and Common Core Ontologies (CCO), to design, develop, and maintain ontologies, semantic and syntactic data models, and knowledge graphs on the Databricks Data Intelligence Platform that drive data-driven decision-making and innovation within the company. Responsibilities : Ontology Development : - Design and implement ontologies based on BFO and CCO principles, ensuring alignment with business requirements and industry standards. - Collaborate with domain experts to capture and formalize domain knowledge into ontological structures. - Develop and maintain comprehensive ontologies to model various business entities, relationships, and processes. Data Modeling : - Design and implement semantic and syntactic data models that adhere to ontological principles. - Create data models that are scalable, flexible, and adaptable to changing business needs. - Integrate data models with existing data infrastructure and applications. Knowledge Graph Implementation : - Design and build knowledge graphs based on ontologies and data models. - Develop algorithms and tools for knowledge graph population, enrichment, and maintenance. - Utilize knowledge graphs to enable advanced analytics, search, and recommendation systems. Data Quality And Governance : - Ensure the quality, accuracy, and consistency of ontologies, data models, and knowledge graphs. - Define and implement data governance processes and standards for ontology development and maintenance. Collaboration And Communication : - Work closely with data scientists, software engineers, and business stakeholders to understand their data requirements and provide tailored solutions. - Communicate complex technical concepts clearly and effectively to diverse audiences. Qualifications : Education : - Bachelor's or Master's degree in Computer Science, Data Science, or a related field. Experience : - 5+ years of experience in data engineering or a related role. - Proven experience in ontology development using BFO and CCO or similar ontological frameworks. - Strong knowledge of semantic web technologies, including RDF, OWL, SPARQL, and SHACL. - Proficiency in Python, SQL, and other programming languages used for data engineering. - Experience with graph databases (e.g., TigerGraph, JanusGraph) and triple stores (e.g., GraphDB, Stardog) is a plus. Desired Skills : - Familiarity with machine learning and natural language processing techniques. - Experience with cloud-based data platforms (e.g., AWS, Azure, GCP). - Experience with Databricks technologies including Spark, Delta Lake, Iceberg, Unity Catalog, UniForm, and Photon. - Strong problem-solving and analytical skills. - Excellent communication and interpersonal skills.
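As a hedged illustration of the semantic-web stack listed above (RDF, SPARQL), the following sketch builds a tiny knowledge graph with rdflib and queries it; the namespace, classes, and instances are hypothetical stand-ins, not a real BFO/CCO ontology.

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/ontology/")

g = Graph()
g.bind("ex", EX)

# Minimal schema: a class and a relation.
g.add((EX.Supplier, RDF.type, RDFS.Class))
g.add((EX.supplies, RDF.type, RDF.Property))

# Instance data: one supplier and what it supplies.
g.add((EX.acme, RDF.type, EX.Supplier))
g.add((EX.acme, RDFS.label, Literal("Acme Components")))
g.add((EX.acme, EX.supplies, EX.widget_line))

query = """
PREFIX ex: <http://example.org/ontology/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?label ?product WHERE {
    ?s a ex:Supplier ;
       rdfs:label ?label ;
       ex:supplies ?product .
}
"""
for row in g.query(query):
    print(row.label, row.product)
```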
Posted 1 week ago
5.0 - 7.0 years
7 - 9 Lacs
Noida
Work from Office
Job Title : Sr. Data Engineer Ontology & Knowledge Graph Specialist Department : Platform Engineering Summary : We are seeking a highly skilled Data Engineer with expertise in ontology development and knowledge graph implementation. This role will be pivotal in shaping our data infrastructure and ensuring the accurate representation and integration of complex data sets. You will leverage industry best practices, including the Basic Formal Ontology (BFO) and Common Core Ontologies (CCO), to design, develop, and maintain ontologies, semantic and syntactic data models, and knowledge graphs on the Databricks Data Intelligence Platform that drive data-driven decision-making and innovation within the company. Responsibilities : Ontology Development : - Design and implement ontologies based on BFO and CCO principles, ensuring alignment with business requirements and industry standards. - Collaborate with domain experts to capture and formalize domain knowledge into ontological structures. - Develop and maintain comprehensive ontologies to model various business entities, relationships, and processes. Data Modeling : - Design and implement semantic and syntactic data models that adhere to ontological principles. - Create data models that are scalable, flexible, and adaptable to changing business needs. - Integrate data models with existing data infrastructure and applications. Knowledge Graph Implementation : - Design and build knowledge graphs based on ontologies and data models. - Develop algorithms and tools for knowledge graph population, enrichment, and maintenance. - Utilize knowledge graphs to enable advanced analytics, search, and recommendation systems. Data Quality And Governance : - Ensure the quality, accuracy, and consistency of ontologies, data models, and knowledge graphs. - Define and implement data governance processes and standards for ontology development and maintenance. Collaboration And Communication : - Work closely with data scientists, software engineers, and business stakeholders to understand their data requirements and provide tailored solutions. - Communicate complex technical concepts clearly and effectively to diverse audiences. Qualifications : Education : - Bachelor's or Master's degree in Computer Science, Data Science, or a related field. Experience : - 5+ years of experience in data engineering or a related role. - Proven experience in ontology development using BFO and CCO or similar ontological frameworks. - Strong knowledge of semantic web technologies, including RDF, OWL, SPARQL, and SHACL. - Proficiency in Python, SQL, and other programming languages used for data engineering. - Experience with graph databases (e.g., TigerGraph, JanusGraph) and triple stores (e.g., GraphDB, Stardog) is a plus. Desired Skills : - Familiarity with machine learning and natural language processing techniques. - Experience with cloud-based data platforms (e.g., AWS, Azure, GCP). - Experience with Databricks technologies including Spark, Delta Lake, Iceberg, Unity Catalog, UniForm, and Photon. - Strong problem-solving and analytical skills. - Excellent communication and interpersonal skills.
Posted 1 week ago
7.0 - 10.0 years
10 - 14 Lacs
Gurugram
Work from Office
About the Job :
We are seeking a highly skilled and experienced Senior Data Engineer to join our dynamic team. In this pivotal role, you will be instrumental in driving our data engineering initiatives, with a strong emphasis on leveraging Dataiku's capabilities to enhance data processing and analytics. You will be responsible for designing, developing, and optimizing robust data pipelines, ensuring seamless integration of diverse data sources, and maintaining high data quality and accessibility to support our business intelligence and advanced analytics projects. This role requires a unique blend of expertise in traditional data engineering principles, advanced data modeling, and a forward-thinking approach to integrating cutting-edge AI technologies, particularly LLM Mesh for Generative AI applications. If you are passionate about building scalable data solutions and are eager to explore the cutting edge of AI, we encourage you to apply.
Key Responsibilities :
- Dataiku Leadership : Drive data engineering initiatives with a strong emphasis on leveraging Dataiku capabilities for data preparation, analysis, visualization, and the deployment of data solutions.
- Data Pipeline Development : Design, develop, and optimize robust and scalable data pipelines to support various business intelligence and advanced analytics projects. This includes developing and maintaining ETL/ELT processes to automate data extraction, transformation, and loading from diverse sources.
- Data Modeling & Architecture : Apply expertise in data modeling techniques to design efficient and scalable database structures, ensuring data integrity and optimal performance.
- ETL/ELT Expertise : Implement and manage ETL processes and tools to ensure efficient and reliable data flow, maintaining high data quality and accessibility.
- Gen AI Integration : Explore and implement solutions leveraging LLM Mesh for Generative AI applications, contributing to the development of innovative AI-powered features.
- Programming & Scripting : Utilize programming languages such as Python and SQL for data manipulation, analysis, automation, and the development of custom data solutions.
- Cloud Platform Deployment : Deploy and manage scalable data solutions on cloud platforms such as AWS or Azure, leveraging their respective services for optimal performance and cost-efficiency.
- Data Quality & Governance : Ensure seamless integration of data sources, maintaining high data quality, consistency, and accessibility across all data assets. Implement data governance best practices.
- Collaboration & Mentorship : Collaborate closely with data scientists, analysts, and other stakeholders to understand data requirements and deliver impactful solutions. Potentially mentor junior team members.
- Performance Optimization : Continuously monitor and optimize the performance of data pipelines and data systems.
Required Skills & Experience :
- Proficiency in Dataiku : Demonstrable expertise in Dataiku for data preparation, analysis, visualization, and building end-to-end data pipelines and applications.
- Expertise in Data Modeling : Strong understanding and practical experience in various data modeling techniques (e.g., dimensional modeling, Kimball, Inmon) to design efficient and scalable database structures.
- ETL/ELT Processes & Tools : Extensive experience with ETL/ELT processes and a proven track record of using various ETL tools (e.g., Dataiku's built-in capabilities, Apache Airflow, Talend, SSIS).
- Familiarity with LLM Mesh : Familiarity with LLM Mesh or similar frameworks for Gen AI applications, understanding its concepts and potential for integration.
- Programming Languages : Strong proficiency in Python for data manipulation, scripting, and developing data solutions. Solid command of SQL for complex querying, data analysis, and database interactions.
- Cloud Platforms : Knowledge and hands-on experience with at least one major cloud platform (AWS or Azure) for deploying and managing scalable data solutions (e.g., S3, EC2, Azure Data Lake, Azure Synapse).
- Gen AI Concepts : Basic understanding of Generative AI concepts and their potential applications in data engineering.
- Problem-Solving : Excellent analytical and problem-solving skills with a keen eye for detail.
- Communication : Strong communication and interpersonal skills to collaborate effectively with cross-functional teams.
Bonus Points (Nice to Have) :
- Experience with other big data technologies (e.g., Spark, Hadoop, Snowflake).
- Familiarity with data governance and data security best practices.
- Experience with MLOps principles and tools.
- Contributions to open-source projects related to data engineering or AI.
Education : Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related quantitative field.
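As a rough illustration of the Dataiku and ETL/ELT skills this posting describes, here is a minimal sketch of a Dataiku Python recipe; the dataset names (raw_orders, orders_prepared) and column names are hypothetical, and the dataiku package is assumed to be available inside a Dataiku project.

```python
# Minimal sketch of a Dataiku Python recipe: read an input dataset,
# apply a simple transformation, and write a prepared output dataset.
# Dataset and column names are hypothetical examples.
import dataiku
import pandas as pd

# Extract: load the managed input dataset into a pandas DataFrame
raw_orders = dataiku.Dataset("raw_orders")
df = raw_orders.get_dataframe()

# Transform: basic cleaning and a derived column
df = df.dropna(subset=["order_id", "amount"])
df["order_date"] = pd.to_datetime(df["order_date"], errors="coerce")
df["amount_usd"] = df["amount"].astype(float).round(2)

# Load: write the result back to a managed output dataset with its schema
orders_prepared = dataiku.Dataset("orders_prepared")
orders_prepared.write_with_schema(df)
```

The same flow can be expressed with Dataiku visual recipes; a code recipe like this is typically reserved for logic that the visual steps cannot express cleanly.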
Posted 1 week ago
7.0 - 10.0 years
5 - 8 Lacs
Gurugram
Remote
Location : Remote (India). Employment Type : Contract (Remote). Experience Required : 7+ Years.
Job Summary : We are looking for a highly skilled Data Engineer / Data Modeler with strong experience in Snowflake, DBT, and GCP to support our data infrastructure and modeling initiatives. The ideal candidate should possess excellent SQL skills, hands-on experience with Erwin Data Modeler, and a strong background in modern data architectures and data modeling techniques.
Key Responsibilities :
- Design and implement scalable data models using Snowflake and Erwin Data Modeler.
- Create, maintain, and enhance data pipelines using DBT and GCP (BigQuery, Cloud Storage, Dataflow).
- Perform reverse engineering on existing systems (e.g., Sailfish/DDMS) using DBeaver or similar tools to understand and rebuild data models.
- Develop efficient SQL queries and stored procedures for data transformation, quality, and validation.
- Collaborate with business analysts and stakeholders to gather data requirements and convert them into physical and logical models.
- Ensure performance tuning, security, and optimization of the Snowflake data warehouse.
- Document metadata, data lineage, and business logic behind data structures and flows.
- Participate in code reviews, enforce coding standards, and provide best practices for data modeling and governance.
Must-Have Skills :
- Snowflake architecture, schema design, and data warehouse experience.
- DBT (Data Build Tool) for data transformation and pipeline development.
- Strong expertise in SQL (query optimization, complex joins, window functions, etc.).
- Hands-on experience with Erwin Data Modeler (logical and physical modeling).
- Experience with GCP (BigQuery, Cloud Composer, Cloud Storage).
- Experience in reverse engineering legacy systems like Sailfish or DDMS using DBeaver.
Good To Have :
- Experience with CI/CD tools and DevOps for data environments.
- Familiarity with data governance, security, and privacy practices.
- Exposure to Agile methodologies and working in distributed teams.
- Knowledge of Python for data engineering tasks and orchestration scripts.
Soft Skills :
- Excellent problem-solving and analytical skills.
- Strong communication and stakeholder management.
- Self-driven with the ability to work independently in a remote setup.
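To illustrate the Snowflake and SQL window-function skills this role asks for, the following is a minimal Python sketch using snowflake-connector-python; the connection parameters, the orders table, and its columns are hypothetical placeholders, not a reference to any real environment.

```python
# Illustrative only: run a windowed de-duplication query against Snowflake
# from Python. Connection details, table, and columns are assumptions.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",    # placeholder
    user="my_user",          # placeholder
    password="my_password",  # placeholder
    warehouse="ANALYTICS_WH",
    database="SALES",
    schema="PUBLIC",
)

# Keep only each customer's most recent order: a common modeling-layer step
# expressed with ROW_NUMBER() over a per-customer window.
query = """
SELECT customer_id, order_id, order_ts, amount
FROM (
    SELECT
        customer_id,
        order_id,
        order_ts,
        amount,
        ROW_NUMBER() OVER (
            PARTITION BY customer_id
            ORDER BY order_ts DESC
        ) AS rn
    FROM orders
) AS latest
WHERE rn = 1
"""

cur = conn.cursor()
try:
    cur.execute(query)
    for customer_id, order_id, order_ts, amount in cur:
        print(customer_id, order_id, order_ts, amount)
finally:
    cur.close()
    conn.close()
```

In a DBT project the same logic would usually live in a model file and be materialized as a table or view rather than executed ad hoc from a script.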
Posted 1 week ago
7.0 - 10.0 years
0 - 1 Lacs
Bengaluru
Work from Office
Data Governance Engineer :
Level of Experience : 8+ years.
Must-Have Skillset :
- Hands-on experience with Data Governance : Data Governance encompasses various components such as Data Control, Data Privacy, Data Ethics, and Data Strategy. While the candidate may not have experience in all areas, hands-on experience in at least one and an understanding of the others is essential.
- An existing understanding of EDM/DAMA/DCAM frameworks would be very useful.
- Experience in multi-team global collaboration : The CoE team is central to multiple global teams in International. The candidate should be adept at navigating these complexities.
- Experience with strategic initiatives : The I-DM CoE, particularly the Data Governance segment, is a strategic team within International. Prior experience with strategic solutioning is crucial, whereas experience limited to delivery roles may not be suitable.
- Strong communication skills.
Good-to-Have Skillset :
- Pharma background, as the enterprise data landscape in the pharma industry differs from other domains.
- Experience working with non-US clients.
- Consulting background.
Our Commitment to Diversity & Inclusion :
Did you know that Apexon has been Certified by Great Place To Work®, the global authority on workplace culture, in each of the three regions in which it operates: USA (for the fourth time in 2023), India (seven consecutive certifications as of 2023), and the UK? Apexon is committed to being an equal opportunity employer and promoting diversity in the workplace. We take affirmative action to ensure equal employment opportunity for all qualified individuals. Apexon strictly prohibits discrimination and harassment of any kind and provides equal employment opportunities to employees and applicants without regard to gender, race, color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. You can read about our Job Applicant Privacy policy here: Job Applicant Privacy Policy (apexon.com).
Our Perks and Benefits :
Our benefits and rewards program has been thoughtfully designed to recognize your skills and contributions, elevate your learning/upskilling experience, and provide care and support for you and your loved ones. As an Apexon Associate, you get continuous skill-based development, opportunities for career advancement, and access to comprehensive health and well-being benefits and assistance. We also offer:
- Group Health Insurance covering a family of 4
- Term Insurance and Accident Insurance
- Paid Holidays & Earned Leaves
- Paid Parental Leave
- Learning & Career Development
- Employee Wellness
About Apexon :
Apexon is a digital-first technology services firm specializing in accelerating business transformation and delivering human-centric digital experiences. We have been meeting customers wherever they are in the digital lifecycle and helping them outperform their competition through speed and innovation. Apexon brings together distinct core competencies in AI, analytics, app development, cloud, commerce, CX, data, DevOps, IoT, mobile, quality engineering and UX, and our deep expertise in BFSI, healthcare, and life sciences – to help businesses capitalize on the unlimited opportunities digital offers. Our reputation is built on a comprehensive suite of engineering services, a dedication to solving clients' toughest technology problems, and a commitment to continuous improvement.
Backed by Goldman Sachs Asset Management and Everstone Capital, Apexon now has a global presence of 15 offices (and 10 delivery centers) across four continents. We enable #HumanFirstDigital.
Posted 1 week ago