0 years
0 Lacs
Pune, Maharashtra, India
On-site
We are hiring for a leading IT product-based company.

Designation: Product Manager
Location: Pune
Skills: Power BI/Analytics, Databricks, use-case history in Jira or a similar tool, Agile environment
Domain Experience: Healthcare/RCM

Job Description:
- Consolidates input from customers, competitors, marketing, sales, customer support, and development teams into market requirements for an assigned product space.
- Translates market requirements into a product roadmap.
- Develops and maintains a prioritized list of product features.
- Works with Product Analyst(s) to translate the product roadmap into epics for releases.
- Understands product P&L and the key drivers for increasing revenue and profitability.
- Analyzes market competition by comparing the company's product to competitors' products.
- Ability to take complicated or complex information and present it in a logical and concise manner.
- Comfortable presenting to Director and VP level.
- Maintains an "Inner Circle" of five customer contacts at the user or manager level for industry and real-world insight.
Posted 5 days ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
You should have experience in understanding and translating data, analytics requirements, and functional needs into technical requirements while collaborating with global customers. Your responsibilities will include designing cloud-native data architectures to support scalable, real-time, and batch processing. You will be required to build and maintain data pipelines for large-scale data management in alignment with data strategy and processing standards. Additionally, you will define strategies for data modeling, data integration, and metadata management.

Your role will also require strong experience in database, data warehouse, and data lake design and architecture. You should be proficient in leveraging cloud platforms such as AWS, Azure, or GCP for data storage, compute, and analytics services. Experience in database programming using various SQL flavors is essential. Moreover, you will need to implement data governance frameworks encompassing data quality, lineage, and cataloging. Collaboration with cross-functional teams, including business analysts, data engineers, and DevOps teams, will be a key aspect of this role.

Familiarity with the Big Data ecosystem, whether on-premises (Hortonworks/MapR) or in the cloud, is required. You should be able to evaluate emerging cloud technologies and suggest enhancements to the data architecture. Proficiency in an orchestration tool such as Airflow or Oozie for scheduling pipelines is preferred. Hands-on experience with tools such as Spark Streaming, Kafka, Databricks, and Snowflake is necessary. You should be adept at working in an Agile/Scrum development process and at optimizing data systems for cost efficiency, performance, and scalability.
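To make the orchestration requirement above concrete, here is a minimal sketch of a scheduled pipeline using Airflow's TaskFlow API (Airflow 2.x); the DAG name, sample data, and filtering rule are illustrative assumptions, not details from the posting.

```python
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2025, 1, 1), catchup=False)
def daily_ingest():
    @task
    def extract() -> list[dict]:
        # A real pipeline would pull from Kafka, an API, or object storage.
        return [{"id": 1, "amount": 42.0}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        # Apply business rules / data-quality checks before loading.
        return [r for r in rows if r["amount"] >= 0]

    @task
    def load(rows: list[dict]) -> None:
        # Write to the warehouse (Redshift, Snowflake, Delta, ...).
        print(f"loading {len(rows)} rows")

    load(transform(extract()))


daily_ingest()
```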
Posted 5 days ago
8.0 - 13.0 years
0 Lacs
Hyderabad, Telangana
On-site
At Techwave, we are committed to fostering a culture of growth and inclusivity. We ensure that every individual associated with our brand is challenged at every step and provided with the necessary opportunities to excel in their professional and personal lives. People are at the core of everything we do.

Techwave is a leading global IT and engineering services and solutions company dedicated to revolutionizing digital transformations. Our mission is to enable clients to maximize their potential and achieve a greater market share through a wide array of technology services, including Enterprise Resource Planning, Application Development, Analytics, Digital solutions, and the Internet of Things (IoT). Founded in 2004 and headquartered in Houston, TX, USA, Techwave leverages its expertise in Digital Transformation, Enterprise Applications, and Engineering Services to help businesses accelerate their growth. We are a team of dreamers and doers who constantly push the boundaries of what's possible, and we want YOU to be a part of it.

Job Title: Data Lead
Experience: 10+ Years
Mode of Hire: Full-time

Key Skills: As a senior-level ETL developer with 10-13 years of experience, you will be responsible for building relational and data warehousing applications. Your primary role will involve supporting the existing EDW, designing and developing the various layers of our data, and testing, documenting, and optimizing the ETL process. You will collaborate within a team environment to design and develop frameworks and services according to specifications. Your responsibilities will also include preparing detailed system documentation, performing unit and system tests, coordinating with Operations staff on application deployment, and ensuring that all activities meet quality and compliance standards. Additionally, you will design and implement ETL batches that meet SLAs, develop data collection, staging, movement, quality, and archiving strategies, and design automation processes to control data access and movement.

To excel in this role, you must have 8-10 years of ETL/ELT experience, strong SQL skills, and proficiency in stored procedures and database development. Experience with Azure Data Lake, Synapse, Azure Data Factory, and Databricks, as well as Snowflake, is essential. You should possess a good understanding of data warehouse ETL and ELT design best practices, be able to work independently, and have strong database experience with DB2, SQL Server, and Azure. Furthermore, you should be adept at designing relational and dimensional data models, have a good grasp of enterprise reporting (particularly Power BI), and understand Agile practices and methodologies. Your role will also involve assisting in analyzing and extracting relevant information from historical business data to support Business Intelligence initiatives, conducting proofs of concept for new technology selection, and proposing data warehouse architecture enhancements.

If you are a self-starter with the required skills and experience, we invite you to join our dynamic team at Techwave and be a part of our journey towards innovation and excellence.
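As context for the Azure Data Lake / Databricks stack above, a hedged PySpark sketch of a typical ELT step: read raw files from the lake, aggregate, and write a Delta table. The storage path, columns, and table name are assumptions for illustration only.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("edw-elt").getOrCreate()

# Read raw JSON landed in Azure Data Lake Storage (path is hypothetical).
raw = spark.read.json("abfss://raw@account.dfs.core.windows.net/sales/")

# Shape the data for a dimensional model: one row per store per day.
fact_sales = (
    raw.withColumn("sale_date", F.to_date("sale_ts"))
       .groupBy("sale_date", "store_id")
       .agg(F.sum("amount").alias("total_amount"))
)

# Delta Lake is available out of the box on Databricks.
fact_sales.write.format("delta").mode("overwrite").saveAsTable("edw.fact_sales")
```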
Posted 5 days ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
You will be joining Coders Brain Technology Pvt. Ltd., a global leader in services, digital, and business solutions. At Coders Brain, we partner with our clients to simplify, strengthen, and transform their businesses. We are committed to providing the highest levels of certainty and satisfaction through our comprehensive industry expertise and global network of innovation and delivery centers.

As a Data Engineer with a minimum of 5 years of experience, you will be working remotely. Your role will involve collaborating with other developers to define and refine solutions, and working closely with the business to deliver data and analytics projects. Your responsibilities will include data integration with tools such as Apache Spark, EMR, Glue, Kafka, Kinesis, and Lambda in an AWS cloud environment. You should have strong real-world experience in Python development, especially in PySpark within the AWS cloud. Designing, developing, testing, deploying, maintaining, and improving data integration pipelines will be a key part of your role.

Additionally, you should have experience with Python and its common libraries, Perl, Unix scripts, and analytical work with databases. Proficiency in source control systems such as Git and Bitbucket and continuous integration tools such as Jenkins is required. Experience with continuous deployment (CI/CD), Databricks, Airflow, and Apache Spark will be beneficial. Knowledge of databases such as Oracle, SQL Server, PostgreSQL, Redshift, MySQL, or similar is essential. Exposure to ETL tools, including Informatica, is preferred. A degree in Computer Science, Computer Engineering, or Electrical Engineering is desired.

If you are interested in this opportunity, click on the apply button. Alternatively, you can send your resume to prerna.jain@codersbrain.com or pooja.gupta@codersbrain.com.
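For the Spark-plus-Kafka portion of the role, a small Structured Streaming sketch; the broker address, topic, schema, and S3 paths are hypothetical placeholders.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import DoubleType, StringType, StructField, StructType

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

schema = StructType([
    StructField("event_id", StringType()),
    StructField("value", DoubleType()),
])

# Consume JSON events from Kafka and parse them into columns.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Land the parsed stream in S3 as Parquet, with checkpointing for recovery.
query = (
    events.writeStream.format("parquet")
    .option("path", "s3a://my-bucket/events/")
    .option("checkpointLocation", "s3a://my-bucket/checkpoints/events/")
    .start()
)
```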
Posted 5 days ago
0.0 - 8.0 years
0 Lacs
Chennai, Tamil Nadu
Remote
Location: Chennai, Tamil Nadu, India
Job ID: R0097953
Date Posted: 2025-07-10
Company Name: HITACHI ENERGY TECHNOLOGY SERVICES PRIVATE LIMITED
Profession (Job Category): IT, Telecom & Internet
Job Schedule: Full time
Remote: No

Job Description:

The opportunity: Software development using Power Apps, Power Automate, and SharePoint as per the job description.

How you'll make an impact:
- Develop complex applications with Microsoft Power Apps and Power Automate using SharePoint, Dataverse, or SQL as the backend.
- Propose and guide the team to establish the app's data storage and retrieval in the enterprise data platform (using data lake, Databricks).
- Connect with the business to gather requirements and set priorities for development.
- Connect with subject matter experts to understand the business processes.
- Organize change requests in a structured manner with excellent traceability.
- Convert business requirements into process flow charts.
- Work independently in developing Power Apps applications.
- Conduct periodic design review meetings to ensure development is progressing to the agreed timeline.
- Follow up with the business to ensure required inputs are received on time.
- Support business users during user acceptance testing.
- Undertake change requests.
- Ensure compliance with applicable external and internal regulations, procedures, and guidelines.
- Live Hitachi Energy's core values of safety and integrity, which means taking responsibility for your own actions while caring for your colleagues and the business.

Your background:
- B.Tech / MCA
- 4-8 years of experience
- Should have executed at least 5 projects using the Power Apps and Power Automate platform in a lead role.
- Good technical and working knowledge of SQL Server.
- Expertise in canvas apps and model-driven apps.
- Expertise in creating complex Power Automate flows.
- Exposure to enterprise data platform, data lake, and Databricks concepts.
- Expertise in interfacing with software platforms such as SAP, Salesforce, etc.
- Knowledge of Artificial Intelligence / Machine Learning concepts and implementation methods.

Qualified individuals with a disability may request a reasonable accommodation if you are unable or limited in your ability to use or access the Hitachi Energy career site as a result of your disability. You may request reasonable accommodations by completing a general inquiry form on our website. Please include your contact information and specific details about your required accommodation to support you during the job application process. This is solely for job seekers with disabilities requiring accessibility assistance or an accommodation in the job application process. Messages left for other purposes will not receive a response.
Posted 5 days ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description

As a Data Engineer, you will be part of a Data and Analytics (D&A) team responsible for building data pipelines that enable us to make informed decisions across the entire organization. This is a great opportunity to make a real impact on the course of the company, which makes data-based decisions as part of its Data + Analytics Strategy. The Data Engineer is responsible for the design, development, testing, and implementation of automated data pipelines for the Enterprise Data Warehouse, hosted in the cloud. The Data Engineer works closely with Business Intelligence / Analytics teams and business users to understand requirements, translate them into technical design, develop data pipelines, and implement solutions in the Enterprise Data Warehouse (Redshift).

Primary Responsibilities Include
- Analyze existing stored procedures, and create new ones, involving complex data models and business rules.
- Build data pipelines utilizing ETL transformation tools such as Informatica or AWS Glue.
- Actively participate through all phases of the project cycle, from ideation to post-implementation stabilization.
- Work with business and technical peers to define how best to meet requirements, balancing speed and robustness.
- Build high-quality, maintainable SQL OLAP/analytic functions following established patterns and coding practices (a sketch of such a query follows this posting).
- Analyze technical data to establish solutions that achieve complex data transformations.
- Participate in and perform testing to ensure data quality and integrity via unit, integration, regression, and UAT testing.
- Create and maintain process design, data model, and operations documentation.
- Assist in the maintenance of the codebase, unit tests, and related technical design docs and configurations.
- Engage and collaborate with stakeholders via the Agile process, identifying and mitigating risks and issues as needed.
- Maintain software velocity and quality for deliverables, holding oneself accountable to commitments.

Job Requirements (Minimum Competencies Required for Job Performance)
- Experience in PL/SQL scripting and query optimization: required.
- Experience with AWS (Amazon Web Services) Redshift, Oracle, or PostgreSQL: preferred.
- Experience with Informatica PowerCenter and/or Informatica Cloud / IDMC: preferred.
- Experience in data model design, dimensional data modeling, and complex stored procedure development: required.
- Strong analytical skills, synthesizing information with attention to detail and accuracy to establish patterns and solutions.
- Experience with AWS, e.g., S3, PySpark, Glue, Redshift, Lambda: preferred.
- Experience with data lakehouse platforms, e.g., Databricks, Snowflake: preferred.
- Experience in scripting languages, e.g., Python, Scala, Java, Unix shell, Bash: preferred.
- Experience operating in Agile and Waterfall development methodologies: preferred.
- Experience building data visualization solutions using BI platforms, e.g., Tableau, Power BI, Qlik: preferred.
- Capable of balancing technology ideals and business objectives, evaluating options and implications.
- Strong written and verbal communication skills.
- Manages and prioritizes work effectively with minimal supervision, seeking and offering help as needed to achieve goals.
- Adaptable to change and able to work independently and as part of a team.
- Applies curiosity and creativity to solve problems, seeking opportunities and overcoming challenges with resourcefulness.
- High bias for action in meeting commitments and deadlines; effectively sees, communicates, and mitigates risks and issues.
Active participant in the development community; seeks and offers guidance, coaching, and professional development. (ref:hirist.tech)
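As referenced in the OLAP bullet above, a hedged example of an analytic (window) function run against Redshift/PostgreSQL via psycopg2; connection details and the sales.orders table are placeholders.

```python
import psycopg2

SQL = """
SELECT
    customer_id,
    order_date,
    amount,
    SUM(amount) OVER (
        PARTITION BY customer_id
        ORDER BY order_date
        ROWS UNBOUNDED PRECEDING
    ) AS running_total          -- per-customer running total
FROM sales.orders;
"""

with psycopg2.connect(host="redshift-host", dbname="edw",
                      user="etl_user", password="...") as conn:
    with conn.cursor() as cur:
        cur.execute(SQL)
        for customer_id, order_date, amount, running_total in cur.fetchmany(10):
            print(customer_id, order_date, amount, running_total)
```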
Posted 6 days ago
5.0 - 8.0 years
0 Lacs
India
Remote
Position: Azure Data Engineer (Offshore - Remote)
Experience: 5 - 8 years
Start Date: Need to start within 30 days
Engagement Type: Full-Time

About the Role: Smartbridge is seeking an Azure Data Engineer to design, develop, and optimize data solutions leveraging Microsoft Azure technologies. The ideal candidate will have 5 to 8 years of experience working with Azure Data Factory (ADF), Azure Synapse Analytics, SQL, and ETL processes.

Responsibilities:
- Develop and maintain ETL pipelines using Azure Data Factory (ADF).
- Design and implement data models for efficient storage and retrieval in Azure Synapse Analytics.
- Optimize SQL queries and performance tuning for large datasets.
- Work with Azure Data Lake, Azure SQL Database, and other cloud data solutions.
- Implement data security measures, including role-based access control (RBAC) and data masking.
- Collaborate with business analysts and stakeholders to understand data requirements and translate them into technical solutions.

Technical Skills Required:
- Azure Data Factory (ADF) – building, orchestrating, and monitoring data pipelines.
- Azure Synapse Analytics – data modeling, performance optimization.
- SQL Server & T-SQL – writing complex queries, stored procedures.
- ETL & Data Transformation – experience handling large datasets.
- Azure Data Lake & Blob Storage – managing structured and unstructured data.
- Power BI or other visualization tools (preferred but not mandatory).
- Python or Spark (Databricks experience is a plus).

Additional Requirements:
- Bachelor's degree in Computer Science, Data Engineering, or a related field.
- Microsoft Azure certification (preferred but not mandatory); if certification is not already held, it must be completed during probation.
- Experience in Oil & Gas, Life Science, or Food & Beverage is a plus.
- 100% remote role – must be available for the second shift to overlap with the US team until noon CST.
- 3-month probation period before full-time confirmation.

Recruitment Process & Technical Testing: Candidates will undergo a 45-60 minute TestGorilla assessment, including:
- Intro Video Section – candidate introduction & motivation.
- Analytical & Problem-Solving Skills – scenario-based questions.
- Technical Test – covering SQL, Azure Data Engineering questions, and possible coding tasks.

Join Smartbridge and be part of an innovative team driving cloud data solutions!
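To illustrate the data-masking responsibility above, a sketch using SQL Server / Azure SQL dynamic data masking issued from Python via pyodbc; the server, database, table, and column are invented, and a Synapse deployment may differ in detail.

```python
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=edw;"
    "UID=admin_user;PWD=..."
)

# Dynamic data masking: non-privileged readers see masked e-mail values.
MASK_EMAIL = """
ALTER TABLE dbo.customers
ALTER COLUMN email ADD MASKED WITH (FUNCTION = 'email()');
"""

with conn:
    conn.execute(MASK_EMAIL)
```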
Posted 6 days ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Company Overview: Maxima Tek is a well-funded and rapidly growing company specializing in software solutions for software-defined vehicles, with over four million vehicles already on the road with top global OEM brands. They are at the forefront of automotive digital transformation and have a diverse team across various international locations. This is a hybrid position based in Sunnyvale, CA, requiring 3 days in the office.

The Opportunity: Sonatus is seeking a highly motivated AI Engineer to drive software innovations for next-generation software-defined vehicles. The role focuses on customer-first product development and solving real-world problems.

Key Responsibilities:
- Lead efforts to leverage existing AI models and frameworks to solve complex business challenges.
- Conduct the full data modeling and algorithm development lifecycle (modeling, training, tuning, validating, deploying, maintaining).
- Demonstrate strong domain expertise in various AI areas, including LLMs, computer vision, time series, RAG, fine-tuning large models, and traditional ML models.
- Stay current with industry trends and advancements in data science and AI.
- Perform data analysis and provide insights for business decisions.
- Ensure adherence to data privacy and security protocols.
- Collaborate with cross-functional teams to translate requirements into AI/data science solutions.
- Document and communicate technical designs, processes, and best practices.
- Manage projects to ensure timely completion in a dynamic environment.

Requirements:
- Master's or PhD in Computer Science, Engineering, Mathematics, Applied Sciences, or a related field.
- Strong programming skills in Python, Java, or C++, with experience in TensorFlow, PyTorch, or scikit-learn.
- In-depth knowledge of current machine learning algorithms, AI technologies, and platforms.
- Experience with data engineering/processing frameworks (e.g., Databricks, Spark, Dataflow) and SQL proficiency.
- Solid experience in data preprocessing, feature engineering, and model evaluation.
- Familiarity with cloud platforms (AWS, Azure, Google Cloud) and containerization (Docker, Kubernetes) is a plus.
- Strong knowledge of software development best practices, version control systems, and agile methodologies.
- Results-driven with excellent problem-solving, communication (verbal and written), and collaboration skills.
- Experience in the automotive industry is highly desirable.
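As a toy illustration of the train/validate/evaluate lifecycle the posting describes, a minimal scikit-learn sketch on a bundled dataset; real work would of course use vehicle or business data.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Model evaluation: per-class precision, recall, and F1.
print(classification_report(y_test, model.predict(X_test)))
```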
Posted 6 days ago
3.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Engineering at Innovaccer

With every line of code, we accelerate our customers' success, turning complex challenges into innovative solutions. Collaboratively, we transform each data point we gather into valuable insights for our customers. Join us and be part of a team that's turning dreams of better healthcare into reality, one line of code at a time. Together, we're shaping the future and making a meaningful impact on the world.

About the Role

The technology that once promised to simplify patient care has brought more issues than anyone ever anticipated. At Innovaccer, we defeat this beast by making full use of all the data Healthcare has worked so hard to collect, and replacing long-standing problems with ideal solutions. Data is our bread and butter for innovation. We are looking for a Software Development Engineer-II (AI) who understands healthcare data and can leverage it to build algorithms that personalize treatments based on the clinical and behavioral history of patients. The purpose of this role is to build agents for the platform, automating areas around support engineering and data engineering. We are looking for a superstar who will define and build the next generation of predictive analytics tools in healthcare.

A Day in the Life
- Design and lead the development of various artificial intelligence initiatives to help improve the health and wellness of patients
- Work with business leaders and customers to understand their pain points and build large-scale solutions for them
- Break down complex business problems into machine learning problems and design solution workflows
- Work with our data platform team to help them successfully integrate agent capabilities or algorithms into their products/workflows
- Work with development teams to build tools for repeatable data tasks that will accelerate and automate the development cycle

What You Need
- 3+ years of experience in Software Engineering, with experience building APIs, a solid understanding of how APIs function, and the ability to develop applications effectively
- Familiarity with prompt engineering and working experience in fine-tuning LLMs
- Hands-on experience with multi-agent systems using frameworks such as CrewAI, LangChain, or AutoGen (at least one is a must), plus prompt engineering
- Hands-on experience with vector databases such as ChromaDB, FAISS, etc. (see the sketch after this posting)
- Experience in reinforcement learning (RL), especially for autonomous agents
- Working experience with embedding models and RAG design
- Strong hands-on experience in Python – building enterprise applications, optimization techniques, and API integrations (FastAPI/Django)
- Hands-on experience with at least one ML platform from among Databricks, Azure ML, and SageMaker
- Good to have: comfort with Docker, Kubernetes, AWS cloud technologies, and Snowflake, plus some experience in healthcare

Preferred Skills
- Python – building highly scalable and performant applications
- LLMs – deep experience working with and fine-tuning LLM models
- Reinforcement learning and multi-agent systems
- Vector databases

Here's What We Offer
- Generous Leaves: Enjoy generous leave benefits of up to 40 days.
- Parental Leave: Leverage one of the industry's best parental leave policies to spend time with your new addition.
- Sabbatical: Want to focus on skill development, pursue an academic career, or just take a break? We've got you covered.
- Health Insurance: We offer comprehensive health insurance to support you and your family, covering medical expenses related to illness, disease, or injury.
Extending support to the family members who matter most.
- Care Program: Whether it's a celebration or a time of need, we've got you covered with care vouchers to mark major life events. Through our Care Vouchers program, employees receive thoughtful gestures for significant personal milestones and moments of need.
- Financial Assistance: Life happens, and when it does, we're here to help. Our financial assistance policy offers support through salary advances and personal loans for genuine personal needs, ensuring help is there when you need it most.

Innovaccer is an equal-opportunity employer. We celebrate diversity, and we are committed to fostering an inclusive and diverse workplace where all employees, regardless of race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, marital status, or veteran status, feel valued and empowered.

Disclaimer: Innovaccer does not charge fees or require payment from individuals or agencies for securing employment with us. We do not guarantee job spots or engage in any financial transactions related to employment. If you encounter any posts or requests asking for payment or personal information, we strongly advise you to report them immediately to our HR department at px@innovaccer.com. Additionally, please exercise caution and verify the authenticity of any requests before disclosing personal and confidential information, including bank account details.

About Innovaccer

Innovaccer activates the flow of healthcare data, empowering providers, payers, and government organizations to deliver intelligent and connected experiences that advance health outcomes. The Healthcare Intelligence Cloud equips every stakeholder in the patient journey to turn fragmented data into proactive, coordinated actions that elevate the quality of care and drive operational performance. Leading healthcare organizations like CommonSpirit Health, Atlantic Health, and Banner Health trust Innovaccer to integrate a system of intelligence into their existing infrastructure, extending the human touch in healthcare. For more information, visit www.innovaccer.com. Check us out on YouTube, Glassdoor, LinkedIn, Instagram, and the Web.
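The sketch referenced in the vector-database bullet above: a minimal ChromaDB retrieval step of the kind used to ground a RAG prompt. The collection name and documents are invented.

```python
import chromadb

client = chromadb.Client()  # in-memory; use PersistentClient(path=...) in practice
collection = client.create_collection(name="support_docs")

collection.add(
    ids=["doc1", "doc2"],
    documents=[
        "Restart the ingestion agent if the pipeline stalls.",
        "Claims files must be validated against the configured schema.",
    ],
)

# Retrieve the most relevant snippet to ground an LLM prompt.
results = collection.query(query_texts=["pipeline stuck"], n_results=1)
print(results["documents"])
```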
Posted 6 days ago
3.0 - 5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Engineer

Introduction to role: Are you ready to make a significant impact in the world of biopharmaceuticals? AstraZeneca, a global leader in innovation-driven prescription medicines, is seeking a dedicated Data Engineer to join our Commercial IT Data Analytics & AI (DAAI) team. With operations in over 100 countries and headquarters in the United Kingdom, AstraZeneca offers a unique workplace culture that fosters innovation and collaboration. As a Data Engineer, you will play a crucial role in supporting and enhancing our data platforms built on AWS services. Your expertise in ETL, Data Warehousing, Databricks, and AWS applications will be vital in ensuring business continuity and driving efficiency. Are you up for the challenge?

Accountabilities
- Monitor and maintain the health and performance of production systems and applications.
- Provide timely incident response, troubleshooting, and resolution for technical issues raised by users or monitoring tools.
- Perform root cause analysis for recurring issues and implement preventive measures.
- Investigate data anomalies, troubleshoot failures, and coordinate with relevant teams for resolution.
- Collaborate with development and infrastructure teams to support deployments and configuration changes.
- Maintain and update technical documentation, standard operating procedures, and knowledge bases.
- Ensure adherence to service-level agreements (SLAs) and minimize downtime or service disruptions.
- Manage user access, permissions, and security-related requests as per organizational policies.
- Participate in on-call rotations and provide after-hours support as needed.
- Communicate effectively with collaborators, providing status updates and post-incident reports.
- Proactively find opportunities for automation and process improvement in support activities.
- Support data migration, upgrades, and transitions as required.
- Support business continuity and disaster recovery exercises as required.

Essential Skills/Experience
- Education background: B.E/B.Tech/MCA/MSc/BSc
- Overall years of experience: 3 to 5 years
- Solid experience with SQL, data warehousing, and building ETL pipelines
- Hands-on experience with AWS services, including EMR, EC2, S3, Athena, RDS, Databricks, and Redshift
- Skilled in working with columnar databases such as Redshift, Cassandra, or BigQuery
- Good understanding of ETL processes and data warehousing concepts
- Familiarity with scheduling tools (Airflow especially is a plus)
- Able to write complex SQL queries for data extraction, transformation, and reporting (see the sketch after this posting)
- Excellent communication skills and ability to work well with both technical and non-technical teams
- Strong analytical and troubleshooting skills in complex data environments

Desirable Skills/Experience
- Experience with Databricks or Snowflake
- Proficiency in scripting and programming languages such as shell scripting and Python
- Familiarity with CI/CD using Bamboo
- Proficiency in version control systems, including Bitbucket and GitHub
- Preferably experienced with release management processes
- Significant prior experience in an IT environment within the pharmaceutical or healthcare industry

At AstraZeneca, we are committed to driving exciting transformation on our journey to becoming a digital and data-led enterprise. Our work connects across the entire business to power each function, influencing patient outcomes and improving lives. By unleashing the power of our latest innovations in data, machine learning, and technology, we turn complex information into life-changing insights.
Join us to work alongside leading experts in our specialist communities, where your contributions are recognized from the top. Ready to take the next step? Apply now to become part of our dynamic team!

Date Posted: 09-Jul-2025
Closing Date: 13-Jul-2025

AstraZeneca embraces diversity and equality of opportunity. We are committed to building an inclusive and diverse team representing all backgrounds, with as wide a range of perspectives as possible, and harnessing industry-leading skills. We believe that the more inclusive we are, the better our work will be. We welcome and consider applications to join our team from all qualified candidates, regardless of their characteristics. We comply with all applicable laws and regulations on non-discrimination in employment (and recruitment), as well as work authorization and employment eligibility verification requirements.
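The sketch referenced in the SQL bullet above: a hedged example of a routine data-quality check submitted to Athena with boto3. Region, database, table, and output bucket are placeholders.

```python
import boto3

athena = boto3.client("athena", region_name="us-east-1")

resp = athena.start_query_execution(
    QueryString=(
        "SELECT COUNT(*) FROM etl_audit.failed_loads "
        "WHERE load_date = current_date"
    ),
    QueryExecutionContext={"Database": "etl_audit"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
print("query id:", resp["QueryExecutionId"])  # poll get_query_execution for status
```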
Posted 6 days ago
2.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
About The Role

Permanent position that can be based in Delhi (India) or Ulaanbaatar (Mongolia).

We are looking for a Senior Analyst, People Analytics, to be responsible for the maintenance and sustainability of People Data and Insights products and data solutions that enable the People Function to make well-informed, evidence-based decisions. This role is a great opportunity for an experienced individual to support the group's focus on delivering data and analysis to drive decision making, help foster stronger partnership with internal customers, and achieve business excellence. We are looking for an enterprising person with the ability to derive actionable insights from multiple data sources, a love for data and statistics, and a willingness to learn and grow.

Reporting to the People Data Solutions Lead and working in a collaborative community within the People Data and Insights division, your responsibilities will include (but are not limited to) the following.

As a system support team member (People Data Solutions team):
- Support the maintenance and sustainability of the People Data & Insights products across the People Insights, Data Management, Data Science, Data Governance, and Surveys & Research teams
- Manage and prioritize support requests, ensuring timely resolution of incidents and requests
- Provide support to the human resources team with ongoing data and reporting activities and special projects as needed
- Act as a primary point of contact for end-users and stakeholders to provide guidance, advice, and support on People Data & Insights products in production
- Ensure all support processes and procedures are in place and followed to ensure consistent and efficient support
- Identify and implement improvements to support processes and tools to increase the efficiency and effectiveness of the People Data Solutions team
- Actively get involved in and monitor the transition of data solutions from development to operations
- Continuously monitor the performance and availability of operational data solutions and products, taking proactive steps to prevent incidents and resolve issues
- Get involved in maintenance of the HR Lakehouse (Databricks) and the data solutions built within it

As a member of the People Data and Insights team:
- Discover issues with data accuracy caused by system and human errors, and provide recommendations for improvement (see the sketch after this posting)
- Identify data quality and integrity issues to discover fit-for-purpose data sets
- Ensure compliance with human resource reporting quality standards
- Maintain and implement the data governance and confidentiality framework to protect employee data
- Ensure proper source control, documented best practices, and quality assurance processes are implemented and followed to maintain resilience and process integrity
- Collaborate with internal development teams to resolve complex issues and provide feedback on application design and development
- Continuously learn and get involved in the development of the People Data and Insights team's data solutions whenever needed and possible
- Get involved in projects when and wherever necessary

What You'll Bring
- A commitment to the safety of yourself and your team
- Overall, 2-4 years of experience in a global organization with a multi-cultural discipline
- Experience working in a high-performance data engineering and analytics team environment
- Knowledge (mid to expert level) of data extraction and transformation using common systems – for example Workday, SAP BW, Databricks, SQL databases, AWS, cloud services, and others
- Understanding of process documentation principles and skill in version control (e.g., GitHub)
- Knowledge and experience (mid to high level) in data visualization tools such as Power BI, Excel (pivot tables, analytical functions, macros), and Tableau
- Advanced working knowledge of SQL, Python, and PySpark
- Communication and writing skills – ability to tell the story behind the data
- Diligence and attention to detail
- Ability to manage and deliver routine work on strict timelines along with special business projects

It will also be beneficial if you have:
- Working experience with Human Resources data and data models, as well as an understanding of data security and data privacy
- Proven working knowledge of Python, PySpark, and SQL
- Hands-on experience with Databricks, AWS/Azure, and Terraform, especially in defining and maintaining ETL pipelines and infrastructure as code
- Familiarity with basic/advanced machine learning algorithms and the underlying statistical techniques
- Experience in stakeholder and customer management

About Rio Tinto

Rio Tinto is a leading global mining and materials company. We operate in 35 countries where we produce iron ore, copper, aluminium, critical minerals, and other materials needed for the global energy transition and for people, communities, and nations to thrive. We have been mining for 150 years and operate with knowledge built up across generations and continents. Our purpose is finding better ways to provide the materials the world needs – striving for innovation and continuous improvement to produce materials with low emissions and to the right environmental, social and governance standards. But we can't do it on our own, so we're focused on creating partnerships to solve problems, create win-win situations and meet opportunities.

Every Voice Matters

At Rio Tinto, we particularly welcome and encourage applications from Indigenous Peoples, women, the LGBTQIA+ community, mature workers, people with disabilities and people from different cultural backgrounds. We are committed to an inclusive environment where people feel comfortable to be themselves.
We want our people to feel that all voices are heard, all cultures respected and that a variety of perspectives are not only welcome – they are essential to our success. We treat each other fairly and with dignity regardless of race, gender, nationality, ethnic origin, religion, age, sexual orientation or anything else that makes us different.
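The sketch referenced in the data-accuracy bullet above: a small pandas example of flagging suspect HR records. The columns and rules are hypothetical, not Rio Tinto's actual data model.

```python
import pandas as pd

hr = pd.DataFrame({
    "employee_id": [1, 2, 2, 3],
    "hire_date": ["2021-04-01", "2022-13-01", "2022-01-15", None],
})

parsed = pd.to_datetime(hr["hire_date"], errors="coerce")
issues = pd.DataFrame({
    "duplicate_id": hr["employee_id"].duplicated(keep=False),
    "missing_hire_date": hr["hire_date"].isna(),
    "bad_hire_date": parsed.isna() & hr["hire_date"].notna(),
})

# Rows with at least one failed check go back to the data owners.
print(hr[issues.any(axis=1)])
```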
Posted 6 days ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Reference #: 316918BR
Job Type: Full Time

Your role

Are you an analytic thinker? Do you enjoy creating valuable insights with data? Do you want to play a key role in transforming our firm into an agile organization? At UBS, we re-imagine the way we work, the way we connect with each other – our colleagues, clients and partners – and the way we deliver value. Being agile will make us more responsive, more adaptable, and ultimately more innovative.

We're looking for a Data Engineer to:
- transform data into valuable insights that inform business decisions, making use of our internal data platforms and applying appropriate analytical techniques
- design, model, develop, and improve data pipelines and data products
- engineer reliable data pipelines for sourcing, processing, distributing, and storing data in different ways, using data platform infrastructure effectively
- develop, train, and apply machine-learning models to make better predictions, automate manual processes, and solve challenging business problems
- ensure the quality, security, reliability, and compliance of our solutions by applying our digital principles and implementing both functional and non-functional requirements
- build observability into our solutions, monitor production health, help to resolve incidents, and remediate the root cause of risks and issues
- understand, represent, and advocate for client needs

Your team

In our agile operating model, crews are aligned to larger products and services fulfilling client needs and encompass multiple autonomous pods. You'll be working in the Developer Workspaces Team, focusing on providing compute, development environments, and tooling to developers and business users.

Your expertise
- comprehensive understanding of, and ability to apply, data engineering techniques, from event streaming and real-time analytics to computational grids and graph processing engines
- curiosity to learn new technologies and practices, reuse strategic platforms and standards, evaluate options, and make decisions with long-term sustainability in mind
- strong command of at least one language among Python, Java, and Golang
- understanding of data management and database technologies, including SQL/NoSQL
- understanding of data products, data structures, and data manipulation techniques, including classification, parsing, and pattern matching
- experience with Databricks, ADLS, Delta Lake/Tables, and ETL tools would be an asset (see the sketch after this posting)
- good understanding of engineering practices and the software development lifecycle
- enthusiastic, self-motivated, and client-focused

About Us

UBS is the world's largest and the only truly global wealth manager. We operate through four business divisions: Global Wealth Management, Personal & Corporate Banking, Asset Management and the Investment Bank. Our global reach and the breadth of our expertise set us apart from our competitors. We have a presence in all major financial centers in more than 50 countries.

How We Hire

We may request you to complete one or more assessments during the application process.

Join us

At UBS, we know that it's our people, with their diverse skills, experiences and backgrounds, who drive our ongoing success. We're dedicated to our craft and passionate about putting our people first, with new challenges, a supportive team, opportunities to grow and flexible working options when possible. Our inclusive culture brings out the best in our employees, wherever they are on their career journey. We also recognize that great work is never done alone.
That’s why collaboration is at the heart of everything we do. Because together, we’re more than ourselves. We’re committed to disability inclusion and if you need reasonable accommodation/adjustments throughout our recruitment process, you can always contact us. Disclaimer / Policy Statements UBS is an Equal Opportunity Employer. We respect and seek to empower each individual and support the diverse cultures, perspectives, skills and experiences within our workforce.
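The sketch referenced in the Delta Lake bullet above: an incremental upsert ("merge") into a Delta table, a common pattern in the pipeline work described. The table name and key are assumptions, and the snippet requires the delta-spark package.

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Incoming changes (in practice, read from a stream or staging table).
updates = spark.createDataFrame(
    [(1, "EUR"), (2, "CHF")], ["account_id", "currency"]
)

target = DeltaTable.forName(spark, "ref.accounts")
(
    target.alias("t")
    .merge(updates.alias("u"), "t.account_id = u.account_id")
    .whenMatchedUpdateAll()      # update existing accounts
    .whenNotMatchedInsertAll()   # insert new ones
    .execute()
)
```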
Posted 6 days ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Title: Gen AI Engineer – Azure AI & Databricks (US Healthcare & RCM Focus)

About The Role

As a Gen AI Engineer, you will be a pivotal contributor within our Data & AI team, driving innovation by leveraging advanced AI and cloud technologies to transform healthcare operations. You will work primarily with Azure AI services and Databricks to build scalable, intelligent solutions tailored to the US healthcare domain, with a focus on Revenue Cycle Management (RCM). You will collaborate closely with healthcare domain experts, data scientists, and business stakeholders to design, develop, and deploy AI-powered applications that improve healthcare outcomes and operational efficiency. Join us and be part of a mission-driven team dedicated to helping people live healthier lives by applying cutting-edge AI technologies to solve complex healthcare challenges.

Responsibilities
- Design, develop, and deploy AI/ML models and solutions on the Azure AI platform and Databricks environment, focused on US healthcare datasets and RCM workflows.
- Collaborate with cross-functional teams to gather and analyse requirements, ensuring AI solutions meet business and regulatory needs.
- Build, optimize, and maintain scalable AI pipelines integrating data from multiple sources, including healthcare claims, patient records, and financial systems.
- Apply NLP, computer vision, predictive analytics, and generative AI techniques to extract insights and automate healthcare operational processes (see the sketch after this posting).
- Work with large-scale healthcare data, ensuring data quality, governance, and compliance with HIPAA and other healthcare regulations.
- Develop prototypes and proofs of concept using generative AI models and tools to solve complex problems in Revenue Cycle Management.
- Support the evaluation, selection, and integration of AI technologies and frameworks in the healthcare domain.
- Collaborate with data engineers and analysts to create dashboards, visualizations, and reports to communicate AI outcomes to business stakeholders.
- Monitor model performance and retrain models as needed to ensure accuracy and reliability in production environments.
- Mentor junior engineers and share best practices on AI development, cloud deployment, and healthcare data handling.

Required Qualifications
- Bachelor's or higher degree in Computer Science, Engineering, Data Science, or a related technical field.
- 5+ years of experience designing and deploying AI/ML solutions on Azure AI services (Azure Cognitive Services, Azure ML) and Databricks.
- Strong programming skills in Python, Scala, or similar languages used for AI development.
- Experience with generative AI models (e.g., GPT, BERT, or custom models) and machine learning frameworks like PyTorch, TensorFlow, or Hugging Face.
- Hands-on experience working with US healthcare data, including knowledge of healthcare standards (HL7, FHIR) and regulations (HIPAA).
- Understanding of Revenue Cycle Management (RCM) processes and terminology in healthcare is highly preferred.
- Experience with big data technologies (Spark, Delta Lake) and building data pipelines on Databricks.
- Proficiency in SQL and data querying for healthcare datasets.
- Familiarity with cloud infrastructure, containerization (Docker, Kubernetes), and CI/CD pipelines for AI solutions.
- Strong analytical, problem-solving, and communication skills to work effectively with technical and non-technical stakeholders.

Preferred Qualifications
- Master's degree or higher in a relevant field.
- Certification in Azure AI, Databricks, or relevant cloud and AI technologies.
- Experience with data visualization tools like Power BI, Tableau, or equivalent.
- Prior exposure to healthcare payment integrity, claims processing, or Coordination of Benefits (COB).
- Experience working in agile teams and familiarity with software development lifecycle (SDLC) best practices.
- Strong ability to innovate and implement strategic AI-driven business solutions in healthcare.
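The sketch referenced in the NLP bullet above: a toy Hugging Face pipeline summarizing a claim note. The model choice and the note are illustrative; production work would run inside the governed Azure/Databricks environment on de-identified data.

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

claim_note = (
    "Claim 12345 was denied because the prior-authorization number was "
    "missing; the provider resubmitted with the corrected field and the "
    "payer approved payment on appeal."
)

print(summarizer(claim_note, max_length=30, min_length=5)[0]["summary_text"])
```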
Posted 6 days ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description

You are a strategic thinker passionate about driving solutions in business architecture and data management. You have found the right team.

As a Banking Book Product Owner Analyst in our Firmwide Finance Business Architecture (FFBA) team, you will spend each day defining, refining, and delivering set goals for our firm. You will partner with stakeholders across various lines of business and subject matter experts to understand products, data, source system flows, and business requirements related to Finance and Risk applications and infrastructure. As a Product Owner on the Business Architecture team, you will work closely with Line of Business stakeholders, data Subject Matter Experts, Consumers, and technology teams across Finance, Credit Risk & Treasury, and various Program Management teams. Your primary responsibilities will include prioritizing the traditional credit product book of work, developing roadmaps, and delivering on multiple projects and programs during monthly releases. Your expertise in data analysis and knowledge will be instrumental in identifying trends, optimizing processes, and driving business growth. As our organization grows, so does our reliance on insightful, data-driven decisions. You will dissect complex datasets to unearth actionable insights while possessing a strong understanding of data governance, data quality, and data management principles.

Job Responsibilities
- Utilize the Agile framework to write business requirements in the form of user stories to enhance data, test execution, reporting automation, and digital analytics toolsets.
- Engage with development teams to translate business needs into technical specifications, ensuring acceptance criteria are met.
- Drive adherence to product and Release Management standards and operating models.
- Manage the release plan, including scope, milestones, sourcing requirements, test strategy, execution, and stakeholder activities.
- Collaborate with lines of business to understand products, data capture methods, and strategic data sourcing into a cloud-based big data architecture.
- Identify and implement solutions for business process improvements, creating supporting documentation and enhancing the end-user experience.
- Collaborate with Implementation leads, Release managers, Project managers, and data SMEs to align data and system flows with Finance and Risk applications.
- Oversee the entire Software Development Life Cycle (SDLC) from requirements gathering to testing and deployment, ensuring seamless integration and execution.

Required Qualifications, Capabilities, And Skills
- Bachelor's degree with 3+ years of experience in Project Management or Product Ownership, with a focus on process re-engineering.
- Proven experience as a Product Owner with a strong understanding of agile principles and delivering complex programs.
- Strong analytical and problem-solving abilities, with the capacity to quickly assimilate business and technical knowledge.
- Experience in Finance, Risk, or Operations as a Product Lead.
- Familiarity with Traditional Credit Products and Liquidity and Credit reporting data.
- Highly responsible, detail-oriented, and able to work with tight deadlines.
- Excellent written and verbal communication skills, with the ability to articulate complex concepts to diverse audiences.
- Strong organizational abilities to manage multiple work streams concurrently, maintaining sound judgment and a risk mindset.
- Solid understanding of financial and regulatory reporting processes.
- Energetic, adaptable, self-motivated, and effective under pressure.
- Basic knowledge of cloud technologies (e.g., AWS).

Preferred Qualifications, Capabilities, And Skills
- Knowledge of JIRA, SQL, the Microsoft suite of applications, Databricks, and data visualization/analytical tools (Tableau, Alteryx, Python) is a plus.
- Knowledge and experience of Traditional Credit Products (Loans, Deposits, Cash, etc.) and Trading Products (Derivatives and Securities) is a plus.

ABOUT US

JPMorganChase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world's most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management. We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants' and employees' religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation.

About The Team

Our professionals in our Corporate Functions cover a diverse range of areas from finance and risk to human resources and marketing. Our corporate teams are an essential part of our company, ensuring that we're setting our businesses, clients, customers and employees up for success. Global Finance & Business Management works to strategically manage capital, drive growth and efficiencies, maintain financial reporting and proactively manage risk. By providing information, analysis and recommendations to improve results and drive decisions, teams ensure the company can navigate all types of market conditions while protecting our fortress balance sheet.
Posted 6 days ago
3.0 - 5.0 years
5 - 8 Lacs
Bengaluru
Remote
As a Senior Azure Data Engineer, your responsibilities will include:
- Building scalable data pipelines using Databricks and PySpark
- Transforming raw data into usable business insights
- Integrating Azure services like Blob Storage, Data Lake, and Synapse Analytics
- Deploying and maintaining machine learning models using MLlib or TensorFlow
- Executing large-scale Spark jobs with performance tuning on Spark pools
- Leveraging Databricks Notebooks and managing workflows with MLflow

Qualifications:
- Bachelor's/Master's in Computer Science, Data Science, or equivalent
- 7+ years in Data Engineering, with 3+ years in Azure Databricks
- Strong hands-on experience in PySpark, Spark SQL, RDDs, Pandas, NumPy, and Delta Lake
- Azure ecosystem: Data Lake, Blob Storage, Synapse Analytics
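A minimal MLflow tracking sketch matching the "managing workflows with MLflow" bullet above; the run name, parameter, and metric values are made up.

```python
import mlflow

with mlflow.start_run(run_name="churn-model-v1"):
    mlflow.log_param("max_depth", 8)    # hyperparameter under test
    mlflow.log_metric("auc", 0.91)      # evaluation result
    # mlflow.sklearn.log_model(model, "model")  # persist the trained model
```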
Posted 6 days ago
3.0 - 5.0 years
4 - 6 Lacs
Chennai, Bengaluru
Work from Office
Job Overview: We are seeking a highly skilled Technical Data Analyst for a remote contract position (6 to 12 months) to help build a single source of truth for our high-volume direct-to-consumer accounting and financial data warehouse. You will work closely with Finance & Accounting teams and play a pivotal role in dashboard creation, data transformation, and migration from Snowflake to Databricks.
Key Responsibilities:
1. Data Analysis & Reporting
- Develop month-end accounting and tax dashboards using SQL in Snowflake (Snowsight)
- Migrate and transition reports/dashboards to Databricks
- Gather, analyze, and transform business requirements from finance/accounting stakeholders into data products
2. Data Transformation & Aggregation
- Build transformation pipelines in Databricks to support balance sheet look-forward views
- Maintain data accuracy and consistency throughout the Snowflake-to-Databricks migration
- Partner with Data Engineering to optimize pipeline performance
3. ERP & Data Integration
- Support integration of financial data with NetSuite ERP
- Validate transformed data to ensure correct ingestion and mapping into ERP systems
4. Ingestion & Data Ops
- Work with Fivetran for ingestion and resolve any pipeline or data accuracy issues
- Monitor data workflows and collaborate with engineering teams on troubleshooting
Required Skills & Qualifications:
- 5+ years of experience as a Data Analyst (preferably in the Finance/Accounting domain)
- Strong in SQL, with proven experience in Snowflake and Databricks
- Experience in building financial dashboards (month-end close, tax reporting, balance sheets)
- Understanding of financial/accounting data: GL, journal entries, balance sheet, income statements
- Familiarity with Fivetran or similar data ingestion tools
- Experience with data transformation in a cloud environment
- Strong communication and stakeholder management skills
Nice to have: Experience working with NetSuite ERP
Location: Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad
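To make the balance-sheet look-forward transformation work concrete, here is a hedged sketch of a month-end rollup as it might run on Databricks after the migration from Snowflake. The schema (finance.journal_entries, its columns, and the debit/credit convention) is an assumption for illustration only.

# Hypothetical month-end balance rollup on Databricks (Spark SQL).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("month-end-close").getOrCreate()

month_end = spark.sql("""
    SELECT account_id,
           date_trunc('month', posting_date) AS close_month,
           SUM(CASE WHEN side = 'D' THEN amount ELSE -amount END) AS net_movement
    FROM finance.journal_entries   -- assumed table and columns
    GROUP BY account_id, date_trunc('month', posting_date)
""")

# Persist for dashboard consumption (e.g., month-end close views).
month_end.write.mode("overwrite").saveAsTable("finance.month_end_balances")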
Posted 6 days ago
7.0 - 12.0 years
20 - 35 Lacs
Hyderabad, Chennai
Work from Office
AWS Data Engineer
Employment Type: Full Time
Work Mode: Work From Office
Job Location: Chennai/Hyderabad
Walk-in Date: 26-July-2025, 11:00 am to 3:00 pm
Years of experience: 7 to 12 years
Notice: Immediate to 90 days
Venue: Agilisium Consulting, World Trade Center, Perungudi, Chennai.
Skillset: Python, PySpark, SQL, AWS, Databricks, Data Modelling; Airflow is good to have.
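Since Airflow is flagged as good to have, a minimal daily DAG sketch follows, assuming Airflow 2.x; the DAG id, schedule, and the ETL callable are placeholders rather than anything from this posting.

# Minimal Airflow 2.x DAG sketch; all names here are hypothetical.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def run_etl():
    # Placeholder for submitting a PySpark/Databricks job.
    print("extract, transform, load")

with DAG(
    dag_id="daily_sales_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",   # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    PythonOperator(task_id="run_etl", python_callable=run_etl)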
Posted 6 days ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
- Fundamentals of DevOps, DevSecOps, and CI/CD pipelines using ADO
- Good understanding of MPP architecture and of MySQL, RDS, MS SQL, Oracle, and Postgres databases
- Will need to interact with software integrators on a day-to-day basis
- Deployment and testing skills
- Strong communication skills
- ELT: Trino, Azure Data Factory, Azure Databricks, PySpark, Python, Iceberg, Parquet
- CDC tools such as Qlik, GoldenGate, Debezium, or IBM CDC; Kafka/Solace
- Scripting: Shell, Python, Java
- Good understanding of Azure cloud engineering: ADLS, Iceberg, Databricks, AKS, RHEL
- Good understanding of MS Project
- Development skills using Trino, PySpark, and Databricks
- 2 or more years of experience in software or application development and implementation
- Experience with data integration concepts
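To ground the CDC-plus-streaming stack named above (a CDC tool publishing to Kafka, consumed with PySpark), here is a hedged sketch of reading change events from a Kafka topic with Spark Structured Streaming and landing them as Parquet. The broker address, topic name, and paths are assumptions.

# Hypothetical CDC consumption: Kafka topic -> Parquet via Structured Streaming.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("cdc-ingest").getOrCreate()

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")  # assumed broker
          .option("subscribe", "orders.cdc")                 # assumed topic
          .load())

# Kafka delivers key/value as binary; decode before writing.
decoded = events.select(
    F.col("key").cast("string").alias("key"),
    F.col("value").cast("string").alias("payload"),
    "timestamp",
)

query = (decoded.writeStream
         .format("parquet")
         .option("path", "/data/lake/orders_cdc/")
         .option("checkpointLocation", "/data/checkpoints/orders_cdc/")
         .start())
query.awaitTermination()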
Posted 6 days ago
4.0 - 6.0 years
0 Lacs
Mulshi, Maharashtra, India
On-site
Area(s) of responsibility
Job Description: Data Scientist (4C)
We seek a candidate with hands-on, real-world project experience, sound fundamentals rather than surface-level application of techniques, and the ability to learn and adapt in the ever-evolving AI field. Practical experience in GenAI and large language models (LLMs) is highly desired.
Experience: 4-6 years
Education: STEM field; B.Tech/B.E./Master's in Computer Engineering or related fields
Work/Problem Space
- Work will span domains, ranging from manufacturing, life sciences, and finance to retail.
- Problems could span business functions, ranging from Operations, Sales, Finance, and IT to HR.
Core Skills
- Application of Machine Learning and Data Science to problems
- Statistical ML techniques, deep learning, computer vision, and NLP
- Practical experience in GenAI and large language models (LLMs)
- Testing of models and model performance analysis (evidence-based and statistical techniques)
- Assist in deploying solutions (API creation, Docker containerization) and in post-live monitoring
Technology Space
- Open source: Python, with coding standards based on PEP 8 or the Google Python style guide
- Platforms: Azure, Databricks (preferred)
- Database systems: working/integration knowledge of basic SQL-based RDBMS; NoSQL document-based databases (Elasticsearch/MongoDB) optional/preferred
- Deployment strategies: APIs and Docker containers
Additional Soft Skills Preferred
- Work independently and/or in small teams in an Agile working environment
- Collaborate with architects, data engineers, and deployment teams
- Interface and interact with business clients, internal corporate groups, and function leads
- Generally good learning, leadership, communication, and teamworking skills
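On the deployment skills this posting names (API creation, Docker containerization), here is a minimal FastAPI serving sketch under the assumption that a pre-trained model has been saved as model.joblib; the endpoint and feature schema are invented for illustration.

# Minimal model-serving API sketch; model artifact and schema are hypothetical.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # assumed pre-trained artifact

class Features(BaseModel):
    values: list[float]

@app.post("/predict")
def predict(features: Features):
    # scikit-learn style predict over a single row of features.
    prediction = model.predict([features.values])
    return {"prediction": prediction.tolist()}

A container for this would only need a small Dockerfile that runs uvicorn against the app.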
Posted 6 days ago
5.0 - 8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Role: Databricks Engineer
Location: Pune (WFO)
Experience: 5 to 8 years
Notice: Immediate
Skillset:
- Databricks platform expertise
- CI/CD and data pipelines
- Good SQL, Python, and PySpark knowledge
- Unit test case implementation experience
- MongoDB, including aggregation pipelines
- A minimum of 5 years of experience
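Given the unit-test requirement, here is a hedged sketch of how a PySpark transformation can be unit tested with pytest on a local SparkSession; the transformation under test is invented for the example.

# Hypothetical pytest for a PySpark transformation on a local SparkSession.
import pytest
from pyspark.sql import functions as F
from pyspark.sql import SparkSession

def dedupe_orders(df):
    # Transformation under test (illustrative only).
    return df.filter(F.col("order_id").isNotNull()).dropDuplicates(["order_id"])

@pytest.fixture(scope="module")
def spark():
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()

def test_dedupe_orders(spark):
    df = spark.createDataFrame(
        [(1, "a"), (1, "a"), (None, "b")], ["order_id", "item"]
    )
    assert dedupe_orders(df).count() == 1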
Posted 6 days ago
0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Job Description
Are You Ready to Make It Happen at Mondelēz International? Join our Mission to Lead the Future of Snacking. Make It With Pride.
You will provide technical contributions to the data science process. In this role, you are the internally recognized expert in data, building infrastructure and data pipelines/retrieval mechanisms to support our data needs.
How You Will Contribute
You will:
- Operationalize and automate activities for efficiency and timely production of data visuals
- Assist in providing accessibility, retrievability, security and protection of data in an ethical manner
- Search for ways to get new data sources and assess their accuracy
- Build and maintain the transports/data pipelines and retrieve applicable data sets for specific use cases
- Understand data and metadata to support consistency of information retrieval, combination, analysis, pattern recognition and interpretation
- Validate information from multiple sources
- Assess issues that might prevent the organization from making maximum use of its information assets
What You Will Bring
A desire to drive your future and accelerate your career, and the following experience and knowledge:
- Extensive experience in data engineering in a large, complex business with multiple systems such as SAP, internal and external data, etc., and experience setting up, testing and maintaining new systems
- Experience with a wide variety of languages and tools (e.g. script languages) to retrieve, merge and combine data
- Ability to simplify complex problems and communicate to a broad audience
In This Role
As a Senior Data Engineer, you will have the opportunity to design and build scalable, secure, and cost-effective cloud-based data solutions. You will develop and maintain data pipelines to extract, transform, and load data into data warehouses or data lakes, ensuring data quality and validation processes to maintain data accuracy and integrity. You will ensure efficient data storage and retrieval for optimal performance, and collaborate closely with data teams, product owners, and other stakeholders to stay updated with the latest cloud technologies and best practices.
Role & Responsibilities:
- Design and Build: Develop and implement scalable, secure, and cost-effective cloud-based data solutions.
- Manage Data Pipelines: Develop and maintain data pipelines to extract, transform, and load data into data warehouses or data lakes.
- Ensure Data Quality: Implement data quality and validation processes to ensure data accuracy and integrity.
- Optimize Data Storage: Ensure efficient data storage and retrieval for optimal performance.
- Collaborate and Innovate: Work closely with data teams and product owners, and stay updated with the latest cloud technologies and best practices.
Technical Requirements:
- Programming: Python, PySpark, Go/Java
- Database: SQL, PL/SQL
- ETL & Integration: DBT, Databricks + DLT, AecorSoft, Talend, Informatica/Pentaho/Ab Initio, Fivetran
- Data Warehousing: SCD, schema types, data marts
- Visualization: Databricks Notebooks, Power BI (optional), Tableau (optional), Looker
- GCP Cloud Services: BigQuery, GCS, Cloud Functions, Pub/Sub, Dataflow, Dataproc, Dataplex
- AWS Cloud Services: S3, Redshift, Lambda, Glue, CloudWatch, EMR, SNS, Kinesis
- Azure Cloud Services: Azure Data Lake Gen2, Azure Databricks, Azure Synapse Analytics, Azure Data Factory, Azure Stream Analytics
- Supporting Technologies: Graph databases/Neo4j, Erwin, Collibra, Ataccama DQ, Kafka, Airflow
Soft Skills:
- Problem-Solving: The ability to identify and solve complex data-related challenges.
- Communication: Effective communication skills to collaborate with Product Owners, analysts, and stakeholders.
- Analytical Thinking: The capacity to analyze data and draw meaningful insights.
- Attention to Detail: Meticulousness in data preparation and pipeline development.
- Adaptability: The ability to stay updated with emerging technologies and trends in the data engineering field.
Within-country relocation support is available, and for candidates voluntarily moving internationally, some minimal support is offered through our Volunteer International Transfer Policy.
Business Unit Summary
At Mondelēz International, our purpose is to empower people to snack right by offering the right snack, for the right moment, made the right way. That means delivering a broad range of delicious, high-quality snacks that nourish life's moments, made with sustainable ingredients and packaging that consumers can feel good about. We have a rich portfolio of strong brands globally and locally, including many household names such as Oreo, belVita and LU biscuits; Cadbury Dairy Milk, Milka and Toblerone chocolate; Sour Patch Kids candy and Trident gum. We are proud to hold the top position globally in biscuits, chocolate and candy and the second top position in gum. Our 80,000 makers and bakers are located in more than 80 countries and we sell our products in over 150 countries around the world. Our people are energized for growth and critical to us living our purpose and values. We are a diverse community that can make things happen—and happen fast.
Mondelēz International is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, gender, sexual orientation or preference, gender identity, national origin, disability status, protected veteran status, or any other characteristic protected by law.
Job Type: Regular
Data Science, Analytics & Data Science
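For the SCD item in the technical requirements, here is a hedged sketch of a Delta Lake MERGE upsert (Type 1 overwrite; a Type 2 history would add effective-date and current-flag columns). Table names, the key, and the staging path are assumptions.

# Hypothetical Delta MERGE upsert (SCD Type 1 style) on Databricks.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("scd-merge").getOrCreate()

updates = spark.read.format("delta").load("/lake/staging/customers/")  # assumed
target = DeltaTable.forName(spark, "dw.dim_customer")                  # assumed

(target.alias("t")
 .merge(updates.alias("s"), "t.customer_id = s.customer_id")
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())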
Posted 6 days ago
2.0 years
5 - 7 Lacs
Gurgaon
On-site
About Gartner IT: Join a world-class team of skilled engineers who build creative digital solutions to support our colleagues and clients. We make a broad organizational impact by delivering cutting-edge technology solutions that power Gartner. Gartner IT values its culture of nonstop innovation, an outcome-driven approach to success, and the notion that great ideas can come from anyone on the team.
About the role: Gartner is seeking an Advanced Data Engineer specializing in data modeling and reporting with Azure Analysis Services and Power BI. As a key member of the team, you will contribute to the development and support of Gartner’s Enterprise Data Warehouse and a variety of data products. This role involves integrating data from both internal and external sources using diverse ingestion APIs. You will have the opportunity to work with a broad range of data technologies, focusing on building and optimizing data pipelines, as well as supporting, maintaining, and enhancing existing business intelligence solutions.
What you will do:
- Develop, manage, and optimize enterprise data models within Azure Analysis Services, including configuration, scaling, and security management
- Design and build tabular data models in Azure Analysis Services for seamless integration with Power BI
- Write efficient SQL queries and DAX (Data Analysis Expressions) to support robust data models, reports, and dashboards
- Tune and optimize data models and queries for maximum performance and efficient data retrieval
- Design, build, and automate data pipelines and applications to support data scientists and business users with their reporting and analytics needs
- Collaborate with a team of Data Engineers to support and enhance the Azure Synapse Enterprise Data Warehouse environment
What you will need:
- 2–4 years of hands-on experience developing enterprise data models in Azure Analysis Services
- Strong expertise in designing and developing tabular models using Power BI and SQL Server Data Tools (SSDT)
- Advanced proficiency in DAX for data analysis and SQL for data manipulation and querying
- Proven experience creating interactive Power BI dashboards and reports for business analytics
- Deep understanding of relational database systems and advanced SQL skills
- Experience with T-SQL, ETL processes, and Azure Data Factory is highly desirable
- Solid understanding of cloud computing concepts and experience with Azure services such as Azure Data Factory, Azure Blob Storage, and Azure Active Directory
Nice to have:
- Experience with version control systems (e.g., Git, Subversion)
- Familiarity with programming languages such as Python or Java
- Knowledge of various database technologies (NoSQL, document, and graph databases, etc.)
- Experience with data intelligence platforms like Databricks
Who you are:
- Effective time management skills and the ability to meet deadlines
- Excellent communication skills for interacting with technical and business audiences
- Excellent organization, multitasking, and prioritization skills
- Willingness and aptitude to embrace new technologies and ideas and to master concepts rapidly
- Intellectual curiosity, passion for technology, and keeping up with new trends
- A record of delivering project work on time, within budget, and with high quality
Don’t meet every single requirement? We encourage you to apply anyway. You might just be the right candidate for this, or other roles.
Who are we?
At Gartner, Inc. (NYSE:IT), we guide the leaders who shape the world.
Our mission relies on expert analysis and bold ideas to deliver actionable, objective insight, helping enterprise leaders and their teams succeed with their mission-critical priorities. Since our founding in 1979, we’ve grown to more than 21,000 associates globally who support ~14,000 client enterprises in ~90 countries and territories. We do important, interesting and substantive work that matters. That’s why we hire associates with the intellectual curiosity, energy and drive to want to make a difference. The bar is unapologetically high. So is the impact you can have here.
What makes Gartner a great place to work?
Our sustained success creates limitless opportunities for you to grow professionally and flourish personally. We have a vast, virtually untapped market potential ahead of us, providing you with an exciting trajectory long into the future. How far you go is driven by your passion and performance. We hire remarkable people who collaborate and win as a team. Together, our singular, unifying goal is to deliver results for our clients. Our teams are inclusive and composed of individuals from different geographies, cultures, religions, ethnicities, races, genders, sexual orientations, abilities and generations. We invest in great leaders who bring out the best in you and the company, enabling us to multiply our impact and results. This is why, year after year, we are recognized worldwide as a great place to work.
What do we offer?
Gartner offers world-class benefits, highly competitive compensation and disproportionate rewards for top performers. In our hybrid work environment, we provide the flexibility and support for you to thrive — working virtually when it's productive to do so and getting together with colleagues in a vibrant community that is purposeful, engaging and inspiring. Ready to grow your career with Gartner? Join us.
The policy of Gartner is to provide equal employment opportunities to all applicants and employees without regard to race, color, creed, religion, sex, sexual orientation, gender identity, marital status, citizenship status, age, national origin, ancestry, disability, veteran status, or any other legally protected status and to seek to advance the principles of equal employment opportunity. Gartner is committed to being an Equal Opportunity Employer and offers opportunities to all job seekers, including job seekers with disabilities. If you are a qualified individual with a disability or a disabled veteran, you may request a reasonable accommodation if you are unable or limited in your ability to use or access the Company’s career webpage as a result of your disability. You may request reasonable accommodations by calling Human Resources at +1 (203) 964-0096 or by sending an email to ApplicantAccommodations@gartner.com.
Job Requisition ID: 101546
By submitting your information and application, you confirm that you have read and agree to the country or regional recruitment notice linked below applicable to your place of residence. Gartner Applicant Privacy Link: https://jobs.gartner.com/applicant-privacy-policy
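One concrete flavor of the Azure Analysis Services automation this role involves is triggering an asynchronous model refresh through the AAS REST API. The sketch below assumes a valid Azure AD bearer token (for example, obtained via MSAL) plus placeholder region, server, and model names; treat the request-body fields as indicative rather than exhaustive.

# Hypothetical async refresh of an Azure Analysis Services tabular model.
import requests

REGION = "westus"              # assumption
SERVER = "myaasserver"         # assumption
MODEL = "SalesModel"           # assumption
TOKEN = "<AAD bearer token>"   # obtained separately, e.g. via MSAL

url = f"https://{REGION}.asazure.windows.net/servers/{SERVER}/models/{MODEL}/refreshes"
body = {"Type": "Full", "CommitMode": "transactional", "MaxParallelism": 2}

resp = requests.post(url, json=body, headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()
print("Refresh accepted; poll:", resp.headers.get("Location"))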
Posted 6 days ago
4.0 years
0 Lacs
Gurgaon
On-site
Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.
Title and Summary
Data Scientist II – Data & Analytics
Our Purpose
We work to connect and power an inclusive, digital economy that benefits everyone, everywhere by making transactions safe, simple, smart and accessible. Using secure data and networks, partnerships and passion, our innovations and solutions help individuals, financial institutions, governments and businesses realize their greatest potential. Our decency quotient, or DQ, drives our culture and everything we do inside and outside of our company. We cultivate a culture of inclusion for all employees that respects their individual strengths, views, and experiences. We believe that our differences enable us to be a better team – one that makes better decisions, drives innovation and delivers better business results.
Who is Mastercard?
Mastercard is a global technology company in the payments industry. Our mission is to connect and power an inclusive, digital economy that benefits everyone, everywhere by making transactions safe, simple, smart, and accessible. Using secure data and networks, partnerships, and passion, our innovations and solutions help individuals, financial institutions, governments, and businesses realize their greatest potential. Our decency quotient, or DQ, drives our culture and everything we do inside and outside of our company. With connections across more than 210 countries and territories, we are building a sustainable world that unlocks priceless possibilities for all.
Our Team
We are an Artificial Intelligence Centre of Excellence working on initiatives in Core and Commercial Payments. Our focus is to create value and improvements through digital intervention inspired by state-of-the-art AI and Machine Learning. As part of the team, you will play a key role in building new AI/ML models, monitoring long-term performance, and innovating with research, all while creating significant business impact. Are you excited about coding and the solutions we can build with it? Do you love doing hands-on work with opportunities to learn new tech? Do you believe that AI has huge potential to improve business processes? Do you enjoy Mathematics and Statistics recreationally? If yes, then this role is for you!
The Role
The candidate will be working on numerous AI and ML initiatives, spanning different use cases and stages of delivery. You will be expected to work and code hands-on, keeping up to date with the latest best practices and advances in the field of AI. You will be required to work closely in collaboration with multiple internal business groups across Mastercard. You are also responsible for creating design documents, including data models, data flow diagrams, and system architecture diagrams.
All About You
- Major in Computer Science, Data Science, Analytics, Mathematics, Statistics, or a related engineering field, or equivalent work experience
- 4+ years of experience using Python, with knowledge of client-server architecture
- 2+ years of experience building, deploying and maintaining ML models
- 1+ years of experience working on GenAI projects, including knowledge of modern frameworks like LangChain, LangGraph, and the OpenAI Chat Completions API
- Demonstrated success interacting with stakeholders to understand technical needs and ensuring analyses and solutions meet their needs effectively
- Able to work in a fast-paced, deadline-driven environment as part of a team and as an individual contributor
- Ability to move easily between business, analytical, and technical teams and articulate solution requirements for each group
- Experience with an enterprise business intelligence/data platform (Tableau, Power BI, Streamlit) is a plus
- Experience with cloud-based (SaaS) solutions, ETL processes, or API integrations is a plus
- Experience with cloud data platforms (Azure/AWS/Databricks) is a plus
Additional Competencies
- Excellent English, quantitative, technical, and communication (oral/written) skills
- Analytical/problem-solving skills
- Strong attention to detail and quality
- Creativity/innovation
- Self-motivated; operates with a sense of urgency
- Project management/risk mitigation
- Able to prioritize and perform multiple tasks simultaneously
Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization and, therefore, it is expected that every person working for, or on behalf of, Mastercard is responsible for information security and must:
- Abide by Mastercard’s security policies and practices;
- Ensure the confidentiality and integrity of the information being accessed;
- Report any suspected information security violation or breach; and
- Complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines.
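As a small illustration of the GenAI stack this posting names, here is a hedged sketch of a single OpenAI Chat Completions call; the model name and prompt are placeholders, and the API key is assumed to be set in the environment.

# Minimal OpenAI Chat Completions call; model and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; substitute what your account provides
    messages=[
        {"role": "system", "content": "You summarize transaction disputes."},
        {"role": "user", "content": "Summarize: duplicate charge reported on 2024-03-01."},
    ],
)
print(response.choices[0].message.content)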
Posted 6 days ago
5.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Your work days are brighter here.
At Workday, it all began with a conversation over breakfast. When our founders met at a sunny California diner, they came up with an idea to revolutionize the enterprise software market. And when we began to rise, one thing that really set us apart was our culture. A culture which was driven by our value of putting our people first. And ever since, the happiness, development, and contribution of every Workmate is central to who we are. Our Workmates believe a healthy employee-centric, collaborative culture is the essential mix of ingredients for success in business. That’s why we look after our people, communities and the planet while still being profitable. Feel encouraged to shine, however that manifests: you don’t need to hide who you are. You can feel the energy and the passion, it's what makes us unique. Inspired to make a brighter work day for all and transform with us to the next stage of our growth journey? Bring your brightest version of you and have a brighter work day here.
At Workday, we value our candidates’ privacy and data security. Workday will never ask candidates to apply to jobs through websites that are not Workday Careers. Please be aware of sites that may ask for you to input your data in connection with a job posting that appears to be from Workday but is not. In addition, Workday will never ask candidates to pay a recruiting fee, or pay for consulting or coaching services, in order to apply for a job at Workday.
About The Team
The Enterprise Data & AI Technologies and Architecture (EDATA) organization is a dynamic and evolving team that is spearheading Workday’s growth through trusted data excellence, innovation, and architectural thought leadership. Equipped with an array of skills in data science, engineering, and analytics, this team orchestrates the flow of data across our growing company while ensuring data accessibility, accuracy, and security. With a relentless focus on innovation and efficiency, Workmates in EDATA enable the transformation of complex data sets into actionable insights that fuel strategic decisions and position Workday at the forefront of the technology industry. EDATA is a global team distributed across the U.S., India and Canada.
About The Role
Join a pioneering organization at the forefront of technological advancement, dedicated to leveraging data-driven insights to transform industries and drive innovation.
We are seeking a highly skilled and motivated Data Quality Engineer to join our dynamic team. The ideal candidate is someone who loves to learn, is detail-oriented, and has exceptional critical thinking and analytical skills. As a Data Quality Engineer, you will play a critical role in ensuring the accuracy, consistency, and completeness of our data across the enterprise data platform. You will be responsible for designing, developing, and implementing data quality processes, standards, and best practices across various data sources and systems to identify and resolve data issues. This role offers an exciting opportunity to learn and to collaborate with cross-functional teams, including data engineers, data scientists, and business analysts, to drive data quality improvements and enhance decision-making capabilities.
Responsibilities
The incumbent will be responsible for (but not limited to) the following:
- Design and automate data quality checks; resolve issues and improve data pipelines with engineering and product teams.
- Collaborate with stakeholders to define data quality requirements and best practices.
- Develop test automation strategies and integrate checks into CI/CD pipelines.
- Monitor data quality metrics, identify root causes, and drive continuous improvements.
- Provide guidance on data quality standards across projects.
- Work with Data Ops to address production issues and document quality processes.
About You
Basic Qualifications
- 5+ years of experience as a Data Quality Engineer or in data quality management/data governance.
- Good understanding of data management concepts, including data profiling, data cleansing, and data integration.
- Proficiency in SQL for data querying and manipulation.
- Ability to develop and execute automated data quality tests using tools like SQL, Python (PySpark), and data quality frameworks.
- Hands-on experience with cloud platforms (AWS/GCP), data warehouses (Snowflake, Databricks, Redshift), and integration tools (SnapLogic, dbt, Talend, etc.).
- Exposure to data quality monitoring tools (e.g., Acceldata, Tricentis) and CI/CD or DevOps practices is a plus.
Other Qualifications
- Proven ability to prioritize and manage multiple tasks in a fast-paced environment.
- Certification in relevant technologies or data management disciplines is a plus.
- Analytical mindset with the ability to think strategically and make data-driven decisions.
If you are a results-driven individual with a passion for data and analytics and a proven track record in data quality assurance, we invite you to apply for this exciting opportunity. Join our team and contribute to the success of our data-driven initiatives.
Our Approach to Flexible Work
With Flex Work, we’re combining the best of both worlds: in-person time and remote. Our approach enables our teams to deepen connections, maintain a strong community, and do their best work. We know that flexibility can take shape in many ways, so rather than a number of required days in-office each week, we simply spend at least half (50%) of our time each quarter in the office or in the field with our customers, prospects, and partners (depending on role). This means you'll have the freedom to create a flexible schedule that caters to your business, team, and personal needs, while being intentional to make the most of time spent together. Those in our remote "home office" roles also have the opportunity to come together in our offices for important moments that matter.
Are you being referred to one of our roles? If so, ask your connection at Workday about our Employee Referral process!
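To make the data-quality automation described above concrete, here is a hedged sketch of a small check suite in PySpark that could run inside a CI/CD pipeline; the table, columns, and freshness SLA are assumptions.

# Hypothetical automated data-quality checks for a curated table.
from datetime import datetime, timedelta
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.table("curated.orders")  # assumed table

failures = []

# Completeness: the key column must never be null.
null_keys = df.filter(F.col("order_id").isNull()).count()
if null_keys:
    failures.append(f"{null_keys} null order_id values")

# Uniqueness: one row per order.
dupes = df.count() - df.dropDuplicates(["order_id"]).count()
if dupes:
    failures.append(f"{dupes} duplicate order_id rows")

# Freshness: newest record within an assumed two-day SLA.
max_ts = df.agg(F.max("updated_at")).first()[0]
if max_ts is None or max_ts < datetime.utcnow() - timedelta(days=2):
    failures.append(f"stale data: last update {max_ts}")

if failures:
    raise RuntimeError("Data quality failures: " + "; ".join(failures))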
Posted 6 days ago