3.0 - 6.0 years
0 - 0 Lacs
Ahmedabad
Work from Office
Position Summary: The Data Engineer is responsible for maintaining and expanding data workflows, exporting clinical data, creating complex reports, and designing, uploading and maintaining report objects in the Company's online report platform.

Role & responsibilities:
- Receives business and technical requirements, provides subject-matter expertise, and analyzes and implements data engineering techniques
- Writes and optimizes complex SQL scripts
- Develops ETL procedures from sources in various data formats to MariaDB/MySQL databases (a sketch follows below)
- Manages and maintains databases
- Works closely with Data Managers on advanced analytics
- Develops, sets up and maintains report objects and dashboards in the online BI platform
- Surveys and recommends site scalability scenarios and upgrades
- Designs and develops OLAP cubes and reports
- Prepares documentation and specifications
- Collaborates with other team members
- Complies with the Company's Quality and Information Security Management Systems and applicable national and international legislation, including data protection legislation

Education Requirements:
- Required: BSc in Informatics or Computer Science and Engineering
- Desired: MSc in Data Science

Professional Experience requirements:
Required:
- 3 years of working experience in similar disciplines
- Excellent knowledge of SQL and relational databases (MariaDB)
- Experience with at least one data processing scripting language: Python, Java, R, Scala, etc.
Desired:
- Experience with an active online BI platform (e.g., Jasper, Tableau, Power BI)
- Experience with NoSQL databases (e.g., MongoDB)
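For illustration only: a minimal Python ETL sketch in the spirit of the responsibilities above, loading a CSV export into a MariaDB/MySQL table with pandas and SQLAlchemy. The file, table, and connection names are hypothetical placeholders, not details from the posting.

```python
import pandas as pd
from sqlalchemy import create_engine

# Placeholder connection string for a MariaDB/MySQL reporting database.
engine = create_engine("mysql+pymysql://etl_user:secret@db-host:3306/reporting")

# Extract: read a hypothetical CSV export of clinical visit data.
df = pd.read_csv("clinical_visits.csv", parse_dates=["visit_date"])

# Transform: normalize column names and drop rows missing key fields.
df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
df = df.dropna(subset=["patient_id", "visit_date"])

# Load: append into the reporting table in manageable chunks.
df.to_sql("visits", engine, if_exists="append", index=False, chunksize=1000)
```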
Posted 2 weeks ago
2.0 - 5.0 years
16 - 18 Lacs
Coimbatore
Work from Office
Overview: Annalect is currently seeking a Senior Data Engineer to join our Technology team. In this role you will build Annalect products which sit atop cloud-based data infrastructure. We are looking for people who have a shared passion for technology, design & development, data, and fusing these disciplines together to build cool things. You will work on one or more software and data products in the Annalect Engineering Team, participating in technical architecture, design and development of software products as well as research and evaluation of new technical solutions.

Responsibilities
- Design, build, test, and deploy data transfers across various cloud environments (Azure, GCP, AWS, Snowflake, etc.).
- Develop, monitor, maintain, and tune data pipelines.
- Write at-scale data transformations in SQL and Python.
- Perform code reviews and provide leadership and guidance to junior developers.

Qualifications
- Curiosity about the business requirements that drive the engineering requirements.
- Interest in new technologies and eagerness to bring those technologies and out-of-the-box ideas to the team.
- 3+ years of SQL experience.
- 3+ years of professional Python experience.
- 3+ years of professional Linux experience.
- Preferred: familiarity with Snowflake, AWS, GCP, and Azure cloud environments.
- Intellectual curiosity and drive; self-starters will thrive in this position.
- Passion for technology: excitement for new technology, bleeding-edge applications, and a positive attitude towards solving real-world challenges.

Additional Skills
- BS, MS or PhD in Computer Science, Engineering, or equivalent real-world experience.
- Experience with big data and/or infrastructure; bonus for experience organizing petabytes of data so it can be easily accessed.
- Understanding of data organization, i.e., partitioning, clustering, file sizes, file formats (see the sketch below).
- Experience working with classical relational databases (Postgres, MySQL, MSSQL).
- Experience with Hadoop, Hive, Spark, Redshift, or other data processing tools (lots of time will be spent building and optimizing transformations).
- Proven ability to independently execute projects from concept to implementation to launch, and to maintain a live product.

Perks of working at Annalect
- We have an incredibly fun, collaborative, and friendly environment, and often host social and learning activities such as game night, speaker series, and so much more!
- Halloween is a special day on our calendar since it is our Founding Day – we go all out with decorations, costumes, and prizes!
- Generous vacation policy. Paid time off (PTO) includes vacation days, personal days, and a Summer Friday program. Extended time off around the holiday season: our office is closed between Xmas and New Year to encourage our hardworking employees to rest, recharge and celebrate the season with family and friends.
- As part of Omnicom, we have the backing and resources of a global billion-dollar company, but also the flexibility and pace of a "startup" - we move fast, break things, and innovate.
- Work with a modern stack and environment that keeps you learning, improving, and experimenting with the latest technologies.
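As a hedged illustration of the "data organization" bullet above (partitioning, file formats), here is a small PySpark sketch that writes a transformed dataset as date-partitioned Parquet so query engines can prune irrelevant files; the bucket paths and column names are invented for the example.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("partitioned-write-sketch").getOrCreate()

# Placeholder source path; the schema (event_ts, campaign_id) is hypothetical.
events = spark.read.json("s3://example-bucket/raw/events/")

daily = (events
         .withColumn("event_date", F.to_date("event_ts"))
         .groupBy("event_date", "campaign_id")
         .agg(F.count("*").alias("impressions")))

# Partitioning by date keeps files small and lets engines skip irrelevant data.
daily.write.mode("overwrite").partitionBy("event_date") \
     .parquet("s3://example-bucket/curated/daily_impressions/")
```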
Posted 2 weeks ago
4.0 - 9.0 years
8 - 16 Lacs
Kolkata
Remote
Enhance/modify applications, configure existing systems, and provide user support.
Stack: .NET full stack, or Angular (18+) front end with a .NET backend
- SQL Server
- Angular version 15+ (mandatory)
- Angular version 18+ (nice to have)
Posted 2 weeks ago
1.0 - 4.0 years
1 - 3 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Cloud Data Engineer
Job Title: Cloud Data Engineer
Location: Chennai, Hyderabad, Bangalore
Experience: 1-4 years

Job Summary
The Cloud Data Engineer designs and builds scalable data pipelines and architectures in cloud environments. This role supports analytics, machine learning, and business intelligence initiatives by ensuring reliable data flow and transformation.

Key Responsibilities
- Develop and maintain ETL/ELT pipelines using cloud-native tools (see the sketch below).
- Design data models and storage solutions optimized for performance and scalability.
- Integrate data from various sources (APIs, databases, streaming platforms).
- Ensure data quality, consistency, and security across pipelines.
- Collaborate with data scientists, analysts, and business teams.
- Monitor and troubleshoot data workflows and infrastructure.
- Automate data engineering tasks using scripting and orchestration tools.

Required Skills
- Experience with cloud data platforms (AWS Glue, Azure Data Factory, Google Cloud Dataflow).
- Proficiency in SQL and programming languages (Python, Scala, Java).
- Knowledge of big data technologies (Spark, Hadoop, Kafka).
- Familiarity with data warehousing solutions (Redshift, BigQuery, Snowflake).
- Understanding of data governance, privacy, and compliance standards.

Qualifications
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
- 3+ years of experience in data engineering, preferably in cloud environments.
- Certifications in cloud data engineering (e.g., Google Professional Data Engineer, AWS Data Analytics Specialty) are a plus.
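A minimal Apache Beam sketch of the cloud-native ETL pattern named above (Beam is the programming model behind Google Cloud Dataflow, one of the platforms listed). Bucket paths and the parsing logic are hypothetical; running on GCP would additionally require DataflowRunner pipeline options.

```python
import json
import apache_beam as beam

def clean(record: str) -> dict:
    # Hypothetical transform: parse a JSON line and coerce the amount field.
    row = json.loads(record)
    row["amount"] = float(row.get("amount", 0))
    return row

# Runs locally with the DirectRunner by default.
with beam.Pipeline() as p:
    (p
     | "Read" >> beam.io.ReadFromText("gs://example-bucket/raw/orders.jsonl")
     | "Parse" >> beam.Map(clean)
     | "KeepPaid" >> beam.Filter(lambda r: r["amount"] > 0)
     | "Serialize" >> beam.Map(json.dumps)
     | "Write" >> beam.io.WriteToText("gs://example-bucket/clean/orders"))
```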
Posted 2 weeks ago
0.0 - 2.0 years
3 - 7 Lacs
Hyderabad
Work from Office
Responsibilities:
* Design, develop & maintain full-stack applications using Python, Linux & deep learning.
* Collaborate with cross-functional teams on data engineering projects.
Posted 2 weeks ago
2.0 - 7.0 years
5 - 14 Lacs
Bengaluru, Karnataka
Work from Office
Requirements:
- Proven knowledge of coding in Python.
- Advanced knowledge of regression & linear optimization (Python based; relevant libraries: pandas, numpy, scikit-learn, or-tools); a small sketch follows below.
- 2+ years of working experience in data analytics with a proven project/solution track record.
- Skills in data analysis & visualization (Python based; relevant libraries: pandas, numpy, plotly).
- Experience as an Operations Research analyst or similar: has worked with optimization models (e.g., integer and linear programming).

Qualifications:
- Bachelor's or Master's degree in Data Science, Computational Statistics/Mathematics, Computer Science or a related field.
- Self-driven attitude: executes tasks autonomously; a can-do, get-stuff-done attitude is crucial.
- Good communication skills; able to effectively discuss project requirements across hierarchy levels and disciplines (engineering, business, etc.).
- Fluent English.

Special Instructions:
Interview Mode: Virtual
Work Mode: Hybrid (Bangalore)
Duration of Contract: 12 months
Engagement type: open to both C2H (contract-to-hire) and one-time hire
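For illustration, a tiny linear-programming example with OR-Tools, one of the libraries the posting names; the product mix and coefficients are made up.

```python
from ortools.linear_solver import pywraplp

solver = pywraplp.Solver.CreateSolver("GLOP")  # LP solver backend

# Decision variables: units of two hypothetical products.
x = solver.NumVar(0, solver.infinity(), "x")
y = solver.NumVar(0, solver.infinity(), "y")

# Constraints: shared machine hours and material supply (invented numbers).
solver.Add(2 * x + 1 * y <= 100)  # machine hours
solver.Add(1 * x + 3 * y <= 90)   # material units

# Objective: maximize profit contribution.
solver.Maximize(30 * x + 40 * y)

if solver.Solve() == pywraplp.Solver.OPTIMAL:
    print(f"x={x.solution_value():.1f}, y={y.solution_value():.1f}, "
          f"profit={solver.Objective().Value():.0f}")
```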
Posted 2 weeks ago
2.0 - 6.0 years
4 - 8 Lacs
Hyderabad, Ahmedabad, Gurugram
Work from Office
About the Role: Grade Level (for internal use): 09

The Team: As a member of the EDO, Collection Platforms & AI Cognitive Engineering team you will build and maintain enterprise-scale data extraction, automation, and ML model deployment pipelines that power data sourcing and information retrieval solutions for S&P Global. You will learn to design resilient, production-ready systems in an AWS-based ecosystem while leading by example in a highly engaging, global environment that encourages thoughtful risk-taking and self-initiative.

What's in it for you:
- Be part of a global company and deliver solutions at enterprise scale
- Collaborate with a hands-on, technically strong team (including leadership)
- Solve high-complexity, high-impact problems end-to-end
- Build, test, deploy, and maintain production-ready pipelines from ideation through deployment

Responsibilities:
- Develop, deploy, and operate data extraction and automation pipelines in production
- Integrate and deploy machine learning models into those pipelines (e.g., inference services, batch scoring)
- Lead critical stages of the data engineering lifecycle, including:
  - End-to-end delivery of complex extraction, transformation, and ML deployment projects
  - Scaling and replicating pipelines on AWS (EKS, ECS, Lambda, S3, RDS)
  - Designing and managing DataOps processes, including Celery/Redis task queues and Airflow orchestration (see the sketch below)
  - Implementing robust CI/CD pipelines on Azure DevOps (build, test, deployment, rollback)
  - Writing and maintaining comprehensive unit, integration, and end-to-end tests (pytest, coverage)
- Strengthen data quality, reliability, and observability through logging, metrics, and automated alerts
- Define and evolve platform standards and best practices for code, testing, and deployment
- Document architecture, processes, and runbooks to ensure reproducibility and smooth hand-offs
- Partner closely with data scientists, ML engineers, and product teams to align on requirements, SLAs, and delivery timelines

Technical:
- Expert proficiency in Python, including building extraction libraries and RESTful APIs
- Hands-on experience with task queues and orchestration: Celery, Redis, Airflow
- Strong AWS expertise: EKS/ECS, Lambda, S3, RDS/DynamoDB, IAM, CloudWatch
- Containerization and orchestration: Docker (mandatory), basic Kubernetes (preferred)
- Proven experience deploying ML models to production (e.g., SageMaker, ECS, Lambda endpoints)
- Proficient in writing tests (unit, integration, load) and enforcing high coverage
- Solid understanding of CI/CD practices and hands-on experience with Azure DevOps pipelines
- Familiarity with SQL and NoSQL stores for extracted data (e.g., PostgreSQL, MongoDB)
- Strong debugging, performance tuning, and automation skills
- Openness to evaluate and adopt emerging tools and languages as needed

Good to have:
- Master's or Bachelor's degree in Computer Science, Engineering, or a related field
- 2-6 years of relevant experience in data engineering, automation, or ML deployment
- Prior contributions on GitHub, technical blogs, or open-source projects
- Basic familiarity with GenAI model integration (calling LLM or embedding APIs)
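A hedged sketch of the Airflow-orchestrated DataOps pattern mentioned above: one DAG chaining an extraction task and a batch-scoring task. The DAG id, schedule, and task bodies are invented placeholders, not S&P Global's actual pipeline.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_documents():
    # Placeholder: pull source documents into a staging store.
    print("extracting documents to staging")

def score_with_model():
    # Placeholder: run batch inference over the staged documents.
    print("scoring staged documents")

with DAG(
    dag_id="extraction_and_scoring_sketch",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # the 'schedule' argument requires Airflow 2.4+
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_documents)
    score = PythonOperator(task_id="score", python_callable=score_with_model)

    extract >> score  # scoring runs only after extraction succeeds
```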
Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People, Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you and your career need to thrive at S&P Global.
- Health & Wellness: health care coverage designed for the mind and body.
- Continuous Learning: access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: it's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: from retail discounts to referral incentive awards, small perks can make a big difference.
For more information on benefits by country visit https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, pre-employment training or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here.
-----------------------------------------------------------
Equal Opportunity Employer: S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.
US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law.
Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf ----------------------------------------------------------- IFTECH202.1 - Middle Professional Tier I (EEO Job Group)
Posted 2 weeks ago
5.0 - 10.0 years
15 - 30 Lacs
Hyderabad
Work from Office
Lead Data Engineer - Data Management

Company Overview: Accordion works at the intersection of sponsors and management teams throughout every stage of the investment lifecycle, providing hands-on, execution-focused support to elevate data and analytics capabilities. So, what does it mean to work at Accordion? It means joining 1,000+ analytics, data science, finance & technology experts in a high-growth, agile, and entrepreneurial environment while transforming how portfolio companies drive value. It also means making your mark on Accordion's future by embracing a culture rooted in collaboration and a firm-wide commitment to building something great, together. Headquartered in New York City with 10 offices worldwide, Accordion invites you to join our journey.

Data & Analytics (Accordion | Data & Analytics): Accordion's Data & Analytics (D&A) team delivers cutting-edge, intelligent solutions to a global clientele, leveraging a blend of domain knowledge, sophisticated technology tools, and deep analytics capabilities to tackle complex business challenges. We partner with Private Equity clients and their Portfolio Companies across diverse sectors, including Retail, CPG, Healthcare, Media & Entertainment, Technology, and Logistics. The D&A team delivers data and analytical solutions designed to streamline reporting capabilities and enhance business insights across vast and complex data sets ranging from Sales, Operations, Marketing, Pricing, Customer Strategies, and more.

Location: Hyderabad

Role Overview: Accordion is looking for a Lead Data Engineer, who will be responsible for the design, development, configuration/deployment, and maintenance of the technology stack described below. They must have an in-depth understanding of the various tools & technologies in this domain to design and implement robust and scalable solutions that address clients' current and future requirements at optimal cost. The Lead Data Engineer should be able to evaluate existing architectures and recommend ways to upgrade and improve their performance, for both on-premises and cloud-based solutions. A successful Lead Data Engineer should possess strong working business knowledge and familiarity with multiple tools and techniques, along with industry standards and best practices in Business Intelligence and Data Warehousing environments, as well as strong organizational, critical thinking, and communication skills.

What you will do:
- Partner with clients to understand their business and create comprehensive business requirements.
- Develop an end-to-end Business Intelligence framework based on requirements, including recommending an appropriate architecture (on-premises or cloud), analytics, and reporting.
- Work closely with the business and technology teams to guide solution development and implementation.
- Work closely with business teams to arrive at methodologies to develop KPIs and metrics.
- Work with the Project Manager in developing and executing project plans within the assigned schedule and timeline.
- Develop standard reports and functional dashboards based on business requirements.
- Conduct training programs and knowledge transfer sessions for junior developers when needed.
- Recommend improvements to provide optimum reporting solutions.
- Stay curious about new tools and technologies to provide forward-looking solutions for clients.

Ideally, you have:
- An undergraduate degree (B.E/B.Tech.); tier-1/tier-2 colleges preferred.
- More than 5 years of experience in a related field.
- Proven expertise in SSIS, SSAS and SSRS (MSBI suite).
- In-depth knowledge of databases (SQL Server, MySQL, Oracle, etc.) and of a data warehouse (any one of Azure Synapse, AWS Redshift, Google BigQuery, Snowflake, etc.).
- In-depth knowledge of business intelligence tools (any one of Power BI, Tableau, Qlik, DOMO, Looker, etc.).
- Good understanding of Azure or AWS: Azure (Data Factory & Pipelines, SQL Database & Managed Instances, DevOps, Logic Apps, Analysis Services) or AWS (Glue, Aurora Database, Dynamo Database, Redshift, QuickSight).
- Proven ability to take initiative and be innovative.
- An analytical mind with a problem-solving attitude.

Why explore a career at Accordion:
- High-growth environment: semi-annual performance management and promotion cycles coupled with a strong meritocratic culture enable a fast track to leadership responsibility.
- Cross-domain exposure: interesting and challenging work streams across industries and domains that always keep you excited, motivated, and on your toes.
- Entrepreneurial environment: intellectual freedom to make decisions and own them. We expect you to spread your wings and assume larger responsibilities.
- Fun culture and peer group: a non-bureaucratic and fun working environment, with a strong peer environment that will challenge you and accelerate your learning curve.

Other benefits for full-time employees:
- Health and wellness programs that include employee health insurance covering immediate family members and parents, term life insurance for employees, free health camps for employees, discounted health services (including vision, dental) for employees and family members, free doctor consultations, counsellors, etc.
- Corporate meal card options for ease of use and tax benefits.
- Team lunches, company-sponsored team outings and celebrations.
- Cab reimbursement for women employees beyond a certain time of the day.
- Robust leave policy to support work-life balance, including a specially designed leave structure to support women employees for maternity and related requests.
- Reward and recognition platform to celebrate professional and personal milestones.
- A positive & transparent work environment, including various employee engagement and employee benefit initiatives to support personal and professional learning and development.
Posted 2 weeks ago
5.0 - 10.0 years
16 - 20 Lacs
Pune
Work from Office
Job Title: Senior / Lead Data Engineer
Company: Synechron Technologies
Locations: Pune or Chennai
Experience: 5 to 12 years

Synechron Technologies is seeking an accomplished Senior or Lead Data Engineer with expertise in Java and Big Data technologies. The ideal candidate will have a strong background in Java Spark, with extensive experience working with big data frameworks such as Spark, Hadoop, HBase, Couchbase, and Phoenix. You will lead the design and development of scalable data solutions, ensuring efficient data processing and deployment in a modern technology environment.

Key Responsibilities:
- Lead the development and optimization of large-scale data pipelines using Java and Spark.
- Design, implement, and maintain data infrastructure leveraging Spark, Hadoop, HBase, Couchbase, and Phoenix.
- Collaborate with cross-functional teams to gather requirements and develop robust data solutions.
- Lead deployment automation and management using CI/CD tools including Jenkins, Bitbucket, GIT, Docker, and OpenShift.
- Ensure the performance, security, and reliability of data processing systems.
- Provide technical guidance to team members and participate in code reviews.
- Stay updated on emerging technologies and leverage best practices in data engineering.

Qualifications & Skills:
- 5 to 14 years of experience as a Data Engineer or in a similar role.
- Strong expertise in Java programming and Apache Spark.
- Proven experience with Big Data technologies: Spark, Hadoop, HBase, Couchbase, and Phoenix.
- Hands-on experience with CI/CD tools: Jenkins, Bitbucket, GIT, Docker, OpenShift.
- Solid understanding of data modeling, ETL workflows, and data architecture.
- Excellent problem-solving, communication, and leadership skills.

SYNECHRON'S DIVERSITY & INCLUSION STATEMENT
Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative 'Same Difference' is committed to fostering an inclusive culture promoting equality, diversity and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, more successful businesses as a global company. We encourage applicants from across diverse backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more. All employment decisions at Synechron are based on business needs, job requirements and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.

Candidate Application Notice
Posted 2 weeks ago
3.0 - 8.0 years
15 - 25 Lacs
Noida, Pune, Bengaluru
Work from Office
Job Description - Pre-Sales Engineer
Job Title: Pre-Sales Engineer - Data & AI Solutions (Industry: Manufacturing, Supply Chain, Retail, CPG & Aviation)
Department: Pre-Sales / Industry Solutions

Job Summary: We are seeking a dynamic and experienced Pre-Sales Engineer with a strong foundation in Data & AI solutions and a domain focus on Manufacturing, Supply Chain, Retail, CPG and Aviation. You will be responsible for solution positioning, customer engagement and proposal ownership to drive business growth. The ideal candidate will blend industry knowledge, technical expertise and strategic thinking to craft value-driven solutions aligned with client needs.

Key Responsibilities:
Pre-Sales & Solution Consulting
- Lead the end-to-end pre-sales lifecycle, from requirement discovery through proposal development, solutioning and client presentations.
- Collaborate with cross-functional teams (Sales, Delivery, Solution, Data Science) to craft tailored solutions leveraging AI/ML, data engineering, BI, cloud and Industry 4.0 technologies.
- Translate client pain points into value propositions, solution blueprints and technical architectures.
Industry-Focused Solutioning
- Create industry-specific use cases and demos aligned with domain challenges in:
  - Manufacturing (OEE, Predictive Maintenance, Quality Analytics, Digital Twin)
  - Supply Chain (Demand Forecasting, Inventory Optimization, Logistics Analytics)
  - Retail & CPG (Assortment Optimization, Customer Insights, Pricing and Promo Effectiveness)
  - Aviation (MRO Optimization, Delay Prediction, Parts Lifecycle Analytics)
Client Engagement & Enablement
- Conduct client workshops, discovery sessions, PoCs, and technical deep dives to shape solution understanding and buy-in.
- Act as a trusted advisor to clients and internal stakeholders on data and AI maturity journeys.
Proposals, RFPs, and Documentation
- Own and contribute to proposals, SoWs, RFP/RFI responses and collateral creation (solution decks, value maps, case studies).
- Ensure all deliverables meet quality, timeline, and alignment expectations.
Market and Technology Insights
- Stay ahead of emerging trends in AI, GenAI, IoT, Digital Twin, LLMs, and cloud platforms (Azure, AWS, GCP).
- Continuously gather competitive intelligence and benchmarking insights to strengthen solution offerings.

Qualifications:
- Bachelor's or Master's degree in Engineering, Business, Data Science or a related field.
- 3 to 5 years of experience in pre-sales, solutioning or consulting roles with a focus on Data & AI solutions.
- Deep understanding of data architectures, AI/ML workflows and business intelligence platforms.
- Exposure to industry processes, KPIs and digital transformation use cases in Manufacturing, Retail/CPG, and Supply Chain.
- Proficiency in tools such as Power BI/Tableau, Azure/AWS, Python/SQL, and knowledge of ML lifecycle tools (MLflow, Databricks, SageMaker).

Preferred Skills:
- Experience working with SAP, Oracle and Snowflake, and integrating with enterprise systems.
- Understanding of IoT protocols (MQTT, OPC UA) and smart manufacturing practices.
- Strong communication and storytelling skills: the ability to influence CXOs and business/technical teams.
- An agile, collaborative, and customer-first mindset.

Why Join Us?
- Be at the forefront of industry innovation in Data & AI.
- Work across diverse industries and global clients.
- Opportunity to lead flagship initiatives and cutting-edge solution launches.
- Empowerment to contribute to IP creation, accelerators, and playbooks.
If interested, send your updated resume to chaity.mukherjee@celebaltech.com.
Posted 2 weeks ago
4.0 - 6.0 years
18 - 22 Lacs
Noida
Work from Office
Responsibilities:
- Collaborate with the sales team to understand customer challenges and business objectives, and propose solutions, POCs, etc.
- Develop and deliver impactful technical presentations and demos showcasing the capabilities of GCP Data and AI and GenAI solutions.
- Conduct technical proofs of concept (POCs) to validate the feasibility and value proposition of GCP solutions.
- Collaborate with technical specialists and solution architects from the COE team to design and configure tailored cloud solutions.
- Manage and qualify sales opportunities, working closely with the sales team to progress deals through the sales funnel.
- Stay up to date on the latest GCP offerings, trends, and best practices.

Experience:
- Design and implement a comprehensive strategy for migrating and modernizing existing relational on-premises databases to scalable and cost-effective solutions on Google Cloud Platform (GCP).
- Design and architect solutions for DWH modernization, with experience building data pipelines in GCP.
- Strong experience in BI reporting tools (Looker, Power BI and Tableau).
- In-depth knowledge of Google Cloud Platform (GCP) services, particularly Cloud SQL, Postgres, AlloyDB, BigQuery, Looker, Vertex AI and Gemini (GenAI); a small BigQuery sketch follows below.
- Strong knowledge of and experience in providing solutions to process massive datasets in real time and in batch using cloud-native/open-source orchestration techniques.
- Build and maintain data pipelines using Cloud Dataflow to orchestrate real-time and batch data processing for streaming and historical data.
- Strong knowledge of and experience in best practices for data governance, security, and compliance.
- Excellent communication and presentation skills, with the ability to tailor technical information to customer needs.
- Strong analytical and problem-solving skills.
- Ability to work independently and as part of a team.
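As a small, hedged illustration of the BigQuery work described above, here is a query issued through the official google-cloud-bigquery client; the project, dataset, and table names are placeholders.

```python
from google.cloud import bigquery

# Uses Application Default Credentials; the project id is a placeholder.
client = bigquery.Client(project="example-project")

sql = """
    SELECT store_id, SUM(amount) AS revenue
    FROM `example-project.sales.transactions`
    WHERE sale_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY)
    GROUP BY store_id
    ORDER BY revenue DESC
    LIMIT 10
"""

# result() blocks until the query job finishes, then iterates rows.
for row in client.query(sql).result():
    print(f"{row.store_id}: {row.revenue:.2f}")
```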
Posted 2 weeks ago
5.0 - 10.0 years
7 - 12 Lacs
Bengaluru
Work from Office
We are seeking a skilled and motivated Data Engineer with hands-on experience in Snowflake, Azure Data Factory (ADF), and Fivetran. The ideal candidate will be responsible for building and optimizing data pipelines, ensuring efficient data integration and transformation to support analytics and business intelligence initiatives.

Key Responsibilities:
- Design, develop, and maintain robust data pipelines using Fivetran, ADF, and other ETL tools.
- Build and manage scalable data models and data warehouses on Snowflake (see the sketch below).
- Integrate data from various sources into Snowflake using automated workflows.
- Implement data transformation and cleansing processes to ensure data quality and integrity.
- Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements.
- Monitor pipeline performance, troubleshoot issues, and optimize for efficiency.
- Maintain documentation related to data architecture, processes, and workflows.
- Ensure data security and compliance with company policies and industry standards.

Required Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
- 3+ years of experience in data engineering or a similar role.
- Proficiency with Snowflake, including architecture, SQL scripting, and performance tuning.
- Hands-on experience with Azure Data Factory (ADF) for pipeline orchestration and data integration.
- Experience with Fivetran or similar ELT/ETL automation tools.
- Strong SQL skills and familiarity with data warehousing best practices.
- Knowledge of cloud platforms, preferably Microsoft Azure.
- Familiarity with version control tools (e.g., Git) and CI/CD practices.
- Excellent communication and problem-solving skills.

Preferred Qualifications:
- Experience with Python, dbt, or other data transformation tools.
- Understanding of data governance, data quality, and compliance frameworks.
- Knowledge of additional data tools (e.g., Power BI, Databricks, Kafka) is a plus.
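A minimal sketch, assuming hypothetical account and table names, of the kind of Snowflake step such pipelines run after Fivetran or ADF lands data: a MERGE that dedupes staged rows into a target table via the official Python connector.

```python
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345.eu-west-1",   # placeholder account locator
    user="ETL_USER",
    password="***",
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="STAGING",
)

try:
    cur = conn.cursor()
    # A typical post-load transformation: upsert staged rows into a target.
    cur.execute("""
        MERGE INTO CUSTOMERS t
        USING STG_CUSTOMERS s ON t.CUSTOMER_ID = s.CUSTOMER_ID
        WHEN MATCHED THEN UPDATE SET t.EMAIL = s.EMAIL
        WHEN NOT MATCHED THEN INSERT (CUSTOMER_ID, EMAIL)
                             VALUES (s.CUSTOMER_ID, s.EMAIL)
    """)
    print(cur.fetchone())  # MERGE returns a row-count summary row
finally:
    conn.close()
```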
Posted 2 weeks ago
6.0 - 11.0 years
8 - 13 Lacs
Bengaluru
Work from Office
As a Senior Data Engineer at JLL Technologies, you will:
- Design, architect, and develop solutions leveraging cloud big data technology to ingest, process and analyze large, disparate data sets to exceed business requirements.
- Develop systems that ingest, cleanse and normalize diverse datasets, develop data pipelines from various internal and external sources, and build structure for previously unstructured data (a streaming sketch follows below).
- Interact with internal colleagues and external professionals to determine requirements, anticipate future needs, and identify areas of opportunity to drive data development.
- Develop a good understanding of how data flows and is stored across an organization, spanning multiple applications such as CRM, broker & sales tools, Finance, HR, etc.
- Unify, enrich, and analyze a variety of data to derive insights and opportunities.
- Design & develop data management and data persistence solutions for application use cases leveraging relational and non-relational databases, enhancing our data processing capabilities.
- Develop POCs to influence platform architects, product managers and software engineers to validate solution proposals and migrations.
- Develop data lake solutions to store structured and unstructured data from internal and external sources, and provide technical guidance to help migrate colleagues to the modern technology platform.
- Contribute and adhere to CI/CD processes and development best practices, and strengthen the discipline in the Data Engineering org.
- Mentor other members of the team and organization and contribute to the organization's growth.

What we are looking for:
- 6+ years of work experience and a bachelor's degree in Information Science, Computer Science, Mathematics, Statistics or a quantitative discipline in science, business, or social science.
- A hands-on engineer who is curious about technology, can quickly adapt to change, and understands the technologies supporting areas such as cloud computing (AWS, Azure (preferred), etc.), microservices, streaming technologies, network, and security.
- 3 or more years of active development experience as a data developer using PySpark, Spark Streaming, Azure SQL Server, Cosmos DB/Mongo DB, Azure Event Hubs, Azure Data Lake Storage, Azure Search, etc.
- Build, test and enhance data curation pipelines, integrating data from a wide variety of sources like DBMS, file systems, APIs and streaming systems for KPI and metric development, with high data quality and integrity.
- Maintain the health and monitoring of assigned data engineering capabilities that span analytic functions by triaging maintenance issues; ensure high availability of the platform; monitor workload demands; work with Infrastructure Engineering teams to maintain the data platform; serve as an SME of one or more applications.
- A team player: a reliable, self-motivated, and self-disciplined individual capable of executing multiple projects simultaneously within a fast-paced environment, working with cross-functional teams.
- 3+ years of experience working with source code control systems and Continuous Integration/Continuous Deployment tools.
- Independent and able to manage, prioritize & lead workloads.

What you can expect from us: Our Total Rewards program reflects our commitment to helping you achieve your ambitions in career, recognition, well-being, benefits and pay.
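For illustration, a hedged Spark Structured Streaming sketch of the ingest pattern above: reading from a Kafka-compatible endpoint (Azure Event Hubs exposes one) and landing raw events in data lake storage. The broker, topic, and paths are placeholders, and Event Hubs SASL auth options are omitted for brevity.

```python
# Requires the spark-sql-kafka connector package on the classpath.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("stream-ingest-sketch").getOrCreate()

events = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "example-namespace:9093")  # placeholder
          .option("subscribe", "telemetry")                             # placeholder topic
          .load())

# Kafka keys/values arrive as binary; cast to strings before parsing downstream.
raw = events.select(col("key").cast("string"), col("value").cast("string"))

query = (raw.writeStream.format("parquet")
         .option("path", "abfss://lake@example.dfs.core.windows.net/raw/telemetry/")
         .option("checkpointLocation", "/tmp/checkpoints/telemetry")
         .start())

query.awaitTermination()
```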
Join us to develop your strengths and enjoy a fulfilling career full of varied experiences. Keep those ambitions in sight and imagine where JLL can take you...
Posted 2 weeks ago
6.0 - 10.0 years
15 - 30 Lacs
Indore, Jaipur, Bengaluru
Work from Office
Experience in dashboard story development, dashboard creation, and data engineering pipelines. Manage and organize large volumes of application log data using Google BigQuery. Experience with log analytics, user engagement metrics, and product performance metrics.

Required Candidate profile: Experience with tools like Tableau, Power BI, or ThoughtSpot AI. Understands log data generated by Python-based applications. Ensures data integrity, consistency, and accessibility for analytical purposes.
Posted 2 weeks ago
6.0 - 9.0 years
5 - 9 Lacs
Hyderabad
Work from Office
We are looking for a highly skilled Data Engineer with 6 to 9 years of experience to join our team at BlackBaud, located in [location to be specified]. The ideal candidate will have a strong background in data engineering and excellent problem-solving skills.

Roles and Responsibilities
- Design, develop, and implement data pipelines and architectures to support business intelligence and analytics.
- Collaborate with cross-functional teams to identify and prioritize project requirements.
- Develop and maintain large-scale data systems, ensuring scalability, reliability, and performance.
- Troubleshoot and resolve complex technical issues related to data engineering projects.
- Participate in code reviews and contribute to improving overall code quality.
- Stay up to date with industry trends and emerging technologies in data engineering.

Job Requirements
- Strong understanding of data modeling, database design, and data warehousing concepts.
- Experience with big data technologies such as Hadoop, Spark, and NoSQL databases.
- Excellent programming skills in languages like Java, Python, or Scala.
- Strong analytical and problem-solving skills, with attention to detail and the ability to work under pressure.
- Good communication and collaboration skills, with the ability to work effectively in a team environment.
- Ability to adapt to changing priorities and deadlines in a fast-paced IT Services & Consulting environment.
Posted 2 weeks ago
5.0 - 8.0 years
11 - 15 Lacs
Noida
Work from Office
We are looking for a highly skilled and experienced professional to join our team as an Associate Manager in Data Engineering, Data Modelling, or Data Science. The ideal candidate will have a strong background in data analysis and engineering, with excellent problem-solving skills.

Roles and Responsibilities
- Design and develop scalable data pipelines and architectures to support business intelligence and analytics.
- Collaborate with cross-functional teams to identify and prioritize project requirements.
- Develop and maintain large-scale data models and databases to support business decision-making.
- Analyze complex data sets to identify trends and patterns, and provide actionable insights.
- Work closely with stakeholders to understand business needs and develop solutions that meet those needs.
- Stay up to date with industry trends and emerging technologies in data engineering and analytics.

Job Requirements
- Strong understanding of data structures, algorithms, and software design patterns.
- Experience with big data technologies such as Hadoop, Spark, and NoSQL databases.
- Excellent problem-solving skills and the ability to analyze complex data sets.
- Strong communication and collaboration skills, with the ability to work with cross-functional teams.
- Ability to design and implement scalable data pipelines and architectures.
- Strong understanding of data modelling concepts and techniques.
- Educational qualifications: any graduate/postgraduate degree.
Posted 2 weeks ago
3.0 - 8.0 years
55 - 60 Lacs
Bengaluru
Work from Office
Our vision for the future is based on the idea that transforming financial lives starts by giving our people the freedom to transform their own. We have a flexible work environment and fluid career paths. We not only encourage but celebrate internal mobility. We also recognize the importance of purpose, well-being, and work-life balance. Within Empower and our communities, we work hard to create a welcoming and inclusive environment, and our associates dedicate thousands of hours to volunteering for causes that matter most to them. Chart your own path and grow your career while helping more customers achieve financial freedom. Empower Yourself.

Job Summary: At Empower, Sr. Architect is a mix of a leadership position and a thought leadership role. A Sr. Architect works with enterprise architects, and with both business and IT teams, to align solutions to the technology vision they help create. This role supports Enterprise Architects in the development of technology strategies, reference architectures, solutions, best practices and guidance across the entire IT development organization, all the while addressing total cost of ownership, stability, performance and efficiency. The candidate will also work with the Empower Innovation Lab team as it experiments with emerging technologies, such as generative AI and advanced analytics. In this rapid-paced environment, the person must possess a "can-do" attitude while demonstrating a strong work ethic, and should have a strong aptitude for driving decisions. He or she will be actively involved in influencing the strategic direction of technology at Empower Retirement. There will be collaboration across all teams, including IT Infrastructure, the PMO office, Business, and third-party integrators, in reviewing, evaluating, designing and implementing solutions. The Architect must understand available technology options and educate and influence technology teams to leverage them where appropriate. The Architect will recognize and propose alternatives, make recommendations, and describe any necessary trade-offs. In some cases, particularly on key initiatives, the Architect will participate in the design and implementation of end-to-end solutions directly with development teams. The ideal candidate will leverage their technical leadership and direction-setting skills with the development organization to prove technical concepts quickly using a variety of tools, methods, & frameworks.

Responsibilities:
- Help the Enterprise Architect, and work with peer Sr. Architects and more junior resources, to define and execute on the business-aligned IT strategy and vision.
- Develop, document, and provide input into the technology roadmap for Empower.
- Create reference architectures that demonstrate an understanding of technology components and the relationships between them.
- Design and modernize complex systems into cloud-compatible or cloud-native applications where applicable.
- Create strategies and designs for migrating applications to cloud systems.
- Participate in the evaluation of new applications and technical options, challenge the status quo, create solid business cases, and influence direction while establishing partnerships with key constituencies.
- Implement best practices, standards & guidance, then subsequently coach technology team members.
- Make leadership recommendations regarding strategic architectural considerations related to process design and process orchestration.
- Provide strong leadership and direction in development/engineering practices.
- Collaborate with other business and technology teams on architecture and design issues.
- Respond to evolving and changing security conditions; implement and recommend security guidelines.
- Provide thought leadership, advocacy, articulation, assurance, and maintenance of the enterprise architecture discipline.
- Provide solutions, guidance, and implementation assistance within full-stack development teams.
- Recommend long-term scalable and performant architecture changes while keeping cost in control.

Preferred Qualifications:
- 12+ years of experience in the development and delivery of data systems. This experience should be relevant to roles such as Data Analyst, ETL (Extract, Transform and Load) Developer (Data Engineer), Database Administrator (DBA), Business Intelligence Developer (BI Engineer), Machine Learning Developer (ML Engineer), Data Scientist, Data Architect, Data Governance Analyst, or a managerial position overseeing any of these functions.
- 3+ years of experience creating solution architectures and strategies across multiple architecture domains (business, application, data, integration, infrastructure and security).
- Solid experience with the following technology disciplines: Python, cloud architectures, AWS (Amazon Web Services), big data (300+ TB), advanced analytics, advanced SQL skills, data warehouse systems (Redshift or Snowflake), advanced programming, NoSQL, distributed computing, real-time streaming.
- Nice to have: experience in Java, Kubernetes, Argo, Aurora, Google Analytics, META Analytics, integration with 3rd-party APIs, SOA & microservices design, and modern integration methods (API gateway/web services, messaging & RESTful architectures).
- Familiarity with BI tools such as Tableau/QuickSight.
- Experience with code coverage tools.
- Working knowledge of addressing architectural cross-cutting concerns and their trade-offs, including topics such as caching, monitoring, operational surround, high availability, security, etc.
- Demonstrated competency applying architecture frameworks and development methods.
- Understanding of business process analysis and business process management (BPM).
- Excellent written and verbal communication skills.
- Experience mentoring junior team members through code reviews and recommending adherence to best practices.
- Experience working with global, distributed teams.
- Interacts with people constantly, demonstrating strong people skills; able to motivate and inspire, influencing and evangelizing a set of ideals within the enterprise.
- Requires a high degree of independence, proactively achieving objectives without direct supervision; negotiates effectively at the decision-making table to accomplish goals.
- Evaluates and solves complex and unique problems with strong problem-solving skills; thinks broadly, avoiding tunnel vision and considering problems from multiple angles.
- Possesses a general understanding of the wealth management industry, comprehending how technology impacts the business.
- Stays on top of the latest technologies and trends through continuous learning, including reading, training, and networking with industry colleagues.
- Data Architecture - proficiency in platform design and data architecture, ensuring scalable, efficient, and secure data systems that support business objectives.
- Data Modeling - expertise in designing data models that accurately represent business processes and facilitate efficient data retrieval and analysis.
- Cost Management - ability to manage costs associated with data storage and processing, optimizing resource usage and ensuring budget adherence.
- Disaster Recovery Planning - planning for data disaster recovery to ensure business continuity and data integrity in case of unexpected events.
- SQL Optimization/Performance Improvements - advanced skills in optimizing SQL queries for performance, reducing query execution time and improving overall system efficiency.
- CI/CD - knowledge of continuous integration and continuous deployment processes, ensuring rapid and reliable delivery of data solutions.
- Data Encryption - implementing data encryption techniques to protect sensitive information and ensure data privacy and security.
- Data Obfuscation/Masking - techniques for data obfuscation and masking to protect sensitive data while maintaining its usability for testing and analysis.
- Reporting - experience with static and dynamic reporting to provide comprehensive and up-to-date information to business users.
- Dashboards and Visualizations - creating dashboards and visualizations to present data in an intuitive and accessible manner, facilitating data-driven insights.
- Generative AI / Machine Learning - understanding of generative artificial intelligence and machine learning to develop advanced predictive models and automate decision-making processes; understanding of machine learning algorithms, deep learning frameworks, and AI model architectures; understanding of ethical AI principles and practices; experience implementing AI transparency and explainability techniques; knowledge of popular RAG frameworks and tools (e.g., LangChain, LlamaIndex); familiarity with fairness metrics and techniques to mitigate bias in AI models.

Sample technologies:
- Cloud Platforms – AWS (preferred), Azure, or Google Cloud
- Databases - Oracle, Postgres, MySQL (preferred), RDS, DynamoDB (preferred), Snowflake or Redshift (preferred)
- Data Engineering (ETL, ELT) - Informatica, Talend, Glue, Python (must), Jupyter
- Streaming – Kafka or Kinesis
- CI/CD Pipeline – Jenkins, GitHub, GitLab, or ArgoCD
- Business Intelligence – QuickSight (preferred), Tableau (preferred), Business Objects, MicroStrategy, Qlik, Power BI, Looker
- Advanced Analytics - AWS SageMaker (preferred), TensorFlow, PyTorch, R, scikit-learn
- Monitoring tools – DataDog (preferred), AppDynamics, or Splunk
- Big data technologies – Apache Spark (must), EMR (preferred)
- Container Management technologies – Kubernetes, EKS (preferred), Docker, Helm

Preferred Certifications:
- AWS Solutions Architect
- AWS Data Engineer
- AWS Machine Learning Engineer
- AWS Machine Learning

EDUCATION: Bachelor's and/or master's degree in computer science or a related field (information systems, mathematics, software engineering).

We are an equal opportunity employer with a commitment to diversity. All individuals, regardless of personal characteristics, are encouraged to apply. All qualified applicants will receive consideration for employment without regard to age, race, color, national origin, ancestry, sex, sexual orientation, gender, gender identity, gender expression, marital status, pregnancy, religion, physical or mental disability, military or veteran status, genetic information, or any other status protected by applicable state or local law.
Posted 2 weeks ago
5.0 - 10.0 years
15 - 30 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
The Job
- Design and implement data pipelines for processing large volumes of data.
- Ingest batch and streaming data from various data sources.
- Write complex SQL using any RDBMS (Oracle, PostgreSQL, SQL Server, etc.); a runnable sketch follows below.
- Data modeling: proficiency in creating both normalized and denormalized database schemas.
- Develop applications in Python.
- Develop ETL, OLAP-based and analytical applications.
- Working experience with Azure / AWS services.
- Work in Databricks, Snowflake or other cloud data platforms.
- Work in Agile / Scrum methodologies.
- Good knowledge of cloud security concepts and implementation of different types of authentication methods.
- Work on Azure DevOps; create and manage Git repositories and code versioning, and build CI/CD pipelines and test plans.

Your Profile
- Experience with a strong focus on data engineering.
- Design, develop, and maintain data pipelines.
- Implement ETL (Extract, Transform, Load) and ELT (Extract, Load, Transform) processes for seamless data integration.
- Collaborate with cross-functional teams to design and implement large-scale distributed systems for data processing and analytics.
- Optimize and maintain CI/CD pipelines to ensure smooth deployment and integration of new data solutions.
- Exposure to Python libraries such as NumPy, pandas, Beautiful Soup, etc.
- Experience with Databricks, Snowflake or other cloud data platforms.
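A self-contained sketch of the "complex SQL" flavor the posting asks about: a CTE plus a window function, run here against an in-memory SQLite database so the example is runnable anywhere; the schema and rows are invented.

```python
import sqlite3  # SQLite 3.25+ supports window functions

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, customer TEXT, amount REAL, ordered_at TEXT);
    INSERT INTO orders VALUES
      (1, 'acme',   120.0, '2024-01-05'),
      (2, 'acme',    80.0, '2024-01-20'),
      (3, 'zenith', 300.0, '2024-01-11');
""")

sql = """
WITH ranked AS (
    SELECT customer, amount, ordered_at,
           ROW_NUMBER() OVER (PARTITION BY customer ORDER BY amount DESC) AS rn
    FROM orders
)
SELECT customer, amount, ordered_at FROM ranked WHERE rn = 1
"""

# Prints the largest order per customer; the same query runs on
# Oracle/PostgreSQL/SQL Server with minimal changes.
for row in conn.execute(sql):
    print(row)
```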
Posted 2 weeks ago
10.0 - 17.0 years
35 - 55 Lacs
Bengaluru
Work from Office
Responsibilities
We are seeking an experienced and dynamic Senior Manager, Data Science to lead our team in delivering innovative data science solutions to our clients. The ideal candidate will possess a technical background in data engineering, data science, or strategic business analytics, coupled with exceptional leadership and project management skills. As a Senior Manager, Data Science, you will be responsible for overseeing the end-to-end delivery of medium to complex projects, managing geographically distributed teams, and engaging with senior client stakeholders to ensure successful project outcomes.

Key Responsibilities:
- Previous hands-on experience in the data science field.
- Demonstrated ability to convert business problems into technical solutions and technical delivery roadmaps.
- Overall 9+ years of experience, progressively moving from technical roles to delivery management positions.
- Lead the end-to-end delivery of medium to complex projects in data engineering and data science.
- Manage geographically distributed teams of 15-20 people, preferably in an agile delivery model.
- Conduct technical client presentations with support from Subject Matter Experts (SMEs) and Centers of Excellence (CoEs).
- Engage with senior stakeholders (Director level and above) from the client side to understand requirements and ensure alignment with project objectives.
- Drive the team to maintain a high customer NPS on all parameters: quality of deliverables, timeliness, technical rigor, and business grasp.
- Demonstrate passion for the role and commitment to the company's objectives.
- Stay updated with the latest technological trends and demonstrate strong technical acumen.
- Possess excellent written and verbal communication skills.
- Utilize analytical and creative thinking skills to solve complex problems.
- Foster a collaborative team environment and exhibit self-driven initiative.
- Demonstrate strong problem-solving abilities and the ability to navigate challenges effectively.

Qualifications:
- Bachelor's degree (minimum 4 years).
- Proven track record of successfully delivering medium to complex projects in data science.
- Experience managing geographically distributed teams in an agile delivery model.
- Excellent leadership and stakeholder management skills, with the ability to engage senior stakeholders effectively.

If you are passionate about leading teams to deliver impactful data analytics solutions, possess strong technical expertise, and thrive in a fast-paced environment, we encourage you to apply for this exciting opportunity.
Posted 2 weeks ago
5.0 - 10.0 years
20 - 35 Lacs
Bengaluru
Remote
Notice Period: Immediate to 30 days
Role Overview: As a Collibra Expert, you will be responsible for implementing, maintaining, and optimizing the Collibra Data Governance Platform to ensure data quality, governance, and lineage across the organization. You will partner with cross-functional teams to develop data management strategies and integrate Collibra solutions with Google Cloud Platform (GCP) to create a robust, scalable, and efficient data governance framework for the retail domain.
Key Responsibilities:
- Data Governance Management: Design, implement, and manage the Collibra Data Governance Platform for data cataloging, data quality, and data lineage within the retail domain.
- Collibra Expertise: Utilize Collibra for metadata management, data quality monitoring, policy enforcement, and data stewardship across various business units.
- Data Cataloging: Lead the implementation and continuous improvement of data cataloging processes to enable a centralized, user-friendly view of the organization's data assets.
- Data Quality Management: Collaborate with business and technical teams to ensure that data is high-quality, accessible, and actionable. Define data quality rules and KPIs to monitor data accuracy, completeness, consistency, and timeliness.
- Data Lineage Implementation: Build and maintain comprehensive data lineage models to visualize the flow of data from source to consumption, ensuring compliance with data governance standards.
- GCP Integration: Architect and implement seamless integrations between Collibra and GCP tools such as BigQuery, Dataflow, and Cloud Storage, ensuring data governance policies are enforced in the cloud environment.
- Collaboration & Stakeholder Management: Collaborate with Data Engineers, Analysts, Business Intelligence teams, and leadership to define and implement data governance best practices and standards.
- Training & Support: Provide ongoing training and support to business users and technical teams on data governance practices, Collibra platform usage, and GCP-based solutions.
- Compliance & Security: Ensure data governance initiatives comply with internal policies, industry standards, and regulations (e.g., GDPR, CCPA).
Key Requirements:
- Proven Expertise in Collibra: Hands-on experience implementing and managing the Collibra Data Governance Platform (cataloging, lineage, data quality).
- Google Cloud Platform (GCP) Proficiency: Strong experience with GCP tools (BigQuery, Dataflow, Pub/Sub, Cloud Storage, etc.) and integrating them with Collibra for seamless data governance.
- Data Quality and Lineage Expertise: In-depth knowledge of data quality frameworks, metadata management, and data lineage implementation.
- Retail Industry Experience: Prior experience in data governance within the retail or eCommerce domain is a plus.
- Technical Skills: Strong understanding of cloud data architecture and best practices for managing data at scale in the cloud (preferably GCP).
- Problem-Solving and Analytical Skills: Ability to analyze complex data governance issues and find practical solutions that ensure high-quality data management across the organization.
- Excellent Communication Skills: Ability to communicate effectively with both technical and non-technical stakeholders to advocate for data governance best practices.
- Certifications: Relevant certifications in Collibra, Google Cloud, or Data Governance are highly desirable.
Education & Experience:
- Bachelor's degree (B.Tech/BE) mandatory; Master's optional
- 5+ years of experience in Data Governance, with at least 3 years of specialized experience in Collibra and GCP.
- Experience working with data teams in a retail environment is a plus.
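For illustration only: the kind of data quality KPI this role would define could be computed with a query like the minimal sketch below, using the standard google-cloud-bigquery client. The dataset, table, and column names (retail_dw.orders, customer_id, updated_at) are hypothetical, and in practice such rules would be registered and monitored through Collibra rather than run ad hoc.

# Hedged sketch: a completeness/timeliness check of a hypothetical orders table.
from google.cloud import bigquery

client = bigquery.Client()  # relies on application-default credentials

QUERY = """
SELECT
  COUNT(*) AS total_rows,
  COUNTIF(customer_id IS NULL) AS missing_ids,
  MAX(updated_at) AS latest_update
FROM `retail_dw.orders`
"""

row = next(iter(client.query(QUERY).result()))
completeness = 1 - row.missing_ids / row.total_rows if row.total_rows else 0.0
print(f"completeness={completeness:.2%} latest_update={row.latest_update}")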
Posted 2 weeks ago
5.0 - 9.0 years
7 - 11 Lacs
Noida
Work from Office
Strong in the Data Engineering domain: should be aware of, or have handled, large data volume testing, Big Data, and ETL testing; worked on Hadoop/HDFS-related applications; Python programming is a must.
ETL and Data Validation
- Test ETL processes, ensuring accurate extraction, transformation, and loading of data.
- Validate source-to-target mappings, transformations, and business rules.
- Perform data completeness, accuracy, and consistency checks.
SQL & Database Testing
- Write and execute complex SQL queries to validate data integrity.
- Verify data transformations, joins, and aggregations in relational databases (Oracle, SQL Server, PostgreSQL, etc.).
Automation
- Nice to have: experience with the Robot Framework using Python.
Mandatory Competencies
- ETL: Tester
- Database: SQL
- Python: Python
- QE: Automation Testing Approaches
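As a rough illustration of the source-to-target validation described above, the sketch below compares row counts and an aggregate between a source and a target database. It assumes SQLAlchemy (one reasonable choice among several); the connection URLs and the sales table are placeholders.

# Hedged sketch: compare simple reconciliation metrics across two databases.
# Connection strings, drivers, and table names are placeholders.
from sqlalchemy import create_engine, text

source = create_engine("oracle+oracledb://user:pwd@src-host/ORCL")
target = create_engine("postgresql+psycopg2://user:pwd@tgt-host/dwh")

CHECKS = {
    "row_count": "SELECT COUNT(*) FROM sales",
    "amount_sum": "SELECT SUM(amount) FROM sales",
}

for name, sql in CHECKS.items():
    with source.connect() as s, target.connect() as t:
        src_val = s.execute(text(sql)).scalar()
        tgt_val = t.execute(text(sql)).scalar()
    # Exact equality is simplistic; real checks usually allow for type,
    # precision, and rounding differences between engines.
    status = "PASS" if src_val == tgt_val else "FAIL"
    print(f"{name}: source={src_val} target={tgt_val} -> {status}")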
Posted 2 weeks ago
15.0 - 18.0 years
37 - 45 Lacs
Pune
Work from Office
Role Description
Lead Technology Manager within the Archiving Technology product family, taking responsibility for the delivery of large-scale programs and contributing to the design and implementation of technical solutions, including data engineering and IT transformation. The Technology Manager will lead a delivery team, provide technical leadership, and work in a global team interfacing with business stakeholders. The right candidate will bring a strong combination of software development, data engineering, and people management skills.
Your key responsibilities
- Develops, owns, and executes business deliveries end to end.
- Excellent organizational skills and high attention to detail, with the ability to work under pressure and proactive project risk management skills.
- Leads a team of engineers, functional analysts, testers, production support, etc.
- Responsible for innovation and end-to-end launch of products.
- Collaborates with stakeholders and customers across the organization to inform the product vision, strategy, features, and prioritization.
- Works with cross-functional teams including analytics, design/user experience, engineering, and user enablement.
- Provides all required status updates to the project team, users, and various levels of management.
- Manages the work of the Sprint development team and the Business Analysts team.
- Turns data insights into products with actionable outcomes for the ultimate customer.
- Works in an Agile environment and continuously reviews business needs, refines priorities, outlines milestones and deliverables, and identifies opportunities and risks.
- Translates product roadmap features into well-defined product requirements, including features, user stories, and acceptance test criteria.
- Leads the product functional design process based on an intimate knowledge of the users and technology.
- Well versed in writing and reviewing requirement documents and functional specifications, coordinating workshops, obtaining signoff, and reviewing project plans.
Your skills and experience
- 15+ years of technology experience with continuous hands-on coding exposure; ability to drive solutions and lead technology delivery and teams.
- Strong experience with and understanding of data engineering principles.
- Proficiency in coding is essential to this role, with a strong understanding of common languages and technologies such as Java, Python, SQL, and NoSQL.
- Big data tools: experience managing big data with tools such as Hadoop, MongoDB, Kafka, etc.
- Cloud computing: understanding of cloud storage and cloud computing.
- At least 5 years of team building and leadership experience in a matrix management environment.
- 5+ years of technology delivery lead experience.
- Proficient communication skills; fluent in English (written/verbal).
- Ability to work in virtual teams and in matrixed organisations.
- Ability to design and write code in accordance with provided business requirements.
- Relevant Financial Services experience.
- Ability to work in a fast-paced environment with competing and alternating priorities, with a constant focus on delivery.
- Ability to balance business demands and IT fulfilment in terms of standardisation, reducing risk, and increasing IT flexibility.
- Strong Java understanding, with technical expertise and knowledge of related technologies.
- Exposure to other technologies such as UNIX and job scheduling (e.g., Control-M).
- A strong desire to learn new technologies and implement solutions in a fast-paced environment.
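Purely as an illustration of the Kafka tooling named above, a minimal producer in Python using the kafka-python package might look like the sketch below; the broker address and topic name are placeholders.

# Hedged sketch: publishing JSON events to Kafka with kafka-python.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="broker:9092",                         # placeholder broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("archive-events", {"record_id": 42, "action": "archived"})
producer.flush()  # block until buffered messages are delivered
producer.close()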
Posted 2 weeks ago
6.0 - 8.0 years
8 - 10 Lacs
Bengaluru
Work from Office
Position Summary: The Data Science team is building the data platform that will enable us to create products and experiences that solve complex and critical problems for our customers. You will lead a team within the Data Science group, working across cross-functional groups to support strategic business decisions with insight into our lines of service, customers, and products. The role includes facilitating data usability, data standardisation, and user acceptance testing initiatives; developing and deploying standard and non-standard metrics; and creating analytical solutions. You will leverage your technical skills, business acumen, and creativity to extract and analyse massive data sets, build analytics-ready datasets that surface insights and key business metrics, contribute to metadata that improves data usability, and much more. The ideal candidate is deeply analytical and detail-oriented, yet capable of thinking independently, and people-oriented.
Responsibilities:
- Possesses strong analytical/logical thinking and communication skills
- Collaborate with the data warehousing team, ensuring that the data infrastructure supports the needs of our analytics team, and validate data quality
- Coordinate with business planners and decision makers to translate business questions into verifiable hypotheses and data models
- Develop clear, concise, actionable models and recommendations from mountains of data
- Advocate for exploration of interesting data anomalies or patterns that may provide more explanatory detail about customer behaviours or predictive value to the business
- Partner closely with business and technical teams to understand their project objectives and provide data-driven solutions and recommendations
- Design and develop data preparation components and processes that extract and transform data across disparate databases for reporting and analytics, ensuring the integrity of analysis data by developing requirement specifications and assisting the development and testing of data tables
- Ensure solutions are scalable, repeatable, efficient, and effective
- Work hands-on on various analytics problems and provide thought leadership on the problems we are working on
- Interact with the onsite team as well as the client on a daily/weekly basis to gather requirements and provide updates
- Be involved in the development of the company through pre-sales/operational support
Qualifications:
- 6-8 years of database experience with advanced SQL skills; experience researching and manipulating complex and large data sets (both distributed and non-distributed); should have experience leading a team
- Proficient in one or more scripting languages, such as Python or Scala
- Experience working with complex analytical tools, such as SAS or R
- Experience with data visualisation tools, such as Tableau, Spotfire, or QlikView
- Experience with statistical techniques such as Regression, Clustering, and Time Series Forecasting
- Proven ability to dig in and understand the data, and to leverage creative thinking and problem-solving skills to create new data models
- Strong familiarity with data infrastructure, data warehousing, or data engineering
- Coordinate with business planners and decision makers to translate business questions into verifiable data models and hypotheses
- Work with engineers to develop, test, and maintain the accurate tracking, capture, and reporting of key data
- Proficient in MS Excel and PowerPoint
- Bachelor's in Engineering or Master's in Statistics/Economics
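To make the statistical techniques listed above concrete, a minimal customer-segmentation sketch with scikit-learn could look like the following; the two synthetic features (spend, visits) and all parameters are invented for illustration.

# Hedged sketch: k-means segmentation on synthetic customer data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
low = rng.normal([500, 2], [100, 1], size=(100, 2))    # low-spend segment
high = rng.normal([2000, 8], [300, 2], size=(100, 2))  # high-spend segment
X = np.vstack([low, high])

X_scaled = StandardScaler().fit_transform(X)           # scale before k-means
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_scaled)
print(np.bincount(labels))  # approximate segment sizes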
Posted 2 weeks ago
5.0 - 8.0 years
12 - 20 Lacs
Hyderabad
Work from Office
Role & responsibilities
Data Engineer with Python, Spark, PySpark & HDFS: 9 positions
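For context on this profile, a minimal PySpark batch job of the kind implied by the stack (Python, Spark, PySpark, HDFS) is sketched below; the HDFS paths and column names are hypothetical.

# Hedged sketch: daily roll-up of a hypothetical orders dataset on HDFS.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-rollup").getOrCreate()

orders = spark.read.parquet("hdfs:///data/raw/orders")  # hypothetical path
daily = (orders
         .withColumn("order_date", F.to_date("order_ts"))
         .groupBy("order_date")
         .agg(F.count("*").alias("order_count"),
              F.sum("amount").alias("revenue")))
daily.write.mode("overwrite").parquet("hdfs:///data/curated/daily_orders")
spark.stop()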
Posted 2 weeks ago
6.0 - 8.0 years
16 - 20 Lacs
Hyderabad
Work from Office
Role & responsibilities
Key Responsibilities:
- Develop real-time data streaming applications using Apache Kafka and Kafka Streams.
- Build and optimize large-scale batch and stream processing pipelines with Apache Spark.
- Containerize applications and manage deployments using OpenShift and Kubernetes.
- Collaborate with DevOps teams to ensure CI/CD pipelines are robust and scalable.
- Write unit tests and conduct code reviews to maintain code quality and reliability.
- Work closely with Product and Data Engineering teams to understand requirements and translate them into technical solutions.
- Troubleshoot and debug production issues across multiple environments.
Required qualifications to be successful in this role
- Strong programming skills in Java/Python.
- Hands-on experience with Apache Kafka, Kafka Streams, and event-driven architecture.
- Solid knowledge of Apache Spark (batch and streaming).
- Experience with OpenShift, Kubernetes, and container orchestration.
- Familiarity with microservices architecture, RESTful APIs, and distributed systems.
- Experience with build tools such as Maven or Gradle.
- Familiarity with Git, Jenkins, CI/CD pipelines, and Agile development practices.
- Excellent problem-solving skills and the ability to work in a fast-paced environment.
Education & Experience:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Minimum 6 years of experience in backend development with Java and related technologies.
Preferred Skills (Nice to Have):
- Knowledge of cloud platforms such as AWS, Azure, or GCP.
- Understanding of security best practices in cloud-native environments.
- Familiarity with SQL/NoSQL databases (e.g., PostgreSQL, Cassandra, MongoDB).
- Experience with Scala or Python for Spark jobs is a plus.
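Kafka Streams itself is a Java library; to stay consistent with the Python sketches in this section, the example below uses Spark Structured Streaming's Kafka source instead, which covers the same Kafka-plus-Spark pattern the posting describes. The broker, topic, and message schema are placeholders, and the job needs the spark-sql-kafka connector package on its classpath.

# Hedged sketch: consume JSON events from Kafka with Structured Streaming.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StringType, DoubleType

spark = SparkSession.builder.appName("events-stream").getOrCreate()

schema = (StructType()
          .add("event_id", StringType())
          .add("amount", DoubleType()))

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")  # placeholder
          .option("subscribe", "orders")                     # placeholder topic
          .load()
          .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

query = (events.writeStream
         .format("console")   # sink chosen for demonstration only
         .outputMode("append")
         .start())
query.awaitTermination()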
Posted 2 weeks ago