4.0 - 6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Company Description
Impetus Technologies enables the Intelligent Enterprise™ with innovative data engineering, cloud, and enterprise AI services. Recognized as an AWS Advanced Consulting Partner, Elite Databricks Consulting Partner, Microsoft Data & AI Solutions Partner, and Elite Snowflake Services Partner, Impetus offers a suite of cutting-edge IT services and solutions to drive innovation and transformation for businesses across various industries. With a proven track record with Fortune 500 clients, Impetus drives growth, enhances efficiency, and ensures a competitive edge through continuous innovation and flawless delivery.

Role Description
This is a full-time on-site role for a Senior BI Engineer. The Senior BI Engineer will be responsible for developing data models, building and maintaining ETL processes, creating and managing data warehouses, and developing dashboards. The role also involves conducting data analysis to generate insights and support business decisions, ensuring data quality and compliance, and collaborating with various teams to meet business requirements.

Experience: 4 to 6 years

Qualifications
- Proficient in data modeling and data warehousing
- Strong skills in databases/data warehouses such as Oracle, MySQL, DB2, Databricks, or Snowflake, with expertise in writing complex SQL queries
- Sound knowledge of and experience in developing semantic models in Power BI or any comparable BI tool
- Strong Extract, Transform, Load (ETL) skills
- Experience in creating and managing dashboards
- Excellent analytical skills
- Good communication and teamwork abilities
- Ability to work in a fast-paced, dynamic environment
- Experience in the IT or consulting industry is a plus
- Bachelor's degree in technology, Master of Computer Applications, or a related field
Posted 1 week ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
About The Role
Grade Level (for internal use): 09

The Team: The team works in an Agile environment and adheres to all basic principles of Agile. As a Quality Engineer, you will work with a team of intelligent, ambitious, and hard-working software professionals. The team independently drives all decisions and is responsible for the architecture, design, and development of our products with high quality.

The Impact
- Achieve individual objectives and contribute to the achievement of team objectives.
- Work on problems of moderate scope where analysis of situations or data requires a review of a variety of factors.
- Perform ETL testing of various feeds on servers (Oracle, SQL, Hive, Databricks) using different testing strategies to ensure data quality, consistency, and timeliness.
- Achieve the above intelligently and economically using QA best practices.

What Is In It For You
- Be part of a successful team which works on delivering top-priority projects that directly contribute to the company's strategy.
- This is the place to enhance your testing skills while adding value to the business.
- As an experienced member of the team, you will have the opportunity to own and drive a project end to end and collaborate with developers, business analysts, and product managers who are experts in their domains, helping you build multiple skillsets.

Responsibilities
As a Quality Engineer, you are responsible for:
- Defining quality metrics: Define quality standards and metrics for the current project/product. Work with all stakeholders to ensure that the quality metrics are reviewed, closed, and agreed upon. Create a list of milestones and checkpoints, and set measurable criteria to check quality on a timely basis.
- Defining testing strategies: Define processes for the test plan and the phases of the testing cycle. Plan and schedule milestones and tasks such as alpha and beta testing. Ensure all development tasks meet quality criteria through test planning, test execution, quality assurance, and issue tracking. Work closely to the deadlines of the project. Keep raising the bar and standards of all quality processes with every project, with a mindset of continuous innovation.
- Managing risks: Understand and define areas for calculating the overall risk to the project. Create strategies to mitigate those risks and take the necessary measures to control them. Communicate the various risks to all stakeholders, review current risks, and escalate as needed.
- Process improvements: Continuously challenge yourself to automate daily work, and help others with automation. Create milestones for yearly improvement projects and set measurable targets. Work with the development team to ensure that quality engineers get apt support, such as automation hooks or debug builds, wherever and whenever possible.

Basic Qualifications
What we are looking for:
- Bachelor's/PG degree in Computer Science, Information Systems, or equivalent.
- 3-6 years of intensive experience in database and ETL testing.
- Experience in running queries, data management, managing large data sets, and dealing with databases.
- Strong at creating SQL queries that can parse and validate business rules/calculations (a hedged sketch of such a check appears at the end of this posting).
- Experience in writing complex SQL scripts, stored procedures, and integration packages.
- Experience in tuning and improving DB performance of complex enterprise-class applications.
- Ability to develop comprehensive test strategies, test plans, and test cases to test big data implementations.
- Proficient with software development lifecycle (SDLC) methodologies like Agile, QA methodologies, defect management systems, and documentation.
- Good at setting quality standards in various new testing technologies in the industry.
- Good at identifying and defining areas to calculate the overall risk to the project, creating strategies to mitigate those risks, and escalating as necessary.
- Excellent analytical and communication skills are essential, with strong verbal and writing proficiencies.

Preferred Qualifications
- Strong in ETL and big data testing
- Proficiency in SQL

About S&P Global Market Intelligence
At S&P Global Market Intelligence, a division of S&P Global, we understand the importance of accurate, deep, and insightful information. Our team of experts delivers unrivaled insights and leading data and technology solutions, partnering with customers to expand their perspective, operate with confidence, and make decisions with conviction. For more information, visit www.spglobal.com/marketintelligence.

What's In It For You?

Our Purpose
Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People
We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it, we are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values
Integrity, Discovery, Partnership
At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits
We take care of you, so you can take care of business. We care about our people. That's why we provide everything you—and your career—need to thrive at S&P Global.

Our benefits include:
- Health & Wellness: Health care coverage designed for the mind and body.
- Flexible Downtime: Generous time off helps keep you energized for your time on.
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference.

For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries

Global Hiring And Opportunity At S&P Global
At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Recruitment Fraud Alert
If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, "pre-employment training" or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here.

Equal Opportunity Employer
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment.

If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.1 - Middle Professional Tier I (EEO Job Group)
Job ID: 314958
Posted On: 2025-07-15
Location: Gurgaon, India
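As a hedged illustration of the ETL-validation skills this role calls for, here is a minimal PySpark sketch of two typical checks: a row-count reconciliation between a source feed and a loaded table, and a business-rule validation. All table and column names are invented for illustration; they are not from the posting.

```python
# Hypothetical sketch: validating an ETL load. Table/column names are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-validation").getOrCreate()

source = spark.table("staging.trades")     # assumed source feed
target = spark.table("warehouse.trades")   # assumed loaded table

# Check 1: no rows lost or duplicated between source and target.
src_count, tgt_count = source.count(), target.count()
assert src_count == tgt_count, f"Row count mismatch: {src_count} vs {tgt_count}"

# Check 2: a business rule -- notional must equal price * quantity.
violations = target.filter(
    F.abs(F.col("notional") - F.col("price") * F.col("quantity")) > 0.01
).count()
assert violations == 0, f"{violations} rows violate the notional rule"
```

In practice checks like these would be parameterized per feed and wired into the test plan rather than hard-coded, but the shape of the validation is the same.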
Posted 1 week ago
3.0 - 8.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
TCS Hiring!!!
Role: Data Scientist
Required Technical Skill Set: Data Science
Experience: 3-8 years
Locations: Kolkata, Hyderabad, Bangalore, Chennai, Pune

Job Description:

Must-Have
- Proficiency in Python or R for data analysis and modeling.
- Strong understanding of machine learning algorithms (regression, classification, clustering, etc.).
- Experience with SQL and working with relational databases.
- Hands-on experience with data wrangling, feature engineering, and model evaluation techniques (a minimal sketch of this workflow follows this posting).
- Experience with data visualization tools like Tableau, Power BI, or matplotlib/seaborn.
- Strong understanding of statistics and probability.
- Ability to translate business problems into analytical solutions.

Good-to-Have
- Experience with deep learning frameworks (TensorFlow, Keras, PyTorch).
- Knowledge of big data platforms (Spark, Hadoop, Databricks).
- Experience deploying models using MLflow, Docker, or cloud platforms (AWS, Azure, GCP).
- Familiarity with NLP, computer vision, or time series forecasting.
- Exposure to MLOps practices for model lifecycle management.
- Understanding of data privacy and governance concepts.
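As a hedged, minimal sketch of the must-have workflow above (a classifier plus model evaluation), the following uses scikit-learn with a bundled dataset; the dataset, model, and parameters are illustrative choices, not requirements from the posting.

```python
# Minimal sketch: train/test split, a classification model, and evaluation.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Evaluate with precision/recall/F1 rather than accuracy alone.
print(classification_report(y_test, model.predict(X_test)))
```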
Posted 1 week ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About us: Where elite tech talent meets world-class opportunities!

At Xenon7, we work with leading enterprises and innovative startups on exciting, cutting-edge projects that leverage the latest technologies across various domains of IT, including Data, Web, Infrastructure, AI, and many others. Our expertise in IT solutions development and on-demand resources allows us to partner with clients on transformative initiatives, driving innovation and business growth. Whether it's empowering global organizations or collaborating with trailblazing startups, we are committed to delivering advanced, impactful solutions that meet today's most complex challenges. We are building a community of top-tier experts, and we're opening the doors to an exclusive group of exceptional AI & ML professionals ready to solve real-world problems and shape the future of intelligent systems.

Structured Onboarding Process
We ensure every member is aligned and empowered:
- Screening: We review your application and experience in Data & AI, ML engineering, and solution delivery
- Technical Assessment: A two-step technical assessment process that includes an interactive problem-solving test and a verbal interview about your skills and experience
- Matching You to Opportunity: We explore how your skills align with ongoing projects and innovation tracks

Who We're Looking For
As a Data Analyst, you will work closely with business stakeholders, data engineers, and data scientists to analyze large datasets, build scalable queries and dashboards, and provide deep insights that guide strategic decisions. You'll use Databricks for querying, transformation, and reporting across Delta Lake and other data sources, helping teams understand and act on data with confidence.

Requirements
- 6+ years of experience in data analysis, BI, or analytics roles
- Strong experience with Databricks Notebooks, SQL, and Delta Lake
- Proficiency in writing complex SQL queries (joins, CTEs, window functions); see the sketch after this posting
- Experience with data profiling, data validation, and root-cause analysis
- Comfortable working with large-scale datasets and performance tuning
- Solid understanding of data modeling concepts and ETL workflows
- Experience with business intelligence tools (e.g., Power BI, Tableau)
- Familiarity with Unity Catalog and data access governance (a plus)
- Exposure to Python or PySpark for data wrangling (a plus)

Benefits
At Xenon7, we're not just building AI systems—we're building a community of talent with the mindset to lead, collaborate, and innovate together.
- Ecosystem of Opportunity: You'll be part of a growing network where client engagements, thought leadership, research collaborations, and mentorship paths are interconnected. Whether you're building solutions or nurturing the next generation of talent, this is a place to scale your influence
- Collaborative Environment: Our culture thrives on openness, continuous learning, and engineering excellence. You'll work alongside seasoned practitioners who value smart execution and shared growth
- Flexible & Impact-Driven Work: Whether you're contributing from a client project, innovation sprint, or open-source initiative, we focus on outcomes—not hours. Autonomy, ownership, and curiosity are encouraged here
- Talent-Led Innovation: We believe communities are strongest when built around real practitioners. Our Innovation Community isn't just a knowledge-sharing forum—it's a launchpad for members to lead new projects, co-develop tools, and shape the direction of AI itself
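As a hedged illustration of the SQL proficiency listed above (CTEs and window functions) as it might be exercised from a Databricks notebook, here is a minimal sketch; the `sales` table, its columns, and the notebook-provided `spark` session are all assumptions for illustration.

```python
# Hypothetical sketch: a CTE plus a window function, run via spark.sql
# in a Databricks notebook (where `spark` is predefined).
df = spark.sql("""
    WITH monthly AS (
        SELECT region,
               date_trunc('month', order_date) AS month,
               SUM(amount) AS revenue
        FROM sales
        GROUP BY region, date_trunc('month', order_date)
    )
    SELECT region,
           month,
           revenue,
           -- window function: rank regions by revenue within each month
           RANK() OVER (PARTITION BY month ORDER BY revenue DESC) AS region_rank
    FROM monthly
""")
df.display()  # display() is Databricks-specific; use df.show() elsewhere
```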
Posted 1 week ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
We are hiring - Data Engineering Specialist

About the client
Our client is a British multinational general insurance company headquartered in London, England, with major operations in the United Kingdom, Ireland, Scandinavia, and Canada. It provides insurance products and services in more than 100 countries through a network of local partners.

Key Responsibilities:
The purpose of the Data Engineer role is to build and unit test code for projects and programmes on the Azure Cloud Data and Analytics Platform.
- Analyse business requirements and support/create designs for requirements
- Build and deploy new data mappings, sessions, and workflows, and changes to existing ones, on the Azure Cloud Platform; the key focus area is Azure Databricks
- Develop performant code
- Perform ETL routine performance tuning, troubleshooting, support, and capacity estimation
- Conduct thorough testing of ETL code changes to ensure quality deliverables
- Provide day-to-day support and mentoring to end users who are interacting with the data
- Profile and understand large amounts of available source data, including structured data; analyse defects and provide fixes
- Provide release notes for deployments and support release activities
- Bring a problem-solving attitude
- Keep up to date with new skills and develop technology skills in other areas of the platform

Skills & Experience Required
- Experienced in ETL tools and data projects
- Recent Azure experience with strong knowledge of Azure Databricks (Python/SQL)
- Good knowledge of SQL and Python
- Strong analytical skills
- Azure DevOps knowledge
- Experience with Azure Databricks and Logic Apps would be highly desirable
- Experience with Python programming would be highly desirable
- Experience with Azure Functions would be a plus

Interested candidates can apply by sharing their resume at techcareers@invokhr.com or apply via the LinkedIn job post.
Posted 1 week ago
5.0 - 10.0 years
17 - 32 Lacs
Kochi
Hybrid
We are conducting a weekday walk-in drive in Kochi from 15th July to 21st July 2025 (weekdays only).
Venue: Neudesic, an IBM Company, 3rd Floor, Block A, Prestige Cyber Green Phase 1, Smart City, Kakkanad, Ernakulam, Kerala 682030
Time: 2 PM - 6 PM
Date: 28 June 2025, Saturday
Experience: 5+ yrs
Mode of Interview: In-Person
Only for candidates who can join within 30 days.

Azure Data Engineer
Skills required: SQL, Python, PySpark, Azure Data Factory, Azure Data Lake Gen2, Azure Databricks, Azure Synapse, NoSQL DBs, Data Warehouses, GenAI (desirable)
Strong data engineering skills in data cleansing, transformation, enrichment, semantic analytics, real-time analytics, ML/DL (desirable), streaming, data modeling, and data management.
Posted 1 week ago
8.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Position Overview:
ShyftLabs is seeking an experienced Databricks Architect to lead the design, development, and optimization of big data solutions using the Databricks Unified Analytics Platform. This role requires deep expertise in Apache Spark, SQL, Python, and cloud platforms (AWS/Azure/GCP). The ideal candidate will collaborate with cross-functional teams to architect scalable, high-performance data platforms and drive data-driven innovation.

ShyftLabs is a growing data product company that was founded in early 2020 and works primarily with Fortune 500 companies. We deliver digital solutions built to accelerate business growth across various industries by focusing on creating value through innovation.

Job Responsibilities
- Architect, design, and optimize big data and AI/ML solutions on the Databricks platform.
- Develop and implement highly scalable ETL pipelines for processing large datasets (a hedged Delta Lake sketch follows this posting).
- Lead the adoption of Apache Spark for distributed data processing and real-time analytics.
- Define and enforce data governance, security policies, and compliance standards.
- Optimize data lakehouse architectures for performance, scalability, and cost-efficiency.
- Collaborate with data scientists, analysts, and engineers to enable AI/ML-driven insights.
- Oversee and troubleshoot Databricks clusters, jobs, and performance bottlenecks.
- Automate data workflows using CI/CD pipelines and infrastructure-as-code practices.
- Ensure data integrity, quality, and reliability across all data processes.

Basic Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
- 8+ years of hands-on experience in data engineering, with at least 5+ years as a Databricks architect working with Apache Spark.
- Proficiency in SQL, Python, or Scala for data processing and analytics.
- Extensive experience with cloud platforms (AWS, Azure, or GCP) for data engineering.
- Strong knowledge of ETL frameworks, data lakes, and Delta Lake architecture.
- Hands-on experience with CI/CD tools and DevOps best practices.
- Familiarity with data security, compliance, and governance best practices.
- Strong problem-solving and analytical skills in a fast-paced environment.

Preferred Qualifications:
- Databricks certifications (e.g., Databricks Certified Data Engineer, Spark Developer).
- Hands-on experience with MLflow, Feature Store, or Databricks SQL.
- Exposure to Kubernetes, Docker, and Terraform.
- Experience with streaming data architectures (Kafka, Kinesis, etc.).
- Strong understanding of business intelligence and reporting tools (Power BI, Tableau, Looker).
- Prior experience working with retail, e-commerce, or ad-tech data platforms.

We are proud to offer a competitive salary alongside a strong insurance package. We pride ourselves on the growth of our employees, offering extensive learning and development resources.
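As a hedged sketch of the kind of scalable, idempotent Delta Lake ETL step the responsibilities describe, the following uses PySpark's MERGE pattern on Databricks; all paths, table names, and columns are assumptions for illustration, not details from the posting.

```python
# Illustrative Delta Lake ETL step: ingest raw files, clean them, and upsert
# into a Delta table. Assumes a Databricks environment where `spark` exists
# and the target table has already been created.
from delta.tables import DeltaTable
from pyspark.sql import functions as F

raw = spark.read.format("json").load("/mnt/raw/events/")  # assumed landing zone

cleaned = (raw
           .withColumn("event_date", F.to_date("event_ts"))
           .dropDuplicates(["event_id"]))

target = DeltaTable.forName(spark, "analytics.events")  # assumed existing table

# MERGE makes re-runs idempotent -- a common pattern for reliable pipelines.
(target.alias("t")
 .merge(cleaned.alias("s"), "t.event_id = s.event_id")
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())
```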
Posted 1 week ago
3.5 years
0 Lacs
Kochi, Kerala, India
On-site
Job Title: Data Scientist
Location: Kochi, Coimbatore, Trivandrum
Must have skills: Big Data, Python or R
Good to have skills: Scala, SQL

Job Summary
A Data Scientist is expected to be hands-on in delivering end-to-end projects undertaken in the analytics space. They must have a proven ability to drive business results with their data-based insights, and must be comfortable working with a wide range of stakeholders and functional teams. The right candidate will have a passion for discovering solutions hidden in large data sets and for working with stakeholders to improve business outcomes.

Roles and Responsibilities
- Identify valuable data sources and collection processes
- Supervise preprocessing of structured and unstructured data
- Analyze large amounts of information to discover trends and patterns for the insurance industry
- Build predictive models and machine-learning algorithms
- Combine models through ensemble modeling
- Present information using data visualization techniques
- Collaborate with engineering and product development teams
- Bring hands-on knowledge of implementing various AI algorithms and their best-fit scenarios
- Experience with Generative AI based implementations

Professional And Technical Skills
- 3.5-5 years' experience in analytics systems/program delivery, including at least two Big Data or Advanced Analytics project implementations
- Experience using statistical computer languages (R, Python, SQL, PySpark, etc.) to manipulate data and draw insights from large data sets; familiarity with Scala, Java, or C++
- Knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural networks, etc.) and their real-world advantages/drawbacks
- Knowledge of advanced statistical techniques and concepts (regression, properties of distributions, statistical tests and their proper usage, etc.) and experience with applications
- Hands-on experience in an Azure/AWS analytics platform (3+ years)
- Experience using variations of Databricks or similar analytical applications in AWS/Azure
- Experience using business intelligence tools (e.g., Tableau) and data frameworks (e.g., Hadoop)
- Strong mathematical skills (e.g., statistics, algebra)
- Excellent communication and presentation skills
- Experience deploying data pipelines in production based on Continuous Delivery practices

Additional Information
- Multi-industry domain experience
- Expert in Python, Scala, SQL
- Knowledge of Tableau/Power BI or similar self-service visualization tools
- Top-notch interpersonal and team skills
- Prior leadership experience is nice to have

About Our Company | Accenture
Posted 1 week ago
5.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Looking for immediate joiners.
Location: Mumbai
Experience: 5+ years

As a Senior Databricks Administrator, you will be responsible for the setup, configuration, administration, and optimization of the Databricks Platform on AWS. This role plays a critical part in managing secure, scalable, and high-performing Databricks environments, with a strong focus on governance, user access management, cost optimization, and platform operations. You will collaborate closely with engineering, infrastructure, and compliance teams to ensure that the Databricks platform meets enterprise data and regulatory requirements.

Must-have Skills
- 6+ years of experience in Databricks administration on AWS or multi-cloud environments.
- Deep understanding of Databricks workspace architecture, Unity Catalog, and cluster configuration best practices.
- Strong experience in managing IAM policies, SCIM integration, and access provisioning workflows.
- Hands-on experience with monitoring, cost optimization, and governance of large-scale Databricks deployments.
- Hands-on experience with infrastructure-as-code (Terraform) and CI/CD pipelines.
- Experience with ETL orchestration and collaboration with engineering teams (Databricks Jobs, Workflows, Airflow).
Posted 1 week ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description - Data Analyst

About the Role
We're looking for a data-savvy, hands-on analyst who's comfortable working with Python and Power BI and has a basic understanding of Databricks. You don't need to be an expert, but you should be curious, eager to learn, and comfortable working with data from multiple systems. This is not just a pricing support role; it's a key position in shaping how our Databricks environment connects to business reporting across the company. You'll help the Market Intelligence and Global Pricing teams bridge technical data flows with business reporting and support the continued development of a Python-based pricing algorithm (developed with a top consultancy). The ideal candidate has medium-level Python skills, can confidently navigate Power BI, and is open to learning more about Databricks, ETL, and data science. AI tools like ChatGPT or Copilot are part of your workflow, helping you write or troubleshoot scripts and think creatively about solutions.

Key Responsibilities
- Work with data stored in Databricks, including maintaining and improving a Python-based pricing allocation model (random forest + clustering; a hedged sketch of this model family appears at the end of this posting) and building additional data pipelines as needed for business reporting.
- Use and adapt Python scripts to transform data from Databricks, ERP systems, and other internal sources into clean, BI-ready datasets.
- Build and maintain Power BI dashboards using multiple data sources: ERP (M3, BPCS), Excel, and customer inputs.
- Help manage basic ETL workflows: extract, clean, and transform data, even without a dedicated data engineer.
- Define, configure, and register new data sources (e.g., Excel, CSVs) in Power BI and create structured, reusable models.
- Collaborate with business and IT stakeholders to identify and transform the right data within Databricks for dashboarding and reporting needs.
- Translate business questions into scalable datasets and intuitive Power BI dashboards for marketing, pricing, and executive teams.
- Gradually support more advanced analytics: clustering, A/B testing, predictive modeling, and performance diagnostics.

Who You Are
- Comfortable working with Python (medium level) and able to write, debug, or adapt data scripts.
- Confident using AI tools like ChatGPT or GitHub Copilot to create or fix Python code, with judgment on when to refine it manually.
- Familiar with or open to learning Databricks. Basic experience is a plus, and our IT team will help you develop further.
- Skilled in Power BI, especially with data modeling, DAX, and combining structured and unstructured sources.
- Comfortable navigating ERP systems, data lakes, and warehouse environments, even when data isn't clean or standardized.
- Able to work independently to troubleshoot data issues and build end-to-end reporting solutions.
- A strong communicator who can explain technical results to commercial or non-technical audiences.
- Curious, proactive, and eager to grow your skillset in data science, predictive analytics, and data engineering over time.

Preferred Qualifications
- Bachelor's or master's degree in data science, engineering, business analytics, economics, or a related field.
- 4+ years of experience in a relevant field.
- Working knowledge of SQL, DAX, or similar query tools.
- Experience or interest in machine learning, clustering algorithms, random forest models, A/B testing, and hypothesis testing.
- Understanding of pricing, commercial analytics, or the mining/heavy equipment industry is a plus but not required.

Why Join Us?
You'll be part of a growing analytics function at a key moment of transformation. This role gives you the chance to shape how we manage data, from raw ERP or customer files to dynamic dashboards in Power BI. With hands-on exposure to Databricks, Python, and AI-assisted scripting, you'll work across multiple domains and play a visible role in how we make data-driven decisions.

Epiroc is a global productivity partner for mining and construction customers, and accelerates the transformation toward a sustainable society. With ground-breaking technology, Epiroc develops and provides innovative and safe equipment, such as drill rigs, rock excavation and construction equipment, and tools for surface and underground applications. The company also offers world-class service and other aftermarket support as well as solutions for automation, digitalization, and electrification. Epiroc is based in Stockholm, Sweden, had revenues of more than SEK 60 billion in 2023, and has around 18,200 passionate employees supporting and collaborating with customers in around 150 countries. Learn more at www.epiroc.com.
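As a hedged sketch of the model family the pricing algorithm is described as using (clustering plus random forests), the following segments synthetic data with k-means and fits one regressor per segment; every feature, weight, and parameter here is invented for illustration and says nothing about the actual model.

```python
# Illustrative only: segment records, then fit a per-segment pricing regressor.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))  # pretend features: volume, region, cost, demand
prices = X @ np.array([5.0, -2.0, 3.0, 1.5]) + rng.normal(scale=0.5, size=500)

# Step 1: cluster records into segments.
segments = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Step 2: fit one random forest per segment, so each cluster gets its own logic.
models = {}
for seg in np.unique(segments):
    mask = segments == seg
    models[seg] = RandomForestRegressor(random_state=0).fit(X[mask], prices[mask])

print({seg: round(m.score(X[segments == seg], prices[segments == seg]), 3)
       for seg, m in models.items()})
```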
Posted 1 week ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Veeam, the #1 global market leader in data resilience, believes businesses should control all their data whenever and wherever they need it. Veeam provides data resilience through data backup, data recovery, data portability, data security, and data intelligence. Based in Seattle, Veeam protects over 550,000 customers worldwide who trust Veeam to keep their businesses running. Join us as we move forward together, growing, learning, and making a real impact for some of the world's biggest brands. The future of data resilience is here - go fearlessly forward with us.

This APJ-based role is focused on supporting the multitude of existing enablement programs for our internal and channel selling programs. With a drive for analysis, the successful candidate will have a passion for using data to inform decisions. Utilising strong skills in reporting and metrics, they will be responsible for guiding the creation and utilisation of content. The successful candidate will be involved in identifying, planning, and shaping end-user enablement materials and activities. Working closely with the cross-functional APJ sales, technical, and marketing teams, the role requires a candidate who is curious about data, ROI, and the measurement of program success.

As part of the APJ sales acceleration team, this role reports to the APJ Sales Acceleration Senior Director and is office-based three days a week (Tue-Thu), with international travel to regions including India, Korea, Australia, and Southeast Asia as needed.

Responsibilities
- Be involved in the creation, delivery, execution, and reporting of sales-based programs such as (but not limited to) webinars, partner competency programs, and sales training microlearning.
- Measure the efficacy/ROI of enablement programs. Provide detailed analysis of what works and why, and use these findings to guide further programs.
- Create and maintain accurate data required for event attendance, such as launchpad and sales training, with detailed reporting on progress using tools such as Tableau, Excel, Monday.com, and others.
- Actively manage the onboarding process.
- Work with procurement departments on logistics for hotel-based events.
- Evaluate existing programs to ensure their quality and effectiveness.
- Communicate weekly with stakeholders from the enablement team and marketing and sales stakeholders.
- Be the APJ leader in maintaining content repositories on platforms such as Cornerstone.

Qualifications
- Familiarity with sales methodologies and their adaptation into a sales environment.
- Awareness of or experience with Salesforce.com or a similar CRM preferred.
- Awareness of or experience with collaboration tools like MS Teams, WebEx, etc.
- Facilitation and coaching experience.
- Creation/maintenance of Monday.com boards.
- Advanced Excel skills.
- Advanced Tableau/Databricks skills.
- Familiarity with basic AI concepts such as LLMs, token weighting, etc.
- Experience in DISC.
- Solid ROI research credentials.
- Proven Veeam portfolio knowledge.
- Demonstrated experience in either a partner or partner ecosystem training role.
- Excellent communication and interpersonal skills.
- Proven record of driving programs and projects independently with success.
- Exceptional organization skills with the ability to manage multiple projects simultaneously.
- Ability to adapt in a fast-paced work environment; must be a high-energy, motivated self-starter.
- Able to travel as needed (up to 30%), including international travel.
Veeam Software is an equal opportunity employer and does not tolerate discrimination in any form on the basis of race, color, religion, gender, age, national origin, citizenship, disability, veteran status or any other classification protected by federal, state or local law. All your information will be kept confidential. Please note that any personal data collected from you during the recruitment process will be processed in accordance with our Recruiting Privacy Notice. The Privacy Notice sets out the basis on which the personal data collected from you, or that you provide to us, will be processed by us in connection with our recruitment processes. By applying for this position, you consent to the processing of your personal data in accordance with our Recruiting Privacy Notice.
Posted 1 week ago
7.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Computer Vision & Machine Learning Lead Engineer

We are seeking a Computer Vision & Machine Learning Engineer with strong software development expertise who can architect and develop ML-based solutions for computer vision applications and deploy them at scale.

Minimum qualifications:
- Bachelor's or Master's degree in Computer Science, Electrical Engineering, Information Systems, or a related field.
- 7+ years of extensive software development experience in Python and PyTorch, including reading/debugging code in Python, C++, and shell.
- 4+ years of experience directly working on ML-based solutions, preferably convolutional neural networks applied to computer vision problem statements (a minimal PyTorch sketch follows this posting).
- Proficiency in software design and architecture and object-oriented programming.
- Experience working with Docker or similar containerization frameworks, along with container orchestration.
- Experience with Linux/Unix or similar systems, from the kernel to the shell, file systems, and client-server protocols.
- Experience troubleshooting and resolving technical issues in an application tech stack including AI/ML.
- Solid understanding of common SQL and NoSQL databases.
- Experience working with AWS or similar platforms.
- Strong communication skills and ability to work effectively in a team.

Preferred qualifications:
- Experience working with distributed clusters and multi-node environments.
- Familiarity with the basics of web technologies and computer networking.
- AWS certifications or similar.
- Formal academic background in machine learning.
- Experience working with large image datasets (100K+ images).

Responsibilities
- Architect and develop machine-learning-based computer vision algorithms for various applications.
- Deliver software and solutions that meet all quality standards.
- Design, implement, and optimize machine learning training and inference pipelines and algorithms on cloud or on-prem hardware.
- Understand functional and non-functional requirements of features and break down tasks for the team.
- Take ownership of delivery for yourself as well as the team.
- Collaborate closely with product owners and domain/technology experts to integrate and validate software within a larger system.
- Engage with internal teams and provide support to teams located in North America and Europe.

Base skillset: Python, PyTorch, one cloud platform (AWS/GCP/Azure), Linux, Docker, databases
Optional skillset: Databricks, MLOps, CI/CD
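As a minimal, hedged PyTorch sketch of the convolutional-network experience the qualifications call for, the following defines a small CNN and runs a dummy forward pass; the architecture, class names, and input shape are illustrative assumptions, not a model from this role.

```python
# Minimal CNN for image classification in PyTorch; shapes are illustrative.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 56 * 56, num_classes)  # assumes 224x224 input

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.head(x.flatten(1))

model = SmallCNN()
logits = model(torch.randn(4, 3, 224, 224))  # batch of 4 dummy RGB images
print(logits.shape)                           # torch.Size([4, 10])
```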
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
chennai, tamil nadu
On-site
We are looking for a Full Stack Developer with at least 8 years of relevant experience to join our dynamic team. The ideal candidate should have a strong background in both front-end and back-end development, with a focus on building and managing scalable, high-performance applications. Expertise in the React front-end framework and Python on the back end is essential, along with proficiency in front-end technologies such as HTML and CSS, strong back-end development skills, and knowledge of SQL.

The work location for this position is Chennai/Bangalore, and the work timing is 2:00 PM to 11:00 PM. The interview process consists of three levels: a Glider test with a minimum cutoff of 70% and two rounds of technical interviews.

To qualify for this role, you need to exhibit:
- Strong communication and interpersonal skills
- Ability to collaborate effectively with internal and external stakeholders
- Innovative and analytical thinking
- Capacity to manage workload under time constraints and shifting priorities
- Adaptability and eagerness to learn new technologies and methodologies

In terms of technical proficiency, the ideal candidate should have:
- Expertise in the React front-end framework and Python back-end development
- Proficiency in front-end technologies such as HTML and CSS, and strong back-end development skills
- Proficiency in Git and CI/CD practices
- Experience developing and maintaining web applications using modern frameworks and technologies
- Ability to help maintain code quality, organization, and automation
- Experience with relational database management systems
- Familiarity with cloud services, primarily Azure (AWS, Azure, or Google Cloud)

Industry knowledge in the oil and gas sector, particularly in trading operations, is highly desirable. Understanding market data, trading systems, and financial instruments related to oil and gas would be an added advantage.

Preferred qualifications include:
- Certifications in relevant technologies or methodologies
- Proven experience in building, operating, and supporting robust and performant databases and data pipelines
- Experience with Databricks and Snowflake
- Solid understanding of web performance optimization, security, and best practices
- Experience supporting Power BI dashboards

This is an individual contributor role requiring outstanding communication skills.
Posted 1 week ago
7.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About Company: They balance innovation with an open, friendly culture and the backing of a long-established parent company known for its ethical reputation. We guide customers from what's now to what's next by unlocking the value of their data and applications to solve their digital challenges, achieving outcomes that benefit both business and society.

Job Title: Power BI + Knime
Location: Bangalore, Pune, Chennai, Hyderabad
Work Mode: Hybrid
Experience: 7+ years (5 years relevant)
Job Type: Contract to hire (C2H)
Notice Period: Immediate joiners

Mandatory Skills: Power BI, Knime, Japanese language skills

Additional Skills:
- Develop Power BI reports of medium and high complexity independently.
- Experience in developing ETL pipelines using Knime is required.
- Knowledge of importing data into Power BI from various sources such as SQL Server, Excel, etc.
- Experience in the Power Platform (Power BI, Power Automate, and Power Apps) is an added advantage.
- Should be familiar with Power BI Gen 2.
- Write DAX queries, implement row-level security, and configure gateways in Power BI services.
- Experience in performance tuning of dashboards and refreshes is a must-have.
- Experience in modeling with Azure Analysis Services would be nice to have.
- Experience working on the Azure platform is preferred.
- Power BI administration and configuration.
- Power BI maintenance (workspaces and security, data models and measures in datasets, deployment pipelines, refresh schedules).
- Responsible for development and maintenance of the existing applications, including development of any changes or fixes to the current design.
- Knowledgeable in building data models for reporting and analytics solutions.
- Knowledgeable in integrations between back-end and front-end services (e.g., gateways).
- Familiarity with cloud technologies, primarily MS Azure, including Databricks, ADF, SQL DB, Storage Accounts, Key Vault, Application Gateways, VNets, and Azure Portal management, is good to have.
Posted 1 week ago
6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Position: Senior Data Analyst (Tableau & Databricks)
Client: One of our prestigious clients
Locations: Pune/Hyderabad
Mode of hiring: Full time/Permanent
Experience: 6+ years
Budget: 28-30 LPA
Notice Period: 0-30 days (only candidates serving notice period)
Share your CV 📧: sathish.m@tekgence.com

Job Title: Sr Data Analyst

Job Summary:
We are seeking a highly skilled visualization expert with deep expertise in Tableau and practical experience in Databricks integration to join our data team. The ideal candidate will play a pivotal role in transforming complex data into actionable insights through engaging dashboards and reports, while ensuring seamless data pipelines and optimized connections between Databricks and Tableau.

Key Responsibilities:
- Design, develop, and maintain advanced Tableau dashboards and visualizations to support business decision-making.
- Integrate Tableau with Databricks to access and visualize data efficiently from Delta Lake, Spark, and other sources.
- Collaborate with data engineers, analysts, and stakeholders to understand business needs and translate them into visualization solutions.
- Optimize Tableau extracts and live connections for performance and scalability.
- Develop and document best practices for visualization design, data governance, and dashboard deployment.
- Ensure data accuracy, reliability, and security in visualizations.
- Stay current with Tableau and Databricks features, and implement new capabilities where beneficial.

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Systems, Data Science, or a related field.
- 5+ years of experience building Tableau dashboards and visualizations.
- 1+ years of hands-on experience integrating Tableau with Databricks (including SQL, Delta Lake, and Spark environments).
- Strong understanding of data modeling, ETL processes, and analytics workflows.
- Proficient in writing optimized SQL queries.
- Experience with Tableau Server or Tableau Cloud deployment and administration.
- Ability to work with large datasets and troubleshoot performance issues.

Preferred Qualifications:
- Experience with scripting or automation tools (e.g., Python, dbt).
- Familiarity with other BI tools and cloud platforms (e.g., Power BI, AWS, Azure).
- Tableau certification (Desktop Specialist/Professional or Server).
- Understanding of data privacy and compliance standards (e.g., GDPR, HIPAA).

Soft Skills:
- Strong analytical and problem-solving skills.
- Excellent communication and stakeholder management abilities.
- Detail-oriented with a strong focus on data accuracy and user experience.
- Comfortable working independently and collaboratively in a fast-paced environment.
Posted 2 weeks ago
7.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About Client:
Our client is a multinational IT services and consulting company headquartered in the USA, with revenues of 19.7 billion USD, a global workforce of 350,000, and a NASDAQ listing. It is one of the leading IT services firms globally, known for its work in digital transformation, technology consulting, and business process outsourcing. Its business focuses on digital engineering, cloud services, AI and data analytics, enterprise applications (SAP, Oracle, Salesforce), IT infrastructure, and business process outsourcing. It has major delivery centers in India, including in Chennai, Pune, Hyderabad, and Bengaluru, and offices in over 35 countries; India is a major operational hub alongside its U.S. headquarters.

Job Title: Machine Learning-Data Scientist
Key Skills: Python, ML, NLP (LDA, embeddings, RAG)
Job Locations: Pune
Experience: 7+ years
Education Qualification: Any graduation
Work Mode: Hybrid
Employment Type: Contract
Notice Period: Immediate

Job Description:
- Python, ML, NLP (LDA, embeddings, RAG), AI techniques, LLM-based matching (e.g., GPT/embeddings), and time-series forecasting
- Django is essential
- Experience with Databricks, the Azure ML stack, the OpenAI API, Spark, and fuzzy matching would be a plus (a hedged matching sketch follows this posting)
- Builds and deploys ML pipelines (incl. MLOps, API endpoints, CI/CD)
- Works with LangChain, Azure Synapse, Kubernetes, and modern ML frameworks
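As a hedged stand-in for the embedding- and fuzzy-matching skills the description mentions, here is a minimal sketch using TF-IDF character n-grams and cosine similarity from scikit-learn; the records are invented, and a production system would more likely use learned embeddings as the posting suggests.

```python
# Illustrative fuzzy matching: vectorize names with character n-grams,
# then pick the catalog entry most similar to the query.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

catalog = ["Acme Corp Ltd", "Globex Corporation", "Initech LLC"]  # invented records
query = ["acme corporation"]

vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 3))  # n-grams tolerate typos
matrix = vec.fit_transform(catalog + query)

scores = cosine_similarity(matrix[len(catalog):], matrix[:len(catalog)]).ravel()
best = scores.argmax()
print(catalog[best], round(float(scores[best]), 3))  # best fuzzy match and its score
```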
Posted 2 weeks ago
12.0 - 18.0 years
0 Lacs
noida, uttar pradesh
On-site
We are looking for an experienced Manager - Data Engineering with a strong background in Databricks or the Apache data stack to lead the implementation of complex data platforms. In this role, you will oversee impactful data engineering projects for global clients, deliver scalable solutions, and steer digital transformation initiatives.

With 12-18 years of overall experience in data engineering, including 3-5 years in a leadership position, you will need hands-on expertise in either Databricks or the core Apache stack (Spark, Kafka, Hive, Airflow, NiFi, etc.). Proficiency in at least one cloud platform such as AWS, Azure, or GCP, ideally with Databricks on the cloud, is required. Strong programming skills in Python, Scala, and SQL are essential, along with experience in constructing scalable data architectures, delta lakehouses, and distributed data processing. Familiarity with modern data governance, cataloging, and data observability tools is also necessary. You should have a proven track record of managing delivery in an onshore-offshore or hybrid model, coupled with exceptional communication, stakeholder management, and team mentoring abilities.

As a Manager - Data Engineering, your key responsibilities will include:
- Leading the design, development, and deployment of modern data platforms utilizing Databricks, Apache Spark, Kafka, Delta Lake, and other big data tools
- Designing and implementing data pipelines (both batch and real-time), data lakehouses, and large-scale ETL frameworks
- Taking delivery accountability for data engineering programs across various industries
- Collaborating with global stakeholders, product owners, architects, and business teams to drive data-driven outcomes
- Ensuring best practices in DevOps, CI/CD, infrastructure-as-code, data security, and governance
- Managing and mentoring a team of 10-25 engineers, including performance reviews, capability building, and coaching
- Supporting presales activities, including solutioning, technical proposals, and client workshops

At GlobalLogic, we prioritize a culture of caring where people come first. We offer continuous learning and development opportunities to help you grow personally and professionally. You'll have the chance to work on interesting and meaningful projects that have a real impact. With various career areas, roles, and work arrangements, we believe in providing a balance between work and life. As a high-trust organization, integrity is at the core of everything we do.

GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner known for creating innovative digital products and experiences. Join us in collaborating with forward-thinking companies to transform businesses and redefine industries through intelligent products, platforms, and services.
Posted 2 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Responsibilities
- Lead 4-8 data scientists to deliver ML capabilities within a Databricks-Azure platform
- Guide delivery of complex ML systems that align with product and platform goals
- Balance scientific rigor with practical engineering
- Define model lifecycle, tooling, and architectural direction

Requirements
Skills & Experience
- Advanced ML: supervised/unsupervised modeling, time-series, interpretability, MLflow, Spark, TensorFlow/PyTorch
- Engineering: feature pipelines, model serving, CI/CD, production deployment
- Leadership: mentorship, architectural alignment across subsystems, experimentation strategy
- Communication: translate ML results into business impact

Benefits
What you get:
- Best-in-class salary: We hire only the best, and we pay accordingly
- Proximity Talks: Meet other designers, engineers, and product geeks, and learn from experts in the field
- Keep on learning with a world-class team: Work with the best in the field, challenge yourself constantly, and learn something new every day

About Us
Proximity is the trusted technology, design, and consulting partner for some of the biggest Sports, Media and Entertainment companies in the world! We're headquartered in San Francisco and have offices in Palo Alto, Dubai, Mumbai, and Bangalore. Since 2019, Proximity has created and grown high-impact, scalable products used by 370 million daily users, with a total net worth of $45.7 billion among our client companies.

We are Proximity — a global team of coders, designers, product managers, geeks, and experts. We solve complex problems and build cutting-edge tech, at scale. Our team of Proxonauts is growing quickly, which means your impact on the company's success will be huge. You'll have the chance to work with experienced leaders who have built and led multiple tech, product, and design teams.

Here's a quick guide to getting to know us better:
- Watch our CEO, Hardik Jagda, tell you all about Proximity
- Read about Proximity's values and meet some of our Proxonauts here
- Explore our website, blog, and the design wing — Studio Proximity
- Get behind-the-scenes with us on Instagram! Follow @ProxWrks and @H.Jagda
Posted 2 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Responsibilities
- Build and maintain secure, scalable data pipelines using Databricks and Azure
- Handle ingestion from diverse sources (files, APIs, streaming), data transformation, and quality validation
- Collaborate with subsystem data science and product teams for ML readiness

Requirements
Skills & Experience
- Technical: notebooks (SQL, Python), Delta Lake, Unity Catalog, ADLS/S3, job orchestration, APIs, structured logging, IaC (Terraform)
- Delivery: trunk-based development, TDD, Git, CI/CD for notebooks and pipelines
- Integration: familiar with JSON, CSV, XML, Parquet, and SQL/NoSQL/graph databases
- Communication: able to justify decisions, document architecture, and align with enabling teams

Benefits
What you get:
- Best-in-class salary: We hire only the best, and we pay accordingly
- Proximity Talks: Meet other designers, engineers, and product geeks, and learn from experts in the field
- Keep on learning with a world-class team: Work with the best in the field, challenge yourself constantly, and learn something new every day

About Us
Proximity is the trusted technology, design, and consulting partner for some of the biggest Sports, Media and Entertainment companies in the world! We're headquartered in San Francisco and have offices in Palo Alto, Dubai, Mumbai, and Bangalore. Since 2019, Proximity has created and grown high-impact, scalable products used by 370 million daily users, with a total net worth of $45.7 billion among our client companies.

We are Proximity — a global team of coders, designers, product managers, geeks, and experts. We solve complex problems and build cutting-edge tech, at scale. Our team of Proxonauts is growing quickly, which means your impact on the company's success will be huge. You'll have the chance to work with experienced leaders who have built and led multiple tech, product, and design teams.

Here's a quick guide to getting to know us better:
- Watch our CEO, Hardik Jagda, tell you all about Proximity
- Read about Proximity's values and meet some of our Proxonauts here
- Explore our website, blog, and the design wing — Studio Proximity
- Get behind-the-scenes with us on Instagram! Follow @ProxWrks and @H.Jagda
Posted 2 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Responsibilities
- Act as both a hands-on tech lead and product manager
- Deliver data/ML platforms and pipelines in a Databricks-Azure environment
- Lead a small delivery team and coordinate with enabling teams for product, architecture, and data science
- Translate business needs into product strategy and technical delivery with a platform-first mindset

Requirements
Skills & Experience
- Technical: Python, SQL, Databricks, Delta Lake, MLflow, Terraform, medallion architecture, data mesh/fabric, Azure
- Product: agile delivery, discovery cycles, outcome-focused planning, trunk-based development
- Collaboration: able to coach engineers, work with cross-functional teams, and drive self-service platforms
- Communication: clear in articulating decisions, roadmap, and priorities

Benefits
What you get:
- Best-in-class salary: We hire only the best, and we pay accordingly
- Proximity Talks: Meet other designers, engineers, and product geeks, and learn from experts in the field
- Keep on learning with a world-class team: Work with the best in the field, challenge yourself constantly, and learn something new every day

About Us
Proximity is the trusted technology, design, and consulting partner for some of the biggest Sports, Media and Entertainment companies in the world! We're headquartered in San Francisco and have offices in Palo Alto, Dubai, Mumbai, and Bangalore. Since 2019, Proximity has created and grown high-impact, scalable products used by 370 million daily users, with a total net worth of $45.7 billion among our client companies.

We are Proximity — a global team of coders, designers, product managers, geeks, and experts. We solve complex problems and build cutting-edge tech, at scale. Our team of Proxonauts is growing quickly, which means your impact on the company's success will be huge. You'll have the chance to work with experienced leaders who have built and led multiple tech, product, and design teams.

Here's a quick guide to getting to know us better:
- Watch our CEO, Hardik Jagda, tell you all about Proximity
- Read about Proximity's values and meet some of our Proxonauts here
- Explore our website, blog, and the design wing — Studio Proximity
- Get behind-the-scenes with us on Instagram! Follow @ProxWrks and @H.Jagda
Posted 2 weeks ago
5.0 years
0 Lacs
India
Remote
Job Description
We are seeking a skilled Azure Databricks Developer with strong Terraform expertise to join our data engineering or cloud team. This role involves building, automating, and maintaining scalable data pipelines and infrastructure in the Azure cloud environment using Databricks and Infrastructure as Code (IaC) practices. The ideal candidate has hands-on experience with data processing in Databricks and cloud provisioning using Terraform.

As an Azure Databricks Developer with Terraform, your responsibilities include:
- Designing and optimizing data pipelines using Azure Databricks (Spark, Delta Lake, notebooks, jobs)
- Automating infrastructure provisioning on Azure through Terraform
- Collaborating with data engineers, analysts, and cloud architects to integrate Databricks with Azure services such as Data Lake, Synapse, and Key Vault
- Maintaining CI/CD pipelines for deploying Databricks solutions and Terraform configurations
- Applying best practices in security, scalability, cost efficiency, and performance tuning
- Monitoring and troubleshooting Databricks jobs and infrastructure components
- Documenting architecture designs, operational processes, and configuration standards

Profile Requirements
For this position of Azure Databricks Developer with Terraform, we are looking for someone with:
- (Required) 5+ years of experience in Azure Databricks, including PySpark, notebooks, cluster management, and Delta Lake
- (Required) Strong hands-on experience in Terraform for managing cloud infrastructure (especially Azure)
- (Required) Proficiency in Python and SQL
- (Required) Experience with Azure services: Azure Data Lake, Azure Data Factory, Azure Key Vault, Azure DevOps
- (Required) Familiarity with CI/CD pipelines and version control (e.g., Git)
- (Required) Good understanding of data engineering concepts and cloud-native architecture
- (Good to Have) Azure certifications (e.g., DP-203, AZ-104, or AZ-400)

Adastra APAM Culture Manifesto

Servant Leadership
Managers are servants to employees. Managers are elected to make sure that employees have all the processes, resources, and information they need to provide services to clients in an efficient manner. Any manager, up to the CEO, is visible and reachable for a chat regardless of title. Decisions are taken by consent in an agile manner and executed efficiently with no overdue time. We accept that wrong decisions happen, and we appreciate the learning before we adjust the process for continuous improvement. Employees serve clients. Employees listen attentively to client needs and collaborate internally as a team to cater to them. Managers and employees work together to get things done and are accountable to each other. Corporate KPIs are transparently reviewed at monthly company events with all employees.

Performance Driven Compensation
We recognize and accept that some of us are more ambitious, more gifted, or more hard-working. We also recognize that some of us look for a stable income and less hassle at a different stage of their careers. There is a place for everyone; we embrace and need this diversity. Grades in our company are not based on the number of years of experience; they are value driven, based on everyone's ability to deliver their work to clients independently and/or lead others. There is no "annual indexation" of salaries: you may be upgraded several times within the year, or not at all, based on your own pace of progress, ambitions, relevant skillset, and recognition by clients.
Work-Life Integration
We challenge the notion of work-life balance; we embrace the notion of work-life integration instead. This philosophy views our lives as a single whole in which we serve ourselves, our families, and our clients in an integrated manner. We encourage 100% flexible working hours where you arrange your own day. This means you are free when you have little work, but it also means extra effort if you are behind schedule. Working for clients that may be in different time zones means we give you the flexibility to design what your day will look like in accordance with personal and project preferences and needs. We value time and minimize time spent in Adastra meetings.
We are also a remote-first company. While we have our collaboration offices and social events, we encourage people to work 100% remotely from home whenever possible. This means saving time and money on the commute, staying home with the elderly and little ones, and not missing the special moments in life. It also means you can work from any of our other offices in Europe, North America, or Australia, or move to a place with a lower cost of living without impacting your income. We trust you by default until you fail our trust.

Global Diversity
Adastra is an international organization. We hire globally, and our biggest partners and clients are in Europe, North America, and Australia. We work on teams with individuals of different cultures, ethnicities, sexual preferences, political views, and religions. We have zero tolerance for anyone who does not pay respect to others or is abusive in any way. We speak different languages to one another, but we speak English when we are together or with clients. Our company is a safe space where communication is encouraged but boundaries regarding sensitive topics are respected. We accept and converge together to serve our teams and clients and, ultimately, have a good time at work.

Lifelong Learning
On annual average we invest 25% of our working hours in personal development and upskilling outside project work, regardless of seniority or role. We feature hundreds of courses on our Training Repo, and we continue to actively purchase or tailor hands-on content. We certify people at our expense. We like to say we are technology agnostic: we learn the principles of data management and apply them to different use cases and different technology stacks. We believe that the juniors today are the seniors tomorrow; we treat everyone with respect and mentor them into the roles they deserve. We encourage seniors to give back to the IT community through leadership and mentorship. On your last day with us we may give you an open-dated job offer so that you feel welcome to return home as others did before you.

More About Adastra: Visit Adastra (adastracorp.com) and/or contact us at HRIN@adastragrp.com
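Terraform itself is written in HCL, so to keep this document's examples in one language, the sketch below stays on the Databricks pipeline side of the role: a minimal Delta Lake MERGE (upsert), a staple of incremental pipelines like those in the responsibilities above. The table names, staging path, and join key are illustrative assumptions.

```python
# Illustrative Delta Lake MERGE (upsert) for an incremental pipeline.
# Table names, the staging path, and the join key are assumptions.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

updates = spark.read.format("delta").load("/mnt/staging/customers")  # new and changed rows
target = DeltaTable.forName(spark, "silver.customers")

(
    target.alias("t")
    .merge(updates.alias("u"), "t.customer_id = u.customer_id")
    .whenMatchedUpdateAll()     # refresh rows that already exist
    .whenNotMatchedInsertAll()  # add rows seen for the first time
    .execute()
)
```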
Posted 2 weeks ago
2.0 - 6.0 years
0 Lacs
Karnataka
On-site
The Machine Learning Engineer (Azure Databricks) position is available in Navi Mumbai, Bengaluru, Pune, Gurugram, and Chandigarh. We are looking for a skilled individual with at least 3 years of relevant experience in Python (including advanced Python), data science, ETL, machine learning, ML frameworks, Azure Cloud, and Databricks. Familiarity with deep learning, NLP, computer vision, and Python for image processing is also desired.

As a Machine Learning Engineer, you will be responsible for leading machine learning projects, developing and optimizing algorithms, preparing and transforming datasets, evaluating model performance, and deploying models in production environments (a brief training-and-evaluation sketch follows this posting). Collaboration with cross-functional teams and a basic understanding of DevOps practices are essential for success in this role.

The ideal candidate will have hands-on experience in Python, machine learning model development, ML frameworks like Scikit-learn or TensorFlow, Azure cloud services, Databricks, and basic DevOps knowledge. An understanding of deep learning principles, NLP, computer vision, and image processing libraries is beneficial.

If you join our team, you can expect opportunities for learning and certification, comprehensive medical coverage, a flexible work environment, and a fun, collaborative, and innovative workplace culture. We are committed to your professional growth and well-being, offering a supportive and dynamic environment in which to thrive in the field of AI and ML.
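As an illustration of the model-development and evaluation work named above, here is a minimal scikit-learn sketch: train a classifier, then score it on held-out data. The bundled dataset and hyperparameters are placeholders, not project specifics.

```python
# Minimal train-and-evaluate sketch with scikit-learn.
# The bundled dataset and hyperparameters are placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Evaluate on data the model never saw during training
print(f"Holdout accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")
```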
Posted 2 weeks ago
8.0 - 12.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
You will be joining Papigen, a fast-growing global technology services company that specializes in providing innovative digital solutions through industry expertise and cutting-edge technology. As a Senior Data QA Analyst, your primary responsibility will be to ensure the quality, accuracy, and reliability of data workflows in enterprise-scale systems, with a particular focus on Azure Databricks and ETL pipelines. The role requires close collaboration with data engineers, business analysts, and stakeholders to validate data integration, transformation, and reporting.

Your key responsibilities will include collaborating with Business Analysts and Data Engineers to understand requirements and translate them into test scenarios and test cases. You will develop and execute comprehensive test plans and scripts for data validation, log and manage defects using tools like Azure DevOps, and support UAT and post-go-live smoke testing. Additionally, you will be responsible for understanding data architecture, writing and executing complex SQL queries, validating data accuracy, completeness, and consistency, and ensuring the correctness of data transformations based on business logic (a brief validation sketch follows this posting).

In terms of report testing, you will validate the structure, metrics, and content of BI reports, perform cross-checks of report outputs against source systems, and ensure that reports reflect accurate calculations and align with business requirements.

To be successful in this role, you should have a Bachelor's degree in IT, Computer Science, MIS, or a related field, along with 8+ years of experience in QA, especially in data validation or data warehouse testing. Strong hands-on experience with SQL and data analysis is essential, and experience working with Azure Databricks, Python, and PySpark is preferred. Familiarity with data models like Data Marts, EDW, and Operational Data Stores, as well as knowledge of data transformation, mapping logic, and BI validation, will be beneficial. Experience with test case documentation, defect tracking, and Agile methodologies is also required, along with strong verbal and written communication skills to work effectively in a cross-functional environment.

Joining Papigen will give you the opportunity to work with leading global clients, exposure to modern technology stacks and tools, a supportive and collaborative team environment, and continuous learning and career development opportunities.
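To make the validation work above concrete, here is a sketch of source-vs-target reconciliation checks of the kind a Data QA Analyst might automate in PySpark. The table names, business key, and tolerance are illustrative assumptions.

```python
# Sketch of automatable source-vs-target data quality checks.
# Table names, the business key, and the tolerance are assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

source = spark.table("staging.transactions")
target = spark.table("edw.fact_transactions")

# Completeness: row counts should match after the load
assert source.count() == target.count(), "Row count mismatch"

# Accuracy: an aggregated measure should reconcile within a small tolerance
src_total = source.agg(F.sum("amount")).first()[0] or 0.0
tgt_total = target.agg(F.sum("amount")).first()[0] or 0.0
assert abs(src_total - tgt_total) < 0.01, "Amount totals do not reconcile"

# Consistency: every target row must trace back to a source business key
orphans = target.join(source, "transaction_id", "left_anti").count()
assert orphans == 0, f"{orphans} target rows have no matching source key"
```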
Posted 2 weeks ago
0.0 - 3.0 years
0 Lacs
Karnataka
On-site
Location: Karnataka, Bengaluru
Experience Range: 7 - 15 Years

Job Description: Spark/Scala
As a Software Development Engineer 2, you will be responsible for expanding and optimising our data and data pipeline architecture, as well as optimising data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline designer and data wrangler who enjoys optimising data systems and building them from the ground up. The Data Engineer will lead our software developers on data initiatives and will ensure that optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products. The right candidate will be excited by the prospect of optimising or even re-designing our company's data architecture to support our next generation of products and data initiatives.

Responsibilities
Create and maintain optimal data pipeline architecture
Assemble large, complex data sets that meet functional and non-functional business requirements
Identify, design, and implement internal process improvements: automating manual processes, optimising data delivery, and coordinating the re-design of infrastructure for greater scalability
Work with stakeholders, including the Executive, Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs
Keep our data separated and secure
Work with data and analytics experts to strive for greater functionality in our data systems (a brief performance-tuning sketch follows this posting)
Support PROD systems

Qualifications
Must have about 5 - 11 years of experience, with at least 3 years of relevant experience in Big Data
Must have experience building highly scalable business applications that involve implementing large, complex business flows and dealing with huge amounts of data
Must have experience in Hadoop, Hive, and Spark with Scala, with good experience in performance tuning and debugging issues
Good to have experience with stream processing using Spark/Java and Kafka
Must have experience in the design and development of Big Data projects
Good knowledge of functional programming and OOP concepts, SOLID principles, and design patterns for developing scalable applications
Familiarity with build tools like Maven
Must have experience with any RDBMS and at least one NoSQL database, preferably PostgreSQL
Must have experience writing unit and integration tests using ScalaTest
Must have experience using a version control system - Git
Must have experience with a CI/CD pipeline - Jenkins is a plus
Basic hands-on experience with one of the cloud providers (AWS/Azure) is a plus
Databricks Spark certification is a plus
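The role centres on Spark with Scala; purely to keep this document's examples in one language, the sketch below shows the equivalent idea in PySpark (the DataFrame API has the same shape in both). It illustrates a broadcast join, a standard performance-tuning move for joining a large table to a small one. The paths are illustrative assumptions.

```python
# Performance-tuning sketch: broadcast join of a large fact table to a small
# dimension table. Paths are illustrative; the Scala API is analogous.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("tuning-sketch").getOrCreate()

events = spark.read.parquet("/data/events")        # large fact table
countries = spark.read.parquet("/data/countries")  # small dimension table

# Broadcasting the small side ships it to every executor and
# avoids shuffling the large table across the cluster
enriched = events.join(broadcast(countries), "country_code")
enriched.write.mode("overwrite").parquet("/data/events_enriched")
```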
Posted 2 weeks ago
0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Responsibilities
Build and maintain secure, scalable data pipelines using Databricks and Azure
Handle ingestion from diverse sources (files, APIs, streaming), data transformation, and quality validation (a brief ingestion sketch follows this posting)
Collaborate with subsystem data science and product teams for ML readiness

Requirements
Skills & Experience
Technical: Notebooks (SQL, Python), Delta Lake, Unity Catalog, ADLS/S3, job orchestration, APIs, structured logging, IaC (Terraform)
Delivery: Trunk-based development, TDD, Git, CI/CD for notebooks and pipelines
Integration: Familiar with JSON, CSV, XML, Parquet, SQL/NoSQL/graph databases
Communication: Able to justify decisions, document architecture, and align with enabling teams

Benefits
What you get
Best in class salary: We hire only the best, and we pay accordingly
Proximity Talks: Meet other designers, engineers, and product geeks — and learn from experts in the field
Keep on learning with a world-class team: Work with the best in the field, challenge yourself constantly, and learn something new every day

About Us
Proximity is the trusted technology, design, and consulting partner for some of the biggest Sports, Media and Entertainment companies in the world! We're headquartered in San Francisco and have offices in Palo Alto, Dubai, Mumbai, and Bangalore. Since 2019, Proximity has created and grown high-impact, scalable products used by 370 million daily users, with a total net worth of $45.7 billion among our client companies.
We are Proximity — a global team of coders, designers, product managers, geeks, and experts. We solve complex problems and build cutting edge tech, at scale. Our team of Proxonauts is growing quickly, which means your impact on the company's success will be huge. You'll have the chance to work with experienced leaders who have built and led multiple tech, product and design teams.
Here's a quick guide to getting to know us better:
Watch our CEO, Hardik Jagda, tell you all about Proximity
Read about Proximity's values and meet some of our Proxonauts here
Explore our website, blog, and the design wing — Studio Proximity
Get behind-the-scenes with us on Instagram! Follow @ProxWrks and @H.Jagda
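Given the file-ingestion responsibilities named above, here is a minimal Databricks Auto Loader sketch for incremental file ingestion into a Delta table. The storage URL, checkpoint paths, and table name are illustrative assumptions.

```python
# Minimal Databricks Auto Loader sketch: incrementally ingest JSON files
# landing in cloud storage into a Delta table. Paths and names are assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # supplied by the Databricks runtime

stream = (
    spark.readStream.format("cloudFiles")  # Auto Loader source
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/mnt/checkpoints/orders_schema")
    .load("abfss://landing@examplelake.dfs.core.windows.net/orders/")
)

(
    stream.writeStream
    .option("checkpointLocation", "/mnt/checkpoints/orders")
    .trigger(availableNow=True)  # drain the backlog, then stop
    .toTable("bronze.orders")
)
```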
Posted 2 weeks ago