9.0 - 11.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About McDonald's: One of the world's largest employers with locations in more than 100 countries, McDonald's Corporation has corporate opportunities in Hyderabad. Our global offices serve as dynamic innovation and operations hubs, designed to expand McDonald's global talent base and in-house expertise. Our new office in Hyderabad will bring together knowledge across business, technology, analytics, and AI, accelerating our ability to deliver impactful solutions for the business and our customers across the globe.

Position Summary: We are seeking an experienced Data Architect to design, implement, and optimize scalable data solutions on Amazon Web Services (AWS) and/or Google Cloud Platform (GCP). The ideal candidate will lead the development of enterprise-grade data architectures that support analytics, machine learning, and business intelligence initiatives while ensuring security, performance, and cost optimization.

Who we are looking for:

Key Responsibilities

Architecture & Design:
- Design and implement comprehensive data architectures using AWS or GCP services
- Develop data models, schemas, and integration patterns for structured and unstructured data
- Create solution blueprints, technical documentation, architectural diagrams, and best-practice guidelines
- Implement data governance frameworks and ensure compliance with security standards
- Design disaster recovery and business continuity strategies for data systems

Technical Leadership:
- Lead cross-functional teams in implementing data solutions and migrations
- Provide technical guidance on cloud data services selection and optimization
- Collaborate with stakeholders to translate business requirements into technical solutions
- Drive adoption of cloud-native data technologies and modern data practices

Platform Implementation:
- Implement data pipelines using cloud-native services (AWS Glue, Google Dataflow, etc.)
- Configure and optimize data lakes and data warehouses (S3/Redshift, GCS/BigQuery)
- Set up real-time streaming data processing solutions (Kafka, Airflow, Pub/Sub)
- Implement automated data quality monitoring and validation processes
- Establish CI/CD pipelines for data infrastructure deployment

Performance & Optimization:
- Monitor and optimize data pipeline performance and cost efficiency
- Implement data partitioning, indexing, and compression strategies
- Conduct capacity planning and make scaling recommendations
- Troubleshoot complex data processing issues and performance bottlenecks
- Establish monitoring, alerting, and logging for data systems

Skills:
- Bachelor's degree in Computer Science, Data Engineering, or a related field
- 9+ years of experience in data architecture and engineering
- 5+ years of hands-on experience with AWS or GCP data services
- Experience with large-scale data processing and analytics platforms
- AWS: Redshift, S3, Glue, EMR, Kinesis, Lambda, Data Pipeline, Step Functions, CloudFormation
- GCP: BigQuery, Cloud Storage, Dataflow, Dataproc, Pub/Sub, Cloud Functions, Cloud Composer, Deployment Manager
- IAM, VPC, and security configurations
- SQL and NoSQL databases
- Big data technologies (Spark, Hadoop, Kafka)
- Programming languages (Python, Java, SQL)
- Data modeling and ETL/ELT processes
- Infrastructure as Code (Terraform, CloudFormation)
- Container technologies (Docker, Kubernetes)
- Data warehousing concepts and dimensional modeling
- Experience with modern data architecture patterns
- Real-time and batch data processing architectures
- Data governance, lineage, and quality frameworks
- Business intelligence and visualization tools
- Machine learning pipeline integration
- Strong communication and presentation abilities
- Leadership and team collaboration skills
- Problem-solving and analytical thinking
- Customer-focused mindset with business acumen

Preferred Qualifications:
- Master's degree in a relevant field
- Cloud certifications (AWS Solutions Architect, GCP Professional Data Engineer)
- Experience with multiple cloud platforms
- Knowledge of data privacy regulations (GDPR, CCPA)

Work location: Hyderabad, India
Work pattern: Full-time role
Work mode: Hybrid

Additional Information: McDonald's is committed to providing qualified individuals with disabilities with reasonable accommodations to perform the essential functions of their jobs. McDonald's provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to sex, sex stereotyping, pregnancy (including pregnancy, childbirth, and medical conditions related to pregnancy, childbirth, or breastfeeding), race, color, religion, ancestry or national origin, age, disability status, medical condition, marital status, sexual orientation, gender, gender identity, gender expression, transgender status, protected military or veteran status, citizenship status, genetic information, or any other characteristic protected by federal, state or local laws. This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation and training. McDonald's Capability Center India Private Limited (McDonald's in India) is a proud equal opportunity employer and is committed to hiring a diverse workforce and sustaining an inclusive culture.
At McDonald's in India, employment decisions are based on merit, job requirements, and business needs, and all qualified candidates are considered for employment. McDonald's in India does not discriminate based on race, religion, colour, age, gender, marital status, nationality, ethnic origin, sexual orientation, political affiliation, veteran status, disability status, medical history, parental status, genetic information, or any other basis protected under state or local laws. Nothing in this job posting or description should be construed as an offer or guarantee of employment.
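As context for the partitioning and compression strategies this role calls for, here is a minimal PySpark sketch; the bucket paths and column names are hypothetical, not taken from the posting:

```python
# A minimal sketch of a partitioned, compressed batch write in PySpark.
# Paths and columns are hypothetical stand-ins.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("partitioned-etl").getOrCreate()

# Read raw JSON events from a landing zone (hypothetical path).
events = spark.read.json("s3://example-landing/events/")

# Derive a date column so downstream queries can prune partitions.
events = events.withColumn("event_date", F.to_date("event_timestamp"))

# Write columnar, compressed, partitioned output to the curated zone.
(events.write
    .mode("overwrite")
    .partitionBy("event_date")
    .option("compression", "snappy")
    .parquet("s3://example-curated/events/"))
```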
Posted 2 days ago
0.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Role: We are seeking a highly skilled and experienced Data Architect with expertise in designing and building data platforms in cloud environments. The ideal candidate will have a strong background in either AWS Data Engineering or Azure Data Engineering, along with proficiency in distributed data processing systems like Spark. Additionally, proficiency in SQL, data modeling, building data warehouses, and knowledge of ingestion tools and data governance are essential for this role. The Data Architect will also need experience with orchestration tools such as Airflow or Dagster and proficiency in Python, with knowledge of Pandas being beneficial.

Why Choose Ideas2IT: Ideas2IT has all the good attributes of a product startup and a services company. Since we launch our own products, you will have ample opportunities to learn and contribute. However, single-product companies stagnate in the technologies they use. In our multiple product initiatives and customer-facing projects, you will have the opportunity to work on various technologies. AGI is going to change the world. Big companies like Microsoft are betting heavily on this. We are following suit.

What's in it for you?
- You will get to work on impactful products instead of back-office applications for customers like Facebook, Siemens, Roche, and more
- You will get to work on interesting projects like the Cloud AI platform for personalized cancer treatment
- Opportunity to continuously learn newer technologies
- Freedom to bring your ideas to the table and make a difference, instead of being a small cog in a big wheel
- Showcase your talent in Shark Tanks and Hackathons conducted in the company

Here's what you'll bring:
- Experience in designing and building data platforms in any cloud
- Strong expertise in either AWS Data Engineering or Azure Data Engineering
- Develop and optimize data processing pipelines using distributed systems like Spark
- Create and maintain data models to support efficient storage and retrieval
- Build and optimize data warehouses for analytical and reporting purposes, utilizing technologies such as Postgres, Redshift, Snowflake, etc.
- Knowledge of ingestion tools such as Apache Kafka, Apache NiFi, AWS Glue, or Azure Data Factory
- Establish and enforce data governance policies and procedures to ensure data quality and security
- Utilize orchestration tools like Airflow or Dagster to schedule and manage data workflows (see the sketch after this posting)
- Develop scripts and applications in Python to automate tasks and processes
- Collaborate with stakeholders to gather requirements and translate them into technical specifications
- Communicate technical solutions effectively to clients and stakeholders
- Familiarity with multiple cloud ecosystems such as AWS, Azure, and Google Cloud Platform (GCP)
- Experience with containerization and orchestration technologies like Docker and Kubernetes
- Knowledge of machine learning and data science concepts
- Experience with data visualization tools such as Tableau or Power BI
- Understanding of DevOps principles and practices
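As a hedged illustration of the Dagster orchestration this posting mentions, here is a minimal sketch; the asset names and data are hypothetical:

```python
# A minimal Dagster pipeline: two software-defined assets where the second
# depends on the first. Asset names and data are made up for illustration.
import pandas as pd
from dagster import asset, materialize


@asset
def raw_orders() -> pd.DataFrame:
    # Placeholder for an ingestion step (e.g., an API pull or S3 extract).
    return pd.DataFrame({"order_id": [1, 2], "amount": [250.0, 99.0]})


@asset
def daily_revenue(raw_orders: pd.DataFrame) -> pd.DataFrame:
    # A simple aggregation standing in for a warehouse transformation.
    return pd.DataFrame({"total_revenue": [raw_orders["amount"].sum()]})


if __name__ == "__main__":
    # Materialize both assets in dependency order.
    materialize([raw_orders, daily_revenue])
```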
Posted 2 days ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
Job Title: AI Research Engineer Intern (Fresher)
Reporting to: Lead – Research & Innovation Lab
Location: Remote/Hybrid (Chennai, India)
Engagement: 6-month, full-time paid internship with pre-placement-offer track

1. Why this role exists
Stratsyn AI Technology Services is turbo-charging Stratsyn's cloud-native Enterprise Intelligence & Management Suite, a modular SaaS ecosystem that fuses advanced AI, low-code automation, multimodal search, and next-generation "virtual workforce" agents. The platform unifies strategic planning, document intelligence, workflow orchestration, and real-time analytics, empowering C-suite leaders to simulate scenarios, orchestrate execution, and convert insight into action with unmatched speed and scalability. To keep pushing that frontier, we need sharp, curious minds who can translate cutting-edge research into production-grade capabilities for this suite. This internship is our talent funnel into future Research Engineer and Product Scientist roles.

2. What you'll do (core responsibilities)
- 30% – Rapid Prototyping & Experimentation: implement state-of-the-art papers (LLMs, graph learning, causal inference, agents), design ablation studies, benchmark against baselines, and iterate fast.
- 25% – Data Engineering for Research: build reproducible datasets, craft synthetic data when needed, automate ETL pipelines, and enforce experiment tracking (MLflow / Weights & Biases); a tracking sketch follows this posting.
- 20% – Model Evaluation & Explainability: create evaluation harnesses (BLEU, ROUGE, MAPE, custom KPIs), visualize error landscapes, and generate executive-ready insights.
- 15% – Collaboration & Documentation: author tech memos and well-annotated notebooks, contribute to internal knowledge bases, and present findings in weekly research stand-ups.
- 10% – Innovation Scouting: scan arXiv, ACL, NeurIPS, ICML, and startup ecosystems; summarize high-impact research and propose areas for IP creation within the Suite.

3. What you will learn / outcomes to achieve
- Master the end-to-end research workflow: literature review → hypothesis → prototype → validation → deployment shadow.
- Deliver one peer-review-quality technical report and two production-grade proofs of concept for the Suite.
- Achieve a measurable impact (e.g., an 8-10% forecasting-accuracy lift or a 30% latency reduction) on a live microservice.

4. Minimum qualifications (freshers welcome)
- B.E./B.Tech/M.Sc./M.Tech in CS, Data Science, Statistics, EE, or related (2024-2026 pass-out).
- Fluency in Python and at least one deep-learning framework (PyTorch preferred).
- Solid grasp of linear algebra, probability, optimization, and algorithms.
- Hands-on academic or personal projects in NLP, CV, time-series, or RL (GitHub links highly valued).

5. Preferred extras
- Publications or a Kaggle/ML-competition record.
- Experience with distributed training (GPU clusters, Ray, Lightning) and experiment-tracking tools.
- Familiarity with MLOps (Docker, CI/CD, Kubernetes) or data-centric AI.
- Domain knowledge in supply-chain, fintech, climate, or marketing analytics.

6. Key attributes & soft skills
- First-principles thinker – questions assumptions, proposes novel solutions.
- Bias for action – prototypes in hours, not weeks; embraces agile experimentation.
- Storytelling ability – explains complex models in clear, executive-friendly language.
- Ownership mentality – treats the prototype as a product, not just a demo.

7. Tech stack you'll touch
Python | PyTorch | Hugging Face | TensorRT | LangChain | Neo4j/GraphDB | PostgreSQL | Airflow | MLflow | Weights & Biases | Docker | GitHub Actions | JAX (exploratory)

8. Internship logistics & perks
- Competitive monthly stipend + performance bonus.
- High-end workstation + GPU credits on our private cloud.
- Dedicated mentor and a 30-60-90-day learning plan.
- Access to premium research portals and paid conference passes.
- Culture of radical candor, weekly brown-bag tech talks, and hack days.
- Fast track to full-time AI Research Engineer upon successful completion.

9. Application process
- Apply via email: send your résumé, a brief statement of purpose, and GitHub/portfolio links to HR@stratsyn.ai.
- Online coding assessment: algorithmic + ML fundamentals.
- Technical interview (2 rounds): deep dive into projects, math, and research reasoning.
- Culture-fit discussion with the Research Lead & CPO.
- Offer & onboarding – target turnaround < 3 weeks.
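For the experiment tracking named in the responsibilities, here is a minimal MLflow sketch; the experiment name, parameters, and metric are hypothetical:

```python
# A minimal sketch of experiment tracking with MLflow. Everything logged
# here is a made-up placeholder for a real training loop.
import mlflow

mlflow.set_experiment("demo-forecasting-ablation")

with mlflow.start_run(run_name="baseline"):
    # Log hyperparameters for reproducibility.
    mlflow.log_param("learning_rate", 3e-4)
    mlflow.log_param("batch_size", 32)

    # ... train the model here ...
    val_mape = 0.12  # placeholder metric from a hypothetical validation pass

    # Log the metric so runs can be compared in the MLflow UI.
    mlflow.log_metric("val_mape", val_mape)
```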
Posted 2 days ago
2.0 - 4.0 years
0 Lacs
Gandhinagar, Gujarat, India
On-site
Job Title: DevOps Engineer
Location: GIFT City, Gandhinagar
Shift Timings: 9:00 AM - 6:00 PM

About Us
We are a dynamic and fast-paced organization operating in the algorithmic trading domain, seeking a highly motivated DevOps Engineer who can quickly grasp new concepts and take charge of managing our backend infrastructure. You will play a crucial role in ensuring smooth trading operations by optimizing and maintaining scalable infrastructure.

Key Responsibilities
- Design and Implementation: Develop robust solutions for monitoring and metrics infrastructure to support algorithmic trading.
- Technical Support: Provide hands-on support for live trading environments on Linux servers, ensuring seamless trading activities.
- Problem Resolution: Leverage technical and analytical skills to identify and resolve issues related to application and business functionality promptly.
- Database Management: Administer databases and execute SQL queries to perform on-demand analytics, driving data-informed decision-making.
- Application & Network Management: Manage new installations, troubleshoot network issues, and optimize network performance for a stable trading environment.
- Python Infrastructure Management: Oversee Python infrastructure including Airflow, logs, monitoring, and alert systems; address any operational issues that arise.
- Airflow Management: Develop new DAGs (Python scripts) and optimize existing workflows for efficient trading operations (a minimal DAG sketch follows this posting).
- Infrastructure Optimization: Manage and optimize infrastructure using tools such as Ansible, Docker, and automation technologies.
- Cross-Team Collaboration: Collaborate with various teams to provide operational support for different projects.
- Proactive Monitoring: Implement monitoring solutions to detect and address potential issues before they impact trading activities.
- Documentation: Maintain comprehensive documentation of trading systems, procedures, and workflows for reference and training.
- Regulatory Compliance: Ensure full adherence to global exchange rules and regulations, maintaining compliance with all legal requirements.
- Global Market Trading: Execute trades on global exchanges during night shifts, utilizing algorithmic strategies to optimize trading performance.

Requirements
- Experience: 2-4 years in a DevOps role, preferably in a trading environment.
- Education: Bachelor's degree in Computer Science or a related field.

Technical Skills
- Proficiency in Linux, Python, and SQL
- Hands-on experience with Ansible, Docker, and automation tools
- Experience managing CI/CD pipelines for frequent code testing and deployment

Good To Have
- Advanced Python proficiency
- NISM certification

Key Attributes
- Strong troubleshooting and problem-solving skills
- Excellent communication skills
- Ability to handle multiple tasks efficiently with strong time management
- Proactive and ownership-driven attitude

Trading Experience
- Experience trading on NSE and global markets is required

Perks And Benefits
- Competitive salary and attractive perks
- Five-day work week (Monday to Friday)
- Four weeks of paid leave annually plus festival holidays
- Annual/biannual offsites and team gatherings
- Transparent and flat hierarchy fostering growth opportunities
- Opportunity to work in a dynamic, fast-paced environment with a passionate team of professionals
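As a minimal sketch of the Airflow DAG work described above (Airflow 2.x assumed; the DAG id, schedule, and task logic are hypothetical):

```python
# A two-task Airflow DAG: pull end-of-day prices, then load them.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def pull_eod_prices():
    # Placeholder for a task that fetches end-of-day market data.
    print("fetching end-of-day prices...")


def load_to_warehouse():
    # Placeholder for a task that loads the fetched data into a database.
    print("loading prices into the warehouse...")


with DAG(
    dag_id="eod_price_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="0 18 * * 1-5",  # weekdays at 18:00 (Airflow 2.4+ `schedule` arg)
    catchup=False,
) as dag:
    pull = PythonOperator(task_id="pull_eod_prices", python_callable=pull_eod_prices)
    load = PythonOperator(task_id="load_to_warehouse", python_callable=load_to_warehouse)

    pull >> load  # run the load only after the pull succeeds
```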
Posted 2 days ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Teamwork makes the stream work.

Roku is changing how the world watches TV
Roku is the #1 TV streaming platform in the U.S., Canada, and Mexico, and we've set our sights on powering every television in the world. Roku pioneered streaming to the TV. Our mission is to be the TV streaming platform that connects the entire TV ecosystem. We connect consumers to the content they love, enable content publishers to build and monetize large audiences, and provide advertisers unique capabilities to engage consumers. From your first day at Roku, you'll make a valuable - and valued - contribution. We're a fast-growing public company where no one is a bystander. We offer you the opportunity to delight millions of TV streamers around the world while gaining meaningful experience across a variety of disciplines.

About the team
The mission of Roku's Data Engineering team is to develop a world-class big data platform so that internal and external customers can leverage data to grow their businesses. Data Engineering works closely with business partners and Engineering teams to collect metrics on existing and new initiatives that are critical to business success. As a Senior Data Engineer working on device metrics, you will design data models and develop scalable data pipelines to capture different business metrics across different Roku devices.

About the role
Roku pioneered streaming to the TV. We connect users to the streaming content they love, enable content publishers to build and monetize large audiences, and provide advertisers with unique capabilities to engage consumers. Roku streaming players and Roku TV™ models are available around the world through direct retail sales and licensing arrangements with TV brands and pay-TV operators. With tens of millions of players sold across many countries, thousands of streaming channels, and billions of hours watched over the platform, building a scalable, highly available, fault-tolerant big data platform is critical for our success. This role is based in Bangalore, India and requires hybrid working, with 3 days in the office.

What you'll be doing
- Build highly scalable, available, fault-tolerant distributed data processing systems (batch and streaming) handling tens of terabytes of data ingested every day and a petabyte-sized data warehouse
- Build quality data solutions and refine existing diverse datasets into simplified data models encouraging self-service
- Build data pipelines that optimize for data quality and are resilient to poor-quality data sources (a streaming-ingestion sketch follows this posting)
- Own the data mapping, business logic, transformations, and data quality
- Perform low-level systems debugging, performance measurement, and optimization on large production clusters
- Participate in architecture discussions, influence the product roadmap, and take ownership and responsibility over new projects
- Maintain and support existing platforms and evolve to newer technology stacks and architectures

We're excited if you have
- Extensive SQL skills
- Proficiency in at least one scripting language; Python is required
- Experience in big data technologies like HDFS, YARN, MapReduce, Hive, Kafka, Spark, Airflow, Presto, etc.
- Proficiency in data modeling, including designing, implementing, and optimizing conceptual, logical, and physical data models to support scalable and efficient data architectures
- Experience with AWS, GCP, or Looker (a plus)
- Ability to collaborate with cross-functional teams such as developers, analysts, and operations to execute deliverables
- 5+ years of professional experience as a data or software engineer
- BS in Computer Science; MS in Computer Science preferred
- AI literacy and an AI growth mindset

Benefits
Roku is committed to offering a diverse range of benefits as part of our compensation package to support our employees and their families. Our comprehensive benefits include global access to mental health and financial wellness support and resources. Local benefits include statutory and voluntary benefits which may include healthcare (medical, dental, and vision), life, accident, disability, commuter, and retirement options (401(k)/pension). Our employees can take time off work for vacation and other personal reasons to balance their evolving work and life needs. It's important to note that not every benefit is available in all locations or for every role. For details specific to your location, please consult with your recruiter.

The Roku Culture
Roku is a great place for people who want to work in a fast-paced environment where everyone is focused on the company's success rather than their own. We try to surround ourselves with people who are great at their jobs, who are easy to work with, and who keep their egos in check. We appreciate a sense of humor. We believe a fewer number of very talented folks can do more for less cost than a larger number of less talented teams. We're independent thinkers with big ideas who act boldly, move fast and accomplish extraordinary things through collaboration and trust. In short, at Roku you'll be part of a company that's changing how the world watches TV. We have a unique culture that we are proud of. We think of ourselves primarily as problem-solvers, which itself is a two-part idea. We come up with the solution, but the solution isn't real until it is built and delivered to the customer. That penchant for action gives us a pragmatic approach to innovation, one that has served us well since 2002. To learn more about Roku, our global footprint, and how we've grown, visit https://www.weareroku.com/factsheet.

By providing your information, you acknowledge that you have read our Applicant Privacy Notice and authorize Roku to process your data subject to those terms.
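As a hedged sketch of the streaming ingestion this role describes, here is a minimal Spark Structured Streaming job reading from Kafka; the broker, topic, and paths are hypothetical, and the spark-sql-kafka connector package is assumed to be on the classpath:

```python
# Subscribe to a Kafka topic of device events and append the raw stream to
# cloud storage with checkpointing for fault tolerance.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("device-metrics-stream").getOrCreate()

events = (spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "device-metrics")
    .load())

# Kafka values arrive as bytes; cast to string before parsing downstream.
decoded = events.selectExpr("CAST(value AS STRING) AS json_payload")

query = (decoded.writeStream
    .format("parquet")
    .option("path", "s3://example-bucket/device-metrics/")
    .option("checkpointLocation", "s3://example-bucket/checkpoints/device-metrics/")
    .start())
```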
Posted 2 days ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Equifax is seeking creative, high-energy and driven software engineers with hands-on development skills to work on a variety of meaningful projects. Our software engineering positions provide you the opportunity to join a team of talented engineers working with leading-edge technology. You are ideal for this position if you are a forward-thinking, committed, and enthusiastic software engineer who is passionate about technology.

What You'll Do
- Perform general application development activities, including unit testing, code deployment to the development environment, and technical documentation.
- Work on one or more projects, making contributions to unfamiliar code written by team members.
- Diagnose and resolve performance issues.
- Participate in the estimation process, use case specifications, reviews of test plans and test cases, requirements, and project planning.
- Document code and processes so that any other developer is able to dive in with minimal effort.
- Develop and operate high-scale applications from the backend to the UI layer, focusing on operational excellence, security, and scalability.
- Apply modern software development practices (serverless computing, microservices architecture, CI/CD, infrastructure-as-code, etc.).
- Work across teams to integrate our systems with existing internal systems, Data Fabric, and the CSA Toolset.
- Participate in technology roadmap and architecture discussions to turn business requirements and vision into reality.
- Participate in a tight-knit engineering team employing agile software development practices.
- Triage product or system issues and debug/track/resolve them by analyzing the sources of issues and their impact on network or service operations and quality.
- Write, debug, and troubleshoot code in mainstream open-source technologies.
- Lead the effort for Sprint deliverables, and solve problems of medium complexity.
- Research, create, and develop software applications to extend and improve on Equifax solutions.
- Collaborate on scalability issues involving access to data and information.
- Actively participate in Sprint planning, Sprint retrospectives, and other team activities.

What Experience You Need
- Bachelor's degree or equivalent experience.
- 5+ years of working experience in software development using multiple versions of Python.
- Experience and familiarity with the various Python frameworks currently in use to support software development processes.
- Develop, test, and deploy high-quality Python code for AI/ML applications, data pipelines, and backend services.
- Design, implement, and optimize machine learning models and algorithms for various business problems.
- Collaborate with data scientists to transition experimental models into production-ready systems.
- Build and maintain robust data ingestion and processing pipelines to feed data into ML models.
- Perform code reviews, provide constructive feedback, and ensure adherence to best coding practices.
- Troubleshoot, debug, and optimize existing ML systems and applications for performance and scalability.
- Stay up to date with the latest advancements in Python, machine learning, and related technologies.
- Document technical designs, processes, and operational procedures.
- Experience with cloud technology: GCP or AWS.

What could set you apart
- A self-starter who identifies and responds to priority shifts with minimal supervision.
- Experience designing and developing big data processing solutions using Dataflow/Apache Beam, Bigtable, BigQuery, Pub/Sub, GCS, Composer/Airflow, and others (a minimal Beam sketch follows this posting).
- Source code control management systems (e.g. Git, GitHub).
- Agile environments (e.g. Scrum, XP).
- Atlassian tooling (e.g. JIRA, Confluence, and GitHub).
- Developing with modern Python versions.
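As context for the Dataflow/Apache Beam item above, here is a minimal Beam batch pipeline; the input values and output path are hypothetical, and swapping in DataflowRunner options would run it on GCP:

```python
# A tiny Beam pipeline: create, transform, write.
import apache_beam as beam

with beam.Pipeline() as pipeline:
    (pipeline
     | "Create" >> beam.Create(["alpha", "beta", "gamma"])
     | "Upper" >> beam.Map(str.upper)
     | "Write" >> beam.io.WriteToText("/tmp/example_output"))
```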
Posted 2 days ago
6.0 - 8.0 years
19 - 35 Lacs
Hyderabad
Work from Office
We are hiring a Senior Data Engineer with 6-8 years of experience. Education: Candidates from premier institutes like IIT, IIM, IISc, NIT, IIIT, and other top-ranked institutions in India are highly encouraged to apply.
Posted 2 days ago
4.0 - 9.0 years
9 - 14 Lacs
Bengaluru
Work from Office
Job Posting Title: Sr. Data Scientist
Band/Level: 5-2-C
Education Experience: Bachelor's Degree (High School + 4 years)
Employment Experience: 5-7 years

At TE, you will unleash your potential working with people from diverse backgrounds and industries to create a safer, sustainable and more connected world.

Job Overview
Solves complex problems and helps stakeholders make data-driven decisions by leveraging quantitative methods, such as machine learning. It often involves synthesizing large volumes of information and extracting signals from data in a programmatic way.

Roles & Responsibilities
- Design, train, and evaluate supervised and unsupervised models (regression, classification, clustering, uplift).
- Apply automated hyperparameter optimization (Optuna, HyperOpt) and interpretability techniques (SHAP, LIME).
- Perform deep exploratory data analysis (EDA) to uncover patterns and anomalies.
- Engineer predictive features from structured, semi-structured, and unstructured data; manage feature stores (Feast).
- Ensure data quality through rigorous validation and automated checks.
- Build hierarchical, intermittent, and multi-seasonal forecasts for thousands of SKUs.
- Implement traditional (ARIMA, ETS, Prophet) and deep-learning (RNN/LSTM, Temporal Fusion Transformer) approaches.
- Reconcile forecasts across product/category hierarchies; quantify accuracy (MAPE, WAPE) and bias (a metrics sketch follows this posting).
- Establish model tracking and registry (MLflow, SageMaker Model Registry).
- Develop CI/CD pipelines for automated retraining, validation, and deployment (Airflow, Kubeflow, GitHub Actions).
- Monitor data and concept drift; trigger retuning or rollback as needed.
- Design and analyze A/B tests, causal inference studies, and Bayesian experiments.
- Provide statistically grounded insights and recommendations to stakeholders.
- Translate business objectives into data-driven solutions; present findings to executive and non-technical audiences.
- Mentor junior data scientists, review code/notebooks, and champion best practices.

Desired Candidate - Minimum Qualifications
- M.S. in Statistics (preferred) or a related field such as Applied Mathematics, Computer Science, or Data Science.
- 5+ years building and deploying ML models in production.
- Expert-level proficiency in Python (Pandas, NumPy, SciPy, scikit-learn), SQL, and Git.
- Demonstrated success delivering large-scale demand-forecasting or time-series solutions.
- Hands-on experience with MLOps tools (MLflow, Kubeflow, SageMaker, Airflow) for model tracking and automated retraining.
- Solid grounding in statistical inference, hypothesis testing, and experimental design.

Preferred / Nice-to-Have
- Experience in supply-chain, retail, or manufacturing domains with high-granularity SKU data.
- Familiarity with distributed data frameworks (Spark, Dask) and cloud data warehouses (BigQuery, Snowflake).
- Knowledge of deep-learning libraries (PyTorch, TensorFlow) and probabilistic programming (PyMC, Stan).
- Strong data-visualization skills (Plotly, Dash, Tableau) for storytelling and insight communication.

ABOUT TE CONNECTIVITY
TE Connectivity plc (NYSE: TEL) is a global industrial technology leader creating a safer, sustainable, productive, and connected future. Our broad range of connectivity and sensor solutions enable the distribution of power, signal and data to advance next-generation transportation, energy networks, automated factories, data centers, medical technology and more. With more than 85,000 employees, including 9,000 engineers, working alongside customers in approximately 130 countries, TE ensures that EVERY CONNECTION COUNTS. Learn more at www.te.com and on LinkedIn, Facebook, WeChat, Instagram and X (formerly Twitter).

WHAT TE CONNECTIVITY OFFERS:
We are pleased to offer you an exciting total package that can also be flexibly adapted to changing life situations - the well-being of our employees is our top priority!
- Competitive Salary Package
- Performance-Based Bonus Plans
- Health and Wellness Incentives
- Employee Stock Purchase Program
- Community Outreach Programs / Charity Events

IMPORTANT NOTICE REGARDING RECRUITMENT FRAUD
TE Connectivity has become aware of fraudulent recruitment activities being conducted by individuals or organizations falsely claiming to represent TE Connectivity. Please be advised that TE Connectivity never requests payment or fees from job applicants at any stage of the recruitment process. All legitimate job openings are posted exclusively on our official careers website at te.com/careers, and all email communications from our recruitment team will come only from actual email addresses ending in @te.com. If you receive any suspicious communications, we strongly advise you not to engage or provide any personal information, and to report the incident to your local authorities.

Across our global sites and business units, we put together packages of benefits that are either supported by TE itself or provided by external service providers. In principle, the benefits offered can vary from site to site.
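For the forecast-accuracy metrics this posting names (MAPE and WAPE), here is a minimal NumPy sketch; the demand figures are made up:

```python
# MAPE vs. WAPE on a toy forecast.
import numpy as np

actual = np.array([120.0, 80.0, 200.0, 150.0])
forecast = np.array([110.0, 95.0, 190.0, 160.0])

# MAPE: mean of per-period absolute percentage errors.
mape = np.mean(np.abs((actual - forecast) / actual)) * 100

# WAPE: total absolute error weighted by total actual demand; more robust
# than MAPE for intermittent, low-volume SKUs.
wape = np.sum(np.abs(actual - forecast)) / np.sum(actual) * 100

print(f"MAPE: {mape:.1f}%  WAPE: {wape:.1f}%")
```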
Posted 2 days ago
7.0 years
0 Lacs
Gurugram, Haryana, India
On-site
We are looking for an MLOps Engineer (Gurgaon location).

Job Responsibilities:
- Design and implement CI/CD pipelines for machine learning workflows.
- Develop and maintain production-grade ML pipelines using tools like MLflow, Kubeflow, or Airflow.
- Automate model training, testing, deployment, and monitoring processes.
- Collaborate with Data Scientists to operationalize ML models, ensuring scalability and performance.
- Monitor deployed models for drift, degradation, and bias, and trigger retraining as needed.
- Maintain and improve infrastructure for model versioning, artifact tracking, and reproducibility.
- Integrate ML solutions with microservices/APIs using FastAPI or Flask (a serving sketch follows this posting).
- Work in containerized environments using Docker and Kubernetes.
- Implement logging, monitoring, and alerting for ML systems (e.g., Prometheus, Grafana).
- Champion best practices in code quality, testing, and documentation.

Required Skills:
- 7+ years of experience in Python development and ML/AI-related engineering roles.
- Strong experience with MLOps tools like MLflow, Kubeflow, Airflow, or similar.
- Deep understanding of Docker, Kubernetes, and container orchestration for ML workflows.
- Hands-on experience with cloud platforms (AWS, GCP, Azure) and infrastructure-as-code (Terraform/CDK).
- Familiarity with model deployment and serving frameworks (e.g., Seldon, TorchServe, TensorFlow Serving).
- Good understanding of DevOps practices and CI/CD pipelines (Jenkins, GitHub Actions, GitLab CI).
- Experience with data versioning tools (e.g., DVC) and model lifecycle management.
- Exposure to monitoring tools for ML and infrastructure health.

Experience: 7-12 years
Job Location: Gurgaon

Interested candidates can share their CV with mangani.paramanandhan@bounteous.com; I will call you shortly.
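As a minimal sketch of the FastAPI model-serving integration mentioned above; the model, schema, and endpoint are hypothetical stand-ins:

```python
# Serve a (placeholder) model behind a typed HTTP endpoint.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class Features(BaseModel):
    age: float
    income: float


def predict(features: Features) -> float:
    # Placeholder for a real model loaded from a registry (e.g., MLflow).
    return 0.42


@app.post("/score")
def score(features: Features) -> dict:
    return {"score": predict(features)}
```

Run locally with, for example, `uvicorn main:app --reload` and POST JSON to `/score`.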
Posted 2 days ago
7.0 years
30 - 45 Lacs
Noida, Uttar Pradesh, India
On-site
We are looking for a customer-obsessed, analytical Sr. Staff Engineer to lead the development and growth of our Tax Compliance product suite. In this role, you'll shape innovative digital solutions that simplify and automate tax filing, reconciliation, and compliance workflows for businesses of all sizes. You will join a fast-growing company where you'll work in a dynamic and competitive market, impacting how businesses meet their statutory obligations with speed, accuracy, and confidence. As the Sr. Staff Engineer, you'll work closely with product, DevOps, and data teams to architect reliable systems, drive engineering excellence, and ensure high availability across our platform. We're looking for a technical leader who's not just an expert in building scalable systems, but also passionate about mentoring engineers and shaping the future of fintech.

Responsibilities
- Lead, mentor, and inspire a high-performing engineering team (or operate as a hands-on technical lead).
- Drive the design and development of scalable backend services using Python; experience in Django, FastAPI, and task orchestration systems (an asynchronous-task sketch follows this posting).
- Own and evolve our CI/CD pipelines with Jenkins, ensuring fast, safe, and reliable deployments.
- Architect and manage infrastructure using AWS and Terraform with a DevOps-first mindset.
- Collaborate cross-functionally with product managers, designers, and compliance experts to deliver features that make tax compliance seamless for our users.
- Set and enforce engineering best practices, code quality standards, and operational excellence.
- Stay up to date with industry trends and advocate for continuous improvement in engineering processes.
- Experience in fintech, tax, or compliance industries.
- Familiarity with containerization tools like Docker and orchestration with Kubernetes.
- Background in security, observability, or compliance automation.

Requirements
- 7+ years of software engineering experience, with at least 2+ years in a leadership or principal-level role.
- Deep expertise in Python, including API development, performance optimization, and testing.
- Experience with event-driven architecture and Kafka/RabbitMQ-like systems.
- Strong experience with AWS services (e.g., ECS, Lambda, S3, RDS, CloudWatch).
- Solid understanding of Terraform for infrastructure as code.
- Proficiency with Jenkins or similar CI/CD tooling.
- Comfortable balancing technical leadership with hands-on coding and problem-solving.
- Strong communication skills and a collaborative mindset.

Skills: Python, Django, FastAPI, PostgreSQL, MongoDB, Redis, Apache Kafka, RabbitMQ, AWS Simple Notification Service (SNS), AWS Simple Queuing Service (SQS), Amazon Web Services (AWS), systems design, Apache Airflow, and Celery
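Since Celery and Redis appear in the skills list, here is a minimal sketch of an asynchronous task with retries; the broker URL, task name, and reconciliation logic are hypothetical:

```python
# A Celery task with bounded retries and exponential backoff.
from celery import Celery

app = Celery("compliance", broker="redis://localhost:6379/0")


@app.task(bind=True, max_retries=3)
def reconcile_filing(self, filing_id: str) -> str:
    try:
        # Placeholder for pulling a filing and reconciling it against ledger data.
        return f"reconciled {filing_id}"
    except Exception as exc:
        # Retry transient failures, doubling the delay on each attempt.
        raise self.retry(exc=exc, countdown=2 ** self.request.retries)
```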
Posted 2 days ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Title: Big Data Engineer (AWS-Scala Specialist)
Location: Greater Noida/Hyderabad
Experience: 5-10 Years

About the Role: We are seeking a highly skilled Senior Big Data Engineer with deep expertise in Big Data technologies and AWS cloud services. The ideal candidate will bring strong hands-on experience in designing, architecting, and implementing scalable data engineering solutions while driving innovation within the team.

Key Responsibilities:
- Design, develop, and optimize Big Data architectures leveraging AWS services for large-scale, complex data processing.
- Build and maintain data pipelines using Spark (Scala) for both structured and unstructured datasets.
- Architect and operationalize data engineering and analytics platforms (AWS preferred; Hortonworks, Cloudera, or MapR experience a plus).
- Implement and manage AWS services including EMR, Glue, Kinesis, DynamoDB, Athena, CloudFormation, API Gateway, and S3.
- Work on real-time streaming solutions using Kafka and AWS Kinesis (a publishing sketch follows this posting).
- Support ML model operationalization on AWS (deployment, scheduling, and monitoring).
- Analyze source-system data and data flows to ensure high-quality, reliable data delivery for business needs.
- Write highly efficient SQL queries and support data warehouse initiatives using Apache NiFi, Airflow, and Kylo.
- Collaborate with cross-functional teams to provide technical leadership, mentor team members, and strengthen the data engineering capability.
- Troubleshoot and resolve complex technical issues, ensuring scalability, performance, and security of data solutions.

Mandatory Skills & Qualifications:
✅ 5+ years of solid hands-on experience in Big Data technologies (AWS, Scala, Hadoop, and Spark mandatory)
✅ Proven expertise in Spark with Scala
✅ Hands-on experience with AWS services (EMR, Glue, Lambda, S3, CloudFormation, API Gateway, Athena, Lake Formation)

Share your resume at Aarushi.Shukla@coforge.com if you have experience with the mandatory skills and you are an early joiner.
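For the Kinesis streaming item above, here is a minimal producer-side sketch using boto3 (Python shown for brevity, though the role centers on Scala); the stream name, region, and payload are hypothetical:

```python
# Publish a single record to a Kinesis stream.
import json

import boto3

kinesis = boto3.client("kinesis", region_name="ap-south-1")

record = {"order_id": 42, "status": "CREATED"}

kinesis.put_record(
    StreamName="example-orders",
    Data=json.dumps(record).encode("utf-8"),
    PartitionKey=str(record["order_id"]),  # controls shard assignment
)
```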
Posted 2 days ago
10.0 - 15.0 years
35 - 50 Lacs
Bengaluru
Hybrid
What the job involves
You'll be joining a fast-growing, motivated, and talented team of engineers who are building innovative products that are transforming the mobile marketing industry. Our solutions enable clients to measure the effectiveness of their campaigns in novel and impactful ways. Working closely with our existing team of software engineers, you will contribute to the ongoing enhancement of our product suite. This includes adding new features to existing systems and helping to develop new systems that support upcoming product offerings.

As a Backend Technical Lead, you will play a pivotal role in designing, building, and maintaining scalable, observable backend systems using Golang and modern cloud-native architectures. You will lead hands-on development while also guiding the team in adopting best practices in monitoring, logging, tracing, and system reliability.

Operational excellence is also a part of this role. To ensure the continued stability and performance of our systems, you will be expected to participate in the on-call rotation. This responsibility includes responding to incidents, troubleshooting production issues, and working with the team to implement long-term fixes. Your involvement will be critical in maintaining uptime and providing a seamless experience for our customers. Additionally, you will help drive improvements to our alerting systems and incident response processes to reduce noise and enhance efficiency.

Who you are
Required Skills:
- 10+ years of software engineering experience, including 4+ years with Golang, focused on building high-performance backend systems.
- Hands-on experience with messaging platforms such as Apache Pulsar, NATS JetStream, Kafka, or similar pub/sub or streaming technologies.
- Strong knowledge of observability practices, including instrumentation, OpenTelemetry, Prometheus, Grafana, logging pipelines (e.g., Loki, ELK stack), and distributed tracing.
- Proficient in REST API development, goroutines, and Go concurrency patterns.
- Deep understanding of microservices architecture and containerized deployments using Docker and Kubernetes.
- Experience with cloud platforms such as GCP, AWS, or Azure.
- Strong database skills: MySQL (mandatory), with additional exposure to distributed databases (e.g., Spanner).
- Proven ability to optimize applications for maximum performance and scalability.
- Solid experience in maintaining production systems, with the ability to quickly debug and resolve issues both during development and in live environments.

Optional Skills:
- Understanding of adtech, programmatic advertising, or mobile attribution systems.
- Experience using AI tools (e.g., GitHub Copilot and Claude Code) to assist development.

Soft Skills:
- Demonstrates strong ownership and the ability to work independently within a distributed, remote team.
- Possesses excellent problem-solving skills and a deep appreciation for clean, testable, and maintainable code.
- Eager to learn new technologies and explore unfamiliar domains.
- Comfortable mentoring team members and leading by example.
- Cares deeply about code quality; understands the importance of thorough testing and maintaining high standards.
- Collaborates effectively with a remote, international team.
Posted 2 days ago
3.0 - 8.0 years
5 - 15 Lacs
Hyderabad
Work from Office
Greetings! We are hiring a GCP Data Engineer for the Hyderabad location. Experience: 3 to 8 years. Skills: GCP, PySpark, DAG, Airflow, Python, Teradata (good to have). Job location: Hyderabad (work from office). Interested candidates can share their profiles at anmol.bhatia@incedoinc.com.
Posted 2 days ago
3.0 - 7.0 years
15 - 30 Lacs
Bengaluru
Hybrid
What the job involves
You will be joining a fast-growing team of motivated and talented engineers, helping us to build and enhance a suite of innovative products that are changing the mobile marketing industry by enabling our clients to measure the effectiveness of their campaigns in a completely novel way. Working closely with our existing team of software engineers, you will contribute to improving our product suite. You will do this by adding new features to our existing systems and helping create new systems to facilitate new product offerings. You'll work under the mentorship of a lead software engineer who will support you and manage your onboarding and continuous professional development needs. We operate a blameless culture with a flat organizational structure where all input and ideas are welcome; we make decisions fast and value good ideas over seniority, so everyone in the team can make a real difference in product evolution.

Who you are
Required Skills:
- You have at least 3-7 years of commercial experience with software engineering in Golang, including REST API development and a strong understanding of data structures and concurrency using goroutines.
- Hands-on experience with messaging platforms such as Apache Pulsar, Kafka, or Pub/Sub.
- Good knowledge of observability practices, including instrumentation, OpenTelemetry, Prometheus, Grafana, and logging pipelines.
- Experience in relational databases (e.g. MySQL) is a must; exposure to GraphQL and NoSQL databases (e.g. Mongo) would be an advantage.
- You've worked with microservice architectures with a good appreciation of performance and quality requirements.
- Hands-on experience with Docker containers and Kubernetes.
- Experience working with any cloud platform such as GCP, AWS, or Azure.
- Experience in data engineering technologies like ELT/ETL workflows, Kafka, Airflow, etc.

Optional Skills:
- Any experience with C#, Python, or NodeJS.
- Understanding of the adtech landscape and familiarity with mobile advertising measurement solutions is highly desirable.
- Experience using AI-powered coding assistants like GitHub Copilot, etc.

Soft Skills:
- You enjoy new challenges and gain satisfaction from solving interesting problems in a wide range of areas.
- You care deeply about the quality of your work; you are sensitive to the importance of testing your code thoroughly and maintaining it to a high standard.
- You don't need to be micromanaged; you'll ask for help when you need it, but you can apply initiative to solve problems on your own.
- You are enthusiastic about broadening your skill set; you are willing and able to quickly learn new techniques and technologies.
- You know how to collaborate effectively with a remote, international team.
Posted 2 days ago
10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we're helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.

Title And Summary
Manager, Software Engineer

Overview
We are the global technology company behind the world's fastest payments processing network. We are a vehicle for commerce, a connection to financial systems for the previously excluded, a technology innovation lab, and the home of Priceless®. We ensure every employee has the opportunity to be a part of something bigger and to change lives. We believe as our company grows, so should you. We believe in connecting everyone to endless, priceless possibilities.

Our Team Within Mastercard – Data & Services
The Data & Services team is a key differentiator for Mastercard, providing the cutting-edge services that are used by some of the world's largest organizations to make multi-million dollar decisions and grow their businesses. Focused on thinking big and scaling fast around the globe, this agile team is responsible for end-to-end solutions for a diverse global customer base. Centered on data-driven technologies and innovation, these services include payments-focused consulting, loyalty and marketing programs, business Test & Learn experimentation, and data-driven information and risk management services.

Targeting Analytics Program
Within the D&S Technology Team, the Targeting Analytics program is a relatively new program comprised of a rich set of products that provide accurate perspectives on credit risk, portfolio optimization, and ad insights. Currently, we are enhancing our customer experience with new user interfaces, moving to API-based data publishing to allow for seamless integration in other Mastercard products and externally, utilizing new data sets and algorithms to further analytic capabilities, and generating scalable big data processes.

We are seeking an innovative Lead Software Engineer to lead our team in designing and building a full-stack web application and data pipelines. The goal is to deliver custom analytics efficiently, leveraging machine learning and AI solutions. This individual will thrive in a fast-paced, agile environment and partner closely with other areas of the business to build and enhance solutions that drive value for our customers. Engineers work in small, flexible teams. Every team member contributes to designing, building, and testing features. The range of work you will encounter varies from building intuitive, responsive UIs to designing backend data models, architecting data flows, and beyond. There are no rigid organizational structures, and each team uses processes that work best for its members and projects.

Here are a few examples of products in our space:
- Portfolio Optimizer (PO) is a solution that leverages Mastercard's data assets and analytics to allow issuers to identify and increase revenue opportunities within their credit and debit portfolios.
- Audiences uses anonymized and aggregated transaction insights to offer targeting segments that have a high likelihood to make purchases within a category, allowing for more effective campaign planning and activation.
- Credit Risk products are a new suite of APIs and tooling that provide lenders real-time access to KPIs and insights, serving thousands of clients to make smarter risk decisions using Mastercard data.

Help found a new, fast-growing engineering team!

Position Responsibilities
As a Lead Software Engineer, you will:
- Lead the scoping, design and implementation of complex features
- Lead and push the boundaries of analytics and powerful, scalable applications
- Design and implement intuitive, responsive UIs that allow issuers to better understand data and analytics
- Build and maintain analytics and data models to enable performant and scalable products
- Ensure a high-quality code base by writing and reviewing performant, well-tested code
- Mentor junior software engineers and teammates
- Drive innovative improvements to team development processes
- Partner with Product Managers and Customer Experience Designers to develop a deep understanding of users and use cases, and apply that knowledge to scoping and building new modules and features
- Collaborate across teams with exceptional peers who are passionate about what they do

Ideal Candidate Qualifications
- 10+ years of engineering experience in an agile production environment.
- Experience leading the design and implementation of complex features in full-stack applications.
- Proficiency with object-oriented languages, preferably Java/Spring.
- Proficiency with modern front-end frameworks, preferably React with Redux and TypeScript.
- High proficiency in using Python or Scala, Spark, and Hadoop platforms and tools (Hive, Impala, Airflow, NiFi, Sqoop).
- Fluency in the use of Git and Jenkins.
- Solid experience with RESTful APIs and JSON/SOAP-based APIs.
- Solid experience with SQL, multi-threading, and message queuing.
- Experience building and deploying production-level data-driven applications and data processing workflows/pipelines and/or implementing machine learning systems at scale in Java, Scala, or Python, delivering analytics across all phases.

Desirable Capabilities
- Hands-on experience with cloud-native development using microservices.
- Hands-on experience with Kafka and ZooKeeper.
- Knowledge of security concepts and protocols in enterprise applications.
- Expertise with automated E2E and unit testing frameworks.
- Knowledge of Splunk or other alerting and monitoring solutions.

Core Competencies
- Strong technologist eager to learn new technologies and frameworks.
- Experience coaching and mentoring junior teammates.
- Customer-centric development approach.
- Passion for analytical/quantitative problem solving.
- Ability to identify and implement improvements to team development processes.
- Strong collaboration skills with experience collaborating across many people, roles, and geographies.
- Motivation, creativity, self-direction, and desire to thrive on small project teams.
- Superior academic record with a degree in Computer Science or a related technical field.
- Strong written and verbal English communication skills.

#AI3

Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization and, therefore, it is expected that every person working for, or on behalf of, Mastercard is responsible for information security and must:
- Abide by Mastercard's security policies and practices;
- Ensure the confidentiality and integrity of the information being accessed;
- Report any suspected information security violation or breach; and
- Complete all periodic mandatory security trainings in accordance with Mastercard's guidelines.
Posted 2 days ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must-have skills: AWS Architecture
Good-to-have skills: Python (Programming Language)
Minimum 5 years of experience is required
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to effectively migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide innovative solutions that enhance data accessibility and usability. We are looking for an AWS Data Architect to lead the design and implementation of scalable, cloud-native data platforms. The ideal candidate will have deep expertise in AWS data services, along with hands-on proficiency in Python and PySpark for building robust data pipelines and processing frameworks.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge.
- Continuously evaluate and improve data processes to ensure efficiency and effectiveness.
- Design and implement enterprise-scale data lake and data warehouse solutions on AWS.
- Lead the development of ELT/ETL pipelines using AWS Glue, EMR, Lambda, and Step Functions, with Python and PySpark (a minimal Lambda sketch follows this posting).
- Work closely with data engineers, analysts, and business stakeholders to define data architecture strategy.
- Define and enforce data modeling, metadata, security, and governance best practices.
- Create reusable architectural patterns and frameworks to streamline future development.
- Provide architectural leadership for migrating legacy data systems to AWS.
- Optimize performance, cost, and scalability of data processing workflows.

Professional & Technical Skills:
- Must-have skills: Proficiency in AWS Architecture.
- Strong understanding of data modeling and database design principles.
- Experience with ETL tools and data integration techniques.
- Familiarity with data warehousing concepts and technologies.
- Knowledge of programming languages such as Python or Java for data processing.
- AWS services: S3, Glue, Athena, Redshift, EMR, Lambda, IAM, Step Functions, CloudFormation or Terraform.
- Languages: Python, PySpark, SQL.
- Big data: Apache Spark, Hive, Delta Lake.
- Orchestration & DevOps: Airflow, Jenkins, Git, CI/CD pipelines.
- Security & governance: AWS Lake Formation, Glue Catalog, encryption, RBAC.
- Visualization: Exposure to BI tools like QuickSight, Tableau, or Power BI is a plus.

Additional Information:
- The candidate should have a minimum of 5 years of experience in AWS Architecture.
- This position is based at our Pune office.
- 15 years of full-time education is required.
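For the Glue/Lambda/Step Functions pipelines mentioned above, here is a minimal AWS Lambda handler reacting to an S3 put event; the bucket, key, and downstream hand-off are hypothetical:

```python
# React to a new S3 object; a real pipeline would validate the file and
# hand off to Glue or a Step Functions workflow.
import json
import urllib.parse

import boto3

s3 = boto3.client("s3")


def handler(event, context):
    # S3 put events carry the bucket and object key of the new file.
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = urllib.parse.unquote_plus(record["object"]["key"])

    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    print(f"received {len(body)} bytes from s3://{bucket}/{key}")
    return {"statusCode": 200, "body": json.dumps({"bytes": len(body)})}
```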
Posted 2 days ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
We are looking for a skilled Data Engineer with a solid background in building and maintaining scalable data pipelines and systems. You will work closely with data analysts, engineering teams, and business stakeholders to ensure seamless data flow across platforms.

Responsibilities
- Design, build, and optimize robust, scalable data pipelines (batch and streaming).
- Develop ETL/ELT processes using tools like Airflow, DBT, or custom scripts.
- Integrate data from various sources (e.g., APIs, S3, databases, SaaS tools).
- Collaborate with analytics and product teams to ensure high-quality datasets.
- Monitor pipeline performance and troubleshoot data quality or latency issues.
- Work with cloud data warehouses (e.g., Redshift, Snowflake, BigQuery).
- Implement data validation, error handling, and alerting for production jobs (a validation sketch follows this posting).
- Maintain documentation for pipelines, schemas, and data sources.

Requirements
- 3+ years of experience in Data Engineering or similar roles.
- Strong in SQL, with experience in data modeling and transformation.
- Hands-on experience with Python or Scala for scripting and data workflows.
- Experience working with Airflow, AWS (S3, Redshift, Lambda), or equivalent cloud tools.
- Knowledge of version control (Git) and CI/CD workflows.
- Strong problem-solving and communication skills.

Good To Have
- Experience with DBT, Kafka, or real-time data processing.
- Familiarity with BI tools (e.g., Tableau, Looker, Power BI).
- Exposure to Docker, Kubernetes, or DevOps practices.

This job was posted by Harika K from Invictus.
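As a minimal sketch of the data validation and alerting item above; the checks, column names, and alert hook are hypothetical:

```python
# Fail fast on bad batches and surface the reason to an alerting channel.
import logging

import pandas as pd

logger = logging.getLogger("pipeline.validation")


def validate_orders(df: pd.DataFrame) -> pd.DataFrame:
    problems = []
    if df["order_id"].isna().any():
        problems.append("null order_id values")
    if (df["amount"] < 0).any():
        problems.append("negative amounts")
    if df.duplicated(subset=["order_id"]).any():
        problems.append("duplicate order_id values")

    if problems:
        # In production this would page or post to an alerting channel.
        logger.error("validation failed: %s", "; ".join(problems))
        raise ValueError(f"orders failed validation: {problems}")
    return df
```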
Posted 2 days ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
We are looking for a skilled AWS Data Engineer to design, develop, and maintain scalable data pipelines and cloud-based data infrastructure on Amazon Web Services (AWS). The ideal candidate will work closely with data scientists, analysts, and software engineers to ensure high availability and performance of data solutions across the organization.
Responsibilities:
- Build and support applications using speech-to-text AWS services like Transcribe and Comprehend, along with Bedrock.
- Work with BI tools like QuickSight.
- Design, build, and manage scalable data pipelines using AWS services (e.g., Glue, Lambda, Step Functions, S3, EMR, Kinesis, Snowflake).
- Optimize data storage and retrieval for large-scale datasets in data lakes or data warehouses.
- Monitor, debug, and optimize the performance of data jobs and workflows.
- Ensure data quality, consistency, and security across environments.
- Collaborate with analytics, engineering, and business teams to understand data needs.
- Automate infrastructure deployment using IaC tools like CloudFormation or Terraform.
- Apply best practices for cloud cost optimization, data governance, and DevOps.
- Stay current with AWS services and recommend improvements to data architecture.
- Understanding of machine learning pipelines and MLOps (nice to have).
Requirements:
- Bachelor's degree in computer science or a related field.
- 5+ years of experience as a Data Engineer, with at least 3 years focused on AWS.
- Strong experience with AWS services, including Transcribe, Bedrock, and QuickSight.
- Familiarity with Glue, S3, Snowflake, Lambda, Step Functions, Kinesis, Athena, EC2/EMR, Power BI, or Tableau.
- Proficient in Python, PySpark, or Scala for data engineering tasks.
- Hands-on experience with SQL and data modeling.
- Familiarity with CI/CD pipelines and version control (e.g., Git, CodePipeline).
- Experience with orchestration tools (e.g., Airflow, Step Functions).
- Knowledge of data security, privacy, and compliance standards (GDPR, HIPAA, etc.).
Good To Have Skills:
- AWS certifications (e.g., AWS Certified Data Analytics - Specialty, AWS Certified Solutions Architect).
- Experience with containerization (Docker, ECS, EKS).
- Experience working in Agile/Scrum environments.
This job was posted by Shailendra Singh from PearlShell Softech.
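Since the posting calls out speech-to-text work with Amazon Transcribe, a minimal boto3 sketch is shown below; the job name, buckets, and region are invented, and a production pipeline would typically react to job-completion events rather than poll.

import boto3

transcribe = boto3.client("transcribe", region_name="us-east-1")

# Start an asynchronous transcription job for a recording already in S3
# (job name, bucket, and key are invented).
transcribe.start_transcription_job(
    TranscriptionJobName="call-1234",
    Media={"MediaFileUri": "s3://example-audio-bucket/calls/call-1234.wav"},
    MediaFormat="wav",
    LanguageCode="en-US",
    OutputBucketName="example-transcripts-bucket",
)

# Check status once; a real pipeline would react to EventBridge job-state
# events or orchestrate this with Step Functions instead of polling.
status = transcribe.get_transcription_job(TranscriptionJobName="call-1234")
print(status["TranscriptionJob"]["TranscriptionJobStatus"])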
Posted 2 days ago
5.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
As a Senior Data Engineer, you will architect, build, and maintain our data infrastructure that powers critical business decisions. You will work closely with data scientists, analysts, and product teams to design and implement scalable solutions for data processing, storage, and retrieval. Your work will directly impact our ability to leverage data for business intelligence, machine learning initiatives, and customer insights.
Responsibilities:
- Design, build, and maintain our end-to-end data infrastructure on AWS and GCP cloud platforms.
- Develop and optimize ETL/ELT pipelines to process large volumes of data from multiple sources.
- Build and support data pipelines for reporting, analytics, and machine learning applications.
- Implement and manage streaming data solutions using Kafka and other technologies.
- Design and optimize database schemas and data models in ClickHouse and other databases.
- Develop and maintain data workflows using Apache Airflow and similar orchestration tools.
- Write efficient, maintainable, and scalable code using PySpark and other data processing frameworks.
- Collaborate with data scientists to implement ML infrastructure for model training and deployment.
- Ensure data quality, reliability, and security across all data platforms.
- Monitor data pipelines and implement proactive alerting systems.
- Troubleshoot and resolve data infrastructure issues.
- Document data flows, architectures, and processes.
- Stay current with industry trends and emerging technologies in data engineering.
Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related technical field (Master's preferred).
- 5+ years of experience in data engineering roles.
- Strong expertise in AWS and/or GCP cloud platforms and services.
- Proficiency in building data pipelines using modern ETL/ELT tools and frameworks.
- Experience with stream processing technologies such as Kafka.
- Hands-on experience with ClickHouse or similar analytical databases.
- Strong programming skills in Python and experience with PySpark.
- Experience with workflow orchestration tools like Apache Airflow.
- Solid understanding of data modeling, data warehousing concepts, and dimensional modeling.
- Knowledge of SQL and NoSQL databases.
- Strong problem-solving skills and attention to detail.
- Excellent communication skills and ability to work in cross-functional teams.
- Experience in D2C, e-commerce, or retail industries.
- Knowledge of data visualization tools (Tableau, Looker, Power BI).
- Experience with real-time analytics solutions.
- Familiarity with CI/CD practices for data pipelines.
- Experience with containerization technologies (Docker, Kubernetes).
- Understanding of data governance and compliance requirements.
- Experience with MLOps or ML engineering.
Technologies:
- Cloud Platforms: AWS (S3, Redshift, EMR, Lambda), GCP (BigQuery, Dataflow, Dataproc).
- Data Processing: Apache Spark, PySpark, Python, SQL.
- Streaming: Apache Kafka, Kinesis.
- Data Storage: ClickHouse, S3, BigQuery, PostgreSQL, MongoDB.
- Orchestration: Apache Airflow.
- Version Control: Git.
- Containerization: Docker, Kubernetes (optional).
This job was posted by Sidharth Patra from Traya Health.
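As an illustration of the Kafka-based streaming ingestion mentioned above, here is a minimal consumer loop using the confluent-kafka client; the broker address, group id, and topic are invented, and the ClickHouse insert is only noted in a comment.

from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "broker:9092",  # invented broker address
    "group.id": "events-loader",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["user_events"])  # invented topic

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None:
            continue  # no message arrived within the timeout
        if msg.error():
            raise RuntimeError(msg.error())
        # A real pipeline would decode, batch, and bulk-insert these events
        # into an analytical store such as ClickHouse.
        print(msg.value().decode("utf-8"))
finally:
    consumer.close()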
Posted 2 days ago
2.0 - 4.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
We are looking for a highly skilled and hands-on Senior Data Engineer to join our growing data engineering practice in Mumbai. This role requires deep technical expertise in building and managing enterprise-grade data pipelines, with a primary focus on Amazon Redshift, AWS Glue, and data orchestration using Airflow or Step Functions. You will be responsible for building scalable, high-performance data workflows that ingest and process multi-terabyte-scale data across complex, concurrent environments. The ideal candidate thrives on solving performance bottlenecks, has led or participated in data warehouse migrations (e.g., Snowflake to Redshift), and is confident interfacing with business stakeholders to translate requirements into robust data solutions.
Responsibilities:
- Design, develop, and maintain high-throughput ETL/ELT pipelines using AWS Glue (PySpark), orchestrated via Apache Airflow or AWS Step Functions.
- Own and optimize large-scale Amazon Redshift clusters and manage high-concurrency workloads for a very large user base.
- Lead and contribute to migration projects from Snowflake or traditional RDBMS to Redshift, ensuring minimal downtime and robust validation.
- Integrate and normalize data from heterogeneous sources, including REST APIs, AWS Aurora (MySQL/Postgres), streaming inputs, and flat files.
- Implement intelligent caching strategies and leverage EC2 and serverless compute (Lambda, Glue) for custom transformations and processing at scale.
- Write advanced SQL for analytics, data reconciliation, and validation, demonstrating strong SQL development and tuning experience (a short illustrative sketch follows this listing).
- Implement comprehensive monitoring, alerting, and logging for all data pipelines to ensure reliability, availability, and cost optimization.
- Collaborate directly with product managers, analysts, and client-facing teams to gather requirements and deliver insights-ready datasets.
- Champion data governance, security, and lineage, ensuring data is auditable and well-documented across all environments.
Requirements:
- 2-4 years of core data engineering experience, with hands-on Amazon Redshift performance tuning and large-scale cluster management.
- Demonstrated experience handling multi-terabyte Redshift clusters, concurrent query loads, and complex workload segmentation and queue priorities.
- Strong experience with AWS Glue (PySpark) for large-scale ETL jobs.
- Solid understanding and implementation experience of workflow orchestration using Apache Airflow or AWS Step Functions.
- Strong proficiency in Python, advanced SQL, and data modeling concepts.
- Familiarity with CI/CD pipelines, Git, DevOps processes, and infrastructure-as-code concepts.
- Experience with Amazon Athena, Lake Formation, or S3-based data lakes.
- Hands-on participation in Snowflake, BigQuery, or Teradata migration projects.
- AWS certifications such as AWS Certified Data Analytics - Specialty or AWS Certified Solutions Architect - Associate/Professional.
- Exposure to real-time streaming architectures or Lambda architectures.
Soft Skills & Expectations:
- Excellent communication skills: able to confidently engage with both technical and non-technical stakeholders, including clients.
- Strong problem-solving mindset and keen attention to performance, scalability, and reliability.
- Demonstrated ability to work independently, lead tasks, and take ownership of large-scale systems.
- Comfortable working in a fast-paced, dynamic, and client-facing environment.
This job was posted by Rituza Rani from Oneture Technologies.
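A minimal sketch of the reconciliation-style SQL referenced in the listing, run against Redshift over its PostgreSQL-compatible interface via psycopg2; the endpoint, credentials, and table names are all invented placeholders.

import psycopg2  # Redshift speaks the PostgreSQL wire protocol

# Reconciliation query comparing a migrated table against its staging
# snapshot; schema and table names are invented.
RECON_SQL = """
SELECT 'orders' AS table_name,
       (SELECT COUNT(*) FROM analytics.orders)        AS target_rows,
       (SELECT COUNT(*) FROM staging.orders_snapshot) AS source_rows
"""

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",  # invented endpoint
    port=5439,
    dbname="analytics",
    user="etl_user",
    password="...",  # deliberately elided
)
with conn, conn.cursor() as cur:
    cur.execute(RECON_SQL)
    for table_name, target_rows, source_rows in cur.fetchall():
        assert target_rows == source_rows, f"{table_name}: row-count mismatch"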
Posted 2 days ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Role - Python Developer
Exp - 5 to 8 yrs
Location - Bengaluru / Chennai
Mode - 100% WFO
NP - Immediate joiners to candidates serving notice until 15th Aug (supporting documents for the last working date must be provided); bench candidates will not be considered.
Candidates should be available for a virtual interview.
Skills - Python development with API integration and Airflow
Contact - Grace
Call / WhatsApp - 6385810755
Email - grace.h@cortexconsultants.com
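"Python development with API integration" in listings like this usually means work along the lines of the paginated REST pull sketched below; the endpoint and page-parameter convention are hypothetical.

import requests

def fetch_all(url: str) -> list[dict]:
    """Pull every record from a paginated JSON endpoint (page-numbered convention)."""
    records, page = [], 1
    while True:
        resp = requests.get(url, params={"page": page}, timeout=30)
        resp.raise_for_status()  # fail loudly on HTTP errors
        batch = resp.json()
        if not batch:  # an empty page signals the end in this convention
            return records
        records.extend(batch)
        page += 1

# Example call against an invented endpoint:
# orders = fetch_all("https://api.example.com/v1/orders")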
Posted 2 days ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Summary: We are looking for a skilled and experienced Data Engineer with over 5 years of experience in data engineering and data migration projects. The ideal candidate should possess strong expertise in SQL, Python, data modeling, data warehousing, and ETL pipeline development. Experience with big data tools like Hadoop and Spark, along with AWS services such as Redshift, S3, Glue, EMR, and Lambda, is essential. This role provides an excellent opportunity to work on large-scale data solutions, enabling data-driven decision-making and operational excellence.
Key Responsibilities:
• Design, build, and maintain scalable data pipelines and ETL processes.
• Develop and optimize data models and data warehouse architectures.
• Implement and manage big data technologies and cloud-based data solutions.
• Perform data migration, data transformation, and integration from multiple sources.
• Collaborate with data scientists, analysts, and business teams to understand data needs and deliver solutions.
• Ensure data quality, consistency, and security across all data pipelines and storage systems.
• Optimize performance and manage cost-efficient AWS cloud resources.
Basic Qualifications:
• Master's degree in Computer Science, Engineering, Analytics, Mathematics, Statistics, IT, or equivalent.
• 5+ years of experience in Data Engineering and data migration projects.
• Proficient in SQL and Python for data processing and analysis.
• Strong experience in data modeling, data warehousing, and building data pipelines.
• Hands-on experience with big data technologies like Hadoop, Spark, etc.
• Expertise in AWS services including Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM.
• Understanding of ETL development best practices and principles.
Preferred Qualifications:
• Knowledge of data security and data privacy best practices.
• Experience with DevOps and CI/CD practices related to data workflows.
• Familiarity with data lake architectures and real-time data streaming.
• Strong problem-solving abilities and attention to detail.
• Excellent verbal and written communication skills.
• Ability to work independently and in a team-oriented environment.
Good to Have:
• Experience with orchestration tools like Airflow or Step Functions.
• Exposure to BI/Visualization tools like QuickSight, Tableau, or Power BI.
• Understanding of data governance and compliance standards.
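For the S3-to-Redshift loading this role involves, a common pattern is issuing a COPY statement through the Redshift Data API; a minimal boto3 sketch follows, with a hypothetical cluster, IAM role ARN, schema, and S3 path.

import boto3

client = boto3.client("redshift-data", region_name="us-east-1")

# COPY is the idiomatic bulk-load path from S3 into Redshift; the cluster
# name, role ARN, schema, and S3 path are all invented.
copy_sql = """
COPY analytics.orders
FROM 's3://example-curated-bucket/orders/'
IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
FORMAT AS PARQUET
"""

resp = client.execute_statement(
    ClusterIdentifier="analytics-cluster",
    Database="analytics",
    DbUser="etl_user",
    Sql=copy_sql,
)
# The call is asynchronous; poll describe_statement with this id for status.
print(resp["Id"])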
Posted 2 days ago
6.0 - 11.0 years
6 - 10 Lacs
Hyderabad
Work from Office
About the Role: In this opportunity, as Senior Data Engineer, you will:
Develop and maintain data solutions using resources such as dbt, Alteryx, and Python.
Design and optimize data pipelines, ensuring efficient data flow and processing.
Work extensively with databases, SQL, and various data formats including JSON, XML, and CSV (a short illustrative sketch follows this listing).
Tune and optimize queries to enhance performance and reliability.
Develop high-quality code in SQL, dbt, and Python, adhering to best practices.
Understand and implement data automation and API integrations.
Leverage AI capabilities to enhance data engineering practices.
Understand integration points related to upstream and downstream requirements.
Proactively manage tasks and work towards completion against tight deadlines.
Analyze existing processes and offer suggestions for improvement.
About You: You're a fit for the role of Senior Data Engineer if your background includes:
Strong interest and knowledge in data engineering principles and methods.
6+ years of experience developing data solutions or pipelines.
6+ years of hands-on experience with databases and SQL.
2+ years of experience programming in an additional language.
2+ years of experience in query tuning and optimization.
Experience working with SQL, JSON, XML, and CSV content.
Understanding of data automation and API integration.
Familiarity with AI capabilities and their application in data engineering.
Ability to adhere to best practices for developing programmatic solutions.
Strong problem-solving skills and ability to work independently.
#LI-SS6
What's in it For You?
Hybrid Work Model: We've adopted a flexible hybrid working environment (2-3 days a week in the office depending on the role) for our office-based roles while delivering a seamless experience that is digitally and physically connected.
Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset. This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance.
Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow's challenges and deliver real-world solutions. Our Grow My Way programming and skills-first approach ensures you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future.
Industry Competitive Benefits: We offer comprehensive benefit plans to include flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing.
Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more. We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together.
Social Impact: Make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives.
Making a Real-World Impact: We are one of the few companies globally that helps its customers pursue justice, truth, and transparency.
Together, with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world. Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media. Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world-leading provider of trusted journalism and news. We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments. At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward. As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals. To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace. We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law. More information on requesting an accommodation here. Learn more on how to protect yourself from fraudulent job postings here. More information about Thomson Reuters can be found on thomsonreuters.com.
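The sketch flagged in the listing above: the role works across SQL, JSON, XML, and CSV content, and a recurring task is normalizing the same record from each format into one shape. The "order" record below is invented for illustration, using only the Python standard library.

import csv
import json
import xml.etree.ElementTree as ET

# Normalize the same invented "order" record from three source formats
# into one common dict shape.

def from_json(text: str) -> dict:
    rec = json.loads(text)
    return {"order_id": rec["id"], "total": float(rec["total"])}

def from_xml(text: str) -> dict:
    root = ET.fromstring(text)
    return {"order_id": root.findtext("id"), "total": float(root.findtext("total"))}

def from_csv(text: str) -> list[dict]:
    rows = csv.DictReader(text.splitlines())
    return [{"order_id": r["id"], "total": float(r["total"])} for r in rows]

print(from_json('{"id": "A1", "total": "19.99"}'))
print(from_xml("<order><id>A1</id><total>19.99</total></order>"))
print(from_csv("id,total\nA1,19.99"))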
Posted 2 days ago
4.0 - 6.0 years
5 - 9 Lacs
Hyderabad
Work from Office
Are you excited by the prospect of wrangling data, helping develop information systems/sources/tools, and shaping the way businesses make decisions? The Go-To-Markets Data Analytics team is looking for a skilled Data Engineer who is motivated to deliver top-notch data-engineering solutions to support business intelligence, data science, and self-service data solutions.
About the Role: In this role as a Data Engineer, you will:
Design, develop, optimize, and automate data pipelines that blend and transform data across different sources to help drive business intelligence, data science, and self-service data solutions.
Work closely with data scientists and data visualization teams to understand data requirements to ensure the availability of high-quality data for analytics, modelling, and reporting.
Build pipelines that source, transform, and load data that's both structured and unstructured, keeping in mind data security and access controls.
Explore large volumes of data with curiosity and conviction.
Contribute to the strategy and architecture of data management systems and solutions.
Proactively troubleshoot and resolve data-related and performance bottlenecks in a timely manner.
Be open to learning and working on emerging technologies in the data engineering, data science, and cloud computing space.
Have the curiosity to interrogate data, conduct independent research, utilize various techniques, and tackle ambiguous problems.
Shift Timings: 12 PM to 9 PM (IST)
Work from office for 2 days in a week (Mandatory)
About You: You're a fit for the role of Data Engineer if your background includes:
At least 4-6 years of total work experience, with at least 2+ years in data engineering or analytics domains.
A degree in data analytics, data science, computer science, software engineering, or another data-centric discipline.
SQL proficiency is a must.
Experience with data pipeline and transformation tools such as dbt, Glue, FiveTran, Alteryx, or similar solutions.
Experience using cloud-based data warehouse solutions such as Snowflake, Redshift, or Azure.
Experience with orchestration tools like Airflow or Dagster.
Preferred experience using Amazon Web Services (S3, Glue, Athena, QuickSight); a short illustrative sketch follows this listing.
Data modelling knowledge of various schemas like snowflake and star.
Has built data pipelines and other custom automated solutions to speed the ingestion, analysis, and visualization of large volumes of data.
Knowledge of building ETL workflows, database design, and query optimization.
Experience with a scripting language like Python.
Works well within a team and collaborates with colleagues across domains and geographies.
Excellent oral, written, and visual communication skills.
A demonstrable ability to assimilate new information thoroughly and quickly.
Strong logical and scientific approach to problem-solving.
Can articulate complex results in a simple and concise manner to all levels within the organization.
#LI-GS2
What's in it For You?
Hybrid Work Model: We've adopted a flexible hybrid working environment (2-3 days a week in the office depending on the role) for our office-based roles while delivering a seamless experience that is digitally and physically connected.
Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset.
This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance.
Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow's challenges and deliver real-world solutions. Our Grow My Way programming and skills-first approach ensures you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future.
Industry Competitive Benefits: We offer comprehensive benefit plans to include flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing.
Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more. We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together.
Social Impact: Make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives.
Making a Real-World Impact: We are one of the few companies globally that helps its customers pursue justice, truth, and transparency. Together, with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world. Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media. Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world-leading provider of trusted journalism and news. We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments. At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward. As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals. To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace. We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law. More information on requesting an accommodation here. Learn more on how to protect yourself from fraudulent job postings here.
More information about Thomson Reuters can be found on thomsonreuters.com.
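The sketch flagged in the listing above: querying S3-backed tables with Amazon Athena from Python via boto3. The database, table, and output location are invented placeholders.

import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Submit a query over S3-backed tables; database, table, and output
# location are invented.
resp = athena.start_query_execution(
    QueryString="SELECT region, COUNT(*) AS n FROM sales_db.orders GROUP BY region",
    QueryExecutionContext={"Database": "sales_db"},
    ResultConfiguration={"OutputLocation": "s3://example-query-results/"},
)
query_id = resp["QueryExecutionId"]

# Simple polling loop; an orchestration tool would handle this more robustly.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)[
        "QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    print(rows[:3])  # header row plus first results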
Posted 2 days ago
6.0 - 7.0 years
6 - 10 Lacs
Hyderabad
Work from Office
Are you excited by the prospect of wrangling data, helping develop information systems/sources/tools, and shaping the way businesses make decisions? The Go-To-Markets Data Analytics team is looking for a skilled Senior Data Engineer who is motivated to deliver top-notch data-engineering solutions to support business intelligence, data science, and self-service data solutions.
About the Role: In this role as a Senior Data Engineer, you will:
Design, develop, optimize, and automate data pipelines that blend and transform data across different sources to help drive business intelligence, data science, and self-service data solutions.
Work closely with data scientists and data visualization teams to understand data requirements to ensure the availability of high-quality data for analytics, modelling, and reporting.
Build pipelines that source, transform, and load data that's both structured and unstructured, keeping in mind data security and access controls.
Explore large volumes of data with curiosity and conviction.
Contribute to the strategy and architecture of data management systems and solutions.
Proactively troubleshoot and resolve data-related and performance bottlenecks in a timely manner.
Be open to learning and working on emerging technologies in the data engineering, data science, and cloud computing space.
Have the curiosity to interrogate data, conduct independent research, utilize various techniques, and tackle ambiguous problems.
Shift Timings: 12 PM to 9 PM (IST)
Work from office for 2 days in a week (Mandatory)
About You: You're a fit for the role of Senior Data Engineer if your background includes:
At least 6-7 years of total work experience, with at least 3+ years in data engineering or analytics domains.
A degree in data analytics, data science, computer science, software engineering, or another data-centric discipline.
SQL proficiency is a must.
Experience with data pipeline and transformation tools such as dbt, Glue, FiveTran, Alteryx, or similar solutions.
Experience using cloud-based data warehouse solutions such as Snowflake, Redshift, or Azure.
Experience with orchestration tools like Airflow or Dagster.
Preferred experience using Amazon Web Services (S3, Glue, Athena, QuickSight).
Data modelling knowledge of various schemas like snowflake and star (a short illustrative sketch follows this listing).
Has built data pipelines and other custom automated solutions to speed the ingestion, analysis, and visualization of large volumes of data.
Knowledge of building ETL workflows, database design, and query optimization.
Experience with a scripting language like Python.
Works well within a team and collaborates with colleagues across domains and geographies.
Excellent oral, written, and visual communication skills.
A demonstrable ability to assimilate new information thoroughly and quickly.
Strong logical and scientific approach to problem-solving.
Can articulate complex results in a simple and concise manner to all levels within the organization.
#LI-GS2
What's in it For You?
Hybrid Work Model: We've adopted a flexible hybrid working environment (2-3 days a week in the office depending on the role) for our office-based roles while delivering a seamless experience that is digitally and physically connected.
Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset.
This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance.
Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow's challenges and deliver real-world solutions. Our Grow My Way programming and skills-first approach ensures you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future.
Industry Competitive Benefits: We offer comprehensive benefit plans to include flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing.
Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more. We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together.
Social Impact: Make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives.
Making a Real-World Impact: We are one of the few companies globally that helps its customers pursue justice, truth, and transparency. Together, with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world. Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media. Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world-leading provider of trusted journalism and news. We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments. At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward. As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals. To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace. We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law. More information on requesting an accommodation here. Learn more on how to protect yourself from fraudulent job postings here.
More information about Thomson Reuters can be found on thomsonreuters.com.
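The sketch flagged in the listing above: star and snowflake schemas organize a central fact table around dimension tables. The self-contained example below uses an in-memory SQLite database to show the basic star-schema shape (one fact table joined to a date dimension) and a typical rollup query; the tables and values are invented for illustration.

import sqlite3

# A tiny star schema: one fact table keyed to a date dimension.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_date (date_key INTEGER PRIMARY KEY, month TEXT);
CREATE TABLE fact_sales (date_key INTEGER REFERENCES dim_date, amount REAL);
INSERT INTO dim_date VALUES (20240101, '2024-01'), (20240201, '2024-02');
INSERT INTO fact_sales VALUES (20240101, 100.0), (20240101, 50.0), (20240201, 75.0);
""")

# A typical BI rollup: aggregate the fact grain up to a dimension attribute.
for month, total in conn.execute("""
    SELECT d.month, SUM(f.amount)
    FROM fact_sales f JOIN dim_date d USING (date_key)
    GROUP BY d.month ORDER BY d.month
"""):
    print(month, total)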
Posted 2 days ago