0 years
4 - 8 Lacs
Hyderābād
On-site
Ready to shape the future of work? At Genpact, we don't just adapt to change—we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's AI Gigafactory, our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment.

Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.

Inviting applications for the role of Senior Principal Consultant - Data Engineer. In this role, we are looking for candidates with relevant experience in designing and developing machine learning and deep learning systems, professional software development experience, and hands-on experience running machine learning tests and experiments and implementing appropriate ML algorithms.

Responsibilities
- Drive the vision for a modern data and analytics platform to deliver well-architected and engineered data and analytics products leveraging a cloud tech stack and third-party products
- Close the gap between ML research and production to create ground-breaking new products and features and solve problems for our customers
- Design, develop, test, and deploy data pipelines, machine learning infrastructure, and client-facing products and services
- Build and implement machine learning models and prototype solutions for proof-of-concept
- Scale existing ML models into production on a variety of cloud platforms
- Analyze and resolve architectural problems, working closely with engineering, data science, and operations teams
- Design and develop data pipelines: create efficient data pipelines to collect, process, and store large volumes of data from various sources
- Implement data solutions: develop and implement scalable data solutions using technologies like Hadoop, Spark, and SQL databases
- Ensure data quality: monitor and improve data quality by implementing validation processes and error handling
- Collaborate with teams: work closely with data scientists, analysts, and business stakeholders to understand data requirements and deliver solutions
- Optimize performance: continuously optimize data systems for performance, scalability, and cost-effectiveness
- Experience in GenAI projects

Qualifications we seek in you!
Minimum Qualifications / Skills
- Bachelor's degree in computer science engineering, information technology, or BSc in Computer Science, Mathematics, or a similar field; Master's degree is a plus
- Integration – APIs, microservices, and ETL/ELT patterns
- DevOps (good to have) – Ansible, Jenkins, ELK
- Containerization – Docker, Kubernetes, etc.
- Orchestration – Airflow, Step Functions, Control-M, etc.
- Languages and scripting – Python, Scala, Java, etc.
- Cloud services – AWS, GCP, Azure, and cloud native
- Analytics and ML tooling – SageMaker, ML Studio
- Execution paradigm – low latency/streaming, batch

Preferred Qualifications / Skills
- Data platforms – Big Data (Hadoop, Spark, Hive, Kafka, etc.) and Data Warehouse (Teradata, Redshift, BigQuery, Snowflake, etc.)
- Visualization tools – Power BI, Tableau

Why join Genpact?
- Be a transformation leader – Work at the cutting edge of AI, automation, and digital innovation
- Make an impact – Drive change for global enterprises and solve business challenges that matter
- Accelerate your career – Get hands-on experience, mentorship, and continuous learning opportunities
- Work with the best – Join 140,000+ bold thinkers and problem-solvers who push boundaries every day
- Thrive in a values-driven culture – Our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress

Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.

Job: Senior Principal Consultant | Primary Location: India-Hyderabad | Schedule: Full-time | Education Level: Bachelor's / Graduation / Equivalent | Job Posting: Jul 8, 2025, 6:53:51 AM | Unposting Date: Ongoing | Master Skills List: Digital | Job Category: Full Time
Posted 2 weeks ago
0 years
4 - 8 Lacs
Hyderābād
On-site
Ready to shape the future of work? At Genpact, we don't just adapt to change—we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's AI Gigafactory, our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment.

Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.

Inviting applications for the role of Lead Consultant - Data Engineer! In this role, we are looking for candidates with relevant experience in designing and developing machine learning and deep learning systems, professional software development experience, and hands-on experience running machine learning tests and experiments and implementing appropriate ML algorithms.

Responsibilities
- Drive the vision for a modern data and analytics platform to deliver well-architected and engineered data and analytics products leveraging a cloud tech stack and third-party products
- Close the gap between ML research and production to create ground-breaking new products and features and solve problems for our customers
- Design, develop, test, and deploy data pipelines, machine learning infrastructure, and client-facing products and services
- Build and implement machine learning models and prototype solutions for proof-of-concept
- Scale existing ML models into production on a variety of cloud platforms
- Analyze and resolve architectural problems, working closely with engineering, data science, and operations teams
- Design and develop data pipelines: create efficient data pipelines to collect, process, and store large volumes of data from various sources
- Implement data solutions: develop and implement scalable data solutions using technologies like Hadoop, Spark, and SQL databases
- Ensure data quality: monitor and improve data quality by implementing validation processes and error handling
- Collaborate with teams: work closely with data scientists, analysts, and business stakeholders to understand data requirements and deliver solutions
- Optimize performance: continuously optimize data systems for performance, scalability, and cost-effectiveness
- Experience in GenAI projects

Qualifications we seek in you!
Minimum Qualifications / Skills
- Bachelor's degree in computer science engineering, information technology, or BSc in Computer Science, Mathematics, or a similar field; Master's degree is a plus
- Integration – APIs, microservices, and ETL/ELT patterns
- DevOps (good to have) – Ansible, Jenkins, ELK
- Containerization – Docker, Kubernetes, etc.
- Orchestration – Airflow, Step Functions, Control-M, etc.
- Languages and scripting – Python, Scala, Java, etc.
- Cloud services – AWS, GCP, Azure, and cloud native
- Analytics and ML tooling – SageMaker, ML Studio
- Execution paradigm – low latency/streaming, batch

Preferred Qualifications / Skills
- Data platforms – Big Data (Hadoop, Spark, Hive, Kafka, etc.) and Data Warehouse (Teradata, Redshift, BigQuery, Snowflake, etc.)
- Visualization tools – Power BI, Tableau

Why join Genpact?
- Be a transformation leader – Work at the cutting edge of AI, automation, and digital innovation
- Make an impact – Drive change for global enterprises and solve business challenges that matter
- Accelerate your career – Get hands-on experience, mentorship, and continuous learning opportunities
- Work with the best – Join 140,000+ bold thinkers and problem-solvers who push boundaries every day
- Thrive in a values-driven culture – Our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress

Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.

Job: Lead Consultant | Primary Location: India-Hyderabad | Schedule: Full-time | Education Level: Bachelor's / Graduation / Equivalent | Job Posting: Jul 8, 2025, 6:56:06 AM | Unposting Date: Ongoing | Master Skills List: Digital | Job Category: Full Time
Posted 2 weeks ago
5.0 years
15 - 25 Lacs
Hyderābād
On-site
Role: Data Engineer
Location: Hyderabad, India (Hybrid)

Responsibilities:
● Design, build, and optimize data pipelines to ingest, process, transform, and load data from various sources into our data platform
● Implement and maintain ETL workflows using tools like Debezium, Kafka, Airflow, and Jenkins to ensure reliable and timely data processing
● Develop and optimize SQL and NoSQL database schemas, queries, and stored procedures for efficient data retrieval and processing
● Work with both relational databases (MySQL, PostgreSQL) and NoSQL databases (MongoDB, DocumentDB) to build scalable data solutions
● Design and implement data warehouse solutions that support analytical needs and machine learning applications
● Collaborate with data scientists and ML engineers to prepare data for AI/ML models and implement data-driven features
● Implement data quality checks, monitoring, and alerting to ensure data accuracy and reliability
● Optimize query performance across various database systems through indexing, partitioning, and query refactoring
● Develop and maintain documentation for data models, pipelines, and processes
● Collaborate with cross-functional teams to understand data requirements and deliver solutions that meet business needs
● Stay current with emerging technologies and best practices in data engineering

Requirements:
● 5+ years of experience in data engineering or related roles with a proven track record of building data pipelines and infrastructure; work experience on enterprise SaaS is mandatory
● Strong proficiency in SQL and experience with relational databases like MySQL and PostgreSQL
● Hands-on experience with NoSQL databases such as MongoDB or AWS DocumentDB
● Expertise in designing, implementing, and optimizing ETL processes using tools like Kafka, Debezium, Airflow, or similar technologies
● Experience with data warehousing concepts and technologies
● Solid understanding of data modeling principles and best practices for both operational and analytical systems
● Proven ability to optimize database performance, including query optimization, indexing strategies, and database tuning
● Experience with AWS data services such as RDS, Redshift, S3, Glue, Kinesis, and the ELK stack
● Proficiency in at least one programming language (Python, Node.js, Java)
● Experience with version control systems (Git) and CI/CD pipelines
● Bachelor's degree in Computer Science, Engineering, or a related field

Preferred Qualifications:
● Experience with graph databases (Neo4j, Amazon Neptune)
● Knowledge of big data technologies such as Hadoop, Spark, Hive, and data lake architectures
● Experience working with streaming data technologies and real-time data processing
● Familiarity with data governance and data security best practices
● Experience with containerization technologies (Docker, Kubernetes)
● Understanding of financial back-office operations and the FinTech domain
● Experience working in a high-growth startup environment
● Master's degree in Computer Science, Data Engineering, or a related field

Job Types: Full-time, Permanent
Pay: ₹1,500,000.00 - ₹2,500,000.00 per year
Schedule: Day shift, Monday to Friday
Work Location: In person
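For context, here is a minimal sketch of the kind of Airflow-orchestrated ETL workflow this listing describes. The DAG name, schedule, retry policy, and helper functions are illustrative assumptions, not details from the posting.

```python
# Hypothetical sketch: an hourly extract-transform-load chain in Airflow with retries.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_orders(**context):
    # Placeholder: pull CDC events landed by Debezium/Kafka into a staging area.
    ...

def transform_orders(**context):
    # Placeholder: cleanse and deduplicate the staged records.
    ...

def load_orders(**context):
    # Placeholder: upsert the transformed records into the warehouse.
    ...

with DAG(
    dag_id="orders_etl",                      # hypothetical pipeline name
    start_date=datetime(2025, 1, 1),
    schedule_interval="@hourly",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_orders)
    transform = PythonOperator(task_id="transform", python_callable=transform_orders)
    load = PythonOperator(task_id="load", python_callable=load_orders)

    extract >> transform >> load
```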
Posted 2 weeks ago
5.0 years
5 - 18 Lacs
Hyderābād
On-site
Job Title: AWS Data Engineer – IoT Industry
Location: Hyderabad
Employment Type: Full-time
Experience Level: Mid / Senior
Department: Data Engineering / IoT Solutions

Job Summary: We are seeking a skilled AWS Data Engineer to join our team focused on IoT data solutions. In this role, you will be responsible for designing, building, and maintaining scalable data pipelines and architectures in the AWS cloud ecosystem. Your work will directly support real-time and batch processing of data collected from IoT devices and sensors across multiple environments.

Key Responsibilities:
- Design and implement scalable ETL/ELT pipelines for ingesting, processing, and storing data from IoT devices
- Build and maintain data lakes and data warehouses on AWS using services such as S3, Glue, Redshift, Athena, and Lake Formation
- Work with real-time streaming data using services like Kinesis Data Streams, Kinesis Data Analytics, and Kafka
- Optimize data storage and access patterns for both structured and unstructured data
- Implement data quality, governance, and monitoring best practices
- Collaborate with Data Scientists, IoT Engineers, and DevOps teams to ensure reliable data delivery and infrastructure automation
- Ensure data security and compliance with industry standards
- Design Step Functions state machines (or MWAA) that ingest and process data from IoT telemetry, manufacturing ERP/MES dumps, field-service CRM exports, and finance CSVs/SaaS APIs
- Trigger pipelines via EventBridge, manage retries, and alert through SNS/Slack
- Route IoT Core MQTT rules to Aurora Postgres for telemetry
- Use AWS Glue jobs / AppFlow / Lambda to pull data from SAP, Oracle, Salesforce/ServiceNow, S3 uploads, etc.
- Use dbt + Python in CodeBuild to cleanse, dedupe, enrich, and unify data across Raw, Silver, and Gold layers
- Deploy all pipelines with CodePipeline + CDK, monitored in CloudWatch

Required Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- 5–12 years of experience in data engineering, preferably in IoT or industrial domains
- Proficient in AWS cloud services, including but not limited to S3, Glue, Lambda, Redshift, Athena, DynamoDB, and Kinesis
- Strong programming skills in Python and Scala
- Experience with SQL and NoSQL databases
- Familiarity with data modeling, data warehousing, and big data tools (e.g., Apache Spark, Hive)

Ingestion:
1. AWS IoT Core (MQTT)
2. AWS Glue Jobs / Workflows for ERP/CSV
3. AWS AppFlow or custom Lambda for SaaS (Salesforce, ServiceNow)

Job Type: Full-time
Pay: ₹501,241.62 - ₹1,841,015.39 per year
Work Location: In person
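To illustrate the raw-to-silver step mentioned above, here is a minimal AWS Glue (PySpark) job sketch that reads raw IoT telemetry JSON from S3, deduplicates it, and writes partitioned Parquet to a "silver" prefix. Bucket names, column names, and partition keys are assumptions for illustration only.

```python
# Hypothetical sketch of a Glue PySpark job: raw telemetry -> deduplicated silver layer.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Assumed source bucket/prefix for raw device messages.
raw = spark.read.json("s3://example-iot-raw/telemetry/")

silver = (
    raw.dropDuplicates(["device_id", "event_ts"])        # assumed dedupe keys
       .withColumn("event_date", F.to_date("event_ts"))
       .filter(F.col("device_id").isNotNull())
)

(silver.write
       .mode("overwrite")
       .partitionBy("event_date")
       .parquet("s3://example-iot-silver/telemetry/"))   # assumed target prefix

job.commit()
```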
Posted 2 weeks ago
1.0 years
4 - 10 Lacs
Hyderābād
Remote
Basic qualifications:
- Experience defining requirements and using data and metrics to draw business insights
- Knowledge of SQL
- Knowledge of data visualization tools such as QuickSight, Tableau, Power BI, or other BI packages
- Knowledge of Python, VBA, Macros, Selenium scripts
- 1+ year of experience working in an Analytics / Business Intelligence environment, with prior experience in the design and execution of analytical projects

Want to join Earth's most customer-centric company? Do you like to dive deep to understand problems? Are you someone who likes to challenge the status quo? Do you strive to excel at goals assigned to you? If yes, we have opportunities for you. Global Operations – Artificial Intelligence (GO-AI) at Amazon is looking to hire candidates who can excel in a fast-paced, dynamic environment. Are you somebody who likes to use and analyze big data to drive business decisions? Do you enjoy converting data into insights that will be used to enhance customer decisions worldwide for business leaders? Do you want to be part of the data team which measures the pulse of innovative machine-vision-based projects? If your answer is yes, join our team. GO-AI is looking for a motivated individual with strong skills and experience in resource utilization planning, process optimization, and execution of scalable and robust operational mechanisms to join the GO-AI Ops DnA team. In this position you will be responsible for supporting our sites to build solutions for the rapidly expanding GO-AI team. The role requires the ability to work with a variety of key stakeholders across job functions and multiple sites. We are looking for an entrepreneurial and analytical program manager who is passionate about their work, understands how to manage service levels across multiple skills/programs, and is willing to move fast and experiment often.

Key job responsibilities:
- Design and develop highly available dashboards and metrics using SQL and Excel/Tableau
- Execute high-priority (i.e., cross-functional, high-impact) projects to create robust, scalable analytics solutions and frameworks with the help of Analytics/BIE managers
- Work closely with internal stakeholders such as business teams, engineering teams, and partner teams and align them with respect to your focus area
- Create and maintain comprehensive business documentation including user stories, acceptance criteria, and process flows that help the BIE understand the context for developing ETL processes and visualization solutions
- Perform user acceptance testing and business validation of delivered dashboards and reports, ensuring that BIE-created solutions meet actual operational needs and can be effectively utilized by site managers and operations teams
- Monitor business performance metrics and operational KPIs to proactively identify emerging analytical requirements, working with BIEs to rapidly develop solutions that address real-time operational challenges in the dynamic AI-enhanced fulfillment environment

About the team: The Global Operations – Artificial Intelligence (GO-AI) team remotely handles exceptions in Amazon Robotic Fulfillment Centers globally. GO-AI seeks to complement automated vision-based decision-making technologies by providing remote human support for the subset of tasks which require higher cognitive ability and cannot be processed through automated decision making with high confidence.
This team provides end-to-end solutions through inbuilt competencies of Operations and strong central specialized teams to deliver programs at Amazon scale. It operates multiple programs including Nike IDS, Proteus, Sparrow, and other new initiatives in partnership with global technology and operations teams.

- Experience in using AI tools
- Experience in Amazon Redshift and other AWS technologies for large datasets
- Analytical mindset and ability to see the big picture and influence others
- Detail-oriented with an aptitude for solving unstructured problems; the role will require the ability to extract data from various sources and to design, construct, and execute complex analyses to produce data and reports that help solve the business problem
- Good oral, written, and presentation skills combined with the ability to be part of group discussions and explain complex solutions
- Ability to apply analytical, computer, statistical, and quantitative problem-solving skills
- Ability to work effectively in a multi-task, high-volume environment
- Ability to be adaptable and flexible in responding to deadlines and workflow fluctuations

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
Posted 2 weeks ago
0 years
8 - 8 Lacs
Chennai
On-site
Job Summary: We are looking for a skilled AWS Data Engineer with strong experience in building and managing cloud-based ETL pipelines using AWS Glue, Python/PySpark, and Athena, along with data warehousing expertise in Amazon Redshift. The ideal candidate will be responsible for designing, developing, and maintaining scalable data solutions in a cloud-native environment.

Responsibilities:
- Design and implement ETL workflows using AWS Glue, Python, and PySpark
- Develop and optimize queries using Amazon Athena and Redshift
- Build scalable data pipelines to ingest, transform, and load data from various sources
- Ensure data quality, integrity, and security across AWS services
- Collaborate with data analysts, data scientists, and business stakeholders to deliver data solutions
- Monitor and troubleshoot ETL jobs and cloud infrastructure performance
- Automate data workflows and integrate with CI/CD pipelines

Required Skills & Qualifications:
- Hands-on experience with AWS Glue, Athena, and Redshift
- Strong programming skills in Python and PySpark
- Experience with ETL design, implementation, and optimization
- Familiarity with S3, Lambda, CloudWatch, and other AWS services
- Understanding of data warehousing concepts and performance tuning in Redshift
- Experience with schema design, partitioning, and query optimization in Athena
- Proficiency in version control (Git) and agile development practices

About Virtusa: Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities, and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status, or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
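As a point of reference for the Athena work this listing mentions, the sketch below runs a query from Python with boto3 and polls for completion. The database, table, columns, and S3 output location are illustrative assumptions.

```python
# Hypothetical sketch: submit an Athena query and wait for the result.
import time

import boto3

athena = boto3.client("athena")

execution = athena.start_query_execution(
    QueryString="SELECT order_date, SUM(amount) AS revenue "
                "FROM sales GROUP BY order_date",                  # assumed table/columns
    QueryExecutionContext={"Database": "analytics_db"},            # assumed database
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = execution["QueryExecutionId"]

# Poll until the query finishes.
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
```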
Posted 2 weeks ago
4.0 - 6.0 years
2 - 8 Lacs
Noida
On-site
Position: Data Engineer (AWS QuickSight, Glue, PySpark) (Noida) (CE46SF RM 3386)

Education Required: Bachelor's or Master's (or PhD) in Computer Science, Statistics, Mathematics, Data Science, or Engineering; AWS certification (e.g., AWS Certified Data Analytics – Specialty, AWS Certified Developer)

Must-have skills:
- Proficiency in AWS cloud services: AWS Glue, QuickSight, S3, Lambda, Athena, Redshift, EMR, and related technologies
- Strong experience with PySpark
- Expertise in SQL and data modeling for relational and non-relational databases
- Familiarity with business intelligence and visualization tools, especially Amazon QuickSight

Good to have:
- Proficiency in Python and ML libraries (scikit-learn, TensorFlow, PyTorch)
- Understanding of MLOps and model deployment best practices
- Hands-on experience with AWS services for ML
- Experience or familiarity with the HVAC domain is a plus

Key Responsibilities:
- Design, develop, and maintain data pipelines using AWS Glue, PySpark, and related AWS services to extract, transform, and load (ETL) data from diverse sources
- Build and optimize data warehouse/data lake infrastructure on AWS, ensuring efficient data storage, processing, and retrieval
- Develop and manage ETL processes to source data from various systems, including databases, APIs, and file storage, and create unified data models for analytics and reporting
- Implement and maintain business intelligence dashboards using Amazon QuickSight, enabling stakeholders to derive actionable insights
- Collaborate with cross-functional teams (business analysts, data scientists, product managers) to understand requirements and deliver scalable data solutions
- Ensure data quality, integrity, and security throughout the data lifecycle, implementing best practices for governance and compliance
- Support self-service analytics by empowering internal users to access and analyze data through QuickSight and other reporting tools
- Troubleshoot and resolve data pipeline issues, optimizing performance and reliability as needed

Required Skills:
- Proficiency in AWS cloud services: AWS Glue, QuickSight, S3, Lambda, Athena, Redshift, EMR, and related technologies
- Strong experience with PySpark for large-scale data processing and transformation
- Expertise in SQL and data modeling for relational and non-relational databases
- Experience building and optimizing ETL pipelines and data integration workflows
- Familiarity with business intelligence and visualization tools, especially Amazon QuickSight
- Knowledge of data governance, security, and compliance best practices
- Strong programming skills in Python; experience with automation and scripting
- Ability to work collaboratively in agile environments and manage multiple priorities effectively
- Excellent problem-solving and communication skills

Job Category: Digital_Cloud_Web Technologies
Job Type: Full Time
Job Location: Noida
Experience: 4-6 years
Notice period: 0-15 days
Posted 2 weeks ago
8.0 - 12.0 years
3 - 5 Lacs
Noida
On-site
Posted On: 8 Jul 2025
Location: Noida, UP, India
Company: Iris Software

Why Join Us? Are you inspired to grow your career at one of India's Top 25 Best Workplaces in the IT industry? Do you want to do the best work of your life at one of the fastest growing IT services companies? Do you aspire to thrive in an award-winning work culture that values your talent and career aspirations? It's happening right here at Iris Software.

About Iris Software: At Iris Software, our vision is to be our client's most trusted technology partner, and the first choice for the industry's top professionals to realize their full potential. With over 4,300 associates across India, U.S.A, and Canada, we help our enterprise clients thrive with technology-enabled transformation across financial services, healthcare, transportation & logistics, and professional services. Our work covers complex, mission-critical applications with the latest technologies, such as high-value complex Application & Product Engineering, Data & Analytics, Cloud, DevOps, Data & MLOps, Quality Engineering, and Business Automation.

Working at Iris: Be valued, be inspired, be your best. At Iris Software, we invest in and create a culture where colleagues feel valued, can explore their potential, and have opportunities to grow. Our employee value proposition (EVP) is about "Being Your Best" – as a professional and person. It is about being challenged by work that inspires us, being empowered to excel and grow in your career, and being part of a culture where talent is valued. We're a place where everyone can discover and be their best version.

Job Description

General Roles & Responsibilities:
- Technical Leadership: Demonstrate leadership and the ability to guide business and technology teams in adoption of best practices and standards
- Design & Development: Design, develop, and maintain a robust, scalable, and high-performance data estate
- Architecture: Architect and design robust data solutions that meet business requirements, including scalability, performance, and security
- Quality: Ensure the quality of deliverables through rigorous reviews and adherence to standards
- Agile Methodologies: Actively participate in agile processes, including planning, stand-ups, retrospectives, and backlog refinement
- Collaboration: Work closely with system architects, data engineers, data scientists, data analysts, cloud engineers, and other business stakeholders to determine the optimal solution and architecture that is future-proof too
- Innovation: Stay updated with the latest industry trends and technologies, and drive continuous improvement initiatives within the development team
- Documentation: Create and maintain technical documentation, including design documents and architectural user guides

Technical Responsibilities:
- Optimize data pipelines for performance and efficiency
- Work with Databricks clusters and configuration management tools
- Use appropriate tools in cloud data lake development and deployment
- Develop and implement cloud infrastructure to support current and future business needs
- Provide technical expertise and ownership in the diagnosis and resolution of issues
- Ensure all cloud solutions exhibit a higher level of cost efficiency, performance, security, scalability, and reliability
- Manage cloud data lake development and deployment on AWS/Databricks
- Manage and create workspaces, configure cloud resources, view usage data, and manage account identities, settings, and subscriptions in Databricks

Required Technical Skills:
- Experience and proficiency with the Databricks platform – Delta Lake storage, Spark (PySpark, Spark SQL)
- Well versed with Databricks Lakehouse and the Unity Catalog concept and its implementation in enterprise environments
- Familiarity with the medallion architecture data design pattern for organizing data in a Lakehouse
- Experience and proficiency with AWS data services – S3, Glue, Athena, Redshift, etc. – and Airflow scheduling
- Proficiency in SQL and experience with relational databases
- Proficiency in at least one programming language (e.g., Python, Java) for data processing and scripting
- Experience with DevOps practices – AWS DevOps for CI/CD, Terraform/CDK for infrastructure as code
- Good understanding of data principles and Cloud Data Lake design and development, including data ingestion, data modeling, and data distribution
- Jira: proficient in using Jira for managing projects and tracking progress

Other Skills:
- Strong communication and interpersonal skills
- Collaborate with data stewards, data owners, and IT teams for effective implementation
- Understanding of business processes and terminology – preferably Logistics
- Experienced with Scrum and Agile methodologies

Qualification: Bachelor's degree in information technology or a related field; equivalent experience may be considered. Overall experience of 8-12 years in Data Engineering.

Mandatory Competencies: Cloud - AWS | Data Science - Databricks | Database - SQL | Data on Cloud - Azure Data Lake (ADL) | Agile - Agile | Data Analysis - Data Analysis | Big Data - PySpark | Data on Cloud - AWS S3 | Data on Cloud - Redshift | ETL - AWS Glue | Python - Python | DevOps - CI/CD | Beh - Communication and collaboration

Perks and Benefits for Irisians: At Iris Software, we offer world-class benefits designed to support the financial, health and well-being needs of our associates to help achieve harmony between their professional and personal growth. From comprehensive health insurance and competitive salaries to flexible work arrangements and ongoing learning opportunities, we're committed to providing a supportive and rewarding work environment. Join us and experience the difference of working at a company that values its employees' success and happiness.
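To make the medallion-architecture skill concrete, here is a minimal PySpark/Delta Lake sketch of a bronze-to-silver-to-gold flow on Databricks. The catalog, schema, table, and column names are assumptions; `spark` is the session provided by the Databricks runtime.

```python
# Hypothetical sketch: medallion (bronze/silver/gold) tables with Delta Lake and Unity Catalog.
from pyspark.sql import functions as F

bronze = spark.read.table("main.logistics.shipments_bronze")      # assumed UC table

silver = (
    bronze.dropDuplicates(["shipment_id"])                        # assumed business key
          .withColumn("ship_date", F.to_date("ship_ts"))
          .filter(F.col("shipment_id").isNotNull())
)
silver.write.format("delta").mode("overwrite").saveAsTable("main.logistics.shipments_silver")

gold = (
    silver.groupBy("ship_date", "carrier")
          .agg(F.count(F.lit(1)).alias("shipments"),
               F.avg("transit_days").alias("avg_transit_days"))
)
gold.write.format("delta").mode("overwrite").saveAsTable("main.logistics.shipments_gold")
```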
Posted 2 weeks ago
3.0 years
0 Lacs
India
Remote
AWS Data Engineer
Location: Remote (India)
Experience: 3+ Years
Employment Type: Full-Time

About the Role: We are seeking a talented AWS Data Engineer with at least 3 years of hands-on experience in building and managing data pipelines using AWS services. This role involves working with large-scale data, integrating multiple data sources (including sensor/IoT data), and enabling efficient, secure, and analytics-ready solutions. Experience in the energy industry or working with time-series/sensor data is a strong plus.

Key Responsibilities:
- Build and maintain scalable ETL/ELT data pipelines using AWS Glue, Redshift, Lambda, EMR, S3, and Athena
- Process and integrate structured and unstructured data, including sensor/IoT and real-time streams
- Optimize pipeline performance and ensure reliability and fault tolerance
- Collaborate with cross-functional teams including data scientists and analysts
- Perform data transformations using Python, Pandas, and SQL
- Maintain data integrity, quality, and security across the platform
- Use Terraform and CI/CD tools (e.g., Azure DevOps) for infrastructure and deployment automation
- Support and monitor pipeline workflows, troubleshoot issues, and implement fixes
- Contribute to the adoption of emerging tools like AWS Bedrock, Textract, Rekognition, and GenAI solutions

Required Skills and Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field
- 3+ years of experience in data engineering using AWS
- Strong skills in AWS Glue, Redshift, S3, Lambda, EMR, Athena; Python, Pandas, SQL; RDS, Postgres, SAP HANA
- Solid understanding of data modeling, warehousing, and pipeline orchestration
- Experience with version control (Git) and infrastructure as code (Terraform)

Preferred Skills:
- Experience working with energy sector data or IoT/sensor-based data
- Exposure to machine learning tools and frameworks (e.g., SageMaker, TensorFlow, Scikit-learn)
- Familiarity with big data technologies like Apache Spark, Kafka
- Experience with data visualization tools (Tableau, Power BI, AWS QuickSight)
- Awareness of data governance and catalog tools such as AWS Data Quality, Collibra, and AWS DataBrew
- AWS certifications (Data Analytics, Solutions Architect)
Posted 2 weeks ago
6.0 - 9.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Role Description
Location: Any UST Location
Experience: 6 to 9 years
Mandatory Skills: PySpark, GCP, Hadoop, Hive, AWS
Good to Have: CI/CD and DevOps experience

Job Description: We are seeking a highly skilled Senior Big Data Engineer to join our team at UST. The ideal candidate will have solid experience in Big Data technologies, cloud platforms, and data processing frameworks, with a strong focus on PySpark and Google Cloud Platform (GCP).

Key Responsibilities:
- Design, develop, and maintain scalable data pipelines and ETL workflows using PySpark, Hadoop, and Hive
- Deploy and manage big data workloads on cloud platforms like GCP and AWS
- Work closely with cross-functional teams to understand data requirements and deliver high-quality solutions
- Optimize data processing jobs for performance and cost-efficiency on cloud infrastructure
- Implement automation and CI/CD pipelines to streamline deployment and monitoring of data workflows
- Ensure data security, governance, and compliance in cloud environments
- Troubleshoot and resolve data issues, monitoring job executions and system health

Mandatory Skills:
- PySpark: strong experience in developing data processing jobs and ETL pipelines
- Google Cloud Platform (GCP): hands-on experience with BigQuery, Dataflow, Dataproc, or similar services
- Hadoop ecosystem: expertise with Hadoop, Hive, and related big data tools
- AWS: familiarity with AWS data services like S3, EMR, Glue, or Redshift
- Strong SQL and data modeling skills

Good to Have:
- Experience with CI/CD tools and DevOps practices (Jenkins, GitLab, Terraform, etc.)
- Containerization and orchestration knowledge (Docker, Kubernetes)
- Experience with Infrastructure as Code (IaC)
- Knowledge of data governance and data security best practices

Skills: Spark, Hadoop, Hive, GCP
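For the GCP side of this role, a minimal PySpark sketch (for example, on Dataproc) might look like the following, assuming the spark-bigquery connector and GCS connector are available on the cluster. Project, dataset, table, and bucket names are illustrative.

```python
# Hypothetical sketch: read a BigQuery table, aggregate daily counts, write Parquet to GCS.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bq_daily_counts").getOrCreate()

events = (
    spark.read.format("bigquery")
         .option("table", "example-project.analytics.events")   # assumed BigQuery table
         .load()
)

daily = events.groupBy(F.to_date("event_ts").alias("event_date")).count()

# Assumed GCS output bucket.
daily.write.mode("overwrite").parquet("gs://example-bucket/daily_event_counts/")
```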
Posted 2 weeks ago
3.0 - 8.0 years
5 - 14 Lacs
Hyderabad
Work from Office
Position: Data Analyst | Interview: Walk-in | Type: Full-time | Location: Hyderabad | Exp: 3–8 yrs | Work: 5 Days WFO
Data Analysis & Insights | Reporting & Visualization | Data Extraction & ETL | Collaboration & Management

Required Candidate Profile: Looking for Data Analysts with 3–8 years of experience in SQL, BI tools (Tableau/Power BI), and Python/AppScript. Should have experience in ETL, dashboarding, and A/B testing.

Contact: 6309124068 (Manoj)
Posted 2 weeks ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Company Description: Zimetrics is a technology services and solutions provider specializing in Data, AI, and Digital. We help enterprises leverage the economic potential and business value of data from systems, machines, connected devices, and human-generated content. Our core principles are Integrity, Intellect, and Ingenuity, guiding our value system, engineering expertise, and organizational behavior. We are problem solvers and innovators who challenge conventional wisdom and believe in possibilities.

Key Responsibilities:
- Design scalable and secure cloud-based data architecture solutions
- Lead data modeling, integration, and migration strategies across platforms
- Engage directly with clients to understand business needs and translate them into technical solutions
- Support sales/pre-sales teams with solution architecture, technical presentations, and proposals
- Collaborate with cross-functional teams including engineering, BI, and product
- Ensure best practices in data governance, security, and performance optimization

Key Requirements:
- Strong experience with cloud platforms (AWS, Azure, or GCP)
- Deep understanding of data warehousing concepts and tools (Snowflake, Redshift, BigQuery, etc.)
- Proven expertise in data modeling (conceptual, logical, and physical)
- Excellent communication and client engagement skills
- Experience in pre-sales or solution consulting is a strong advantage
- Ability to present complex technical concepts to non-technical stakeholders
Posted 2 weeks ago
3.0 - 5.0 years
0 - 3 Lacs
Hyderabad
Work from Office
Responsibilities:
- Analyze large datasets to identify trends, patterns, and business insights
- Perform exploratory data analysis (EDA) using SQL, Python, PySpark, or R
- Support A/B testing design, execution, and analysis for product/marketing initiatives

Required Candidate Profile:
- Strong expertise in SQL, BI tools, and scripting (Python)
- Optimize query performance and dashboard efficiency
- Extract, clean, and transform data from structured/unstructured sources
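As one way to picture the A/B testing analysis mentioned above, here is a small Python sketch comparing conversion rates between two variants with a two-proportion z-test. The input file, column names, and significance threshold are assumptions; the posting does not prescribe a specific library.

```python
# Hypothetical sketch: A/B test readout with a two-proportion z-test.
import pandas as pd
from statsmodels.stats.proportion import proportions_ztest

events = pd.read_csv("ab_test_events.csv")   # assumed export: one row per user, group + converted

summary = events.groupby("group")["converted"].agg(["sum", "count"])
stat, p_value = proportions_ztest(summary["sum"].values, summary["count"].values)

print(f"z={stat:.3f}, p={p_value:.4f}")
if p_value < 0.05:   # assumed significance threshold
    print("Difference between variants is statistically significant at the 5% level.")
```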
Posted 2 weeks ago
8.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
🚀 We're Hiring: Sr. Data Engineer (AWS/Azure)
📍 Location: Ahmedabad, Gujarat (Hybrid/Onsite)
📅 Experience: 4–8 Years
🕒 Type: Full-Time

Are you passionate about designing scalable, cloud-native data solutions? Ready to work on cutting-edge tech with a global engineering team? Join Simform — a top-tier digital engineering partner for AWS, Microsoft, Google Cloud, and Databricks — and help us power the data behind next-gen digital products.

🔍 What You'll Do
As a Senior Data Engineer, you'll design and build high-performance data pipelines using AWS and Azure services. You'll work closely with ML engineers, data scientists, and product teams to develop robust data infrastructure that supports real-time analytics, large-scale processing, and machine learning workflows.

🛠️ Tech You'll Work With
- Cloud: AWS Glue, S3, Redshift, Kinesis, Lambda / Azure Data Factory, Synapse, Databricks, Microsoft Fabric
- Big Data & Streaming: Spark, Kafka, Flink, Airflow
- Databases: PostgreSQL, MongoDB, MySQL, SQL Server, Cassandra, Neptune
- Data Ops: ETL/ELT, data lake/lakehouse design, real-time + batch pipelines

✅ What We're Looking For
- Strong hands-on experience with end-to-end data pipelines on AWS and/or Azure
- Proficiency in ETL/ELT, data modelling, and optimizing large-scale datasets (100GB+)
- Solid foundation in distributed data processing and data integration
- Bonus: experience with ML pipeline integration, CI/CD for data, or data observability tools

💼 Why Join Simform?
- 🌱 A growth-driven, engineering-first culture
- 🤝 Flat hierarchy & transparent leadership
- 🧠 Learning & certification sponsorship
- 🧘 Free health insurance & flexible work options
- 🎮 Game zone, free snacks, subsidized lunch
- 🌍 Global exposure across North America & Europe

If you're ready to engineer real impact with cloud data solutions, let's connect!
📩 Apply now or refer someone great!
👉🏿 yash.b@simformsolutions.com
Posted 2 weeks ago
7.0 - 12.0 years
20 - 35 Lacs
Pune, Gurugram
Hybrid
Python, PySpark, SQL/queries, AWS Elastic MapReduce (EMR), Amazon Managed Workflows for Apache Airflow (MWAA), AWS CDK, CloudFormation, Lambda, Step Functions, Athena, Redshift, Glue Catalog, S3, CI/CD: GitHub Actions
Posted 2 weeks ago
0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Job Title: Tableau Developer
Job Type: Full-Time

About Us: Torrent Technologies is the leading vendor for the National Flood Insurance Program. We provide flood insurance services to many large US insurance companies. Our platform is a complete policy management system, including all policy life cycle workflows (quoting, policy management, accounting, claims, agency management) as well as a robust reporting and analytics platform.

Job Description: We are seeking a skilled Tableau Developer with strong SQL Server experience and a background in analytics presentation to join our dynamic team. The ideal candidate will be responsible for designing, developing, and maintaining interactive dashboards and reports that provide actionable insights to drive business decisions. You will collaborate with cross-functional teams to understand data requirements and deliver high-quality analytics solutions.

Key Responsibilities:
- Design and develop interactive dashboards and visualizations using Tableau to present complex data in a clear and concise manner
- Utilize SQL Server to extract, manipulate, and analyze data from various sources to support reporting and analytics needs
- Collaborate with stakeholders to gather requirements and translate business needs into technical specifications for Tableau reports and dashboards
- Optimize Tableau workbooks for performance and usability, ensuring a seamless user experience
- Conduct data analysis to identify trends, patterns, and insights that can inform business strategies
- Create and maintain documentation for Tableau dashboards, data sources, and processes
- Provide training and support to end-users on Tableau tools and best practices
- Stay updated on the latest Tableau features and industry trends to continuously improve analytics capabilities

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, Data Science, or a related field
- Proven experience as a Tableau Developer with a strong portfolio of dashboards and reports
- Proficient in SQL Server, including writing complex queries, stored procedures, and data manipulation
- Strong analytical skills with the ability to interpret and present data effectively
- Experience in data visualization best practices and analytics presentation techniques
- Familiarity with data warehousing concepts and ETL processes is a plus
- Excellent communication and collaboration skills, with the ability to work effectively in a team environment
- Detail-oriented with strong problem-solving skills and the ability to manage multiple projects simultaneously

Preferred Qualifications:
- Tableau certification (e.g., Tableau Desktop Specialist, Tableau Desktop Certified Associate) is a plus
- Experience with other data visualization tools or programming languages (e.g., Python, R) is a plus
- Knowledge of business intelligence concepts and methodologies
- Experience with AWS Redshift, AWS Glue

What We Offer:
- Competitive salary and benefits package
- Opportunities for professional development and career growth
- A collaborative and innovative work environment
Posted 2 weeks ago
5.0 years
0 Lacs
India
On-site
Job Summary: We are seeking a skilled and analytical Data Architect & Business Intelligence Specialist to design, model, and implement robust data architectures, pipelines, and reporting frameworks. This role will be responsible for building and maintaining data models, overseeing data migrations, and developing scalable data warehouse solutions to support business intelligence and analytics initiatives.

Key Responsibilities:

1. Data Architecture & Modeling
- Design and maintain the enterprise data architecture aligned with business and technical requirements
- Develop logical and physical data models using industry best practices
- Establish and maintain metadata standards and data dictionaries
- Ensure data consistency, quality, and governance across all systems

2. Data Pipelines & ETL/ELT
- Design and build efficient and scalable data pipelines for structured and unstructured data
- Develop ETL/ELT processes using tools like Apache Airflow, Talend, Informatica, or Azure Data Factory
- Optimize data ingestion, transformation, and loading procedures to support analytics

3. Data Migration
- Plan and execute data migration projects from legacy systems to modern data platforms
- Ensure data integrity and minimal downtime during migration activities
- Collaborate with stakeholders to map old data structures to the new architecture

4. Data Warehousing
- Design, implement, and manage modern data warehouses (e.g., Snowflake, Redshift, BigQuery, Synapse)
- Ensure high performance, scalability, and security of data warehousing environments
- Implement data partitioning, indexing, and performance tuning techniques

5. Business Intelligence & Reporting
- Collaborate with business stakeholders to gather reporting and analytics requirements
- Build interactive dashboards and reports using tools like Power BI, Tableau, Looker, or Qlik
- Enable self-service reporting and ensure data accuracy in BI platforms
- Monitor data usage and performance, and drive continuous improvement in reporting frameworks

Requirements:

Education & Experience:
- Bachelor's or Master's degree in Computer Science, Information Systems, Data Engineering, or a related field
- 5+ years of experience in data architecture, modeling, pipelines, and BI/reporting

Technical Skills:
- Strong expertise in SQL and data modeling (3NF, dimensional, star/snowflake schemas)
- Experience with data warehouse technologies and cloud platforms (AWS, Azure, GCP)
- Proficiency in BI/reporting tools and data visualization best practices
- Knowledge of Python, Scala, or other scripting languages is a plus
- Familiarity with data governance, security, and compliance standards

Soft Skills:
- Excellent problem-solving skills and attention to detail
- Strong communication and collaboration skills with both technical and non-technical stakeholders
- Ability to translate complex technical concepts into business language
Posted 2 weeks ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
About The Role: Eucloid is looking for a senior Data Engineer with hands-on expertise in Databricks to join our Data Platform team supporting various business applications. The ideal candidate will support development of data infrastructure on Databricks for our clients by participating in activities ranging from upstream and downstream technology selection to designing and building the different components. The candidate will also be involved in projects like integrating data from various sources and managing big data pipelines that are easily accessible with optimized performance of the overall ecosystem. The ideal candidate is an experienced data wrangler who will support our software developers, database architects, and data analysts on business initiatives. You must be self-directed and comfortable supporting the data needs of cross-functional teams, systems, and technical solutions.

Location: Chennai

Qualifications:
- B.Tech/BS degree in Computer Science, Computer Engineering, Statistics, or other Engineering disciplines
- Minimum 5 years of professional work experience, with 1+ years of hands-on experience with Databricks
- Highly proficient in SQL and data model (conceptual and logical) concepts
- Highly proficient with Python and Spark (3+ years)
- Knowledge of distributed computing and cloud databases like Redshift, BigQuery, etc.
- 2+ years of hands-on experience with one of the top cloud platforms - AWS/GCP/Azure
- Experience with modern data stack tools like Airflow, Terraform, dbt, Glue, Dataproc, etc.
- Exposure to Hadoop and shell scripting is a plus

Responsibilities:
- Design, implementation, and improvement of processes and automation of data infrastructure
- Tuning of data pipelines for reliability and performance
- Building tools and scripts to develop, monitor, and troubleshoot ETLs
- Perform scalability, latency, and availability tests on a regular basis
- Perform code reviews and QA data imported by various processes
- Investigate, analyze, correct, and document reported data defects
- Create and maintain technical specification documentation

Eucloid offers a high growth path along with great compensation, which is among the best in the industry. Please reach out to chandershekhar.verma@eucloid.com if you want to apply.
Posted 2 weeks ago
3.0 - 4.0 years
0 Lacs
Greater Bengaluru Area
On-site
Job Title: Senior Data Analyst
Location: Bangalore
Experience: 3-4 Years
Department: Analytics / Business Intelligence
Employment Type: Full-time

Job Summary: We are seeking a highly skilled and detail-oriented Senior Data Analyst to join our data-driven team. The ideal candidate will have strong expertise in SQL, Tableau, and MS Excel, with a foundational understanding of Python for data analysis and automation. You will play a key role in turning data into actionable insights that influence strategic decisions across the business.

Key Responsibilities:
- Design, develop, and maintain SQL queries to extract and analyze large datasets from multiple sources
- Build interactive Tableau dashboards and reports to visualize business trends and performance metrics
- Perform advanced data analysis in MS Excel, including pivot tables, lookups, and complex formulas
- Use Python for data cleaning, automation, and basic exploratory data analysis
- Collaborate with cross-functional teams to understand business requirements and translate them into data solutions
- Conduct root cause analysis and identify key insights to support business decisions
- Ensure data accuracy, consistency, and integrity across all reporting and analytics deliverables
- Exposure to AI-driven analytics or interest in learning AI-based tools will be an added advantage

Required Qualifications:
- Bachelor's degree in Computer Science, Statistics, Mathematics, Economics, or a related field
- Proficiency in SQL for data extraction and transformation
- Strong expertise in Tableau for building reports and dashboards
- Advanced skills in Microsoft Excel, including macros, charts, and data modeling
- Working knowledge of Python for scripting and data manipulation (Pandas, NumPy preferred)
- Previous experience in the e-commerce industry (mandatory)
- Strong problem-solving abilities and attention to detail
- Excellent communication and data storytelling skills

Preferred Qualifications:
- Experience with data warehousing tools like Snowflake, Redshift, or BigQuery
- Exposure to cloud platforms (AWS, Azure, GCP)
- Familiarity with ETL tools and processes
- Background in A/B testing or statistical modeling is a plus

Join us if you're passionate about turning data into insights and want to drive real business impact!
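As a concrete picture of the Python-for-data-cleaning responsibility above, here is a minimal pandas sketch that tidies an e-commerce extract and pivots revenue by category and month. The file name and column names are assumptions for illustration.

```python
# Hypothetical sketch: clean an orders extract and build a revenue pivot with pandas.
import pandas as pd

orders = pd.read_csv("orders_extract.csv", parse_dates=["order_date"])   # assumed extract

orders = (
    orders.dropna(subset=["order_id", "amount"])        # drop incomplete rows
          .drop_duplicates(subset=["order_id"])         # one row per order
          .assign(month=lambda df: df["order_date"].dt.to_period("M"))
)

revenue_by_category = pd.pivot_table(
    orders, index="category", columns="month", values="amount",
    aggfunc="sum", fill_value=0,
)
print(revenue_by_category.head())
```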
Posted 2 weeks ago
0 years
0 Lacs
India
Remote
Role Description: This is a full-time position with one of our clients for a Redshift Database Administrator. The Redshift Database Administrator will be responsible for managing, designing, and troubleshooting Redshift databases. Daily tasks include performing database administration, ensuring database design is optimal, troubleshooting issues, managing database replication, and ensuring database performance and integrity.

Qualifications:
- Proficiency in database administration and database design
- Strong troubleshooting skills
- Experience with databases and replication
- Strong understanding of database design, performance tuning, and optimization techniques
- Proficiency in SQL and experience with database scripting languages (e.g., Python, Shell)
- Experience with database backup and recovery, security, and high availability solutions
- Familiarity with AWS services and tools, including S3, EC2, IAM, and CloudWatch
- Excellent problem-solving abilities and analytical skills
- Ability to work independently and remotely
- Advanced knowledge of AWS Redshift is a plus
- Bachelor's degree in Computer Science, Information Technology, or a related field
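For illustration, this is one way a Redshift DBA might script a routine health check from Python using the boto3 Redshift Data API, surfacing tables with the most unsorted data. The cluster identifier, database, and secret ARN are assumptions.

```python
# Hypothetical sketch: run a maintenance/monitoring query through the Redshift Data API.
import boto3

client = boto3.client("redshift-data")

response = client.execute_statement(
    ClusterIdentifier="example-cluster",                      # assumed cluster
    Database="analytics",                                     # assumed database
    SecretArn="arn:aws:secretsmanager:region:acct:secret:example",  # assumed credentials secret
    Sql='SELECT "table", unsorted, tbl_rows '
        'FROM svv_table_info ORDER BY unsorted DESC LIMIT 10',
)

# The statement id can be polled with describe_statement and fetched with get_statement_result.
print(response["Id"])
```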
Posted 2 weeks ago
8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Who we are? Searce means 'a fine sieve' & indicates 'to refine, to analyze, to improve'. It signifies our way of working: to improve to the finest degree of excellence, 'solving for better' every time. Searcians are passionate improvers & solvers who love to question the status quo. The primary purpose of all of us, at Searce, is driving intelligent, impactful & futuristic business outcomes using new-age technology. This purpose is driven passionately by HAPPIER people who aim to become better, everyday.

What are we looking for? Are you a keen learner? Excellent mentor? Passionate coach? We're looking for someone who's all three! We're on the lookout for someone who can design and implement our data processing pipelines for all kinds of data sources.

What you'll do as a Manager - Data Engineering with us?
1. You have worked in environments of different shapes and sizes: on-premise, private cloud, public cloud, hybrid, all Windows/Linux or a healthy mix. Thanks to this experience, you can connect the dots quickly and understand client pain points.
2. You are curious. You keep up with the breakneck speed of innovation on public cloud. When something new gets released or an existing service changes, you try it out and you learn.
3. You have a strong database background - relational and non-relational alike:
   a. MySQL, PostgreSQL, SQL Server, Oracle
   b. MongoDB, Cassandra and other NoSQL databases
   c. Strong SQL query writing experience
   d. HA, DR, performance tuning, migrations
   e. Experience with the cloud offerings - RDS, Aurora, Cloud SQL, Azure SQL
4. You have hands-on experience with designing, deploying, and migrating enterprise data warehouses and data lakes:
   a. Familiarity with migrations from the likes of Netezza, Greenplum, and Oracle to BigQuery/Redshift/Azure Data Warehouse
   b. Dimensional data modelling, reporting & analytics
   c. Designing ETL pipelines
5. You have experience with Advanced Analytics - the ability to work with the Applied AI team and assist in delivering predictive analytics, ML models, etc.
6. You have experience with the Big Data ecosystem:
   a. Self-managed Hadoop clusters, distributions like Hortonworks, and the cloud equivalents like EMR, Dataproc, HDInsight
   b. Apache Hudi, Hive, Presto, Spark, Flink, Kafka, etc.
7. You have hands-on experience with tools: Apache Airflow, Talend, Tableau, Pandas, Dataflow, Kinesis, Stream Analytics, etc.

What are the must-haves to join us?
1. Is education overrated? Yes. We believe so. However, there is no way to locate you otherwise. So we might have to look for a Bachelor's or Master's degree in engineering from a reputed institute, or you should have been coding since your 6th grade. And the latter is better. We will find you faster if you specify the latter in some manner. :)
2. 8-10+ years of overall IT experience with a strong data engineering and business intelligence background.
3. Minimum 3 years of experience on projects with GCP / AWS / Azure.
4. Minimum 3+ years of experience in data & analytics delivery and management consulting working with Data Migration, ETL, Business Intelligence, Data Quality, Data Analytics and AI tools.
5. 4+ years of hands-on experience with Python & SQL.
6. Experience across data solutions including data lake, warehousing, ETL, streaming, reporting and analytics tools.
7. Prior experience in recruitment, training & grooming of geeks.
8. Great to have certifications:
   a. GCP and/or AWS, professional level
   b. Your contributions to the community - tech blogs, Stack Overflow, etc.
Strong communication skills to communicate across a diverse audience with varying levels of business and technical expertise. So, If you are passionate about tech, future & what you read above (we really are!), apply here to experience the ‘Art of Possible’
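The posting above leans heavily on pipeline orchestration (item 7 names Apache Airflow). Purely as a hedged illustration, and not part of the job description itself, here is a minimal two-task extract-and-load DAG assuming Airflow 2.x; the DAG id, schedule and helper functions are hypothetical placeholders.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_from_source(**context):
    # Placeholder: pull a daily batch from the source system.
    print("extracting rows for", context["ds"])


def load_to_warehouse(**context):
    # Placeholder: load the transformed batch into the target warehouse.
    print("loading partition", context["ds"])


with DAG(
    dag_id="daily_sales_etl",          # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_from_source)
    load = PythonOperator(task_id="load", python_callable=load_to_warehouse)
    extract >> load

Chaining the two tasks with >> is the idiomatic Airflow way of declaring that the load step runs only after a successful extract.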
Posted 2 weeks ago
3.0 years
0 Lacs
India
Remote
Job Title: AWS Data Engineer
📍 Location: Remote (India)
🕒 Experience: 3+ Years
💼 Employment Type: Full-Time
About the Role:
We’re looking for a skilled AWS Data Engineer with 3+ years of hands-on experience in building and managing robust, scalable data pipelines using AWS services. The ideal candidate will have a strong foundation in processing both structured and unstructured data, particularly from IoT/sensor sources. Experience in the energy sector and with time-series data is highly desirable.
Key Responsibilities:
Design, develop, and maintain scalable ETL/ELT pipelines using AWS Glue, Redshift, Lambda, EMR, S3, and Athena
Integrate and process structured, unstructured, and real-time sensor/IoT data
Ensure pipeline performance, reliability, and fault tolerance
Collaborate with data scientists, analysts, and engineering teams to build analytics-ready solutions
Transform data using Python, Pandas, and SQL
Enforce data integrity, quality, and security standards
Use Terraform and CI/CD tools (e.g., Azure DevOps) for infrastructure and deployment automation
Monitor workflows, troubleshoot pipeline issues, and implement solutions
Explore and contribute to the use of modern AWS tools like Bedrock, Textract, Rekognition, and GenAI applications
Required Skills & Qualifications:
Bachelor’s/Master’s in Computer Science, IT, or a related field
Minimum 3 years of experience in AWS data engineering
Proficient in: AWS Glue, Redshift, S3, Lambda, EMR, Athena; Python, Pandas, SQL; RDS, Postgres, SAP HANA
Strong knowledge of data modeling, warehousing, and pipeline orchestration
Experience with Git and Infrastructure as Code using Terraform
Preferred Skills:
Experience with energy sector data or sensor-based/IoT data
Exposure to ML tools like SageMaker, TensorFlow, Scikit-learn
Familiarity with Apache Spark, Kafka
Experience with data visualization tools: Tableau, Power BI, AWS QuickSight
Awareness of data governance tools like AWS Data Quality, Collibra, DataBrew
AWS certifications (e.g., Data Analytics Specialty, Solutions Architect Associate)
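To make the sensor/IoT responsibility above more concrete, here is a small hedged sketch of the kind of Pandas transformation such a pipeline might run; the bucket, key and column names are invented, and a production version would more likely run inside Glue or Lambda than on a laptop.

import io

import boto3
import pandas as pd

# Fetch one day of raw meter readings from a (hypothetical) landing bucket.
s3 = boto3.client("s3")
obj = s3.get_object(Bucket="example-sensor-landing", Key="meter/2024-06-01.csv")
df = pd.read_csv(io.BytesIO(obj["Body"].read()), parse_dates=["timestamp"])

# Resample raw readings to hourly averages per device, a common step before
# loading analytics-ready data into Redshift or Athena-queryable Parquet.
hourly = (
    df.set_index("timestamp")
      .groupby("device_id")["power_kw"]
      .resample("1H")
      .mean()
      .reset_index()
)
print(hourly.head())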
Posted 2 weeks ago
2.0 years
0 Lacs
Kochi, Kerala, India
On-site
Job Title:
Management Level:
Location: Kochi, Coimbatore, Trivandrum
Must have skills: Python/Scala, PySpark/PyTorch
Good to have skills: Redshift
Job Summary
You’ll capture user requirements and translate them into business and digitally enabled solutions across a range of industries. Your responsibilities will include:
Roles and Responsibilities
Designing, developing, optimizing, and maintaining data pipelines that adhere to ETL principles and business goals
Solving complex data problems to deliver insights that help the business achieve its goals
Sourcing data (structured and unstructured) from various touchpoints, and formatting and organizing it into an analyzable format
Creating data products for analytics team members to improve productivity
Calling AI services such as vision and translation to generate outcomes that can be used in further steps along the pipeline
Fostering a culture of sharing, re-use, design and operational efficiency of data and analytical solutions
Preparing data to create a unified database and building tracking solutions that ensure data quality
Creating production-grade analytical assets deployed using the guiding principles of CI/CD
Professional and Technical Skills
Expert in at least two of Python, Scala, PySpark, PyTorch and JavaScript
Extensive experience in data analysis (Big Data / Apache Spark environments), data libraries (e.g. Pandas, SciPy, TensorFlow, Keras etc.) and SQL, with 2-3 years of hands-on experience working on these technologies
Experience in one of the many BI tools such as Tableau, Power BI or Looker
Good working knowledge of key concepts in data analytics, such as dimensional modeling, ETL, reporting/dashboarding, data governance, dealing with structured and unstructured data, and corresponding infrastructure needs
Worked extensively in Microsoft Azure (ADF, Function Apps, ADLS, Azure SQL), AWS (Lambda, Glue, S3), Databricks analytical platforms/tools, and the Snowflake Cloud Data Warehouse
Additional Information
Experience working in cloud data warehouses like Redshift or Synapse
Certification in any one of the following or equivalent:
AWS - AWS Certified Data Analytics - Specialty
Azure - Microsoft Certified Azure Data Scientist Associate
Snowflake - SnowPro Core - Data Engineer
Databricks Data Engineering
About Our Company | Accenture
Experience: 3.5-5 years of experience is required
Educational Qualification: Graduation (accurate educational details should be captured)
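Since the posting asks for PySpark, a brief hedged sketch of the "designing, developing, optimizing data pipelines" responsibility follows; the paths, column names and aggregation are illustrative only and assume a Spark/Databricks-style environment.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Read raw structured data, apply a simple dimensional-style aggregation,
# and write an analytics-ready table (placeholder paths).
orders = spark.read.option("header", "true").csv("s3://example-raw/orders/")
daily_revenue = (
    orders.withColumn("amount", F.col("amount").cast("double"))
          .groupBy("order_date", "region")
          .agg(F.sum("amount").alias("revenue"), F.count("*").alias("order_count"))
)
daily_revenue.write.mode("overwrite").parquet("s3://example-curated/daily_revenue/")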
Posted 2 weeks ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Software Engineer – Integration (Linux)
Skills
To be successful in this role as a Linux-focused Integration “Software Engineer – OSS Platform Engineering”, you should possess the following skill sets:
Strong Linux proficiency and expertise with containerization and Kubernetes, with programming expertise in one of the high-level languages like Python, Java or Golang, and NetDevOps automation.
Hands-on expertise with IaC, cloud platforms, CI/CD pipelines for data, containerization and orchestration, and SRE principles.
Strong knowledge and demonstrable hands-on experience with middleware technologies (Kafka, API gateways etc.) and data engineering tools/frameworks like Apache Spark, Airflow, Flink and the Hadoop ecosystem.
Some Other Highly Valued Skills Include
Expertise building ELT pipelines and cloud/storage integrations, including data lake/warehouse integrations (Redshift, BigQuery, Snowflake etc.).
Solid understanding of DevOps tooling, GitOps, CI/CD, config management, Jenkins, build pipelines and source control systems.
Working knowledge of cloud infrastructure services: compute, storage, networking, hybrid connectivity, monitoring/logging, security and IAM.
SRE experience: expertise building and defining KPIs (SLIs/SLOs) using open-source tooling like ELK, Prometheus and various other instrumentation, telemetry and log analytics.
You may be assessed on the key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking, and digital and technology, as well as job-specific technical skills. This role is based in our Pune office.
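The SRE line above mentions defining KPIs (SLIs/SLOs) with Prometheus. As a hedged sketch only, the snippet below instruments a toy request handler with the open-source prometheus_client library; the metric names and the handler are hypothetical, not taken from the role.

import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("pipeline_requests_total", "Requests handled", ["status"])
LATENCY = Histogram("pipeline_request_seconds", "Request latency in seconds")


@LATENCY.time()
def handle_request():
    time.sleep(random.uniform(0.01, 0.1))   # stand-in for real work
    REQUESTS.labels(status="ok").inc()


if __name__ == "__main__":
    start_http_server(8000)                 # exposes /metrics for Prometheus to scrape
    while True:
        handle_request()

Counters and histograms like these are the usual raw material for SLI queries (error rates, latency percentiles) in Prometheus.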
Posted 2 weeks ago
7.0 years
0 Lacs
India
Remote
Position: Senior Database Administrator
Job Type: Remote - Full Time
Experience: 7+ years
About the role:
We are looking for a highly skilled Senior Database Administrator (DBA) with expertise across both cloud and on-premise environments to join our Production DBA Team. The ideal candidate will ensure high availability, performance, and security of critical database systems while driving root cause analysis, automation, and proactive monitoring. You will take end-to-end ownership of issues, maintain clear communication with stakeholders, and collaborate with cross-functional teams to drive timely resolution, all while adhering to change management and production governance protocols.
Key Responsibilities:
Administer, maintain, and optimize databases across on-premise and cloud platforms including AWS (RDS MySQL/Aurora/Postgres, Redshift) and Oracle Cloud Infrastructure (OCI - DBCS, ADW).
Manage and tune MySQL, PostgreSQL, Oracle (CDB/PDB), Redshift, and Hadoop environments.
Perform advanced performance tuning, capacity planning, and health checks using tools such as SQL Developer, OCI Metrics, Performance Hub, and SolarWinds DPA.
Implement monitoring and alerting systems (CloudWatch, Opsgenie, OCI Alarms), and proactively resolve CPU, memory, I/O, and storage issues.
Handle database backup, recovery, replication, and housekeeping tasks, ensuring minimal downtime and data integrity.
Troubleshoot issues related to tablespaces, indexes, mounting failures, blocking/deadlocks, and data replication.
Work with command-line tools (OCI CLI) and develop automation scripts in Shell, Python, or Perl.
Administer wallets and password-less authentication, and manage Oracle ADW services.
Collaborate with vendors (Oracle, AWS, SolarWinds) to resolve escalated issues efficiently.
Maintain detailed documentation and communicate effectively across technical and non-technical teams.
Requirements:
6-10 years of hands-on DBA experience with increasing responsibility in enterprise environments.
Strong experience with MySQL, PostgreSQL, Oracle, and cloud-based databases (AWS RDS, Redshift, OCI DBCS/ADW).
Solid scripting skills in Python, Shell, or Perl for automation and operational efficiency.
Experience with database performance tuning, capacity planning, and backup strategies.
Working knowledge of Hadoop ecosystems is a strong plus.
Familiarity with wallet management, password-less auth, and Oracle multi-tenant architecture (CDB/PDB).
Excellent problem-solving, interpersonal, and communication skills.
Ability to work within SLAs and maintain high levels of ownership and accountability.
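The responsibilities above include troubleshooting blocking/deadlocks and writing automation scripts in Python. As a hedged illustration only, the short script below lists blocked PostgreSQL sessions via pg_stat_activity and pg_blocking_pids(); the connection details are placeholders and psycopg2 is assumed to be available.

import psycopg2

conn = psycopg2.connect(host="db.example.internal", dbname="appdb",
                        user="dba_readonly", password="***")
with conn, conn.cursor() as cur:
    # pg_blocking_pids() reports which backends are blocking a given session.
    cur.execute("""
        SELECT pid, pg_blocking_pids(pid) AS blocked_by, state, query
        FROM pg_stat_activity
        WHERE cardinality(pg_blocking_pids(pid)) > 0
    """)
    for pid, blocked_by, state, query in cur.fetchall():
        print(f"pid {pid} blocked by {blocked_by} ({state}): {query[:80]}")
conn.close()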
Posted 2 weeks ago