5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Description
Amazon's Spectrum Analytics team is looking for a Business Intelligence Engineer to help build the next generation of analytics solutions for Selling Partner Developer Services. This is an opportunity to get in on the ground floor as we transform from a reactive, request-directed team to a proactive, roadmap-driven organization that accelerates the business. We need someone who is passionate about data and the insights that large amounts of data can provide. In addition to broad experience with data technologies from ingestion to visualization and consumption (e.g. data pipelines, ETL, reporting and dashboarding), the ideal candidate will have strong analysis skills and an insatiable curiosity to answer the question "why?". You will also be able to articulate the story the data is telling with compelling verbal and written communication.

Key job responsibilities
- Deliver minimally to moderately complex data analysis, collaborating with Data Science as complexity increases.
- Develop dashboards and reports.
- Develop minimally to moderately complex data processing jobs using appropriate technologies (e.g. SQL, Python, Spark, AWS Lambda), collaborating with Data Engineers as needed.
- Collaborate with stakeholders to understand business domains, requirements, and expectations; work with owners of data source systems to understand capabilities and limitations.
- Manage project deliverables, anticipate risks, and resolve issues.
- Adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation.

About The Team
Spectrum offers a world-class suite of data products and experiences to empower the creation of innovative solutions on behalf of Partners. Our foundational systems and tools solve for cross-cutting Builder needs in externalizing data, and are easily extensible using federated policy and reusable technology.

Basic Qualifications
- Experience with data visualization using Tableau, QuickSight, or similar tools
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience using SQL to pull data from a database or data warehouse, and scripting experience (Python) to process data for modeling
- 5+ years of relevant professional experience in business intelligence, analytics, statistics, data engineering, data science, or a related field
- Demonstrated data analysis and visualization skills
- Highly proficient with SQL
- Knowledge of AWS products such as Redshift, QuickSight, and Lambda
- Excellent verbal/written communication and data presentation skills; ability to succinctly summarize key findings and effectively communicate with both business and technical teams

Preferred Qualifications
- Experience in data mining, ETL, etc., and using databases in a business environment with large-scale, complex datasets

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - ADCI - Karnataka
Job ID: A3001103
Posted 1 day ago
8.0 years
0 Lacs
India
On-site
At The Institute of Clever Stuff (ICS), we don't just solve problems; we revolutionise results. Our mission is to empower a new generation of Future Makers today, to revolutionise results and create a better tomorrow. Our vision is to pioneer a better future together. We are a consulting firm with a difference, powered by AI, driving world-leading results from data and change. We partner with visionary organisations to solve their toughest challenges, drive transformation, and deliver high-impact results. We combine a diverse network of data professionals, designers, software developers, and rebel consultants with our virtual AI consultant, fortu.ai, pairing human ingenuity with AI-powered intelligence to deliver smarter, faster, and more effective results.

Meet fortu.ai
Used by some of the world's leading organisations as a business question pipeline generator, ROI tracker, and innovation engine all in one. Trained on 400+ accelerators and 8 years of solving complex problems with global organisations. With fortu.ai, we're disrupting a $300+ billion industry, turning traditional consulting on its head.

Context of the work:
The client is a global energy company undergoing a significant transformation to support the energy transition. We work within their Customers & Products (C&P) division, serving both B2C and B2B customers across key markets such as the UK, US, Germany, Spain, and Poland. This business unit includes mobility (fuel and EV), convenience retail, and loyalty.

Scope of the work:
Client project to deliver:
- Data Pipeline Development: Building new pipelines for data models using AWS Glue and PySpark; leading on end-to-end data pipeline creation and execution.
- Data Pipeline Management: Deploying new features into core data models that require re-deployment of the pipeline through staging environments (dev, pre-prod, prod); supporting regular refreshes of the data.
- Data Model Performance: Leading on finding opportunities to optimise and automate data ingestion, data refreshes, and data validation steps for the data models.
- Data Modelling: Supporting the team in building new data models and solutions, working closely with data scientists.
- Data Quality Assurance: Establishing processes to monitor data pipelines for data loss, corruption, or duplication, and taking corrective action.

Requirements:
- Capable and confident in data engineering concepts: designing data models, building data warehouses, automating data pipelines, and managing large datasets.
- Strong background in data modelling, creating relational data models, data warehousing, and ETL processes.
- The ability to design, build, and manage efficient and reliable data pipelines.
- Strong coding best practices, including version control.
- Experience working in agile sprint-based delivery environments.
- Experience working with customer and transactional data.
- Experience collaborating with a mixed team of permanent client colleagues and other partners and vendors: data scientists, data engineers, data analysts, software engineers, UI/UX designers, and internal subject matter experts.
- Experience delivering to a large enterprise of stakeholders.

Core Technologies: SQL, Python, PySpark/Spark SQL, AWS (Redshift, Athena, Glue, Lambda, RDS), AWS Serverless Data Lake Framework (SDLF), SQL client software (e.g. DBeaver), Bazel (automated testing), Git.

Nice-to-have Technologies: Databricks, Amazon SageMaker, Jupyter Notebook, MLOps, ML model development, and ML engineering.
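Purely as an illustration of the Glue-plus-PySpark pipeline work this scope describes (not the client's actual code), here is a minimal AWS Glue job skeleton. It only runs inside a Glue job environment, and the catalog database, table name, and S3 path are hypothetical placeholders.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue job bootstrap: resolve arguments and initialize contexts
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from a hypothetical Glue Catalog table and convert to a Spark DataFrame
df = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db",           # hypothetical catalog database
    table_name="raw_transactions"  # hypothetical table
).toDF()

# Simple cleansing step: deduplicate on the key and drop non-positive amounts
cleaned = df.dropDuplicates(["transaction_id"]).filter("amount > 0")

# Write curated output as parquet to a hypothetical S3 location
cleaned.write.mode("overwrite").parquet("s3://example-bucket/curated/transactions/")
job.commit()
```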
Posted 1 day ago
0 years
7 - 8 Lacs
Chennai
On-site
Job Title: Engineer - ETL Testing
Career Level: C3

Introduction to role:
Are you ready to redefine an industry and change lives? As an Advanced Level Test Engineer, you'll conduct various testing activities within IT projects, working in line with set test strategies and good testing practices. Your role involves testing software products to a highly capable level and documenting the output of testing activities in a clear and concise manner. You'll engage in requirements analysis, test design, test case and test charter creation, test execution, automation, tooling, test documentation, and test reporting. Dive into a dynamic environment where your work has a direct impact on patients, redefining our ability to develop life-changing medicines.

Accountabilities:
- Understand project scope and architecture to accurately define testing requirements.
- Translate business requirements into technical requirements and document them in a Requirements Traceability Matrix (RTM).
- Develop and implement ETL test scripts using manual methods or relevant testing tools.
- Maintain process and documentation fields in line with established company standards.
- Analyze priority issues from production environments and assist in root cause analysis.
- Participate in end-to-end testing activities including planning, design, execution, and results reporting.
- Review test scripts to ensure accuracy and completeness prior to execution.
- Document and report defects clearly while collaborating with development teams to support resolution efforts.
- Provide regular updates on test execution status and progress during the testing phase.
- Identify and communicate risks associated with testing activities to relevant partners.
- Communicate effectively with multi-functional teams and partners to support testing and quality assurance objectives.

Essential Skills/Experience:
- Experience in ETL testing, with strong experience on data engineering projects and enterprise data warehouse systems.
- Experience writing complex SQL queries and validating enterprise data warehouse applications.
- Experience working in a fully Agile model.
- Strong ETL testing skills with an industry ETL tool such as dbt or Talend.
- Good knowledge of an RDBMS such as AWS Redshift, Postgres, or Snowflake.
- Identifies, manages, and resolves defects during testing cycles using a test management tool such as JIRA or HP ALM.
- Strong testing fundamentals.
- Technical or business degree, or proven relevant experience.
- Highly proficient skills and experience in testing software products.
- Strong experience working within global testing teams.
- Solid knowledge of Agile methodologies and experience working in Agile environments.

Desirable Skills/Experience:
- Automated testing using dbt frameworks and knowledge of Snowflake.
- Experience working with global partners.
- Hands-on experience in Python using packages such as Pandas and Great Expectations.
- Hands-on experience implementing tests in a CI/CD pipeline.
- Experience validating visual analytics reports using tools like Power BI or Spotfire.
- Master's degree.
- GxP validation experience.

When we put unexpected teams in the same room, we unleash bold thinking with the power to inspire life-changing medicines. In-person working gives us the platform we need to connect, work at pace, and challenge perceptions. That's why we work, on average, a minimum of three days per week from the office. But that doesn't mean we're not flexible. We balance the expectation of being in the office while respecting individual flexibility.
Join us in our unique and ambitious world. At AstraZeneca, we leverage technology to impact patients and ultimately save lives. As part of a purpose-led global organization, we push the boundaries of science to discover and develop life-changing medicines. Our work combines cutting-edge science with leading digital technology platforms and data, empowering the business to perform at its peak. With a passion for data, analytics, AI, machine learning, and more, we drive cross-company change to disrupt the entire industry. Here, you can innovate, take ownership, and explore new solutions in a modern technology environment. Ready to make a meaningful impact? Apply now to join our team!
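As a flavour of the hands-on validation work this role describes, here is a minimal sketch of ETL sanity checks in Python with Pandas; the toy frames, key column, and thresholds are hypothetical stand-ins for real warehouse extracts, not AstraZeneca's actual test suite.

```python
import pandas as pd

def sanity_checks(df: pd.DataFrame, key: str, required: list) -> dict:
    """Basic ETL-testing checks: row count, duplicate keys, null rates."""
    return {
        "row_count": len(df),
        "duplicate_keys": int(df[key].duplicated().sum()),
        "null_rates": df[required].isna().mean().round(4).to_dict(),
    }

# Toy source and target frames standing in for source-system and warehouse extracts
source = pd.DataFrame({"id": [1, 2, 3], "amount": [10.0, 20.0, 30.0]})
target = pd.DataFrame({"id": [1, 2, 3], "amount": [10.0, 20.0, 30.0]})

src_report = sanity_checks(source, key="id", required=["amount"])
tgt_report = sanity_checks(target, key="id", required=["amount"])

# Source-to-target reconciliation: counts and key sets must match
assert src_report["row_count"] == tgt_report["row_count"], "row count mismatch"
assert set(source["id"]) == set(target["id"]), "key set mismatch"
print(src_report, tgt_report)
```

In practice, checks like these are typically codified in a framework such as Great Expectations and wired into the CI/CD pipeline, as the desirable skills above suggest.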
Posted 1 day ago
0 years
6 - 8 Lacs
Chennai
On-site
Must have
- Experience working on the AWS platform.
- Sound knowledge of AWS services such as EC2, S3, Lambda, Glue, and CloudWatch, and of ETL services (Glue jobs, Redshift, Athena).
- Good knowledge of Airflow and PySpark.
- Experience with DB queries (basic joins).
- Good understanding of ETL processes.
- Very good knowledge of the production support process: L1, L2, L3, and SLAs.
- Able to analyze logs, configuration, and data to troubleshoot.
- Good knowledge of vulnerability and patch management resolution.
- Able to perform release support: sanity checks after patches, and coordination with multiple stakeholders for major releases.
- Able to perform minor enhancements and follow the change management process.
- Good knowledge of preparation and support for production deployment.

Nice to have
- Strong communication skills.
- Openness to working in all three shifts.

About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth, one that seeks to provide you with exciting projects, opportunities, and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
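For the log-analysis side of production support, here is a minimal sketch using boto3's CloudWatch Logs client; the log group name is a hypothetical placeholder, and configured AWS credentials are assumed.

```python
import time

import boto3

logs = boto3.client("logs")  # assumes AWS credentials/region are configured

# Pull ERROR lines from the last hour of a hypothetical ETL job's log group
resp = logs.filter_log_events(
    logGroupName="/aws/lambda/example-etl-job",       # hypothetical log group
    filterPattern="ERROR",
    startTime=int((time.time() - 3600) * 1000),       # startTime is in milliseconds
)

for event in resp["events"]:
    print(event["timestamp"], event["message"].strip())
```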
Posted 1 day ago
5.0 years
3 - 5 Lacs
Chennai
On-site
Summary/Objective
Reveleer is a healthcare data and analytics company that uses Artificial Intelligence to give health plans across all business lines greater control over their Quality Improvement, Risk Adjustment, and Member Management programs. With one transformative solution, the Reveleer platform enables plans to independently execute and manage every aspect of enrollment, provider outreach and data retrieval, coding, abstraction, reporting, and submissions. Leveraging proprietary technology, robust data sets, and subject matter expertise, Reveleer provides complete record retrieval and review services so health plans can confidently plan and execute risk, quality, and member management programs to deliver more value and improved outcomes.

Job Overview
We are looking for a highly skilled Database Administrator (DBA) to manage, maintain, and optimize our databases across multiple platforms. The ideal candidate will have extensive experience with AWS RDS, Microsoft SQL Server, and MongoDB, along with a strong understanding of database security, performance tuning, and high-availability architectures. This role is crucial in ensuring data integrity, security, and efficiency for our SaaS applications while meeting HIPAA and other healthcare compliance requirements.

Key Responsibilities

Database Management & Administration
- Design, configure, and maintain AWS RDS (PostgreSQL, MySQL, SQL Server), Microsoft SQL Server, and MongoDB databases.
- Ensure high availability, performance, and scalability of all databases.
- Implement backup and disaster recovery strategies, including point-in-time recovery (PITR) and failover mechanisms.
- Monitor and optimize database performance using tools like AWS CloudWatch, SQL Profiler, and MongoDB Atlas Performance Advisor.
- Manage database provisioning, patching, and version upgrades in production and non-production environments.

Security & Compliance
- Enforce data security best practices, including encryption, access controls (IAM, RBAC), and compliance with HIPAA and other healthcare regulations.
- Perform regular security audits and vulnerability assessments using tools like AWS Security Hub and Tenable.
- Implement and maintain database auditing, logging, and monitoring to detect and prevent unauthorized access.

Optimization & Automation
- Analyze and optimize query performance, indexing strategies, and database schema design.
- Automate database maintenance tasks using Terraform, AWS Lambda, PowerShell, or Python scripts.
- Work with DevOps to integrate CI/CD pipelines for database changes (e.g., Flyway, Liquibase).
- Optimize storage and resource utilization in AWS to reduce costs while maintaining performance.

Collaboration & Support
- Work closely with DevOps, Engineering, and Security teams to ensure database reliability and security.
- Provide guidance and best practices to developers on database design, indexing, and query performance tuning.
- Support application teams with troubleshooting, query optimization, and data modeling.
- Participate in on-call rotation for database-related incidents and outages.

Required Qualifications & Experience
- 5+ years of experience as a Database Administrator in a SaaS or cloud environment.
- Strong expertise in AWS RDS (PostgreSQL, MySQL, or SQL Server).
- Proficient in Microsoft SQL Server, including T-SQL, SSMS, and high-availability configurations.
- Experience with NoSQL databases like MongoDB (Atlas preferred).
- Deep understanding of performance tuning, query optimization, indexing strategies, and partitioning.
- Familiarity with Terraform, AWS CloudFormation, or other Infrastructure-as-Code (IaC) tools.
- Experience with backup and disaster recovery strategies in AWS and on-prem environments.
- Knowledge of database replication, clustering, and high-availability architectures.
- Proficiency in scripting (Python, PowerShell, Bash) for automation.
- Strong knowledge of security best practices (IAM, RBAC, data encryption, audit logging).
- Familiarity with healthcare compliance requirements (HIPAA, HITRUST) is a plus.

Preferred Skills & Certifications
- AWS Certified Database – Specialty
- Microsoft Certified: Azure Database Administrator Associate
- MongoDB Certified DBA Associate
- Experience with AI/ML-driven database performance optimization tools
- Exposure to data warehousing and analytics (Redshift, Snowflake, or BigQuery)

Other Duties
Please note this job description is not designed to cover or contain a comprehensive listing of activities, duties, or responsibilities that are required of the employee for this job. Duties, responsibilities, and activities may change at any time with or without advance notice. Any changes may be for an indeterminate time frame.

EEO Statement
Reveleer provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristics protected by federal, state, or local laws.
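As an illustration of the scripted automation this role calls for, here is a minimal sketch that takes a manual RDS snapshot with boto3 and waits for it to become available; the instance identifier is a hypothetical placeholder and configured AWS credentials are assumed.

```python
import datetime

import boto3

rds = boto3.client("rds")  # assumes AWS credentials/region are configured

instance_id = "prod-sqlserver-01"  # hypothetical RDS instance identifier
snapshot_id = f"{instance_id}-manual-{datetime.date.today():%Y%m%d}"

# Kick off a manual snapshot (e.g. before a risky schema migration)
rds.create_db_snapshot(
    DBInstanceIdentifier=instance_id,
    DBSnapshotIdentifier=snapshot_id,
)

# Block until the snapshot is available before proceeding with the change
rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier=snapshot_id)
print(f"Snapshot {snapshot_id} is available")
```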
Posted 1 day ago
9.0 years
0 Lacs
Andhra Pradesh
On-site
Data Engineer
Must have 9+ years of experience in the skills mentioned below.

Must Have: Big Data concepts, Python (core Python, able to write code), SQL, shell scripting, AWS S3
Good to Have: Event-driven/AWS SQS, microservices, API development, Kafka, Kubernetes, Argo, Amazon Redshift, Amazon Aurora

About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth, one that seeks to provide you with exciting projects, opportunities, and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
Posted 1 day ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
🚀 We're Hiring: Sr. Solution Architect – Cloud
📍 Location: Delhi & Hyderabad
🕐 Experience: 5+ years
🌐 Expertise: AWS, Cloud Architecture, Automation, Security

Are you passionate about building scalable, secure, and cost-efficient cloud solutions? We're looking for a hands-on Senior Solution Architect to lead cloud design, strategy, and automation for mission-critical systems.

What You'll Do:
✅ Design secure, scalable AWS architectures (compute, storage, networking, data, AI/ML)
✅ Create detailed architecture docs, migration plans, and cost models
✅ Drive cloud strategy, CI/CD, and automation using Terraform, CloudFormation, Jenkins
✅ Implement cloud security best practices (IAM, VPC, encryption, compliance frameworks)
✅ Lead architecture reviews, RCA sessions, and major incident management
✅ Mentor junior engineers and guide cross-functional teams
✅ Work with cutting-edge tools – SageMaker, Redshift, EKS, GuardDuty, and more

What We're Looking For:
🔹 5+ years of hands-on cloud architecture experience
🔹 Deep AWS knowledge: EC2, EKS, RDS, S3, VPC, Lambda, etc.
🔹 Strong scripting and automation (Python, Bash, PowerShell)
🔹 Excellent troubleshooting on Windows/Linux
🔹 AWS Certified Solutions Architect – Associate (mandatory)
🔹 Professional or DevOps Pro cert preferred
🔹 Strong communication and leadership skills
🔹 Bonus: Familiarity with Azure, Puppet, Chef, ServiceNow

👉 Ready to design the future of cloud with us?
📩 Apply now or connect with us to learn more!
Posted 1 day ago
6.0 - 8.0 years
0 Lacs
Andhra Pradesh, India
Remote
We are seeking an experienced Data Engineer to join our team in an offshore capacity. The ideal candidate will have 6-8 years of hands-on experience in building, deploying, and maintaining scalable data pipelines and processing frameworks using PySpark, AWS EMR, and Apache Airflow. This role will require collaborating with cross-functional teams, understanding business needs, and designing robust solutions for large-scale data processing.

Key Responsibilities
- Design, develop, and maintain efficient, scalable data pipelines for batch and real-time processing.
- Use PySpark to process large datasets and perform transformations, ensuring high performance and optimized workflows.
- Build and manage data workflows with Apache Airflow, ensuring smooth scheduling and execution of ETL pipelines.
- Implement AWS EMR clusters for big data processing, ensuring efficient scaling, cost optimization, and high availability.
- Develop automated solutions for data extraction, transformation, and loading (ETL) across various sources and sinks.
- Collaborate with data architects, analysts, and other stakeholders to gather requirements and ensure smooth integration of data solutions.
- Monitor and troubleshoot data pipelines, ensuring the system runs efficiently and without disruptions.
- Optimize complex queries, algorithms, and processing logic to meet performance and scalability requirements.
- Perform data validation and quality checks to ensure the accuracy and consistency of the data.
- Stay updated on the latest advancements in big data technologies and cloud infrastructure to suggest process improvements.

Required Skills
- 5-8 years of experience in data engineering, with strong expertise in data pipeline design and big data processing.
- Proficiency in PySpark for distributed data processing.
- Experience working with AWS EMR for big data processing and managing clusters.
- Hands-on experience with Apache Airflow for orchestration and scheduling of data workflows.
- Solid understanding of data warehousing concepts, ETL processes, and data integration.
- Strong experience with SQL for querying and optimizing large datasets.
- Familiarity with other AWS services like Lambda, RDS, and Glue is a plus.
- Strong troubleshooting, debugging, and problem-solving skills.
- Ability to work independently in an offshore setup, collaborating effectively with remote teams.

Preferred Skills
- Experience with data lakes, Redshift, or other cloud-based data storage and processing systems.
- Understanding of data security and privacy best practices for handling sensitive data.
- Familiarity with machine learning concepts and data science workflows.

Education
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.

Additional Information
This position is offshore (India) and requires remote collaboration with teams based in other regions. It offers the opportunity to work on challenging data engineering projects with a global team.
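To make the Airflow orchestration pattern above concrete, here is a minimal DAG sketch; the task callables are hypothetical placeholders standing in for real EMR/PySpark steps, not a production pipeline.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data from the source system")

def transform():
    print("run PySpark transformations (e.g. submitted as an EMR step)")

def load():
    print("write curated data to the warehouse")

with DAG(
    dag_id="example_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)

    # Linear dependency chain: extract -> transform -> load
    t1 >> t2 >> t3
```

In a real EMR setup, the transform step would typically use an EMR-specific operator or submit a Spark job rather than a local Python callable.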
Posted 1 day ago
5.0 - 8.0 years
0 Lacs
Andhra Pradesh, India
Remote
We are seeking a Data Quality Analyst with Business Analyst expertise to support data engineering and governance initiatives. This hybrid role involves ensuring data accuracy and integrity through systematic QA processes, while also analyzing and documenting business workflows, data lineage, and functional requirements. The ideal candidate will act as a bridge between technical and business teams, playing a crucial role in validating data pipelines and ensuring process transparency.

Key Responsibilities

Quality Assurance for Data Engineering:
- Validate data pipelines, ETL processes, and data transformations to ensure accuracy and completeness.
- Design and execute test cases to verify source-to-target data mapping and transformation logic.
- Identify data anomalies, inconsistencies, and quality issues, and collaborate with data engineers to resolve them.
- Work with large datasets using SQL and other data analysis tools for validation and troubleshooting.

Business Analysis & Documentation
- Collaborate with stakeholders to gather, analyze, and document data requirements, business rules, and workflows.
- Create clear and concise documentation including data flow diagrams, process maps, and requirement specifications.
- Document as-is and to-be states of data processes, ensuring alignment with business objectives and compliance requirements.
- Maintain traceability between business requirements, technical specifications, and QA validation.

Workflow and Process Management
- Develop an end-to-end understanding of data processes and usage across systems.
- Contribute to data governance efforts by defining data quality KPIs, validation rules, and reporting metrics.
- Participate in UAT and data reviews, ensuring business needs are met with high data quality.

Required Skills and Qualifications
- 5-8 years of experience in QA and/or Business Analysis, ideally supporting data engineering or analytics teams.
- Strong understanding of ETL/ELT workflows, data warehousing, and data lifecycle management.
- Proficiency in SQL for data validation and analysis.
- Experience working with tools such as JIRA, Confluence, Airflow (preferred), or similar workflow/documentation tools.
- Excellent skills in writing test plans, test cases, and business/technical documentation.
- Ability to interpret and document complex business processes and data flows.
- Strong communication and stakeholder management skills across technical and non-technical teams.

Preferred Skills
- Familiarity with cloud platforms such as AWS, Snowflake, Redshift, or BigQuery.
- Exposure to data catalog, lineage, or governance tools (e.g., Collibra, Alation) is a plus.
- Understanding of data privacy and compliance (e.g., GDPR, HIPAA) is a bonus.

Education
Bachelor's degree in Computer Science, Information Systems, Business Analytics, or a related field.

Additional Information
This is an offshore (India-based) remote role with opportunities to work with global data and analytics teams. It is ideal for QA professionals who have transitioned into or supported business analysis, especially on data-focused projects.
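To make the source-to-target validation concrete, here is a minimal, runnable sketch; an in-memory SQLite database with toy tables stands in for the real source system and warehouse, and the table and column names are hypothetical.

```python
import sqlite3

# In-memory SQLite stands in for the source system and the target warehouse
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE src_orders (id INTEGER, amount REAL);
    CREATE TABLE tgt_orders (id INTEGER, amount REAL);
    INSERT INTO src_orders VALUES (1, 10.0), (2, 20.0), (3, 30.0);
    INSERT INTO tgt_orders VALUES (1, 10.0), (2, 20.0), (3, 30.0);
""")

# Each check is a SQL expression that evaluates to 1 (pass) or 0 (fail)
checks = {
    "row_count_match": (
        "SELECT (SELECT COUNT(*) FROM src_orders) = (SELECT COUNT(*) FROM tgt_orders)"
    ),
    "amount_total_match": (
        "SELECT (SELECT SUM(amount) FROM src_orders) = (SELECT SUM(amount) FROM tgt_orders)"
    ),
    "no_missing_keys": (
        "SELECT COUNT(*) = 0 FROM src_orders s "
        "LEFT JOIN tgt_orders t ON s.id = t.id WHERE t.id IS NULL"
    ),
}

for name, sql in checks.items():
    passed = bool(conn.execute(sql).fetchone()[0])
    print(f"{name}: {'PASS' if passed else 'FAIL'}")
```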
Posted 1 day ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
I am thrilled to share an exciting opportunity with one of our esteemed clients! 🚀 Join me in exploring new horizons and unlocking potential. If you're ready for a challenge and growth, read on.

Experience: 7+ years
Location: Chennai, Hyderabad (immediate joiners only; work from office)
Mandatory skills: SQL, Python, PySpark, Databricks (strong in core Databricks), AWS (mandatory)

JD:
- Manage and optimize cloud infrastructure (AWS, Databricks) for data storage, processing, and compute resources, ensuring seamless data operations.
- Implement data quality checks, validation rules, and transformation logic to ensure the accuracy, consistency, and reliability of data.
- Integrate data from multiple sources, ensuring data is accurately transformed and stored in optimal formats (e.g., Delta Lake, Redshift, S3).
- Automate data workflows using tools like Airflow, Databricks APIs, and other orchestration technologies to streamline data ingestion, processing, and reporting tasks.

Regards,
R Usha
usha@livecjobs.com
Posted 1 day ago
10.0 years
0 Lacs
India
On-site
About Fresh Gravity:
Founded in 2015, Fresh Gravity helps businesses make data-driven decisions. We are driven by data and its potential as an asset to drive business growth and efficiency. Our consultants are passionate innovators who solve clients' business problems by applying best-in-class data and analytics solutions. We provide a range of consulting and systems integration services and solutions to our clients in the areas of Data Management, Analytics and Machine Learning, and Artificial Intelligence. In the last 10 years, we have put together an exceptional team and have delivered 200+ projects for over 80 clients ranging from startups to several Fortune 500 companies. We are on a mission to solve some of the most complex business problems for our clients using some of the most exciting new technologies, providing the best of learning opportunities for our team. We are focused and intentional about building a strong corporate culture in which individuals feel valued, supported, and cared for. We foster an environment where creativity thrives, paving the way for groundbreaking solutions and personal growth. Our open, collaborative, and empowering work culture is the main reason for our growth and success. To know more about our culture and employee benefits, visit our website https://www.freshgravity.com/employee-benefits/ . We promise rich opportunities for you to succeed, to shine, to exceed even your own expectations. We are data driven. We are passionate. We are innovators. We are Fresh Gravity.

Requirements

What you'll do:
- Solid hands-on experience with Talend Open Studio for Data Integration, Talend Administration Centre, and Talend Data Quality.
- ETL Process Design: Able to develop and design ETL jobs, ensuring they meet business requirements and follow best practices; knowledge of SCD and normalization jobs.
- Talend Configuration: Proficiency in configuring Talend Studio, Job Server, and other Talend components.
- Data Mapping: Proficiency in creating and refining Talend mappings for data extraction, transformation, and loading.
- SQL: Strong knowledge of SQL, with the ability to develop complex SQL queries for data extraction and loading, especially when working with databases like Oracle, Redshift, and Snowflake.
- Custom Scripting: Knowledge to implement custom Talend components using scripting languages like Python or Java, and shell scripting to automate tasks.
- Reusable Joblets: Working knowledge to design and create reusable joblets for various ETL tasks.
- ESB Integration and real-time data integration: Able to implement and manage integrations with ESB (Enterprise Service Bus) systems such as Kafka or Azure Event Hub, including REST and SOAP web services.

Desirable Skills and Experience:
- Experience with ETL/ELT, data transformation, data mapping, and data profiling.
- Strong analytical and problem-solving skills.
- Ability to work independently and as part of a team.
- Ability to work with cross-functional teams to understand business requirements and design data integration solutions that meet those requirements.
- Troubleshoot and resolve data integration issues in a timely manner.
- Mentor junior team members and help them improve their Talend development skills.
- Stay up to date with the latest Talend and data integration trends and technologies.

Benefits
In addition to a competitive package, we promise rich opportunities for you to succeed, to shine, to exceed even your own expectations. In keeping with Fresh Gravity's challenger ethos, we have developed the 5Dimensions (5D) benefits program. This program recognizes the multiple dimensions within each of us and seeks to provide opportunities for deep development across these dimensions: Enrich Myself; Enhance My Client; Build My Company; Nurture My Family; and Better Humanity.
Posted 1 day ago
2.0 - 7.0 years
40 - 45 Lacs
Chandigarh, Bengaluru
Work from Office
As the Data Engineer, you will play a pivotal role in shaping our data infrastructure and executing against our strategy. You will ideate alongside engineering, data, and our clients to deploy data products with an innovative and meaningful impact for clients. You will design, build, and maintain scalable data pipelines and workflows on AWS. Additionally, your expertise in AI and machine learning will enhance our ability to deliver smarter, more predictive solutions.

Key Responsibilities
- Collaborate with other engineers and customers to brainstorm and develop impactful data products tailored to our clients.
- Leverage AI and machine learning techniques to integrate intelligent features into our offerings.
- Develop and optimize end-to-end data pipelines on AWS.
- Follow best practices in software architecture and development.
- Implement effective cost management and performance optimization strategies.
- Develop and maintain systems using Python, SQL, PySpark, and Django for front-end development.
- Work directly with clients and end-users and address their data needs.
- Utilize databases and tools including, but not limited to, Postgres, Redshift, Airflow, and MongoDB to support our data ecosystem.
- Leverage AI frameworks and libraries to integrate advanced analytics into our solutions.

Qualifications

Experience:
- Minimum of 3 years of experience in data engineering, software development, or related roles.
- Proven track record in designing and deploying AWS cloud infrastructure solutions.
- At least 2 years in data analysis and mining techniques to aid in descriptive and diagnostic insights.
- Extensive hands-on experience with Postgres, Redshift, Airflow, MongoDB, and real-time data workflows.

Technical Skills:
- Expertise in Python, SQL, and PySpark.
- Strong background in software architecture and scalable development practices.
- Experience with Tableau, Metabase, or similar visualization tools.
- Working knowledge of AI frameworks and libraries is a plus.

Leadership & Communication:
- Demonstrates ownership and accountability for delivery with a strong commitment to quality.
- Excellent communication skills with a history of effective client and end-user engagement.

Startup & Fintech Mindset:
- Adaptability and agility to thrive in a fast-paced, early-stage startup environment.
- Passion for fintech innovation and a strong desire to make a meaningful impact on the future of finance.

Location: Remote
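As a small taste of the PySpark pipeline work listed above, here is a minimal batch-transform sketch; the S3 paths, schema, and column names are hypothetical placeholders (reading from S3 also assumes the appropriate Hadoop/AWS connector is available).

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example_pipeline").getOrCreate()

# Hypothetical raw transactions landed as CSV in S3
raw = spark.read.option("header", True).csv("s3://example-bucket/raw/transactions/")

# Cast, filter out bad rows, and aggregate to daily totals
daily = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount").isNotNull())
       .groupBy("order_date")
       .agg(
           F.sum("amount").alias("total_amount"),
           F.count("*").alias("orders"),
       )
)

# Partitioned parquet output, queryable from Redshift Spectrum or Athena
daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/daily_sales/"
)
```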
Posted 1 day ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
AWS Data Engineering _6yrs_Gurgaon

Primary skills: AWS services, SQL or Python, Databricks or Snowflake
• Hands-on development experience in Data Warehousing and/or Software Development
• AWS (S3, Redshift, Airflow), DevOps and DataOps tools (Jenkins, Git, Erwin), SQL or Python

Secondary skills:
• Knowledge of UNIX, Spark, and Databricks
• Experience with data solutions in the cloud (preferably AWS) as well as on-premises assets like Oracle
• Demonstrated capability with development and performance-tuning skills for RDBMS databases (e.g. Oracle, Teradata, Snowflake, Redshift)
Posted 1 day ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Us:
Chryselys is a Pharma Analytics & Business consulting company that delivers data-driven insights leveraging AI-powered, cloud-native platforms to achieve high-impact transformations. We specialize in digital technologies and advanced data science techniques that provide strategic and operational insights.

Who we are:
· People - Our team of industry veterans, advisors, and senior strategists have diverse backgrounds and have worked at top-tier companies.
· Quality - Our goal is to deliver the value of a big five consulting company without the big five cost.
· Technology - Our solutions are business-centric and built on cloud-native technologies.

Key Responsibilities and Core Competencies:
· You will be responsible for managing and delivering multiple Pharma projects.
· Leading a team of at least 8 members, resolving their technical and business-related problems and other queries.
· Responsible for client interaction: requirements gathering, creating required documents, development, and quality assurance of the deliverables.
· Good collaboration with onshore and senior colleagues.
· Should have a fair understanding of data capabilities (Data Management, Data Quality, Master and Reference Data).
· Exposure to project management methodologies including Agile and Waterfall.
· Experience working on RFPs would be a plus.

Required Technical Skills:
· Proficient in Python, PySpark, and SQL.
· Extensive hands-on experience in big data processing and cloud technologies like AWS and Azure services, Databricks, etc.
· Strong experience working with cloud data warehouses like Snowflake, Redshift, Azure, etc.
· Good experience in ETL, data modelling, and building ETL pipelines.
· Conceptual knowledge of relational database technologies, data lakes, lakehouses, etc.
· Sound knowledge of data operations, quality, and data governance.

Preferred Qualifications:
· Bachelor's or Master's Engineering/MCA or equivalent degree.
· 7+ years of experience as a Data Engineer, with at least 2 years managing medium to large scale programs.
· Minimum 5 years of Pharma and Life Science domain exposure in IQVIA, Veeva, Symphony, IMS, etc.
· High motivation, good work ethic, maturity, self-organization, and personal initiative.
· Ability to work collaboratively and provide support to the team.
· Excellent written and verbal communication skills.
· Strong analytical and problem-solving skills.

Location
· Preferably Hyderabad, India

How to Apply:
Ready to make an impact? Apply now by clicking [here] or visit our careers page at https://chryselys.com/chryselys-career/ . Please include your resume and a cover letter detailing why you're the perfect fit for this role.

Equal Employment Opportunity:
Chryselys is proud to be an Equal Employment Opportunity Employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.

Connect with Us:
Follow us for updates and more opportunities: https://www.linkedin.com/company/chryselys/mycompany/
Discover more about our team and culture: www.chryselys.com
Posted 1 day ago
10.0 years
0 Lacs
Vadodara, Gujarat, India
On-site
About Rearc
Founded in 2016, we pride ourselves on fostering an environment where creativity flourishes, bureaucracy is non-existent, and individuals are encouraged to challenge the status quo. We're not just a company; we're a community of problem-solvers dedicated to improving the lives of fellow software engineers. Our commitment is simple: finding the right fit for our team and cultivating a desire to make things better. If you're a cloud professional intrigued by our problem space and eager to make a difference, you've come to the right place. Join us, and let's solve problems together!

As a Lead Data Engineer at Rearc, you'll play a pivotal role in establishing and maintaining technical excellence within our data engineering team. Your deep expertise in data architecture, ETL processes, and data modelling will be instrumental in optimizing data workflows for efficiency, scalability, and reliability. You'll collaborate closely with cross-functional teams to design and implement robust data solutions that meet business objectives and adhere to best practices in data management. Building strong partnerships with both technical teams and stakeholders will be essential as you drive data-driven initiatives and ensure their successful implementation.

What You Bring
- 10+ years of experience in data engineering, data architecture, or related fields, offering a wealth of expertise in managing and optimizing data pipelines and architectures.
- Extensive experience in writing and testing Java and/or Python.
- Proven experience with data pipeline orchestration using platforms such as Airflow, Databricks, dbt, or AWS Glue.
- Hands-on experience with data analysis tools and libraries like PySpark, NumPy, Pandas, or Dask; proficiency with Spark and Databricks is highly desirable.
- A proven track record of leading complex data engineering projects, including designing and implementing scalable data solutions.
- Hands-on experience with ETL processes, data warehousing, and data modeling tools, allowing you to deliver efficient and robust data pipelines.
- In-depth knowledge of data integration tools and best practices.
- A strong understanding of cloud-based data services and technologies (e.g., AWS Redshift, Azure Synapse Analytics, Google BigQuery).
- Strong strategic and analytical skills, enabling you to solve intricate data challenges and drive data-driven decision-making.
- Proven proficiency in implementing and optimizing data pipelines using modern tools and frameworks, including Databricks for data processing and Delta Lake for managing large-scale data lakes.
- Exceptional communication and interpersonal skills that facilitate collaboration with cross-functional teams and effective stakeholder engagement at all levels.

What You'll Do
As a Lead Data Engineer at Rearc, your role is pivotal in driving the success of our data engineering initiatives. You will lead by example, fostering trust and accountability within your team while leveraging your technical expertise to optimize data processes and deliver exceptional data solutions. Here's what you'll be doing:
- Understand Requirements and Challenges: Collaborate with stakeholders to deeply understand their data requirements and challenges, enabling the development of robust data solutions tailored to the needs of our clients.
- Implement with a DataOps Mindset: Embrace a DataOps mindset and utilize modern data engineering tools and frameworks, such as Apache Airflow, Apache Spark, or similar, to build scalable and efficient data pipelines and architectures.
- Lead Data Engineering Projects: Take the lead in managing and executing data engineering projects, providing technical guidance and oversight to ensure successful project delivery.
- Mentor Data Engineers: Share your extensive knowledge and experience in data engineering with junior team members, guiding and mentoring them to foster their growth and development in the field.
- Promote Knowledge Sharing: Contribute to our knowledge base by writing technical blogs and articles, promoting best practices in data engineering, and contributing to a culture of continuous learning and innovation.

At Rearc, we're committed to empowering engineers to build awesome products and experiences. Success as a business hinges on our people's ability to think freely, challenge the status quo, and speak up about alternative problem-solving approaches. If you're an engineer driven by the desire to solve problems and make a difference, you're in the right place! Our approach is simple: empower engineers with the best tools possible to make an impact within their industry. We're on the lookout for engineers who thrive on ownership and freedom, possessing not just technical prowess, but also exceptional leadership skills. Our ideal candidates are hands-on-keyboard leaders who don't just talk the talk but also walk the walk, designing and building solutions that push the boundaries of cloud computing.
Posted 1 day ago
20.0 years
0 Lacs
India
Remote
Company: AA GLOBUSDIGITAL INDIA PRIVATE LIMITED
AA GLOBUSDIGITAL INDIA PRIVATE LIMITED is a wholly owned subsidiary of Globus Systems Inc US. Globus Systems was founded by industry executives who have been part of the IT services industry for the past 20 years and have seen it evolve and mature. We understand the challenges faced by organizations as they prepare for the future. As a technology delivery company, we are focused on helping organizations lay a foundation for their "Tomorrow-Roadmap". At the heart of any business is the data that drives decisions. Data integrity and security are key drivers for growth. Smart and timely use of technology can help build, streamline, and enable data-driven decisions that become the backbone of an organization. Business leaders are constantly searching for new solutions, services, and partners they can trust with enabling these drivers.

Location: PAN India
NP: Immediate joiners required
Experience: 6 to 9 years
Work Mode: WFH
CTC: Market Standard

Required Skills
- Expert SQL (6+ years), including stored procedures
- Expert git (6+ years)
- Expert Python (5+ years)
- dbt, including custom packages, macros, and other core functionality
- Apache Airflow/Dagster
- Postgres/MySQL experience required
- Snowflake/Redshift/BigQuery experience required
- Knowledge of character set differences and character set conversion techniques

Additional Preferred Experience
- MS SQL Server and SSIS experience
- Knowledgeable about applying A.I. technology to data engineering

Responsibilities
- Analyze raw data containing hundreds of millions of records to answer questions for business users
- Prepare data sets for various production use cases, including production of direct mail, custom reports, and data science purposes
- Create testable, high-performance ETL and ELT pipelines using modern technologies
Posted 1 day ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
This job is with Amazon, an inclusive employer and a member of myGwork – the largest global platform for the LGBTQ+ business community. Please do not contact the recruiter directly.

Description
IES Prime is building a team to take the Prime experience of customers to the next level by building capabilities that are relevant for Prime as well as non-Prime customers in IN and other emerging markets. Our development team plays a pivotal role in this program, with the mission to build a comprehensive solution for the India Prime business. This is a rare opportunity to be part of a team that will be responsible for building a successful, sustainable, and strategic business for Amazon Prime, expanding the coverage of recurring payments for Prime in India, and taking it to new emerging markets. The candidate will be instrumental in shaping the product direction and will be actively involved in defining key product features that impact the business. You will work with Sr. and Principal Engineers at Amazon Prime to evolve the design and architecture of the products owned by this team. You will be responsible for setting up and holding a high software quality bar, besides providing technical direction to a highly technical team of Software Engineers. As part of this team, you will work to ensure the Amazon.in Prime experience is seamless and offers the best shopping experience. It's a great opportunity to develop and enhance experiences for mobile devices first. You will work on analyzing latency across the various Amazon.in pages using Redshift, DynamoDB, S3, Java, and Spark. You will get the opportunity to code on almost all key pages of the retail website, building features and improving business metrics. You will also contribute to reducing latency for customers by reducing the bytes on the wire and adapting the UX based on network bandwidth. You will be part of a team that obsesses about the performance of our customers' experience and enjoys the flexibility to pursue what makes sense. Come enjoy an exploratory and research-oriented team of Cowboys working in a fast-paced environment, always eager to take on big challenges.

Basic Qualifications
- 3+ years of non-internship professional software development experience
- 2+ years of non-internship design or architecture (design patterns, reliability and scaling) experience with new and existing systems
- 3+ years of Video Games Industry (supporting title Development, Release, or Live Ops) experience
- Experience programming with at least one software programming language

Preferred Qualifications
- 3+ years of full software development life cycle experience, including coding standards, code reviews, source control management, build processes, testing, and operations
- Bachelor's degree in computer science or equivalent

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
Posted 1 day ago
7.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About ProcDNA
ProcDNA is a global consulting firm. We fuse design thinking with cutting-edge technology to create game-changing Commercial Analytics and Technology solutions for our clients. We're a passionate team of 275+ across 6 offices, all growing and learning together since our launch during the pandemic. Here, you won't be stuck in a cubicle; you'll be out in the open water, shaping the future with brilliant minds. At ProcDNA, innovation isn't just encouraged; it's ingrained in our DNA. Ready to join our epic growth journey?

What We Are Looking For
You'll be driving the adoption of the latest technologies in our solutions, bringing in thought leadership to guide clients on complex data management problems, and driving business performance. You will work with the leadership team to bring subject matter expertise in areas such as Big Data, ETL, Reporting, CRM, Data Warehousing, MDM, DevOps, Software Development, etc. We are seeking an individual who not only possesses the requisite expertise but also thrives in the dynamic landscape of a fast-paced global firm.

What You'll Do
- Lead end-to-end data management solution projects for multiple clients across data engineering and BI technologies.
- Create project management plans and ensure adherence to project timelines.
- Integrate multiple data sources into one visualization to tell a story.
- Interact with customers to understand their business problems and provide best-in-class analytics solutions.
- Interact with Data Platform leaders and understand data flows that integrate into Tableau/analytics.
- Understand data governance, quality, and security, and integrate analytics with these enterprise platforms.
- Interact with UX/UI global functions and design best-in-class visualizations for customers, harnessing all product capabilities.

Must have
- 7-10 years of data warehousing and data engineering experience.
- Experience interacting with Life Science clients directly, discussing requirements, and stakeholder management.
- Experience in requirement gathering and designing enterprise warehouse solutions from scratch.
- Hands-on experience with ETL tools like ADF, Databricks, and Informatica; experience with data pipeline and workflow management tools (Azkaban, Luigi, Airflow, etc.); experience with data warehouses (SQL/NoSQL, Amazon Redshift, Snowflake, Apache Hive, HDFS, etc.).
- BI tools knowledge and experience leading the implementation of dashboards.
- Deep understanding of data governance and data quality management frameworks.
- Strong communication and presentation skills with a strong problem-solving attitude.
- Excellent analytical, problem-solving, and debugging skills, with a strong ability to quickly learn and comprehend business processes and problems to effectively develop technical solutions to their requirements.

Skills: MDM, SQL, HDFS, data warehousing, big data, DevOps, cloud, Amazon Redshift, Snowflake, pharmaceutical consulting, data management, Apache Hive, Azure, reporting, problem-solving, Luigi, Informatica, analytical skills, presentation skills, data governance, ADF, data engineering, CRM, Databricks, BI technologies, Airflow, team management, business technology, AWS, Azkaban, software development, ETL, client management, data quality management, life science
Posted 1 day ago
7.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About ProcDNA
ProcDNA is a global consulting firm. We fuse design thinking with cutting-edge technology to create game-changing Commercial Analytics and Technology solutions for our clients. We're a passionate team of 275+ across 6 offices, all growing and learning together since our launch during the pandemic. Here, you won't be stuck in a cubicle; you'll be out in the open water, shaping the future with brilliant minds. At ProcDNA, innovation isn't just encouraged; it's ingrained in our DNA. Ready to join our epic growth journey?

What We Are Looking For
You'll be driving the adoption of the latest technologies in our solutions, bringing in thought leadership to guide clients on complex data management problems, and driving business performance. You will work with the leadership team to bring subject matter expertise in areas such as Big Data, ETL, Reporting, CRM, Data Warehousing, MDM, DevOps, Software Development, etc. We are seeking an individual who not only possesses the requisite expertise but also thrives in the dynamic landscape of a fast-paced global firm.

What You'll Do
- Lead end-to-end data management solution projects for multiple clients across data engineering and BI technologies.
- Create project management plans and ensure adherence to project timelines.
- Integrate multiple data sources into one visualization to tell a story.
- Interact with customers to understand their business problems and provide best-in-class analytics solutions.
- Interact with Data Platform leaders and understand data flows that integrate into Tableau/analytics.
- Understand data governance, quality, and security, and integrate analytics with these enterprise platforms.
- Interact with UX/UI global functions and design best-in-class visualizations for customers, harnessing all product capabilities.

Must have
- 7-10 years of data warehousing and data engineering experience.
- Experience interacting with Life Science clients directly, discussing requirements, and stakeholder management.
- Experience in requirement gathering and designing enterprise warehouse solutions from scratch.
- Hands-on experience with ETL tools like ADF, Databricks, and Informatica; experience with data pipeline and workflow management tools (Azkaban, Luigi, Airflow, etc.); experience with data warehouses (SQL/NoSQL, Amazon Redshift, Snowflake, Apache Hive, HDFS, etc.).
- BI tools knowledge and experience leading the implementation of dashboards.
- Deep understanding of data governance and data quality management frameworks.
- Strong communication and presentation skills with a strong problem-solving attitude.
- Excellent analytical, problem-solving, and debugging skills, with a strong ability to quickly learn and comprehend business processes and problems to effectively develop technical solutions to their requirements.

Skills: MDM, SQL, HDFS, data warehousing, big data, DevOps, cloud, Amazon Redshift, Snowflake, pharmaceutical consulting, data management, Apache Hive, Azure, reporting, problem-solving, Luigi, Informatica, analytical skills, presentation skills, data governance, ADF, data engineering, CRM, Databricks, BI technologies, Airflow, team management, business technology, AWS, Azkaban, software development, ETL, client management, data quality management, life science
Posted 1 day ago
7.0 - 12.0 years
15 - 25 Lacs
Hyderabad, Bengaluru, Mumbai (All Areas)
Hybrid
Job Summary:
We are seeking a highly motivated and experienced Senior Data Engineer to join our team. This role requires a deep curiosity about our business and a passion for technology and innovation. You will be responsible for designing and developing robust, scalable data engineering solutions that drive our business intelligence and data-driven decision-making processes. If you thrive in a dynamic environment and have a strong desire to deliver top-notch data solutions, we want to hear from you.

Key Responsibilities:
- Collaborate with agile teams to design and develop cutting-edge data engineering solutions.
- Build and maintain distributed, low-latency, and reliable data pipelines, ensuring high availability and timely delivery of data.
- Design and implement optimized data engineering solutions for Big Data workloads to handle increasing data volumes and complexities.
- Develop high-performance real-time data ingestion solutions for streaming workloads.
- Adhere to best practices and established design patterns across all data engineering initiatives.
- Ensure code quality through elegant design, efficient coding, and performance optimization.
- Focus on data quality and consistency by implementing monitoring processes and systems.
- Produce detailed design and test documentation, including Data Flow Diagrams, Technical Design Specs, and Source-to-Target Mapping documents.
- Perform data analysis to troubleshoot and resolve data-related issues.
- Automate data engineering pipelines and data validation processes to eliminate manual interventions.
- Implement data security and privacy measures, including access controls, key management, and encryption techniques.
- Stay updated on technology trends, experimenting with new tools and educating team members.
- Collaborate with analytics and business teams to improve data models and enhance data accessibility.
- Communicate effectively with both technical and non-technical stakeholders.

Qualifications:
- Education: Bachelor's degree in Computer Science, Computer Engineering, or a related field.
- Experience: Minimum of 5+ years in architecting, designing, and building data engineering solutions and data platforms.
- Proven experience building Lakehouses or Data Warehouses on platforms like Databricks or Snowflake.
- Expertise in designing and building highly optimized batch/streaming data pipelines using Databricks.
- Proficiency with data acquisition and transformation tools such as Fivetran and dbt.
- Strong experience building efficient data engineering pipelines using Python and PySpark.
- Experience with distributed data processing frameworks such as Apache Hadoop, Apache Spark, or Flink.
- Familiarity with real-time data stream processing using tools like Apache Kafka, Kinesis, or Spark Structured Streaming.
- Experience with various AWS services, including S3, EC2, EMR, Lambda, RDS, DynamoDB, Redshift, and Glue Catalog.
- Expertise in advanced SQL programming and performance tuning.

Key Skills:
- Strong problem-solving abilities and perseverance in the face of ambiguity.
- Excellent emotional intelligence and interpersonal skills.
- Ability to build and maintain productive relationships with internal and external stakeholders.
- A self-starter mentality with a focus on growth and quick learning.
- Passion for operational products and creating outstanding employee experiences.
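To illustrate the real-time ingestion pattern the qualifications name (Kafka plus Spark Structured Streaming), here is a minimal PySpark sketch; the broker address, topic, and S3 paths are hypothetical, and the spark-sql-kafka connector package is assumed to be on the classpath.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("stream_ingest").getOrCreate()

# Read a stream of events from a hypothetical Kafka topic
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "orders")                     # hypothetical topic
    .load()
)

# Kafka delivers key/value as binary; cast to strings for downstream parsing
parsed = events.select(
    F.col("key").cast("string"),
    F.col("value").cast("string").alias("payload"),
    "timestamp",
)

# Micro-batch sink: append parquet files with a checkpoint for exactly-once recovery
query = (
    parsed.writeStream.format("parquet")
    .option("path", "s3://example-bucket/streams/orders/")
    .option("checkpointLocation", "s3://example-bucket/checkpoints/orders/")
    .trigger(processingTime="1 minute")
    .start()
)
query.awaitTermination()
```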
Posted 1 day ago
5.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
Role: Data Engineer
Location: Indore (Hybrid)
Experience required: 5+ years

Job Description:
Build and maintain data pipelines for ingesting and processing structured and unstructured data.
Ensure data accuracy and quality through validation checks and sanity reports (see the sketch below).
Improve data infrastructure by automating manual processes and scaling systems.
Support internal teams (Product, Delivery, Onboarding) with data issues and solutions.
Analyze data trends and provide insights to inform key business decisions.
Collaborate with program managers to resolve data issues and maintain clear documentation.

Must-Have Skills:
Proficiency in SQL, Python (Pandas, NumPy), and R
Experience with ETL tools (e.g., Apache NiFi, Talend, AWS Glue)
Cloud experience with AWS (S3, Redshift, EMR, Athena, RDS)
Strong understanding of data modeling, warehousing, and data validation
Familiarity with data visualization tools (Tableau, Power BI, Looker)
Experience with Apache Airflow, Kubernetes, Terraform, Docker
Knowledge of data lake architectures, APIs, and custom data formats (JSON, XML, YAML)
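The "validation checks and sanity reports" duty might look like the following small Pandas sketch, which computes a few batch-level quality metrics before load. The column names and rules are hypothetical, not a specific client's data contract.

```python
# A minimal sketch of a pre-load sanity check; column names and rules are
# hypothetical, not a specific client's data contract.
import pandas as pd


def sanity_report(df: pd.DataFrame) -> dict:
    """Summarize basic data-quality metrics for a batch before it is loaded."""
    return {
        "row_count": len(df),
        "duplicate_ids": int(df["id"].duplicated().sum()),
        "null_amounts": int(df["amount"].isna().sum()),
        "negative_amounts": int((df["amount"] < 0).sum()),
    }


batch = pd.DataFrame({"id": [1, 2, 2, 4], "amount": [10.0, None, 5.0, -3.0]})
report = sanity_report(batch)
print(report)
# {'row_count': 4, 'duplicate_ids': 1, 'null_amounts': 1, 'negative_amounts': 1}

# Gate the load on the report so bad batches never reach downstream consumers.
passes = report["duplicate_ids"] == 0 and report["null_amounts"] == 0
print("batch passes:", passes)  # False for this deliberately flawed sample
```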
Posted 1 day ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Company Description
Experian is a global data and technology company, powering opportunities for people and businesses around the world. We help to redefine lending practices, uncover and prevent fraud, simplify healthcare, create marketing solutions, and gain deeper insights into the automotive market, all using our unique combination of data, analytics and software. We also assist millions of people to realize their financial goals and help them save time and money. We operate across a range of markets, from financial services to healthcare, automotive, agribusiness, insurance, and many more industry segments. We invest in people and new advanced technologies to unlock the power of data. As a FTSE 100 Index company listed on the London Stock Exchange (EXPN), we have a team of 22,500 people across 32 countries. Our corporate headquarters are in Dublin, Ireland. Learn more at experianplc.com.

Job Description
Develop client-facing standardized reports with Business Intelligence tools such as Tableau.
Perform data profiling on source data with minimal documentation.
Independently troubleshoot data, perform detailed data analyses, and develop complex SQL code.
Write secure, stable, testable, and maintainable Python code with minimal defects.
Perform root cause analysis, propose solutions, and take ownership of the next steps for their resolution.
Create and maintain report specifications and process documentation as part of the required data deliverables.
Write queries to pull, summarize, and analyze data from various data sets and platforms (see the sketch after this posting).
Collaborate with Engineering teams to discover and leverage data being introduced into the environment.
Serve as liaison with business and technical teams to achieve project objectives, delivering cross-functional reporting solutions.
Multitask and prioritize an evolving workload in a fast-paced environment.
Provide on-call production support.

Qualifications
Minimum 7 years of experience in Tableau development and support
2 years of experience in Business Objects report development and support
2 years of experience in Tableau Server administration
3 years of experience with the AWS data ecosystem (Redshift, S3, etc.)
2 years of experience with Python
3 years of experience in an Agile environment
Experience with MWAA and Sigma is a plus
Excellent customer-facing communication skills between business partners and technical teams
Highly motivated self-starter, detail-oriented, and able to work independently to formulate innovative solutions
Education: BS degree or higher in MIS or engineering fields

Additional Information
Our uniqueness is that we celebrate yours. Experian's culture and people are important differentiators. We take our people agenda very seriously and focus on what matters: DEI, work/life balance, development, authenticity, collaboration, wellness, reward & recognition, volunteering... the list goes on. Experian's people-first approach is award-winning: World's Best Workplaces™ 2024 (Fortune Top 25), Great Place To Work™ in 24 countries, and Glassdoor Best Places to Work 2024, to name a few. Check out Experian Life on social or our Careers Site to understand why. Experian is proud to be an Equal Opportunity and Affirmative Action employer. Innovation is an important part of Experian's DNA and practices, and our diverse workforce drives our success.
Everyone can succeed at Experian and bring their whole self to work, irrespective of their gender, ethnicity, religion, colour, sexuality, physical ability or age. If you have a disability or special need that requires accommodation, please let us know at the earliest opportunity.
Experian Careers - Creating a better tomorrow together
Find out what it's like to work for Experian by clicking here.
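As a rough illustration of the "write queries to pull, summarize, and analyze data" responsibility in the posting above, here is a sketch using psycopg2 against a Redshift cluster (Redshift speaks the Postgres wire protocol). The host, credentials, and sales.orders table are hypothetical placeholders.

```python
# A minimal sketch, assuming psycopg2 and network access to a Redshift cluster
# (Redshift is Postgres-compatible). Host, credentials, and the sales.orders
# table are hypothetical placeholders.
import psycopg2

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",  # hypothetical
    port=5439,
    dbname="analytics",
    user="report_user",
    password="***",  # in practice, fetch from a secrets manager
)

summary_sql = """
    SELECT region, COUNT(*) AS orders, SUM(amount) AS revenue
    FROM sales.orders
    WHERE order_date >= %s
    GROUP BY region
    ORDER BY revenue DESC;
"""

# Parameterized query: psycopg2 substitutes %s safely, avoiding SQL injection.
with conn, conn.cursor() as cur:
    cur.execute(summary_sql, ("2024-01-01",))
    for region, orders, revenue in cur.fetchall():
        print(f"{region}: {orders} orders, {revenue} revenue")
conn.close()
```

A summary query like this would typically feed a Tableau or Sigma data source rather than being run ad hoc, but the pull-summarize-present loop is the same.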
Posted 1 day ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Company Description
Experian is a global data and technology company, powering opportunities for people and businesses around the world. We help to redefine lending practices, uncover and prevent fraud, simplify healthcare, create marketing solutions, and gain deeper insights into the automotive market, all using our unique combination of data, analytics and software. We also assist millions of people to realize their financial goals and help them save time and money. We operate across a range of markets, from financial services to healthcare, automotive, agribusiness, insurance, and many more industry segments. We invest in people and new advanced technologies to unlock the power of data. As a FTSE 100 Index company listed on the London Stock Exchange (EXPN), we have a team of 22,500 people across 32 countries. Our corporate headquarters are in Dublin, Ireland. Learn more at experianplc.com.

Job Description
Develop client-facing standardized reports with Business Intelligence tools such as Sigma and Tableau.
Perform data profiling on source data with minimal documentation.
Independently troubleshoot data, perform detailed data analyses, and develop complex SQL code.
Write secure, stable, testable, and maintainable Python code with minimal defects.
Perform root cause analysis, propose solutions, and take ownership of the next steps for their resolution.
Create and maintain report specifications and process documentation as part of the required data deliverables.
Write queries to pull, summarize, and analyze data from various data sets and platforms.
Collaborate with Engineering teams to discover and leverage data being introduced into the environment.
Serve as liaison with business and technical teams to achieve project objectives, delivering cross-functional reporting solutions.
Multitask and prioritize an evolving workload in a fast-paced environment.
Provide on-call production support.

Qualifications
Minimum 7 years of experience in BI visualization development and support
2 years of experience in Sigma report development and support
2 years of experience in Tableau Server administration
3 years of experience with the AWS data ecosystem (Redshift, S3, etc.)
2 years of experience with Python
3 years of experience in an Agile environment
Experience with MWAA and Business Objects is a plus
Excellent customer-facing communication skills between business partners and technical teams
Highly motivated self-starter, detail-oriented, and able to work independently to formulate innovative solutions
Education: BS degree or higher in MIS or engineering fields

Additional Information
Our uniqueness is that we celebrate yours. Experian's culture and people are important differentiators. We take our people agenda very seriously and focus on what matters: DEI, work/life balance, development, authenticity, collaboration, wellness, reward & recognition, volunteering... the list goes on. Experian's people-first approach is award-winning: World's Best Workplaces™ 2024 (Fortune Top 25), Great Place To Work™ in 24 countries, and Glassdoor Best Places to Work 2024, to name a few. Check out Experian Life on social or our Careers Site to understand why. Experian is proud to be an Equal Opportunity and Affirmative Action employer. Innovation is an important part of Experian's DNA and practices, and our diverse workforce drives our success. Everyone can succeed at Experian and bring their whole self to work, irrespective of their gender, ethnicity, religion, colour, sexuality, physical ability or age. If you have a disability or special need that requires accommodation, please let us know at the earliest opportunity.
Experian Careers - Creating a better tomorrow together
Find out what it's like to work for Experian by clicking here.
Posted 1 day ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Must have
Experience working on the AWS platform
Sound knowledge of AWS services such as EC2, S3, Lambda, Glue, and CloudWatch
Experience with ETL services (Glue jobs, Redshift, Athena)
Good knowledge of Airflow and PySpark
Experience with DB queries (basic joins)
Good understanding of ETL processes
Very good knowledge of the production support process: L1, L2, L3, and SLAs
Ability to analyze logs, configuration, and data to troubleshoot (see the sketch below)
Good knowledge of vulnerability and patch management resolution
Release support: sanity checks after patches, and coordination with multiple stakeholders for major releases
Minor enhancements, following the change management process
Good knowledge of preparation and support for production deployment

Nice to have
Strong communication skills
Openness to working all three shifts
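For the log-analysis duty above, a support engineer might triage errors with CloudWatch Logs Insights via boto3, as in this sketch. The log group name and filter pattern are hypothetical, and AWS credentials are assumed to be configured in the environment.

```python
# A minimal sketch, assuming boto3 with AWS credentials already configured.
# The log group name and error pattern are hypothetical.
import time

import boto3

logs = boto3.client("logs")

query_id = logs.start_query(
    logGroupName="/aws-glue/jobs/error",  # hypothetical log group
    startTime=int(time.time()) - 3600,    # look back one hour
    endTime=int(time.time()),
    queryString=(
        "fields @timestamp, @message "
        "| filter @message like /ERROR/ "
        "| sort @timestamp desc | limit 20"
    ),
)["queryId"]

# Logs Insights queries are asynchronous: poll until the query settles.
while True:
    result = logs.get_query_results(queryId=query_id)
    if result["status"] in ("Complete", "Failed", "Cancelled"):
        break
    time.sleep(1)

for row in result.get("results", []):
    print({field["field"]: field["value"] for field in row})
```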
Posted 1 day ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About The Role
Grade Level (for internal use): 09

The Team
Are you ready to dive into the world of data and uncover insights that shape global commodity markets? We're looking for a passionate BI Developer to join our Business Intelligence team within the Commodity Insights division at S&P Global. At S&P Global, we are on a mission to harness the power of data to unlock insights that propel our business forward. We believe in innovation, collaboration, and the relentless pursuit of excellence. Join our dynamic team and be a part of a culture that celebrates creativity and encourages you to push the boundaries of what's possible.

Key Responsibilities
Unlocking the Power of Data
Collaborate on the end-to-end data journey, helping collect, cleanse, and transform diverse data sources into actionable insights that shape business strategies for functional leaders.
Work alongside senior BI professionals to build powerful ETL processes, ensuring data quality, consistency, and accessibility.

Crafting Visual Storytelling
Develop eye-catching, impactful dashboards and reports that tell the story of commodity trends, prices, and global market dynamics.
Bring data to life for stakeholders across the company, including executive teams, analysts, and developers, by helping to create visually compelling and interactive reporting tools.
Mentor and train users on dashboard usage for efficient utilization of insights.

Becoming a Data Detective
Dive deep into commodities data to uncover trends, patterns, and hidden insights that influence critical decisions in real time.
Demonstrate strong analytical skills to swiftly grasp business needs and translate them into actionable insights.
Collaborate with stakeholders to define key metrics and KPIs, and contribute to data-driven decisions that shape the organization's direction.

Engaging with Strategic Minds
Work with cross-functional teams within business operations to turn complex business challenges into innovative data solutions.
Gather, refine, and translate business requirements into insightful reports and dashboards that push our BI team to new heights.
Provide ongoing support to cross-functional teams, addressing issues and adapting to changing business processes.

Basic Qualifications
3+ years of professional experience in BI projects, focusing on dashboard development using Power BI or similar tools and deploying them on their respective online platforms for easy access.
Proficiency in working with various databases such as Redshift, Oracle, and Databricks, using SQL for data manipulation and implementing ETL processes for BI dashboards.
Ability to identify meaningful patterns and trends in data to provide valuable insights for business decision-making.
Skill in requirement gathering and developing BI solutions.
A strong background in Power BI and Power Platform tools such as Power Automate/Apps, plus intermediate to advanced proficiency in Python, is preferred.
Solid understanding of data modeling techniques tailored to problem statements.
Familiarity with cloud platforms (e.g., Azure, AWS) and data warehousing.
Exposure to GenAI concepts and tools such as ChatGPT.
Experience with Agile project implementation methods.
Excellent written and verbal communication skills.
Ability to self-start and succeed in a fast-paced environment.

Additional/Preferred Qualifications
Knowledge of Generative AI, Microsoft Copilot, and Microsoft Fabric is a plus.
Ability to write complex SQL queries or enhance the performance of existing ETL pipelines is a must.
Familiarity with Azure DevOps is an added advantage.

Shift Timings: 1 PM-10 PM IST (flexibility required)

About S&P Global Commodity Insights
At S&P Global Commodity Insights, our complete view of global energy and commodities markets enables our customers to make decisions with conviction and create long-term, sustainable value. We're a trusted connector that brings together thought leaders, market participants, governments, and regulators to co-create solutions that lead to progress. Vital to navigating the Energy Transition, S&P Global Commodity Insights' coverage includes oil and gas, power, chemicals, metals, agriculture and shipping.
S&P Global Commodity Insights is a division of S&P Global (NYSE: SPGI). S&P Global is the world's foremost provider of credit ratings, benchmarks, analytics and workflow solutions in the global capital, commodity and automotive markets. With every one of our offerings, we help many of the world's leading organizations navigate the economic landscape so they can plan for tomorrow, today. For more information, visit http://www.spglobal.com/commodity-insights.

What's In It For You?
Our Purpose
Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology - the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People
We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability, to analyzing energy transition across the supply chain, to building workflow solutions that make it easy to tap into insight and apply it, we are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values
Integrity, Discovery, Partnership
At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits
We take care of you, so you can take care of business. We care about our people. That's why we provide everything you, and your career, need to thrive at S&P Global. Our benefits include:
Health & Wellness: Health care coverage designed for the mind and body.
Flexible Downtime: Generous time off helps keep you energized for your time on.
Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.
For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries

Global Hiring And Opportunity At S&P Global
At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Recruitment Fraud Alert
If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, "pre-employment training" or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here.

Equal Opportunity Employer
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.
US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.1 - Middle Professional Tier I (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning)
Job ID: 312656
Posted On: 2025-06-26
Location: Hyderabad, Telangana, India
Posted 1 day ago