
344 Athena Jobs - Page 4

JobPe aggregates results for easy application access, but you actually apply on the job portal directly.

1.0 - 2.0 years

6 - 10 Lacs

Mumbai, Hyderabad, Chennai

Work from Office

Your Role
- Work on Enterprise Data Management Consolidation (EDMCS), Enterprise Profitability & Cost Management Cloud Services (EPCM), and Oracle Integration Cloud (OIC).
- Deliver full life cycle Oracle EPM Cloud implementations.
- Create forms, OIC integrations, and complex business rules.
- Understand dependencies and interrelationships between the various components of Oracle EPM Cloud.
- Keep abreast of the Oracle EPM roadmap and key functionality to identify opportunities to enhance current processes within the entire Financials ecosystem.
- Collaborate with FP&A to facilitate the planning, forecasting, and reporting process for the organization.
- Create and maintain system documentation, both functional and technical.

Your Profile
- Experience implementing EDMCS modules.
- Proven ability to collaborate with internal clients in an agile manner, leveraging design thinking approaches.
- Experience with Python and AWS Cloud (Lambda, Step Functions, EventBridge, etc.) is preferred.

What you'll love about Capgemini
You can shape your career with us. We offer a range of career paths and internal opportunities within the Capgemini group, along with personalized career guidance from our leaders. You will get comprehensive wellness benefits including health checks, telemedicine, insurance with top-ups, elder care, partner coverage, and new-parent support via flexible work. You will have the opportunity to learn on one of the industry's largest digital learning platforms, with access to 250,000+ courses and numerous certifications.

Location: Hyderabad, Chennai, Mumbai, Pune, Bengaluru

Posted 1 week ago

Apply

5.0 - 10.0 years

1 - 5 Lacs

Bengaluru

Work from Office

Job Title: AWS Data Engineer
Experience: 5-10 Years
Location: Bangalore

Technical Skills:
- 5+ years of experience as an AWS Data Engineer: AWS S3, Glue Catalog, Glue Crawler, Glue ETL, Athena.
- Write Glue ETL jobs to convert data in AWS RDS for SQL Server and Oracle DB to Parquet format in S3.
- Execute Glue crawlers to catalog S3 files, creating a catalog for easier querying.
- Create SQL queries in Athena and define data lifecycle management for S3 files.
- Strong experience in developing, debugging, and optimizing Glue ETL jobs using PySpark or Glue Studio.
- Ability to connect Glue ETL jobs to AWS RDS (SQL Server and Oracle) for data extraction and write transformed data into Parquet format in S3.
- Proficiency in setting up and managing Glue crawlers to catalog data in S3.
- Deep understanding of S3 architecture and best practices for storing large datasets, including partitioning and organizing data for efficient querying.
- Knowledge of the advantages of the Parquet file format for optimized storage and querying.
- Expertise in creating and managing the AWS Glue Data Catalog to enable structured and schema-aware querying of data in S3.
- Experience with Amazon Athena for writing complex SQL queries and optimizing query performance, including creating views or transformations for business use cases.
- Knowledge of securing data in S3 using IAM policies, S3 bucket policies, and KMS encryption.
- Understanding of regulatory requirements (e.g., GDPR) and implementing secure data handling practices.

Non-Technical Skills:
- Good team player with effective interpersonal, team-building, and communication skills.
- Ability to communicate complex technology to a non-technical audience in a simple and precise manner.
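The listing's core workflow (a Glue ETL job converting RDS data to Parquet in S3, then cataloging it for Athena) can be sketched briefly. The following is a minimal, hedged illustration of such a Glue job; the catalog database, table, and bucket names are hypothetical placeholders, not details from the posting.

```python
# Hedged sketch: a minimal AWS Glue job that reads an RDS SQL Server table
# (registered in the Glue Data Catalog) and writes partitioned Parquet to S3.
import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from a catalog entry pointing at the RDS table (names are hypothetical)
src = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db",
    table_name="dbo_orders",
)

# Write Parquet partitioned by order_date so Athena can prune partitions
glue_context.write_dynamic_frame.from_options(
    frame=src,
    connection_type="s3",
    connection_options={"path": "s3://example-data-lake/orders/",
                        "partitionKeys": ["order_date"]},
    format="parquet",
)
job.commit()
```

In a setup like this, a Glue crawler (or the job itself) would then register the Parquet output in the Data Catalog so Athena can query it directly.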

Posted 1 week ago

Apply

5.0 - 10.0 years

3 - 7 Lacs

Bengaluru

Work from Office

Job Title: EMR/Spark SME
Experience: 5-10 Years
Location: Bangalore

Technical Skills:
- 5+ years of experience in big data technologies with hands-on expertise in AWS EMR and Apache Spark.
- Proficiency in Spark Core, Spark SQL, and Spark Streaming for large-scale data processing.
- Strong experience with data formats (Parquet, Avro, JSON) and data storage solutions (Amazon S3, HDFS).
- Solid understanding of distributed systems architecture and cluster resource management (YARN).
- Familiarity with AWS services (S3, IAM, Lambda, Glue, Redshift, Athena).
- Experience in scripting and programming languages such as Python, Scala, and Java.
- Knowledge of containerization and orchestration (Docker, Kubernetes) is a plus.

Responsibilities:
- Architect and develop scalable data processing solutions using AWS EMR and Apache Spark.
- Optimize and tune Spark jobs for performance and cost efficiency on EMR clusters.
- Monitor, troubleshoot, and resolve issues related to EMR and Spark workloads.
- Implement best practices for cluster management, data partitioning, and job execution.
- Collaborate with data engineering and analytics teams to integrate Spark solutions with broader data ecosystems (S3, RDS, Redshift, Glue, etc.).
- Automate deployments and cluster management using infrastructure-as-code tools like CloudFormation, Terraform, and CI/CD pipelines.
- Ensure data security and governance in EMR and Spark environments in compliance with company policies.
- Provide technical leadership and mentorship to junior engineers and data analysts.
- Stay current with new AWS EMR features and Spark versions to recommend improvements and upgrades.

Requirements and Skills:
- Performance tuning and optimization of Spark jobs.
- Problem-solving skills with the ability to diagnose and resolve complex technical issues.
- Strong experience with version control systems (Git) and CI/CD pipelines.
- Excellent communication skills to explain technical concepts to both technical and non-technical audiences.

Qualification: B.Tech, BE, BCA, MCA, M.Tech, or an equivalent technical degree from a reputed college.
Certifications: AWS Certified Solutions Architect (Associate/Professional), AWS Certified Data Analytics Specialty.
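As context for the Spark tuning duties described above, here is a minimal, hedged sketch of session settings commonly adjusted on EMR; the option values, bucket, and column names are illustrative assumptions, not recommendations from the posting.

```python
# Hedged sketch of Spark tuning knobs often revisited on EMR clusters.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("emr-tuning-example")
    # Size shuffle partitions roughly to total executor cores
    .config("spark.sql.shuffle.partitions", "200")
    # Let adaptive query execution coalesce small shuffle partitions
    .config("spark.sql.adaptive.enabled", "true")
    # Kryo is usually faster than Java serialization for shuffles
    .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    .getOrCreate()
)

# Reading Parquet from S3 via EMRFS; path and column are hypothetical
df = spark.read.parquet("s3://example-data-lake/orders/")
df.groupBy("order_date").count().show()
```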

Posted 1 week ago

Apply

4.0 - 8.0 years

5 - 15 Lacs

Thiruvananthapuram

Work from Office

Job Title: Data Associate - Cloud Data Engineering
Experience: 4+ Years
Employment Type: Full-Time
Industry: Information Technology / Data Engineering / Cloud Platforms

Job Summary: We are seeking a highly skilled and experienced Senior Data Associate to join our data engineering team. The ideal candidate will have a strong background in cloud data platforms, big data processing, and enterprise data systems, with hands-on experience across both AWS and Azure ecosystems. This role involves building and optimizing data pipelines, managing large-scale data lakes and warehouses, and enabling advanced analytics and reporting.

Key Responsibilities:
- Design, develop, and maintain scalable data pipelines using AWS Glue, PySpark, and Azure Data Factory.
- Work with AWS Redshift, Athena, Azure Synapse, and Databricks to support data warehousing and analytics solutions.
- Integrate and manage data across MongoDB, Oracle, and cloud-native storage like Azure Data Lake and S3.
- Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver high-quality datasets.
- Implement data quality checks, monitoring, and governance practices (see the sketch after this listing).
- Optimize data workflows for performance, scalability, and cost-efficiency.
- Support data migration and modernization initiatives across cloud platforms.
- Document data flows, architecture, and technical specifications.

Required Skills & Qualifications:
- 8+ years of experience in data engineering, data integration, or related roles.
- Strong hands-on experience with AWS Redshift, Athena, Glue, and S3; Azure Data Lake, Synapse Analytics, and Databricks; PySpark for distributed data processing; and MongoDB and Oracle databases.
- Proficiency in SQL, Python, and data modeling.
- Experience with ETL/ELT design and implementation.
- Familiarity with data governance, security, and compliance standards.
- Strong problem-solving and communication skills.

Preferred Qualifications:
- Certifications in AWS (e.g., Data Analytics Specialty) or Azure (e.g., Azure Data Engineer Associate).
- Experience with CI/CD pipelines and DevOps for data workflows.
- Knowledge of data cataloging tools (e.g., AWS Glue Data Catalog, Azure Purview).
- Exposure to real-time data processing and streaming technologies.

Required Skills: Azure, AWS Redshift, Athena, Azure Data Lake
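One of the listed responsibilities is implementing data quality checks. A minimal, hedged PySpark sketch of such a check follows; the dataset path, key column, and failure threshold are hypothetical.

```python
# Hedged sketch: a simple null-rate gate of the kind data pipelines run
# before publishing a dataset. Path, column, and threshold are illustrative.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-check").getOrCreate()
df = spark.read.parquet("s3://example-data-lake/customers/")  # hypothetical path

total = df.count()
null_rate = df.filter(F.col("customer_id").isNull()).count() / max(total, 1)

# Fail the pipeline step if more than 1% of rows are missing the key
assert null_rate <= 0.01, f"customer_id null rate too high: {null_rate:.2%}"
```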

Posted 2 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

Navi Mumbai, Maharashtra

On-site

Seekify Global is looking for an experienced and motivated Data Catalog Engineer to join the Data Engineering team. The ideal candidate should have a significant background in designing and implementing metadata and data catalog solutions within AWS-centric data lake and data warehouse environments. As a Data Catalog Engineer at Seekify Global, you will play a crucial role in improving data discoverability, governance, and lineage across our enterprise data assets.

Key Responsibilities:
- Lead the end-to-end implementation of a data cataloging solution within AWS, preferably AWS Glue Data Catalog or third-party tools like Apache Atlas, Alation, or Collibra.
- Establish and manage metadata frameworks for structured and unstructured data assets in data lake and data warehouse environments.
- Integrate the data catalog with AWS-based storage solutions such as S3, Redshift, Athena, Glue, and EMR.
- Collaborate with data governance/BPRG/IT project teams to define metadata standards, data classifications, and stewardship processes.
- Develop automation scripts for catalog ingestion, lineage tracking, and metadata updates using Python, Lambda, PySpark, or Glue/EMR custom jobs.
- Work closely with data engineers, data architects, and analysts to ensure metadata is accurate, relevant, and up to date.
- Implement role-based access controls and ensure compliance with data privacy and regulatory standards.
- Create detailed documentation and conduct training/workshops for internal stakeholders on effectively utilizing the data catalog.

Required Skills and Qualifications:
- 7-8 years of experience in data engineering or metadata management roles.
- Proven expertise in implementing and managing data catalog solutions within AWS environments.
- Strong knowledge of AWS Glue, S3, Athena, Redshift, EMR, Data Catalog, and Lake Formation.
- Hands-on experience with metadata ingestion, data lineage, and classification processes.
- Proficiency in Python, SQL, and automation scripting for metadata pipelines.
- Familiarity with data governance and compliance standards (e.g., GDPR, RBI guidelines).
- Experience integrating with BI tools (e.g., Tableau, Power BI) and third-party catalog tools is a plus.
- Strong communication, problem-solving, and stakeholder management skills.

Preferred Qualifications:
- AWS certifications (e.g., AWS Certified Data Analytics, AWS Solutions Architect).
- Hands-on experience with data catalog tools like Alation, Collibra, Informatica EDC, or open-source alternatives.
- Exposure to data quality frameworks and stewardship practices.
- Knowledge of data migration with data catalogs and data marts is a plus.

This is a full-time position with the work location being in person.
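To illustrate the kind of catalog automation scripting the posting describes, here is a hedged boto3 sketch that scans Glue Data Catalog tables for columns missing descriptions; the database name is hypothetical, and this is only one example of a metadata-quality rule.

```python
# Hedged sketch: flag Glue Data Catalog columns that lack descriptions,
# a simple metadata-quality sweep. Database name is a placeholder.
import boto3

glue = boto3.client("glue")
paginator = glue.get_paginator("get_tables")

for page in paginator.paginate(DatabaseName="analytics_db"):  # hypothetical
    for table in page["TableList"]:
        columns = table.get("StorageDescriptor", {}).get("Columns", [])
        undocumented = [c["Name"] for c in columns if not c.get("Comment")]
        if undocumented:
            print(f"{table['Name']}: missing descriptions for {undocumented}")
```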

Posted 2 weeks ago

Apply

4.0 - 8.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

Join us as a Data Engineer at Barclays, where you will spearhead the evolution of our infrastructure and deployment pipelines, driving innovation and operational excellence. You will harness cutting-edge technology to build and manage robust, scalable, and secure infrastructure, ensuring seamless delivery of our digital solutions.

To be successful as a Data Engineer, you should have:
- Hands-on experience in PySpark and a strong knowledge of DataFrames, RDDs, and SparkSQL.
- Hands-on experience in developing, testing, and maintaining applications on AWS Cloud.
- A strong command of the AWS data analytics technology stack (Glue, S3, Lambda, Lake Formation, Athena).
- The ability to design and implement scalable and efficient data transformation/storage solutions using Snowflake.
- Experience in data ingestion to Snowflake for different storage formats such as Parquet, Iceberg, JSON, and CSV.
- Familiarity with using DBT (Data Build Tool) with Snowflake for ELT pipeline development.
- Advanced SQL and PL/SQL programming skills.
- Experience in building reusable components using Snowflake and AWS tools/technology (highly valued).
- Exposure to data governance or lineage tools such as Immuta and Alation (an added advantage).
- Knowledge of orchestration tools such as Apache Airflow or Snowflake Tasks (beneficial), and familiarity with the Ab Initio ETL tool (a plus).

Other highly valued skills include the ability to engage with stakeholders, elicit requirements/user stories, and translate requirements into ETL components; a good understanding of infrastructure setup and the ability to provide solutions individually or working with teams; knowledge of Data Marts and Data Warehousing concepts, along with good analytical and interpersonal skills; and experience implementing a cloud-based enterprise data warehouse across multiple data platforms, including Snowflake and NoSQL environments, to build a data movement strategy.

You may be assessed on key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking, and digital and technology, as well as job-specific technical skills. The role is based out of Chennai.

Purpose of the role: To build and maintain the systems that collect, store, process, and analyze data, such as data pipelines, data warehouses, and data lakes, ensuring that all data is accurate, accessible, and secure.

Accountabilities:
- Build and maintain data architectures and pipelines that enable the transfer and processing of durable, complete, and consistent data.
- Design and implement data warehouses and data lakes that manage the appropriate data volumes and velocity and adhere to the required security measures.
- Develop processing and analysis algorithms fit for the intended data complexity and volumes.
- Collaborate with data scientists to build and deploy machine learning models.

Analyst Expectations:
- Meet the needs of stakeholders/customers through specialist advice and support.
- Perform prescribed activities in a timely manner and to a high standard which will impact both the role itself and surrounding roles.
- Likely to have responsibility for specific processes within a team.
- Lead and supervise a team, guiding and supporting professional development, allocating work requirements, and coordinating team resources.
- Demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard.
- Manage own workload, take responsibility for the implementation of systems and processes within own work area, and participate in projects broader than the direct team.
- Execute work requirements as identified in processes and procedures, collaborating with and impacting on the work of closely related teams.
- Provide specialist advice and support pertaining to own work area.
- Take ownership for managing risk and strengthening controls in relation to the work you own or contribute to.
- Deliver work and areas of responsibility in line with relevant rules, regulations, and codes of conduct.
- Maintain and continually build an understanding of how all teams in the area contribute to the objectives of the broader sub-function, delivering impact on the work of collaborating teams.
- Continually develop awareness of the underlying principles and concepts on which the work within the area of responsibility is based, building upon administrative/operational expertise.
- Make judgements based on practice and previous experience.
- Assess the validity and applicability of previous or similar experiences and evaluate options under circumstances that are not covered by procedures.
- Communicate sensitive or difficult information to customers in areas related specifically to customer advice or day-to-day administrative requirements.
- Build relationships with stakeholders/customers to identify and address their needs.

All colleagues are expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence, and Stewardship, our moral compass, helping us do what we believe is right. They are also expected to demonstrate the Barclays Mindset to Empower, Challenge, and Drive, the operating manual for how we behave.
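As an illustration of the Parquet-to-Snowflake ingestion the role mentions, here is a hedged sketch using the Snowflake Python connector with COPY INTO; the account, credentials, stage, and table names are placeholders, not details from the posting.

```python
# Hedged sketch: loading staged Parquet files into a Snowflake table.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",   # hypothetical
    user="example_user",
    password="...",              # use a secrets manager in practice
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="RAW",
)

cur = conn.cursor()
# Map Parquet columns to table columns by name, case-insensitively
cur.execute("""
    COPY INTO raw.orders
    FROM @raw.orders_stage
    FILE_FORMAT = (TYPE = PARQUET)
    MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
""")
cur.close()
conn.close()
```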

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Delhi

On-site

We are looking for a highly motivated and enthusiastic Senior Data Scientist with 5-8 years of experience to join our dynamic team. The ideal candidate will have a strong background in AI/ML analytics and a passion for leveraging data to drive business insights and innovation.

Key responsibilities include developing and implementing machine learning models and algorithms, working closely with project stakeholders to understand requirements and translate them into deliverables, and utilizing statistical and machine learning techniques to analyze and interpret complex data sets. It is essential to stay updated with the latest advancements in AI/ML technologies and methodologies and to collaborate with cross-functional teams to support various AI/ML initiatives.

To qualify for this role, you should have a Bachelor's degree in Computer Science, Data Science, or a related field, and a strong understanding of machine learning, deep learning, and Generative AI concepts.

Preferred skills:
- Experience in machine learning techniques such as regression, classification, predictive modeling, clustering, and deep learning using Python.
- Experience with cloud infrastructure for AI/ML on AWS (SageMaker, QuickSight, Athena, Glue).
- Expertise in building enterprise-grade, secure data ingestion pipelines (ETL/ELT) for unstructured data.
- Proficiency in Python, TypeScript, NodeJS, and ReactJS, and frameworks such as pandas, NumPy, scikit-learn, OpenCV, and SciPy, plus Glue crawlers and ETL; experience with data visualization tools like Matplotlib, Seaborn, and QuickSight.
- Knowledge of deep learning frameworks such as TensorFlow, Keras, and PyTorch.
- Experience with version control systems like Git and CodeCommit.
- Strong knowledge and experience in Generative AI/LLM-based development, including key LLM model APIs (e.g., AWS Bedrock, Azure OpenAI) and LLM frameworks (e.g., LangChain, LlamaIndex).
- Proficiency in effective text chunking techniques and text embeddings.

Good to have: knowledge and experience in building knowledge graphs in production, and an understanding of multi-agent systems and their applications in complex problem-solving scenarios.

Pentair is an Equal Opportunity Employer that values diversity and believes that a diverse workforce contributes different perspectives and creative ideas, enabling continuous improvement.
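Since the posting calls out effective text chunking for LLM work, here is a minimal, hedged sketch of fixed-size chunking with overlap, a common preprocessing step before computing embeddings; the chunk and overlap sizes are illustrative.

```python
# Hedged sketch: fixed-size text chunking with overlap for embedding pipelines.
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping chunks so content that straddles a
    boundary still appears intact in a neighbouring chunk."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start : start + chunk_size])
        start += chunk_size - overlap
    return chunks

# Usage with placeholder document text
chunks = chunk_text("lorem ipsum " * 400)
print(len(chunks), "chunks")
```

Overlap keeps sentences that cross a chunk boundary intact in at least one chunk, which tends to improve retrieval quality; production systems often chunk on sentence or token boundaries instead of raw characters.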

Posted 2 weeks ago

Apply

10.0 - 14.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

We are currently looking to hire a Manager - Quality specializing in Medical Coding, with 10-12 years of experience, for a full-time position based in Hyderabad. As the Manager Quality - Medical Coding, you will draw on your experience in inpatient medical coding and collaborate with the Coding Education and Quality Coordinator to ensure proper on-the-job training for all staff under your supervision. You will be responsible for monitoring the competency and progress of new employees, providing timely and constructive feedback, and ensuring that work performance meets the required standards. Additionally, you will monitor productivity levels, assist in resolving day-to-day issues that may affect staff, and conduct regular update meetings to keep the team informed about departmental, hospital, market, and company changes or events.

The ideal candidate should have a good understanding of HIPAA and healthcare compliance standards. Proficiency in billing software such as Epic, Athena, and Kareo, and in QA tools, is also required for this role.

If you possess the necessary qualifications and experience, we encourage you to apply by sending your resume to suganya.mohan@yitrobc.net. Join our team to contribute to the field of medical coding and make a difference in the healthcare industry as part of a dynamic, growing organization focused on maintaining coding audit and compliance standards in the US healthcare sector.

Posted 2 weeks ago

Apply

3.0 - 8.0 years

4 - 8 Lacs

Hyderabad

Work from Office

Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must have skills: Data Engineering
Good to have skills: NA
Minimum 5 year(s) of experience is required.
Educational Qualification: 15 years full time education

Summary: In this role, you will work to increase the domain data coverage and adoption of the Data Platform by promoting a connected user experience through data. You will increase data literacy and trust by leading our Data Governance and Master Data Management initiatives, and you will contribute to the vision and roadmap of self-serve capabilities through the Data Platform. The senior data engineer develops data pipelines extracting and transforming data as governed assets into the data platform; improves system quality by identifying issues and common patterns and developing standard operating procedures; and enhances applications by identifying opportunities for improvement, making recommendations, and designing and implementing systems.

Roles and responsibilities:
- (MUST HAVE) Extensive experience with cloud data warehouses like Snowflake and AWS Athena, and SQL databases like PostgreSQL and MS SQL Server. Experience with NoSQL databases like AWS DynamoDB and Azure Cosmos is a plus.
- (MUST HAVE) Solid experience with, and a clear understanding of, DBT.
- (MUST HAVE) Experience working with AWS and/or Azure CI/CD DevOps technologies, and extensive debugging experience.
- Good understanding of data modeling, ETL, data curation, and big data performance tuning.
- Experience with data ingestion tools like Fivetran is a big plus.
- Experience with data quality and observability tools like Monte Carlo is a big plus.
- Experience working and integrating with an event bus like Pulsar is a big plus.
- Experience integrating with a data catalog like Atlan is a big plus.
- Experience with business intelligence tools like Power BI is a plus.
- An understanding of unit testing, test-driven development, functional testing, and performance testing.
- Knowledge of at least one shell scripting language.
- Ability to network with key contacts outside own area of expertise.
- Strong interpersonal, organizational, presentation, and facilitation skills; results-oriented and customer-focused.

Technical experience & professional attributes:
- Prepare technical design specifications based on functional requirements and analysis documents.
- Implement, test, maintain, and support software based on technical design specifications.
- Improve system quality by identifying issues and common patterns and developing standard operating procedures.
- Enhance applications by identifying opportunities for improvement, making recommendations, and designing and implementing systems.
- Maintain and improve existing codebases and peer review code changes.
- Liaise with colleagues to implement technical designs, investigating and using new technologies where relevant.
- Provide written knowledge transfer material.
- Review functional requirements, analysis, and design documents and provide feedback.
- Assist customer support with technical problems and questions.
- Ability to work independently with wide latitude for independent decision making.
- Experience leading the work of others and mentoring less experienced developers in the context of a project is a plus.
- Ability to listen to and understand information and communicate the same.
- Participate in architecture and code reviews, and lead or participate in other projects or duties as the need arises.

Education qualifications: Bachelor's degree in Computer Science, Information Systems, or a related field, or an equivalent combination of education and experience; a Master's degree is a plus. 5 or more years of extensive experience developing mission-critical and low-latency solutions, with at least 3 years of experience developing and debugging distributed systems and data pipelines in the cloud.

Additional Information: The Winning Way behaviors are what all employees need in order to meet the expectations of each other, our customers, and our partners:
- Communicate with Clarity: Be clear, concise, and actionable. Be relentlessly constructive. Seek and provide meaningful feedback.
- Act with Urgency: Adopt an agile mentality - frequent iterations, improved speed, resilience. 80/20 rule; better is the enemy of done. Don't spend hours when minutes are enough.
- Work with Purpose: Exhibit a "We Can" mindset. Results outweigh effort. Everyone understands how their role contributes. Set aside personal objectives for team results.
- Drive to Decision: Cut the swirl with defined deadlines and decision points. Be clear on individual accountability and decision authority. Be guided by a commitment to, and accountability for, customer outcomes.
- Own the Outcome: Defined milestones, commitments, and intended results. Assess your work in context; if you're unsure, ask. Demonstrate unwavering support for decisions.

The above statements are intended to describe the general nature and level of work being performed by individuals in this position. Other functions may be assigned, and management retains the right to add or change the duties at any time.

Qualification: 15 years full time education

Posted 2 weeks ago

Apply

8.0 - 13.0 years

0 - 1 Lacs

Chennai

Hybrid

Duties and Responsibilities:
- Lead the design and implementation of scalable, secure, and high-performance solutions for data-intensive applications.
- Collaborate with stakeholders, other product development groups, and software vendors to identify and define solutions for complex business and technical requirements.
- Develop and maintain cloud infrastructure using platforms such as AWS, Azure, or Google Cloud.
- Articulate technology solutions and explain the competitive advantages of various technology alternatives.
- Evangelize best practices to analytics teams.
- Ensure data security, privacy, and compliance with relevant regulations.
- Optimize cloud resources for cost-efficiency and performance.
- Lead the migration of on-premises data systems to the cloud.
- Implement data storage, processing, and analytics solutions using cloud-native services.
- Monitor and troubleshoot cloud infrastructure and data pipelines.
- Stay updated with the latest trends and best practices in cloud computing and data management.

Skills:
- 5+ years of hands-on design and development experience implementing data analytics applications using AWS services such as S3, Glue, AWS Step Functions, Kinesis, Lambda, Lake Formation, Athena, Elastic Container Service/Elastic Kubernetes Service, Elasticsearch, and Amazon EMR or Snowflake.
- Experience with AWS IoT services such as AWS IoT Greengrass, AWS IoT SiteWise, AWS IoT Core, and AWS IoT Events.
- Strong understanding of cloud architecture principles and best practices.
- Proficiency in designing network topology, endpoints, application registration, and network pairing.
- Well versed in access management in Azure or other clouds.
- Experience with containerization technologies like Docker and Kubernetes.
- Expertise in CI/CD pipelines and version control systems like Git.
- Excellent problem-solving skills and attention to detail.
- Strong communication and leadership skills.
- Ability to work collaboratively with cross-functional teams and stakeholders.
- Knowledge of security and compliance standards related to cloud data platforms.

Technical / Functional Skills:
- At least 3+ years of experience implementing the Amazon Web Services listed above.
- At least 3+ years of experience as a SAP BW developer.
- At least 3+ years of experience in Snowflake (or Redshift).
- At least 3+ years of experience as a data integration developer with Fivetran/HVR/DBT or Boomi (or Talend/Informatica).
- At least 2+ years of experience with Azure OpenAI, Azure AI Services, Microsoft Copilot Studio, Power BI, and Power Automate.
- Experience in the networking and security domain.

Domain Expertise: Experience with SDLC/Agile/Scrum/Kanban.

Project Experience:
- Hands-on experience in the end-to-end implementation of data analytics applications on AWS.
- Hands-on experience in the end-to-end implementation of SAP BW applications for FICO, Sales & Distribution, and Materials Management.
- Hands-on experience with Fivetran/HVR/Boomi developing data integration services with data from SAP, Salesforce, Workday, and other SaaS applications.
- Hands-on experience implementing Gen AI use cases using Azure services.
- Hands-on experience implementing advanced analytics use cases using Python/R.

Certifications: AWS Certified Solutions Architect - Professional

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

About Us
6thstreet.com is one of the largest omnichannel fashion & lifestyle destinations in the GCC, home to 1200+ international brands. The fashion-savvy destination offers collections from over 150 international fashion brands such as Dune London, ALDO, Naturalizer, Nine West, Charles & Keith, New Balance, Crocs, Birkenstock, Skechers, Levi's, Aeropostale, Garage, Nike, Adidas Originals, Rituals, and many more. The online fashion platform also provides free delivery, free returns, cash on delivery, and the option for click and collect.

Job Description
We are looking for a seasoned Data Engineer to design and manage data solutions. Expertise in SQL, Python, and AWS is essential. The role includes client communication, recommending modern data tools, and ensuring smooth data integration and visualization. Strong problem-solving and collaboration skills are crucial.

Responsibilities
- Understand and analyze client business requirements to support data solutions.
- Recommend suitable modern data stack tools based on client needs.
- Develop and maintain data pipelines, ETL processes, and data warehousing.
- Create and optimize data models for client reporting and analytics.
- Ensure seamless data integration and visualization with cross-functional teams.
- Communicate with clients for project updates and issue resolution.
- Stay updated on industry best practices and emerging technologies.

Skills Required
- 3-5 years in data engineering/analytics with a proven track record.
- Proficiency in SQL and Python for data manipulation and analysis; knowledge of PySpark is a plus.
- Experience with data warehouse platforms like Redshift and Google BigQuery.
- Experience with AWS services like S3, Glue, and Athena.
- Proficiency in Airflow.
- Familiarity with event tracking platforms like GA or Amplitude is a plus.
- Strong problem-solving skills and adaptability.
- Excellent communication skills and proactive client engagement.
- Ability to get things done, unblock yourself, and collaborate effectively with team members and clients.

Benefits
- Full-time role.
- Competitive salary + bonus.
- Company employee discounts across all brands.
- Medical & health insurance.
- Collaborative work environment.
- Good vibes work culture.

Posted 2 weeks ago

Apply

0.0 years

0 Lacs

Hyderabad

Work from Office

MEDICAL CODER / MEDICAL BILLER

Job Description: We are looking for a detail-oriented and proactive Eligibility Executive to manage insurance verification and benefits validation for patients in the revenue cycle process. The ideal candidate will have experience working with U.S. healthcare insurance systems, payer portals, and EHR platforms to ensure accurate eligibility checks and timely updates for claims processing.

Key Responsibilities:
- Verify patient insurance coverage and benefits through payer portals, IVR, or direct calls to insurance companies.
- Update and confirm insurance details in the practice management system or EHR platform accurately and in a timely manner.
- Identify policy limitations, deductibles, co-pays, and co-insurance information, and document them clearly for billing teams.
- Coordinate with patients and internal teams (billing, front desk, scheduling) to clarify eligibility-related concerns.
- Perform eligibility checks for scheduled appointments, procedures, and recurring services.
- Handle real-time and batch eligibility verifications for various insurance types, including commercial, Medicaid, Medicare, and TPA.
- Escalate discrepancies or inactive coverage to the concerned team and assist in resolving issues before claim submission.
- Maintain up-to-date knowledge of payer guidelines and insurance plan policies.
- Ensure strict adherence to HIPAA guidelines and maintain confidentiality of patient data.
- Meet assigned productivity and accuracy targets while following internal SOPs and compliance standards.

Preferred Skills & Tools:
- Experience with EHR/PM systems like eCW, NextGen, Athena, CMD.
- Familiarity with major U.S. insurance carriers and payer portals.
- Strong verbal and written communication skills.
- Basic knowledge of medical billing and coding is a plus.
- Ability to work in a fast-paced, detail-focused environment.

Qualifications: Any life science degree (BSc, MSc, B.Pharm, M.Pharm, BPT). Note: CPC certification preferred.

Shift & Work Details:
Shift Timing: Night Shift, 9:00 PM to 7:00 AM
Work Days: Monday to Friday
Gender: Male candidates only (due to night shift operational requirements)

Posted 2 weeks ago

Apply

1.0 - 4.0 years

2 - 5 Lacs

Hyderabad, Navi Mumbai, Chennai

Work from Office

Hiring for AR Callers & Prior Authorization Process - Hyderabad, Chennai & Mumbai

Role: AR Caller / Prior Authorization Executive
Experience: Minimum 1+ year in AR calling and the prior authorization process
Work Mode: Work from Office
Locations: Hyderabad | Chennai | Mumbai
Notice Period: Immediate joiners preferred (relieving letter not mandatory)
Shift: Night shift (US healthcare process)
Package: Up to 40,000 take-home, plus incentives and a 2-way cab facility
Qualification: Intermediate & above

Job Description: We are hiring experienced professionals in AR calling and prior authorization with a strong understanding of the US healthcare process. Candidates must have at least one year of relevant experience and be ready to work from the office locations listed above.

Perks:
- Competitive salary
- Performance-based incentives
- Cab facility
- Quick onboarding for immediate joiners

Interested candidates can share their resumes to:
Email: harshithaaxis5@gmail.com
Contact: HR Harshitha - 7207444236

Posted 2 weeks ago

Apply

2.0 - 7.0 years

4 - 9 Lacs

Chennai

Work from Office

Job Description: EHR Integration Specialist
Location: Chennai
Shift: Day shift

Education:
- Bachelor's degree in Computer Science, Information Technology, Health Informatics, or a related field.
- Advanced certifications in healthcare IT (e.g., HL7 Certification, FHIR Certification, EPIC/Cerner/Athena certifications) are highly desirable.

Experience:
- 3 to 5+ years of hands-on experience integrating Electronic Health Record (EHR) systems, with a strong focus on the EPIC, Cerner, and Athena platforms.
- Proven experience with HL7 (v2.x/v3), FHIR, and DICOM standards in real-world healthcare environments.
- Practical exposure to healthcare integration engines like Mirth Connect, Cloverleaf, Rhapsody, or InterSystems Ensemble.
- Familiarity with core EHR workflows such as clinical documentation, scheduling, orders/results processing, patient administration, and demographic data exchange.
- Experience working in cross-functional teams including IT, clinical, and vendor stakeholders.

Skills & Knowledge:
- Strong understanding of EPIC Bridges, Cerner Millennium/OpenEngine, and EPIC/Cerner/Athena API/webhook frameworks for data exchange and third-party system integrations.
- Solid grasp of HL7 v2.x message types (ADT, ORU, ORM, etc.), as well as FHIR resource mapping and implementation for patient-centric data workflows.
- Proficiency in developing, debugging, and supporting FHIR-based RESTful APIs and integration adapters.
- Ability to handle data transformation and mapping between heterogeneous healthcare systems.
- Knowledge of security and compliance standards in healthcare, such as HIPAA and HITECH, and data masking/encryption best practices.

Technical Skills:
- Hands-on experience with relational databases (SQL Server, MySQL, PostgreSQL) and NoSQL systems.
- Familiarity with web services (REST, SOAP), OAuth2, and other authentication mechanisms for secure API integration.
- Experience working in cloud-hosted healthcare environments (Azure, AWS HealthLake, Google Cloud for Healthcare).
- Scripting and automation skills (e.g., Python, JavaScript, shell scripting) for integration workflows and monitoring.

Personal Skills:
- Strong analytical and troubleshooting skills with a proactive approach to solving integration issues.
- Effective communication skills to interact with both technical teams and clinical/business stakeholders.
- Self-starter with the ability to work independently and collaboratively in a dynamic healthcare setting.
- High attention to detail, especially regarding data accuracy, system validation, and regulatory compliance.
- Agile and adaptable to rapidly evolving health IT landscapes and client requirements.
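For context on the FHIR-based RESTful API work this posting describes, here is a hedged sketch of fetching a FHIR R4 Patient resource; the endpoint, patient id, and token are hypothetical, and a real integration would obtain the token through an OAuth2 flow such as SMART on FHIR.

```python
# Hedged sketch: reading a FHIR R4 Patient resource over REST.
import requests

FHIR_BASE = "https://fhir.example-ehr.com/r4"  # hypothetical endpoint
token = "..."                                   # obtained via an OAuth2 flow

resp = requests.get(
    f"{FHIR_BASE}/Patient/12345",               # hypothetical patient id
    headers={
        "Authorization": f"Bearer {token}",
        "Accept": "application/fhir+json",      # standard FHIR media type
    },
    timeout=30,
)
resp.raise_for_status()
patient = resp.json()
print(patient["resourceType"], patient.get("id"))
```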

Posted 2 weeks ago

Apply

5.0 - 10.0 years

7 - 17 Lacs

Kolkata, Hyderabad, Pune

Work from Office

Airflow Data Engineer on the AWS platform

Job Title: Apache Airflow Data Engineer ("Role" as per TCS Role Master)
- 4-8 years of experience in AWS, Apache Airflow (on the Astronomer platform), Python, PySpark, and SQL.
- Good hands-on knowledge of SQL and the data warehousing life cycle is an absolute requirement.
- Experience in creating data pipelines and orchestrating them using Apache Airflow (see the sketch below).
- Significant experience with data migrations and the development of operational data stores, enterprise data warehouses, data lakes, and data marts.
- Good to have: experience with cloud ETL and ELT in a tool like DBT, Glue, EMR, or Matillion, or any other ELT tool.
- Excellent communication skills to liaise with business and IT stakeholders.
- Expertise in planning project execution and effort estimation.
- Exposure to Agile ways of working.

The candidate for this position will be offered TAIC or TCSL as the entity.

Keywords: data warehousing, PySpark, GitHub, AWS data platform, Glue, EMR, Redshift, Databricks, data marts, DBT/Glue/EMR or Matillion, data engineering, data modelling, data consumption
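A minimal, hedged sketch of the Airflow orchestration this role centers on; the DAG id, schedule, and task callables are placeholders, not details from the posting (the `schedule` argument assumes Airflow 2.4+).

```python
# Hedged sketch: a three-step extract/transform/load DAG.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():    # placeholder: pull from the source system
    pass

def transform():  # placeholder: apply business rules
    pass

def load():       # placeholder: write to the warehouse
    pass

with DAG(
    dag_id="example_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3   # run the steps strictly in sequence
```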

Posted 2 weeks ago

Apply

8.0 - 10.0 years

13 - 18 Lacs

Chandigarh

Work from Office

Job Description: Full-stack Architect
Experience: 8-10 years

- Architect, design, and oversee the development of full-stack applications using modern JS frameworks and cloud-native tools.
- Lead microservice architecture design, ensuring system scalability, reliability, and performance.
- Evaluate and implement AWS services (Lambda, ECS, Glue, Aurora, API Gateway, etc.) for backend solutions.
- Provide technical leadership to engineering teams across all layers (frontend, backend, database).
- Guide and review code, perform performance optimization, and define coding standards.
- Collaborate with DevOps and Data teams to integrate services (Redshift, OpenSearch, Batch).
- Translate business needs into technical solutions and communicate with cross-functional stakeholders.

Posted 2 weeks ago

Apply

3.0 - 8.0 years

5 - 9 Lacs

Kolkata, Mumbai, New Delhi

Work from Office

We're looking for a Senior Data Analyst to join our data-driven team at an ad-tech company that thrives on turning complexity into clarity. Our analysts play a critical role in transforming raw, noisy data into accurate, actionable signals that drive real-time decision-making and long-term strategy. You'll work closely with product, engineering, and business teams to uncover insights, shape KPIs, and guide performance optimization.

Responsibilities:
- Analyze large-scale datasets from multiple sources to uncover actionable insights and drive business impact.
- Design, monitor, and maintain key performance indicators (KPIs) across ad delivery, bidding, and monetization systems.
- Partner with product, engineering, and operations teams to define metrics, run deep-dive analyses, and influence strategic decisions.
- Develop and maintain dashboards, automated reports, and data pipelines to ensure data accessibility and accuracy.
- Lead investigative analysis of anomalies or unexpected trends in campaign performance, traffic quality, or platform behavior.

Requirements:
- BA/BSc in Industrial Engineering and Management, Information Systems Engineering, Economics, Statistics, Mathematics, or a similar background.
- 3+ years of experience in data analysis and interpretation (marketing/business/product).
- High proficiency in SQL.
- Experience with data visualization of large data sets using BI systems (Qlik Sense, Sisense, Tableau, Looker, etc.).
- Experience working with data warehouse/data lake tools like Athena, Redshift, Snowflake, or BigQuery.
- Knowledge of Python is an advantage.
- Experience building ETL processes is an advantage.
- Fluent English, both written and spoken, is a must.
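To illustrate the anomaly-investigation responsibility above, here is a hedged pandas sketch that flags daily KPI values deviating sharply from the trailing week; the column names, toy data, and threshold are illustrative assumptions.

```python
# Hedged sketch: flag KPI points more than 2 standard deviations from the
# trailing 7-day window. Toy series with one deliberate dip at the end.
import pandas as pd

kpis = pd.DataFrame({
    "date": pd.date_range("2024-01-01", periods=30),
    "revenue": [100] * 29 + [40],
})

rolling = kpis["revenue"].rolling(window=7)
z = (kpis["revenue"] - rolling.mean()) / rolling.std()

anomalies = kpis[z.abs() > 2]
print(anomalies)  # surfaces the dip on the final day
```

In practice the threshold and window would be tuned per metric, and seasonal patterns (day-of-week effects in ad traffic, for instance) handled before scoring.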

Posted 2 weeks ago

Apply

1.0 - 3.0 years

3 - 4 Lacs

Hyderabad, Chennai, Mumbai (All Areas)

Work from Office

We are hiring AR Callers, Prior Auth Executives, and EVBV Executives - Locations: Hyderabad, Mumbai & Chennai

Hyderabad - AR Calling & EVBV processes:
1. AR Calling - Experience: minimum 1 year in AR calling. Package: up to 33k take-home. Qualification: Inter & above. Notice period: immediate joiners; relieving letter not mandatory. 2-way cab. Virtual interviews.
2. EVBV - Experience: minimum 1 year in EVBV. Package: up to 4.6 LPA. Qualification: graduation mandatory. Notice period: 0 to 60 days; relieving letter mandatory. 2-way cab. Virtual interviews.

Mumbai - AR Calling & Prior Auth:
1. AR Calling - Experience: minimum 9 months in AR calling. Package: up to 40k take-home. Qualification: Inter & above. Notice period: immediate joiners; relieving letter not mandatory. 2-way cab. Virtual interviews.
2. Prior Authorization - Experience: minimum 1 year. Package: up to 4.6 LPA. Qualification: graduation mandatory. Notice period: 0 to 60 days; relieving letter mandatory. 2-way cab. Virtual interviews.

Chennai - Prior Auth:
1. Prior Authorization - Experience: minimum 1 year. Package: up to 40k. Qualification: Inter & above. Notice period: immediate joiners; relieving letter not mandatory. 2-way cab. Virtual interviews.

Perks and benefits: cab facility and incentives.

Interested candidates can share an updated resume with HR Ashwini - 9059181376 (via WhatsApp) or ashwini.axisservices@gmail.com. Referrals of friends and colleagues are welcome.

Posted 2 weeks ago

Apply

2.0 - 5.0 years

3 - 4 Lacs

Kochi

Remote

We are seeking a skilled Accounts Receivable (AR) Caller experienced with Athena billing software to join our team and help drive efficient claims resolution.

Key Responsibilities:
- Initiate outbound calls to insurance companies to follow up on outstanding claims (unpaid/underpaid) using Athena billing software.
- Review claim status, identify reasons for non-payment, and take appropriate action to resolve denials or delays.
- Accurately document call details, actions taken, and next steps in Athena and client systems.
- Coordinate with internal teams to escalate issues as needed for prompt resolution.
- Meet daily, weekly, and monthly productivity and quality targets.
- Stay updated on payer-specific guidelines, denial codes, and reimbursement policies.
- Respond promptly to payer inquiries and provide requested information to expedite payment.
- Communicate effectively with supervisors and team members to report trends and potential process improvements.

Required Skills & Qualifications:
- Minimum 2-5 years of experience as an AR Caller in US healthcare revenue cycle management.
- Strong working knowledge of Athena billing software.
- Familiarity with medical terminology, CPT/ICD codes, and insurance denial resolution.
- Excellent verbal communication skills in English.
- Proficiency in MS Office (Excel, Word) and email correspondence.
- Ability to meet targets under pressure with strong attention to detail.
- Flexibility to work in US time zones.

Preferred:
- Experience handling multi-specialty practices or large billing volumes.
- Prior exposure to other practice management systems is a plus.

Posted 2 weeks ago

Apply

13.0 - 17.0 years

0 Lacs

Pune, Maharashtra

On-site

You are an experienced professional with over 13 years of experience engaging with clients and translating their business needs into technical solutions, with a proven track record of working with cloud services on platforms like AWS, Azure, or GCP. Your expertise lies in utilizing AWS data services such as Redshift, Glue, Athena, and SageMaker. Additionally, you have a strong background in generative AI frameworks like GANs and VAEs and possess advanced skills in Python, including libraries like Pandas, NumPy, Scikit-learn, and TensorFlow.

Your role involves designing and implementing advanced AI solutions, focusing on areas like NLP and innovative ML algorithms. You are proficient in developing and deploying NLP models and have experience enhancing machine learning algorithms. Your knowledge extends to MLOps principles and best practices and to the development and maintenance of CI/CD pipelines. Your problem-solving skills enable you to analyze complex data sets and derive actionable insights, and your excellent communication skills allow you to convey technical concepts to non-technical stakeholders effectively.

In this role, you will be responsible for understanding clients' business use cases and technical requirements and translating them into technical designs that elegantly meet their needs. You will be instrumental in mapping decisions to requirements, identifying optimal solutions, and setting guidelines for non-functional requirement (NFR) considerations during project implementation. Your tasks will include writing and reviewing design documents, reviewing architecture and design aspects, ensuring adherence to best practices, and conducting POCs to validate suggested designs and technologies.

To excel in this position, you should hold a bachelor's or master's degree in Computer Science, Information Technology, or a related field; relevant certifications in AI, cloud technologies, or related areas would be advantageous. Your ability to innovate, design, and implement cutting-edge solutions will be crucial, as will your skill in technology integration and in resolving problems through systematic analysis.

Posted 2 weeks ago

Apply

6.0 - 7.0 years

27 - 35 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Role & responsibilities:
- Provide technical leadership and mentorship to data engineering teams.
- Architect, design, and deploy scalable, secure, and high-performance data pipelines.
- Collaborate with stakeholders, clients, and cross-functional teams to deliver end-to-end data solutions.
- Drive technical strategy and implementation plans in alignment with business needs.
- Oversee project execution using tools like JIRA, ensuring timely delivery and adherence to best practices.
- Implement and maintain CI/CD pipelines and automation tools to streamline development workflows.
- Promote best practices in data engineering and AWS implementations across the team.

Preferred candidate profile:
- Strong hands-on expertise in Python, PySpark, and Spark architecture, including performance tuning and optimization.
- Advanced proficiency in SQL and experience writing optimized stored procedures.
- In-depth knowledge of the AWS data engineering stack, including AWS Glue, Lambda, API Gateway, EMR, S3, Redshift, and Athena.
- Experience with Infrastructure as Code (IaC) using CloudFormation and Terraform.
- Familiarity with Unix/Linux scripting and system administration is a plus.
- Proven ability to design and deploy robust, production-grade data solutions.

Posted 2 weeks ago

Apply

10.0 - 15.0 years

22 - 37 Lacs

Bengaluru

Work from Office

Who We Are
At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward - always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.

The Role
Are you ready to dive headfirst into the captivating world of data engineering at Kyndryl? As a Data Engineer, you'll be the visionary behind our data platforms, crafting them into powerful tools for decision-makers. Your role? Ensuring a treasure trove of pristine, harmonized data is at everyone's fingertips.

As an AWS Data Engineer at Kyndryl, you will be responsible for designing, building, and maintaining scalable, secure, and high-performing data pipelines using AWS cloud-native services. This role requires extensive hands-on experience with both real-time and batch data processing, expertise in cloud-based ETL/ELT architectures, and a commitment to delivering clean, reliable, and well-modeled datasets.

Key Responsibilities:
- Design and develop scalable, secure, and fault-tolerant data pipelines utilizing AWS services such as Glue, Lambda, Kinesis, S3, EMR, Step Functions, and Athena.
- Create and maintain ETL/ELT workflows to support both structured and unstructured data ingestion from various sources, including RDBMS, APIs, SFTP, and streaming.
- Optimize data pipelines for performance, scalability, and cost-efficiency.
- Develop and manage data models, data lakes, and data warehouses on AWS platforms (e.g., Redshift, Lake Formation).
- Collaborate with DevOps teams to implement CI/CD and infrastructure as code (IaC) for data pipelines using CloudFormation or Terraform.
- Ensure data quality, validation, lineage, and governance through tools such as AWS Glue Data Catalog and AWS Lake Formation.
- Work in concert with data scientists, analysts, and application teams to deliver data-driven solutions.
- Monitor, troubleshoot, and resolve issues in production pipelines.
- Stay abreast of AWS advancements and recommend improvements where applicable.

Your Future at Kyndryl
Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won't find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here.

Who You Are
You're good at what you do and possess the required experience to prove it. However, equally as important, you have a growth mindset, keen to drive your own personal and professional development. You are customer-focused, someone who prioritizes customer success in their work. And finally, you're open and borderless, naturally inclusive in how you work with others.

Required Skills and Experience
- Bachelor's or master's degree in computer science, engineering, or a related field.
- Over 8 years of experience in data engineering.
- More than 3 years of experience with the AWS data ecosystem.
- Strong experience with Java, PySpark, SQL, and Python.
- Proficiency in AWS services: Glue, S3, Redshift, EMR, Lambda, Kinesis, CloudWatch, Athena, Step Functions.
- Familiarity with data modelling concepts, dimensional models, and data lake architectures.
- Experience with CI/CD, GitHub Actions, and CloudFormation/Terraform.
- Understanding of data governance, privacy, and security best practices.
- Strong problem-solving and communication skills.

Preferred Skills and Experience
- Experience working as a Data Engineer and/or in cloud modernization.
- Experience with AWS Lake Formation and Data Catalog for metadata management.
- Knowledge of Databricks, Snowflake, or BigQuery for data analytics.
- AWS Certified Data Engineer or AWS Certified Solutions Architect is a plus.
- Strong problem-solving and analytical thinking.
- Excellent communication and collaboration abilities.
- Ability to work independently and in agile teams.
- A proactive approach to identifying and addressing challenges in data workflows.

Being You
Diversity is a whole lot more than what we look like or where we come from; it's how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we're not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you, and everyone next to you, the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That's the Kyndryl Way.

What You Can Expect
With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees, and support you and your family through the moments that matter, wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed.

Get Referred!
If you know someone that works at Kyndryl, when asked 'How Did You Hear About Us' during the application process, select 'Employee Referral' and enter your contact's Kyndryl email address.
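As a small illustration of the real-time pipeline work described above (Kinesis, Lambda, S3), here is a hedged sketch of a Lambda handler draining a Kinesis event batch to S3; the bucket, key scheme, and assumption of JSON payloads are hypothetical.

```python
# Hedged sketch: a Lambda handler consuming a Kinesis stream event and
# landing each record in S3 as raw JSON.
import base64
import json
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    for record in event["Records"]:
        # Kinesis payloads arrive base64-encoded in the Lambda event;
        # we assume each payload is a JSON document.
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        key = f"raw/{record['kinesis']['sequenceNumber']}.json"
        s3.put_object(
            Bucket="example-data-lake",  # hypothetical bucket
            Key=key,
            Body=json.dumps(payload),
        )
    return {"processed": len(event["Records"])}
```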

Posted 2 weeks ago

Apply

4.0 - 8.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

As a highly motivated and experienced Data Engineer, you will be responsible for designing, developing, and implementing solutions that enable seamless data integration across multiple cloud platforms. Your expertise in data lake architecture, Iceberg tables, and cloud compute engines like Snowflake, BigQuery, and Athena will ensure efficient and reliable data access for various downstream applications.

Your key responsibilities will include collaborating with stakeholders to understand data needs and define schemas, and designing and implementing data pipelines for ingesting, transforming, and storing data. You will develop data transformation logic to make Iceberg tables compatible with the data access requirements of Snowflake, BigQuery, and Athena, and design and implement solutions for seamless data transfer and synchronization across different cloud platforms. Ensuring data consistency and quality across the data lake and target cloud environments will be crucial in your role. Additionally, you will analyze data patterns and identify performance bottlenecks in data pipelines, implement data optimization techniques to improve query performance and reduce data storage costs, and monitor data lake health to proactively address potential issues. Communicating and collaborating with architects, leads, and other stakeholders to ensure data quality meets specific requirements will also be an essential part of the role.

To be successful in this position, you should have a minimum of 4+ years of experience as a Data Engineer, strong hands-on experience with data lake architectures and technologies, proficiency in SQL and scripting languages, and experience with data governance and security best practices. Excellent problem-solving and analytical skills, strong communication and collaboration skills, and familiarity with cloud-native data tools and services are also required. Certifications in relevant cloud technologies will be beneficial.

In return, GlobalLogic offers exciting projects in industries like high-tech, communication, media, healthcare, retail, and telecom, and the opportunity to collaborate with a diverse team of highly talented individuals in an open, laid-back environment. Work-life balance is prioritized with flexible work schedules, opportunities to work from home, and paid time off and holidays. Professional development opportunities include communication skills training, stress management programs, professional certifications, and technical and soft-skill training. GlobalLogic provides competitive salaries, family medical insurance, group term life insurance, group personal accident insurance, NPS (National Pension Scheme), extended maternity leave, annual performance bonuses, and referral bonuses. Fun perks such as sports events, cultural activities, subsidized food, corporate parties, dedicated GL Zones, rooftop decks, and discounts at popular stores and restaurants are also part of the vibrant office culture at GlobalLogic.

About GlobalLogic: GlobalLogic is a leader in digital engineering, helping brands design and build innovative products, platforms, and digital experiences for the modern world. By integrating experience design, complex engineering, and data expertise, GlobalLogic helps clients accelerate their transition into tomorrow's digital businesses. Operating under Hitachi, Ltd., GlobalLogic contributes to driving innovation through data and technology for a sustainable society with a higher quality of life.
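Since this role centers on Iceberg tables queried through engines like Athena, here is a hedged boto3 sketch of submitting an Athena query against such a table; the database, table, and results bucket are hypothetical placeholders.

```python
# Hedged sketch: running an Athena SQL query (here, against an Iceberg table)
# and retrieving the query execution id for later polling.
import boto3

athena = boto3.client("athena")

resp = athena.start_query_execution(
    QueryString="SELECT * FROM lake_db.orders_iceberg LIMIT 10",
    QueryExecutionContext={"Database": "lake_db"},          # hypothetical
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print("query id:", resp["QueryExecutionId"])
# A caller would poll get_query_execution until the state is SUCCEEDED,
# then page through results with get_query_results.
```

One appeal of this setup is that the same Iceberg table in S3 can also be read by Snowflake and BigQuery through their external/Iceberg table support, which is the cross-engine compatibility the role describes.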

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

The ideal candidate for this position has advanced proficiency in Python, with a solid understanding of classes and inheritance. The candidate should also be well versed in EMR, Athena, Redshift, AWS Glue, IAM roles, CloudFormation (CFT is optional), Apache Airflow, Git, SQL, PySpark, OpenMetadata, and data lakehouse architectures. Experience with metadata management is highly desirable, particularly with AWS services such as S3.

Key skills:
- Creation of ETL pipelines
- Deploying code on EMR
- Querying in Athena
- Creating Airflow DAGs to schedule ETL pipelines
- Knowledge of AWS Lambda and the ability to create Lambda functions

This is an individual contributor role, so the candidate is expected to manage client communication autonomously and proactively resolve technical issues without external assistance.

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Hyderabad, Telangana

On-site

Tezo is a new-generation Digital & AI solutions provider with a history of creating remarkable outcomes for our customers. We bring exceptional experiences using cutting-edge analytics, data proficiency, technology, and digital excellence.

Job Overview: The AWS Architect with data engineering skills will be responsible for designing, implementing, and managing scalable, robust, and secure cloud infrastructure and data solutions on AWS. This role requires a deep understanding of AWS services, data engineering best practices, and the ability to translate business requirements into effective technical solutions.

Key Responsibilities

Architecture Design:
- Design and architect scalable, reliable, and secure AWS cloud infrastructure.
- Develop and maintain architecture diagrams, documentation, and standards.

Data Engineering:
- Design and implement ETL pipelines using AWS services such as Glue, Lambda, and Step Functions.
- Build and manage data lakes and data warehouses using AWS services like S3, Redshift, and Athena.
- Ensure data quality, data governance, and data security across all data platforms.

AWS Services Management:
- Utilize a wide range of AWS services (EC2, S3, RDS, Lambda, DynamoDB, etc.) to support various workloads and applications.
- Implement and manage CI/CD pipelines using AWS CodePipeline, CodeBuild, and CodeDeploy.
- Monitor and optimize the performance, cost, and security of AWS resources.

Collaboration and Communication:
- Work closely with cross-functional teams including software developers, data scientists, and business stakeholders.
- Provide technical guidance and mentorship to team members on best practices in AWS and data engineering.

Security and Compliance:
- Ensure that all cloud solutions follow security best practices and comply with industry standards and regulations.
- Implement and manage IAM policies, roles, and access controls (a hedged example follows this list).

Innovation and Improvement:
- Stay up to date with the latest AWS services, features, and best practices.
- Continuously evaluate and improve existing systems, processes, and architectures.
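As a small illustration of the IAM policy management responsibility referenced above, here is a hedged sketch of a least-privilege, read-only policy for one data-lake prefix; the bucket and prefix are hypothetical placeholders.

```python
# Hedged sketch: build and print an IAM policy document granting read-only
# access to a single data-lake prefix. Names are illustrative only.
import json

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::example-data-lake",             # hypothetical bucket
            "arn:aws:s3:::example-data-lake/curated/*",   # hypothetical prefix
        ],
    }],
}
print(json.dumps(policy, indent=2))
```

Scoping the Resource list to a specific prefix, rather than the whole bucket, is the usual least-privilege pattern for analytics consumers.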

Posted 2 weeks ago

Apply