5133 Hadoop Jobs - Page 41

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

5.0 - 6.0 years

4 - 8 Lacs

Bengaluru

Work from Office

- Architect and optimize distributed data processing pipelines leveraging PySpark for high-throughput, low-latency workloads.
- Utilize the Apache big data stack (Hadoop, Hive, HDFS) to orchestrate ingestion, transformation, and governance of massive datasets.
- Engineer fault-tolerant, production-grade ETL frameworks ensuring seamless scalability and system resilience.
- Interface cross-functionally with Data Scientists and domain experts to translate analytical needs into performant data solutions.
- Enforce rigorous data quality controls and lineage mechanisms to uphold auditability and regulatory compliance.
- Contribute to core architectural design, implement clean and modular Python/Java code, and drive performance benchmarking at scale.

Required Skills:
- 5-7 years of experience.
- Strong hands-on experience with PySpark for distributed data processing.
- Deep understanding of the Apache ecosystem (Hadoop, Hive, Spark, HDFS, etc.).
- Solid grasp of data warehousing, ETL principles, and data modeling.
- Experience working with large-scale datasets and performance optimization.
- Familiarity with SQL and NoSQL databases.
- Proficiency in Python and basic to intermediate knowledge of Java.
- Experience using version control tools like Git and CI/CD pipelines.

Nice-to-Have Skills:
- Working experience with Apache NiFi for data flow orchestration.
- Experience building real-time streaming data pipelines.
- Knowledge of cloud platforms like AWS, Azure, or GCP.
- Familiarity with containerization tools like Docker and orchestration tools like Kubernetes.
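As a flavor of the PySpark-on-Hive work this role describes, here is a minimal batch-ETL sketch; the database, table, and column names are hypothetical, not taken from the posting:

```python
# Minimal PySpark batch ETL sketch: read raw events from Hive,
# clean and aggregate them, and write a partitioned result table.
# All table/column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("events-daily-etl")
    .enableHiveSupport()          # required to read/write Hive tables
    .getOrCreate()
)

raw = spark.table("raw_db.click_events")

daily = (
    raw.filter(F.col("event_ts").isNotNull())            # basic quality gate
       .withColumn("event_date", F.to_date("event_ts"))
       .groupBy("event_date", "user_id")
       .agg(F.count("*").alias("clicks"))
)

(daily.write
      .mode("overwrite")
      .partitionBy("event_date")                          # partition for scan pruning
      .saveAsTable("curated_db.user_daily_clicks"))
```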

Posted 1 week ago

Apply

2.0 - 4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Location: Bengaluru
Experience: 2 - 4 yrs
Technologies / Skills: Python, machine learning, NumPy, Pandas, Scikit-Learn, PySpark, TensorFlow or PyTorch

About the Role
We are looking for an enthusiastic Data Scientist to join our team in Bangalore. You will be pivotal in developing, deploying, and optimizing recommendation models that significantly enhance user experience and engagement. Your work will directly influence how customers interact with products, driving personalization and conversion.

Responsibilities
Model Development: Design, build, and fine-tune machine learning models focused on personalized recommendations to boost user engagement and retention.
Data Analysis: Perform comprehensive analysis of user behavior, interactions, and purchasing patterns to generate actionable insights.
Algorithm Optimization: Continuously improve recommendation algorithms by experimenting with new techniques and leveraging state-of-the-art methodologies.
Deployment & Monitoring: Deploy machine learning models into production environments, and develop tools for continuous performance monitoring and optimization.

Education: Bachelor's degree (B.E. / B.Tech) in Computer Science or equivalent from a reputed institute.

Technical Expertise
Strong foundation in Statistics, Probability, and core Machine Learning concepts.
Hands-on experience developing recommendation algorithms, including collaborative filtering, content-based filtering, matrix factorization, or deep learning approaches.
Proficiency in Python and associated libraries (NumPy, Pandas, Scikit-Learn, PySpark).
Experience with TensorFlow or PyTorch frameworks and familiarity with recommendation-system libraries (e.g., torch-rec).
Solid understanding of Big Data technologies and tools (Hadoop, Spark, SQL).
Familiarity with the full Data Science lifecycle, from data collection and preprocessing to model deployment.

About Oneture Technologies
Founded in 2016, Oneture is a cloud-first, full-service digital solutions company, helping clients harness the power of digital technologies and data to drive transformations and turn ideas into business realities. Our team is full of curious, full-stack, innovative thought leaders who are dedicated to providing outstanding customer experiences and building authentic relationships. We are compelled by our core values to drive transformational results, from ideas to reality, for clients across all company sizes, geographies, and industries. The Oneture team delivers full-lifecycle solutions, from ideation, project inception, and planning through deployment to ongoing support and maintenance. Our core competencies and technical expertise include cloud-powered product engineering, Big Data, and AI/ML. Our deep commitment to value creation for our clients and partners, and our "startup-like agility with enterprise-like maturity" philosophy, have helped us establish long-term relationships with our clients and enabled us to build and manage mission-critical platforms for them.
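The posting names collaborative filtering and matrix factorization; here is a minimal sketch of the latter using Spark MLlib's ALS. The input path and column names are hypothetical:

```python
# Minimal collaborative-filtering sketch using Spark MLlib's ALS
# (matrix factorization). Dataset path and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.ml.recommendation import ALS
from pyspark.ml.evaluation import RegressionEvaluator

spark = SparkSession.builder.appName("recs-als").getOrCreate()

ratings = spark.read.parquet("s3://bucket/ratings/")   # userId, itemId, rating
train, test = ratings.randomSplit([0.8, 0.2], seed=42)

als = ALS(
    userCol="userId", itemCol="itemId", ratingCol="rating",
    rank=32, regParam=0.1,
    coldStartStrategy="drop",   # skip users/items unseen in training
)
model = als.fit(train)

rmse = RegressionEvaluator(
    metricName="rmse", labelCol="rating", predictionCol="prediction"
).evaluate(model.transform(test))
print(f"test RMSE: {rmse:.3f}")

top10 = model.recommendForAllUsers(10)   # candidate lists for serving
```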

Posted 1 week ago

Apply

6.0 - 11.0 years

4 - 7 Lacs

Ghaziabad

Remote

Dear Candidate,

We are looking for a Data Engineer trainer with Databricks & Snowflake experience on a part-time basis who can provide training to our US students. Please find the job description below for your reference. If it seems a good fit, please reply to us with your updated resume.

Job Summary
We are looking for a skilled and experienced Data Engineer Trainer to join our team! In this role, you will deliver training content on Data Engineering with Snowflake & Databricks to our US-based students. You will have an opportunity to combine a passion for teaching with enthusiasm for technology, to drive learning and establish positive customer relationships. You should have excellent communication skills and proven technology training experience.

Key job responsibilities
In this role, you will be at the heart of the world-class programs delivered by Synergistic Compusoft Pvt Ltd. Your responsibilities will include:
1. Training working professionals on in-demand skills like Databricks, Snowflake, and Azure Data Lake.
2. Proficiency in basic and advanced SQL programming concepts (procedures, analytical functions, etc.).
3. Good knowledge and understanding of data warehouse concepts.
4. Experience designing and implementing modern data platforms (Data Fabrics, Data Mesh, Data Hubs, etc.).
5. Experience with the design of data catalogs/dictionaries driven by active metadata.
6. Delivering highly interactive online lectures in line with Synergistic Compusoft's teaching methodology.
7. Developing cutting-edge, innovative content that helps classes be delivered in an engaging way.
8. Strong programming skills in languages such as SQL, Python, or Scala.
9. Knowledge of data integration patterns, data lakes, and data warehouses.
10. Experience with data quality, data governance, and data security best practices.

Note: Trainers must prepare students to earn a global Azure certification and deliver content accordingly. Excellent communication and collaboration skills are required.

Primary Skills: Databricks, Snowflake
Secondary Skills: ADF, Databricks, Python

Perks and Benefits
Remuneration: best in the industry (55-60k per month)
5 days working (Mon-Fri)
For part time: 2.5 to 3 hours - Remote (night shift, 10:30 PM onwards)
The curriculum and syllabus should be provided by the trainer and should align with the Azure certification requirements.
The duration of a single batch depends on the trainer, but it cannot exceed 3 months.

Company website: www.synergisticit.com
Company's LinkedIn profile: https://www.linkedin.com/redir/redirect?url=https%3A%2F%2Fsynergisticit%2Ecom%2F&urlhash=rKyX&trk=about_website
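To illustrate the "analytical functions" topic in the curriculum, here is a small Spark SQL window-function exercise of the kind a trainer might demo on Databricks; the data is made up:

```python
# Small Spark SQL exercise of the kind such a course might include:
# rank each customer's orders by amount using an analytical (window) function.
# Data is made up for illustration.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-window-demo").getOrCreate()

spark.createDataFrame(
    [("c1", 101, 250.0), ("c1", 102, 400.0), ("c2", 103, 90.0)],
    ["customer_id", "order_id", "amount"],
).createOrReplaceTempView("orders")

spark.sql("""
    SELECT customer_id,
           order_id,
           amount,
           RANK() OVER (PARTITION BY customer_id
                        ORDER BY amount DESC) AS amount_rank
    FROM orders
""").show()
```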

Posted 1 week ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Bengaluru

Hybrid

Shift: (GMT+05:30) Asia/Kolkata (IST)

What do you need for this opportunity?
Must-have skills: Data Engineering, Big Data Technologies, Hadoop, Spark, Hive, Presto, Airflow, Data Modeling, ETL Development, Data Lake Architecture, Python, Scala, GCP (BigQuery, Dataproc, Dataflow, Cloud Composer), AWS Big Data Stack, Azure

Wayfair is looking for:

About the job
The Data Engineering team within the SMART org supports development of large-scale data pipelines for machine learning and analytical solutions related to unstructured and structured data. You'll have the opportunity to gain hands-on experience on all kinds of systems in the data platform ecosystem. Your work will have a direct impact on all applications that our millions of customers interact with every day: search results, homepage content, emails, auto-complete searches, browse pages, and product carousels. You will also build and scale the data platforms used to measure the effectiveness of Wayfair's ad costs and media attribution, informing both day-to-day and major marketing spend decisions.

About the Role:
As a Data Engineer, you will be part of the Data Engineering team, with this role being inherently multi-functional; the ideal candidate will work with Data Scientists, Analysts, and application teams across the company, as well as all other Data Engineering squads at Wayfair. We are looking for someone with a love for data, clarity in understanding requirements, and the ability to iterate quickly. Successful candidates will have strong engineering and communication skills and a belief that data-driven processes lead to phenomenal products.

What you'll do:
Build and launch data pipelines and data products focused on the SMART org, helping teams push the boundaries of insights, creating new product features using data, and powering machine learning models.
Build cross-functional relationships to understand data needs, build key metrics, and standardize their usage across the organization.
Utilize current and leading-edge technologies in software engineering, big data, streaming, and cloud infrastructure.

What You'll Need:
Bachelor's/Master's degree in Computer Science or a related technical subject area, or an equivalent combination of education and experience.
3+ years of relevant work experience in the Data Engineering field with web-scale data sets.
Demonstrated strength in data modeling, ETL development, and data lake architecture, plus data warehousing experience with Big Data technologies (Hadoop, Spark, Hive, Presto, Airflow, etc.).
Coding proficiency in at least one modern programming language (Python, Scala, etc.).
Experience building/operating highly available, distributed systems for data extraction, ingestion, and processing, and query-performance-tuning skills for large data sets.
Industry experience as a Big Data Engineer working alongside cross-functional teams such as Software Engineering, Analytics, and Data Science, with a track record of manipulating, processing, and extracting value from large datasets.
Strong business acumen.
Experience leading large-scale data warehousing and analytics projects, including using GCP technologies (BigQuery, Dataproc, GCS, Cloud Composer, Dataflow) or related big data technologies on other cloud platforms like AWS or Azure.
A team player who introduces and follows best practices in the data engineering space.
Ability to effectively communicate (both written and verbally) technical information and the results of engineering design at all levels of the organization.

Good to have:
Understanding of NoSQL databases and Pub-Sub architecture setup.
Familiarity with BI tools like Looker, Tableau, AtScale, PowerBI, or similar.
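For readers unfamiliar with the Airflow orchestration the posting lists, here is a minimal DAG sketch; it assumes Airflow 2.4+, and the DAG id and task logic are hypothetical stubs:

```python
# Minimal Airflow DAG sketch: a daily extract -> transform dependency chain.
# Task logic is stubbed; names are hypothetical. Assumes Airflow 2.4+.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull source data")      # placeholder for real extraction


def transform():
    print("build derived tables")  # placeholder for real transformation


with DAG(
    dag_id="smart_org_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,                 # don't backfill missed runs
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    extract_task >> transform_task  # transform runs after extract
```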

Posted 1 week ago

Apply

7.0 - 12.0 years

15 - 25 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Dear Candidate,

We received your details on Naukri, and it appears you are actively looking for new opportunities. Based on your profile, we believe you could be a great fit for the Senior Data Engineer role at Grid Dynamics. Please find the job details below.

Big Data Engineer
We are looking for an enthusiastic and technology-proficient Big Data Engineer who is eager to participate in the design and implementation of a top-notch Big Data solution to be deployed at massive scale. Our customer is one of the world's largest technology companies, based in Silicon Valley with operations all over the world. On this project we are working on the bleeding edge of Big Data technology to develop a high-performance data analytics platform that handles petabyte-scale datasets.

Essential functions
Participate in the design and development of Big Data analytical applications.
Design, support, and continuously enhance the project code base, continuous integration pipeline, etc.
Write complex ETL processes and frameworks for analytics and data management.
Implement large-scale near-real-time streaming data processing pipelines.
Work within a team of industry experts on cutting-edge Big Data technologies to develop solutions for deployment at massive scale.

Qualifications
Strong coding experience with Scala, Java, or Python.
In-depth knowledge of Hadoop and Spark; experience with data mining and stream processing technologies (Kafka, Spark Streaming, Akka Streams).
Understanding of best practices in data quality and quality engineering.
Experience with version control systems, Git in particular.
Desire and ability to learn new tools and technologies quickly.

Would be a plus
Knowledge of Unix-based operating systems (bash/ssh/ps/grep, etc.).
Experience with GitHub-based development processes.
Experience with JVM build systems (SBT, Maven, Gradle).

We offer
Opportunity to work on bleeding-edge projects
Work with a highly motivated and dedicated team
Competitive salary
Flexible schedule
Benefits package: medical insurance, sports
Corporate social events
Professional development opportunities
Well-equipped office
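A minimal sketch of the near-real-time Kafka-to-Spark pipeline work described above; it assumes the spark-sql-kafka package is on the classpath, and the broker, topic, schema, and paths are hypothetical:

```python
# Near-real-time pipeline sketch: consume a Kafka topic with Spark
# Structured Streaming and append parsed events to storage.
# Broker, topic, schema, and paths are hypothetical.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StringType, LongType

spark = SparkSession.builder.appName("stream-ingest").getOrCreate()

schema = (StructType()
          .add("user_id", StringType())
          .add("event_type", StringType())
          .add("ts", LongType()))

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "events")
         .load()
         .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
         .select("e.*")
)

query = (
    events.writeStream
          .format("parquet")
          .option("path", "/data/events/")
          .option("checkpointLocation", "/chk/events/")  # recovery bookkeeping
          .trigger(processingTime="1 minute")
          .start()
)
query.awaitTermination()
```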

Posted 1 week ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Today’s world is crime-riddled. Criminals are everywhere: invisible, virtual, and sophisticated. Traditional ways to prevent and investigate crime and terror are no longer enough. Technology is changing incredibly fast. The criminals know it, and they are taking advantage. We know it too. For nearly 30 years, the incredible minds at Cognyte around the world have worked closely together and put their expertise to work to keep up with constantly evolving technological and criminal trends, and to help make the world a safer place with leading investigative analytics software solutions. We are defined by our dedication to doing good, and this translates to business success, meaningful work friendships, a can-do attitude, and deep curiosity.

Cognyte's global service organization plays a significant role in customer satisfaction and loyalty, and in the stability and integrity of its cyber intelligence products. The candidate will join the global services group of SMEs (subject matter experts) that maintains and engineers customers' support labs and interacts with project managers, QA, and development teams. The group's activity revolves around application support. The team player is expected to perform expert-level debugging and analysis based on logs, Wireshark captures, and tools; advocate customers' needs to R&D for product improvement; and implement solutions and fixes on production systems on time and with quality.

Job Summary
Technical support expert (Tier 3) service escalation engineer. Direct interface with product development and foreign-office support teams.

Skills and Qualifications
The Technical Support Expert will be expected to:
Have hands-on analytical and troubleshooting skills and eagerness to learn technologies
Be customer oriented, with good verbal and written communication skills in English
Be a self-starter, a multitasker, independent, and responsible
Have good interpersonal communication with other team players
Be able to learn fast
Be available to provide support beyond office hours
Be willing to travel abroad (20%)

Technical Requirements
Bachelor of Science or practical engineering degree in computer science - Advantage
At least 3 years of relevant experience supporting complex technology solutions - Mandatory
Strong analytical and technical skills in the following areas:
Troubleshooting of multiple operating systems (Windows, Linux), including basic administration, configuration optimization, desktop domains (Active Directory), and Windows security - Mandatory
Practical experience with Docker and Kubernetes - Mandatory
Virtual environment technologies (VMware, Hyper-V) - Mandatory
Hands-on database experience (MS SQL, Sybase, other): authoring queries and stored procedures - Mandatory
Experience with Jenkins, Ansible, Vault, Hadoop clusters - Big advantage
Familiarity with LTE, UMTS, GPRS, GSM, CDMA solutions - Advantage
PowerShell, Bash, Python scripting; DevOps knowledge - Advantage
Familiarity with storage and RAID configuration, NetApp, EMC - Advantage
TDM/IP/VoIP communication flows and debugging of SIP-RTP - Advantage
Knowledge of widely used protocols (TCP/IP, Mail, HTTP, etc.) - Advantage
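Much of Tier-3 work is log-driven debugging; here is a tiny Python triage sketch of the sort such an engineer might keep at hand. The log path and format are hypothetical:

```python
# Tiny log-triage sketch: count error signatures in a service log and
# surface the most frequent ones. The log path and format are hypothetical.
import re
from collections import Counter

ERROR_RE = re.compile(r"ERROR\s+\[(?P<component>[\w.-]+)\]\s+(?P<msg>.*)")

def top_errors(path: str, n: int = 10) -> list[tuple[str, int]]:
    counts: Counter[str] = Counter()
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = ERROR_RE.search(line)
            if m:
                # Group by component plus the first few words of the message
                sig = f"{m['component']}: {' '.join(m['msg'].split()[:5])}"
                counts[sig] += 1
    return counts.most_common(n)

if __name__ == "__main__":
    for signature, count in top_errors("/var/log/app/service.log"):
        print(f"{count:6d}  {signature}")
```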

Posted 1 week ago

Apply

2.0 - 5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

At American Express, our culture is built on a 175-year history of innovation, shared values and Leadership Behaviors, and an unwavering commitment to back our customers, communities, and colleagues. As part of Team Amex, you'll experience this powerful backing with comprehensive support for your holistic well-being and many opportunities to learn new skills, develop as a leader, and grow your career. Here, your voice and ideas matter, your work makes an impact, and together, you will help us define the future of American Express. Join Team Amex and let's lead the way together.

From building next-generation apps and microservices in Kotlin to using AI to help protect our franchise and customers from fraud, you could be doing entrepreneurial work that brings our iconic, global brand into the future. As a part of our tech team, we could work together to bring ground-breaking and diverse ideas to life that power our digital systems, services, products and platforms. If you love to work with APIs, contribute to open source, or use the latest technologies, we'll support you with an open environment and learning culture.

Function Description:
American Express is looking for energetic, successful and highly skilled Engineers to help shape our technology and product roadmap. Our Software Engineers not only understand how technology works, but how that technology intersects with the people who count on it every day. Today, innovative ideas, insight and new points of view are at the core of how we create a more powerful, personal and fulfilling experience for our customers and colleagues, with batch/real-time analytical solutions using ground-breaking technologies to deliver innovative solutions across multiple business units. This Engineering role is based in our Global Risk and Compliance Technology organization and will have a keen focus on platform modernization, bringing to life the latest technology stacks to support the ongoing needs of the business as well as compliance with global regulatory requirements.

Qualifications:
Support the Compliance and Operations Risk data delivery team in India to lead and assist in the design and development of applications. The role is responsible for specific functional areas within the team, involving project management and taking business specifications. The individual should be able to independently run projects/tasks delegated to them.

Technology Skills:
Bachelor's degree in Engineering or Computer Science or equivalent
2 to 5 years of experience required
GCP professional certification - Data Engineer
Expertise in Google BigQuery for data warehousing needs
Experience with Big Data (Spark Core and Hive) preferred
Familiarity with GCP offerings; experience building data pipelines on GCP a plus
Hadoop architecture, with knowledge of Hadoop, MapReduce, and HBase
UNIX shell scripting experience is good to have
Creative (innovative) problem solving

We back you with benefits that support your holistic well-being so you can be and deliver your best. This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally:
Competitive base salaries
Bonus incentives
Support for financial well-being and retirement
Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location)
Flexible working model with hybrid, onsite or virtual arrangements depending on role and business need
Generous paid parental leave policies (depending on your location)
Free access to global on-site wellness centers staffed with nurses and doctors (depending on location)
Free and confidential counseling support through our Healthy Minds program
Career development and training opportunities

American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
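As a sketch of the BigQuery-centric work this role implies, here is a minimal query using the google-cloud-bigquery client; the project, dataset, and table names are hypothetical:

```python
# Minimal BigQuery sketch: run an aggregation and stream the results.
# Project/dataset/table names are hypothetical; assumes application
# default credentials are configured.
from google.cloud import bigquery

client = bigquery.Client(project="risk-compliance-dev")

sql = """
    SELECT event_date, COUNT(*) AS alerts
    FROM `risk-compliance-dev.monitoring.compliance_alerts`
    WHERE event_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY)
    GROUP BY event_date
    ORDER BY event_date
"""

for row in client.query(sql).result():   # blocks until the job finishes
    print(row.event_date, row.alerts)
```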

Posted 1 week ago

Apply

2.0 - 4.0 years

0 Lacs

Greater Hyderabad Area

On-site

About Us:
Join our stealth-mode AI startup on a mission to revolutionize AI and data solutions. Headquartered in Hyderabad, we are a well-funded startup with a world-class team and a passion for innovation in AI, NLP, Computer Vision, and Speech Recognition. We are looking for a highly motivated Data Engineer with 2 to 4 years of experience to join our team and work on cutting-edge projects in AI and big data technologies.

Role Overview:
As a Data Engineer, you will design, build, and optimize scalable data pipelines and platforms to support our AI-driven solutions. You'll collaborate with cross-functional teams to enable real-time data processing and insights for enterprise-level applications.

Key Responsibilities:
Develop and maintain robust data pipelines using tools like PySpark, Kafka, and Airflow.
Design and optimize data workflows for high scalability and performance using Hadoop, HDFS, and Hive.
Integrate structured and unstructured data from diverse sources into a centralized platform.
Leverage big data technologies for real-time processing and streaming using Spark Streaming and NiFi.
Work on cloud-based platforms such as AWS, Azure, and GCP to deploy and monitor scalable data solutions.
Collaborate with AI/ML teams to deploy machine learning models using MLflow and integrate AI capabilities into data pipelines.
Automate and monitor workflows to ensure seamless operations using CI/CD pipelines, Kubernetes, and Docker.
Implement data validation, performance testing, and troubleshooting of large-scale datasets.
Prepare and share actionable insights through BI tools like Tableau and Grafana.

Required Skills and Qualifications:
Education: Bachelor's or Master's degree in Computer Science, Data Engineering, or related fields.
Experience: 2 to 4 years in data engineering roles, working with big data ecosystems.
Technical Proficiency:
Big Data Tools: Hadoop, HDFS, PySpark, Hive, Sqoop, Kafka, Spark Streaming, Airflow, Presto, NiFi.
Cloud Platforms: AWS (Glue, S3, EMR), Azure (ADF, HDInsight), GCP (BigQuery, Pub/Sub).
Programming Languages: Python, SQL, Scala.
DevOps & Automation: Jenkins, Ansible, Kubernetes, Docker.
Databases: MySQL, Oracle, HBase, Redis.
Visualization Tools: Tableau, Grafana, Zeppelin.
Knowledge of machine learning models, AI tools (e.g., TensorFlow, H2O), and feature engineering is a plus.
Strong problem-solving skills with attention to detail and ability to manage multiple projects.
Excellent communication and collaboration skills in a fast-paced environment.

What We Offer:
Opportunity to work on innovative AI projects with a global impact.
Collaborative work culture with access to cutting-edge technologies.
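Since the posting calls out MLflow for model deployment, here is a minimal tracking sketch with a toy scikit-learn model; the run name, parameters, and metrics are illustrative:

```python
# Minimal MLflow tracking sketch: log parameters, a metric, and a model
# artifact for a toy scikit-learn run. Names and values are illustrative.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, random_state=0)
model = RandomForestClassifier(n_estimators=50, random_state=0)

with mlflow.start_run(run_name="rf-baseline"):
    model.fit(X, y)
    mlflow.log_param("n_estimators", 50)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.sklearn.log_model(model, "model")   # saved under the run's artifacts
```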

Posted 1 week ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Summary:
We are seeking a seasoned and visionary Head of Database Administration (HOD - DBA) to lead and manage our database architecture and administration function. The ideal candidate will bring deep technical expertise in database operations, replication, disaster recovery, performance tuning, and big data integration, along with strong leadership capabilities to guide a growing DBA team in a complex, high-performance environment.

Key Responsibilities:
• Implement and oversee replication, sharding, and backup drills.
• Develop and maintain disaster recovery plans with regular testing.
• Optimize performance through indexing, query tuning, and resource allocation.
• Perform real-time monitoring and health checks for all databases.
• Create, review, and optimize complex NoSQL queries.
• Lead database migration projects with minimal downtime.
• Administer databases on Linux environments and AWS cloud (RDS, EC2, S3, etc.).
• Use Python scripting for automation and custom DBA tools.
• Manage integration with Big Data systems, Data Lakes, Data Marts, and Data Warehouses.
• Design and manage database architecture for OLTP and OLAP systems.
• Collaborate with DevOps, engineering, and analytics teams.

Qualifications:
• Strong experience with replication, sharding, backup & disaster recovery.
• Expertise in database performance tuning, architecture, and query optimization.
• Proficiency in MongoDB, PostgreSQL, MySQL, or similar databases.
• Hands-on with Python scripting for automation.
• Experience in Linux-based systems and AWS services.
• Solid understanding of OLTP, OLAP, Data Lakes, Data Marts, and Data Warehouses.
• Strong analytical, debugging, and leadership skills.

Preferred:
• Experience with NoSQL and Big Data tools (Hadoop, Spark, Kafka).
• Certifications in AWS, Linux, or leading database technologies.
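One example of the "Python scripting for automation and custom DBA tools" responsibility: a replica-set health-check sketch using pymongo. The connection string is hypothetical:

```python
# Replica-set health-check sketch in Python (pymongo): flag members that
# are not PRIMARY/SECONDARY and report replication lag. Host is hypothetical.
from pymongo import MongoClient

HEALTHY_STATES = {"PRIMARY", "SECONDARY"}

client = MongoClient("mongodb://db01:27017/?replicaSet=rs0")
status = client.admin.command("replSetGetStatus")

# Lag is measured against the primary's last applied optime.
primary_optime = max(
    m["optimeDate"] for m in status["members"] if m["stateStr"] == "PRIMARY"
)

for member in status["members"]:
    lag = (primary_optime - member["optimeDate"]).total_seconds()
    flag = "" if member["stateStr"] in HEALTHY_STATES else "  <-- ATTENTION"
    print(f"{member['name']:<20} {member['stateStr']:<10} lag={lag:.0f}s{flag}")
```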

Posted 1 week ago

Apply

5.0 - 12.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Role: Senior Databricks Engineer / Databricks Technical Lead / Data Architect
Experience: 5-12 years
Location: Chennai, Delhi, Pune, Kolkata

About Tredence:
Tredence focuses on last-mile delivery of powerful insights into profitable actions by uniting its strengths in business analytics, data science, and software engineering. The largest companies across industries are engaging with us and deploying their prediction and optimization solutions at scale. Headquartered in the San Francisco Bay Area, we serve clients in the US, Canada, Europe, and Southeast Asia. We are seeking an experienced data engineer who, apart from the required mathematical and statistical expertise, also possesses the natural curiosity and creative mind to ask questions, connect the dots, and uncover opportunities that lie hidden, with the ultimate goal of realizing the data's full potential.

Primary Roles and Responsibilities:
● Develop Modern Data Warehouse solutions using Databricks and the AWS/Azure stack
● Provide solutions that are forward-thinking in the data engineering and analytics space
● Collaborate with DW/BI leads to understand new ETL pipeline development requirements
● Triage issues to find gaps in existing pipelines and fix them
● Work with the business to understand reporting-layer needs and develop data models to fulfill them
● Help junior team members resolve issues and technical challenges
● Drive technical discussions with client architects and team members
● Orchestrate data pipelines via a scheduler such as Airflow

Skills and Qualifications:
● Bachelor's and/or master's degree in computer science or equivalent experience
● Must have 4+ years of total IT experience and 3+ years of experience in data warehouse/ETL projects
● Deep understanding of Star and Snowflake dimensional modelling
● Strong knowledge of Data Management principles
● Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture
● Hands-on experience in SQL, Python, and Spark (PySpark)
● Experience with the AWS/Azure stack
● Desirable: ETL with batch and streaming (Kinesis)
● Experience building ETL / data warehouse transformation processes
● Experience with Apache Kafka for streaming / event-based data
● Experience with other open-source big data products, including Hadoop (incl. Hive, Pig, Impala)
● Experience with open-source non-relational / NoSQL data repositories (incl. MongoDB, Cassandra, Neo4j)
● Experience working with structured and unstructured data, including imaging and geospatial data
● Experience working in a DevOps environment with tools such as Terraform, CircleCI, Git
● Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, and troubleshooting
● Databricks Certified Data Engineer Associate/Professional certification (desirable)
● Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects
● Experience working in Agile methodology
● Strong verbal and written communication skills
● Strong analytical and problem-solving skills with high attention to detail

Mandatory Skills: Python / PySpark / Spark with Azure / Databricks

Tredence is an equal opportunity employer. We celebrate and support diversity and are committed to creating an inclusive environment for all employees. Visit our website for more details: https://www.tredence.com/

NP: Immediate to 30 days
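A minimal sketch of a Databricks Delta Lake upsert, the kind of pipeline step this role involves; it assumes a Databricks cluster, where the `delta` package and the `spark` session are provided, and the table names are hypothetical:

```python
# Delta Lake upsert (MERGE) sketch. Assumes a Databricks runtime, where the
# `delta` package is installed and `spark` is predefined. Names are hypothetical.
from delta.tables import DeltaTable

updates = spark.table("staging.customer_updates")   # incoming changes

target = DeltaTable.forName(spark, "curated.customers")

(target.alias("t")
       .merge(updates.alias("s"), "t.customer_id = s.customer_id")
       .whenMatchedUpdateAll()        # update changed rows
       .whenNotMatchedInsertAll()     # insert new rows
       .execute())
```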

Posted 1 week ago

Apply

8.0 - 10.0 years

0 Lacs

Delhi, India

On-site

Location: Delhi
Reports To: Chief Revenue Officer

Position Overview:
We are looking for a highly motivated Pre-Sales Specialist to join our team at Neysa, a rapidly growing AI Cloud Platform company that's making waves in the industry. The role is a customer-facing technical position that will work closely with sales teams to understand client requirements, design tailored solutions, and drive technical engagements. You will be responsible for presenting complex technology solutions to customers, creating compelling demonstrations, and assisting in the successful conversion of sales opportunities.

Key Responsibilities:
Solution Design & Customization: Work closely with customers to understand their business challenges and technical requirements. Design and propose customized solutions leveraging Cloud, Network, AI, and Machine Learning technologies that best fit their needs.
Sales Support & Enablement: Collaborate with the sales team to provide technical support during the sales process, including delivering presentations, conducting technical demonstrations, and assisting in the development of proposals and RFP responses.
Customer Engagement: Engage with prospects and customers throughout the sales cycle, providing technical expertise and acting as the technical liaison between the customer and the company. Conduct deep-dive discussions and workshops to uncover technical requirements and offer viable solutions.
Proof of Concept (PoC): Lead the technical aspects of PoC engagements, demonstrating the capabilities and benefits of the proposed solutions. Collaborate with the customer to validate the solution, ensuring it aligns with their expectations.
Product Demos & Presentations: Deliver compelling product demos and presentations tailored to the customer's business and technical needs, helping organizations unlock innovation and growth through AI. Simplify complex technical concepts to ensure that both business and technical stakeholders understand the value proposition.
Proposal Development & RFPs: Assist in crafting technical proposals, responding to RFPs (Requests for Proposals), and providing technical content that highlights the company's offerings, differentiators, and technical value.
Technical Workshops & Trainings: Facilitate customer workshops and training sessions to enable customers to understand the architecture, functionality, and capabilities of the solutions offered.
Collaboration with Product & Engineering Teams: Provide feedback to product management and engineering teams based on customer interactions and market demands. Help shape future product offerings and improvements.
Market & Competitive Analysis: Stay up to date on industry trends, new technologies, and competitor offerings in AI and Machine Learning, Cloud, and Networking, to provide strategic insights to sales and product teams.
Documentation & Reporting: Create and maintain technical documentation, including solution designs, architecture diagrams, and deployment plans. Track and report on pre-sales activities, including customer interactions, pipeline status, and PoC results.

Key Skills and Qualifications:
Experience: Minimum of 8-10 years of experience in a pre-sales or technical sales role, with a focus on AI, Cloud, and Networking solutions.
Technical Expertise: Solid understanding of Cloud computing, Data Center infrastructure, Networking (SDN, SD-WAN, VPNs), and emerging AI/ML technologies. Experience with architecture design and solutioning across these domains, especially in hybrid-cloud and multi-cloud environments. Familiarity with tools such as Kubernetes, Docker, TensorFlow, Apache Hadoop, and machine learning frameworks.
Sales Collaboration: Ability to work alongside sales teams, providing the technical expertise needed to close complex deals. Experience in delivering customer-focused presentations and demos.
Presentation & Communication Skills: Exceptional ability to articulate technical solutions to both technical and non-technical stakeholders. Strong verbal and written communication skills.
Customer-Focused Mindset: Excellent customer service skills with a consultative approach to solving customer problems. Ability to understand business challenges and align technical solutions accordingly, building rapport with customers and becoming their trusted advisor.
Problem-Solving & Creativity: Strong analytical and problem-solving skills, with the ability to design creative, practical solutions that align with customer needs.
Certifications: Degree in Computer Science, Engineering, or a related field; Cloud and AI/ML certifications are highly desirable.
Team Player: Ability to work collaboratively with cross-functional teams including product, engineering, and delivery teams.

Preferred Qualifications:
Industry Experience: Experience in delivering solutions in industries such as finance, healthcare, or telecommunications is a plus.
Technical Expertise in AI/ML: A deeper understanding of AI/ML applications, including natural language processing (NLP), computer vision, predictive analytics, or data science use cases.
Experience with DevOps Tools: Familiarity with CI/CD pipelines, infrastructure as code (IaC), and automation tools like Terraform, Ansible, or Jenkins.

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Cloud and AWS Expertise:
In-depth knowledge of AWS services related to data engineering: EC2, S3, RDS, DynamoDB, Redshift, Glue, Lambda, Step Functions, Kinesis, Iceberg, EMR, and Athena.
Strong understanding of cloud architecture and best practices for high availability and fault tolerance.

Data Engineering Concepts:
Expertise in ETL/ELT processes, data modeling, and data warehousing.
Knowledge of data lakes, data warehouses, and big data processing frameworks like Apache Hadoop and Spark.
Proficiency in handling structured and unstructured data.

Programming and Scripting:
Proficiency in Python, PySpark, and SQL for data manipulation and pipeline development.
Expertise in working with data warehousing solutions like Redshift.
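A small boto3 sketch of running an Athena query, one of the AWS services listed above; the region, database, query, and output bucket are hypothetical, and AWS credentials are assumed to be configured in the environment:

```python
# Sketch: submit an Athena query with boto3 and poll until it finishes.
# Region, database, query, and output bucket are hypothetical.
import time

import boto3

athena = boto3.client("athena", region_name="us-east-1")

qid = athena.start_query_execution(
    QueryString="SELECT event_date, COUNT(*) FROM events GROUP BY event_date",
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)["QueryExecutionId"]

while True:
    status = athena.get_query_execution(QueryExecutionId=qid)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)                      # simple poll; use backoff in production

print("query finished with state:", state)
```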

Posted 1 week ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Microsoft is a company where passionate innovators come to collaborate, envision what can be, and take their careers further. This is a world of more possibilities, more innovation, more openness, and sky's-the-limit thinking in a cloud-enabled world.

Microsoft's Azure Data engineering team is leading the transformation of analytics in the world of data with products like databases, data integration, big data analytics, messaging & real-time analytics, and business intelligence. The products in our portfolio include Microsoft Fabric, Azure SQL DB, Azure Cosmos DB, Azure PostgreSQL, Azure Data Factory, Azure Synapse Analytics, Azure Service Bus, Azure Event Grid, and Power BI. Our mission is to build the data platform for the age of AI, powering a new class of data-first applications and driving a data culture.

Within Azure Data, the Microsoft Fabric platform team builds and maintains the operating system and provides customers a unified data stack to run an entire data estate. The platform provides a unified experience and unified governance, and enables a unified business model and a unified architecture.

The Fabric Data & Telemetry team is part of the Fabric Platform. It is responsible for all platform capabilities in support of "observability": from telemetry creation (APIs, event repositories, etc.), through routing and storage, to eventual consumption tools and data models. This observability is critical for the service and business, and supports pillars such as live-site monitoring, diagnostics, analytics for operations and product development, and ensuring Fabric customers have high-quality data for their own observability needs. The charter of the team is to provide Fabric stakeholders, partners, and clients with world-class data for running the service and business. The team and its members work across different codebases, disciplines, platform components, and observability pillars. In particular, the team is required to function within the service core, applying classic cloud and software engineering practices and tools, as well as a core data team, applying (big) data engineering and analytical practices and tools.

As a Software Engineer on the team, you will play a key role in shaping the future of our platform. Using the latest Azure technologies, you will design and build scalable telemetry instrumentation services and pipelines, develop tools for metadata management, create job monitoring and notification services for data pipelines, and streamline daily operations through automation. We are looking for a Software Engineer who is passionate about observability and data, eager to leverage the latest Microsoft Azure technologies to help build scalable, high-performance solutions.

We do not just value differences or different perspectives. We seek them out and invite them in so we can tap into the collective power of everyone in the company. As a result, our customers are better served.

Responsibilities
Lead the design, development, and maintenance of high-quality APIs, SDKs, and data pipelines, including solutions for data collection, cleansing, transformation, and usage, ensuring accurate data ingestion and readiness for downstream analysis, visualization, and AI model training. You will build frameworks to validate data quality and completeness, detect anomalies, enhance data pipeline resiliency, and support unit and integration testing.
You will lead the design and implementation of end-to-end software and data life cycles, including development, CI/CD, service reliability, and agile practices, in close collaboration with Product Management and partner teams.
Serve as the SME for key components in the telemetry pipeline, providing technical leadership and advocating for improvements that ensure the accuracy, efficiency, and scalability of data collection and processing.
Deliver high-quality features and data pipelines by leveraging industry best practices and using cutting-edge technologies and the Fabric/Azure Data stack.
You will anticipate data governance needs, designing data modeling and handling procedures to ensure compliance with all applicable laws and policies. You will implement and enforce security and access control measures to protect sensitive resources and data.
Be part of the on-call rotation for maintaining service health.
You will mentor junior engineers, lead technical discussions, and drive best practices in software engineering.
Embody our culture and values.

Qualifications
Required/Minimum Qualifications
Bachelor's Degree in Computer Science, Math, Software Engineering, Computer Engineering, or a related field AND 4+ years' experience in software development, data engineering, or data modeling work; OR Master's Degree in Computer Science, Math, Software Engineering, Computer Engineering, or a related field AND 4+ years' experience in software development, data engineering, or data modeling work; OR equivalent experience.
4+ years of experience in software engineering, with proven proficiency in C#, Java, or equivalent.
4+ years of experience working with and building distributed cloud services using Azure or similar technology stacks.
Experience with scripting languages for data retrieval and manipulation (e.g., SQL or KQL).

Other Requirements
Ability to meet Microsoft, customer, and/or government security screening requirements is required for this role. These requirements include, but are not limited to, the following specialized security screenings: Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter.

Preferred/Additional Qualifications
1+ years of demonstrated experience implementing data governance practices, including data access, security and privacy controls, and monitoring to comply with regulatory standards.
Experience with big data technologies such as Hadoop, Hive, and Spark.

Equal Opportunity Employer (EOP)
#azdat #azuredata #fabricdata #dataintegration #azure #bigdata #telemetry

Microsoft is an equal opportunity employer. Consistent with applicable law, all qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
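As a toy illustration of the data-quality and anomaly-detection frameworks the responsibilities describe, here is a small pandas sketch; the column names and the spike threshold are hypothetical:

```python
# Small data-quality sketch: check a telemetry batch for completeness and
# flag days whose event volume deviates sharply from the trailing mean.
# Column names and thresholds are hypothetical.
import pandas as pd

def check_batch(df: pd.DataFrame) -> list[str]:
    issues = []

    # Completeness: required fields must be fully populated.
    for col in ("event_id", "tenant_id", "timestamp"):
        nulls = df[col].isna().sum()
        if nulls:
            issues.append(f"{col}: {nulls} null values")

    # Volume anomaly: compare each day to the trailing 7-day mean.
    daily = df.groupby(df["timestamp"].dt.date).size()
    baseline = daily.rolling(7, min_periods=3).mean().shift(1)
    spikes = daily[baseline.notna() & ((daily / baseline) > 2.0)]
    for day, count in spikes.items():
        issues.append(f"{day}: volume spike ({count} events)")

    return issues
```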

Posted 1 week ago

Apply

1.0 - 5.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Category: Engineering
Experience: Sr. Associate
Primary Address: Bangalore, Karnataka

Overview
Voyager (94001), India, Bangalore, Karnataka
Senior Associate - Data Engineer

Do you love building and pioneering in the technology space? Do you enjoy solving complex business problems in a fast-paced, collaborative, inclusive, and iterative delivery environment? At Capital One India, you'll be part of a big group of makers, breakers, doers and disruptors, who solve real problems and meet real customer needs. We are seeking Data Engineers who are passionate about marrying data with emerging technologies. As a Capital One Data Engineer, you'll have the opportunity to be on the forefront of driving a major transformation within Capital One.

What You'll Do:
Collaborate with and across Agile teams to design, develop, test, implement, and support technical solutions in full-stack development tools and technologies
Work with a team of developers with deep experience in machine learning, distributed microservices, and full stack systems
Utilize programming languages like Java, Scala, and Python, open-source RDBMS and NoSQL databases, and cloud-based data warehousing services such as Redshift and Snowflake
Share your passion for staying on top of tech trends, experimenting with and learning new technologies, participating in internal & external technology communities, and mentoring other members of the engineering community
Collaborate with digital product managers, and deliver robust cloud-based solutions that drive powerful experiences to help millions of Americans achieve financial empowerment
Perform unit tests and conduct reviews with other team members to make sure your code is rigorously designed, elegantly coded, and effectively tuned for performance

Basic Qualifications:
Bachelor's Degree
At least 1.5 years of experience in application development (internship experience does not apply)
At least 1 year of experience in big data technologies

Preferred Qualifications:
3+ years of experience in application development including Python, SQL, Scala, or Java
1+ years of experience with a public cloud (AWS, Microsoft Azure, Google Cloud)
2+ years of experience with distributed data/computing tools (MapReduce, Hadoop, Hive, EMR, Kafka, Spark, Gurobi, or MySQL)
1+ years of experience working on real-time data and streaming applications
1+ years of experience with NoSQL implementations (Mongo, Cassandra)
1+ years of data warehousing experience (Redshift or Snowflake)
2+ years of experience with UNIX/Linux including basic commands and shell scripting
1+ years of experience with Agile engineering practices

***At this time, Capital One will not sponsor a new applicant for employment authorization for this position.

No agencies please. Capital One is an equal opportunity employer (EOE, including disability/vet) committed to non-discrimination in compliance with applicable federal, state, and local laws. Capital One promotes a drug-free workplace. Capital One will consider for employment qualified applicants with a criminal history in a manner consistent with the requirements of applicable laws regarding criminal background inquiries, including, to the extent applicable, Article 23-A of the New York Correction Law; San Francisco, California Police Code Article 49, Sections 4901-4920; New York City's Fair Chance Act; Philadelphia's Fair Criminal Records Screening Act; and other applicable federal, state, and local laws and regulations regarding criminal background inquiries.
If you have visited our website in search of information on employment opportunities or to apply for a position, and you require an accommodation, please contact Capital One Recruiting at 1-800-304-9102 or via email at RecruitingAccommodation@capitalone.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodations. For technical support or questions about Capital One's recruiting process, please send an email to Careers@capitalone.com.

Capital One does not provide, endorse nor guarantee and is not liable for third-party products, services, educational tools or other information available through this site. Capital One Financial is made up of several different entities. Please note that any position posted in Canada is for Capital One Canada, any position posted in the United Kingdom is for Capital One Europe and any position posted in the Philippines is for Capital One Philippines Service Corp. (COPSSC).

Posted 1 week ago

Apply

5.0 - 8.0 years

0 Lacs

Greater Kolkata Area

On-site

Skill required: Tech for Operations - Tech Solution Architecture Designation: Solution Architecture Senior Analyst Qualifications: Any Graduation Years of Experience: 5-8 years About Accenture Accenture is a global professional services company with leading capabilities in digital, cloud and security.Combining unmatched experience and specialized skills across more than 40 industries, we offer Strategy and Consulting, Technology and Operations services, and Accenture Song— all powered by the world’s largest network of Advanced Technology and Intelligent Operations centers. Our 699,000 people deliver on the promise of technology and human ingenuity every day, serving clients in more than 120 countries. We embrace the power of change to create value and shared success for our clients, people, shareholders, partners and communities.Visit us at www.accenture.com What would you do? In our Service Supply Chain offering, we leverage a combination of proprietary technology and client systems to develop, execute, and deliver BPaaS (business process as a service) or Managed Service solutions across the service lifecycle: Plan, Deliver, and Recover. Join our dynamic Service Supply Chain (SSC) team and be at the forefront of helping world class organizations unlock their full potential. Imagine a career where your innovative work makes a real impact, and every day brings new challenges and opportunities for growth. We re on the lookout for passionate, talented individuals ready to make a difference. . If you re eager to shape the future and drive success, this is your chance—join us now and let’s build something extraordinary together!The Technical Solution Architect I is responsible for evaluating an organization’s business needs and determining how IT can support those needs leveraging software like Azure, and Salesforce. Aligning IT strategies with business goals has become paramount, and a solutions architect can help determine, develop, and improve technical solutions in support of business goals. The Technical Solution Architect I also bridge communication between IT and business operations to ensure everyone is aligned in developing and implementing technical solutions for business problems. The process requires regular feedback, adjustments, and problem solving in order to properly design and implement potential solutions. To be successful as a Technical Solution Architect I, you should have excellent technical, analytical, and project management skills. What are we looking for? Minimum of 5 years of IT experience Minimum of 1 year of experience in solution architecture Minimum of 1 year of Enterprise-scale project delivery experience Microsoft Azure Cloud Services Microsoft Azure Data Factory Microsoft Azure Databricks Microsoft Azure DevOps Written and verbal communication Ability to establish strong client relationship Problem-solving skills Strong analytical skills Expert knowledge of Azure Cloud Services Experience with Azure Data platforms (Logic apps, Service bus, Databricks, Data Factory, Azure integration services) CI/CD, version-controlling experience using Azure Devops Python Programming Knowledge of both traditional and modern data architecture and processing concepts, including relational databases, data warehousing, and business analytics. (e.g., NoSQL, SQL Server, Oracle, Hadoop, Spark, Knime). Good understanding of security processes, best practices, standards & issues involved in multi-tier cloud or hybrid applications. 
- Proficiency in both high-level and low-level design to build an architecture using customization or configuration on Salesforce Service Cloud, Field Service Lightning, Apex, Visualforce, Lightning, and Communities
- Expertise in designing and building real-time/batch integrations between Salesforce and other systems
- Experience designing Apex and Lightning frameworks, including Lightning patterns, error-logging frameworks, etc.

Roles and Responsibilities:
- Meet with clients to understand their needs (lead architect assessment meetings) and determine gaps between those needs and technical functionality.
- Communicate with key stakeholders across different stages of the Software Development Life Cycle.
- Create the high-level design and lead architectural decisions.
- Interact with clients to create end-to-end specifications for Azure and Salesforce cloud solutions.
- Provide clarification and answer any questions regarding the solution architecture.
- Lead the development of custom enterprise solutions.
- Own the application architecture, ensuring high performance, scalability, and availability for those applications.
- Own the overall data architecture, modeling, and related standards enforced throughout the enterprise ecosystem, including data, master data, and metadata, along with processes, governance, and change control.
- Unify the data architecture used within all applications, identifying appropriate systems of record, reference, and management.
- Share engagement experience with internal audiences and enrich collective IP.
- Conduct architecture workshops and other enablement sessions.

Qualification: Any Graduation
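For context on the Azure data-platform skills listed above, here is a minimal PySpark sketch of a Databricks transformation step that an Azure Data Factory pipeline might trigger. The storage account, container paths, and column names are illustrative assumptions, not details from the posting.

```python
# A minimal Databricks-style PySpark step: read raw files from ADLS Gen2,
# cleanse them, and publish to a curated Delta location.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-daily-load").getOrCreate()

# Read raw files landed in the lake (abfss path is a placeholder).
raw = spark.read.format("parquet").load(
    "abfss://raw@examplelake.dfs.core.windows.net/orders/"
)

# Light cleansing and enrichment before publishing to the curated zone.
curated = (
    raw.dropDuplicates(["order_id"])
       .withColumn("load_date", F.current_date())
       .filter(F.col("order_amount") > 0)
)

curated.write.mode("overwrite").format("delta").save(
    "abfss://curated@examplelake.dfs.core.windows.net/orders/"
)
```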

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

We are looking for a detail-oriented and technically strong ETL Quality Engineer to join our data engineering and QA team. The ideal candidate will be responsible for ensuring the accuracy, integrity, and reliability of data pipelines and ETL processes. You will work closely with data engineers, business analysts, and developers to validate and verify end-to-end data flows and transformations.

Key Responsibilities
- Review and analyze data requirements, source-to-target mappings, and business rules.
- Design, develop, and execute comprehensive ETL test plans, test cases, and test scripts.
- Perform data validation, transformation testing, and reconciliation across source and target systems (a minimal reconciliation sketch follows this posting).
- Identify and document defects, inconsistencies, and data quality issues.
- Validate performance of ETL jobs and data pipelines under various workloads.
- Participate in code reviews, defect triage meetings, and QA strategy planning.
- Use SQL to query, validate, and compare large datasets across environments.
- Maintain and enhance test automation frameworks for data pipeline validation.

Required Technical Skills
- Strong experience with ETL testing tools such as Informatica, Talend, SSIS, DataStage, or equivalent.
- Proficiency in SQL for complex queries, joins, aggregations, and data validation.
- Experience working with data warehouses, data lakes, or cloud-based data platforms (e.g., Snowflake, Redshift, BigQuery, Azure Synapse).
- Hands-on experience with test automation tools and frameworks related to data testing (e.g., Python, PyTest, DBT, Great Expectations).
- Knowledge of data profiling, data cleansing, and data governance practices.
- Familiarity with version control systems (e.g., Git) and CI/CD pipelines (e.g., Jenkins, Azure DevOps).
- Exposure to API testing for data integrations and ingestion pipelines (Postman, SoapUI, REST/SOAP APIs).

Candidate Profile
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 4-8 years of experience in data quality engineering or ETL QA roles.
- Excellent analytical and problem-solving skills.
- Strong communication and documentation abilities.
- Experience working in Agile/Scrum teams.

Preferred Qualifications
- Experience with cloud platforms like AWS, Azure, or GCP.
- Familiarity with Big Data ecosystems (e.g., Hadoop, Spark, Hive).
- DataOps or DevOps experience is a plus.
- Certification in data or QA-related domains (ISTQB, Microsoft, AWS Data Analytics, etc.).

Why Join Us?
- Work with modern data platforms and contribute to enterprise data quality initiatives.
- Be a key player in ensuring trust and confidence in business-critical data.
- Collaborate with cross-functional data, engineering, and analytics teams.
- Enjoy a culture that promotes growth, learning, and innovation.
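As a concrete illustration of the reconciliation responsibility above, here is a minimal PyTest sketch comparing a source and target table with PySpark. The table names (src_orders, tgt_orders) and the amount column are hypothetical placeholders.

```python
# Source-to-target reconciliation checks, runnable with `pytest`.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-reconciliation").getOrCreate()


def test_row_counts_match():
    # Row-count reconciliation: every source row should land in the target.
    src_count = spark.table("src_orders").count()
    tgt_count = spark.table("tgt_orders").count()
    assert src_count == tgt_count, f"source={src_count}, target={tgt_count}"


def test_amount_totals_match():
    # Aggregate reconciliation: compare a business-critical measure end to end.
    src_total = spark.table("src_orders").agg(F.sum("amount")).first()[0]
    tgt_total = spark.table("tgt_orders").agg(F.sum("amount")).first()[0]
    assert src_total == tgt_total


def test_no_orphan_keys():
    # Completeness check: no source key should be missing from the target.
    missing = (
        spark.table("src_orders").select("order_id")
        .subtract(spark.table("tgt_orders").select("order_id"))
    )
    assert missing.count() == 0
```

In a real framework the table pairs and tolerance rules would come from the source-to-target mapping document rather than being hard-coded.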

Posted 1 week ago

Apply

5.0 - 31.0 years

16 - 17 Lacs

Hyderabad

On-site

Job Title: Cloud Migration Consultant (AWS to Azure)
Experience: 4+ years in application assessment and migration

About the Role
We're looking for a Cloud Migration Consultant with hands-on experience assessing and migrating complex applications to Azure. You'll work closely with Microsoft business units, participating in Intake & Assessment and Planning & Design phases, creating migration artifacts, and leading client interactions. You'll also support application modernization efforts in Azure, with exposure to AWS as needed.

Key Responsibilities
- Assess application readiness and document architecture, dependencies, and migration strategy.
- Conduct interviews with stakeholders and generate discovery insights using tools like Azure Migrate, CloudockIt, and PowerShell (a discovery sketch follows this posting).
- Create architecture diagrams and migration playbooks, and maintain Azure DevOps boards.
- Set up applications both on-premises and in cloud environments (primarily Azure).
- Support proofs of concept (PoCs) and advise on migration options.
- Collaborate with application, database, and infrastructure teams to enable a smooth transition to migration factory teams.
- Track progress, blockers, and risks, reporting timely status to project leadership.

Required Skills
- 4+ years of experience in cloud migration and assessment
- Strong expertise in Azure IaaS/PaaS (VMs, App Services, ADF, etc.)
- Familiarity with AWS IaaS/PaaS (EC2, RDS, Glue, S3)
- Experience with Java (Spring Boot)/C#, .NET/Python, Angular/React.js, REST APIs
- Working knowledge of Kafka, Docker/Kubernetes, Azure DevOps
- Network infrastructure understanding (VNets, NSGs, firewalls, WAFs)
- IAM knowledge: OAuth, SAML, Okta/SiteMinder
- Experience with Big Data tools like Databricks, Hadoop, Oracle, DocumentDB

Preferred Qualifications
- Azure or AWS certifications
- Prior experience with enterprise cloud migrations (especially in the Microsoft ecosystem)
- Excellent communication and stakeholder management skills

Educational qualification: B.E/B.Tech/MCA
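As a hedged illustration of scripted discovery in the Intake & Assessment phase, the sketch below inventories the VMs in a subscription with the Azure SDK for Python. The subscription ID is a placeholder, and the posting itself names Azure Migrate, CloudockIt, and PowerShell rather than this exact approach.

```python
# Inventory VMs across a subscription for an assessment workbook.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder

credential = DefaultAzureCredential()
compute = ComputeManagementClient(credential, SUBSCRIPTION_ID)

# Capture the fields a migration assessment typically records per VM.
for vm in compute.virtual_machines.list_all():
    print(vm.name, vm.location, vm.hardware_profile.vm_size)
```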

Posted 1 week ago

Apply

8.0 - 10.0 years

15 - 19 Lacs

Bengaluru

Work from Office

Position: Solution Architect (ETL)
Location: Bangalore
Experience: 8 Yrs
CTC: As per industry standards
Immediate Joiners

# Job Summary
We are seeking an experienced Solution Architect (ETL) to design and implement data integration solutions using ETL (Extract, Transform, Load) tools. The ideal candidate will have a strong background in data warehousing, ETL, and data architecture.

# Key Responsibilities
1. Design and Implement ETL Solutions: Design and implement ETL solutions using tools such as Informatica PowerCenter, Microsoft SSIS, or Oracle Data Integrator.
2. Data Architecture: Develop and maintain data architectures that meet business requirements and ensure data quality and integrity.
3. Data Warehousing: Design and implement data warehouses that support business intelligence and analytics.
4. Data Integration: Integrate data from various sources, including databases, files, and APIs.
5. Data Quality and Governance: Ensure data quality and governance by implementing data validation, data cleansing, and data standardization processes (a small cleansing sketch follows this posting).
6. Collaboration: Collaborate with cross-functional teams, including business stakeholders, data analysts, and IT teams.
7. Technical Leadership: Provide technical leadership and guidance to junior team members.

# Requirements
1. Education: Bachelor's degree in Computer Science, Information Technology, or a related field.
2. Experience: Minimum 8 years of experience in ETL development, data warehousing, and data architecture.
3. Technical Skills:
- ETL tools such as Informatica PowerCenter, Microsoft SSIS, or Oracle Data Integrator.
- Data warehousing and business intelligence tools such as Oracle, Microsoft, or SAP.
- Programming languages such as Java, Python, or C#.
- Data modeling and data architecture concepts.
4. Soft Skills:
- Excellent communication and interpersonal skills.
- Strong problem-solving and analytical skills.
- Ability to work in a team environment and lead junior team members.

# Nice to Have
1. Certifications: Certifications in ETL tools, data warehousing, or data architecture.
2. Cloud Experience: Experience with cloud-based data integration and data warehousing solutions.
3. Big Data Experience: Experience with big data technologies such as Hadoop, Spark, or NoSQL databases.

# What We Offer
1. Competitive Salary: Competitive salary and benefits package.
2. Opportunities for Growth: Opportunities for professional growth and career advancement.
3. Collaborative Work Environment: Collaborative work environment with a team of experienced professionals.
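To make the data cleansing and standardization item concrete, here is a small pandas sketch; the columns and rules are illustrative assumptions, and in an Informatica or SSIS shop the same logic would live in mapping transformations rather than Python.

```python
# Standardize, deduplicate, and validate a small customer dataset.
import pandas as pd

customers = pd.DataFrame({
    "customer_id": [101, 101, 102, 103],
    "email": ["A@X.COM ", "a@x.com", None, "c@y.com"],
    "country": ["india", "India", "IN", "USA"],
})

# Standardization: normalize casing/whitespace, map countries to ISO codes.
customers["email"] = customers["email"].str.strip().str.lower()
customers["country"] = customers["country"].str.upper().replace(
    {"INDIA": "IN", "USA": "US"}
)

# Cleansing and validation: dedupe on the business key, flag missing emails.
customers = customers.drop_duplicates(subset="customer_id", keep="first")
invalid = customers[customers["email"].isna()]
print(customers, invalid, sep="\n\n")
```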

Posted 1 week ago

Apply

2.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Key Responsibilities:
- Perform development and support activities for the data warehousing domain using ETL tools and technologies.
- Understand high-level design and application interface design, and build the low-level design.
- Perform application analysis and propose technical solutions for application enhancement, or resolve production issues.
- Perform development and deployment tasks, including coding, unit testing, and deployment.
- Create the necessary documentation for all project deliverable phases.
- Handle production issues (Tier 2 support, weekend on-call rotation) to resolve production issues and ensure SLAs are met.

Technical Skills:
Mandatory:
- Working experience in Azure Databricks/PySpark (a minimal pipeline sketch follows this posting).
- Expert knowledge of Oracle/SQL, with the ability to write complex SQL/PL-SQL and performance-tune it.
- 2+ years of experience in Snowflake.
- 2+ years of hands-on experience in Spark or Databricks building data pipelines.
- Strong experience with cloud technologies.
- 1+ years of hands-on experience in development, performance tuning, and loading into Snowflake.
- Experience working with Azure Repos or GitHub.
- 1+ years of hands-on experience with Azure DevOps, GitHub, or any other DevOps tool.
- Hands-on experience in Unix and advanced Unix shell scripting.
- Open to working in shifts.

Good to Have:
- Willingness to learn all data warehousing technologies and work outside of your comfort zone in other ETL technologies (Oracle, Qlik Replicate, GoldenGate, Hadoop); hands-on working experience is a plus.
- Knowledge of job schedulers.

Behavioral Skills:
- Eagerness and hunger to learn.
- Good problem-solving and decision-making skills.
- Good communication skills within the team, across sites, and with the customer.
- Ability to stretch working hours when necessary to support business needs.
- Ability to work independently and drive issues to closure; consult with relevant parties when necessary, and raise risks in a timely manner.
- Ability to handle multiple and complex work assignments effectively while consistently delivering high-quality work.

About Matrix
Matrix is a global, dynamic, and fast-growing leader in technical consultancy and technology services, employing over 13,000 professionals worldwide. Since its founding in 2001, Matrix has expanded through strategic acquisitions and significant ventures, cementing its position as a pioneer in the tech industry. We specialize in developing and implementing cutting-edge technologies, software solutions, and products. Our offerings include infrastructure and consulting services, IT outsourcing, offshore solutions, training, and assimilation. Matrix also proudly represents some of the world's leading software vendors. With extensive experience spanning both the private and public sectors, including Finance, Telecom, Healthcare, Hi-Tech, Education, Defense, and Security, Matrix serves a distinguished clientele in Israel and an ever-expanding global customer base. Our success stems from a team of talented, creative, and dedicated professionals who are passionate about delivering innovative solutions. We prioritize attracting and nurturing top talent, recognizing that every employee's contribution is essential to our success. Matrix is committed to fostering a collaborative and inclusive work environment where learning, growth, and shared success thrive. Join the winning team at Matrix! Here, you'll find a challenging yet rewarding career, competitive compensation and benefits, and opportunities to be part of a highly respected organization, all while having fun along the way.
To Learn More, Visit: www.matrix-ifs.com

EQUAL OPPORTUNITY EMPLOYER: Matrix is an Equal Opportunity Employer and prohibits discrimination and harassment of any kind. Matrix is committed to the principle of equal employment opportunity for all employees, providing employees with a work environment free of discrimination and harassment. All employment decisions at Matrix are based on business needs, job requirements, and individual qualifications, without regard to race, color, religion or belief, family or parental status, or any other status protected by the laws or regulations in our locations. Matrix will not tolerate discrimination or harassment based on any of these characteristics. Matrix encourages applicants of all ages.
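As an illustration of the mandatory Databricks/PySpark and Snowflake skills in the posting above, here is a hedged sketch of a PySpark step loading a transformed DataFrame into Snowflake through the Snowflake Spark connector. The connection options and table names are placeholders; real jobs would read credentials from a secret scope rather than hard-coding them.

```python
# Transform a source table and append it to a Snowflake target table.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dw-load").getOrCreate()

sf_options = {
    "sfURL": "example_account.snowflakecomputing.com",  # placeholder
    "sfUser": "etl_user",                               # placeholder
    "sfPassword": "***",                                # use a secret scope
    "sfDatabase": "ANALYTICS",
    "sfSchema": "STAGING",
    "sfWarehouse": "LOAD_WH",
}

# Select today's rows and stamp them with a load timestamp.
daily = (
    spark.table("raw_events")
         .filter(F.col("event_date") == F.current_date())
         .withColumn("loaded_at", F.current_timestamp())
)

# The "snowflake" short format name assumes a Databricks runtime with the
# connector preinstalled; elsewhere use "net.snowflake.spark.snowflake".
daily.write.format("snowflake").options(**sf_options) \
     .option("dbtable", "DAILY_EVENTS").mode("append").save()
```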

Posted 1 week ago

Apply

3.0 years

0 Lacs

Greater Kolkata Area

On-site

Project Role: Software Development Engineer
Project Role Description: Analyze, design, code, and test multiple components of application code across one or more clients. Perform maintenance, enhancements, and/or development work.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full-time education

Summary: As a Software Development Engineer, you will analyze, design, code, and test multiple components of application code across one or more clients. You will perform maintenance, enhancements, and/or development work, contributing to the growth and success of the projects.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute to providing solutions to work-related problems.
- Collaborate with team members to analyze, design, and develop software solutions.
- Participate in code reviews and provide constructive feedback.
- Troubleshoot and debug software applications to ensure optimal performance.
- Research and implement new technologies to enhance existing software.
- Document software specifications, user manuals, and technical documentation.

Professional & Technical Skills:
- Must-have skills: Proficiency in the Databricks Unified Data Analytics Platform.
- Strong understanding of data engineering concepts and best practices.
- Experience with cloud platforms such as AWS or Azure.
- Hands-on experience with big data technologies like Hadoop or Spark.
- Knowledge of programming languages such as Python, Java, or Scala.

Additional Information:
- The candidate should have a minimum of 3 years of experience in the Databricks Unified Data Analytics Platform.
- This position is based at our Kolkata office.
- A 15-year full-time education is required.

Posted 1 week ago

Apply

5.0 - 9.0 years

15 - 30 Lacs

Bengaluru

Work from Office

Job Title: PySpark Data Engineer
We're growing our Data Engineering team at ValueLabs and looking for a talented individual to build scalable data pipelines on the Cloudera Data Platform!
Experience: 5 to 9 years.

Job Description:
- Data Pipeline Development: Design, develop, and maintain highly scalable and optimized ETL pipelines using PySpark on the Cloudera Data Platform, ensuring data integrity and accuracy.
- Data Ingestion: Implement and manage data ingestion processes from a variety of sources (e.g., relational databases, APIs, file systems) to the data lake or data warehouse on CDP.
- Data Transformation and Processing: Use PySpark to process, cleanse, and transform large datasets into meaningful formats that support analytical needs and business requirements.
- Performance Optimization: Conduct performance tuning of PySpark code and Cloudera components, optimizing resource utilization and reducing the runtime of ETL processes.
- Data Quality and Validation: Implement data quality checks, monitoring, and validation routines to ensure data accuracy and reliability throughout the pipeline.
- Automation and Orchestration: Automate data workflows using tools like Apache Oozie, Airflow, or similar orchestration tools within the Cloudera ecosystem (an Airflow sketch follows this posting).
- Monitoring and Maintenance: Monitor pipeline performance, troubleshoot issues, and perform routine maintenance on the Cloudera Data Platform and associated data processes.
- Collaboration: Work closely with other data engineers, analysts, product managers, and other stakeholders to understand data requirements and support various data-driven initiatives.
- Documentation: Maintain thorough documentation of data engineering processes, code, and pipeline configurations.

Qualifications
Education and Experience
- Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field.
- 3+ years of experience as a Data Engineer, with a strong focus on PySpark and the Cloudera Data Platform.

Technical Skills
- PySpark: Advanced proficiency in PySpark, including working with RDDs, DataFrames, and optimization techniques.
- Cloudera Data Platform: Strong experience with Cloudera Data Platform (CDP) components, including Cloudera Manager, Hive, Impala, HDFS, and HBase.
- Data Warehousing: Knowledge of data warehousing concepts and ETL best practices, and experience with SQL-based tools (e.g., Hive, Impala).
- Big Data Technologies: Familiarity with Hadoop, Kafka, and other distributed computing tools.
- Orchestration and Scheduling: Experience with Apache Oozie, Airflow, or similar orchestration frameworks.
- Scripting and Automation: Strong scripting skills in Linux.
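As a sketch of the orchestration item above, the following Airflow DAG submits a daily PySpark job. The DAG id, script path, and connection id are illustrative assumptions; SparkSubmitOperator requires the apache-airflow-providers-apache-spark package, and the `schedule` argument assumes Airflow 2.4 or later.

```python
# Daily orchestration of a PySpark ETL job via Airflow.
from datetime import datetime

from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import (
    SparkSubmitOperator,
)

with DAG(
    dag_id="daily_orders_etl",           # placeholder name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    run_etl = SparkSubmitOperator(
        task_id="run_pyspark_etl",
        application="/opt/jobs/orders_etl.py",  # placeholder script path
        conn_id="spark_default",
        conf={"spark.dynamicAllocation.enabled": "true"},
    )
```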

Posted 1 week ago

Apply

7.0 - 12.0 years

10 - 14 Lacs

Gurugram

Work from Office

Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: Apache Spark
Good-to-have skills: Python (Programming Language), AWS Architecture
Minimum 7.5 year(s) of experience is required
Educational Qualification: 15 years full-time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. You will oversee the application development process and ensure successful project delivery.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Lead the application development process.
- Coordinate with stakeholders to gather requirements.
- Ensure timely project delivery.

Professional & Technical Skills:
- Must-have skills: Proficiency in Apache Spark, Python (Programming Language), AWS Architecture.
- Strong understanding of distributed computing frameworks.
- Experience in building scalable and reliable applications.
- Knowledge of data processing and transformation.
- Hands-on experience in performance tuning and optimization (a tuning sketch follows this posting).

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Apache Spark.
- This position is based at our Gurugram office.
- A 15-year full-time education is required.
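To ground the performance-tuning bullet above, here is a short PySpark sketch of two standard optimizations: broadcasting a small dimension table to avoid a shuffle join, and caching a DataFrame that is reused by multiple actions. Table and column names are placeholders.

```python
# Two common Spark tuning moves: broadcast joins and caching.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("tuning-demo").getOrCreate()

facts = spark.table("sales_facts")   # large fact table (placeholder)
dims = spark.table("store_dims")     # small dimension table (placeholder)

# Broadcast the small side so each executor joins locally, skipping a shuffle.
enriched = facts.join(broadcast(dims), on="store_id", how="left")

# Cache before multiple actions so the join is computed only once.
enriched.cache()
print(enriched.count())
daily = enriched.groupBy("sale_date").agg(F.sum("amount").alias("revenue"))
daily.show()
```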

Posted 1 week ago

Apply

3.0 - 8.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Apache Spark
Good-to-have skills: PySpark, Python (Programming Language), Apache Hadoop
Minimum 3 year(s) of experience is required
Educational Qualification: 15 plus years of education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. Your typical day will involve collaborating with team members to develop innovative solutions and ensure seamless application functionality.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute to providing solutions to work-related problems.
- Develop and implement scalable applications using Apache Spark.
- Collaborate with cross-functional teams to ensure application efficiency.
- Troubleshoot and debug applications to optimize performance.
- Stay updated on industry trends and technologies to enhance application development processes.
- Provide technical guidance and mentorship to junior team members.

Professional & Technical Skills:
- Must-have skills: Proficiency in Apache Spark, Python (Programming Language), Apache Hadoop, PySpark.
- Strong understanding of distributed computing and data processing.
- Experience in developing and optimizing Spark jobs for performance.
- Knowledge of data streaming and real-time processing with Spark Streaming (a structured-streaming sketch follows this posting).
- Familiarity with cloud platforms for deploying Spark applications.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Apache Spark.
- This position is based at our Bengaluru office.
- 15-plus years of education is required.
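As a sketch of the Spark Streaming skill mentioned above, here is a minimal Structured Streaming job that reads from Kafka and prints windowed counts. The broker address and topic are placeholders, and the Kafka source requires the spark-sql-kafka package on the classpath.

```python
# Windowed event counts over a Kafka stream, written to the console.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("stream-demo").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder
    .option("subscribe", "clickstream")                # placeholder topic
    .load()
)

# Count events per key in one-minute windows over the Kafka timestamp.
counts = (
    events.select(F.col("key").cast("string"), F.col("timestamp"))
    .groupBy(F.window("timestamp", "1 minute"), "key")
    .count()
)

query = counts.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()
```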

Posted 1 week ago

Apply

2.0 - 3.0 years

10 - 14 Lacs

Coimbatore

Work from Office

Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: Microsoft BOT Framework
Minimum 7.5 year(s) of experience is required
Educational Qualification: Must have: BE/BTech/MCA. Good to have: ME/MTech.

Key Responsibilities:
- Work closely with client teams to define and architect the solution, estimating the components required to provide a comprehensive AI solution that meets and exceeds the client's expectations and delivers tangible business value.
- Deliver cognitive applications that solve or augment business issues using leading AI technology frameworks.
- Direct and influence the client's Bot/Virtual Agent architecture, solutioning, and development.

Technical Experience:
- 7-8 years of experience and a thorough understanding of the Azure PaaS landscape.
- 2-3 years of experience with the Microsoft AI/Bot Framework (a minimal bot sketch follows this posting).
- Good to have: knowledge of and experience in Azure Machine Learning, AI, and Azure HDInsight.

Professional Attributes:
- Strong problem-solving and analytical skills.
- Good communication skills.
- Ability to work independently with little supervision or as part of a team.
- Able to work and deliver under tight timelines.

Qualification: Must have: BE/BTech/MCA. Good to have: ME/MTech.
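For a flavor of the Bot Framework work described above, here is a hedged sketch of a minimal echo bot using the Python SDK (botbuilder-core). Production bots in this role are more often built in C# or Node, and this skeleton omits state, dialogs, and channel configuration.

```python
# Simplest possible Bot Framework bot: echo messages, greet new members.
from botbuilder.core import ActivityHandler, MessageFactory, TurnContext


class EchoBot(ActivityHandler):
    """Replies to every user message with the text it received."""

    async def on_message_activity(self, turn_context: TurnContext):
        text = turn_context.activity.text
        await turn_context.send_activity(
            MessageFactory.text(f"You said: {text}")
        )

    async def on_members_added_activity(self, members_added, turn_context):
        # Greet users (but not the bot itself) when they join the conversation.
        for member in members_added:
            if member.id != turn_context.activity.recipient.id:
                await turn_context.send_activity("Hello! I'm a demo bot.")
```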

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.
