Posted: 5 days ago | Platform: Foundit

Work Mode: On-site
Job Type: Full Time

Job Description

Responsibilities:

- Analyze and organize raw data: Work with various data sources, parse documents, extract relevant information, and structure it for further processing.
- Build data systems and pipelines: Construct robust data pipelines that move data from source to target.
- Evaluate business needs and objectives: Understand the company's requirements and align data systems accordingly.
- Interpret trends and patterns: Use analytical skills to identify patterns in data.
- Conduct complex data analysis and report on results: Dive deep into data to extract meaningful information.
- Prepare data for prescriptive and predictive modeling: Ensure data is ready for machine learning and statistical analysis.
- Build algorithms and prototypes: Develop and test data processing algorithms.
- Combine raw information from different sources: Integrate data from various systems.
- Explore ways to enhance data quality and reliability: Continuously improve data processes.
- Identify opportunities for data acquisition: Stay informed about new data sources.
- Develop analytical tools and programs: Create tools to facilitate data analysis.
- Collaborate with data scientists and architects: Work closely with other data professionals to achieve common goals.
- Implement data access controls, data encryption, and data masking techniques.
- Apply data visualization tools and techniques to present data.
- Create and maintain dashboards and reports for stakeholders.

Mandatory Skills:

- Previous experience as a data engineer or in a similar role, with at least 6+ years of relevant work experience.
- Good knowledge of programming languages and frameworks (e.g., Python, Java, Spark).
- Ability to design, develop, and maintain data pipelines.
- Exposure to process automation.
- Experience working with REST APIs and services, as well as messaging and event technologies.
- Experience working with large and complex data sets.
- Hands-on experience with SQL/NoSQL databases (RDS, Redshift, DynamoDB, Synapse, BigQuery, MongoDB, etc.).
- Batch/stream data processing experience.
- Monitor, troubleshoot, and optimize the performance of data infrastructure to ensure scalability, reliability, and cost efficiency.
- Stay up to date with cloud services and best practices in data engineering to continuously improve our data ecosystem.
- Good exposure to at least two public cloud platforms (Azure/AWS/GCP).
- Experience with graph databases (e.g., Neptune, RDF4j).
- Experience with vector databases (e.g., Pinecone, FAISS).

Good-to-Have Skills:

- Knowledge of or work experience in the insurance, mortgage, or banking domains.
- Proficiency in building stream processing systems using Kinesis, Kafka, etc.
- Familiarity with Docker, Kubernetes, CI/CD, and cloud services (AWS, Azure, GCP).
- Technical expertise with segmentation techniques.
- NLP knowledge.

