10.0 - 12.0 years
10 - 12 Lacs
Bengaluru, Karnataka, India
On-site
ETL DataStage Lead
ETL tool: DataStage. Data modeling: dimensional modeling (star, snowflake). Database: MS SQL Server and other RDBMS. Languages: SQL, PL/SQL. Tools: TOAD, SQL Developer, SQL Plus, Microsoft Visio.
Responsibilities: Total 10+ years of IT experience in data warehousing and business intelligence. Hands-on experience in ETL/ELT, preferably using DataStage. Experience developing DataStage ETL jobs and sequencers in the Enterprise Warehouse based on the client's requirements. Ability to work with analysts and data modelers to capture business logic and translate it into DataStage ETL jobs. Experience fine-tuning DataStage ETL jobs for high-volume, high-velocity batch processing. Experience defining best-practice and development-standards documents for DataStage ETL jobs. Experienced in the design, development, testing, and implementation of DataStage ETL programs.
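DataStage itself is configured visually rather than coded, but the dimensional-modeling ideas the role names (star schemas with surrogate-keyed dimensions and fact tables) translate directly to code. A minimal sketch in pandas, with invented table and column names:

```python
# Hypothetical star-schema load: build a surrogate-keyed dimension,
# then a fact table that references it. Data is invented.
import pandas as pd

orders = pd.DataFrame({
    "order_id": [1, 2, 3],
    "customer": ["Acme", "Globex", "Acme"],
    "amount": [120.0, 75.5, 210.0],
})

# Dimension: one row per customer, with a surrogate key.
dim_customer = orders[["customer"]].drop_duplicates().reset_index(drop=True)
dim_customer["customer_key"] = dim_customer.index + 1

# Fact: measures plus a foreign key into the dimension.
fact_sales = orders.merge(dim_customer, on="customer")[
    ["order_id", "customer_key", "amount"]
]
print(fact_sales)
```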
Posted 16 hours ago
3.0 - 7.0 years
0 Lacs
Navi Mumbai, Maharashtra
On-site
As a Data Scientist at TransOrg Analytics, you will have the opportunity to be part of a team that specializes in Data Science, Data Engineering, and Generative AI. We provide advanced analytics solutions to industry leaders and Fortune 500 companies across India, US, APAC, and the Middle East. Our focus is on leveraging data science to streamline, optimize, and accelerate our clients' businesses. We are looking for individuals who have a Bachelor's degree in Computer Science, Engineering, Statistics, Mathematics, or a related quantitative field. The ideal candidate should have 3-5 years of relevant experience in data analysis or a related role. Proficiency in data pre-processing and manipulation using Python or SQL is required. Additionally, experience in statistical analysis and modeling techniques such as regression, classification, and clustering is essential. Experience with machine learning and deep learning frameworks and libraries is a plus. Familiarity with data processing tools and frameworks, as well as data visualization tools like Tableau or Power BI, is highly desirable. The successful candidate should have a proven track record of managing data project delivery, meeting deadlines, managing stakeholder expectations, and producing clear deliverables. Strong problem-solving skills with a keen attention to detail are crucial for this role. The ability to think critically and provide data-backed insights is key. Excellent communication skills, both verbal and written, are necessary to effectively collaborate with team members and stakeholders. An understanding of Cloud Platforms such as Azure, AWS, or GCP is beneficial, along with the ability to utilize them for developing, training, and testing deep learning models. Familiarity with cloud-based data warehousing platforms like Snowflake is also advantageous. If you are passionate about data science, analytics, and driving business impact through data-driven insights, we invite you to explore the opportunity of joining our team at TransOrg Analytics. Visit our website at www.transorg.com to learn more about us and our work.
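As a rough illustration of the modeling techniques the posting lists (regression, classification, clustering), here is a minimal scikit-learn classification sketch on synthetic data; the dataset and model choice are illustrative only:

```python
# Train and evaluate a simple classifier, as a stand-in for the
# statistical modeling work described in the posting.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```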
Posted 23 hours ago
7.0 - 12.0 years
7 - 12 Lacs
Delhi, India
On-site
As a Lead Software Engineer, you will be responsible for overseeing the end-to-end development, reliability, and performance of our Datafeed applications. You will lead and manage engineering teams, drive strategic initiatives, and collaborate closely with clients and internal stakeholders to enhance functionality, troubleshoot issues, and ensure continuous improvement.
Responsibilities: Oversee the development and maintenance of Datafeed applications using Node.js or ASP.NET. Lead and manage engineering teams, providing technical direction, mentorship, and performance management. Collaborate with clients and internal stakeholders to troubleshoot and resolve complex issues related to Datafeed processing, data accuracy, and application functionality. Design and implement strategic enhancements to improve the reliability, performance, and usability of our applications. Drive best practices in software development, documentation, and knowledge sharing across teams.
Technical Skills: Expert proficiency in Node.js or ASP.NET for developing and maintaining Datafeed applications. Deep understanding of application development and architecture. Extensive knowledge of data processing and data manipulation techniques. Familiarity with relational and non-relational databases and queries. Mastery of troubleshooting and debugging techniques.
Soft Skills: Exceptional communication skills, both written and verbal. Strong problem-solving and analytical skills. Proven ability to lead and manage engineering teams. Customer-focused mindset with a commitment to delivering high-quality solutions.
Basic Qualifications: Bachelor's degree in Computer Science, Engineering, or related field. Minimum 7 years of experience in software development or a related field. Experience leading the development and support of Datafeed applications or similar financial systems is a plus.
Preferred Qualifications: Experience with cloud platforms such as AWS or Azure. Knowledge of financial markets and Datafeeds. Experience with DevOps practices and tools.
Posted 3 days ago
0.0 - 4.0 years
0 Lacs
Delhi
On-site
As an intern at S.S.Rana & Co., you will have the opportunity to research and develop machine learning models from inception to deployment. You will work collaboratively with teams to translate business needs into robust technical solutions. Your responsibilities will also include learning and crafting algorithms for efficiently processing vast datasets in real-time. Furthermore, you will integrate models with various platforms for real-world applications and focus on enhancing model performance to achieve greater accuracy and efficiency. It will be essential for you to stay updated on technological advancements to drive innovation within the company. S.S.Rana & Co., established in 1989, is a leading Indian full-service law firm specializing in intellectual property and corporate laws. The firm boasts a dedicated team of advocates, engineers, software professionals, paralegals, and support staff who excel in various areas of practice, including intellectual property rights, corporate and commercial laws, and dispute resolution. With over three decades of experience, the firm has been providing assistance to clients in India and worldwide. Their clientele ranges from Fortune 500 companies to Indian multinationals, individuals, inventors, and grassroots innovators across diverse industries such as aviation, automobile, bio-medical, pharmaceutical, consumer goods, real estate, IT technology, e-commerce, heavy machinery, media and entertainment, sports and gaming, hospitality, and education.
Posted 5 days ago
5.0 - 9.0 years
0 Lacs
Ahmedabad, Gujarat
On-site
We are looking for a highly motivated and experienced Senior Software Engineer to join our team and contribute significantly to the development and enhancement of our cutting-edge options analytics platform. In this role, you will be responsible for designing, developing, and implementing robust and scalable Java-based solutions focused on calculating and analyzing options pricing models and risk metrics. The ideal candidate will have a solid grasp of financial markets, options theory, and a proven track record of creating high-performance, data-driven applications in Java.
Your responsibilities will include designing, developing, and maintaining Java-based components for our options analytics platform, such as pricing models, risk calculations (Greeks, VaR, etc.), and data processing pipelines. You will also be tasked with implementing and optimizing complex algorithms for option pricing and risk analysis to ensure accuracy and performance. Collaboration with product managers and stakeholders to understand requirements and translate them into technical solutions is a key aspect of this role. Additionally, you will be expected to write clean, well-documented, and testable code following best practices, participate in code reviews, and contribute to process improvements within the team. Troubleshooting and debugging issues to ensure platform stability and reliability will also be part of your responsibilities.
To succeed in this role, you should hold a Bachelor's or Master's degree in Computer Science, Financial Engineering, or a related field, along with at least 5 years of experience in software development with a focus on Java. A strong understanding of object-oriented programming principles and design patterns is essential, as is proven experience in building and optimizing high-performance, multi-threaded applications. You should also have a solid understanding of financial markets, options theory, derivative pricing models, numerical methods, and algorithms used in options pricing and risk management. Proficiency in working with large datasets, testing frameworks, continuous integration/deployment pipelines, building distributed systems and APIs, as well as excellent problem-solving and analytical skills are required. Strong communication and collaboration skills are also essential for this role.
Trading Technologies offers competitive benefits, including medical, dental, vision, flexible work schedules with a hybrid work model, generous PTO days, tech resources, milestone anniversary bonuses, and a culture that promotes diversity and inclusion. Trading Technologies is a leading Software-as-a-Service technology platform provider to the global capital markets industry, connecting to major international exchanges and liquidity venues, offering advanced tools for trade execution, market data solutions, analytics, risk management, and more to a wide range of clients in the financial sector.
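For a flavor of the pricing and Greeks math this platform revolves around, here is the Black-Scholes price and delta of a European call. It is sketched in Python for brevity even though the role itself is Java-focused, and the inputs are arbitrary example values:

```python
# Black-Scholes call price and delta (one of the Greeks mentioned above).
from math import exp, log, sqrt
from statistics import NormalDist

def call_price_and_delta(S, K, T, r, sigma):
    """S: spot, K: strike, T: years to expiry, r: rate, sigma: volatility."""
    N = NormalDist().cdf
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    price = S * N(d1) - K * exp(-r * T) * N(d2)
    return price, N(d1)  # the delta of a European call is N(d1)

print(call_price_and_delta(S=100, K=105, T=0.5, r=0.03, sigma=0.2))
```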
Posted 5 days ago
0.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Before you apply to a job, select your language preference from the options available at the top right of this page. Explore your next opportunity at a Fortune Global 500 organization. Envision innovative possibilities, experience our rewarding culture, and work with talented teams that help you become better every day. We know what it takes to lead UPS into tomorrow: people with a unique combination of skill + passion. If you have the qualities and drive to lead yourself or teams, there are roles ready to cultivate your skills and take you to the next level.
Job Description
JOB SUMMARY: This position participates in the design, build, test, and delivery of Machine Learning (ML) models and software components that solve challenging business problems for the organization, working in collaboration with the Business, Product, Architecture, Engineering, and Data Science teams. This position engages in assessment and analysis of data sources of structured and unstructured data (internal and external) to uncover opportunities for ML and Artificial Intelligence (AI) automation, predictive methods, and quantitative modeling across the organization. This position establishes and configures scalable and cost-effective end-to-end solution design pattern components to support prediction model transactions. This position designs trials and tests to measure the success of software and systems, and works with teams, or individually, to implement ML/AI models for production scale.
Responsibilities: The MLOps developer works on maintaining existing models that support applications such as the digital insurance application and claims recommendation engine. They will be responsible for setting up cloud monitoring jobs and performing quality assurance and testing for edge cases to ensure the ML product works within the application. They will also need to be on call on weekends to bring the application back online in case of failure. Studies and transforms data science prototypes into ML systems using appropriate datasets and data representation models. Researches and implements appropriate ML algorithms and tools that create new systems and processes powered with ML and AI tools and techniques according to business requirements. Collaborates with others to deliver ML products and systems for the organization. Designs workflows and analysis tools to streamline the development of new ML models at scale. Creates and evolves ML models and software that enable state-of-the-art intelligent systems using best practices in all aspects of engineering and modelling lifecycles. Extends existing ML libraries and frameworks with the developments in the Data Science and Machine Learning field. Establishes, configures, and supports scalable Cloud components that serve prediction model transactions. Integrates data from authoritative internal and external sources to form the foundation of a new Data Product that would deliver insights that support business outcomes necessary for ML systems.
Qualifications/Requirements: Ability to code in Python/Spark, with enough knowledge of Apache Beam to build Beam jobs in Dataproc for data transfer. Experience designing and building data-intensive solutions using distributed computing within a multi-line business environment. Familiarity with Machine Learning and Artificial Intelligence frameworks (e.g., Keras, PyTorch), libraries (e.g., scikit-learn), and tools and Cloud-AI technologies that aid in streamlining the development of Machine Learning or AI systems.
Experience in establishing and configuring scalable and cost-effective end-to-end solution design pattern components to support the serving of batch and live streaming prediction model transactions. Possesses creative and critical thinking skills. Experience in developing Machine Learning models such as: Classification/Regression Models, NLP models, and Deep Learning models; with a focus on productionizing those models into product features. Experience with scalable data processing, feature development, and model optimization. Solid understanding of statistics such as forecasting, time series, hypothesis testing, classification, clustering or regression analysis, and how to apply that knowledge in understanding and evaluating Machine Learning models. Knowledgeable in the software development lifecycle (SDLC), Agile development practices, and cloud technology infrastructures and patterns related to product development. Advanced math skills in Linear Algebra, Bayesian Statistics, Group Theory. Works collaboratively, both in a technical and cross-functional context. Strong written and verbal communication. Bachelor's (BS/BA) degree in a quantitative field of mathematics, computer science, physics, economics, engineering, statistics (operations research, quantitative social science, etc.), international equivalent, or equivalent job experience.
Employee Type: Permanent
UPS is committed to providing a workplace free of discrimination, harassment, and retaliation.
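The qualifications mention building Apache Beam jobs in Dataproc for data transfer. A minimal Beam pipeline in Python looks like the sketch below; the bucket paths are placeholders, and on Dataproc you would run it with a Spark or Dataflow runner rather than the default DirectRunner:

```python
# Toy Beam pipeline: read text, drop empty lines, uppercase, write out.
import apache_beam as beam

with beam.Pipeline() as p:  # DirectRunner unless configured otherwise
    (p
     | "Read" >> beam.io.ReadFromText("gs://example-bucket/input.txt")
     | "NonEmpty" >> beam.Filter(lambda line: line.strip() != "")
     | "Upper" >> beam.Map(str.upper)
     | "Write" >> beam.io.WriteToText("gs://example-bucket/output"))
```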
Posted 5 days ago
3.0 - 7.0 years
0 Lacs
Andhra Pradesh
On-site
As a Credit Officer, your primary responsibility will be to verify whether all loan applications are assessed in accordance with the credit policy, ensuring that any deviations are appropriately mitigated and documented. You will have direct interactions with customers, conducting personal discussions and interviews to evaluate their creditworthiness. It will be crucial for you to ensure that all files are processed within a reasonable timeline while establishing strong relationships with customers. In addition to customer interactions, you will collaborate with the sales and operations teams to gather accurate data for loan processing. You will play a key role in the security creation process for secured loans, ensuring compliance with all KYC guidelines mandated by the RBI. Your analytical skills will be essential for processing data using computer spreadsheets and evaluating credit parameters. Your role will also involve relationship management, where you will coordinate with sales and relationship managers to ensure proper documentation and address any audit queries promptly. A deep understanding of various risk dimensions such as operational risk, credit risk, and market risk will be necessary to perform your duties effectively. Furthermore, you should be comfortable assessing clients even in the absence of audited financial statements. Flexibility and willingness to travel within the city may be required to fulfill the responsibilities of this role effectively.
Posted 5 days ago
3.0 - 7.0 years
0 Lacs
Kolkata, West Bengal
On-site
As a Data Processing Specialist at Modulus, you will play a crucial role in assisting internal clients to determine appropriate data processing needs aligned with the clients' business objectives. Your responsibilities will include collaborating with Project Managers and Delivery Managers to develop project estimates, tabulation plans, and banner plans. You will work closely with vendors to ensure the successful execution of back-end deliverables and post-launch data integrity. Your essential duties will also involve verifying data accuracy, creating tabulation programs, and producing data tables to meet research needs. Additionally, you will be responsible for coordinating with external vendors for coding and tabulation requirements, as well as reviewing data and producing top-line summary reports. Your role will require a hands-on approach in data table design, developing specifications, and facilitating data compilation for research reporting. To excel in this role, you should have practical experience with various quantitative research methodologies and possess an academic background in fundamental research techniques and analyses. Proficiency in tools such as Quantum, SPSS, SAS, Merlin, Excel, SQL+, and MS Access is essential. Your key behavioral attributes should include analytical thinking, clear communication, attention to detail, sincerity, and result orientation. As a part of the Modulus team, you will collaborate with the Research Team to ensure proper data management and the PMT Team for on-time data flow and coordination. You will also interact with clients to establish and maintain a working relationship. If you are ready to take on this challenging yet rewarding role at Modulus, please send your resume to career@modulus-research.com. Join us and be a part of a dynamic organization that values trust, innovation, and professional growth.
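The tabulation and banner-plan work described here is usually done in packages like Quantum or SPSS, but a pandas cross-tab conveys the idea of a banner table; the survey columns below are invented for the example:

```python
# A tiny banner-style table: answers down the side, regions across
# the top, with column percentages.
import pandas as pd

responses = pd.DataFrame({
    "region": ["North", "South", "North", "South", "North"],
    "answer": ["Yes", "No", "Yes", "Yes", "No"],
})

banner = pd.crosstab(responses["answer"], responses["region"],
                     normalize="columns") * 100
print(banner.round(1))
```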
Posted 6 days ago
2.0 - 6.0 years
0 Lacs
Hyderabad, Telangana
On-site
As a PySpark Developer at Viraaj HR Solutions, you will be responsible for developing and maintaining scalable PySpark applications for data processing. Your role will involve collaborating with data engineers to design and implement ETL pipelines for large datasets. Additionally, you will perform data analysis and build data models using PySpark to derive insights. It will be your responsibility to ensure data quality and integrity by implementing data cleansing routines and leveraging SQL to query databases effectively. You will also create comprehensive data reports and visualizations for stakeholders, optimize existing data processing jobs for performance and efficiency, and implement new features and enhancements as required by project specifications. Participation in code reviews to ensure adherence to best practices, troubleshooting technical issues with team members, and maintaining documentation of data processes and system configurations will be part of your daily tasks. To excel in this role, you should possess a Bachelor's degree in Computer Science, Information Technology, or a related field, along with proven experience as a PySpark Developer or in a similar role. Strong programming skills in PySpark and Python, a solid understanding of the Spark framework and its APIs, and proficiency in SQL for managing and querying databases are essential qualifications. Experience with ETL tools and processes, knowledge of data visualization techniques and tools, and familiarity with cloud platforms such as AWS and Azure are also required. Your problem-solving and analytical skills, along with excellent communication skills (both verbal and written), will be crucial for success in this role. You should be able to work effectively in a team environment, adapt to new technologies and methodologies, and have experience in Agile and Scrum methodologies. Prior experience in data processing on large datasets and an understanding of data governance and compliance standards will be beneficial. Key Skills: agile methodologies, data analysis, team collaboration, Python, Scrum, PySpark, data visualization, problem-solving, ETL tools, Python scripting, Apache Spark, Spark framework, cloud platforms (AWS, Azure), SQL, cloud technologies, data processing.
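A small example of the kind of PySpark cleansing routine the role describes; the file paths and column names are placeholders:

```python
# Read raw CSV, dedupe, enforce types, drop bad rows, write parquet.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("cleanse").getOrCreate()

df = spark.read.csv("/data/raw/orders.csv", header=True, inferSchema=True)
clean = (df.dropDuplicates(["order_id"])
           .na.drop(subset=["order_id"])
           .withColumn("amount", F.col("amount").cast("double"))
           .filter(F.col("amount") > 0))
clean.write.mode("overwrite").parquet("/data/curated/orders")
```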
Posted 6 days ago
1.0 - 8.0 years
2 - 9 Lacs
Mumbai, Maharashtra, India
On-site
Caliber Hunt is seeking a skilled and experienced Data Engineer to join our team. The ideal candidate will have hands-on experience developing and optimizing scalable data pipelines for ingestion and transformation. This role is crucial for building a robust data infrastructure, working with cutting-edge technologies like PySpark and AWS cloud services, and collaborating with various teams to deliver high-quality, fault-tolerant solutions.
Responsibilities: Develop fault-tolerant data pipelines running on a cluster. Write and optimize efficient SQL queries with Python and Hive for handling large datasets in Big-Data environments. Write technical design documents for given requirements or JIRA stories. Work closely with the overall Enterprise Data & Analytics Architect and Engineering leads to ensure adherence to best practices. Assure quality, security, and compliance requirements are met for the supported areas. Communicate results and business impacts of data initiatives to key stakeholders to collaboratively solve business problems. Debug, tune, and optimize PySpark data pipelines. Develop scalable and modular solutions. Coordinate with users, technical teams, and Data/Solution architects.
Required Skills & Qualifications: 1-8 years of hands-on experience developing data pipelines for ingestion or transformation using Python (PySpark)/Spark SQL in the AWS cloud. Advanced experience in writing and optimizing efficient SQL queries. Experience in development and processing of data at scale using technologies like EMR, Lambda, Glue, Athena, Redshift, and Step Functions. Experience with Git and CI/CD pipelines to deploy cloud applications. Strong understanding and implementation of PySpark data frames, joins, partitioning, and parallelism. Understanding of Spark UI, Event Timelines, DAG, and Spark config parameters for tuning pipelines. Experience in data modeling, Big Data, Hadoop, Hive, and ETL pipelines. Familiarity with IaC tools like Terraform. Experience working in Agile implementations. Good knowledge of designing Hive tables with partitioning for performance. Excellent communication skills to coordinate with various stakeholders.
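Two of the tuning levers this posting names, broadcast joins and partitioned writes, look roughly like this in PySpark; the paths, column names, and relative table sizes are assumptions for the sketch:

```python
# Broadcast the small dimension table to avoid shuffling the large fact
# table, then write partitioned by date so later reads can prune files.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("join-tune").getOrCreate()

facts = spark.read.parquet("s3://example-bucket/facts")  # large table
dims = spark.read.parquet("s3://example-bucket/dims")    # small table

joined = facts.join(F.broadcast(dims), on="dim_id", how="left")
(joined.write.mode("overwrite")
       .partitionBy("event_date")
       .parquet("s3://example-bucket/joined"))
```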
Posted 6 days ago
7.0 - 10.0 years
0 - 0 Lacs
Pune, Mumbai City
Remote
Position - AWS Data Engineer
Job Description: We are seeking a skilled Data Engineer with 7+ years of experience in data processing, ETL pipelines, and cloud-based data solutions. The ideal candidate will have strong expertise in AWS Glue, Redshift, S3, EMR, and Lambda, with hands-on experience using Python and PySpark for large-scale data transformations. The candidate will be responsible for designing, building, and maintaining scalable data pipelines and systems to support analytics and data-driven decision-making. Additionally, the candidate needs to have strong expertise in Terraform and Git-based CI/CD pipelines to support infrastructure automation and configuration management.
Key Responsibilities:
ETL Development & Automation: Design and implement ETL pipelines using AWS Glue and PySpark to transform raw data into consumable formats. Automate data processing workflows using AWS Lambda and Step Functions.
Data Integration & Storage: Integrate and ingest data from various sources into Amazon S3 and Redshift. Optimize Redshift for query performance and cost efficiency.
Data Processing & Analytics: Use AWS EMR and PySpark for large-scale data processing and complex transformations. Build and manage data lakes on Amazon S3 for analytics use cases.
Monitoring & Optimization: Monitor and troubleshoot data pipelines to ensure high availability and performance. Implement best practices for cost optimization and performance tuning in Redshift, Glue, and EMR.
Terraform & Git-based Workflows: Design and implement Terraform modules to provision cloud infrastructure across AWS/Azure/GCP. Manage and optimize CI/CD pipelines using Git-based workflows (e.g., GitHub Actions, GitLab CI, Jenkins, Azure DevOps). Collaborate with developers and cloud architects to automate infrastructure provisioning and deployments. Write reusable and scalable Terraform modules following best practices and code quality standards. Maintain version control, branching strategies, and code promotion processes in Git.
Collaboration: Work closely with stakeholders to understand requirements and deliver solutions. Document data workflows, designs, and processes for future reference.
Must-Have Skills: Strong proficiency in Python and PySpark for data engineering tasks. Hands-on experience with AWS Glue, Redshift, S3, and EMR. Expertise in building, deploying, and optimizing data pipelines and workflows. Solid understanding of SQL and database optimization techniques. Strong hands-on experience with Terraform, including writing and managing modules, state files, and workspaces. Proficient in CI/CD pipeline design and maintenance using tools like GitHub Actions, GitLab CI, Jenkins, or Azure DevOps Pipelines. Deep understanding of Git workflows (e.g., GitFlow, trunk-based development). Experience in serverless architecture using AWS Lambda for automation and orchestration. Knowledge of data modeling, partitioning, and schema design for data lakes and warehouses.
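For orientation, a skeleton of the kind of AWS Glue PySpark job the responsibilities describe; the catalog database, table, and S3 path are placeholders:

```python
# Minimal Glue job: read from the Data Catalog, dedupe, write parquet.
import sys

from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

dyf = glue_context.create_dynamic_frame.from_catalog(
    database="example_db", table_name="raw_events")
deduped = dyf.toDF().dropDuplicates()  # transform with plain Spark

glue_context.write_dynamic_frame.from_options(
    frame=DynamicFrame.fromDF(deduped, glue_context, "clean"),
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/clean/"},
    format="parquet")
job.commit()
```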
Posted 6 days ago
0.0 - 4.0 years
0 Lacs
Haryana
On-site
GlobalLogic is looking for motivated, intelligent, and detail-oriented individuals to join their team as Associate Analysts. In this role, you will be responsible for data labeling and annotation to support the development of AI and machine learning models. Even if you do not have prior experience in data annotation, comprehensive training will be provided. If you possess basic computer knowledge and are comfortable using Microsoft Office or Google Suite, this is an excellent opportunity to kickstart or advance your career in the AI/ML industry. As an Associate Analyst at GlobalLogic, you will be expected to have a Bachelor's degree in any discipline, basic computer proficiency, and comfort with MS Office or Google Suite. Strong focus, attention to detail, and the ability to perform repetitive tasks are crucial for this role. You should be a quick learner with a problem-solving mindset, willing to work from the office and open to rotational shifts in a 24/7 work environment. A keen interest in AI, data processing, or machine learning is desirable, along with a high level of reliability, adaptability, and initiative. Your responsibilities will include manually labeling data points such as text, audio, video, and images following clear guidelines and instructions. You will need to ensure accuracy and consistency in annotated data by adhering to predefined quality standards. Strong written and verbal communication skills are essential for understanding and interpreting tasks clearly. Additionally, you will be required to apply reading, writing, and listening skills to interpret and describe different types of content effectively, as well as troubleshoot annotation-related challenges with critical thinking and problem-solving skills. At GlobalLogic, you can expect a culture of caring where people are prioritized, and inclusivity is promoted. Learning and development opportunities are abundant, ensuring continuous growth and skill enhancement. The work you will be involved in is interesting, meaningful, and impactful, allowing you to engage your curiosity and problem-solving skills. Balance and flexibility are encouraged, and GlobalLogic values integrity and trust as fundamental aspects of its organizational culture. GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner to leading companies worldwide, driving the digital revolution since 2000. The company collaborates with clients to transform businesses and redefine industries through intelligent products, platforms, and services.
Posted 6 days ago
0.0 - 3.0 years
0 Lacs
Punjab
On-site
You will be responsible for assisting in the development and implementation of machine learning models and algorithms. Your role will involve writing clean, efficient, and reusable Python code for data processing, modeling, and deployment. Additionally, you will work on datasets including data cleaning, preprocessing, feature engineering, and visualization. Collaboration with senior data scientists and developers to build scalable ML solutions is a key aspect of this role. You will also be required to conduct literature reviews and stay updated on the latest ML/AI research to contribute effectively to the team. Furthermore, your responsibilities will include contributing to model deployment and integration with APIs or web applications; a basic working knowledge is sufficient for this aspect of the role.
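The cleaning and feature-engineering work mentioned above can be as simple as the pandas sketch below; the toy dataset is invented:

```python
# Impute a missing numeric value and one-hot encode a category.
import pandas as pd

df = pd.DataFrame({"age": [25, None, 40],
                   "city": ["Pune", "Delhi", "Pune"]})
df["age"] = df["age"].fillna(df["age"].median())  # impute missing ages
df = pd.get_dummies(df, columns=["city"])         # one-hot encode city
print(df)
```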
Posted 6 days ago
10.0 - 15.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
As an experienced Power BI Architect with knowledge of Microsoft Fabric Solutions, you will be responsible for leading the design, development, and implementation of innovative Business Intelligence (BI) solutions. Your expertise in enterprise data architecture, analytics platforms, and data integration strategies will be crucial in optimizing data pipelines and enhancing performance through the effective use of Power BI and Microsoft Fabric. Your key responsibilities will include developing comprehensive Power BI solutions such as dashboards, reports, and data models to meet business requirements. You will lead the end-to-end development lifecycle of BI projects, from requirement gathering to deployment, ensuring optimal performance. Utilizing Microsoft Fabric, you will streamline data pipelines, integrate data engineering, storage, and processing capabilities, and enhance performance and scalability by integrating Power BI with Microsoft Fabric. Your role will also involve working with Azure Data Services like Azure Data Lake, Azure Synapse, and Azure Data Factory to support BI architecture. Implementing best practices in Power BI development, providing leadership and mentorship to a team of developers, overseeing project management tasks, and collaborating with data engineers and stakeholders to translate business requirements into scalable BI solutions will be part of your responsibilities. To qualify for this role, you should hold a Bachelor's degree in Computer Science, Information Systems, Data Analytics, or a related field. You must have 10-15 years of experience in BI development, including at least 3 years in a leadership position. Proven experience with Power BI, Microsoft Fabric, and Azure Data Services is also required for this position.
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
Maharashtra
On-site
You will be responsible for supporting youth in achieving their skills training, mentoring, and employer activities as per their individual Personal Development Plans. Your primary focus will be on ensuring successful youth placements in work, training, or further education and supporting them to sustain these placements for up to 1 year. You will need to effectively leverage partnerships at all levels and organize job fairs, placement drives, and campus recruitments based on both youth readiness and market demand. Additionally, you will conduct a 24-hour training module covering industry overview, mock interviews, and other soft skills training as per the mandated curriculum. Monitoring the performance and retention of youths with employers will also be a key aspect of your role. After placements, you will continue to support the youth to ensure their engagement and sustainability in work or educational placements for up to 1 year. The ideal candidate will have experience in handling placements in colleges and mobilizing youth from educational institutions. Efficiency in managing large numbers and bulk placements is essential. You will be expected to serve as a role model demonstrating the characteristics that youth should develop to succeed in their chosen field. Desired competencies include the ability to build positive relationships with youth, motivate individuals to achieve defined targets, and work well as part of a team. Strong communication skills, empathy, organizational abilities, and excellent time management are also crucial. Flexibility to work evenings and weekends occasionally, as required by the program, is necessary. You should be adept at collecting and processing delivery evidence and data in alignment with program key performance indicators. Strong negotiation skills and the ability to manage relationships with colleges and employers are valued. Qualifications for this role include a relevant degree or similar professional qualification from a reputable institution. Experience in employability skills, job placement, youth development, or vocational training programs, either directly or indirectly, is desirable. Familiarity with reviewing progress against individual training program targets and engaging with various stakeholders such as placement teams, students, and communities is advantageous. Providing guidance to young people on development issues is also part of the role. To apply, please share your resume at the provided email address. This is a full-time, permanent position with benefits including cell phone reimbursement and Provident Fund. The work schedule is day shift, Monday to Friday, at the in-person work location.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Navi Mumbai, Maharashtra
On-site
The ideal candidate for the Data Scientist position at TransOrg Analytics should have a Bachelor's degree in Computer Science, Engineering, Statistics, Mathematics, or a related quantitative field. You should possess 3-5 years of relevant experience in data analysis or a related role, along with expertise in data pre-processing and manipulation using Python or SQL. Experience in statistical analysis and modeling techniques such as regression, classification, and clustering is essential. Candidates with experience in machine learning, deep learning frameworks, and libraries will be preferred. Proficiency in data processing tools and frameworks, as well as data visualization tools like Tableau or Power BI, is required. You should have a proven track record of managing data project delivery, meeting deadlines, managing stakeholder expectations, and producing clear deliverables. Strong problem-solving skills and attention to detail are crucial for this role, along with the ability to think critically and provide data-backed insights. Excellent communication skills, both verbal and written, are a must. Familiarity with Cloud Platforms like Azure, AWS, or GCP, and the ability to use them for developing, training, and testing deep learning models will be an added advantage. Knowledge of cloud-based data warehousing platforms, particularly Snowflake, is also beneficial. If you are passionate about leveraging data science to streamline, optimize, and accelerate businesses, and meet the above requirements, we invite you to join our team at TransOrg Analytics. Visit www.transorg.com to learn more about us and explore how you can contribute to our mission of providing advanced analytics solutions to industry leaders and Fortune 500 companies across India, US, APAC, and the Middle East.
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
Pune, Maharashtra
On-site
As the HR Data and Application Specialist, you will be the front line administrator and main contact for various HR-related applications such as the HRIS, ATS, HR Support Portal, and other HR systems. Your primary responsibilities will include maintaining, auditing, and processing sensitive HR data, ensuring the smooth functioning of HR applications, leveraging technology to streamline manual processes, generating ad-hoc/scheduled reports, and supporting process improvement initiatives and special projects. You will play a crucial role in analyzing HR data to derive meaningful metrics and statistics that can guide decisions related to recruitment, retention strategies, and legal compliance. Additionally, you will contribute to enhancing the usage of applications, collaborating with vendors and internal stakeholders, providing technical expertise for HR projects, and offering configuration recommendations tailored to business requirements. Your key responsibilities will encompass maintaining and supporting HR applications by customizing, upgrading, and ensuring optimal performance, offering technical support to users, ensuring data security compliance, conducting security audits, and documenting system processes. Moreover, you will assist in creating training materials, facilitating end-user guidance, identifying opportunities for process enhancements, and participating in system updates and enhancement projects. On the data front, you will be involved in fulfilling data requests, compiling HR metrics from diverse sources, creating reports for business support and compliance purposes, and manipulating data in Excel for various stakeholders. Education-wise, a Bachelor's degree in HR, Business Administration, or a related field is required, with SHRM-CP or PHR certification being desirable. You should ideally possess a minimum of 3 years of HR application experience, familiarity with Microsoft Office Suite and data management, and exposure to tools like BambooHR, ADP Workforce Now, or First Advantage. To excel in this role, you must demonstrate a passion for data accuracy and process enhancement, exhibit strong critical thinking and analytical skills, and showcase an innovative approach to problem-solving. Effective communication, time management, organizational, and interpersonal skills are essential, along with a keen eye for detail and a commitment to maintaining confidentiality. Adaptability to evolving environments, an entrepreneurial mindset, and a drive for innovation will be key attributes that you bring to this position.
Posted 1 week ago
6.0 - 10.0 years
0 Lacs
Karnataka
On-site
You are a highly experienced AI engineer with 5+ years of experience, possessing a strong background in machine learning, proficient programming skills, and a deep understanding of generative models. Your primary role involves applying your expertise to translate research findings into practical solutions that effectively tackle real-world challenges. It is crucial for you to ensure the reliability and ethical usage of generative AI in the applications you develop. In terms of technical requirements, you must exhibit a strong proficiency in Python for data processing and automation. You should have hands-on experience working with generative AI models and integrating them into data workflows. Additionally, familiarity with prompt engineering and LLM models (open-source and closed-source) is essential. Experience with application development frameworks like LangChain and LangGraph, and with REST frameworks such as FastAPI, Angular, Flask, and Django, is highly beneficial. Knowledge of cloud platforms such as AWS, GCP, Azure, and related services is a plus. Moreover, familiarity with containerization and orchestration tools like Docker and Kubernetes would be advantageous.
As a Data Analysis & Simulation Professional, your responsibilities include the following key areas:
Data Pipeline Development: Designing and implementing scalable data pipelines using Python to ingest, process, and transform log data from diverse sources.
Generative AI Integration: Collaborating with data scientists to integrate generative AI models into log analysis workflows. Developing APIs and services to deploy AI models for real-time log analysis and insights generation.
Data Monitoring and Maintenance: Setting up monitoring and alerting systems to ensure the reliability and performance of data pipelines. Troubleshooting and resolving issues related to data ingestion, processing, and storage.
Collaboration and Documentation: Working closely with cross-functional teams to comprehend requirements and deliver solutions that align with business needs. Documenting data pipeline architecture, processes, and best practices for future reference and knowledge sharing.
Evaluation and Testing: Conducting thorough testing and validation of generative models.
Research and Innovation: Staying updated with the latest advancements in generative AI and exploring innovative techniques to enhance model capabilities. Experimenting with different architectures and approaches to drive innovation.
Furthermore, experience with Snowflake would be considered an asset: designing and optimizing data storage and retrieval strategies using Snowflake, and implementing data modeling, partitioning, and indexing strategies to improve query performance.
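A minimal sketch of serving a generative model behind a REST endpoint with FastAPI, as the role describes; generate() is a placeholder standing in for a real LLM call (a LangChain chain, a hosted model API, etc.):

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Prompt(BaseModel):
    text: str

def generate(prompt: str) -> str:
    # Placeholder: call the LLM of your choice here.
    return f"echo: {prompt}"

@app.post("/generate")
def generate_endpoint(prompt: Prompt) -> dict:
    return {"completion": generate(prompt.text)}
```

Run it with uvicorn (e.g., `uvicorn main:app` if saved as main.py) and POST JSON like {"text": "..."} to /generate.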
Posted 1 week ago
3.0 - 5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Line of Service: Advisory. Industry/Sector: Not Applicable. Specialism: Data, Analytics & AI. Management Level: Associate.
Job Description & Summary: At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage.
Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
Responsibilities: Analyses current business practices, processes, and procedures as well as identifying future business opportunities for leveraging Microsoft Azure Data & Analytics Services. Provide technical leadership and thought leadership as a senior member of the Analytics Practice in areas such as data access & ingestion, data processing, data integration, data modeling, database design & implementation, data visualization, and advanced analytics. Engage and collaborate with customers to understand business requirements/use cases and translate them into detailed technical specifications. Develop best practices including reusable code, libraries, patterns, and consumable frameworks for cloud-based data warehousing and ETL. Maintain best-practice standards for the development of cloud-based data warehouse solutioning, including naming standards. Designing and implementing highly performant data pipelines from multiple sources using Apache Spark and/or Azure Databricks. Integrating the end-to-end data pipeline to take data from source systems to target data repositories, ensuring the quality and consistency of data is always maintained. Working with other members of the project team to support delivery of additional project components (API interfaces). Evaluating the performance and applicability of multiple tools against customer requirements. Working within an Agile delivery / DevOps methodology to deliver proof of concept and production implementation in iterative sprints. Integrate Databricks with other technologies (ingestion tools, visualization tools).
Proven experience working as a data engineer. Highly proficient in using the Spark framework (Python and/or Scala). Extensive knowledge of Data Warehousing concepts, strategies, methodologies. Direct experience of building data pipelines using Azure Data Factory and Apache Spark (preferably in Databricks). Hands-on experience designing and delivering solutions using Azure, including Azure Storage, Azure SQL Data Warehouse, Azure Data Lake, Azure Cosmos DB, Azure Stream Analytics. Experience in designing and hands-on development in cloud-based analytics solutions. Expert-level understanding of Azure Data Factory, Azure Synapse, Azure SQL, Azure Data Lake, and Azure App Service is required. Designing and building of data pipelines using API ingestion and streaming ingestion methods. Knowledge of DevOps processes (including CI/CD) and Infrastructure as Code is essential. Thorough understanding of Azure Cloud Infrastructure offerings. Strong experience in common data warehouse modeling principles including Kimball. Working knowledge of Python is desirable. Experience developing security models. Databricks & Azure Big Data Architecture Certification would be a plus.
Mandatory Skill Sets: ADE, ADB, ADF. Preferred Skill Sets: ADE, ADB, ADF. Years of Experience Required: 3-5 years. Education Qualification: BE, B.Tech, MCA, M.Tech. Degrees/Field of Study Required: Bachelor of Technology, Bachelor of Engineering.
Required Skills: ADF Business Components, ADL Assistance, Android Debug Bridge (ADB). Optional Skills: Accepting Feedback, Active Listening, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Communication, Competitive Advantage, Continuous Process Improvement, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Emotional Regulation, Empathy, Inclusion, Industry Trend Analysis, Intellectual Curiosity, Java (Programming Language), Market Development + 7 more.
Travel Requirements: Not Specified. Available for Work Visa Sponsorship: No. Government Clearance Required: No.
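The streaming-ingestion pattern mentioned above, sketched with Spark Structured Streaming and a synthetic rate source; in Databricks the source would more typically be Event Hubs, Kafka, or Auto Loader, and the Delta output format assumes a Delta-enabled runtime:

```python
# Ingest a synthetic stream and append it to a Delta table.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("stream-demo").getOrCreate()

stream = (spark.readStream.format("rate")   # test source: rows per second
          .option("rowsPerSecond", 5)
          .load())

query = (stream.writeStream.format("delta")
         .option("checkpointLocation", "/tmp/chk")
         .outputMode("append")
         .start("/tmp/bronze"))
query.awaitTermination(30)  # run briefly for the demo, then return
```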
Posted 1 week ago
8.0 - 13.0 years
8 - 17 Lacs
Delhi, India
On-site
We are seeking a highly capable and experienced Project Manager - AI to lead and manage our AI and Data Analytics projects. This role demands a strong focus on stakeholder management, proactive risk mitigation, and effective vendor coordination. The ideal candidate will possess a deep understanding of both the technical and functional aspects of AI platforms, coupled with prior experience working with Indian government entities or similar public sector organizations. You will be instrumental in driving AI initiatives from conception to successful delivery, ensuring alignment with strategic objectives and maximum impact.
Key Responsibilities:
Program/Project Management (20%): Lead large-scale AI and data-driven projects from initiation to successful closure. Develop and manage detailed project plans, timelines, budgets, and resource allocation. Ensure timely delivery of projects and alignment with overall business objectives.
Analytical and Technical Understanding (20%): Analyze complex requirements and translate them into actionable development plans for AI solutions. Collaborate closely with technical teams to understand platform architecture, design, and functionality. Evaluate AI models, data processing techniques, and system integration aspects.
Risk and Issue Management (20%): Proactively identify, analyze, and manage potential project risks and issues. Develop robust risk mitigation strategies and implement effective contingency plans to ensure project continuity. Maintain comprehensive risk logs and conduct regular reviews with all stakeholders.
Vendor Management (10%): Manage end-to-end vendor engagement, including the preparation of RFPs/RFQs (Requests for Proposals/Quotations). Ensure vendor deliverables are perfectly aligned with project goals and contractual terms. Monitor vendor compliance and performance metrics to ensure quality and efficiency.
Communication and Stakeholder Management (10%): Act as the primary liaison between business teams, technical teams, and critical government stakeholders. Provide clear, concise, and regular project updates to senior management and clients. Facilitate productive stakeholder meetings, manage expectations effectively, and ensure high levels of satisfaction.
Desirable Experience (20%): Prior experience working specifically with Indian government agencies or in public sector environments. Hands-on exposure to the complete AI, Machine Learning, and Data Analytics project lifecycles. Relevant certifications such as PMP, Prince2, or AI/Data Analytics certifications are highly preferred.
Qualifications: Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. Experience: Proven experience in project management, with at least 3 years specifically in AI/Data Analytics domains. Team Leadership: Demonstrated ability to manage cross-functional teams effectively in a fast-paced environment.
Preferred Skills: Problem-Solving: Strong problem-solving and decision-making abilities. Technical Familiarity: Familiarity with AI/ML tools and cloud-based platforms. Communication: Excellent written and verbal communication skills.
Posted 1 week ago
10.0 - 13.0 years
4 - 7 Lacs
Vapi, Gujarat, India
On-site
As a Senior Team Lead for Master Data Management (MDM) in our Shared Services department, you will be responsible for leading a team of professionals in managing and improving our company's data assets. You will play a strategic role in ensuring data is accurate, complete, and reliable, and that it is used effectively across the organization.
Posted 1 week ago
10.0 - 14.0 years
0 Lacs
Maharashtra
On-site
Whether you're at the start of your career or looking to discover your next adventure, your story begins here. At Citi, you'll have the opportunity to expand your skills and make a difference at one of the world's most global banks. We're fully committed to supporting your growth and development from the start with extensive on-the-job training and exposure to senior leaders, as well as more traditional learning. You'll also have the chance to give back and make a positive impact where we live and work through volunteerism. Citi Finance is responsible for the firm's financial management and related controls. We manage and partner on key Citi initiatives and deliverables, such as our quarterly earnings process and ensuring Citi's compliance with financial rules and regulations. The team comprises chief financial officers who partner with each of our businesses and disciplines including controllers, financial planning and analysis, strategy, investor relations, tax, and treasury. We're currently looking for a high caliber professional to join our team as Vice President, Balance Sheet Management Lead - Hybrid based in Mumbai. Being part of our team means that we'll provide you with the resources to meet your unique needs, empower you to make healthy decisions and manage your financial well-being to help plan for your future. For instance: - Citi provides access to an array of learning and development resources to help broaden and deepen your skills and knowledge as your career progresses. - We have a variety of programs that help employees balance their work and life, including generous paid time-off packages. - We offer our employees resources and tools to volunteer in the communities in which they live and work. In 2019, Citi employee volunteers contributed more than 1 million volunteer hours around the world.
In this role, you're expected to take on: The Balance Sheet Management Lead Analyst is a seasoned professional role. The Lead Analyst is expected to apply in-depth disciplinary knowledge, focusing on model governance processes for a variety of models in Treasury/Balance Sheet Management, data processing, visualization and analysis tools and approaches, and the improvement of processes and workflows for the Balance Sheet Management function. The Balance Sheet Management Model Governance group is the critical team within the Treasury/Balance Sheet Management and is primarily responsible for ongoing maintenance and governance support of the models that are used to generate Non-Trading Market Risk (NTMR) metrics within Treasury, covering Interest Rate Risk, Credit Spread Risk, Foreign Exchange Risk, valuation risk in Fixed Income and derivatives, Funds Transfer Pricing, and other related areas. This team plays an important role in overall balance sheet management and has a direct impact on Citigroup's Capital. The work in this space is subject to heightened regulatory focus and scrutiny.
Key Responsibilities: The Lead Analyst will be responsible for supporting model governance processes within the Treasury/Balance Sheet Management throughout the model lifecycle.
As part of those responsibilities, the Lead Analyst would be expected to demonstrate analytical/statistical skills in the design, implementation, ongoing performance assessment, and other governance aspects of models; strong communication skills in documenting and presenting their work; and stakeholder management and interaction skills allowing the analyst to clearly and efficiently understand requirements and develop a model or approach to meet those requirements. The Lead Analyst should demonstrate good analytical skills to filter, prioritize and validate potentially complex and dynamic material from multiple sources. The detailed responsibilities include:
- Assist in the development and testing of a variety of models used in the calculation of NTMR metrics, stress testing, valuation and pricing of fixed income and derivative instruments, funds transfer pricing, and capital strategy and planning, in accordance with Citi's Model Risk Management requirements, including:
- As a part of model development, assisting in the design of the model framework, and performing a set of required statistical, quantitative and qualitative tests.
- Producing model documentation according to Model Risk Management guidelines, as well as preparing the related presentation materials to the senior management and regulators, as needed.
- Developing the appropriate model performance testing metrics, and conducting the model ongoing performance assessment processes.
- Partner with Citi's business leaders and Technology teams in the development, implementation, documentation, and use of models.
- Assist in coordinating and liaising with businesses and functions to educate and garner support for project initiatives.
- Contribute and support other cross-group projects and initiatives.
Qualifications and other Requirements:
- 10 or more years of relevant statistical modeling/econometrics, model governance, or model validation experience in the financial domain.
- PG/Masters/PhD in a quantitative discipline such as Statistics, Economics, Mathematics, or a related discipline is preferred. Certifications such as FRM or CFA are a plus.
- Deep understanding of Treasury/Balance Sheet Management concepts.
- Working experience with Artificial Intelligence/Machine Learning techniques and packages (ChatGPT, Copilot), etc.
- Extensive experience in programming and modeling using Python and related tooling (GitHub, DataFrames) is a must. Working knowledge of statistical packages like SAS/R is a plus.
- Experience with SQL and databases. Experience in Excel VBA is a plus.
- Analytical background with problem-solving skills and an ability to assimilate information across a variety of financial disciplines.
- Strong interpersonal and communication skills, both oral and written, with the ability to converse with a wide variety of people across functions/seniority.
- High energy, self-starter with a flexible and pragmatic attitude and a desire to show continued progress.
Job Location: Mumbai. Job Level: C13. Job Type: Regular/Full time.
Take the next step in your career and apply for this role at Citi today: https://jobs.citi.com/dei
Citi is an equal opportunity and affirmative action employer. Citigroup Inc. and its subsidiaries ("Citi") invite all qualified interested applicants to apply for career opportunities. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View the "EEO is the Law" poster.
View the EEO is the Law Supplement. View the EEO Policy Statement.
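For candidates curious what the "ongoing performance assessment" work above can look like in practice, below is a minimal, hypothetical Python sketch of one widely used model-monitoring metric, the Population Stability Index (PSI). The function, thresholds, and data are illustrative assumptions only, not Citi's actual framework.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a model's development-time score distribution
    and its current production distribution."""
    # Bin edges taken from the development (expected) sample
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range scores
    exp_pct = np.histogram(expected, edges)[0] / len(expected)
    act_pct = np.histogram(actual, edges)[0] / len(actual)
    # Floor the proportions to avoid log(0) and division by zero
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

# Synthetic example: production scores have drifted from development
rng = np.random.default_rng(0)
dev_scores = rng.normal(0.0, 1.0, 10_000)   # development-era scores
prod_scores = rng.normal(0.2, 1.1, 10_000)  # recent production scores
print(f"PSI = {population_stability_index(dev_scores, prod_scores):.3f}")
# A PSI above roughly 0.25 is conventionally read as significant drift
```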
Posted 1 week ago
12.0 - 16.0 years
0 Lacs
Karnataka
On-site
Job Summary: As a Data Engineer at WNS (Holdings) Limited, your primary responsibility will be handling complex data tasks, with a focus on data transformation and querying. You must possess strong proficiency in advanced SQL techniques and a deep understanding of database structures. Your role will involve extracting and analyzing raw data to support the Reporting and Data Science team, providing both qualitative and quantitative insights to meet business requirements.

Responsibilities:
- Design, develop, and maintain data transformation processes using SQL.
- Manage complex data tasks related to data processing and querying.
- Collaborate with the Data team to understand data requirements and efficiently transform data in SQL environments.
- Construct and optimize data pipelines to ensure smooth data flow and transformation.
- Uphold data quality and integrity throughout the data transformation process.
- Address and resolve data issues promptly as they arise.
- Document data processes, workflows, and transformation logic.
- Engage with clients to identify reporting needs and leverage visualization experience to propose optimal solutions.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Minimum of 2 years of experience in a Data Engineering role.
- Strong proficiency in SQL, including advanced techniques for data manipulation and querying.
- Hands-on experience with Power BI data models and DAX commands.
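To give a flavor of the "advanced SQL techniques" this posting asks for, here is a small self-contained sketch using Python's built-in sqlite3 module (SQLite 3.25+ is needed for window functions). The orders table and its columns are invented purely for illustration.

```python
import sqlite3

# Hypothetical orders table; schema and data are illustrative only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (customer TEXT, order_date TEXT, amount REAL);
INSERT INTO orders VALUES
  ('acme', '2024-01-05', 120.0),
  ('acme', '2024-01-20', 80.0),
  ('blue', '2024-01-07', 200.0),
  ('blue', '2024-02-02', 50.0);
""")

# A CTE plus a window function -- a typical "advanced SQL"
# transformation: running total of spend per customer.
query = """
WITH ordered AS (
  SELECT customer, order_date, amount,
         SUM(amount) OVER (
           PARTITION BY customer ORDER BY order_date
         ) AS running_total
  FROM orders
)
SELECT * FROM ordered ORDER BY customer, order_date;
"""
for row in conn.execute(query):
    print(row)
```

The same pattern of window functions inside CTEs carries over directly to MS SQL Server and the other RDBMS engines a role like this would touch.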
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
As a Data Engineer at GlobalLogic, you will be responsible for architecting, building, and maintaining complex ETL/ELT pipelines for batch and real-time data processing using various tools and programming languages. Your key duties will include optimizing existing data pipelines for performance, cost-effectiveness, and reliability, as well as implementing data quality checks, monitoring, and alerting mechanisms to ensure data integrity. Additionally, you will play a crucial role in ensuring data security, privacy, and compliance with relevant regulations such as GDPR and local data laws.

To excel in this role, you should possess a Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field. Excellent analytical, problem-solving, and critical thinking skills with meticulous attention to detail are essential. Strong written and verbal communication and interpersonal skills are also required, along with the ability to collaborate effectively with cross-functional teams. Experience with Agile/Scrum development methodologies is considered a plus.

Your responsibilities will involve providing technical leadership and architecture: designing and implementing robust, scalable, and efficient data architectures that align with organizational strategy and future growth. You will define and enforce data engineering best practices, evaluate and recommend new technologies, and oversee the end-to-end data development lifecycle. As a leader, you will mentor and guide a team of data engineers, conduct code reviews, provide feedback, and promote a culture of engineering excellence. You will collaborate closely with data scientists, data analysts, software engineers, and business stakeholders to understand data requirements and translate them into technical solutions. Your role will also involve communicating complex technical concepts and data strategies effectively to both technical and non-technical audiences.

At GlobalLogic, we offer a culture of caring, continuous learning and development opportunities, interesting and meaningful work, balance and flexibility, and a high-trust environment. By joining our team, you will have the chance to work on impactful projects, engage your curiosity and problem-solving skills, and contribute to shaping cutting-edge solutions that redefine industries. With a commitment to integrity and trust, GlobalLogic provides a safe, reliable, and ethical global environment where you can thrive both personally and professionally.
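Since the posting highlights "data quality checks, monitoring, and alerting," here is a minimal hypothetical sketch of a per-batch quality gate in pandas; the column names, rules, and data are assumptions for illustration only.

```python
import pandas as pd

def run_quality_checks(df: pd.DataFrame) -> list:
    """Return a list of quality failures for a hypothetical events feed."""
    failures = []
    if df["event_id"].isna().any():
        failures.append("null event_id values")
    if df["event_id"].duplicated().any():
        failures.append("duplicate event_id values")
    if (df["amount"] < 0).any():
        failures.append("negative amounts")
    if pd.to_datetime(df["event_ts"], errors="coerce").isna().any():
        failures.append("unparseable event_ts values")
    return failures

# Synthetic batch that exercises every rule above
batch = pd.DataFrame({
    "event_id": [1, 2, 2, None],
    "amount": [10.0, -5.0, 3.0, 7.0],
    "event_ts": ["2024-03-01", "2024-03-01", "not-a-date", "2024-03-02"],
})
problems = run_quality_checks(batch)
if problems:
    # A real pipeline would page an on-call or quarantine the batch here
    print("Batch rejected:", "; ".join(problems))
```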
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
Haryana
On-site
You will be part of the Trust & Safety (T&S) team at TikTok, dedicated to ensuring the safety and empowerment of our global online community. Your role will involve monitoring live video content quality to maintain community ecological security. You will conduct thorough data analysis, identify issues, manage necessary escalations, and create quality inspection reports. Collaborating with other departments to address operational needs promptly will be a key aspect of your responsibilities. Additionally, you will contribute to the development and optimization of quality inspection standards and system platforms.

As a candidate for this position, you should hold a Bachelor's degree or higher and possess excellent language skills in English and NP; an assessment round for an additional language may be required. Sensitivity to local customs, culture, and social news is crucial. Strong learning ability, effective cross-department communication skills, and the ability to work independently with logical thinking aligned with the job requirements are essential. Proficiency in office software such as Excel and Word is preferred, along with experience in data processing and analysis.

ByteDance, founded in 2012, aims to inspire creativity and enhance lives through its diverse range of products, including TikTok, Lemon8, CapCut, and more. By joining ByteDance, you become part of a global team that promotes creativity and authentic self-expression.

At ByteDance, diversity and inclusion are highly valued, and we strive to create an environment where employees are appreciated for their skills, experiences, and unique perspectives. We are committed to celebrating diversity and maintaining an inclusive workplace that reflects the global communities we serve.

Trust & Safety at ByteDance acknowledges the challenges and emotional demands associated with keeping our platform safe. We provide comprehensive programs to support the physical and mental well-being of our employees throughout their journey with us. Your well-being is our priority, and we are dedicated to fostering a collaborative, innovative, and integrated approach to ensure a positive and supportive work environment for all.
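As a hypothetical illustration of the "quality inspection reports" mentioned above, a simple pandas summary might aggregate review outcomes per reviewer; the log below and its fields are invented for the example.

```python
import pandas as pd

# Invented moderation review log; all fields are illustrative.
reviews = pd.DataFrame({
    "reviewer": ["a", "a", "b", "b", "b"],
    "decision_correct": [True, False, True, True, False],
})

# Per-reviewer volume and accuracy, plus an overall rate: the kind
# of figures a periodic quality inspection report might include.
summary = (
    reviews.groupby("reviewer")["decision_correct"]
    .agg(reviewed="count", accuracy="mean")
    .reset_index()
)
print(summary)
print(f"Overall accuracy: {reviews['decision_correct'].mean():.0%}")
```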
Posted 1 week ago
The data processing job market in India is thriving with opportunities for job seekers in the field. With the growing demand for data-driven insights in various industries, the need for professionals skilled in data processing is on the rise. Whether you are a fresh graduate looking to start your career or an experienced professional looking to advance, there are ample opportunities in India for data processing roles.
Major cities across India are actively hiring for data processing roles, with a multitude of job opportunities available for job seekers.
The average salary range for data processing professionals in India varies based on experience and skill level. Entry-level positions can expect to earn between INR 3-6 lakh per annum, while experienced professionals can earn upwards of INR 10 lakh per annum.
A typical career path in data processing may include roles such as Data Analyst, Data Engineer, Data Scientist, and Data Architect. As professionals gain experience and expertise in the field, they may progress from Junior Data Analyst to Senior Data Analyst, and eventually to roles such as Data Scientist or Data Architect.
In addition to data processing skills, professionals in this field are often expected to have knowledge of programming languages such as Python, SQL, and R. Strong analytical and problem-solving skills are also essential for success in data processing roles.
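If you are brushing up for interviews, even a short load-clean-aggregate exercise in Python goes a long way; the toy data below is made up purely to illustrate the pattern.

```python
import io
import pandas as pd

# A toy end-to-end data processing task: load, clean, aggregate.
raw = io.StringIO("""city,role,salary_lakh
Bengaluru,Data Analyst,6
Bengaluru,Data Engineer,12
Pune, Data Analyst ,5
Pune,Data Scientist,14
""")
df = pd.read_csv(raw)
df["role"] = df["role"].str.strip()  # normalize messy labels

# Average salary by role, highest first
print(df.groupby("role")["salary_lakh"].mean().sort_values(ascending=False))
```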
As you explore opportunities in the data processing job market in India, remember to prepare thoroughly for interviews and showcase your skills and expertise confidently. With the right combination of skills and experience, you can embark on a successful career in data processing in India. Good luck!