Job Description
Project Role: IoT Architect
Project Role Description: Design end-to-end IoT platform architecture solutions, including data ingestion, data processing, and analytics across different vendor platforms for highly interconnected device workloads at scale.
Must have skills: Data Architecture Principles
Good to have skills: NA
Minimum 15 year(s) of experience is required
Educational Qualification: 15 years of full-time education
Summary
We are seeking a highly skilled and experienced Industrial Data Architect with a proven track record of providing functional and/or technical expertise to plan, analyze, define, and support the delivery of future functional and technical capabilities for an application or group of applications. The ideal candidate is well versed in OT data quality, data modeling, data governance, data contextualization, database design, and data warehousing.
Roles & Responsibilities:1. Industrial Data Architect will be responsible for developing and overseeing the industrial data architecture strategies to support advanced data analytics, business intelligence, and machine learning initiatives. This role involves collaborating with various teams to design and implement efficient, scalable, and secure data solutions for industrial operations. 2. Focused on designing, building, and managing the data architecture of industrial systems. 3. Assist in facilitating impact assessment efforts and in producing and reviewing estimates for client work requests. 4. Own the offerings and assets on key components of data supply chain, data governance, curation, data quality and master data management, data integration, data replication, data virtualization. 5. Create scalable and secure data structures, integrate with existing systems, and ensure efficient data flow. Professional & Technical
Skills:
1.
Must have
Skills:Domain knowledge in areas of Manufacturing IT OT in one or more of the following verticals Automotive, Discrete Manufacturing, Consumer Packaged Goods, Life Science. 2. Data Modeling and Architecture:Proficiency in data modeling techniques (conceptual, logical, and physical models). 3. Knowledge of database design principles and normalization. 4. Experience with data architecture frameworks and methodologies (e.g., TOGAF). 5. Database Technologies:Relational Databases:Expertise in SQL databases such as MySQL, PostgreSQL, Oracle, and Microsoft SQL Server. 6. NoSQL Databases:Experience with at least one of the NoSQL databases like MongoDB, Cassandra, and Couchbase for handling unstructured data. 7. Graph Databases:Proficiency with at least one of the graph databases such as Neo4j, Amazon Neptune, or ArangoDB. Understanding of graph data models, including property graphs and RDF (Resource Description Framework). 8. Query Languages:Experience with at least one of the query languages like Cypher (Neo4j), SPARQL (RDF), or Gremlin (Apache TinkerPop). Familiarity with ontologies, RDF Schema, and OWL (Web Ontology Language). Exposure to semantic web technologies and standards. 9. Data Integration and ETL (Extract, Transform, Load):Proficiency in ETL tools and processes (e.g., Talend, Informatica, Apache NiFi). 10. Experience with data integration tools and techniques to consolidate data from various sources. 11. IoT and Industrial Data Systems:Familiarity with Industrial Internet of Things (IIoT) platforms and protocols (e.g., MQTT, OPC UA). 12. Experience with IoT data platforms like AWS IoT, Azure IoT Hub, and Google Cloud IoT Core. 13. Experience working with one or more of Streaming data platforms like Apache Kafka, Amazon Kinesis, Apache Flink 14. Ability to design and implement real-time data pipelines. Familiarity with processing frameworks such as Apache Storm, Spark Streaming, or Google Cloud Dataflow. 15. Understanding event-driven design patterns and practices. Experience with message brokers like RabbitMQ or ActiveMQ. 16. Exposure to the edge computing platforms like AWS IoT Greengrass or Azure IoT Edge 17. AI/ML, GenAI:Experience working on data readiness for feeding into AI/ML/GenAI applications 18. Exposure to machine learning frameworks such as TensorFlow, PyTorch, or Keras. 19. Cloud Platforms:Experience with cloud data services from at least one of the providers like AWS (Amazon Redshift, AWS Glue), Microsoft Azure (Azure SQL Database, Azure Data Factory), and Google Cloud Platform (BigQuery, Dataflow). 20. Data Warehousing and BI Tools:Expertise in data warehousing solutions (e.g., Snowflake, Amazon Redshift, Google BigQuery). 21. Proficiency with Business Intelligence (BI) tools such as Tableau, Power BI, and QlikView. 22. Data Governance and Security:Understanding data governance principles, data quality management, and metadata management. 23. Knowledge of data security best practices, compliance standards (e.g., GDPR, HIPAA), and data masking techniques. 24. Big Data Technologies:Experience with big data platforms and tools such as Hadoop, Spark, and Apache Kafka. 25. Understanding distributed computing and data processing frameworks. Additional Info1. A minimum of 15-18 years of progressive information technology experience is required. 2. This position is based at Bengaluru location. 3. A 15 years full-time education is required. 4. 
AWS Certified Data Engineer Associate / Microsoft Certified:Azure Data Engineer Associate / Google Cloud Certified Professional Data Engineer certification is mandatory
Qualification
15 years of full-time education