International Journal of Internet of Things

2024;  12(1): 1-7

doi:10.5923/j.ijit.20241201.01

Received: Mar. 2, 2024; Accepted: Mar. 19, 2024; Published: Mar. 22, 2024

 

The Role of Generative AI in Interpreting Internet of Things at the Edge

Chidumga Izuzu

Cisco Systems Inc, Raleigh, USA

Correspondence to: Chidumga Izuzu, Cisco Systems Inc, Raleigh, USA.

Email:

Copyright © 2024 The Author(s). Published by Scientific & Academic Publishing.

This work is licensed under the Creative Commons Attribution International License (CC BY).
http://creativecommons.org/licenses/by/4.0/

Abstract

This paper investigates how generative AI improves the usefulness and interpretability of Internet of Things (IoT) data at the edge of computing networks. Edge computing processes data closer to its source, which lowers latency and boosts efficiency. Generative AI further enriches this environment by offering real-time decision-making, predictive modelling, and sophisticated analytics. The convergence of generative AI, edge computing, and IoT platforms promises significant advances in the manufacturing, urban development, and healthcare sectors.

Keywords: IoT, Edge Computing, Generative AI, Predictive Analysis, Real-Time Processing

Cite this paper: Chidumga Izuzu, The Role of Generative AI in Interpreting Internet of Things at the Edge, International Journal of Internet of Things, Vol. 12 No. 1, 2024, pp. 1-7. doi: 10.5923/j.ijit.20241201.01.

1. Introduction

Integrating the Internet of Things (IoT) and edge computing has ushered in a transformative data processing and analysis era. IoT devices, spread across vast networks, continuously generate large volumes of data. This data has the potential to unlock unprecedented insights. Edge computing plays a crucial role here by processing data close to its source, significantly reducing latency and bandwidth usage, and enabling real-time decision-making. Generative AI, specializing in creating new content from existing data, is becoming a key technology in this field. Generative AI goes beyond analytics to predict trends, model outcomes, and solve complex issues, turning IoT data into actionable intelligence and fostering industry-wide innovation and efficiency. This synergy between IoT, edge computing, and generative AI advances data handling and enables sophisticated applications in sectors like smart cities, healthcare, and manufacturing, signifying a major technological advancement.

2. Generative AI: Transforming Data Interpretation

Generative AI marks a paradigm shift in data analysis, particularly with its application in interpreting Internet of Things (IoT) data.
This section delves into the mechanics and advantages of utilizing generative AI for IoT data processing, emphasizing its role in enhancing real-time data interpretation and decision-making capabilities at the edge of computing networks.

2.1. How Generative AI Works with IoT Data

Generative AI employs algorithms capable of learning from vast datasets to generate new data that resembles the original input. In the context of IoT, generative AI analyzes patterns from sensor data, environmental conditions, and user interactions. It uses this analysis to predict future states, simulate outcomes, or generate actionable insights. For instance, a generative model might analyze data from smart thermostats across a network to predict temperature adjustments that optimize energy use while maintaining comfort. By training on diverse data sources, generative AI can uncover correlations that humans or traditional analytics might miss, enabling more nuanced interpretations and predictions.
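The thermostat example can be made concrete with a deliberately simple forecasting sketch. This is not a full generative model: it applies exponential smoothing to a hypothetical reading history to produce the kind of one-step-ahead prediction such a system would act on, and every name and number below is invented for illustration.

```python
# Illustrative sketch (not a production generative model): forecast the next
# thermostat reading by exponentially smoothing recent sensor data.

def exponential_smoothing_forecast(readings, alpha=0.5):
    """Return a one-step-ahead forecast from a list of past readings."""
    if not readings:
        raise ValueError("need at least one reading")
    level = readings[0]
    for value in readings[1:]:
        level = alpha * value + (1 - alpha) * level  # blend new data with history
    return level

# Hypothetical hourly temperatures (°C) reported by a smart thermostat
history = [21.0, 21.4, 21.8, 22.1, 22.0, 22.3]
forecast = exponential_smoothing_forecast(history)
print(round(forecast, 2))
```

A real deployment would learn the smoothing behaviour from data rather than fix `alpha`, but the shape of the pipeline, recent readings in, predicted state out, is the same.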

2.2. Benefits of AI in Real-Time Data Processing

The integration of generative AI with edge computing brings several benefits to real-time IoT data processing:
1. Predictive Analytics: Generative AI can forecast future trends and anomalies in the IoT ecosystem, allowing pre-emptive actions to mitigate risks or capitalize on opportunities.
2. Enhanced Efficiency: By processing and analyzing data locally, edge-based generative AI reduces latency, enabling faster response times for critical applications, such as autonomous vehicles or emergency response systems.
3. Personalization: It enables the customization of services and products by learning user preferences and behaviours, leading to more tailored experiences.
4. Resource Optimization: Generative AI can significantly improve resource allocation, reducing waste and enhancing sustainability efforts in industries like agriculture, manufacturing, and energy.
5. Innovation Acceleration: By simulating different scenarios and outcomes, generative AI fosters innovation, allowing organizations to explore new solutions and models without requiring extensive physical trials.
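As a minimal illustration of the predictive-analytics benefit above, the sketch below flags anomalous readings in a sensor stream by comparing each value against a rolling baseline. A deployed system would use a richer generative model; the data, window, and threshold here are invented.

```python
# Hedged sketch of edge-side predictive analytics: flag readings that deviate
# sharply from a rolling baseline (a stand-in for richer generative forecasting).
from statistics import mean, stdev

def rolling_anomalies(stream, window=5, z_threshold=3.0):
    """Return (index, value) pairs for readings far outside the recent baseline."""
    flagged = []
    for i in range(window, len(stream)):
        recent = stream[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        if sigma > 0 and abs(stream[i] - mu) / sigma > z_threshold:
            flagged.append((i, stream[i]))
    return flagged

# Hypothetical vibration readings with one obvious spike
vibration = [1.0, 1.1, 0.9, 1.0, 1.05, 1.1, 5.0, 1.0]
print(rolling_anomalies(vibration))
```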

3. Edge Computing: A Primer

Edge computing represents a transformative approach to processing and analyzing data in IoT ecosystems, moving computational tasks closer to the data source. This shift is crucial to enhancing the efficiency and responsiveness of IoT devices and systems.

3.1. Definition and Importance of Edge Computing

Edge computing refers to the decentralized approach of computing where processing occurs near the source of data generation, i.e., at or near the network's edge. This model is critical for handling the massive influx of data generated by IoT devices, as it allows for immediate data processing without the latency associated with data transmission to centralized cloud servers.
The importance of edge computing lies in its ability to provide real-time or near-real-time processing and analysis, which is essential for time-sensitive applications that rely on immediate data interpretation for decision-making, such as autonomous vehicles, health monitoring systems, and industrial automation processes.

3.2. Edge Computing in IoT Ecosystems

Edge computing is pivotal in enabling smarter, more efficient systems within IoT ecosystems. By processing data on-site or close to it, edge computing reduces the need for constant data transmission back and forth between the devices and a central server, thus minimizing latency, bandwidth use, and the potential for network congestion.
This local data processing capability is crucial for applications requiring instant analysis and action, such as emergency response systems, real-time traffic management, and predictive maintenance in manufacturing. Furthermore, edge computing enhances security and privacy by processing sensitive information locally, reducing data exposure to potential vulnerabilities associated with data transmission and storage in the cloud.
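The bandwidth-saving pattern described above can be sketched in a few lines: an edge node reduces a batch of raw readings to a compact summary and forwards only out-of-range values to the cloud. Field names and limits below are hypothetical.

```python
# Minimal sketch of edge-side data reduction: upload periodic summaries plus
# any out-of-range values instead of streaming every raw reading.

def summarize_batch(readings, low=0.0, high=50.0):
    """Reduce a batch of raw readings to a compact upload payload."""
    outliers = [r for r in readings if not (low <= r <= high)]
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "avg": sum(readings) / len(readings),
        "outliers": outliers,  # only anomalous raw values leave the edge
    }

# Hypothetical temperature batch with one out-of-range value
payload = summarize_batch([18.2, 19.0, 18.7, 55.3, 18.9])
print(payload["outliers"])
```

Sending five summary fields instead of every sample is the essence of the latency and bandwidth argument; the anomaly list also limits how much sensitive raw data is exposed in transit.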

4. Synergy Between Generative AI and Edge Computing

This section delves deeper into the synergy between generative AI and edge computing, exploring how these technologies enhance data processing and decision-making at the edge of networks, and presents real-life case studies illustrating the impact of their integration.

4.1. Enhancing Data Processing and Decision-Making at the Edge

Integrating generative AI with edge computing transforms data processing and decision-making capabilities by leveraging both technologies' strengths. Generative AI's predictive analytics and simulation capabilities and edge computing's low-latency processing enable sophisticated, real-time insights and responses. This synergy is crucial for applications requiring immediate data analysis and action, such as predictive maintenance, smart city infrastructure management, and personalized healthcare.

4.2. Case Studies Illustrating the Impact of Their Integration

Healthcare Monitoring: Wearable devices equipped with edge computing and generative AI capabilities continuously analyze patient data, predicting health issues like heart failure or diabetic episodes before they occur, leading to timely interventions. Shandhi et al. [1] developed a risk-prediction model that uses digital biomarker data from wearables and symptom surveys to predict whether individuals are likely to test positive or negative for COVID-19 before they take a diagnostic test. The model, called Intelligent Testing Allocation (ITA), was designed using machine learning to classify these potential positive and negative cases.
The model aggregated data from the CovIdentify platform (6765 participants) and the MyPHD study (8580 participants), both of which integrate commercial wearable device data and electronic symptom surveys; data from 1265 individuals were analyzed, of whom 126 tested positive for COVID-19. Biometrics measured included resting heart rate and step count. The model was validated separately within three cohorts: participants with both high-frequency and device-reported daily values, participants with high-frequency data only, and participants with high-frequency Fitbit data only. [1]
The study revealed that resting heart rate (RHR) data could identify COVID-19-positive cases up to 10 days before diagnosis, earlier than step count data, which showed significance five days prior. Combining step data with RHR increased the diagnostic model's accuracy (AUC-ROC) by 7–11%, and using RHR alone enhanced the model's precision (AUC-PR) by 38–50% compared to step data. In the training and test sets, the highest accuracy was obtained from Fitbit devices, with AUC-ROCs of 0.73 ± 0.14 and 0.77 and AUC-PRs of 0.55 ± 0.21 and 0.24, respectively. The ITA method significantly boosted positivity rates, by up to 6.5 times in training and 4.5 times in test sets, covering both symptomatic and asymptomatic cases (up to 27%).
These findings indicate that wearable devices could substantially enhance diagnostic testing efficiency and mitigate test shortages.
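For readers unfamiliar with the AUC-ROC figures quoted above, the metric equals the probability that a randomly chosen positive case receives a higher risk score than a randomly chosen negative one. The sketch below computes it directly from that definition; the scores and labels are invented and unrelated to the study's data.

```python
# AUC-ROC via the Mann-Whitney formulation: the fraction of (positive,
# negative) pairs in which the positive is ranked higher. Ties count 0.5.

def auc_roc(scores, labels):
    """Pairwise AUC; O(n^2), fine for a demo. labels are 1 (pos) / 0 (neg)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

risk_scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]   # hypothetical model outputs
covid_labels = [1,   1,   0,   1,   0,   0]    # hypothetical test results
print(auc_roc(risk_scores, covid_labels))
```

An AUC of 0.5 corresponds to random ranking and 1.0 to perfect separation, which is why the study's 0.73–0.77 values indicate a usefully better-than-chance screen.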
Agricultural Optimization: Drones and sensors in precision agriculture collect and process data about crop health, soil conditions, and weather patterns at the edge. Generative AI analyzes this data to provide farmers with actionable insights to optimize irrigation, pesticide application, and harvesting.
A good example of the synergy between Generative AI and Agricultural applications is a chatbot powered by artificial intelligence and designed to assist farmers by offering solutions for various agricultural challenges and enhancing their decision-making capabilities. This chatbot responds to standard queries and focuses on identifying crop diseases and predicting weather conditions, providing valuable support to the agricultural community. [2]
The chatbot answers frequently asked questions and focuses on crop disease detection and weather forecasting. It utilizes a trainable sequence-to-sequence learning model based on a multilayered Long Short-Term Memory (LSTM) unit to map input sequences to corresponding output sequences. Additionally, the chatbot incorporates a Convolutional Neural Network (CNN) architecture for disease detection, which classifies plant images into different classes with 94% accuracy on the test data. The conversational system module of the chatbot achieved 98% accuracy on the training data using the Kisan Call Center (KCC) dataset.
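The cited chatbot relies on an LSTM sequence-to-sequence model and a CNN; as a dependency-free stand-in, the sketch below illustrates only the FAQ-retrieval component, fuzzy-matching a query against a tiny hypothetical knowledge base. The questions and answers are invented.

```python
# Hedged stand-in for the FAQ component of an agricultural chatbot: match the
# user's question to the closest known question and return its stored answer.
import difflib

FAQ = {
    "how often should i irrigate wheat": "Irrigate wheat every 7-10 days in dry weather.",
    "what causes leaf rust": "Leaf rust is caused by the fungus Puccinia triticina.",
}

def answer(query, cutoff=0.5):
    """Return the answer for the closest FAQ question, or a fallback."""
    match = difflib.get_close_matches(query.lower(), FAQ, n=1, cutoff=cutoff)
    return FAQ[match[0]] if match else "Sorry, I don't know that one yet."

print(answer("What causes leaf rust?"))
```

A trained sequence-to-sequence model generalizes far beyond such literal matching, but the request-to-response flow it serves is the same.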
Traffic Management Systems: Cities implement IoT sensors and cameras that work with edge computing to monitor traffic conditions in real-time. Generative AI analyzes this data to optimize traffic light sequences, reducing congestion and improving road safety. [6]
Approximately 30% of smart city projects incorporate AI to improve aspects like sustainability, resilience, social well-being, and livability, including solutions for urban transport. This movement is projected to grow, with the adoption of AI-driven initiatives within smart cities forecast to rise by 2025.
Veres et al. conducted comprehensive research to explore the contribution of Machine Learning (ML) and Deep Reinforcement Learning (DRL) to various challenges within Intelligent Transportation Systems (ITS), such as traffic flow assessment, fleet management, passenger search, channel estimation in Mobile Edge Computing (MEC), and accident probability estimation. These applications demonstrate significant potential for enhancing smart city operations. [7]
H. Yi, K.-H. N. Bui, and H. Jung adopted a deep-learning-based method to forecast short-term traffic patterns on a highway. Their study utilized a deep Long Short-Term Memory Recurrent Neural Network (LSTM-RNN) to analyze data from the Gyeongbu Expressway in South Korea, aiming to predict traffic congestion. Their experiments showed notable accuracy in forecasting short-term traffic flows in highway settings. [8]
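A hedged, much-simplified version of short-term traffic forecasting: where the cited work trains a deep LSTM, this sketch fits a one-lag autoregression by least squares to synthetic flow counts, capturing only the "predict the next value from the last" core of the task.

```python
# Simplified stand-in for LSTM traffic forecasting: fit y[t] = a*y[t-1] + b
# by ordinary least squares and extrapolate one step ahead.

def fit_ar1(series):
    """Return (a, b) for the lag-1 autoregression fitted to the series."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    a = cov / var
    b = my - a * mx
    return a, b

# Synthetic vehicle counts per 5-minute interval
flows = [100, 110, 121, 133, 146, 161]
a, b = fit_ar1(flows)
next_flow = a * flows[-1] + b
print(round(next_flow, 1))
```

An LSTM replaces this single linear coefficient with a learned nonlinear state, which is what lets it handle rush-hour patterns a one-lag model cannot.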
Energy Management: In smart grids, IoT devices equipped with edge computing capabilities monitor and manage the flow of electricity based on real-time demand. Generative AI predicts peak load times and potential outages, allowing for efficient energy distribution and reduced waste. Recent trends show that smart grids are making effective use of smart meter big data for applications such as load assessment and prediction, baseline estimation, demand response, load clustering, and detection of malicious data deception attacks [9], [10].
F. Liang, W. G. Hatcher, G. Xu, J. Nguyen, W. Liao, and W. Yu proposed a model that treats shared energy resources and ML-based techniques as an integrated part of the smart grid system, helping to finalize complex logical decisions based on the provided data. The ML-based model maintains system performance efficiently and steers power to critical loads in adverse and unfavourable environments. [11]
S. Mujeeb, N. Javaid, M. Ilahi, Z. Wadud, F. Ishmanov, and M. K. Afzal proposed a Deep Long Short-Term Memory (DLSTM) model to forecast electricity price and demand a day and a week ahead, and tested it using real electricity market data. Model performance was evaluated using Normalized Root Mean Square Error (NRMSE) and Mean Absolute Error (MAE) as benchmark metrics. The proposed DLSTM model surpassed existing standard methods in the accuracy of price and load forecasting. [12]
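The two evaluation metrics named above can be stated in a few lines. Note that NRMSE conventions vary; normalizing RMSE by the mean of the actual values is an assumption here, and the demand figures are illustrative only.

```python
# MAE and NRMSE in minimal form, as used to benchmark load/price forecasts.
import math

def mae(actual, predicted):
    """Mean absolute error."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def nrmse(actual, predicted):
    """RMSE normalized by the mean of the actual values (one common convention)."""
    rmse = math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))
    return rmse / (sum(actual) / len(actual))

demand = [120.0, 135.0, 128.0, 150.0]     # hypothetical actual load (MW)
forecast = [118.0, 140.0, 125.0, 149.0]   # hypothetical model forecast
print(mae(demand, forecast), round(nrmse(demand, forecast), 4))
```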
Inventory Management and Smart Manufacturing: Merging Artificial Intelligence and the Internet of Things presents a revolutionary opportunity in inventory management and smart manufacturing. AI's power in analyzing data and creating predictive models can significantly refine decision-making. Simultaneously, the IoT's interconnected sensors and devices enable the instantaneous tracking and observation of inventory levels. Combining these technologies fosters enhanced precision and efficiency in managing inventory, minimizing excess, and augmenting the agility of the supply chain.
The fusion of AI and IoT technologies enables the real-time tracking of inventory. IoT devices, like RFID tags and sensors, offer continuous insight into the whereabouts and conditions of inventory items. Such immediate tracking empowers businesses to swiftly adapt to inventory level changes, effectively reducing the risks of running out of stock or overstocking. [14] Furthermore, AI amplifies the utility of data gathered by IoT through predictive analytics. By examining both historical and real-time data, AI algorithms can predict future inventory needs, foreseeing fluctuations in demand, and recommending ideal times for stock replenishment. [15]
This predictive capacity proves invaluable in volatile market environments, enhancing the accuracy and efficiency of inventory planning. Automated replenishment represents another significant advantage brought about by the synergy between AI and IoT. AI algorithms can autonomously initiate orders for restocking based on specific criteria, such as reaching minimum stock levels or in anticipation of demand surges. This level of automation not only eases the workload associated with inventory management but also curtails the possibility of errors due to human oversight. A study conducted by S. P and K. Venkatesh in 2023 within the pharmaceutical industry illustrates how AI-driven systems can facilitate timely restocking, thereby keeping inventory at optimal levels and ensuring continuous product availability. [13]
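The automated-replenishment rule described above reduces, in its simplest form, to a reorder-point check. The sketch below uses hypothetical quantities and a basic order-up-to policy; a production system would feed the forecast from an AI model rather than a fixed number.

```python
# Sketch of automated replenishment: order when projected stock falls below a
# reorder point derived from forecast demand and supplier lead time.

def reorder_decision(on_hand, daily_forecast, lead_time_days, safety_stock):
    """Return units to order now (0 if no order is needed)."""
    reorder_point = daily_forecast * lead_time_days + safety_stock
    if on_hand > reorder_point:
        return 0
    # Order up to one extra lead time's worth of forecast demand
    target_level = reorder_point + daily_forecast * lead_time_days
    return target_level - on_hand

print(reorder_decision(on_hand=80, daily_forecast=10, lead_time_days=5, safety_stock=20))
print(reorder_decision(on_hand=60, daily_forecast=10, lead_time_days=5, safety_stock=20))
```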
In a smart factory, sensors on equipment use edge computing to process data on-site, while generative AI predicts when machines are likely to fail or require maintenance. Through real-time monitoring of equipment performance and analysis of historical data, predictive maintenance empowers manufacturers to detect potential issues early, before they escalate into major failures, thereby preventing expensive operational downtime. [16]
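A minimal sketch of that predictive-maintenance idea, assuming a linearly drifting degradation signal: fit a trend and extrapolate to the failure threshold. Real systems use far richer models; the bearing-temperature numbers here are invented.

```python
# Predictive-maintenance sketch: estimate when a degradation signal will cross
# a failure threshold by linear extrapolation of its fitted trend.

def days_until_threshold(readings, threshold):
    """Return steps until the fitted trend hits threshold, or None if no drift."""
    n = len(readings)
    xs = range(n)
    mx = sum(xs) / n
    my = sum(readings) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, readings)) / \
            sum((x - mx) ** 2 for x in xs)
    if slope <= 0:
        return None  # no upward trend, no predicted failure
    intercept = my - slope * mx
    return (threshold - intercept) / slope - (n - 1)

temps = [60.0, 61.0, 62.0, 63.0, 64.0]   # hypothetical daily bearing temps (°C)
print(days_until_threshold(temps, threshold=70.0))
```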

5. Technical Implementation

Implementing generative AI in IoT at the edge involves a multifaceted approach encompassing generative models, edge hardware, and software frameworks. Each component ensures efficient, real-time processing and analysis of IoT data, fostering innovation across various sectors.

5.1. Generative Models

Generative models are a cornerstone of generative AI, enabling the system to predict future trends, simulate outcomes, and generate actionable insights based on the analysis of vast datasets. These models include but are not limited to:
1. Generative Adversarial Networks (GANs): GANs consist of two neural networks, a generator and a discriminator, trained simultaneously through an adversarial process. The generator learns to produce data resembling the training set, while the discriminator evaluates its authenticity. GANs are particularly effective at generating realistic images, sounds, and text, which can be pivotal in enhancing user interfaces and creating simulated environments for testing IoT applications.
2. Variational Autoencoders (VAEs): VAEs are generative models that learn data distribution in a latent space, enabling them to generate new data points with similar properties to the original dataset. VAEs are useful in anomaly detection within IoT data, identifying patterns that deviate significantly from the norm, which can indicate potential issues or opportunities for optimization.
3. Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM) networks: These models are adept at processing sequential data, making them ideal for the time-series analysis common in IoT applications. They can predict future states and trends based on historical data, which is essential for predictive maintenance, energy management, and other applications requiring temporal data analysis.
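VAEs typically flag anomalies through reconstruction error: inputs the model cannot reconstruct well are unlike the training data. As a dependency-free stand-in for that idea, the sketch below "reconstructs" each profile as the mean of normal training profiles and thresholds the resulting error; the data are invented and no actual neural network is involved.

```python
# Stand-in for VAE-style anomaly detection: score each profile by its distance
# from a "reconstruction" (here, simply the mean of normal training profiles).
import math

def train_mean_profile(profiles):
    """Learn a reference profile as the element-wise mean of normal examples."""
    n = len(profiles)
    return [sum(p[i] for p in profiles) / n for i in range(len(profiles[0]))]

def reconstruction_error(profile, mean_profile):
    """Euclidean distance between a profile and its reconstruction."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(profile, mean_profile)))

normal_days = [[1.0, 2.0, 1.0], [1.2, 2.1, 0.9], [0.8, 1.9, 1.1]]
mean_profile = train_mean_profile(normal_days)
print(round(reconstruction_error([1.1, 2.0, 1.0], mean_profile), 3))  # small: normal
print(round(reconstruction_error([3.0, 0.5, 4.0], mean_profile), 3))  # large: anomaly
```

A VAE replaces the fixed mean with a learned latent-space model, so it can reconstruct many normal modes rather than one average, but the error-thresholding logic is identical.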

5.2. Edge Hardware

The deployment of generative AI models in edge computing scenarios necessitates robust edge hardware capable of supporting the computational demands of AI algorithms. Key considerations include:
1. Processing Power: Devices need CPUs and GPUs powerful enough to facilitate real-time data processing and analysis without transmitting data to centralized cloud servers.
2. Energy Efficiency: Battery-powered or low-energy devices must be able to operate efficiently in remote or inaccessible locations, ensuring continuous data analysis without frequent maintenance.
3. Storage Capacity: Sufficient memory and storage are required to accommodate AI models and the data they process, enabling local data processing and minimizing latency.
4. Connectivity Options: Devices should support a variety of connectivity standards, including Wi-Fi, Bluetooth, and cellular networks, to ensure seamless communication within the IoT ecosystem.

5.3. Software Frameworks

Software frameworks and tools are essential for developing, deploying, and managing generative AI models on edge devices. Prominent frameworks include:
1. TensorFlow and TensorFlow Lite: TensorFlow offers a comprehensive ecosystem for developing AI models, while TensorFlow Lite is optimized for on-device machine learning, enabling lightweight, efficient deployment of AI models on edge devices.
2. PyTorch and PyTorch Mobile: PyTorch is known for its flexibility and ease of use in AI development, with PyTorch Mobile extending its capabilities to edge devices, supporting efficient on-device inference.
3. EdgeX Foundry: An open-source project aimed at building a common framework for IoT edge computing, EdgeX Foundry facilitates interoperability between devices and applications, streamlining the deployment of AI models and IoT solutions.

6. Challenges, Proposed Solutions and Opportunities

The integration of generative AI and edge computing into IoT ecosystems heralds a transformative era of technological advancement. However, this fusion is not without its challenges, which must be navigated to unlock the full potential of these innovations.

6.1. Technical and Ethical Challenges

1. Complexity in Deployment and Scalability: The deployment of edge computing solutions integrated with generative AI models introduces complexity, especially in scalability and management across vast IoT networks.
Simplifying deployment and ensuring scalability can be addressed through the development of more intuitive frameworks and tools that automate aspects of deployment, configuration, and scaling. Research could focus on creating adaptive systems that automatically adjust their configurations based on the network's current needs and available resources.
2. Data Privacy and Security Risks: With an increase in data processing at the edge, there's a heightened risk to data privacy and security, necessitating advanced encryption and localized data processing protocols. Ethics involves the systematization, defence, and recommendation of what is right and wrong. [5] Within AI, this encompasses the moral responsibilities and obligations of both the AI application and its developers. [4] A table highlights principal ethical dilemmas in generative AI, covering issues like harmful content, bias, dependency, misuse, and concerns over privacy, security, and increasing digital disparities.
To mitigate these risks, research can delve into advanced cryptographic methods, such as homomorphic encryption, which allows computations on encrypted data, ensuring data privacy. Additionally, federated learning models can be explored where the model is trained across multiple decentralized devices or servers holding local data samples, without exchanging them.
3. Algorithmic Bias and Ethical Concerns: Generative AI may inadvertently perpetuate biases present in training data, leading to ethical dilemmas, especially in sensitive applications affecting individual rights or well-being. Fuelled by large language models (LLMs) and technologies like ChatGPT, generative AI has significant potential to revolutionize various facets of the healthcare sector, including patient engagement, support in clinical diagnoses, telemedicine, and health education and promotion.
However, whether healthcare professionals and patients will fully adopt these AI advancements remains to be seen. Challenges such as stringent healthcare regulations and high barriers to entry have slowed the penetration of digital innovations like generative AI into healthcare. [3] Issues around ethical AI use, data accuracy, privacy, cybersecurity, and associated risks continue to be major concerns. In the tightly controlled healthcare sector, where creating value is paramount, overreliance on AI-generated content could result in severe consequences, such as incorrect patient treatment.
Tackling bias requires diverse and representative datasets for training AI models. Research should focus on developing techniques for detecting and correcting biases in datasets and models. Ethical frameworks and guidelines for AI development and deployment, particularly in sensitive applications, must be established and adhered to.
4. Interoperability and Standardization: The lack of standardization across devices, platforms, and networks poses significant challenges in ensuring seamless interoperability and efficiency.
Developing universal protocols and standards for IoT devices and data formats can promote interoperability and standardization. Future research could focus on creating more robust and adaptable standards that can easily integrate new devices and technologies as they emerge.
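The federated-learning approach mentioned under the data-privacy challenge above can be sketched as federated averaging: each device trains locally and shares only model weights, which a server averages, so raw data never leaves the edge. The weight vectors below are plain lists and purely illustrative.

```python
# Sketch of federated averaging (FedAvg with equal client weighting): combine
# locally trained models without ever transmitting the underlying raw data.

def federated_average(client_weights):
    """Element-wise average of weight vectors contributed by several clients."""
    n = len(client_weights)
    return [sum(w[i] for w in client_weights) / n
            for i in range(len(client_weights[0]))]

# Hypothetical weights learned independently on three devices' private data
clients = [[0.2, 1.0, -0.5], [0.4, 0.8, -0.3], [0.3, 0.9, -0.4]]
global_model = federated_average(clients)
print([round(w, 2) for w in global_model])
```

In practice the average is weighted by each client's data volume and repeated over many rounds, but the privacy property, only parameters cross the network, is already visible here.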

6.2. Future Opportunities and Potential Developments

1. Autonomous Operations in IoT: Advanced generative AI models could enable IoT devices to operate autonomously, making intelligent decisions based on real-time data analysis, thus reducing human error and increasing efficiency. This prospect promises a leap toward self-sufficient systems that could revolutionize industries by reducing the need for human intervention in routine tasks.
Future research could focus on creating more resilient AI models that can adapt and learn from their environment with minimal supervision. This includes developing algorithms that can handle unexpected situations and make decisions under uncertainty.
2. Innovative Business Models and Services: The synergy could lead to the creation of new business models and services, particularly in sectors like healthcare, agriculture, and urban development, by offering personalized and predictive solutions.
3. Enhanced Data Efficiency and Sustainability: By optimizing data processing and decision-making at the source, this integration promises significant improvements in energy efficiency and sustainability across IoT applications. Developing methods and technologies to reduce the energy consumption of edge devices, including energy harvesting, and investigating more efficient data transmission protocols that minimize energy use are potential research areas that must be explored.
4. Advances in AI and Edge Technology: Ongoing research and innovation in AI and edge computing are set to address current limitations, leading to more sophisticated, efficient, and secure solutions that can handle increasingly complex tasks. With the proliferation of IoT devices, energy efficiency becomes paramount: future research can explore novel hardware and software solutions that minimize energy consumption for data processing and transmission, such as energy-harvesting technologies, low-power computing architectures, new forms of edge computing hardware that are more powerful yet energy-efficient, and AI algorithms optimized for running on low-power devices. Additionally, improving the security of these systems is crucial, as edge devices often operate in unsecured environments.
Navigating these challenges and leveraging the opportunities requires concerted efforts from stakeholders across industries, academia, and regulatory bodies. Collaborative initiatives aimed at standardization, ethical AI development, and secure, scalable deployment models will be crucial in realizing the transformative potential of integrating generative AI and edge computing in IoT ecosystems.

7. Conclusions

Integrating generative AI with IoT at the edge offers a ground-breaking approach to data interpretation, enabling real-time insights and decision-making across various sectors. As demonstrated through case studies in healthcare, manufacturing, and urban development, this synergy enhances operational efficiency, fosters innovation, and drives forward the digital transformation agenda. While challenges such as data privacy, security concerns, and the potential for bias exist, the opportunities for improvement and innovation far outweigh these hurdles. Embracing generative AI and edge computing necessitates a balanced approach, considering ethical implications while advancing technological capabilities. As we stand on the brink of this new era, the future of IoT and generative AI holds immense potential for creating more intelligent, efficient, and responsive systems.
The journey towards fully realizing this potential is fraught with challenges, but the benefits to society, industry, and the environment promise to be transformative. As we continue to explore and refine these technologies, their role in shaping the future of our digital world cannot be overstated.

GLOSSARY

1. IoT (Internet of Things): A network of physical objects ("things") embedded with sensors, software, and other technologies to connect and exchange data with other devices and systems over the Internet.
2. Edge Computing: A distributed computing framework that brings computation and data storage closer to the location where it is needed, to improve response times and save bandwidth.
3. Generative AI: A type of artificial intelligence focused on generating new data and insights based on existing datasets. It can create content, predict future trends, simulate potential outcomes, and offer solutions to complex problems.
4. Algorithmic Bias: The tendency of AI systems to make decisions or predictions that are systematically prejudiced due to erroneous assumptions in the algorithm or biases in the data used for training.
5. Ethical AI: The practice of designing, developing, and deploying AI systems in a manner that is morally right and fair, addressing concerns like privacy, security, bias, transparency, and accountability.
6. LLM (Large Language Models): A type of deep learning model designed to understand, generate, and interpret human language based on vast amounts of text data. LLMs, such as GPT (Generative Pre-trained Transformer), are capable of tasks including but not limited to text generation, translation, summarization, and question-answering, leveraging their extensive training data to produce contextually relevant responses.
7. Smart Cities: Communities that incorporate advanced technologies, sustainable practices, and a safe and attractive environment, all interconnected to enhance their overall performance.
8. Deep Reinforcement Learning: A subset of machine learning that combines deep learning and reinforcement learning principles: deep neural networks learn predictions or representations from input data, while reinforcement learning trains an agent to act through trial-and-error feedback (rewards) from its environment.
9. Blockchain: A distributed database or ledger that is shared among the nodes of a computer network. Blockchain technology ensures transparency and security in transactions, making it tamper-resistant and enabling trustless agreements.
10. Generative Adversarial Networks (GANs): A class of machine learning frameworks designed by pitting two neural networks against each other (a generator and a discriminator) to generate new, synthetic instances of data that can pass for real data.
11. Variational Autoencoders (VAEs): A type of autoencoder that provides a probabilistic manner for describing an observation in latent space. Unlike traditional autoencoders, VAEs are more stable and less likely to overfit the data.
12. Recurrent Neural Networks (RNNs): A class of neural networks where connections between nodes form a directed graph along a temporal sequence. This allows them to exhibit temporal dynamic behavior and use their internal state (memory) to process sequences of inputs.
13. Long Short-Term Memory (LSTM): A special kind of RNN, capable of learning long-term dependencies. LSTMs are particularly useful for processing sequences of data for applications like time series prediction, natural language processing, and more.
14. Convolutional Neural Networks (CNNs): A class of deep neural networks most commonly applied to analyzing visual imagery. They are particularly known for their ability to detect patterns and features in images.

References

[1]  Shandhi, M. M. H. et al., “A method for intelligent allocation of diagnostic testing by leveraging data from commercial wearable devices: a case study on COVID-19,” npj Digit. Med., 5, 130, 2022.
[2]  B. Arora, D. S. Chaudhary, M. Satsangi, M. Yadav, L. Singh and P. S. Sudhish, "Agribot: A Natural Language Generative Neural Networks Engine for Agricultural Applications," 2020 International Conference on Contemporary Computing and Applications (IC3A), Lucknow, India, 2020, pp. 28-33, doi: 10.1109/IC3A48958.2020.233263.
[3]  Ozalp, H., Ozcan, P., Dinckol, D., Zachariadis, M., & Gawer, A. (2022). “Digital Colonization” of Highly Regulated Industries: An Analysis of Big Tech Platforms’ Entry into Health Care and Education. California Management Review, 64(4), 78-107. https://doi.org/10.1177/00081256221094307.
[4]  Siau, K., & Wang, W. (2020). Artificial Intelligence (AI) Ethics: Ethics of AI and Ethical AI. Journal of Database Management, 31(2), pp. 74-87. IGI Global.
[5]  Fieser, James (2003). Ethics. Internet Encyclopedia of Philosophy.
[6]  Cugurullo, F. Urban artificial intelligence: From automation to autonomy in the smart city. Front. Sustain. Cities 2020, 2, 38.
[7]  M. Veres and M. Moussa, “Deep learning for intelligent transportation systems: A survey of emerging trends,” IEEE Transactions on Intelligent Transportation Systems, 2019.
[8]  H. Yi, K.-H. N. Bui, and H. Jung, “Implementing a deep learning framework for short term traffic flow prediction,” in WIMS, 2019.
[9]  H. Karimipour, S. Geris, A. Dehghantanha, and H. Leung, “Intelligent anomaly detection for large-scale smart grids,” in 2019 IEEE Canadian Conference of Electrical and Computer Engineering (CCECE). IEEE, 2019, pp. 1–4.
[10]  D. Du, R. Chen, X. Li, L. Wu, P. Zhou, and M. Fei, “Malicious data deception attacks against power systems: A new case and its detection method,” Transactions of the Institute of Measurement and Control, vol. 41, no. 6, pp. 1590–1599, 2019.
[11]  F. Liang, W. G. Hatcher, G. Xu, J. Nguyen, W. Liao, and W. Yu, “Towards online deep learning-based energy forecasting,” in 2019 28th International Conference on Computer Communication and Networks (ICCCN). IEEE, 2019, pp. 1–9.
[12]  S. Mujeeb, N. Javaid, M. Ilahi, Z. Wadud, F. Ishmanov, and M. K. Afzal, “Deep long short-term memory: A new price and load forecasting scheme for big data in smart cities,” Sustainability, vol. 11, no. 4, p. 987, 2019.
[13]  P, S., & Venkatesh, K. (2023). Blockchain Assisted Archimedes Optimization with Machine Learning Driven Drug Supply Management for Pharmaceutical Sector. 2023 International Conference on Advances in Computing, Communication and Applied Informatics (ACCAI), 1-8.
[14]  Maasoumy, M. (2019). Enterprise-wide AI-enabled Digital Transformation.
[15]  Oleśków-Szłapka, J., Stachowiak, A., Pawlowski, G., & Ellefsen, A. P. M. T. (2019). Multi-Agent Systems: A Case Study in an Onshore Oilfield That Explores Opportunities and Future Perspectives in Terms of IoT, AI and 5G Technology.
[16]  Bermeo-Ayerbe, M.A.; Ocampo-Martinez, C.; Diaz-Rozo, J. Data-driven Energy Prediction Modeling for Both Energy Efficiency and Maintenance in Smart Manufacturing Systems. Energy 2022, 238, 121691.