Computer Science and Engineering
p-ISSN: 2163-1484 e-ISSN: 2163-1492
2025; 15(1): 14-21
doi:10.5923/j.computer.20251501.02
Received: Jan. 7, 2025; Accepted: Feb. 2, 2025; Published: Feb. 26, 2025
Resilient IP Network Architectures: Innovative Methods for Congestion Mitigation During Unplanned Failures
Simon Tembo1, Ken-Ichi Yukimatsu2, Ryota Takahashi2, Shohei Kamamura3
1Department of Electrical and Electronic Engineering, University of Zambia, Lusaka, Zambia
2Department of Computer Science and Engineering, Akita University, Akita-shi, Akita, Japan
3Department of Computer Science, Seikei University, Musashino-shi, Tokyo, Japan
Correspondence to: Simon Tembo, Department of Electrical and Electronic Engineering, University of Zambia, Lusaka, Zambia.
Copyright © 2025 The Author(s). Published by Scientific & Academic Publishing.
This work is licensed under the Creative Commons Attribution International License (CC BY).
http://creativecommons.org/licenses/by/4.0/
Research has identified significant shortcomings in modern IP backbone networks: roughly 20% of failures occur during scheduled maintenance, while the remaining 80% arise unexpectedly. To mitigate service disruption, IP Fast Reroute (IPFRR) minimizes recovery time by precomputing backup routes that enable immediate traffic redirection when a failure occurs. Among IPFRR techniques, the Multiple Routing Configurations (MRC) scheme generates backup topologies to guide rerouting; however, scaling MRC often incurs excessive resource usage, including increased demands on forwarding table space and link-state messaging. Conversely, simplifying MRC by reducing the number of backup topologies can lead to link congestion, especially under high-traffic conditions. This paper introduces a backup topology design algorithm that addresses congestion during unplanned failures. The proposed method leverages Special Nodes, nodes with high connectivity (node degree) within the backup topology, to redistribute traffic from overloaded links to alternative paths. By considering critical network conditions such as traffic matrices and topological structure, the algorithm achieves efficient load balancing across the network. Experimental evaluations show that the maximum link load can be reduced to roughly a quarter of that of traditional methods, even though both strategies employ the same number of backup topologies, which underscores the solution's scalability. Its effectiveness is especially pronounced in large-scale environments, where strategically designating a small subset of nodes (around one in five) as Special Nodes minimizes congestion and significantly strengthens overall network resilience.
Keywords: Unplanned Failures, IPFRR, Backup Topologies, Congestion Prevention, Traffic Splitting, Special Nodes
Cite this paper: Simon Tembo, Ken-Ichi Yukimatsu, Ryota Takahashi, Shohei Kamamura, Resilient IP Network Architectures: Innovative Methods for Congestion Mitigation During Unplanned Failures, Computer Science and Engineering, Vol. 15 No. 1, 2025, pp. 14-21. doi: 10.5923/j.computer.20251501.02.
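As a concrete illustration of the degree-based Special Node selection described in the abstract (ranking nodes by connectivity and designating roughly one in five as Special Nodes), the following Python sketch ranks nodes by degree and keeps the top fraction. The function name, edge-list graph representation, and the 20% default are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of degree-order Special Node selection.
# Assumption: the network is given as an undirected edge list.
from collections import defaultdict

def select_special_nodes(links, fraction=0.2):
    """Return roughly the top `fraction` of nodes ranked by node degree."""
    degree = defaultdict(int)
    for u, v in links:
        degree[u] += 1
        degree[v] += 1
    # Sort by descending degree; break ties by node identifier for determinism.
    ranked = sorted(degree, key=lambda n: (-degree[n], n))
    k = max(1, round(len(ranked) * fraction))
    return ranked[:k]
```

For example, on a small 5-node graph this picks the single highest-degree node when `fraction=0.2`; the actual paper additionally weighs traffic matrices and topology structure, which this sketch omits.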
Figure 1. Traditional IP Network versus IP Fast Reroute Network
Figure 2. Original Topology with Backup Topology [5,6]
Figure 3. Overview of the Backup System Architecture [5,6]
Figure 4. (a) Congestion challenges in the existing approach. (b) Our approach using a Special Node to redistribute hotspot traffic [5,6]
Figure 5. Hierarchical Load Distribution Architecture (HLDA) [9]
Figure 6. Selecting Special Nodes using Load Order for HLDA Topology
Figure 7. Selecting Special Nodes using Degree Order for HLDA Topology
Figure 8. Flow Diagram for the Proposed Algorithm [5,6]
Figure 9. Steps 1 & 2 for Special Nodes Selection for Proposed Algorithm
Figure 10. COST239 (11 Nodes, 25 Links) & HLDA (11 Nodes, 25 Links)
Figure 11. COST266 (26 Nodes, 49 Links)
Figure 12. Correlation between Node Degree and Traffic Volume for COST239 and HLDA Network Models [5,6]
Figure 13. A Comparative Analysis of Load Reduction in COST239 and HLDA [5,6]
Figure 14. Top K & Swapping K Methods Applied to COST239
Figure 15. Top K & Swapping K Methods Applied to COST266