Validate that your network actually withstands the disruptions BNM examiners and your continuity programme assume it does — capacity, failover, DDoS, DNS, segmentation, ISP redundancy.
Network availability used to be an operations team concern; today it is an examiner concern, a customer concern and a board concern. BNM's RMiT Policy Document sets explicit expectations on Malaysian financial institutions for network resiliency under the technology operations chapter. Customer-facing payment platforms, internet banking and trading systems are subject to disclosure and SLA expectations that translate every minute of outage into reputational and regulatory cost.
Network resiliency is also where assumed-but-untested architecture most commonly fails: active-active configurations that have never actually failed over, ISP redundancy that runs over shared upstream infrastructure, DNS providers with no secondary failover. The assessment exists to find these in a controlled exercise — before the live incident does.
We review network capacity headroom across all critical paths: internet ingress and egress, DC-to-DR replication links, intra-DC east-west traffic, and remote-access VPN. Headroom is benchmarked against business growth projections, peak-event traffic patterns, and known seasonality (year-end, festival, sale events).
The output is a capacity heat map identifying segments at risk of saturation in the next 12-24 months, with sized upgrade recommendations and target timelines.
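The saturation projection behind the heat map can be sketched with simple compound-growth arithmetic. The segment names, utilisation figures, growth rates and the 80% planning threshold below are illustrative assumptions, not figures from any assessment:

```python
import math

def months_to_saturation(peak_util: float, monthly_growth: float,
                         threshold: float = 0.80) -> float:
    """Months until peak utilisation crosses the planning threshold,
    assuming compound monthly growth. Already over threshold -> 0."""
    if peak_util >= threshold:
        return 0.0
    if monthly_growth <= 0:
        return math.inf  # no growth: never saturates on this model
    return math.log(threshold / peak_util) / math.log(1 + monthly_growth)

# Illustrative segments: (current peak utilisation, assumed monthly growth)
segments = {
    "internet-egress":    (0.62, 0.030),
    "dc-dr-replication":  (0.71, 0.015),
    "vpn-concentrator":   (0.45, 0.050),
}

for name, (util, growth) in segments.items():
    m = months_to_saturation(util, growth)
    flag = "AT RISK" if m <= 24 else "ok"
    print(f"{name:20s} {m:6.1f} months  {flag}")
```

A real assessment works from measured 95th-percentile utilisation rather than point estimates, but the same projection logic drives the 12-24 month risk flag.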
Pull cables, force route convergence, validate session-state preservation, measure user-impact window. Confirms the design actually works under failure.
Trigger orchestrated failover to standby, measure full RTO, validate restored capacity matches active capacity, confirm rollback procedure works cleanly.
Failover tests are run with your network operations team, scheduled out-of-business-hours, with full rollback authority retained on your side. We document every test outcome with measured timings and observed deviations from the documented runbook.
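The user-impact window is typically derived from a continuous reachability probe running through the failover. A minimal sketch, assuming probes are captured as timestamped success/failure samples (the probe format and timings here are hypothetical):

```python
from datetime import datetime, timedelta

def impact_window(probes):
    """probes: chronological list of (timestamp, ok) samples.
    Returns (first_failure, first_recovery, seconds) or None if no outage."""
    first_fail = None
    for ts, ok in probes:
        if not ok and first_fail is None:
            first_fail = ts                     # outage begins
        elif ok and first_fail is not None:
            return first_fail, ts, (ts - first_fail).total_seconds()
    return None

# Illustrative 1-second probe trace around a forced link failure
t0 = datetime(2025, 1, 1, 2, 0, 0)
trace = [(t0 + timedelta(seconds=i), ok)
         for i, ok in enumerate([True, True, False, False, False, True, True])]
result = impact_window(trace)
```

In practice we probe at sub-second intervals from multiple vantage points; the measured window, not the vendor-quoted convergence time, is what goes in the report.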
DDoS readiness covers three attack classes: volumetric (Gbps-scale flooding), protocol (TCP/UDP exhaustion), and application-layer (HTTP request floods, slow-loris). For each class we assess your protective stack — ISP-level scrubbing, cloud DDoS provider (Cloudflare, Akamai, AWS Shield, Imperva), origin protection, and application-tier rate limiting.
Where appropriate (and authorised) we run controlled scenario tests using a continuous Breach & Attack Simulation platform — see our broader BAS practice for ongoing validation cadence. The aim is not to break the network but to confirm the playbook works and the SOC sees the right signals.
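Application-tier rate limiting, the last line of the stack above, is most often a token-bucket policy. A minimal deterministic sketch (rates, burst size and the class itself are illustrative, not a specific product's API):

```python
class TokenBucket:
    """Token-bucket limiter: sustains `rate` requests/second,
    absorbs bursts up to `capacity`. Time is passed in explicitly
    so behaviour is deterministic and testable."""

    def __init__(self, rate: float, capacity: int, start: float = 0.0):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = start

    def allow(self, now: float) -> bool:
        # Replenish proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(rate=10.0, capacity=5)  # 10 req/s sustained, burst of 5
```

The assessment checks where in the chain this throttle sits (edge, WAF, origin) and whether its limits are tuned from real traffic baselines rather than defaults.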
DNS is a single point of failure for most Malaysian organisations — and most do not realise it until their primary DNS provider has an incident. We assess DNS architecture for: provider redundancy (single vs multi-provider), zone configuration consistency, DNSSEC posture, anycast vs unicast resolver path, registrar lock and account hardening, and cache poisoning resistance.
Recommendations typically include adding a secondary DNS provider, hardening the registrar account (MFA, registrar lock, transfer lock), and improving DNSSEC posture for high-value zones.
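The single-provider check itself is simple once the NS records are in hand. The sketch below uses a naive last-two-labels heuristic to group nameservers by provider; real assessments resolve the actual operator (dnspython or `dig` against the zone apex), and all hostnames here are made up:

```python
def ns_providers(ns_records):
    """Collapse NS hostnames to a naive provider label
    (last two DNS labels). Heuristic only: it mislabels providers
    that serve from multiple apex domains."""
    return {".".join(ns.rstrip(".").lower().split(".")[-2:])
            for ns in ns_records}

def single_provider_risk(ns_records) -> bool:
    """True when every delegated nameserver appears to belong
    to one provider -- a DNS single point of failure."""
    return len(ns_providers(ns_records)) < 2

# Illustrative record sets (not real customer data)
risky   = ["ns1.exampledns.com.", "ns2.exampledns.com."]
diverse = ["ns1.exampledns.com.", "ns1.otherdns.net."]
```

Two NS hostnames are not two providers: the check is about operator diversity, not record count.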
Flat networks turn small breaches into total compromises. We review network segmentation for: Tier-0 (domain controllers, certificate authorities, backup orchestration) isolation, PCI scope segmentation (cardholder data environment containment), OT/IT separation, DMZ-to-internal restrictions, and east-west microsegmentation maturity.
Output is a segmentation gap report with prioritised choke-point recommendations — typically the smallest set of policy changes that delivers the largest blast-radius reduction.
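One way the gap report is built is by diffing observed east-west flows against an allowlist for the most sensitive zone. The zone names, ports and allowlist entries below are illustrative assumptions, not a real policy:

```python
# Illustrative policy: only these flows may enter the Tier-0 zone
ALLOWLIST = {
    ("app", "tier0", 88),    # Kerberos to domain controllers (illustrative)
    ("app", "tier0", 636),   # LDAPS to domain controllers (illustrative)
}

def tier0_violations(flows):
    """flows: iterable of (src_zone, dst_zone, dst_port) observed on the wire.
    Any flow into Tier-0 not on the allowlist is a candidate choke-point
    for a blocking rule."""
    return [f for f in flows if f[1] == "tier0" and f not in ALLOWLIST]

observed = [
    ("app",     "tier0", 636),   # permitted
    ("web-dmz", "tier0", 445),   # DMZ host speaking SMB to a DC: violation
    ("corp",    "tier0", 3389),  # workstation RDP to a DC: violation
]
violations = tier0_violations(observed)
```

Ranking violations by how many a single firewall rule would close is what produces the "smallest set of policy changes, largest blast-radius reduction" ordering.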
ISP redundancy is frequently assumed but not validated. We trace upstream paths for each ISP relationship (sometimes two ‘different’ ISPs share an upstream provider — meaning a single-point-of-failure remains), validate BGP configuration, confirm route advertisements are correct, and run controlled link-failure scenarios with your network team to confirm BGP convergence works as designed.
For Malaysian datacentres we particularly look at undersea cable diversity in the upstream — recent cable cuts have repeatedly revealed shared paths between ISPs that were sold as independent.
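The shared-upstream check reduces to set intersection over observed AS-paths. A sketch, assuming AS-paths collected via each ISP toward a sample of prefixes; all ASNs below are from the private-use range and purely illustrative:

```python
def shared_upstreams(paths_a, paths_b):
    """Each argument: AS-paths (lists of ASNs) observed via one ISP.
    Skip the first hop (the ISP itself); any ASN appearing on both
    sides is a shared upstream -- a residual single point of failure
    behind two 'independent' circuits."""
    def upstreams(paths):
        return {asn for path in paths for asn in path[1:]}
    return upstreams(paths_a) & upstreams(paths_b)

# Illustrative AS-paths as seen from your edge router via each ISP
isp_a = [[64500, 64601, 64700], [64500, 64601, 64701]]
isp_b = [[64510, 64601, 64702]]
overlap = shared_upstreams(isp_a, isp_b)
```

A non-empty overlap does not automatically invalidate the redundancy, but it is exactly the question to put to both ISPs before relying on the pair for failover.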
Network resiliency findings feed directly into the broader business continuity programme. See our BCP / DR consulting service for end-to-end programme delivery — BIA, RTO/RPO definition, runbook authoring and annual DR drill facilitation.
Capacity planning headroom, failover testing for redundant components, DDoS readiness (volumetric, protocol and application-layer), DNS resiliency (provider redundancy, cache poisoning controls), network segmentation effectiveness, ISP redundancy and BGP configuration validation, and integration with the broader business continuity programme.
No. A pentest is adversarial — we try to break in. A resiliency assessment is engineering — we validate that the network keeps running under stress (volumetric attacks, hardware failure, ISP outage, regional incident). The two complement each other; many clients run both annually.
RMiT's technology operations and network security clauses (around chapter 10) impose explicit expectations on Malaysian financial institutions — including network availability, redundancy and stress-tested failover. Specific clause numbers and current text take precedence — confirm against the published RMiT Policy Document at bnm.gov.my.
Only with explicit written authorisation, scoped scenarios, agreed traffic profiles and live conference-bridge coordination — and only against environments where this is operationally safe. Typically this is done against staging or against a controlled production segment with your network team and your ISP/CDN provider on the call. We can also use your existing DDoS-test platform (or a continuous BAS DDoS module) to avoid uncontrolled live traffic.
Annually for BNM-regulated FIs, biennially for most other enterprises. Triggering events — major architecture change, ISP migration, datacentre move, new high-traffic product launch — drive interim reassessment of affected segments.
Scoping calls take 30 minutes. Full resiliency assessment (capacity through ISP) completes in 6-8 weeks.
Get a Resiliency Scoping Call