1. Large-scale RINA Experimentation on FIRE+
ARCFIRE Final Review
Experiment 3
September 2018
2. Goals
• Explore how the QoS model proposed by the RINA architecture
works in practice
• Experiment with the QTAMux scheduling policies, based on the
ΔQ theory, to differentially allocate loss and delay across multiple
QoS classes
• Demonstrate applicability of RINA as an effective solution for
transporting IP traffic in provider-based IP VPN scenarios
Large-scale RINA Experimentation on FIRE+ 2
3. RINA QoS model
• Application requirements are explicit and communicated to the DIF via the IPC API at flow
allocation time
• DIF maps QoS request into a QoS cube, and marks EFCP traffic accordingly (qos-id)
• Forwarding, scheduling, resource allocation & congestion management policies are applied consistently
• An N-DIF requests a flow to an N-1 DIF exactly the same way (consistent QoS model from app to
the wire)
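The flow-allocation step above can be sketched as follows. This is an illustrative Python model, not the IRATI IPC API (which is a C API); `QoSCube`, `FlowSpec`, `map_to_cube` and the cube bounds are all hypothetical names and values.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class QoSCube:
    qos_id: int          # value stamped on the flow's EFCP PDUs
    max_delay_ms: float  # delay bound offered by this cube
    max_loss: float      # loss fraction offered by this cube

@dataclass
class FlowSpec:
    max_delay_ms: float  # application's requested delay bound
    max_loss: float      # application's requested loss bound

def map_to_cube(spec: FlowSpec, cubes: list) -> Optional[QoSCube]:
    """Pick a cube whose bounds satisfy the request, or reject the flow."""
    feasible = [c for c in cubes
                if c.max_delay_ms <= spec.max_delay_ms
                and c.max_loss <= spec.max_loss]
    if not feasible:
        return None  # DIF rejects the flow allocation request
    # Prefer the loosest feasible cube so tighter cubes stay available.
    return max(feasible, key=lambda c: (c.max_delay_ms, c.max_loss))

# A 2x2 urgency/cherish matrix as four cubes (values illustrative only)
CUBES = [
    QoSCube(1, max_delay_ms=10, max_loss=0.001),   # high urgency, high cherish
    QoSCube(2, max_delay_ms=10, max_loss=0.05),    # high urgency, low cherish
    QoSCube(3, max_delay_ms=100, max_loss=0.001),  # low urgency, high cherish
    QoSCube(4, max_delay_ms=100, max_loss=0.05),   # low urgency, low cherish
]

cube = map_to_cube(FlowSpec(max_delay_ms=20, max_loss=0.01), CUBES)
```

The same mapping applies recursively: an N-DIF requesting a flow from an N-1 DIF passes a `FlowSpec` in exactly the same way.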
[Figure: an N-DIF stacked on two N-1 DIFs. App A and App B bind to port-ids; each IPC Process applies its own queues and scheduler to the flows it carries, down to the wire.]
4. Capturing bounds on performance metrics
• How to express requirements for bounds on performance metrics (loss, delay),
and provide them as parameters of a flow request?
• Application QoE can be expressed as a CDF
relating required delay to packet loss
– E.g. 50% of the SDUs in the flow should
experience < 10 ms delay
– 95% of the SDUs in the flow should experience
< 50 ms delay
– 5% of the SDUs can be lost
• Requirements on the CDF can be modelled
with a series of <“percentage”, “max delay”>
pairs
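A minimal sketch of checking a flow's measured delays against such <“percentage”, “max delay”> pairs; `meets_requirements` and the sample values are illustrative, not part of any RINA tool. A lost SDU is marked with NaN, so loss counts against every percentage bound.

```python
import math

def meets_requirements(delays_ms, reqs):
    """delays_ms: one entry per SDU sent; math.nan marks a lost SDU.
    reqs: list of (fraction, max_delay_ms) pairs from the QoE CDF."""
    n = len(delays_ms)
    for fraction, bound in reqs:
        within = sum(1 for d in delays_ms
                     if not math.isnan(d) and d < bound)
        if within / n < fraction:
            return False
    return True

# The example from this slide: 50% < 10 ms, 95% < 50 ms, <= 5% loss
# (the loss budget is implied by the 95% pair, since lost SDUs never
# count as "within bound").
reqs = [(0.50, 10.0), (0.95, 50.0)]
delays = [5.0] * 60 + [30.0] * 36 + [math.nan] * 4  # 4% of SDUs lost
ok = meets_requirements(delays, reqs)
```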
5. Methodology
• DIFs in experiment 3 use QTAMux scheduling policies, which
allow for the differentiation of delay and loss between multiple
classes of traffic.
• All scenarios in experiment 3 use a 2x2 QTAMux matrix, which
supports 4 QoS cubes:
– High urgency, high cherish (low latency, low loss)
– High urgency, low cherish (low latency, higher loss)
– Low urgency, high cherish (higher latency, low loss)
– Low urgency, low cherish (best effort)
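The cherish/urgency idea behind the 2x2 matrix can be sketched as a toy mux (an illustration of the principle, not the actual QTAMux policy code from the IRATI stack): urgency decides service order, and so allocates delay; cherish decides what gets dropped when the buffer fills, and so allocates loss.

```python
from collections import deque

class CherishUrgencyMux:
    """Toy 2-urgency-level mux; urgency 0 is highest, larger cherish
    values mean less cherished (dropped first)."""

    def __init__(self, capacity=8):
        self.capacity = capacity
        self.queues = {0: deque(), 1: deque()}  # urgency -> FIFO of (cherish, pdu)

    def _occupancy(self):
        return sum(len(q) for q in self.queues.values())

    def enqueue(self, pdu, urgency, cherish):
        if self._occupancy() >= self.capacity:
            # Buffer full: evict a less-cherished PDU if one exists.
            for u in self.queues:
                q = self.queues[u]
                for i, (c, _) in enumerate(q):
                    if c > cherish:
                        del q[i]
                        break
                else:
                    continue
                break
            else:
                return False  # nothing less cherished: drop the arrival
        self.queues[urgency].append((cherish, pdu))
        return True

    def dequeue(self):
        for u in sorted(self.queues):  # serve highest urgency first
            if self.queues[u]:
                return self.queues[u].popleft()[1]
        return None

mux = CherishUrgencyMux(capacity=2)
mux.enqueue("a", urgency=1, cherish=1)
mux.enqueue("b", urgency=1, cherish=1)
mux.enqueue("c", urgency=0, cherish=0)  # buffer full: evicts less-cherished "a"
served = mux.dequeue()                  # "c": highest urgency served first
```

Under load, high-urgency PDUs thus see low delay and high-cherish PDUs see low loss, matching the four QoS cubes above.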
• Each experiment run (in the IP-over-RINA scenarios) has 3 steps:
– Verification of connectivity and performance through the DIF (rina-echo, rinaperf)
– Setup of Layer 3 VPN using the iporinad application
– Verification of Layer 3 connectivity and performance (ping, iperf)
7. Scenario 1: Low-scale, single DIF, Virtual Wall
[Figure: Scenario 1 topology. A RINA-based provider network of core routers (CR) and provider edge routers (PE), built as an Ethernet Backbone DIF over point-to-point Ethernet links, connects the CE routers of six customer IP VPNs (Green, Blue, Purple, Orange, Red, Pink).]
8. Scenario 1.a: RINA-based core DIF
• Each PE allocates 4 rinaperf flows to another PE, each
with different loss/delay characteristics
• rinaperf generates traffic at a constant rate
• Traffic is generated at different rates in different executions,
to create different levels of offered load per QoS class.
Loss and delay per QoS class are measured with the echo application.
• Repeat with a FIFO-based scheduling policy and compare.
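The measurement step can be post-processed into per-QoS empirical delay CDFs roughly as below; the helper names and delay samples are made up for illustration.

```python
import math

def empirical_cdf(samples):
    """Return sorted (delay, P[delay <= x]) points for plotting."""
    xs = sorted(samples)
    n = len(xs)
    return [(x, (i + 1) / n) for i, x in enumerate(xs)]

def delay_quantile(samples, q):
    """Smallest delay d such that at least fraction q of samples <= d."""
    xs = sorted(samples)
    return xs[max(0, math.ceil(q * len(xs)) - 1)]

# One-way delay samples in ms, per QoS class (illustrative values only)
per_qos = {
    "QoS1": [1.1, 1.3, 1.2, 1.8, 1.4],
    "QoS4": [3.0, 9.5, 6.2, 12.1, 7.7],
}
p95 = {k: delay_quantile(v, 0.95) for k, v in per_qos.items()}
```

Comparing such per-class quantiles between the QTAMux and FIFO runs is what makes the differentiation (or its absence) visible.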
[Figure: CDFs of delay per QoS class (QoS1–QoS4) for flows at PE p2a, at sending periods of 50, 75 and 100 µs, with QTAMux scheduling and with no QoS differentiation.]
• Demo (part 1)
• Demo (part 2)
9. Scenario 1.b: IP VPN over a single DIF
• IP VPN over the RINA core; each PE runs an
iporinad instance
• Each CE starts an iperf session with another
router at a remote site of the same VPN (first
with TCP, then with UDP)
• Ping runs between the same pair of CEs
while iperf is active
10. Scenario 1.b results (TCP)
• Problem: the buffers introduced by
iporinad (TUN interface queues)
and its scheduling are completely
QoS-unaware
• Effects with TCP: while the
orange/green VPNs get low
latency (as expected), the penalty
on the other VPNs is too high
– The high loss introduced by the iporinad
subsystems keeps TCP in
congestion avoidance, increasing delay and
decreasing goodput of the affected iperf
flows
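The TCP effect can be illustrated with the well-known Mathis approximation, which bounds steady-state TCP throughput by MSS·sqrt(3/2)/(RTT·sqrt(p)); the MSS, RTT and loss values below are illustrative, not measurements from this experiment.

```python
import math

def mathis_throughput_bps(mss_bytes, rtt_s, loss_prob):
    """Approximate steady-state TCP throughput (bits/s) under random
    loss probability loss_prob, per the Mathis et al. model."""
    return (mss_bytes * 8) * math.sqrt(1.5) / (rtt_s * math.sqrt(loss_prob))

low_loss  = mathis_throughput_bps(1460, 0.020, 0.0001)  # 0.01% loss
high_loss = mathis_throughput_bps(1460, 0.020, 0.01)    # 1% loss
# A 100x increase in loss cuts the throughput bound by 10x
# (square-root dependence on p), which is why QoS-unaware loss in
# the TUN queues is so damaging to the affected VPNs.
```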
[Figure: CDFs of delay at sites 1–4 for each site's six flows (s1c1–s4c6) with TCP traffic; delays range up to hundreds or thousands of ms.]
11. Scenario 1.b results (UDP)
• Since UDP traffic is not flow-
controlled, iperf sends data at a
constant bit rate, despite the high
packet loss in the TUN interface
queues (30%)
• When packets enter the RINA flows,
the load level is low enough that
there is almost no difference
between QoS classes
[Figure: CDFs of delay at sites 1–4 for each site's six flows (s1c1–s4c6) with UDP traffic; delays stay within roughly 1–6 ms.]
12. Scenario 1.b results (rinaperf + ping)
• Mix in the approach of scenario 1.a
(rinaperf generates traffic in the
DIF between PEs), but measure
delay between CEs using ping
– QoS differentiation can be observed
again
• Conclusion: iporinad is a good tool for
validating IP-over-RINA scenarios,
but it cannot guarantee quality under
load
– It needs to be improved
[Figure: CDFs of delay at sites 1–4 for each site's six flows (s1c1–s4c6), with rinaperf background load in the DIF and ping between CEs; delays stay within roughly 1–8 ms.]
13. Scenario 2: multiple DIFs, RINA only
• Service provider scenario
– 41 nodes
– 2 MAN networks
– 1 core network
– 3 levels of DIFs
– 18 CPEs
[Figure: Scenario 2 topology, from the customer network through Service Provider 1 to Service Provider 2. Host and CPE (Home DIF), Access Router, Metro Edge Routers and Metro P router (Metro BB DIF, Metro service DIF), Edge Service Router (Residential customer service DIF), and Provider Edge and Backbone Routers (Backbone DIF), all interconnected by PtP DIFs. On top, a Public Internet, app-specific or VPN DIF spans access, aggregation, service edge, core and Internet edge.]
• Each CPE (per QoS level):
– rina-et flow with all other CPEs
– Four 1 Mbps rinaperf flows
• 1296 flows in total
• Physical links loaded at 90% of capacity
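As a sanity check on the flow count, one plausible way the 1296 total adds up (an assumption; the slide does not spell out the counting) is directed rina-et flows between every CPE pair at each QoS level, plus four rinaperf flows per CPE:

```python
# Hypothetical reconstruction of the 1296-flow total -- the slide does
# not give the exact counting, so this is only one consistent reading.
cpes = 18
qos_levels = 4

rina_et_flows = cpes * (cpes - 1) * qos_levels  # 18 * 17 * 4 = 1224
rinaperf_flows = cpes * 4                       # four 1 Mbps flows per CPE = 72
total_flows = rina_et_flows + rinaperf_flows    # 1296
```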
14. Scenario 2 results
• Clear differentiation between flows with high urgency and flows with low urgency
• But latency is too high, probably due to the implementation of multiple queues in stacked DIFs
within the IRATI prototype
[Figure: CDFs of delay for the rina-et instances at systems CPE63 and CPE41 (flows cpe11–cpe33, high-urgency “hu” vs low-urgency “lu”); the high-urgency flows see clearly lower delay.]
15. Scenario 3: Large-scale, IP VPNs on CPEs, QEMU testbed
• Scaled up version of
scenario 2, but with
CPEs supporting IP
VPNs
• Runs in the QEMU testbed
(not enough machines were
available on the Virtual Wall
or other FED4FIRE+
testbeds)
• 144 systems: 96 running
RINA and 48 IP-only
[Figure: Scenario 3 topology. CPE routers, access routers, MAN access and backbone routers, edge service routers and backbone routers serve hosts in twelve IP VPNs (Green, Blue, Purple, Orange, Red, Pink, Brown, Black, Yellow, Cyan, Grey, Magenta).]
16. Scenario 3 results
• Too many nodes to obtain QoS-
differentiation results within a single
physical machine
• Focus instead on
– checking that the scenario can be
set up using IRATI
– verifying connectivity between the
hosts in the same VPN (via ping)
• Demo part 1
• Demo part 2
[Figure: min/avg/max ping times for the ping sessions between host nodes (delays within 0–20 ms).]
18. Implications
• Consistent QoS model from app to wire: applications (if they wish to do so) can provide
quality requirements to the network in a technology-independent way
• No need to do DPI to identify classes of traffic and “infer” quality requirements
• EFCP traffic marking enables resource allocation policies (routing, scheduling,
congestion control) to act consistently across a DIF.
• Layers (DIFs) provide QoS requirements to lower layers in the same way; there is no need
to standardise QoS cube identifiers across DIFs (only the semantics of the quality
parameters)
• RINA can take the role of MPLS (and similar technologies) to address use cases such
as provider-based IP or Ethernet VPNs or network slices, but with more flexibility to
provide QoS, enhanced security and scalability
• Not just virtual circuits, but any combination of routing, scheduling, forwarding and
congestion control policies that works for the use case