General information not directly related to any of the IoU projects.

Live Audio and Video over WebRTC’s datachannel

UNINETT IoU has over the summer developed a WebRTC demonstrator which attempts something “naughty”…

As part of our work on WebRTC as well as our work within low latency collaboration tools, we decided to find an answer to the following research questions:

Is it possible to transfer live audio and video over the data-channel in WebRTC?
If yes, can we achieve lower latency with data-channels than with WebRTC media-channels?

Our demonstrator, titled WebRTC data-media, is now available (also on GitHub). In short, the demonstrator

  • consists of a node.js based server and an HTML+CSS+JavaScript based WebRTC client,
  • applies the socket.io framework to provide “rooms” for peers to communicate basic signaling,
  • sets up separate, independent data-channels for audio and video content,
  • applies “getUserMedia” to grab live audio and video from microphone and camera,
  • applies the “ScriptProcessorNode” class to grab, transfer, and play out raw audio samples,
  • applies canvas’s “drawImage” and “toDataURL” to grab, compress, and send video frames
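As a rough illustration of the audio path: the Float32 samples handed out by ScriptProcessorNode’s onaudioprocess callback must be copied into a transferable buffer before being sent over the data channel, and unpacked again on the receiving side. The helper names below (packSamples, unpackSamples) are hypothetical, not taken from the demonstrator:

```javascript
// Hypothetical helper: pack one buffer of Float32 audio samples into
// an ArrayBuffer suitable for RTCDataChannel.send(). We copy into a
// fresh buffer so the audio engine may reuse its own memory.
function packSamples(float32Samples) {
  return Float32Array.from(float32Samples).buffer;
}

// Receiving side: turn the ArrayBuffer back into Float32 samples
// that can be written into an output buffer for playout.
function unpackSamples(arrayBuffer) {
  return new Float32Array(arrayBuffer);
}
```

In the browser, the sender would call something like channel.send(packSamples(event.inputBuffer.getChannelData(0))), and the receiver would feed unpackSamples(message.data) into its playout buffer.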

The implementation of the demonstrator is a success. Both live audio and video are transferable over WebRTC data-channels. Hence the answer to our first question is a definitive “yes”.

However, measurements (to be published in our Multimedia Delay Database) show no significant improvement in delay compared to what “vanilla” WebRTC multimedia channels can offer.

For audio, delay is at best similar, but raw data-channel audio degrades in quality when buffer lengths are reduced to the supported minimum for ScriptProcessorNode, i.e. 256 samples. Packet loss/jitter is probably caused by the fact that ScriptProcessorNode’s JavaScript code is executed in the web page’s main thread. Utilizing the upcoming AudioWorklet API will potentially improve upon this, since separate threads for audio processing will be available. However, AudioWorklets are (at the time of writing) not yet supported by any browser. (Only a developer’s patch seems to exist for Chromium.)
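The latency cost of this buffering follows directly from buffer size and sample rate; the calculation below is illustrative, not demonstrator code:

```javascript
// Each ScriptProcessorNode buffer adds bufferSize / sampleRate seconds
// of delay per buffering stage.
function bufferLatencyMs(bufferSize, sampleRate) {
  return (bufferSize / sampleRate) * 1000;
}
// At the 256-sample minimum and a 48 kHz sample rate, each buffer
// adds roughly 5.3 ms per stage.
```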

For video, delay is also very similar, at best slightly better with data-channel transfer. The most significant limiting factor in this case seems to be a combination of the maximum frame rate provided by the available cameras and the necessary buffering (and buffer copying) of video frames in the code. A maximum of 30 frames per second implies an added 33 ms of delay for each frame buffered.

Attempts were made (in an early version of the demonstrator) to minimize buffering by pushing raw, uncompressed video frames across the data channel. But as the data-channel capacity was limited to ~150 Mbps, only very low-resolution video (less than VGA) could be transmitted. Hence no measurements were performed for this version. Whether data-channel capacity can be increased and/or buffer handling made more efficient by applying multi-threading via Worklets is currently an open question.
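A back-of-the-envelope calculation (illustrative only) shows why raw frames overwhelm a ~150 Mbps data channel:

```javascript
// Bitrate in Mbps for raw frames pushed over the data channel,
// given resolution, bytes per pixel, and frame rate.
function rawVideoMbps(width, height, bytesPerPixel, fps) {
  return (width * height * bytesPerPixel * fps * 8) / 1e6;
}
// VGA (640x480) RGBA at 30 fps already needs ~295 Mbps,
// nearly twice the ~150 Mbps the data channel delivered.
```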

A future version of the demonstrator will aim to implement and utilize Worklets for both audio and video processing.

(Note: This blog will be updated with diagrams and explicit results soon…)

Clean Sky and Netsys 2017

In week 11 (March 13-17) 2017, both Clean Sky‘s (an EU ITN) annual conference and the NetSys 2017 conference took place in Göttingen, Germany. UNINETT visited both events.

The Clean Sky fellows (PhD students) are all progressing steadily with their SDN-NFV topics. A majority of the works focus on optimizing different aspects of a future edge/fog computing environment. Among the topics presented (some by keynote speakers) this time were

  • ClusPR: An algorithm for optimized placement of both flows and VNF in a topology
  • Profiling the edge network: Work in progress to anonymize web logs so that they may be applied for user-interest analysis
  • Multihop middle-box selection: New DNS record suggested to enable a client to influence how a chain of middle-boxes is to be composed
  • NFV state migration: “Statelets” introduced (small state-update packets) to enable close-to-seamless migration of a VNF.
  • VNF placement in the edge-cloud: Network cost and processing cost, with energy parameters, are included in a placement algorithm. IoT is the target domain.
  • Deploying distributed applications: A VNF is just a high-performance (low-delay and/or high-throughput) micro-service. Software developers need to supply quantitative information (from code profiling) to deployment engineers. New deployment templates were suggested.

UNINETT is currently hosting one of the Clean Sky fellows and supporting him in his work on profiling user behavior to optimize data caching and computation in fog-computing contexts. Web server logs will (hopefully) be made available, after being anonymized, for profiling analysis (ref. point 2 above).

NetSys 2017 presented work from a fairly broad range of networking research topics. “Single line” summaries of the more relevant presentations, seen from a backbone operator’s point of view, follow below.

  • Sufian Hameed et al (NUCES) presented a lightweight protocol which may utilize SDN equipment in multiple domains (ASes) to block DDoS attacks efficiently.
  • Nicholas Gray et al (University of Würzburg) suggested a hot-standby regime for L4 firewalls.
  • Robert Bauer et al (Karlsruhe Institute of Technology) showed how “flow load” distribution can be realized in an SDN network. A switch with a full FIB may be offloaded by having entries moved to neighboring switches.
  • Leonhard Nobach et al (Technische Universität Darmstadt) presented how the balance between applying FPGA or COTS hardware for NFV can be optimized.
  • Keynote speaker Henning Schulzrinne (Columbia University) emphasized that IoT exposes all security deficiencies of the internet. There is currently little incentive for producers or consumers to change this, since neither is directly affected when IoT devices are exploited for e.g. DDoS attacks. Large-scale management (enrollment, updates, …) of IoT devices will be crucial in the future.
  • Cristina Muñoz et al (University of Cambridge) explained how iterative bloom-filters may be applied to reduce FIB size in a named data network (or information centric network, ICN)  node.
  • Keynote speaker Wieland Holfelder (Google Germany GmbH) recommended Google’s tensorflow.org project for machine learning.
  • Keynote speaker Rolf Stadler (KTH) showed how a prediction engine can be trained to predict QoE parameters from system KPI values only (e.g. from statistics in Linux servers’ /proc, or just statistics from network switches).
  • Claas Lorenz (genua GmbH) suggested how complex firewall rule sets may be analysed and verified efficiently.

Research and Study Network Technologies – White Paper

UNINETT contributed to the white paper, which has been published in conjunction with GEANT4 deliverable D13.1. This deliverable reports on the work carried out by GN4-1 Joint Research Activity 1 Future Network Topologies, Task 1 Current and Future Network Technologies, to investigate the trends and technologies in optical transport networks and how these can be managed to help deliver the concept of zero-touch connectivity. It covers increasing utilisation of the photonic layer; spectral sharing and alien waves; frequency and time distribution; and network dynamicity.


Alien wavelength 100G field interoperability testing

We performed a 100G field alien-wavelength test over a 1235 km DWDM path. The purpose of the test was to verify support of 100G alien wavelengths in UNINETT’s (Norway’s NREN) optical network. In addition, UNINETT wanted to gain some experience with new 100G OTN/DWDM cards from Juniper and Cisco, and prove the interoperability between them. The following tests were performed:
– 100G single-vendor alien wavelength (AW) test over Coriant hiT 7300 platform.
– 100G multi-vendor AW test over Coriant hiT 7300 platform.


NTP Clock Synch Accuracy – It’s time for microseconds

Making accurate clock signals available has been an ongoing challenge for mankind for millennia (ref Wikipedia). We have gradually increased accuracy, from half hours (sundials) down to nanoseconds (atomic clocks), over all those years.

But an accurate clock is worth little if it is not synchronized with other clocks relevant in a certain setting, e.g. a group of people with a meeting coming up, or computer systems registering and sharing transactions.

Network Time Protocol (NTP) is a standardized Internet protocol (ref rfc1305) for clock synchronization between clients and servers. Practically all modern desktops, tablets, smart-phones, front-end and back-end servers apply NTP today.

Example NTP hierarchy

NTP infrastructure is hierarchical (see Figure). Top units, “stratum 1”, synchronize with external sources of extreme accuracy, e.g. GPS or atomic clocks. Other units apply the NTP protocol to synchronize their internal (crystal-based) clocks to a parent or neighbor unit.

Units send NTP requests upward in the hierarchy at certain intervals. NTP replies are applied to gracefully adjust the local clock, taking into account potential timing disturbance added to the requests and replies by the network between the peers.
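The adjustment rests on the standard NTP on-wire calculation over four timestamps (t1: request sent, t2: request received, t3: reply sent, t4: reply received), as specified in the NTP RFCs. A minimal sketch:

```javascript
// Clock offset between client and server, estimated from the four
// NTP timestamps. Symmetric network delay cancels out.
function ntpOffset(t1, t2, t3, t4) {
  return ((t2 - t1) + (t3 - t4)) / 2;
}

// Round-trip network delay, excluding the server's processing time.
function ntpDelay(t1, t2, t3, t4) {
  return (t4 - t1) - (t3 - t2);
}
```

For example, with a server 5 ms ahead and 10 ms one-way delay, the timestamps (0, 15, 16, 21) yield an offset of 5 ms and a round-trip delay of 20 ms.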

The default request interval for Linux and FreeBSD servers and desktops is 2¹⁰ seconds, i.e. around 17 minutes. With this interval a stratum 2 server at UNINETT achieves an accuracy within 1 ms, i.e. the local clock is out of sync with the stratum 1 server by at most ±0.5 ms. Accuracy oscillates a few times per hour. See figure.
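NTP expresses request intervals as powers of two seconds; a trivial helper (illustrative only) makes the numbers concrete:

```javascript
// NTP poll intervals are 2^n seconds, where n is the poll exponent.
function pollIntervalSeconds(pollExponent) {
  return 2 ** pollExponent;
}
// Exponent 10 -> 1024 s (about 17 minutes);
// exponent 6  -> 64 s (about one minute).
```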


Clock accuracy of stratum 2 server relative to 4 of its stratum 1 clock sources.

In 2016 a request interval of 17 minutes seems rather conservative given today’s CPU and network capacities. Hence, in the evening of April 21st, UNINETT decided to shorten the request interval to 2⁶ seconds, i.e. from 17 minutes down to about 1 minute. As the figure above illustrates, this update yielded a significant increase in clock accuracy. The stratum 2 clock server in question now has stable sub-millisecond accuracy and stays in sync with its stratum 1 servers with less than 200 microseconds of offset.

Do we need this type of accuracy…? We believe so, first in scientific and back-end settings and later in more common applications. Synchronization of database transactions in distributed systems already relies on tight clock synchronization.

PhD in SDN at University of Stavanger

Aryan Taherimonfared has completed his PhD at the University of Stavanger within Software Defined Networking (SDN). His PhD advisor was Chunming Rong. During this time he has also been working for UNINETT, contributing to the UNINETT SDN project.

Thesis abstract

The contribution of this thesis is twofold. First, several architectural improvements are proposed for network monitoring services. These proposals take advantage of the data-intensive computing model and SDN mechanisms to advance the state-of-the-art in monitoring backbone and data centre networks. Second, various components of an SDN architecture framework are designed that enhance the efficacy, reliability, and manageability of a large-scale cloud infrastructure. The enhancements are particularly made to network virtualization techniques, which are the critical building blocks in the cloud service delivery.

Read the thesis at http://www.ux.uis.no/~aryan/docs/thesis/

Workshop on SDN, Summer 2015

UNINETT invited to another workshop in our series of half-day workshops on SDN at the end of the summer, August 27, 2015. Eight people attended, arriving from Transpacket, the Department of Telematics at NTNU, and UNINETT. Two participants attended remotely from Oslo.

The workshop program was the following

Presentation slides will soon become available.

Discussions were lively throughout the workshop, and many aspects of and challenges with SDN were addressed. The participants were in general satisfied with the workshop (even though attendance was somewhat lower than expected). Hence UNINETT will strive to offer another workshop in spring 2016.


SIGCOMM 2015

Close to one third of all main-track presentations at SIGCOMM 2015 in London, August 18-20, addressed challenges and experiences related to data centres. Software Defined Networking was often the actual or assumed underlying technology.

All SIGCOMM 2015 papers are available online via the conference web site.

A general impression is that most accepted work at SIGCOMM is funded by “the big players”, e.g. Google, Facebook, Microsoft, Cisco. A majority of work presented reports results from mature research often already deployed in pilot (and even production) infrastructures. Hence few “crazy” new ideas are introduced.

Fortunately the poster sessions did give room for some novel and surprising ideas, among them free space optics based intra-data centre networks with physical multicast capabilities.

This post summarises a selection of the papers presented.

  • Best paper award: Stefano Vissicchio et al from UCLouvain presented their SDN concept added on top of a link-state routed network. A central controller introduces fake nodes by communicating tailored link-state announcements to routers in the network, enabling traffic engineering on a source-destination level. If the controller fails, the system defaults back to standard link-state behaviour.
  • Keynote: Albert Greenberg from Microsoft explained how the Azure infrastructure is running close to 100% on SDN technology. 40 Gbps four-level Clos networks interconnect servers in data centres. Data centre resources are now applied to operate the data centre itself, e.g. fairly intense active monitoring of end-to-end paths by running traffic generators and sinks.
  • Policy languages: Prakash et al from University of Wisconsin-Madison presented a graph-based system for better policy conflict management. Set theory is applied. It seems to scale well, but results are non-deterministic.
  • Resource management: Several papers presented techniques to optimize placement of and access to data centre resources. Scheduling challenges were addressed. Google gave a historical summary of their data centre activities, explaining how and what they have learned is important in order to scale up their installations.
  • Wireless aspects: A set of papers looked into utilizing backscatter, i.e. superimposing signals on top of reflected or transit waves from other sources, in new ways. High-accuracy positioning with off-the-shelf wifi equipment was also addressed by several groups.
  • Video streaming: Work on optimization of content placement in content delivery networks (CDNs) was presented, as well as advanced control-theory-driven rate control in video players.
  • Physical internet: Ramakrishnan Durairajan et al from University of Wisconsin-Madison presented work on mapping the physical infrastructure of US-based ISPs. Results show that ducts are shared frequently, and as many as 80% share at least one duct. Hence care is needed to ensure true resilience when multi-homing to different ISPs.

Otto’s personal notes are available on request.

Network Performing Arts Production Workshop 2015

The annual “Network Performing Arts Production Workshop” took place on the very top floor of the Royal College of Music (RCM) in London on May 4-6, 2015. UNINETT was present in the audience (but not performing, presenting, or demonstrating anything this year).

The workshop seems to attract a balanced mix of people with technical and artistic backgrounds and interests. Approximately half of the presentations and demos were technically oriented, while the other half addressed artistic aspects and ideas.

Day 1 of the workshop summarised the background for the workshop series and gave a quick walk through of current tools in use by key participants in the community. UNINETT was listed among the key participants. The most applied set of tools seems to be

Separate presentations with further updates about Polycom and LOLA were given by representatives from the respective development communities (Polycom and GARR). Polycom units are frequently applied for master classes (100 per year at the Danish Academy of Music), but only with all musicians at one end and the instructor at the other end of the link. Due to too much latency, the instructor can only listen and comment but not play himself (e.g. not accompany a singer). LOLA, approaching its 2.0 release but already with support for digital HD cameras (CoaXPress and USB 3), enables full musical collaboration. Two demo sessions between London and Copenhagen showed the differences between the two tools clearly.

Day 2 focused on LOLA, which seems to be maturing well and gaining support. Several users are working on a “LOLA in a box” concept, which aims for a compact, portable LOLA-based kit. Several sessions addressed challenges with LOLA in real-life settings due to its demand for high network (gigabit) capacity. Challenges included network hardware not performing according to specifications, as well as interference issues between WiFi and gigabit TP cables.

During Day 1 and Day 2, several online collaborative dance performances were demonstrated. Dancers from countries across Europe collaborated. Clever choreographies made distant dancers “meet” on stage. Remote dancers were also moved (thrown) from one projection surface to another. When the “demo effect” kicked in for one of the demonstrations, interesting insight was given into the effort required to prepare for a session. Synching, resetting, and reconnecting all tools seemed far from “out of the box”.

On Day 2 and 3, two hardware-based music collaboration tools were presented, both FPGA-based. “Flexilink” switches provide a multimedia service class with guaranteed delay, as well as a best-effort class. The hardware design is to some extent inspired by ATM. “4K Gateway” provides fast transmission (and some compression) of 2x 4K video and up to 96 audio channels. The latter system was applied in an impressive violin (in Prague) and piano (in London) demonstration.

In general, it now seems that the available software and hardware tools for (encoding and) transmission and reception of musical/artistic collaboration sessions are fast enough, given sufficient bandwidth. NREN and GEANT networks also now seem well equipped to provide enough bandwidth (at small usage scale) if configured correctly. The bottleneck in the multimedia pipeline is now definitely cameras, screens, and projectors. Low frame rates (<60 Hz) and internal buffering often introduce >50 ms delays. Industrial cameras (e.g. USB 3, CoaXPress) provide promising speeds and configurability (though somewhat medium image quality), while screens and projectors are still “black boxes”, at best with a “gaming mode” option. There is a need for more “white box”, freely configurable cameras, screens, and projectors.



SDN-workshop, Spring 2014

IoU at UNINETT invited to a half-day workshop on SDN. Time and place were Tuesday May 6, 09:00-13:00, at UNINETT. In total, 14 participants attended the workshop.

The program was

  • 9:00-9:15 Welcome (Otto)
  • 9:15-9:30 Reports from recently visited conferences and workshops (Otto, Martin)
  • 9:30-11:30 Summary of ongoing work by workshop participants (15 min each: UiS, UNINETT, UNIK, HiOA, Telenor, NTNU, Transpacket, Albis)
  • 11:30-11:45 Break
  • 11:45-12:15 Potential future collaborative projects and coordination of lab facilities
  • 12:15-12:30 Research on SDN – current and upcoming PhDs (UiS, UNIK, HiOA)
  • 12:30: Lunch – served in the meeting room