Live streaming of events over an IP network as a catalyst in media technology education and training

21 August 2020
Abstract
The paper describes how students are involved in applied research when setting up the technology and running a live event. Real-time IP transmission in broadcast environments via fiber optics will become increasingly important in the future. It is therefore necessary to create a platform in this area where students can learn how to handle IP infrastructure and fiber optics. With this in mind, we have built a fully functional TV control room that is completely IP-based. The authors present the steps in the development of the project and show the advantages of the proposed digital solutions. The IP network proves to be a source of synergy between the teams involved: the participants in the robot competition and the members of the media team. These results are presented in the paper. Our activities aim to awaken enthusiasm for research and technology in young people. Broadcasts of live events are a good opportunity for hands-on activities.

1.

INTRODUCTION

For several years, we have produced live events and live studio productions involving our students as part of our department’s curriculum (Fig. 1), gathering experience along the way. As an educational institution, we are committed to remaining at the cutting edge of technology in our field. We therefore had to adapt to new trends and technical possibilities in the world of live production.

Figure 1.

The video control room operating in production. © Marcel Kaufmann


Nowadays, thanks to advances in digital technology, we are able to transmit audio and video signals in real time over a network based on the Internet Protocol (IP).

This allows us to be even more flexible and faster in accomplishing a high-quality production. With motivated and technically interested students, we have built the core of our control room (Figure 1), which can receive camera signals from our whole campus and potentially from anywhere around the globe. The system is an essential part of the practical portion of a lecture on media technology and makes it possible to teach our students the state of the art in live video broadcasting.

2.

THEORETICAL CONSIDERATIONS

Technical Implementation

The proposed and developed system is a low-cost, self-implemented setup (Fig. 2). Given our limited budget, we had to build custom hardware: servers running a Unix-like operating system, equipped with DeckLink cards for Serial Digital Interface (SDI) input. Using FFmpeg, we convert the video signals from the main conventional camera from SDI to Network Device Interface (NDI), an open protocol developed by NewTek. NDI is widely used for video-over-IP setups to “share video across a local area network” [1]. NDI is also a very common standard for live broadcast events and fits our demands perfectly due to its flexibility and efficiency [2].
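As an illustration, such an SDI-to-NDI conversion could be launched from a small wrapper of the following shape. This is a minimal sketch, assuming an FFmpeg build compiled with DeckLink input support and with the NDI output muxer (libndi_newtek, present only in certain older builds); the device and stream names are placeholders, not our actual configuration.

```python
# Sketch: build the FFmpeg command that converts a DeckLink SDI input
# into an NDI stream announced on the local network.
# Assumptions: FFmpeg built with DeckLink input and NDI output support
# (libndi_newtek, only in older builds); names are placeholders.
import subprocess


def sdi_to_ndi_cmd(decklink_device: str, ndi_name: str) -> list[str]:
    """Return the argv for an SDI (DeckLink) -> NDI conversion."""
    return [
        "ffmpeg",
        "-f", "decklink",          # Blackmagic DeckLink capture input
        "-i", decklink_device,     # e.g. "DeckLink Mini Recorder"
        "-f", "libndi_newtek",     # NDI output muxer
        ndi_name,                  # NDI source name shown on the LAN
    ]


cmd = sdi_to_ndi_cmd("DeckLink Mini Recorder", "CAM1")
# subprocess.run(cmd)  # uncomment to actually start the encoder
```

One such process per camera is enough; NDI discovery then makes the stream visible to every receiver on the subnet.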

Figure 2.

Overview of our technical implementation.


The network infrastructure we use is a standard gigabit network with multicast and unicast support. For the bidirectional connection to the decoding system, we can use fiber optic links of up to 40 Gbit/s. This can become very important in the near future if we want to stream, for example, in UHD, 4K or even 8K quality.
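Multicast support is what lets one encoder feed several decoders without sending a separate copy of the stream to each receiver. As a minimal sketch using Python's standard socket API (the group address and port are arbitrary placeholders), a receiver joins a multicast group like this:

```python
# Sketch: join an IP multicast group so that a stream sent once by an
# encoder can be consumed by many decoders. Group/port are placeholders.
import socket
import struct

GROUP = "239.1.1.1"   # administratively scoped multicast range
PORT = 5004

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Tell the kernel (and, via IGMP, the layer 3 switch) that we want
# traffic for this group: an ip_mreq of group address + local interface.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP),
                   socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

# data, addr = sock.recvfrom(65536)  # would block until a packet arrives
sock.close()
```

The switch then forwards the group's packets only to ports with subscribed receivers, which is why IGMP-aware switches matter in such a setup.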

We implemented a mirrored system for encoding and decoding; the decoding system therefore differs only in certain points, such as the decoding technique. Instead of using FFmpeg as on our encoding system, we use GStreamer with NDI support on our decoding server. The reason for this decision was that our tests of decoding NDI streams with FFmpeg did not run seamlessly, showing significant delays and frame drops.
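A decoding pipeline of the kind described can be assembled as a gst-launch-style description. This is a sketch assuming the open-source NDI plugin for GStreamer (gst-plugin-ndi); element names such as ndisrc and ndisrcdemux differ between plugin versions, and the NDI source name is a placeholder.

```python
# Sketch: assemble a GStreamer pipeline description that receives an
# NDI stream and displays its video. Assumes the open-source
# gst-plugin-ndi; element names (ndisrc, ndisrcdemux) vary between
# plugin versions, and the source name is a placeholder.
def ndi_decode_pipeline(ndi_name: str) -> str:
    return (
        f'ndisrc ndi-name="{ndi_name}" ! '
        "ndisrcdemux name=demux "
        "demux.video ! queue ! videoconvert ! autovideosink"
    )


# Would be run as: gst-launch-1.0 <pipeline>
print(ndi_decode_pipeline("CAM1"))
```

The same string can be handed to `gst_parse_launch` inside an application instead of the command-line tool.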

A main goal of our developed system is flexibility. The system is completely modular and can be configured for different requirements and events. In addition, it is easily scalable: for example, we can increase the processing power of our encoding and decoding servers to handle a higher resolution or to drive more cameras simultaneously. Thanks to IP-based communication, the system is also completely independent of its location, and its compact, movable racks allow a simple set-up and tear-down.

For handling all the data transferred over the different subnets, layer 3 gigabit switches with 10G uplinks are indispensable. The connection between the production site and the video control room can be established with optical fiber for long-distance, high-bandwidth applications or with normal gigabit Ethernet cable for shorter distances and lower bandwidths. In special cases, if the network device supports small form-factor pluggable interfaces up to 10 Gbit/s (SFP+), 10 Gigabit Ethernet (10GE) can provide higher bandwidth over an Ethernet cable across short distances. A further possibility for a mobile production would be the use of GSM/LTE to transmit the data to the control room. In principle, the connection between the encoding and decoding systems can be any link that supports IP.
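A quick back-of-the-envelope check shows why the 10G uplinks matter. The per-stream figure used here, roughly 125 Mbit/s for an NDI HD stream, is a commonly cited ballpark and not a measurement from our setup:

```python
# Back-of-the-envelope link budgeting: how many NDI HD streams fit on
# a link? The ~125 Mbit/s per-stream figure is a commonly cited
# ballpark for NDI at 1080p, not a measurement from this setup.
NDI_HD_MBIT = 125


def streams_per_link(link_mbit: int, headroom: float = 0.75) -> int:
    """Streams that fit while reserving (1 - headroom) of the link."""
    return int(link_mbit * headroom // NDI_HD_MBIT)


print(streams_per_link(1_000))   # gigabit access link -> 6
print(streams_per_link(10_000))  # 10G uplink          -> 60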

Set-Up and Signal Flow

The signal flow is divided into audio, video and control signals, all sent over the network layer. Video signals are converted from SDI to NDI. The signal from each camera is fed into the first patch panel in the encoder rack and then into the video router, which routes all signals to the right destination. Each signal is then split. Since inputs five to eight of the vision mixer require HDMI, four SDI-to-HDMI converters are used there instead of normal SDI splitters. All signals are fed into the vision mixer to provide a preview at the recording location and into the encoding server, which sends them over the network to the decoder server. There, all signals are fed back through a patch panel to the multiview and afterwards into the final vision mixer, which mixes the live stream feed (Figure 3).
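The fan-out described above can be modeled as a simple source-to-destinations table. The device names here are illustrative examples, not our actual router configuration:

```python
# Illustrative model of the split signal flow: each router source is
# fanned out to several destinations. Names are examples only, not the
# actual router configuration.
ROUTING = {
    "cam1_sdi": ["vision_mixer_in1", "encoder_ch1"],
    # vision mixer inputs five to eight need HDMI, hence the converter:
    "cam5_sdi": ["sdi_to_hdmi_1", "encoder_ch5"],
}


def destinations(source: str) -> list[str]:
    """Return every destination a source is routed to (empty if none)."""
    return ROUTING.get(source, [])


print(destinations("cam5_sdi"))
```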

Figure 3.

Video signal flow over the network. [4]


Audio signals are initially received as standard analog signals. All signals are split so that the live audio in the studio and the live stream audio can be mixed in parallel; this is handled by two separate mixers. The stage box at the studio transmits all audio signals in both directions over the network via Dante, “a combination of software, hardware, and network protocols that deliver uncompressed, multi-channel, low-latency digital audio over a standard Ethernet network using Layer 3 IP packets” [3]. Additional audio sources are sent either via NDI together with the video or via Dante from the audio PC, where all audio routing also takes place (Figure 4).

Figure 4.

Diagram of the signal flow for audio using Dante. [4]


Currently, we use a decoding system in our live control room and an encoding system at the production location, depending on the needs of the broadcast production. Using the available local network links, our setup is flexible, and the number of operating systems can be adapted to the requirements. We can easily increase the number of studios or external cameras and mix them all together in one control room.

3.

HANDS ON BROADCAST

The system has been used on several occasions in the past years. For instance, the yearly regional final of the First Lego League competition is organized at Offenburg University (Figures 5 and 7). This offers an excellent challenge for our students, who are involved in a live event that is broadcast via a live stream. An overview of the students’ work in the control room is given in Figure 8. The control room design used there was very complex, and almost all available capacities were exhausted, with up to eight cameras and multiple return channels.

Figure 5.

Final - First Lego League 2020 - live stream operated by students as part of the hands-on training. © Alexander Weigand


Figure 6.

Student operating a camera during the live event. © Alexander Weigand


Figure 7.

Live overview - First Lego League 2020 at our university. © Alexander Weigand


Figure 8.

Operating control room during the live event. © Alexander Weigand


The particular aim of the system is not only that each student understands the whole system, but rather that the students are able to operate it. This opens up the possibility for them to delve deeper into areas of interest and to work with the system in general in order to get to know each component. Finally, students should be able to develop their own ideas and implement them in the system.

Such an educational event is a perfect opportunity for students to explore and learn the workflow of a fully IP-based live video and audio production. Participants learn to set up network devices, use them over the network, and apply their existing knowledge of video and audio in combination with the new IP-based standard (Figure 6).

Goals of Practical Work

The most important application of the developed system is as part of the courses and lectures on live video broadcasting. The practical goal for the attending students is to learn the workflow and how to realize live broadcast events and studio productions. After the participants have acquired basic knowledge of the technical fundamentals of the system, they continue with its application. The students learn the organisation and process of their work through their own involvement; finally, they have to adapt to each unique situation and apply their newly acquired knowledge. This “learning by doing” method involves all participants, who bring their individual skills into the production. As a result, all students gain a good insight into the technology they use.

4.

BROADCASTING OF LIVE STREAMS: A NEW PARADIGM FOR EDUCATION AND TRAINING IN PANDEMIC TIMES

Due to the COVID-19 pandemic, our university had to adapt to new guidelines with new forms of teaching, especially digital teaching. Live streaming is an ideal way of providing students with knowledge when it is not possible to give a lecture in the classroom. We were therefore able to use our knowledge and infrastructure to enable a better quality of learning and to achieve higher learning goals than other providers could offer in the same short time.

By adapting to the specific learning subjects, we can achieve the highest goals in Bloom’s taxonomy [5]-[7], such as synthesis and evaluation [8]. On the one hand, we achieve these goals for our own teaching subject; on the other hand, we offer the platform for teaching other topics such as media technology or science, technology, engineering, and mathematics (STEM) education [9].

5.

CONCLUSION

The system has achieved its purpose of providing state-of-the-art technology that allows students to adapt to the future of broadcasting. By combining theoretical lectures and practical experience, we are able to teach media technologies at a new level. Thanks to the modular design, we can continuously improve hardware and software to keep up with the latest technology. Students get a hands-on impression of how today’s broadcast systems work.

We can conclude that we have established a system at our university that can be used for teaching, where students can train hands-on in the use of live production equipment.

REFERENCES

[2] Aleksandersen, David, “What is NDI® (Network Device Interface)?,” (2017). https://newsandviews.dataton.com/what-is-ndi-network-device-interface

[4] Huck, Thomas, “Audio over IP im Produktionskontext,” (2020).

[5] Bloom, Benjamin S., “Taxonomie von Lernzielen im kognitiven Bereich,” Beltz Verlag, Weinheim und Basel (1972).

[6] Bloom, Benjamin S., Engelhart, Max D., Furst, Edward J., Hill, Walker H., “Taxonomie von Lernzielen im kognitiven Bereich,” 5. Auflage, Beltz Verlag (2001).

[7] Krathwohl, D. R., Bloom, B. S., Bertram, B. M., “Taxonomy of Educational Objectives, the Classification of Educational Goals. Handbook II: Affective Domain,” David McKay Co. Inc., New York (1973).

[8] Anderson, Lorin W. and Krathwohl, David R., “A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives,” Addison-Wesley, New York (2001).

[9] Hallinen, Judith, “STEM Education Curriculum,” Encyclopædia Britannica (2020).
© (2020) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Nicola Jäger, Marcel Kaufmann, Benjamin Heitz, Oliver Vauderwange, Ulrich Haiss, and Dan Curticapean "Live streaming of events over an IP network as a catalyst in media technology education and training", Proc. SPIE 11480, Optics Education and Outreach VI, 114800L (21 August 2020); https://doi.org/10.1117/12.2568843
KEYWORDS: Video, Computer programming, Cameras, Education and training, Imaging systems, Control systems, Fiber optics