IEEE Student Branch, Brno University of Technology (Vysoké učení technické v Brně), 25–27 August, Zvůle 2014. Proceedings of the student conference.
Title: Proceedings of the student conference Zvůle 2014
Editor: Ondřej Zach
Publisher: Vysoké učení technické v Brně, Faculty of Electrical Engineering and Communication
Year of publication: 2014
Edition: first
Conference organizing committee: Martin Kufa, Jan Vélim, Roman Mego, Ondřej Zach
This publication has not undergone language editing. The authors of the individual contributions are responsible for their content, originality, and citations.
ISBN 978-80-214-5005-9

Foreword

Dear colleagues, after a two-year pause we are returning to the tradition of organizing a conference for young researchers under the auspices of the IEEE Student Branch at Brno University of Technology. This year's meeting is the jubilee tenth edition, held this time in the cottage settlement of Zvůle, located in the picturesque countryside of Česká Kanada. After two weaker years, the generous financial support of the Czechoslovakia Section of IEEE has made it possible to organize a conference with a large number of participants, both active contributors presenting a paper and several visitors. At Zvůle 2014 the board of the IEEE Student Branch at Brno University of Technology can, among other things, offer you an informal, friendly atmosphere and closer ties among young researchers from all fields of the electrical-engineering universities of the Czech Republic and Slovakia. Meeting other young researchers should help you establish contacts with colleagues from other institutions; you will take away knowledge of their research and gain valuable experience in presenting your own scientific work in an academic setting. Besides the presentations given by us doctoral students, the conference will also include a lecture by Professor Jan Machač of the Department of Electromagnetic Field, FEL ČVUT, on the topic "How to write papers for IEEE journals". The professor's lecture is certainly more than a good experience, and for many of us it will make the review process smoother when submitting a paper to IEEE journals. Finally, on behalf of the entire board of the IEEE Student Branch at Brno University of Technology, I would like to thank you for your participation in and support of the Zvůle 2014 conference and to wish you a pleasantly spent time, certainly full of valuable information as well as fun. I hope that we will meet again, in even greater numbers, at next year's eleventh edition, which will again be held in some beautiful corner of our country.
On behalf of the board of the IEEE Student Branch at Brno University of Technology, Martin Kufa

Contents (Obsah)

Contributions:
Scene Change Based GOP for HTTP Adaptive Streaming Utilizing High Efficiency Video Coding
Broken Bar Analysis of the Squirrel Cage Machine
Multifunctional Non-differential Controllable 2nd-order Frequency Filter
Antennas for Radio Telescopes
USRP Setup for Energy Detection-Based Cooperative Spectrum Sensing for Cognitive Radio Networks
Surveillance Face Recognition: Challenges and Solutions
Measurement of Optical Signals Emitted by the Energetic Materials during Detonation
Errors in Recording and Compression of Stereoscopic Videos
Basic Time Synchronization Methods in Smart Grids
A New Software Tool for Physical Protection System Effectiveness Evaluation
Influence of M2M Communication on LTE Networks
Radio Frequency Remote Control
A Survey on Intrusion Detection and Prevention Systems
Simulation of Triple Play Services in NG-PON2 Networks
A Counter/Discriminator of Neutrons and Gamma Rays
Subjective Quality Assessment for HEVC
Stochastic Differential Equations in Biology
Behavior of Hardware Acceleration on Real-time Operating Systems
Wavelet Transform Based M-QAM Classification
Compression Tool for Aeronautical Data
Aircraft Wiring and Transients Caused by Lightning
Equivalent Circuits of Three-Element Filtering Antenna Array Fed by Apertures
Comparison of Computational Methods in FEKO Software

Authors of the contributions:
Moslem Amiri, Václav Přenosil; Karel Čermák; Ibrahim Ghafir, Martin Husak, Vaclav Prenosil; Tomas Horvath, Petr Munster, Radim Sifta; Eva Klejmová; Josef Polak, Lukas Langhammer, Jan Jerabek; Martin Pospisil, Roman Marsalek, Ales Prokes, Jiri Pachman, Jakub Selesovsky; Martin Šindelář; Jaroslav Kostrhoun; Dominik Kovac, Pavel Masek, Michal Jelen; David Krutílek; Martin Kufa, Zbynek Raida, Jordi Mateu; Demian Lekomtcev, Roman Marsalek; Marie Klimešová; Ladislav Šťastný, Zdeněk Bradáč; Jan Vélim; Ondrej Zach; Tobias Malach; Tereza Malachová; Pavel Masek, Jiri Hosek, Marek Dubrava; Roman Mego; Michal Mrnka; Lukáš Nekolný

A Counter/Discriminator of Neutrons and Gamma Rays

Moslem Amiri, Faculty of Informatics, Masaryk University, Brno, Czech Republic, Email: amiri@mail.muni.cz
Václav Přenosil, Faculty of Informatics, Masaryk University, Brno, Czech Republic, Email: prenosil@fi.muni.cz

Abstract—An optimum filter-based method for counting/discriminating of neutrons and gamma-rays in a mixed radiation field is presented. This technique is computationally simple, hence appropriate for field measurements. Applied to several sets of mixed neutron and photon signals obtained through different digitizers using a stilbene scintillator, this approach is analyzed and its discrimination quality is measured.

Keywords—Counter/discriminator, Optimum filter, Neutron spectroscopy, Organic scintillator.

I. INTRODUCTION

The range of applications of neutron detectors grows fast. Nowadays, neutron detectors are used for neutron imaging techniques, nuclear research, nuclear medicine applications, and safety issues, and their usage spans various branches of science including nuclear physics, biology, geology, and medicine. The main problem in neutron detection is the discrimination of neutrons from the background gamma rays. Fast neutrons produce recoil protons whose detection is the most common method to detect neutrons. Organic scintillators are widely used to detect these recoil protons.
Fast neutrons in organic scintillators produce recoil protons through (n, p) elastic scattering, and the energy of a recoil proton at the highest level is equal to the energy of the neutron [1]. Among organic scintillators, stilbene and NE-213 come with some advantages for neutron spectroscopy purposes; they have rather low light output per unit energy, but the light output induced by charged protons can be easily distinguished from that of electrons/photons. Hence, stilbene and NE-213 scintillators produce very good results using pulse shape discrimination (PSD) methods. Time-domain PSD methods are not computationally intensive, and hence are suitable for real-time applications. Classically, the following analog PSD techniques were most often used for n/γ-ray discrimination [2]: 1) rise-time inspection; 2) zero-crossing method; 3) charge comparison. Although analog techniques achieve acceptable n/γ-ray discrimination, the availability of precise and fast digitizers and various PSD algorithms has made it possible to discriminate these radiations better digitally. (This work was supported by the Technology Agency of the Czech Republic under contract No. TA01011383/2011.) Among digital PSD methods, the pulse rise-time algorithm and charge comparison are probably the most favorable ones. In this paper, we introduce a computationally simple discrimination method and calculate its separation quality. To obtain the sampled data of mixed neutron and gamma-ray pulses, we use two differently-featured digitizers: the Acqiris DP210 with 8-bit resolution and set at 1 and 2 GSamp/s, and the Acqiris DC440 with 12-bit resolution and set at 250 and 420 MSamp/s. Doing so, we could find the effect of the resolution and sampling frequency of the digitizers on the quality of the discrimination results for our novel method. Every experiment is carried out using 100,000 pulses of mixed neutron and photon signals. A stilbene scintillation detector with a 45x45 crystal was used, and the neutron-gamma radiation source used was 252Cf. A comparison among various techniques, applied to data obtained from the different digitizer types and settings, is done by using the Figure of Merit (FoM) for the neutron/gamma discrimination, defined as

FoM = S / (FWHM_n + FWHM_γ)   (1)

where S is the separation between the peaks of the two events, FWHM_γ is the full-width half-maximum (FWHM) of the spread of events classified as gamma-rays, and FWHM_n is the FWHM of the spread in the neutron peak [3]. FWHMs are calculated using Gaussian fits to the neutron and gamma-ray events on the experimental distribution plot.

II. NEUTRON AND PHOTON SIGNALS

A sample smoothed neutron is compared with a sample smoothed photon pulse in Fig. 1. These signals are obtained from the stilbene scintillator. As seen in this figure, these signals are composed of a rising and a trailing edge. The rising edges could not be exploited for discrimination purposes. On the other hand, the trailing edge of the neutron signal decays more slowly than that of the photon signal. This property could be used to separate these two radiations. However, this difference is not large enough to be easily exploited by directly applying signal processing techniques. An innovative discrimination approach is to remove the similar segments of the two signal types and apply the technique only to the differing segments.
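The figure of merit of Eq. (1) can be computed directly from the discrimination values of the two event classes. The minimal sketch below is an illustration only, not the authors' code: it assumes numpy and approximates the Gaussian fits by the sample mean and standard deviation of each class, and all names and the example distributions are hypothetical.

```python
# Hypothetical helper for Eq. (1): FoM = S / (FWHM_n + FWHM_gamma).
import numpy as np

def figure_of_merit(neutron_values, gamma_values):
    to_fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0))   # FWHM of a Gaussian = 2*sqrt(2*ln2)*sigma
    fwhm_n = to_fwhm * np.std(neutron_values)
    fwhm_g = to_fwhm * np.std(gamma_values)
    separation = abs(np.mean(neutron_values) - np.mean(gamma_values))  # S: distance between peaks
    return separation / (fwhm_n + fwhm_g)

# Example with synthetic, made-up distributions (not measured data):
rng = np.random.default_rng(0)
print(figure_of_merit(rng.normal(60, 12, 10000), rng.normal(-20, 10, 90000)))
```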
Fig. 1. Comparison of a sample smoothed neutron with a sample smoothed photon. These signals are obtained from the stilbene scintillator.

III. OPTIMUM FILTER IMPLEMENTATION

In this Section, we will use a known principle to implement an optimum filter for discrimination purposes. The principle used here is introduced in [4]. Let n(i) and g(i) be two discrete-time functions, both normalized to unity, i.e.

Σ_i n(i) = Σ_i g(i) = 1   (2)

If we compute the time function of the relative difference between n(i) and g(i) (the weights) as follows:

p(i) = (g(i) − n(i)) / (g(i) + n(i))   (3)

then an unknown function u(i), close to either n(i) or g(i), can be identified as one of them by the sign of S defined as

S = Σ_i p(i) u(i)   (4)

We use this principle to design a filter for discrimination of neutrons and gamma-rays. In Eqs. 2, 3, and 4, if we replace n(i) and g(i) with neutron and gamma-ray pulses, respectively, then if S < 0, the particle is identified as a gamma-ray, and if S > 0, as a neutron. According to Eq. 3, those parts of the neutron and photon signals that differ most will have greater weights and the similar parts will have negligible weights. The similar segments could have weights with large absolute values when they are very close to zero; but according to Eq. 4, the final effect is minimal. Since the leading edges and the end-tail segments of neutrons and gamma-rays have almost the same shape, there will be insignificant weights or effects for the corresponding points when these segments are included. However, this minimal improvement of the discrimination caused by these segments will help us better identify the particles in the low-energy region. Inclusion of these parts is directly related to the capabilities of the hardware at hand. Omitting these segments will have the benefit of a smaller number of multiplications (based on Eq. 4), but a slight decrease in the quality of the results. For this work, the area of interest starts from the point where the rising edge hits the 1% threshold level, and the end point is a constant number of samples after this starting point for all signals, such that this interval covers a signal as much as possible. In Eq. 3, a sample gamma-ray g(i) and a sample neutron n(i) are picked and used to build the weights. These samples need to be patterns representing the types of pulses contained in the whole data set. Therefore, more than one sample should be used for each pulse type to obtain better results. If we use k pulses (k > 1) from each radiation type to build the sample pulses required, then

g(i) = (1/k) Σ_{j=1}^{k} g_j(i),   n(i) = (1/k) Σ_{j=1}^{k} n_j(i)   (5)

Fig. 2. Segments of neutron and gamma-ray pulses, obtained from the DC440 digitizer (12-bit resolution, 420 MSamp/s), when normalized to unity (using Eq. 2).

Fig. 3. Weight function p(i), obtained from Eq. 3 using the two signal segments shown in Fig. 2.

Once every point of the two sample pulses is built using Eqs. 5, they are normalized to unity using Eq. 2 (as Fig. 2 illustrates) and then applied to Eq. 3 to build the weight sequence (as shown in Fig. 3). We use the constant weight sequence p(i), in conjunction with every arriving pulse, to detect that specific pulse. If u(i) is the unknown pulse to be processed, it is passed along with p(i) to Eq. 4 to compute S.
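A minimal sketch of how Eqs. (2)-(5) can be turned into a discriminator is given below. It assumes numpy, the array and function names are hypothetical, and the small epsilon guard against division by zero is an addition not discussed in the text.

```python
# Sketch of the optimum-filter discriminator of Eqs. (2)-(5); illustrative only.
import numpy as np

def build_weights(neutron_samples, gamma_samples, eps=1e-12):
    """neutron_samples, gamma_samples: arrays of shape (k, N) holding k sample
    pulses of each type over the selected window of N samples."""
    n = np.mean(neutron_samples, axis=0)     # Eq. (5): average the k sample pulses
    g = np.mean(gamma_samples, axis=0)
    n = n / np.sum(n)                        # Eq. (2): normalize each template to unity
    g = g / np.sum(g)
    return (g - n) / (g + n + eps)           # Eq. (3): relative-difference weights p(i)

def discrimination_value(pulse, p):
    """Eq. (4): S = sum_i p(i) u(i); per the text, S > 0 is read as neutron, S < 0 as gamma-ray."""
    return float(np.dot(p, pulse))
```

In use, p would be built once from training pulses, and the single dot product per arriving pulse is what keeps the method computationally simple.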
As mentioned before, S serves as the identifier for the pulse and hence can be used as the counting/discriminating factor. The sign of S for a pulse reveals its identity; using Eq. 3, neutrons will have positive signs for S while photons will have negative ones. This can be used to count the number of neutrons and photons in an experiment. Since the zero base-line is the separator between these signals, to find the efficiency of discrimination, an ideal factor to use would be the amplitude of S for a pulse. Fig. 4 illustrates the experimental distribution plot of neutrons and photons for the data obtained from the DC440 digitizer with 12-bit resolution and set at a 420 MSamp/s sampling rate. As seen, the zero discrimination value is the separator here; neutrons have positive and gamma-rays have negative discrimination values. Tab. I shows the FoM (computed using Eq. 1) and the neutron and photon counts for this data set.

Fig. 4. Discrimination of photon and neutron signals, applying the optimum filter. The pulses are obtained using the DC440 digitizer (12-bit resolution, 420 MSamp/s).

TABLE I. FOM AND COUNTS OF THE PULSES OBTAINED FROM THE DC440 DIGITIZER.
Data format         | FoM  | Neutron counts | Photon counts
12-bit, 420 MSamp/s | 1.21 | 9293           | 90707

TABLE II. FOMS AND COUNTS OF THE PULSES OBTAINED FROM THE DC440 AND DP210 DIGITIZERS UNDER DIFFERENT SAMPLING RATES.
Data format         | FoM  | Neutron counts | Photon counts
12-bit, 250 MSamp/s | 1.25 | 9032           | 90968
8-bit, 1 GSamp/s    | 1.06 | 9558           | 90442
8-bit, 2 GSamp/s    | 1.05 | 9462           | 90538

FoMs and pulse counts for the other data sets with different resolutions and sampling rates are shown in Tab. II. This method discriminates both low- and high-resolution pulses very efficiently.

IV. DISCUSSION

Two important factors affecting the FoM of a discrimination method are the resolution and sampling rate of the digitizer. According to the Nyquist criterion, the sampling rate must be greater than twice the bandwidth of the continuous digitizer input signal. The FFT of the recorded neutron and photon signals indicates frequency components up to 100 MHz [5]. Therefore, the minimum necessary sampling frequency for neutron and photon signals is about 200 MS/s. The exact impact of the sampling rate on the separation quality of a specific method depends on how the method functions, and estimation of this effect can be involved. For the approach introduced in this article, as Tabs. I and II show, increasing from the low sampling rate of 250 MHz (which is close to the minimum 200 MHz required) to 420 MHz, or increasing from the high sampling rate of 1 GHz to 2 GHz, does not improve the FoM. The factor with a greater impact on discrimination quality is digitizer resolution. The process of converting a discrete-time continuous-amplitude signal into a digital signal by expressing each sample value as a finite number of digits is called quantization. The resolution (or quantization step size) is the distance between two successive quantization levels. The error introduced in representing the continuous-valued signal by a finite set of discrete value levels is called quantization error or quantization noise. The quality of the digitizer output could be measured by the signal-to-quantization noise ratio (SQNR).
Since the quantization errors of neutron and photon signals are almost uniformly distributed over the quantization interval, the following well-known equation [6] reliably estimates the quality of a b-bit digitizer output:

SQNR(dB) = 1.76 + 6.02 b   (6)

Fig. 5. The points on the smoothed neutron and photon signals used in the PGA discrimination method: the peak (t_p, y_p) and the discrimination amplitude (t_d, y_d), taken a constant time later.

TABLE III. FOMS OF THE PGA METHOD FOR THE PULSES OBTAINED FROM VARIOUS DIGITIZERS.
Digitizer | 8-bit, 1 GS | 8-bit, 2 GS | 12-bit, 250 MS | 12-bit, 420 MS
FoM       | 0.88        | 0.91        | 0.94           | 1.00

Eq. 6 implies that the SQNR increases by approximately 6 dB for every bit added to the digitizer word length. This relationship gives the number of bits required by an application to assure a given signal-to-noise ratio. In order to verify the performance of the novel method introduced in this article, we apply the PGA method to the same pulse datasets as used for the method in this paper. The PGA method, introduced in [3], is recognized as an efficient n/γ discrimination method with a high FoM. The slower decay of the light function of a scintillator for a neutron interaction than that for a γ-ray interaction is exploited in this method. The gradient between the peak amplitude and the amplitude a specified time after the peak amplitude (called the discrimination amplitude) on the trailing edge of the pulses is compared and used as the discrimination factor. Fig. 5 illustrates the peak and discrimination amplitudes on the neutron and photon signals. The gradient is calculated using

m = Δy/Δt = (y_p − y_d) / (t_p − t_d)   (7)

where m, y_p, y_d, t_p, and t_d are the gradient, the peak amplitude (which is a constant for normalized pulses), the discrimination amplitude, the time of the peak amplitude occurrence, and the time of the discrimination amplitude occurrence, respectively. For this work, we used some training pulses to locate the best discrimination amplitude, which occurred about 36 ns after the peak of the pulse. In general, the optimal timing of the discrimination amplitude, which makes the highest difference between the two radiation types, is dependent on the scintillator properties and also on the PMT. The FoMs obtained are listed in Tab. III. A comparison shows that the novel method introduced here has better discrimination quality than the PGA method. Fig. 6 shows the best discrimination plot obtained by the PGA method.

Fig. 6. Discrimination of photon and neutron signals, applying the PGA method. The pulses are obtained using the DC440 digitizer (12-bit resolution, 420 MSamp/s).

V. CONCLUSION

In this article, we introduced a novel algorithm to discriminate the neutron and photon pulses captured in a mixed environment. Two digitizers, each featuring a different resolution and each set at two different sampling rates, were used to observe the reaction of the method to the data sampling conditions. The introduced optimum filter-based counter/discriminator is robust, i.e. it provides promising results when applied to the data recorded at either low or high resolutions, or sampled at low or high rates. Since the discrimination approach presented in this article is computationally simple, typical embedded system technologies could easily be used for its realization.
Moreover, in many industrial applications, neutron/gamma discrimination is required to be done in a real-time fashion. Discrimination of the pulses through a simple method brings about the quickness needed for real-time operations.

REFERENCES
[1] S. Budakovsky, N. Galunov, B. Grinyov, N. Karavaeva, J. K. Kim, Y.-K. Kim, N. Pogorelova, and O. Tarasenko, “Stilbene crystalline powder in polymer base as a new fast neutron detector,” Radiation Measurements, vol. 42, no. 4-5, pp. 565–568, 2007, proceedings of the 6th European Conference on Luminescent Detectors and Transformers of Ionizing Radiation (LUMDETR 2006). [Online]. Available: http://www.sciencedirect.com/science/article/pii/S1350448707001564
[2] G. Ranucci, “An analytical approach to the evaluation of the pulse shape discrimination properties of scintillators,” Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, vol. 354, no. 2-3, pp. 389–399, 1995. [Online]. Available: http://www.sciencedirect.com/science/article/pii/0168900294008868
[3] B. Mellow, M. Aspinall, R. Mackin, M. Joyce, and A. Peyton, “Digital discrimination of neutrons and γ-rays in liquid scintillators using pulse gradient analysis,” Nuclear Instruments and Methods in Physics Research Section A, vol. 578, no. 1, pp. 191–197, 2007.
[4] E. Gatti and F. de Martini, “A new linear method of discriminating between elementary particles in scintillation counters,” in Int. Symp. Nuclear Electronics, vol. 2, Belgrade, 1961, pp. 265–276.
[5] F. Belli, B. Esposito, D. Marocco, and M. Riva, “A study on the pulse height resolution of organic scintillator digitized pulses,” Fusion Engineering and Design, vol. 88, no. 68, pp. 1271–1275, 2013, proceedings of the 27th Symposium On Fusion Technology (SOFT-27); Liège, Belgium, September 24-28, 2012. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S092037961200587X
[6] J. G. Proakis and D. G. Manolakis, Digital Signal Processing, Fourth Edition. Upper Saddle River, New Jersey 07458: Prentice Hall, 2006.

Radio Frequency Remote Control for multipurpose aid for children with specific learning disabilities

Karel Čermák, Department of Applied Electronics and Telecommunications, University of West Bohemia, Univerzitni 22, 306 14 Plzen, Czech Republic, cermakk@kae.zcu.cz

Abstract—This paper describes the design idea of a remote control based on a radio frequency module. This electronic device was mainly designed to control an education aid, but it could also be used for other devices and toys that require remote control. The first part shortly introduces the project. The next parts describe the HW concept of the PC and handheld remote control, the firmware, and the implementation of an error-correcting code. The last part summarizes the project.

Keywords—RF; remote; control; toy; aid; ISM; 868MHz; RFM12B; FSK; Reed-Muller code

I. INTRODUCTION

Based on my experiences during my doctoral study I have made a decision to create a multipurpose aid for children with specific learning disabilities. This aid should help a child to improve its skills in several fields. The aid is designed for preschool and elementary-school children. It will be designed as a plastic dice (cube) or ball (sphere) which will be held in the child's hands. The inside electronics will provide optical feedback through LEDs that will display various predefined shapes/symbols in various colors.
It will also have embedded haptic feedback made by a vibrating motor and acoustic feedback (clicking) for children with visual impairment. That will be implemented with a piezoelectric transducer. The whole device will be remotely controlled by a teacher or pedagogical specialist. Therefore a remote control needs to be designed for this purpose. The visualization of the multipurpose aid is shown in Figure 1. Even though this remote control is mainly designed for the multipurpose aid, it could be adapted to other electronic devices and toys. (This work was supported by institutional support for young researchers at the University of West Bohemia in Pilsen, project SGS-2012-019.)

Fig. 2. Visualization of the multipurpose aid
Fig. 1. Block schematic of the remote control types

II. HARDWARE CONCEPT

The aid will be controlled both from a PC remote control and from a handheld remote control. The block schematic of both types is shown in Figure 2. The heart of the device is a microcontroller. Communication between the aid and the remotes is provided via a radio frequency communication module. The handheld type will be powered by a battery. It will be equipped with three push buttons which will send the corresponding commands to the aid. The PC type will be powered directly from the USB port. The commands will be prepared in the PC application, sent to the microcontroller and transmitted to the aid.

A. Microcontroller

On the market there are lots of microcontrollers and platforms which can be used for the intended purpose. In the PC version of the remote control the microcontroller will be used as a converter between USB and the SPI bus. For this purpose a small microcontroller with appropriate communication peripherals is needed. I have good experience with microcontrollers from Atmel Corporation. Development tools are free to use and they offer broad support such as application notes, well-arranged datasheets, ready-to-use examples etc. The ATtiny and ATmega families offer a powerful AVR core and lots of peripherals which can be used for this purpose. For the PC remote control I have chosen the ATmega8A-AU, which could handle the software-only USB protocol without any bus converters. It also has an SPI bus for the communication with the radio module. The situation is different for the handheld type. As the enclosure of the remote control has to be small, we have to focus on the package size. A reasonable pin count and the smallest hand-solderable package will save layout size and fit the intended enclosure. The ATtiny24A-20SSU [1] was chosen, which has enough IO lines, flash and operating memory, and it comes in a small 14-pin SMD package. The firmware can be downloaded via the ISP bus. The microcontroller firmware is written in ANSI C. Chosen parameters of the microcontroller can be found in Table I.

B. RF communication module

For communication with the aid I have chosen a cheap radio frequency module, the RFM12B-868S2P from HOPE MICROELECTRONIC [2]. This module works in the open ISM band at the frequency of 868 MHz. Frequency Shift Keying (FSK) is used as the modulation system. Inside the module we can find a whole analog front-end and a digital control unit. Therefore the use of this module is quite simple. The communication between the microcontroller and the module is provided via the SPI interface. This interface is also used for transmitting and receiving the data. Chosen parameters of the RF module can be found in Table II.
TABLE I. CHOSEN PARAMETERS OF THE MICROCONTROLLER
Flash, EEPROM, SRAM  | 2k x 8 bit, 128 B, 128 B
Max. clock frequency | 20 MHz
Supply range         | 2.7 V ÷ 5.5 V
Number of IO lines   | 12
Peripherals          | 1x 16-bit timer, 1x 8-bit timer, ADC, USI, SPI

TABLE II. CHOSEN PARAMETERS OF THE RF MODULE
Communication speed                  | up to 115.2 kbps
Supply range, maximal consumption    | 2.2 V ÷ 3.8 V, 23 mA
Output power (max.)                  | 5 dBm
Receiver sensitivity (typ.)          | -110 dBm
Range of transmitter (in open space) | > 200 m

III. FIRMWARE

The algorithm of the firmware for both types of remote controls is straightforward. The implementation of the communication between the radio module and the microcontroller is the same for both types. I have used the programming guide [3] from the HOPE company as a reference and starting point of the implementation. The initialization of the module is adopted from the guide except for the setting of the module frequency. The transmit and receive low-level routines were extensively modified with the help of the "RFM12B and AVR — quick start" document [4]. In the PC remote control the communication with the computer is implemented as software-only USB, which was adopted from the Objective Development V-USB [5]. The PC SW is written in C#.NET. The main form contains three buttons which have the same role as the buttons on the handheld remote control. The handheld remote is battery powered. When one of the three buttons is pressed, it connects the battery to the circuit and the microcontroller starts to run. At first the firmware determines which button is pressed. Then the radio module is initialized and the data are sent. For better performance in communication it is favorable to use an error-correcting code (ECC) and a simple communication protocol. For this purpose the Reed-Muller code is implemented in the firmware. It is a locally testable and decodable code which is named after Irving Reed and David Muller. RM codes are relatively easy to encode and decode, and furthermore the first-order code is very effective. RM(1,5) was even used in Mariner 9, the NASA space orbiter that helped in the exploration of Mars. I decided to use this ECC in my project.

IV. CONCLUSION

The paper has described the remote control project which was designed for the multipurpose aid for children with specific learning disabilities. The prototypes of both types were made and measured. Finally the concept is ready to be implemented in the aid and tested in the real application.

ACKNOWLEDGMENT

The author gratefully acknowledges use of the services and facilities of the Department of Applied Electronics and Telecommunication at the University of West Bohemia in Pilsen.

REFERENCES
[1] "ATtiny24/44/84 Complete" datasheet [online], Atmel Corporation, 2010. http://www.atmel.com/Images/doc8006.pdf.
[2] "RFM12B Universal ISM Band FSK Transceiver" datasheet [online], Hope Microelectronics co., Ltd, 2006. http://www.hoperf.com/upload/rf/RFM12B.pdf.
[3] "RF12B programming guide" [online], Hope Microelectronics co., Ltd, 2006. http://www.hoperf.com/upload/rf/RF12B_code.pdf
[4] "RFM12B and AVR — quick start" [online], http://www.hoperf.com/upload/rf/RF12B_code.pdf.
[5] "V-USB - A Firmware-Only USB Driver for Atmel AVR Microcontrollers" [online], Objective Development, 2014. http://www.obdev.at/products/vusb/index.html.
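As a side note to Section III of the preceding paper, the sketch below shows generic first-order Reed-Muller encoding, RM(1, m). It is a textbook illustration in Python, not the author's ANSI C firmware, and the function name and example message are hypothetical.

```python
# Generic first-order Reed-Muller encoder RM(1, m): m+1 information bits are
# mapped to a 2**m-bit codeword; RM(1,5) gives 6 data bits per 32-bit codeword.

def rm1_encode(bits, m=5):
    """bits: list of m+1 information bits [b0, b1, ..., bm]."""
    assert len(bits) == m + 1
    codeword = []
    for x in range(2 ** m):                        # evaluate the affine Boolean function at every point
        value = bits[0]                            # b0 multiplies the all-ones generator row
        for i in range(m):
            value ^= bits[i + 1] & ((x >> i) & 1)  # b_(i+1) multiplies the i-th coordinate function
        codeword.append(value)
    return codeword

print(rm1_encode([1, 0, 1, 1, 0, 1]))              # one 32-bit codeword for a 6-bit message
```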
A Survey on Intrusion Detection and Prevention Systems

Ibrahim Ghafir, Faculty of Informatics, Masaryk University, Brno, Czech Republic, ghafir@mail.muni.cz
Martin Husak, Institute of Computer Science, Masaryk University, Brno, Czech Republic, husakm@ics.muni.cz
Vaclav Prenosil, Faculty of Informatics, Masaryk University, Brno, Czech Republic, prenosil@fi.muni.cz

Abstract—The World Wide Web has evolved from a system for serving an interconnected set of static documents to what is now a powerful, versatile, and large platform for application delivery and information dissemination. Companies and organizations have increasingly put critical resources and sensitive data online. Unfortunately, with the web's explosive growth in power and popularity has come a concomitant increase in both the number and impact of cyber criminals. The magnitude of the problem has prompted much interest within the security community towards researching mechanisms that can mitigate this threat. To this end, intrusion detection and prevention systems (IDPSs) have been proposed as a potential means of identifying and preventing the successful exploitation of computer networks. In this paper we present an overview of the current intrusion detection and prevention system methodologies and offer a clear explanation of each methodology. In addition we provide a comparison between these methodologies to easily grasp the overall picture of IDPS.

I. INTRODUCTION

The Internet is omnipresent, with a currently estimated size of approximately 1.37 billion unique pages as indexed by the major search engines [1], and the World Wide Web has evolved from a system for serving an interconnected set of static documents to what is now a powerful, versatile, and large platform for application delivery and information dissemination. Companies and organizations have increasingly put critical resources and sensitive data online. Unfortunately, with the web's explosive growth in power and popularity has come a concomitant increase in both the number and impact of cyber criminals. Cybercrime is attractive to criminals because they run a low risk of being caught and prosecuted for their crimes. The result is that a complete industry has evolved aimed at committing cybercrimes, and virtually all organizations face increasing threats to their networks and the services they provide. Governments, on the other hand, have also found that cyberspace can be used to spy on other states and can be an arena for warfare. At present the cost of cybercrime, criminal activities on cyber infrastructures, is considered to be somewhere between 100 billion and 1 trillion US dollars annually worldwide [2]. The magnitude of the problem has prompted much interest within the security community towards researching mechanisms that can mitigate this threat. To this end, intrusion detection and prevention systems (IDPSs) have been proposed as a potential means of identifying and preventing the successful exploitation of computer networks. As defined in [3], an intrusion is a sequence of related actions performed by a malicious adversary that results in the compromise of a target system. It is assumed that the actions of the intruder violate a given security policy. The remainder of this paper is organized as follows. Section II presents an overview of the current intrusion detection and prevention system methodologies and offers a clear explanation of each methodology. A comparison between IDPS methodologies is provided in Section III. Section IV concludes the paper.
II. INTRUSION DETECTION AND PREVENTION SYSTEMS (IDPS) METHODOLOGIES

Intrusion detection is the process of monitoring the events occurring in a computer system or network and analyzing them for signs of possible incidents, which are violations or imminent threats of violation of computer security policies, acceptable use policies, or standard security practices. Incidents have many causes, such as malware (e.g., worms, spyware), attackers gaining unauthorized access to systems from the Internet, and authorized users of systems who misuse their privileges or attempt to gain additional privileges for which they are not authorized. Although many incidents are malicious in nature, many others are not; for example, a person might mistype the address of a computer and accidentally attempt to connect to a different system without authorization. An intrusion detection system (IDS) is software that automates the intrusion detection process. An intrusion prevention system (IPS) is software that has all the capabilities of an intrusion detection system and can also attempt to stop possible incidents. There are many different methodologies used by IDPS to detect changes on the systems they monitor. These changes can be external attacks or misuse by internal personnel. Among the many methodologies, four stand out and are widely used. These are the signature based, anomaly based, stateful protocol analysis based, and hybrid based.

A. Anomaly-based methodology

In [4] the authors present that the anomaly based methodology works by comparing observed activity against a baseline profile. The baseline profile is the learned normal behavior of the monitored system and is developed during the learning period, where the IDPS learns the environment and develops a normal profile of the monitored system. This environment can be networks, users, systems and so on. The profile can be fixed or dynamic. A fixed profile does not change once established, while a dynamic profile changes as the monitored system evolves. A dynamic profile adds extra overhead to the system as the IDPS continues to update the profile, which also opens it to evasion. An attacker can evade the IDPS that uses a dynamic profile by spreading the attack over a long time period. In doing so, her attack becomes part of the profile as the IDPS incorporates her changes into the profile as normal system changes. Using a predefined threshold, any deviations that fall outside the threshold are reported as violations. A fixed profile is very effective at detecting new attacks since any change from normal behavior is classified as an anomaly. Anomaly based methodologies can detect zero-day attacks on the environment without any updates to the system. The general architecture of an anomaly based IDPS system is shown in Figure 1. The monitored environment is monitored by the detector that examines the observed events against the baseline profile. If the observed events match the baseline, no action is taken; if they do not match the baseline profile but are within the acceptable threshold range, then the profile is updated. If the observed events do not match the baseline profile and fall outside the threshold range, they are marked as an anomaly and an alert is issued [5].

Fig. 1. Anomaly-based methodology architecture.
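A minimal, hypothetical sketch of the baseline-profile logic described above is given here: a profile of a single metric is learned during a training period and observed events are compared against it, with the profile optionally updated when the deviation stays within the threshold. The z-score rule and the update factor are illustrative choices only and are not taken from [4] or [5].

```python
# Hypothetical illustration of the anomaly-based methodology in Fig. 1.
import statistics

class AnomalyDetector:
    def __init__(self, training_samples, threshold=3.0, dynamic=False):
        self.mean = statistics.fmean(training_samples)    # learned baseline profile
        self.std = statistics.pstdev(training_samples) or 1.0
        self.threshold = threshold
        self.dynamic = dynamic                             # dynamic profiles keep adapting

    def observe(self, value):
        score = abs(value - self.mean) / self.std          # deviation from the baseline
        if score > self.threshold:
            return "alert"                                 # outside the threshold: anomaly
        if self.dynamic:                                   # within the threshold: update the profile
            self.mean = 0.99 * self.mean + 0.01 * value
        return "no action"

detector = AnomalyDetector([42, 40, 45, 39, 41, 44, 43], dynamic=True)
print(detector.observe(44), detector.observe(400))         # -> no action, alert
```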
Anomaly intrusion detection methodology uses three general techniques for detecting anomalies and these are the statistical anomaly detection, Knowledge/data-mining, and machine learning based. 1) Statistical-based A-NIDS techniques: In statistical-based techniques, the network traffic activity is captured and a profile representing its stochastic behavior is created. This profile is based on metrics such as the traffic rate, the number of packets for each protocol, the rate of connections, the number of different IP addresses, etc. Two datasets of network traffic are considered during the anomaly detection process: one corresponds to the currently observed profile over time, and the other is for the previously trained statistical profile. As the network events occur, the current profile is determined and an anomaly score estimated by comparison of the two behaviors. The score normally indicates the degree of irregularity for a specific event, such that the intrusion detection system will flag the occurrence of an anomaly when the score surpasses a certain threshold [6]. The earliest statistical approaches, both network oriented and host oriented IDS, corresponded to univariate models, which modeled the parameters as independent Gaussian random variables [7], thus defining an acceptable range of values for every variable. Later, multivariate models that consider the correlations between two or more metrics were proposed [8]. These are useful because experimental data have shown that a better level of discrimination can be obtained from combinations of related measures rather than individually. Other studies have considered time series models [9], which use an interval timer, together with an event counter or resource measure, and take into account the order and the inter-arrival times of the observations as well as their values. Thus, an observed traffic instance will be labeled as abnormal if its probability of occurrence is too low at a given time. Apart from their inherent features for use as anomaly-based techniques, statistical A-NIDS approaches have a number of virtues. Firstly, they do not require prior knowledge about the normal activity of the target system; instead, they have the ability to learn the expected behavior of the system from observations. Secondly, statistical methods can provide accurate notification of malicious activities occurring over long periods of time. However, some drawbacks should also be pointed out. First, this kind of A-NIDS is susceptible to be trained by an attacker in such a way that the network traffic generated during the attack is considered as normal. Second, setting the values of the different parameters/metrics is a difficult task, especially because the balance between false positives and false negatives is affected. Moreover, a statistical distribution per variable is assumed, but not all behaviors can be modeled by using stochastic methods. Furthermore, most of these schemes rely on the assumption of a quasi-stationary process, which is not always realistic. 2) Knowledge-based techniques: The so-called expert system approach is one of the most widely used knowledge-based IDS schemes. However, like other A-NIDS methodologies, expert systems can also be classified into other, different categories [7] [10]. Expert systems are intended to classify the audit data according to a set of rules, involving three steps. First, different attributes and classes are identified from the training data. 
Second, a set of classification rules, parameters or procedures is deduced. Third, the audit data are classified accordingly. More restrictive/particular in some senses are specification-based anomaly methods, for which the desired model is manually constructed by a human expert, in terms of a set of rules (the specifications) that seek to determine legitimate system behavior. If the specifications are complete enough, the model will be able to detect illegitimate behavioral patterns. Moreover, the number of false positives is reduced, mainly because this kind of system avoids the problem of harmless activities, not previously observed, being reported as intrusions [11]. Specifications could also be developed by using some kind of formal tool. For example, the finite state machine (FSM) methodology (a sequence of states and transitions among them) seems appropriate for modeling network protocols [12]. For this purpose, standard description languages such as N-grammars, UML and LOTOS can be considered. The most significant advantages of current approaches to anomaly detection are those of robustness and flexibility. Their main drawback is that the development of high-quality knowledge is often difficult and time-consuming [13]. This problem, however, is common to other A-NIDS methods for which the notion of normality is obtained exclusively by analyzing training data.

3) Machine learning-based A-NIDS schemes: Machine learning techniques are based on establishing an explicit or implicit model that enables the patterns analyzed to be categorized. A singular characteristic of these schemes is the need for labeled data to train the behavioral model, a procedure that places severe demands on resources. In many cases, the applicability of machine learning principles coincides with that of the statistical techniques, although the former is focused on building a model that improves its performance on the basis of previous results [14]. Hence, a machine learning A-NIDS has the ability to change its execution strategy as it acquires new information. Although this feature could make it desirable to use such schemes for all situations, the major drawback is their resource-expensive nature.

B. Signature-based methodology

The signature based methodology works by comparing observed signatures to the signatures on file. This file can be a database or a list of known attack signatures. Any signature observed on the monitored environment that matches the signatures on file is flagged as a violation of the security policy or as an attack. In [15] the authors mention that the signature based IDPS has little overhead since it does not inspect every activity or network traffic on the monitored environment. Instead it only searches for known signatures in the database or file. Unlike the anomaly based methodology, the signature based methodology system is easy to deploy since it does not need to learn the environment. This methodology works by simply searching, inspecting, and comparing the contents of captured network packets for known threat signatures [16]. The signature based methodology is very effective against known attacks/violations, but it cannot detect new attacks until it is updated with new signatures [17].

Fig. 2. Signature-based methodology architecture.

In [5] the authors present the general architecture of a signature based methodology, as shown in Figure 2.
This architecture uses the detector to find and compare activity signatures found in the monitored environment against the known signatures in the signature database. If a match is found, an alert is issued; if there is no match, the detector does nothing. In [17] the authors declare that signature based IDPS are easy to evade since they are based on known attacks and are dependent on new signatures being applied before they can detect new attacks. Signature based detection systems can be easily bypassed by attackers who modify known attacks and target systems that have not been updated with new signatures that detect the modification. The signature based methodology requires significant resources to keep up with the potentially infinite number of modifications to known threats. The signature based methodology is simpler to modify and improve since its performance is mainly based on the signatures or rules deployed.

1) Rule-based languages: The rule-based expert system is the most widely used approach to misuse detection. The patterns of known attacks are specified as rule sets and a forward-chaining expert system is usually used to look for signs of intrusions. Here we find two rule-based languages, the rule-based sequence evaluation language (RUSSEL) [18] and the production-based expert system tool set (P-BEST) [19]. Other rule-based languages exist, but they are all similar in the sense that they all specify known attack patterns as event patterns.

2) Expression matching: The simplest form of misuse detection is expression matching, which searches an event stream (log entries or network traffic) for occurrences of specific patterns/signatures. A simple example would be ^GET[^$]*/etc/passwd$, which checks for something that looks like an HTTP request for the UNIX password file. Signatures can be very simple to construct, however, especially when combined with protocol-aware field decomposition [20].

3) Petri Automata: In [21] and [22], misuse detection is viewed as a pattern-matching process. They proposed an abstract hierarchy for classifying intrusion signatures (i.e. attack patterns) based on the structural interrelationships among the events that compose the signature. Events in such a hierarchy are high-level events that can be defined in terms of low-level audit trail events and used to instantiate the abstract hierarchy into a concrete one. A benefit of this classification scheme is that it clarifies the complexity of detecting the signatures in each level of the hierarchy. In addition, it also identifies the requirements that patterns in all categories of the classification must meet to represent the full range of commonly occurring intrusions (i.e. the specification of context, actions and invariants in intrusion patterns). In [21], colored Petri nets are adopted to represent attack signatures, with guards representing signature contexts and vertices representing system states. User-specified actions (e.g., assignments to variables) may be associated with such patterns and then executed when patterns are matched. The adapted colored Petri nets are called colored Petri automata (CPA). A CPA represents the transition of system states along paths that lead to intruded states. A CPA is also associated with pre- and post-conditions that must be satisfied before and after the match, as well as invariants (i.e. conditions) that must be satisfied while the pattern is being matched.
4) Genetic algorithm: The GASSATA system [23] uses a genetic algorithm to search for the combination of known attacks (expressed as a binary vector, each element indicating the presence of a particular attack) that best matches the observed event stream. A hypothesis vector is evaluated based on the risk associated with the attacks involved and a quadratic penalty function for mismatched details. This technique, like the neural net approach, offers good performance but does not identify the reason for an attack match. In addition, expressing some forms of behavior and expressing simultaneous or combined attacks is not possible in this system.

5) Burglar alarms: A technique was proposed in [24] to reduce the risk of false positives and allow identification of novel attacks by focusing on identifying events that should never occur. Applying dedicated monitors to search for instances of such policy violations effectively places traps for attackers. An example would be monitoring for any attempt to connect outward from an HTTP server (where this is contrary to the site policy), indicating a user error or an attacker attempting to use the server as a relay point.

C. Stateful protocol analysis based methodology

The stateful protocol analysis methodology works by comparing established profiles of how protocols should behave against the observed behavior. The established protocol profiles are designed and established by vendors. Unlike the signature based methodology, which only compares observed behavior against a list, stateful protocol analysis has a deep understanding of how the protocols and applications should interact/work. This deep understanding/analysis places a very high overhead on the systems [4].

Fig. 3. Stateful protocol analysis based methodology architecture.

The general architecture of stateful protocol analysis is shown in Figure 3. This architecture is identical to that of the signature based methodology with one exception: instead of the signature database, stateful protocol analysis has a database of acceptable protocol behavior [5]. Although stateful protocol analysis has a deep understanding of the monitored protocols, it can be easily evaded by attacks that follow and stay within the acceptable behavior of protocols. Stateful protocol analysis methodologies and techniques have slowly been adapted and integrated into other methodologies over the past decade. This has led to the decline of IDPS that utilize just the stateful protocol analysis methodology. The majority of the research on IDPS methodologies mainly concentrates on anomaly, signature, and hybrid methodologies, which further reduces the viability of stateful protocol analysis as a standalone IDPS methodology.

D. Hybrid-based methodology

The hybrid based methodology works by combining two or more of the other methodologies. The result is a better methodology that takes advantage of the strengths of the combined methodologies. The hybrid system detected more intrusions than the regular one. A general overview of a hybrid based methodology is shown in Figure 4, where three other methodologies are combined. The monitored environment is analyzed by the first methodology and passed to the next and then to the last one. This produces a better system [5].

Fig. 4. Hybrid based methodology architecture.

III. COMPARISON
TABLE I. COMPARISON BETWEEN IDPS METHODOLOGIES

Anomaly-based
  Pros: Effective in detecting new and unforeseen vulnerabilities. Less dependent on the OS. Facilitates detection of privilege abuse.
  Cons: Weak profile accuracy due to observed events constantly changing. Unavailable during rebuilding of behavior profiles. Difficult to trigger alerts at the right time.

Signature-based
  Pros: Simplest and effective method to detect known attacks. Detailed contextual analysis.
  Cons: Ineffective in detecting unknown attacks, evasion attacks, and variants of known attacks. Little understanding of states and protocols. Hard to keep signatures/patterns up to date. Time-consuming to maintain the knowledge.

Stateful protocol analysis based
  Pros: Knows and traces the protocol states. Distinguishes unexpected sequences of commands.
  Cons: Resource-consuming protocol state tracing and examination. Unable to inspect attacks looking like benign protocol behaviors. Might be incompatible with dedicated OSs or APs.

All the methodologies use the same general model, and the differences among them lie mainly in how they process the information they gather from the monitored environment to determine if a violation of the set policy has occurred. Most current intrusion detection and prevention systems use the hybrid methodology, which is the combination of other methodologies, to offer better detection and prevention capabilities. Table I shows the pros and cons of the IDPS methodologies.

IV. CONCLUSION

The cost of criminal activities on cyber infrastructures is very high, and intrusion detection and prevention systems continue to be an active research field. We have presented the main methodologies used in IDPS and offered a clear explanation of each methodology. Most current intrusion detection and prevention systems use the hybrid methodology, which is the combination of other methodologies, to offer better detection and prevention capabilities. In addition we have provided a comparison between these methodologies to easily grasp the overall picture of IDPS. Each methodology has pros and cons. Therefore, having a comprehensive view of IDPSs and application requirements is indispensable before practical usage.

REFERENCES
[1] M. de Kunder, “The size of the world wide web,” http://www.worldwidewebsize.com/, accessed: 2014-01-07.
[2] N. Kshetri, The global cybercrime industry: economic, institutional and strategic perspectives. Springer, 2010.
[3] F. Valeur and G. Vigna, Intrusion detection and correlation: challenges and solutions. Springer, 2005, vol. 14.
[4] K. Scarfone and P. Mell, “Guide to intrusion detection and prevention systems (idps),” NIST Special Publication, vol. 800, no. 2007, p. 94, 2007.
[5] D. Mudzingwa and R. Agrawal, “A study of methodologies used in intrusion detection and prevention systems (idps),” in Southeastcon, 2012 Proceedings of IEEE. IEEE, 2012, pp. 1–6.
[6] P. Garcia-Teodoro, J. Diaz-Verdejo, G. Maciá-Fernández, and E. Vázquez, “Anomaly-based network intrusion detection: Techniques, systems and challenges,” Computers & Security, vol. 28, no. 1, pp. 18–28, 2009.
[7] D. E. Denning and P. G. Neumann, “Requirements and model for IDES, a real-time intrusion detection expert system,” Document A005, SRI International, vol. 333.
[8] N. Ye, S. M. Emran, Q. Chen, and S. Vilbert, “Multivariate statistical analysis of audit trails for host-based intrusion detection,” Computers, IEEE Transactions on, vol. 51, no. 7, pp. 810–820, 2002.
[9] “Detecting hackers,” http://www.ensc.sfu.ca/people/grad/pwangf/IPSW report.pdf.
[10] D. Anderson, T. F. Lunt, H. Javitz, A. Tamaru, A. Valdes et al., Detecting unusual program behavior using the statistical component of the Next-generation Intrusion Detection Expert System (NIDES). SRI International, Computer Science Laboratory, 1995.
[11] C.-F. Tsai, Y.-F. Hsu, C.-Y. Lin, and W.-Y. Lin, “Intrusion detection by machine learning: A review,” Expert Systems with Applications, vol. 36, no. 10, pp. 11994–12000, 2009.
[12] J. M. Estevez-Tapiador, P. Garcia-Teodoro, and J. E. Diaz-Verdejo, “Stochastic protocol modeling for anomaly based network intrusion detection,” in Information Assurance, 2003. IWIAS 2003. Proceedings. First IEEE International Workshop on. IEEE, 2003, pp. 3–12.
[13] R. Sekar, A. Gupta, J. Frullo, T. Shanbhag, A. Tiwari, H. Yang, and S. Zhou, “Specification-based anomaly detection: a new approach for detecting network intrusions,” in Proceedings of the 9th ACM conference on Computer and communications security. ACM, 2002, pp. 265–274.
[14] J. M. Estevez-Tapiador, P. García-Teodoro, and J. E. Díaz-Verdejo, “Detection of web-based attacks through markovian protocol parsing,” in Computers and Communications, 2005. ISCC 2005. Proceedings. 10th IEEE Symposium on. IEEE, 2005, pp. 457–462.
[15] M. Pradhan, S. K. Pradhan, and S. K. Sahu, “A survey on detection methods in intrusion detection system,” methods, vol. 3, no. 2, 2012.
[16] A. Valdes and K. Skinner, “Probabilistic alert correlation,” in Recent Advances in Intrusion Detection. Springer, 2001, pp. 54–68.
[17] I. Mukhopadhyay, M. Chakraborty, and S. Chakrabarti, “A comparative study of related technologies of intrusion detection & prevention systems,” J. Information Security, vol. 2, no. 1, pp. 28–38, 2011.
[18] A. Mounji, B. Le Charlier, D. Zampunieris, and N. Habra, “Distributed audit trail analysis,” in Network and Distributed System Security, 1995. Proceedings of the Symposium on. IEEE, 1995, pp. 102–112.
[19] U. Lindqvist and P. A. Porras, “Detecting computer and network misuse through the production-based expert system toolset (p-best),” in Security and Privacy, 1999. Proceedings of the 1999 IEEE Symposium on. IEEE, 1999, pp. 146–161.
[20] “Focus-ids mailing,” http://www.securityfocus.com/focus/ids/list/focus idsfaq.html, 2001.
[21] S. Kumar and E. H. Spafford, “A pattern matching model for misuse intrusion detection,” 1994.
[22] S. Kumar, “Classification and detection of computer intrusions,” Ph.D. dissertation, Purdue University, 1995.
[23] M. Ludovic, “Gassata, a genetic algorithm as an alternative tool for security audit trails analysis,” in Proceedings of the First International Workshop on the Recent Advances in Intrusion Detection, Louvain-la-Neuve, Belgium, 1998.
[24] M. Ranum, “Intrusion detection: Challenges and myths,” Network Flight Recorder, Inc, 1998.
14 Studentská konference Zvůle 2014 Simulation of Triple Play Services in NG-PON2 Networks Tomas Horvath Department of Telecommunications Faculty of Electrical Engineering and Communication Brno University of Technology Email: horvath@phd.feec.vutbr.cz Petr Munster Department of Telecommunications Faculty of Electrical Engineering and Communication Brno University of Technology Email: munster@feec.vutbr.cz Radim Sifta Department of Telecommunications Faculty of Electrical Engineering and Communication Brno University of Technology Email: sifta@feec.vutbr.cz Abstract—The next generation of passive optical networks promises to increase the bandwidth from 10 to 160 Gbit/s. This solution is called by ITU-T as ITU-T G.989 standard. The subscribers want to have realized all services (data, video, and voice) via only one connection. The current papers show only the measurement or theory concept of new standards. This article describes simulation of NG-PON2 networks with all services (data, video, and voice). The results of simulations show the maximum values of critical parameters for passive optical network (BER and attenuation). This paper also describes two broadcasting video ways: IP transmission and QAM modulation on separately wavelength. I. INTRODUCTION The new phenomenon in access networks is to substitute current copper cables over the optical fibres. On the other hand, when an ISP (Internet Services Provider) constructs new a LAN (Local Area Network) in some part of the city, ISP has two possibilities: greenfield (ISP does not have any previous network with older standard of passive optical networks) or brownfield (ISP has for example Gigabit passive optical networks GPON). Nowadays, passive optical networks have been widely expanded in access networks. NG-PON2 (Next Generation Passive Optical Network) technologies are not yet mature in the world. This standard was approved in March 2013 as 40-Gigabit-capable passive optical networks (NG-PON2): General requirements. At first, the recommendation defines only basic parameters, such as technology and basic coexistence scheme (see Fig. 1 [1]). However, the second document from ITU-T G.989 rows defines the following: attenuation classes, wavelength range, line codes and so on. Last above mentioned standard will be released in 2014. NG-PON2 is a standard that offers wide bandwidth for customers. The first scenario in [2] defines 10 Gbit/s per one unit in the distribution network. The total bandwidth in the first scenario is 40 Gbit/s (with using 4 pairs of wavelength). These sources are separated via WDM (Wavelength Division Multiplex) with a little channel spacing. The second scenario describes up to 160 Gbit/s speed in downstream from ISP to customer, in other words, from OLT (Optical Line Termination) to ONU (Optical Network Unit). The main contribution of this paper is the determination of the maximum bit error rate value in the transfer of 3 services Fig. 1. Simple coexistence scheme [1]. in the NG-PON2 model. This is followed by the research into effects of increased power of the laser on error rate, split ratio and the determination of critical parameters of all transmitted services. In recent years, many works related to describe NG-PON2 technology have emerged. The limitation of most of them is given by using only one line code, changing the power of the laser and not using the biggest split ratio (1:256) [3], [4], [5]. Most of the recent works deal with simulation or measurement only IP services. 
None of them use the QAM (Quadrature Amplitude Modulation) modulation for the transfer of video in analog format [6]. On the other hand, publication [7] used QAM, but not for transfer of a video signal. They increase downstream speed via OOK (On Off Keying), DSP (Digital Signal Processing) and 8QAM. At last, the first measurement of NG-PON2 was presented in [8]. The measurement was designed on a oexistence scheme with GPON, XG-PON (Next Generation PON) and TWDMPON (Time Wavelength Division Multiplex). This article [8] has many advantages, the first measurement on a real network, big wavelength range and analysis of BER (Bit Error Rate). On the other hand, they used EDFA (Erbium Doped Fibre Amplifier), which is not presented in [2] for short distance (up to 60 km) and they do not use RF-video overlay technique. In this study, data services were transferred via RZ/ /NRZ/Miller line code over distribution network with different split ratios, which is more frequent than in other works. 15 Studentská konference Zvůle 2014 II. NG-PON2 ARCHITECTURE AND SIMULATION MODEL At first, ITU-T should solve one task. What will be the main technology for the new standards of passive optical networks NG-PON2? There are many possibilities: WDM-PON (Wavelength Division Multiplexing PON) [9], OFDM-PON (Orthogonal Frequency Division Multiplexing PON) [10], UDWDM-PON (Ultra Dense WDM PON) and TWDM-PON (Time and Wavelength Division Multiplexing PON) [8]. In the final model TWDM-PON technology was selected because this technology does not require many changes on the subscriber side. On the contrary, one change is the most important. NGPON2 requires tunable filters in ONU (Optical Network Unit) units. Probably ONU units own services provider and they are only rented for customers with their valid period of contract. After the main technology was selected, ITU-T defined the using of wavelength. The first idea was to use free wavelengths of XG-PON standards. On the whole, this idea was not supported and study group 15 had to find new wavelengths. Over time wavelengths were selected from 1596 to 1603 nm for downstream and from 1524 to 1544 nm for upstream. As was mentioned, NG-PON2 uses the TWDM technology. In general, ITU-T defined the range of wavelengths and in [2] accurate step for wavelength multiplex, as shown in Table I. TABLE I. USED WAVELENGHTS FOR DOWNSTREAM. Downstream Channel f [THz] λ [nm] 1 187.8 1596.3389 2 187.7 1597.1894 3 187.6 1598.0408 4 187.5 1598.8931 5 187.4 1599.7463 6 187.3 1600.6004 7 187.2 1601.4554 8 187.1 1602.3113 The last important thing was to determinate attenuation classes. For NG-PON2 four attenuation classes were defined: 14-29 dB for class N1, 16-31 dB for class N2, 18-33 dB for class E1 and 20-35 dB for class E2. In our work all simulations were created with N2 class due to use of the optical splitter with 1:256 split ratios. Complete simulation model is described in the simulation chapter. III. OPTSIM OptSim is a special software for simulations of passive optical networks. It is developed by Synopsys company. This application has many advantages of simulation: CWDM (Coarse Wavelength Division Multiplexing), DWDM (Dense Wavelength Division Multiplexing), OTDM (Optical Time Domain Multiplexing) networks, FTTx (Fiber To The ...) methods of the termination optical fibres, analog modulations QAM (Quadrature Amplitude Modulation), QPSK (Quadrature Phase Shift Keying) etc. 
On the other hand, there are also disadvantages; the main one is the missing simulation of the upper layers, for example the data link layer for simulating GEM (Gigabit Encapsulation Method), PLOAM (Physical Layer Operations and Maintenance) messages, etc.

IV. SIMULATION AND RESULTS

The main goal of our work was to design a topology with TWDM technology completely in accordance with ITU-T G.989.2. The final model is shown in Fig. 2. The most important task was to deal with the tunable filters in the ONU units. OptSim does not provide these filters, so in the final model a delay filter was placed in each ONU unit; time multiplexing was realized by these delay filters. In the ODN (Optical Distribution Network), signals with different wavelengths are transmitted over one fiber, because NG-PON2 uses the P2MP (Point to Multi Point) technology.

Fig. 2. Proposed simulation model of NG-PON2 with RF overlay video.

On the left side of Fig. 2 there are the OLT units, which are on the service provider side; the last of them is the video source represented by 64QAM modulation. Delay blocks help to achieve time division multiplexing. In the middle of Fig. 2 there are the WDM combiner, the optical fiber, and the splitter with a 1:256 split ratio. This value of splitter attenuation has not been used before in simulations of NG-PON2 networks. The parameters set for one OLT unit are shown in Table II; the parameters of the other OLT units are identical, only the wavelength differs.

TABLE II. OLT UNIT PARAMETERS.
Data source: Bit rate 10 Gbit/s, Sequence: random
Modulator driver: REC NRZ, Low = 0 V, High = 5 V
CW laser: CW power 6 dBm, f = 187.1 THz (λ ≈ 1602.3 nm)

The aim of this work was also to use the biggest split ratio in the simulation model and to compare the line codes in access optical networks. The splitter attenuation values were selected from 3 to 25 dB (these values apply to the optical splitter only). The total attenuation in the proposed model consists of 3 dB on the WDM combiner, 0.2 dB per 1 km of fiber, i.e. 0.2 × 20 = 4 dB, and the splitter value; the maximum attenuation in the model is therefore 32 dB (a short numeric sketch of this budget is given below). For our work we needed to choose the currently used line codes. The most used line codes are NRZ (Non Return to Zero) with its variations, RZ (Return to Zero) with its variations, and, especially for NG-PON2 networks, the Miller code. Two of the three above mentioned (NRZ/RZ) are available in the OptSim application. The Miller code is not available and needs to be implemented externally in Matlab. The complete implementation procedure is beyond the scope of this article; for more information, see [11].

16 Studentská konference Zvůle 2014

A. Simulation discussion

As was mentioned, only the downstream direction (OLT → ONU) was simulated in our work. Simulation of the upstream is not essential here, because the aim is to show that current networks are prepared for the 1:256 split ratio with the border value of attenuation in access networks. The main comparison metric was BER; with this parameter we can decide whether the proposed system is able to work or not. For an NG-PON2 network the critical BER value is 1·10−10, and we compared our values with this border value. The final graph of the dependence of BER on attenuation is shown in Fig. 3.

Fig. 3. Dependence of bit error rate on the value of splitter attenuation.

The values were plotted on a logarithmic scale to make the graph clearer. The resulting values for splitter attenuation in the range from 3 to 12 dB are linearly dependent, because OptSim reports a BER of 1·10−40 as zero BER; it is not possible to obtain a better value in OptSim.
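To make the attenuation budget described above easy to check, the following short Python sketch recomputes the total downstream loss for each splitter setting and compares it with the class N2 range quoted in Section II. The numeric values (3 dB WDM combiner, 0.2 dB/km over 20 km, splitter swept from 3 to 25 dB, N2 range 16-31 dB) come from the text; the function and variable names are illustrative only.

```python
# Illustrative recomputation of the downstream attenuation budget described above.
WDM_COMBINER_DB = 3.0
FIBRE_DB_PER_KM = 0.2
FIBRE_LENGTH_KM = 20.0
N2_MIN_DB, N2_MAX_DB = 16.0, 31.0   # attenuation class N2 range from Section II

def total_attenuation(splitter_db: float) -> float:
    """Total downstream loss for one splitter setting (dB)."""
    return WDM_COMBINER_DB + FIBRE_DB_PER_KM * FIBRE_LENGTH_KM + splitter_db

for splitter_db in range(3, 26):
    loss = total_attenuation(splitter_db)
    within_n2 = N2_MIN_DB <= loss <= N2_MAX_DB
    print(f"splitter {splitter_db:2d} dB -> total {loss:4.1f} dB, within N2: {within_n2}")

# The worst case (25 dB splitter) gives 3 + 4 + 25 = 32 dB,
# which is the maximum model attenuation quoted in the text.
```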
From the results can be selected last values of BER before border value: NRZ rectangular 1.42·10−11 (23 dB of splitter values), NRZ raised cosine 1.42·10−11 (23 dB of splitter values), RZ raised cosine 4.21·10−10 (20 dB of splitter values), RZ rectangular 2.02·10−11 (22 dB of splitter values), and Millers code has 2.27·10−14 for the last simulated value of attenuation splitter. For Millers code represented with blue line we can see the better results in final graph. That is the reason why this line code is recommended for NG-PON2 networks because it eliminates DC component and allows using Millers code in networks with higher transmission speed. The simulations were not completed without video transfer. In the first simulation only IP (Internet Protocol) based services were simulated. Services were transferred from one block with guaranteed bandwidth, voice has the biggest priority, IPTV (Internet Protocol Television) has bigger priority, and the last one data did not have priority. Order of packets compared with voice is not important for data. For the second simulation video signal via 64QAM modulation was included. In the optical networks BER is not important for final signal, but eye diagram which is shown in Fig. 4. As you can see from Fig. 4, eight states are presented. The final eye diagram contains 8 states, which means that all states are detected correctly. Fig. 4. Eye diagram for video transmission on ONU side. V. CONCLUSION The main contribution of this paper is design of NG-PON2 network with TWDM technology. In this topology the biggest split ratio 1:256 with 25 dB of attenuation was proposed. With these values of attenuation 5 options of line codes were tested: NRZ rectangular, NRZ cosine, RZ rectangular, RZ cosine, and Millers code, which is recommended for NG-PON2 network by [2]. In first scenario simulation model contained only OLT units with named line codes and Triple Play services based on IP protocol. The second scenario added video signal transferred via 64QAM modulation. The implementation of FEC (Forward Error Correction) algorithm and using optical amplifiers are seen as further improvements. ACKNOWLEDGMENT This research work is funded by projects SIX CZ.1.05/ /2.1.00/03.0072. REFERENCES [1] ITU-T, “40-Gigabit-capable passive optical networks (NG-PON2): General requirements,” International Telecommunication Union, Tech. Rep. G.989.1, 2013. [Online]. Available: https://www.itu.int/rec/T- REC-G.989.1-201303-I/en [2] ——, “40-Gigabit-capable passive optical networks 2 (NG-PON2): Physical media dependent (PMD) layer specification,” International Telecommunication Union, Tech. Rep. G.989.2, 2013. [Online]. Available: http://www.itu.int/md/T13-SG15-140324-TD-PLEN-0170 [3] Z. Li, L. Yi, and W. Hu, “Key technologies and system proposals of TWDM-PON,” Frontiers of Optoelectronics, vol. 6, no. 1, pp. 46–56, 2013. [Online]. Available: http://dx.doi.org/10.1007/s12200-012-0305-7 17 Studentská konference Zvůle 2014 [4] L. Yi, Z. Li, M. Bi, W. Wei, and W. Hu, “Symmetric 40-Gb/s TWDMPON With 39-dB Power Budget,” Photonics Technology Letters, IEEE, vol. 25, no. 7, pp. 644–647, 2013. [5] M. S. Salleh, Z. A. Manaf, Z. A. Kadir, Z. M. Yusof, A. S. M. Supa’at, S. M. Idrus, and K. Khairi, “Simulation on physical performance of TWDM PON system architecture using multicasting XGM,” in Photonics (ICP), 2012 IEEE 3rd International Conference on, 2012, pp. 46–50. [6] J. Kim and C. Park, “Optical design and analysis of {CWDM} upstream {TWDM} {PON} for NG-PON2,” Optical Fiber Technology, vol. 
19, no. 3, pp. 250–258, 2013. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S1068520013000242 [7] N. Iiyama, J. Kani, J. Terada, and N. Yoshimoto, “Two-phased capacity upgrade method for NG-PON2 with hierarchical star 8-QAM and square 16-QAM,” in Optical Fiber Communication Conference and Exposition and the National Fiber Optic Engineers Conference (OFC/NFOEC), 2013, 2013, pp. 1–3. [8] Y. Luo, X. Zhou, F. Effenberger, X. Yan, G. Peng, Y. Qian, and Y. Ma, “Time- and Wavelength-Division Multiplexed Passive Optical Network (TWDM-PON) for Next-Generation PON Stage 2 (NG-PON2),” Lightwave Technology, Journal of, vol. 31, no. 4, pp. 587–593, 2013. [9] J. Yu, Z. Jia, P. N. Ji, and T. Wang, “40-Gb/s Wavelength-DivisionMultiplexing Passive Optical Network with Centralized Lightwave Source,” in Optical Fiber communication/National Fiber Optic Engineers Conference, 2008. OFC/NFOEC 2008. Conference on, 2008, pp. 1–3. [10] B. Liu, L. Zhang, X. Xin, and J. Yu, “Constellation-masked secure communication technique for OFDM-PON,” Optics express, vol. 20, no. 22, pp. 25 161–25 168, 2012. [11] Horvath Tomas Fujdiak Radek and M. Jiri, “Using Miller’s Code in NG-PON2 Networks,” Elektrorevue, vol. 5, no. 2, 2014. 18 Studentská konference Zvůle 2014 Subjective Quality Assessment for HEVC Eva Klejmová Department of Radio Electronics Faculty of Electrical Engineering and Communication, Brno University of Technology Brno, Czech Republic xklejm00@stud.feec.vutbr.cz Martin Slanina Department of Radio Electronics Faculty of Electrical Engineering and Communication, Brno University of Technology Brno, Czech Republic slaninam@feec.vutbr.cz Abstract— This paper deals with standard objective and subjective video quality assessments and with analysis of their applicability to HEVC. The main focus of this work is a suggestion of method for objective video quality assessment, its application and associated data collection. Final data are statistically analyzed and their correlation with objective tests is discussed. Keywords— video quality; objective video quality; subjective video quality; H.265; HEVC I. INTRODUCTION The increasing video quality requirements, whilst also maintaining a low bit rate, necessarily lead to the development of new methods and standards for video compression. The most recent standard for video coding is with the High Efficiency Video Coding (HEVC) by ITU-T Video Coding Experts Group. With the development of new methods there are also changes in the structures of the encoded video, which manifest particularly with high levels of compression. Due to these changes some of the established metrics for objective quality evaluation may however show poor correspondence with the subjective Mean Opinion Score (MOS) obtained from human observers. In this paper we perform a subjective quality assessment and give experimental results for three objective metrics compared with subjective results. II. SUBJECTIVE TEST CONDUCTION Method in which the information is obtained from a large group of observers in normal (not laboratory) conditions was used for data collection. The motivation for this was that the video content is nowadays played in various environments. We have adjusted the test methodology to that. One of the main reasons for choosing this method was the ease of obtaining a diverse and thus statistically significant panel of evaluators. 
With conventional methods there is the need to ensure the presence of an observer or a group of observers in a laboratory or in another suitable environment. Due to the time requirements of conventional methods, it is difficult to find evaluators from certain groups (e.g. based on age) willing to participate in the session. By choosing this method, the problem can be mitigated, because the evaluation itself can be moved to an environment that is accessible for the observers. The evaluation is then carried out on portable devices, one evaluator at a time. The experimenter also performs the configuration of the device and knows its parameters and behavior. In this method, the experimenter can also influence the evaluation environment.

Two portable devices were used for the evaluation. The first one was the Samsung S8530 Wave II cell phone; for the purposes of this evaluation the original Bada OS v1.2 system was replaced by Android Jellybean 4.3.1, thus ensuring compatibility with the applications used. The second device was the Asus Padfone with the Asus Padfone Station docking station. Selected parameters of both devices are shown in Table I.

TABLE I. PARAMETERS OF PORTABLE DEVICES
Parameter | Asus Padfone Station | Samsung S8530 Wave II
OS | Android 4.1.1 | Android Jellybean 4.3.1
CPU | Qualcomm Snapdragon S4, 1.5 GHz | Cortex-A8, 1 GHz
GPU | Adreno 225 | PowerVR SGX540
Memory | 1 GB LPDDR2 | 512 MB
Display | 10.1" Super AMOLED, 1280 × 800 | 3.7" Super Clear LCD, 800 × 480

Given that neither of the two devices allowed continuous playback of H.265 video sequences or of the uncompressed YUV format, it was necessary to re-encode the sequences to the H.264 format. Selected sequences from the created H.265 database were converted to H.264 using FFmpeg. The H.264 compression was almost lossless to avoid additional distortion; the PSNR values of the H.264 sequences with respect to their H.265 counterparts were approximately 60 dB.

Taking into account the chosen method of data collection, the subjective test method ACR-HR was selected. The main criteria for this selection were simplicity and a low time requirement, which are best met by the ACR method. Given that reference sequences were also available, the ACR-HR variant was chosen. This variant contains a hidden reference, and the evaluator has no information about its presence. The time structure of the test itself is shown in Fig. 1. The set of selected sequences was played one at a time in a pseudorandom order, unique for each evaluator. A five-level scale for rating the overall quality was used [1]. At the beginning of the test, one training sequence with a duration of 10 seconds was played. The actual beginning of the playback of each sequence was determined by the evaluator.

Studentská konference Zvůle 2014 19

Fig. 1. Time structure of ACR test [1].

III. TESTING DATABASE

To cover a wide range of different scenarios, ten source video sequences were selected. All of them were stored as raw video, progressively scanned, with YUV 4:2:0 color sampling and 8 bits per sample. The selected frame rate was 25 fps and the duration of each sequence was 10 seconds; this period was chosen to match the value commonly used in subjective tests. The source sequences were selected so as to contain almost still images as well as images with fast motion. The characteristics of the dominant textures in the image (e.g. their topography, their change over time and the resulting demands on the encoder) and the overall representation of colors in the scenes were also taken into account.
Fig. 2 shows the spatial information (SI) and temporal information (TI) indexes of the luminance component of each video sequence (a short sketch of how these indexes are typically computed is given at the end of this section). It can be observed that Park_Joy and Crowd_Run have large TI and SI values, while Sunflower and 2001 show small TI and SI indexes. The sequences Skyfall_1, Skyfall_2, Knight, Bolt and Tractor have a large TI value but a relatively small SI value. The remaining sequence, Ducks_take_off, shows approximately the same SI and TI value. The video sequences were compressed with HEVC using HM-12.0.

Fig. 2. Spatial information (SI) versus temporal information (TI) indexes of the selected contents.

For the purposes of the subjective test, two testing sets were created. This was due to the display resolutions of the portable devices used. The first set contains sequences with SD resolution (480p) and was used on both devices, the Samsung and the Asus. The second set includes sequences with HD resolution (720p) and was used only on the Asus Padfone Station. The resolution varies slightly depending on the source sequence; Table II shows the specific values. Each created set contains 5 test video sequences with various quality ranges for every source video sequence, i.e. 50 sequences in total. All ten source sequences were also used as hidden reference sequences. Together with one training sequence, one set therefore contains 61 video sequences (50 testing + 10 references + 1 training sequence).

TABLE II. RESOLUTION OF THE SELECTED VIDEO SEQUENCES
Name | 480p set | 720p set
Sunflower | 848×480 | 1280×720
Tractor | 848×480 | 1280×720
Park_Joy | 848×480 | 1280×720
Ducks_take_off | 848×480 | 1280×720
Crowd_Run | 848×480 | 1280×720
Skyfall_1 | 1152×480 | 1728×720
Skyfall_2 | 1152×480 | 1728×720
Knight_and_day | 1152×480 | 1728×720
2001 | 1056×480 | 1600×720
Bolt | 848×480 | 1280×720

With a duration of 10 s per sequence, the total time required for the actual playback is 610 s. Assuming the time required to evaluate a single sequence is in the range of five to ten seconds, we obtain an evaluation time in the range of 305-610 s; altogether, the overall duration of the test is in the range of 915 to 1220 seconds per respondent. This corresponds to approximately 15 to 20 minutes, which should be acceptable for most evaluators and should not cause problems with maintaining attention.

The actual assembly of the testing set was carried out in several steps. Initially, we created a database of pre-selected sequences based on their PSNR values. These sequences were screened by a group of three evaluators who had previous experience with similar assessments. MOS values were obtained from these observers along with verbal commentary on each of the screened sequences. Based on this information, five sequences from each source video were selected so that the MOS values were represented on the scale from one to five. The test set obtained with this selection was then screened by a group of five evaluators who had no prior experience with video quality evaluation, and the MOS values were again calculated for each sequence. The representation of the MOS values was found unsuitable for several sequences, and these sequences were replaced with ones of slightly different quality. The same assembly procedure was used for the sequences in 480p resolution as well as for the sequences in 720p resolution. The actual subjective test was then applied to the testing set obtained with this method.

Studentská konference Zvůle 2014 20
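As mentioned above, the SI and TI indexes characterize the spatial and temporal complexity of the test contents. The sketch below shows how they are typically computed, following the common ITU-T P.910 definitions (maximum over time of the spatial standard deviation of the Sobel-filtered frame for SI, and of the frame-difference standard deviation for TI); the implementation details and the random stand-in data are illustrative only, not the authors' exact procedure.

```python
import numpy as np
from scipy import ndimage

def si_ti(frames: np.ndarray):
    """Spatial and temporal information indexes of a luminance sequence.

    frames: array of shape (n_frames, height, width), luminance only.
    SI = max over time of std of the Sobel-filtered frame,
    TI = max over time of std of the difference between consecutive frames.
    """
    frames = frames.astype(float)
    sobel_mag = [np.hypot(ndimage.sobel(f, axis=0), ndimage.sobel(f, axis=1))
                 for f in frames]
    si = max(m.std() for m in sobel_mag)
    diffs = np.diff(frames, axis=0)
    ti = max(d.std() for d in diffs)
    return si, ti

# illustrative call on random data standing in for decoded luminance frames
frames = np.random.default_rng(1).integers(0, 256, size=(25, 480, 848))
print(si_ti(frames))
```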
IV. OBJECTIVE TEST CONDUCTION

Selected objective video quality metrics were applied to the coded sequences. Given that reference sequences were available, two full-reference metrics were used, namely PSNR and SSIM. The reduced-reference method VQM was also used.

V. RESULT POSTPROCESSING

A. Video quality evaluation

To detect and remove subjects whose scores appear to deviate strongly from the other scores in a session, outlier detection was performed. The results of these evaluators are not necessarily meaningful and may negatively affect the overall result of the test. The score s_ij of sequence i from subject j was considered valid based on the following formula [2]:

q1 − 1.5(q3 − q1) ≤ s_ij ≤ q3 + 1.5(q3 − q1),    (1)

where q1 and q3 are the 25th and 75th percentiles of the score distribution. This range corresponds approximately to ±2.7 standard deviations, or 99.3% coverage of normally distributed data. A subject was considered an outlier if 20% or more of his scores were invalid based on (1). In this particular test, one subject was considered an outlier.

Given that scores for the reference sequences were available, the scores for the testing sequences were converted to differential values using formula (2); in this way the influence of the reference video was partially eliminated:

DV(PVS) = V(PVS) − V(REF) + 5,    (2)

where V(REF) is the score of the reference sequence, V(PVS) is the score of the test sequence and DV(PVS) is the resulting differential value for the sequence. After applying this formula, DV = 5 represents excellent quality and DV = 1 poor quality. To avoid the case where the DV value is greater than five (the test sequence would appear better than the reference), the following formula is used [1]:

crushed_DV = 7·DV / (2 + DV)   if DV > 5,    (3)

where DV is the differential score of the test sequence and crushed_DV is its reduced value. From these values the differential MOS (DMOS) was calculated according to the following formula:

DMOS_i = (1/N) Σ_{j=1}^{N} crushed_DV_ij,    (4)

where N is the number of valid evaluators and crushed_DV_ij is the reduced differential value of sequence i from subject j.

B. Statistical tests

For the evaluation of the correspondence between the subjective and objective methods of video quality assessment, we use Spearman's and Pearson's correlation coefficients [3]. The Pearson Linear Correlation Coefficient (PLCC) ρ_XY between two samples X_i, Y_i, i = 1, ..., n, can be defined as

ρ_XY = cov(X, Y) / sqrt(var(X)·var(Y)),    (5)

where var(X) > 0 and var(Y) > 0. The Spearman Rank Correlation Coefficient (SROCC) is defined similarly, as the Pearson correlation coefficient between the ranked variables. For the samples X_i, Y_i, i = 1, ..., n, we can calculate [3]

ρ = 1 − 6 Σ_{i=1}^{n} d_i² / (n(n² − 1)),    (6)

where d_i = X_i − Y_i is the difference between the ranks. We also use a confidence interval for the evaluation of the MOS values in the form

CI_i = t(1 − α/2, N − 1) · σ_i / √N,    (7)

where N is the number of respondents, t(1 − α/2, N − 1) is the quantile of Student's distribution, α = 5% is the risk level and σ_i is the standard deviation of the DMOS indicator defined above in (4).

VI. EXPERIMENTAL RESULTS

A total of 69 people took part in the test campaign. The age of the subjects ranged from 15 to 75 years, with a median of 33 years. Males and females were approximately equally represented, and 10% of the subjects considered themselves video quality experts. The DMOS scores obtained from the subjective tests were compared with the objective metric values.
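Before turning to the correlation results, the following Python sketch illustrates the post-processing pipeline of Section V on synthetic data: outlier screening according to (1), conversion to differential values (2), crushing (3), DMOS averaging (4) and the PLCC/SROCC computation of (5)-(6). The array shapes, random scores and the stand-in objective metric are illustrative assumptions, not the actual test data.

```python
import numpy as np
from scipy import stats

def valid_mask(scores):
    """Eq. (1): Tukey fences per sequence; scores has shape (sequences, subjects)."""
    q1 = np.percentile(scores, 25, axis=1, keepdims=True)
    q3 = np.percentile(scores, 75, axis=1, keepdims=True)
    iqr = q3 - q1
    return (scores >= q1 - 1.5 * iqr) & (scores <= q3 + 1.5 * iqr)

def dmos(test_scores, ref_scores):
    """Eqs. (2)-(4): differential values, crushing, per-sequence DMOS."""
    dv = test_scores - ref_scores[:, None] + 5.0          # eq. (2)
    dv = np.where(dv > 5.0, 7.0 * dv / (2.0 + dv), dv)    # eq. (3)
    return dv.mean(axis=1)                                # eq. (4)

# --- illustrative data: 6 test sequences rated by 8 subjects (scores 1..5) ---
rng = np.random.default_rng(0)
test = rng.integers(1, 6, size=(6, 8)).astype(float)
ref = np.full(6, 5.0)                        # hidden-reference scores

# drop subjects with 20 % or more invalid scores, as described in the text
keep = valid_mask(test).mean(axis=0) > 0.8
dmos_scores = dmos(test[:, keep], ref)

# eqs. (5)-(6): correlation of DMOS with a hypothetical objective metric
objective = rng.uniform(0.1, 0.7, size=6)    # e.g. VQM-like values
plcc, _ = stats.pearsonr(dmos_scores, objective)
srocc, _ = stats.spearmanr(dmos_scores, objective)
print(f"PLCC = {plcc:.2f}, SROCC = {srocc:.2f}")
```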
For this comparison, the Spearman's Rank Correlation Coefficient (SROCC) and the Pearson's Linear Correlation Coefficient (PLCC) were used. With an increasing DMOS score the quality of the video sequence also increases, whereas with an increasing VQM value the quality decreases. Due to this fact, the correlation coefficients for VQM are negative; for the purpose of this work and for easier comparison, absolute values are shown.

Studentská konference Zvůle 2014 21

The results are shown in Table III. Higher correlation coefficients were obtained for the SD resolution. Furthermore, it can be seen that the correlation is better for devices with a smaller display. When comparing the selected objective metrics, VQM seems to be the most appropriate for H.265 sequences; its correlation with the DMOS scores reaches the value 0.8. The PSNR and SSIM metrics seem to be weaker. The correlation coefficients for these metrics are in the range from 0.6 to 0.7, with the PSNR metric showing slightly lower values. Plots of subjective versus objective results are presented in Figs. 3, 4 and 5 for each content separately. It can be noticed that the monotonicity of the values is greater for VQM than for PSNR and SSIM, which indicates that VQM has better accuracy than PSNR and SSIM. The plots also show that the spatial and temporal information (i.e. the video content) influence the overall score of both the subjective and the objective methods.

Fig. 3. PSNR versus DMOS.
Fig. 4. SSIM versus DMOS.
Fig. 5. VQM versus DMOS.

TABLE III. SROCC AND PLCC VALUES
 | PSNR | SSIM | VQM
SROCC, 720p | 0.58 | 0.56 | 0.79
SROCC, 480p | 0.72 | 0.77 | 0.81
PLCC, 720p | 0.60 | 0.58 | 0.78
PLCC, 480p | 0.71 | 0.72 | 0.81

VII. CONCLUSION

In this paper, a description of the subjective quality evaluation performed on the HEVC video compression standard for SD and HD content has been presented. The results of this evaluation were compared with values obtained from selected objective video quality metrics. The test results show that the most accurate method is VQM, with a correlation close to 0.8.

ACKNOWLEDGMENT

The research published in this submission was financially supported by the Brno University of Technology Internal Grant Agency under project no. FEKT-S-14-2177 (PEKOS).

REFERENCES
[1] Rec. ITU-T P.910. Subjective video quality assessment methods for multimedia applications. Geneva: ITU, 2008.
[2] HANHART, Philippe, Martin RERABEK, Francesca DE SIMONE, Touradj EBRAHIMI and Andrew G. TESCHER. Subjective quality evaluation of the upcoming HEVC video compression standard. Proceedings of the 2nd ACM international workshop on Crowdsourcing for multimedia - CrowdMM '13 [online]. New York, New York, USA: ACM Press, 2013, vol. 22, issue 12, 84990V- [cit. 2014-05-20]. DOI: 10.1117/12.946036.
[3] MYERS, Jerome L and A WELL. Research design and statistical analysis. 2nd ed. Mahwah, N.J.: Lawrence Erlbaum Associates, 2003, xviii, 760 p.
ISBN 08-058-4037-0 Studentská konference Zvůle 2014 22 Stochastic Differential Equations in Biology Marie Klimešová Department of Mathematics FEEC BUT Brno, the Czech republic xklime01@stud.feec.vutbr.cz Abstract: Stochastic differential equations (the SDE) are used to describe physical phenomena. Solution of the stochastic model is a random process. Target of the study of random processes is the construction of a suitable model, which allows understanding the mechanisms. On their basis observed data are generated. Knowledge of the model also allows predicting the future and it is possible to regulate and optimize the activity of the applicable system. In the presented contribution is to first defined probability space and Wiener process. On this basis it is defined the SDE and the basic properties are indicated. The final part contains example illustrating the use of the SDE in practice. Keywords—stochastic process; Brownian motion; Wiener process; stochastic differential equations; application. I. INTRODUCTION Real biological systems will always be exposed to influences that are not fully understood, and therefore there is an increasing need to spread the deterministic models to models that include more difficult differences in the dynamics. A method of demonstrating these elements is by including stochastic influences or noise. A natural extension of an ordinary differential equations model is a system of stochastic differential equations, where corresponding parameters are define as stochastic processes, or stochastic processes are added to the system equations. All biological dynamical systems evolve under stochastic influence, if we define stochasticity as the parts of the dynamics that we cannot predict or understand. To be realistic, models of biological systems should include random influences, since they are concerned with subsystems of the real world that cannot be sufficiently isolated from outer effects to the model. The physiological explanation to include erratic behaviors in a model can be found in the many factors that cannot be controlled, like hormonal oscillations, blood pressure variations, respiration, variable neural control of muscle activity, enzymatic processes, energy requirements, the cellular metabolism, sympathetic nerve activity, or individual characteristics like body mass index, genes, smoking, stress impacts, etc. Also external influences, like small differences in the experimental procedure, temperature, differences in preparation and administration of drugs, if this is included in the experiment or maybe the experiments are conducted by different experimentalists that inevitably will exhibit small differences in procedures within the protocols. Different sources of errors will require different modeling of the noise, and these factors should be considered as carefully as the modeling of the deterministic part, in order to make the model predictions and parameter values possible to interpret. II. BASIC DEFINITIONS Definition 1. If is a given set, then a -algebra on is a family of subsets of with the following properties: (i) (ii) , where is the complement of in (iii) , , … ⋃ . The pair ( , ) is called a measurable space. Definition 2. A probability measure 𝑃 on a measurable space ( , ) is a function 𝑃: → [0, 1] such that (i) 𝑃( ) 𝑃( ) . (ii) if and { } is disjoint (i.e. if ) then 𝑃(⋃ ) ∑ 𝑃( ). The triple ( , , 𝑃) is called a probability space. III. BROWNIAN MOTION One of the simplest continuous-time stochastic processes is Brownian motion. 
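The simulation sketch below anticipates the defining properties of the Wiener process listed in the next subsection, namely W(0) = 0 and independent Gaussian increments with variance equal to the time step, by generating sample paths as cumulative sums of normal increments. It is an illustration only (hypothetical Python, not taken from the cited sources).

```python
import numpy as np

def wiener_paths(n_paths: int, n_steps: int, t_end: float, seed: int = 0):
    """Sample Wiener process paths on [0, t_end].

    Uses the defining properties: W(0) = 0 and independent increments
    W(t + dt) - W(t) ~ N(0, dt), so a path is a cumulative sum of
    Gaussian increments with variance dt.
    """
    rng = np.random.default_rng(seed)
    dt = t_end / n_steps
    increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    paths = np.concatenate(
        [np.zeros((n_paths, 1)), np.cumsum(increments, axis=1)], axis=1)
    times = np.linspace(0.0, t_end, n_steps + 1)
    return times, paths

t, w = wiener_paths(n_paths=5, n_steps=1000, t_end=1.0)
print("E[W(1)] ~", w[:, -1].mean(), " Var[W(1)] ~", w[:, -1].var())  # close to 0 and 1
```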
This was first observed by botanist Robert Brown. He observed that pollen grains suspended in liquid performed an irregular motion. The motion was later explained by the random collisions with the molecules of the liquid. The motion was described mathematically by Norbert Wiener who used the concept of a stochastic process ( ), interpreted as the position at time t of the pollen grain w. Thus this process is also called Wiener process. A. Basic properties of Wiener process Definition 3. The stochastic process is called Brownian motion or Wiener process if the process has some basic properties: 23 Studentská konference Zvůle 2014 (i) (ii) has distribution ( ) for (iii) has independent increments, i.e. are independent for all . Note. The unconditional probability density function at a fixed time ( ) √ ( ) The expectation is zero [ ]  for The variance is [ ] Theorem 1. Let be Wiener process. Then [ ] { } for Proof: [6], pp. 14. B. M-dimensional Wiener process Definition 3. Let ( ) be a stochastic process. Then m-dimensional Wiener process represents ( ) ( ( ) ( )). IV. STOCHASTIC DIFFERENTIAL EQUASTION Definition 4. Let ( ) ( ( ) ( )) be mdimensional Wiener process and [ ] , [ ] be measurable functions. Then the process ( ) ( ( ) ( )) [ ] is the solution of the stochastic differential equation (the SDE) ( ) ( ) ( ) ( ) , where is the stochastic process. After the integration of equation (1) we give the stochastic integral equation   ∫ ( ) ∫ ( )   Equation (2) can we rewrite in the differential   ( ) ( )   We formally replace the white noise by and multiply by V. EXISTENCE AND UNIQUENESS OF SOLUTION Theorem 2. Let and [ ] , [ ] be measurable functions satisfying next conditions: (i) There exists a constant C such that | ( )| | ( )| ( | |) for [ ] (ii) There exists a constant D such that | ( ) ( )| | ( ) ( )| (| |) for [ ] (iii) Let Z be a random variable which is independent of the σ-algebra and [| | ]. Than the stochastic differential equation (3) has a unique tcontinuous solution that [∫| | ] Proof: [8], pp. 65. VI. APPLICATIONS In practical situations we meet with random events which take place in time. Stochasticity is very important in physics, technics, economy and biology. A. Physical and technical sciences For example, it can be the seismic record in geophysics, series of daily maximum temperatures in meteorology, course of the output signal of the electric devices, changes in the number calls on a phone line. B. Social science There it can be processes of mortality and disability of the population, changes in population. C. Economics In economy it can be changes in demand for a product, analysis of the development of exchange shares on the stock exchange, volume of agricultural production, the number of air transport pending. D. Biological and technical sciences In biology it can be a monitoring of various parameters of air pollution, EEG, EKG records in medicine, multiplication processes (e.g., bacteria), etc. VII. USING IN PRACTISE Because the future work will be oriented on biomedical engineering, an example of using stochastic differential equations is chosen in the field of biomedical sciences. 24 Studentská konference Zvůle 2014 The following text shows two SDE models of tumor dynamics. The first model is a single dimensional stochastic differential equation for the influence of chemotherapy on cancer cells, and the second model is a pair of SDEs that describe an immunogenic tumor. Method generates independently and identically distributed (i.i.d.) 
samples and in contrast, method generates independent but non-identically distributed (non-i.i.d.) samples. Method for statistical model checking was used Bayesian statistics. A. Lefever and Garay model This model studies the growth of tumors under immune surveillance and chemotherapy using the following stochastic differential equation: ( ) ( ) ( ) ( ) Here, x is the amount of tumor cells, A0cos(ωt) denotes the influence of a periodic chemotherapy treatment, r0 is the linear per capita birth rate of cancer cells, K is the carrying capacity of the environment, and β represents the influence of the immune cells. Note that Wt is the standard Brownian motion. Starting with a tumor consisting of a billion cells is there at least a 1% chance that the tumor could increase to one hundred billion cells under immune surveillance and chemotherapy. The following specification captures the behavioral specification: 𝑃 ( ( )) Fig. 1 contrasts the number of samples needed to decide whether the model satisfies the property using i.i.d. and noni.i.d.. As expected, the number of samples required increases linearly in the logarithm of the Bayes factor regardless of whether i.i.d. or non-i.i.d. sampling is used. However, non-i.i.d. sampling always requires fewer samples than i.i.d. sampling. Moreover, the difference between the numbers of samples increases with the Bayes factor. That is, the lines are diverging. B. Nonlinear immunogenic tumor model There is the analysis of the studies immunogenic tumor growth. The immunogenic tumor model explicitly tracks the dynamics of the immune cells (variable x) in response to the tumor cells (variable y). The SDEs are as follows: ( ) ( ( ) ( ) ( )) ( ( ( ) ) ( ( ) ) ( ) ( ) ( ( )( ( )) ( ) ( )) ( ( ( ) ) ( ( ) ) ( )  Fig. 1. Comparison of i.i.d. and non-i.i.d. sampling. The parameters x1 and y1 denote the stochastic equilibrium point of the model. Briefly, the model assumes that the amount of noise increases with the distance to the equilibrium point. For this model, the following property is considered: starting from 0.1 units each of tumor and immune cells, there is at least a 1% chance that the number of tumor cells could increase to 3.3 units. The property can be encoded into the following specification: 𝑃 ( ( )) Fig. 1 contrasts the number of samples needed to decide whether the model satisfies the property using i.i.d. and noni.i.d.. The same trends are observed as in the previous model. The difference between these methods increases with the Bayes factor. The property also considered that the number of tumor cells increases to 4.0 units. This property is true with probability at least 0.000005 under a Bayes Factor of 100 000. The i.i.d. sampling algorithm did not produce an answer even after observing 10 000 samples. The non-i.i.d. model validation algorithm answered affirmatively after observing 6 736 samples. Once again, the real impact of the proposed algorithm lies in uncovering rare behaviors and bounding their probability of occurrence. C. Results Results confirm that non-i.i.d. sampling reduces the number of samples required in the context of hypothesis testing, when the property under consideration is rare. However, that if the property isn't rare then a non-i.i.d. sampling strategy will actually require a larger number of samples than an i.i.d. strategy. CONCLUSION Stochasticity is necessary when considering biological systems and processes. Some systems actually rely upon Brownian motions in order to operate efficiently. 
25 Studentská konference Zvůle 2014 In general, stochastic effects influence the dynamics, and may enhance, diminish or even completely change the dynamic behavior of the system. ACKNOWLEDGMENT The work presented in this paper has been supported by Grant FEKT-S-11-2-921 of Faculty of Electrical Engineering and Communication, BUT. REFERENCES [1] S. Ditlevsen, Stochastic differential equation models in biology. Modeling of physiological systems [online]. 2009 [cit. 2014-06-11]. Available at: http://www.math.ku.dk/~susanne/kursusMedicine Technology/SDEnoter.pdf [2] S. Ditlevsen, A. Samson, Stochastic Biomathematical Models, Lecture Notes in Mathematics 2058, Berlin Heidelberg, 2013. DOI 10.1007/978- 3-642-32157-3 1. [3] M. Forbelská, Stochastické modely časových řad. Brno, 2013. Učební text. Masarykova univerzita, Přírodovědecká fakulta, Ústav matematiky a statistiky. [4] S.K. Jha, C.J. Langmead, Exploring behaviors of stochastic differential equation models of biological systems using change of measures. 2012, BMC bioinformatics. [5] M. Klimešová, Stochastic Differential Equations. In Student EEICT. Tábor 43a, 61200 Brno: LITERA, 2014. s. 150-154. ISBN: 978-80-214- 4924- 4. [6] E. Kolářová, Stochastické diferenciální rovnice v elektrotechnice, VUT Brno, 2005. [7] M. Navara, Pravděpodobnost a matematická statistika, Skriptum FEL ČVUT, Praha, 2010. [8] B. Øksendal, Stochastic Differential Equations, An Introduction with Applications, Springer-Verlag, 1995. [9] J. Seidler, Vybrané kapitoly ze stochastické analýzy, Praha, Matfyzpress, 2011. ISBN 978-80-7378-145-3. [10] Techmania - edutorium: Brownův pohyb [online], 2008, [cit. 24.3.2014]. Available at: http://www.techmania.cz/edutorium/clanky.php?key=292 26 Studentská konference Zvůle 2014 Wavelet Transform Based m-QAM Classification Jaroslav Kostrhoun Department of Communication and Information Systems University of Defence Brno, Czech Republic jaroslav.kostrhoun@unob.cz Abstract—This paper deals with the possible method of recognition and classification of digital m-QAM modulations. Classification method is based on the wavelet transform of an analyzed signal and its correlation with specific patters in the wavelet domain. The paper describes the main solved issues within the classification algorithm. The classification algorithm performance was tested on both, artificial and real segments of m-QAM. Keywords—QAM; classification; wavelet transform; templates; correlation I. INTRODUCTION Modern digital communication systems use a wide variety of different modulation schemes. Modulation techniques with variable envelope, where amplitude and phase of the high frequency carrier signal are varied at the same time, are one of the oldest. Nevertheless, they are concurrently important and widely used modulation schemes which can be found in many fields of electronics and communication technologies. The correct recognition of used modulation scheme is essential in all the systems where the modulation scheme can change in dependence on conditions e.g. state of transmission channel or signal to noise ratio. The issue of recognition and classification of m-ary quadrature amplitude modulation, or m-QAM, is discussed in many science articles. There can be several different possible approaches to reach the desired outcome — from techniques based on statistical pattern recognition, i.e. the number of occurrences of specific amplitude or phase, constellation diagram shape recognition or techniques using theoretical decision, i.e. 
computing signal characteristics and comparing them to typical values. The classification method described in this paper is based on the wavelet transform of the analyzed signal and its correlation with the prepared specific patterns in wavelet domain. II. CLASSIFICATION METHOD This work focuses on creation of an m-QAM classification algorithm based on the wavelet transform, which would be able to process segments of real signals from telecommunications. The algorithm performance was subsequently tested. Moreover, this work states and describes some real issues which occurred during the classification algorithm design. The original idea of using the wavelet transform for classification purposes of digital modulations is in detail described in [1]. The mail advantage of wavelet transform, as it is mentioned in [2], is more precise capture of non-stationary signals and aperiodic or quick step changes. Fundamentals of this classification method can be described in brief as follows. First, it is needed to have prepared necessary templates which are basically nothing more than wavelet transformation of a sine wave with specific amplitude and phase. Then, the wavelet transform of an analyzed signal is computed and correlated with the prepared templates. From the set of obtained correlation coefficients can be made the final decision. In [1], the correlation in wavelet domain is defined as:       bay n n baxR nnWnnWW a b yx ,.,,    In (1) the symbol Wx,y stands for the wavelet transform of signals x,y; na are discrete points of wavelet scaling and nb are the discrete points of wavelet time shift. III. THE CLASSIFICATION ALGORITHM This classification method always makes the final decision as a selection from the set of known modulation schemes. If other modulations were processed, they would not be classified correctly. The classification algorithm was designed to work with 4 types of m-QAM which have a rentaculal constellation diagram, i.e. 4-QAM, 16-QAM, 64-QAM and 256-QAM. A. Symbol transfer to the first quadrant of IQ plane To simplify the classification algorithm, all symbols are transferred to the first quadrant of IQ plane due the signal processing. Otherwise, all the correlations would have to be calculated separately for every quadrant of IQ plane and the algorithm would need four times more templates — different ones for every quadrant. Instead of this, only 2 auxiliary templates are added to recognize symbol position. These auxiliary templates have the meaning of sine wave with 0 and π/2 phase shift. According to the sign of correlation with the auxiliary templates, the symbol transfer is performed:  Symbols in 3 quadrant are multiplied by -1,  symbols in 2 and 4 quadrant are shifted in phase ±π/2, thus corresponding number samples per period. 27 Studentská konference Zvůle 2014 This sort of signal processing may not seem ideal, but realizing, that the task is not demodulation but correct m-QAM classification, it is still suitable for the main purpose. B. Adaptation of constelation diagram size Another issue comes with the amplitude of analyzed signal. The classification algorithm expects the exact definite size of constellation diagram. Regardless, the amplitude of real signal is generally unpredictable and can vary significantly from the expected values. Therefore, the amplitude of real signal has to be multiplied by a constant. This part of processing uses the same two auxiliary templates already mentioned above to find desired constant. 
The algorithm looks for the symbol with the lowest amplitude in constelation or at least one of the symbols which are close to the in-phase or quadrature axis. The auxiliary templates are chosen so they give correlation coefficient with those requisite symbols equal 1. Other values different from 1 are then the desired constant used to adapt the constelation size. Considering the sequence of symbols as a random value, the probability of occurrence of such symbol at least once in the analyzed sequence can be calculated using [3]:    knk k pp k n P         1..   In equation (2), n stands for the sequence length, i.e. the number of symbols, k stands for the number of occurrences of suitable symbol and p is the probability of suitable symbol occurrence. The aggregate probability is pretty high, e.g. it is 0.944 for modulation 64-QAM and very short sequence of 5 symbols and even 0.997 for 10 symbols long sequence. C. The proper templates The templates which work as decision-making thresholds were chosen again using probability. The algorithm works with 4 kinds of m-QAM, so three thresholds are needed. Set of symbols of one m-QAM modulation could be seen as a subset of symbols for different m-QAM modulation with greater constellation diagram. This time, it is important to choose the decision-making threshold in order that the majority of symbols in the analyzed sequence would be above or under the threshold. Equation (2) as used once more to calculate the probabilities that the most of the symbols are above or under the threshold. So the templates ta, tb and tc was derived from signals with parameters shown in (3), f is signal frequency and t s time.  )2cos(6.5)2sin(6.5)( )2cos(0.3)2sin(0.3)( )2cos(6.1)2sin(6.1)( ftfttt ftfttt ftfttt C B A         E.g. the template tc, which makes the decision between 64-QAM and 256-QAM, gives the probability 0.912 for the very short sequence of 5 symbols and 0.938 for 10 symbols. Similarly in the case of the other templates, where the probabilities are even higher. D. Computional demands The calculation of the wavelet transform for every symbol gives a two dimensional array of coefficients. To provide a sufficient accuracy of classification, the array must be large enough and ideally have the size of power of 2. So, the signal must be very densely sampled. Although, the lack of samples per period can be compensated by samples interpolation followed up by resampling at needed sample rate. Above stated facts raises a further question, how time demanding is the whole classification algorithm. To answer this, the algorithm has undergone the series of tests to determine the classification accuracy and time consumption depending on the number of samples per signal period. It was find out, that the time consumption rises approximately with the square of samples per period. The dependency of classification accuracy on samples per period under the condition of the same test parameters is shown in the graphic form in Fig. 1. IV. RESULTS The efficiency and classification accuracy of the ready completed algorithm was tested at both, artificially generated segments and real segments of m-QAM. The tests were run first separately for each of the considered modulation and then as a complex of randomly chosen m-QAM. The main parameters set during the testing were signal to noise ratio (SNR), the length of processed sequence and the number of test repetition. The sequence length was set from 5 to 40 symbols. 
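The occurrence probabilities quoted in this and the previous subsection follow from the binomial formula (2). A minimal Python sketch of that calculation is given below; the per-symbol probability p used here is a placeholder that would have to be derived from the constellation and template in question, not a value from the paper.

```python
from math import comb

def prob_at_least_one(n: int, p: float) -> float:
    """Eq. (2) summed over k >= 1: probability that a suitable symbol
    occurs at least once in a sequence of n independent symbols,
    which equals 1 - (1 - p)**n."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(1, n + 1))

# Hypothetical per-symbol probability of hitting a "suitable" symbol.
p = 0.4
for n in (5, 10):
    print(f"n = {n:2d}: P = {prob_at_least_one(n, p):.3f}")
```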
For longer sequences the accuracy did not significantly rise anymore. SNR was set from 0 dB to 30 dB. The number of test repetition was 20000 time at most. Generally, it can be declared, that the accuracy is significantly higher for longer sequences of symbols. For the shortest 5 symbol long sequences the accuracy goes to the maximum of approximately 70 %, whereas for the 40 symbol sequences the accuracy reaches almost the absolute value 100 %. Fig. 1. Algorithm classification accuracy for the different number of samples per period. 28 Studentská konference Zvůle 2014 Fig. 2. Classification accuracy for artificially generater and real segnemts of m-QAM modulations Also the results for generated sequences were slightly higher than those ones for real pieces of m-QAM. Actually, this was expected behavior from the beginning. The main outcome is classification accuracy in situation, where the algorithm gets a completely random segment of m-QAM modulation on its input. The sequences were 40 symbols long. The results are shown in Fig. 2. In the case of artificially generated segments the accuracy goes to 100 % for SNR 10 dB and higher. In the case of real modulation segments is the accuracy lower but still very high. It goes close to the value 94 %. V. CONCLUSION Looking back through the results, the correct function of classification algorithm was successfully proved artificially generated and real segments of m-QAM at the range of SNR from 0 dB to 30 dB. The advantages and disadvantages of the classification method presented in this paper can be summarized as follows:  The first advantage is the high noise resistance and so resulting high classification accuracy for signals with 10 dB SNR and higher.  Another feature to depict as the advantage is comparatively small number of symbols needed to process to obtain the outcome.  On the other hand, the algorithm requires for its function the very dense sampling of analyzed signal. This leads to high computational demands related with the signal preprocessing and the wavelet transform computation. This is a serious disadvantage.  The second disadvantage is the fact, that the algorithm makes the final decision only as a selection from the set of known m-QAM modulations. Hence, it cannot classify the other modulation schemes with different constellation diagrams. In conclusion, method using wavelet transform is a possible technique of m-QAM classification which can reach the remarkably high accuracy, but for the cost of above outspoken negatives at the same time. REFERENCES [1] HO, Ka Mun. Automatic recognition and demodulation of digitally modulated communications signals using wavelet-domain signatures. New Brunswick, New Jersey, 2010, 201 p., Dissertation thesis, The State University of New Jersey. [2] WALNUT, David F. An Introduction to Wavelet Analysis. corrected 2nd printing. 451 p. Basel: Birkhäuser, 2003. ISBN 978-1-4612-0001-7. [3] TUCKWELL, Henry C. Elementary Applications of Probability Theory. second edition. London: Chapman & Hall, 1995. ISBN 0-412-57620-1. 
29 Studentská konference Zvůle 2014 Compression Tool for Aeronautical Data Dominik Kovac Department of Telecommunications Brno University of Technology Brno, Technicka 3058/10 Email: xkovac23@phd.feec.vutbr.cz Pavel Masek Department of Telecommunications Brno University of Technology Brno, Technicka 3058/10 Email: xmasek12@phd.feec.vutbr.cz Michal Jelen Department of Telecommunications Brno University of Technology Brno, Technicka 3058/10 Email: xjelen05@stud.feec.vutbr.cz Abstract—In today’s world, it is still necessary to easily access accurate information. Aeronautical information shall be made available quickly for safety reasons and must simultaneously maintain their accuracy. Today’s system is introduced in paper form, the paper form is integrated and than interpreted according to the needs of different categories of users. In this paper form each time there are the risks of data loss and their consistency and integrity. This can compromise the safety and this data can not be accepted. In future applications, there are aeronautical data with a high level of integrity. The concept of AIXM is the model for the exchange of aeronautical information in digital form. Model AIXM is important for use in aviation, where the trend is to transmit information not only on paper but increasingly also in the electronic version. When transferring data in this form it is very appropriate to limit the transmitted data only on current information, since the capacity of the transmission channel is limited. Therefore, it is beneficial to use compression of AIXM. This article is focused on XML compression and describes created compression tool for AIXM. Tool uses lossy and lossless compression methods. Generally, the lossless compression used in tool saves about 90% of the total size input file. When combined with lossy compression is expected to achieve even better results. I. INTRODUCTION Compression or data processing in order to reduce their volume is important concept in computer science, and not only in her. It is mainly used to achieve a smaller bit rate or reduce the resource needs for working with specific data. Use of compression is immeasurable useful for archiving or transferring data over a network or through to the telecommunications. Advantage of using compression can be observed in almost all kinds of data. This work focuses on the compression of XML, which is dedicated to opening chapter. This chapter opens up the theoretical part and describes characteristics and important concepts of XML and that are necessary to further understanding. In conclusion of this chapter,some of the advantages and disadvantages of XML are described. Following chapter explains and describes the AIXM format, its characterization and modelling conventions. The article introduces the typical model concepts of AIXM, such as temporality model and Digital NOTAM. Model AIXM is important for use in aviation, where the trend is to transmit information not only on paper but increasingly also in the electronic version. When transferring data in this form it is very appropriate to limit the transmitted data only on current information, since the capacity of the transmission channel is limited. Therefore, it is beneficial to use compression of AIXM. Third chapter deals with the compression of XML data. Although it deals with the general characteristics of compression, the core of this chapter are mainly types of specific groups of compression. 
An important point is the internal structure of the XML document and its impact on the overall compression. Last chapter deals with application, which was created for the purpose of lossy and lossless compression AIXM data. It mentions the principle of operation of each type of compression in the program and which compression mechanisms are used. The following are described in greater detail each part of the application program. II. EXTENSIBLE MARKUP LANGUAGE (XML) XML (eXtensible Markup Language) is a generic and open markup language standardized by the W3C (World Wide Web Consortium), to represent the data structure [1]. Like HTML (HyperText Markup Language) based on the older and more complex metalanguage SGML (Standard Generalized Markup Language). XML is used to describe documents and data in a standardized, text-based form. Therefore, it may be forwarded through various traditional transport protocols. A. Characteristics of the XML There were several objectives in the design of XML in 1998, but they can be summed up in simplicity, generality and applicability [1]. Outside the goals that XML should meet were also set out certain pillars on which it is built - extensibility, structure and validity. Initial validation standard XML file was DTD (Data Type Definition), which was replaced by another alternative, also based on the so-called XML schemas, XSD (XML Schema Definition). XML focuses on the substantive content and does not deal with appearance. It can edit style languages, which they are many. Most known examples include CSS (Cascading Style Sheets). They are usually only used in cases where you want a simple formatting. A more complex option is the family of languages XSL (eXtensible Stylesheet Language). This style allows more edit document, transform, or select individual parts to generate contents and index using XSLT (XSL Transformations), XSL-FO (XSL - Formatting Objects) and XPath (XML Path Language) [2]. Example XML using XSL formatting is shown in Fig 1. Using possibilities of XML are plentiful, from serialized data to produce metadata and configuration files, exchange of 30 Studentská konference Zvůle 2014 data between applications in the cloud, smart Web sites, electronic publishing through universal data format or for transfer the XML into HTML using XSLT. It is especially suited for documents containing structured information. Almost every document can observe a certain data structure. Fig. 1. Principle of formatting XML document using XSLT and XSL-FO B. Advantages and Disadvantages of the XML 1) Advantages: Among one of the most important advantages is the fact that using XML we can write your own markup language. The developer is not limited to any specific elements, tags or attribute values. Additionally, creating custom markup languages for different purposes and different types of data is easy. Anyone therefore can invent their own tags and rules that very well define the exact structure of each XML document as needed. In well written code too uninformed person may not have extensive prior knowledge of the language. Another important feature is the portability XML and the resulting compatibility. With a well-formed document meets the W3C recommendation, therefore there is no problem in communicating with other institutions or applications that support the same specifics of the transmitted XML data. Typically, the documents are also easily workable for parsers, as well as other tools adapted to work with XML [3]. 
Finally, it is also a data format with strong support of Unicode. 2) Disadvantages: It must be acknowledged that despite the numerous advantages, XML is not exactly compact. Due to its verbosity, data are represented usually in much larger scale than their original form. XML does not address issues regarding disk space or bandwidth of transmission media. III. AERONAUTICAL INFORMATION EXCHANGE MODEL (AIXM) The concept of AIXM (Aeronautical Information eXchange Model) is the model for the exchange of aeronautical information in digital form [4]. Primary objective is to enable effective management and distribution of aeronautical information services (AIS) in international civil aviation, which increases the safety and efficiency of air navigation service. AIXM falls under the AIM (Aeronautical Information Management), all backed by then the ICAO (International Civil Aviation Organization). A. Characteristics of the AIXM AIXM is based on the conceptual model of the AICM (Aeronautical Information Conceptual Model), while utilizing the UML (Unified Modeling Language), GML (Geography Markup Language) and XML. AICM is based on defined linkages and relationships between objects and their properties. It is generally based on the recommendations of ICAO and practice, industry standards, and data concepts published in aeronautical reports. It should be pointed out that the AIXM is only one possible implementation of AICM, which does not use all its possibilities. When using the appropriate formats for data exchange can therefore implement other solutions [5]. The development of AIXM cooperates European organization for the safety of air navigation EUROCONTROL with U.S. organization FAA (Federal Aviation Administration). The origins of the draft AIXM date back to the 90s of the last century and development and also improvement of the model continue. Latest available version AIXM 5.1 brought several key principles. Model is now modular and extensible, closely utilizes local standards organization ISO (International Organization for Standardization) is working with the comprehensive model of temporality, including support for the information contained in the digital NOTAM (Notice to Airmen) reports, among others, supports the current identification codes of airports and user data may include types of barriers against normal operation, terminal procedures and airport mapping database [4]. AIXM consists of two major components. Conceptual model of AIXM and AIXM XML schema. The conceptual model is dedicated to aviation domain, describes the internal AIXM objects, their properties and attributes of association. Can be used as a logical basis for the AIM database. In contrast with scheme, the scheme has been more inclined to exchange aeronautical data and the actual implementation of the conceptual model. Thanks to schema AIXM is what it is. B. Temporality model Time is one of the most important aspects of aeronautical information domain where each notification about the change coming well in advance of its validity. After aeronautical information systems are usually required to ensure the current situation and to prepare for future changes. Information with expired validity period must also be archived for possible investigation. Due to operational reasons, the difference between the permanent and the temporary changes in status is usually applied. Permanent changes are valid until the next permanent change or the end of life components. In contrast, the temporary status is a change of limited duration. 
It is considered overlap state components, respectively its properties over time. It can be described by the concept of overlap. Once the change of temporary status ends, the statuses of the components are changed to their original value by the concept of recovery [6]. Model of temporality cares about the exact expression of the status of individual components and events in time. It also allows the implementation of digital NOTAM messages to AIXM. A key assumption of model of temporality is that part of the properties may change over time. An exception to this case is global unique identifier, which in this way can not be changed. 31 Studentská konference Zvůle 2014 Due to the temporary nature of the model degree of versatility can be further divided into time slots. To the basic time slots or temporary ones. Basic time slots apply to all properties of the component parts and describe the condition as a result of permanent changes, while the temporary one applies only to one of the properties and describes transient state components overlap in accordance with the concept of overlay and restoration. Temporary time lapse is sometimes called the temporary delta [6]. Thanks to them can be applied to digital NOTAM messages in AIXM. C. Digital NOTAM Classical NOTAM messages are in the form of a notice distributed by means of telecommunication services. The first arose in the mid of the last century. Nowadays contain information dealing with organizations, state and changes in any aeronautical facility. Further describes the services, procedures, and possible hazard, the timely knowledge of which is necessary for personnel in flight operations. NOTAM messages are unreadable for person without knowledge. NOTAM messages are created by a national aviation authority, as defined in the aviation laws. Those responsible institutions of individual countries are exchanging NOTAM messages between them. Their validity may be several hours or to other possible changes. Digital NOTAM messages are formed by transfer of plain text of the classic NOTAM structured into a digitized form that allows automatic processing of information [6]. They are in the form of updating data files, and compared to conventional processing personnel they are processed using specialized systems [7]. Example of Digital NOTAM messages: 280847 EBBRYNYN (A1143/02 NOTAMN Q) EBBU/QOBCE/IV/M/A/000/999/5027N00427E002 A) EBCI B) 0208280800 C) 0210301400EST E) CRANE ERECTED 25M AGL - 200M AMSL AT 1200M ARP ON A MAG BRG OF 40 DEG. AND AT 450M RIGHT SIDE OF THR25. NO ICAO MARKINGS.) IV. XML COMPRESSION Compression can be generally called as a operation that identifies and removes redundancy from arbitrary data. XML data compression can be used to find smaller size of the original document as much as possible. This reduction produces different solutions for different benefits. Reduction of the original file is reflected, on the amount of space used on the storage media or on the used bandwidth of transmission capacity of the line. A. Data compression Data compression can be classified into two basic types: 1) Lossless compression: Depending on the type of input data usually does not reduce the file size dramatically, but always retains complete information. After decompression are reconstructed the original data. It is used primarily where the loss of even a single character can mean irreparable damage to the file. IT Can be applied to the above-mentioned multimedia data, but usually is used to compress text data. 
2) Lossy compression: It is more efficient than lossless compression, but at the cost of reconstruction accuracy. The algorithms usually reduce the amount of data to a fraction of its original size. Typically uses are for shrinking audio, video or pictures. It is a mere approximation of the original data, while the majority uses the imperfection of human senses [8]. When this compression is losing less important information that cannot be retrospectively reconstructed. B. Compression mechanism The compression mechanism is divided into two main groups, depending on how the XML document is seen depending on the structure [9]. 1) Compressing XML as text: When compressing XML as a text document regarded as a normal file, which is compressed as a whole, regardless of their internal structure. Since XML is basically a text file that can be applied to any of efficient algorithms specializing in text compression. 2) XML-aware compression: As the name suggests, these compression methods use to they advantage knowledge of the internal structure of the XML document, so they are focusing directly on the semantic information stored in the XML data. These in turn are used to prepare the XML data to the next stage of compression. At that applied already common compression algorithms, so as to achieve the best results. Depending on the availability of XML schema we can also divide these methods to depending on the scheme and the independent scheme. In the case of the compressor according to the decompressor also have access to the XML schema of the file to compression adequately perform. Independent methods for its full functionality of the scheme required. Although dependent mechanisms are able to achieve better compression ratios, in practice, not because of the uncertainty continued availability of preferred scheme [10]. V. COMPRESSION TOOL FOR AIXM We created an application that allows compressing AIXM data. In the fact of contemporary knowledge it was appropriate to use the textual form of the data and apply methods for lossy and lossless compression. When lossy compression to through the remove unimportant tags and their content, while for lossless compression we use mechanisms LZMA and XMill. They represent modern open-source solutions of fast and methods with high quality compression. Both methods available have been selected with regard to the earlier test results XML data compression. LZMA compression represents an efficient mechanism, belongs to the compression of XML as text, while XMill represents the faster and belongs to a group of XML-aware compression. Now will be described individual elements of the GUI. GUI can be divided into three areas. These are the numbers 1, 2 and 3 shown in Fig 2. The first region (shown as 1) consists of a control panel which will always be found in the left part of the navigation element (NAVIGATION). Control panel serves for the passage of program components INPUT, SELECTION 32 Studentská konference Zvůle 2014 and OUTPUT. In the event that the program has some of the control elements, they will be plotted on the right part. Equally important part is the GUI browser currently processed data, or preview panel (labeled as 2). It can display a listing of XML code tree structure created by parsing. Both options are used to the best possible description of the currently processed data. 
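As an illustration of the lossless back-end mentioned above, the following minimal sketch (it is not the authors' application, and the input file name is only a placeholder) compresses an XML/AIXM file with the LZMA algorithm from the Python standard library and reports the achieved saving; for verbose XML data, savings in the order of the 90 % reported in this paper are plausible.

# Minimal sketch: lossless LZMA compression of an XML/AIXM file.
# The file name is a hypothetical placeholder.
import lzma
import os

def compress_xml(path, preset=9):
    # Read the XML document as raw bytes and compress it losslessly.
    with open(path, "rb") as f:
        raw = f.read()
    packed = lzma.compress(raw, preset=preset)
    out_path = path + ".xz"
    with open(out_path, "wb") as f:
        f.write(packed)
    saving = 1.0 - len(packed) / len(raw)
    print("%s: %d B -> %d B (%.0f %% saved)"
          % (os.path.basename(path), len(raw), len(packed), 100 * saving))
    return out_path

# compress_xml("aixm_basic_message.xml")   # hypothetical input file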
For better orientation, the user can in part INPUT and OUTPUT take advantage indicate any element tree structure, wherein the operation designates also the associated line of code in Listing XML data. Last major part of the user interface is an information panel (indicated as 3). He often displays useful information every time the program runs. Text output is still a different color depending on what type of information is presented to the user. Text in cyan hue are for information only and the user need not take them into account, it is usually a confirmation of the correct execution of the act. Opposite produce reports with a yellow tinge. Such information usually indicates an error, or deficiency, which is the correct continuation needs to be corrected. Program itself consists of three main program areas. Their names are buttons on the left side of the control panel. • INPUT - This part is active immediately after starting the application. The following options to retrieve AIXM data and NOTAM or decompress previously compressed output. In all three cases, the entry after carrying out the necessary tasks loaded into the preview block consisting of two part. Left part contains the code and right part contains a tree structure. • SELECTION - In the selection, the user can choose from the loaded elements and when selecting only certain elements leads to lossy compression, because only selected elements will be taken into the output format. • OUTPUT - The output part displays the selected elements and allows to user store them in AIXM format or applying lossless compression. VI. CONCLUSION AND FUTURE WORK This paper presents an application that uses lossy and lossless compression mechanisms for XML, respectively AIXM data. This has been achieved by using the information processed in this work. The program can recognize dozens of AIXM objects. Program have the user interface with a variety of display options, and output can be saved with lossless compression of LZMA or XMill, or with a combination of lossy compression and lossless compression. Generally, the lossless compression saves about 90% of the total size of the input file. When combined with lossy compression is expected to achieve even better results. Of course, taking into account the user’s choice. With such a compressed data applications can then also work, as well as feedback allows decompression and the related possible further compression according to the choice of found elements. ACKNOWLEDGMENT This research work was supported by the project CZ.1.07/2.3.00/30.0005 of Brno University of Technology. Fig. 2. Appearance GUI application when retrieve AIXM data REFERENCES [1] T. Bray, J. Paoli, C. M. Sperberg-McQueen, E. Maler, and F. Yergeau, “Extensible Markup Language (XML) 1.0 (Fifth Edition),” World Wide Web Consortium, Recommendation REC-xml-20081126, Nov. 2008. [2] Extensible Stylesheet Language (XSL) Version 1.1. [Online]. Available: http://www.w3.org/TR/xsl11/ [3] S. Sol, “Advantages and disadvantages of XML.” [Online]. Available: theukwebdesigncompany.com/articles/ xml-advantages-disadvantages.php [4] EUROCONTROL, “Aeronautical Information Exchange,” 2006. [Online]. Available: eurocontrol.int/services/aixm.aero/public/standard\ page/introduction.html [5] ——, “Aeronautical Information Exchange Model,” 2006. [Online]. Available: eurocontrol.int/ services/aeronautical-information-exchange-model-phase-3-p-09 [6] EUROCONTROL AND FEDERAL AVIATION ADMINISTRATION, “AIXM Temporality Model,” 2010. [Online]. 
Available: aixm.aero/ public/standard\ page/download.html [7] EUROCONTROL, “Digital NOTAM.” [Online]. Available: eurocontrol. int/articles/digital-notam-phase-3-p-21 [8] K. Sayood, Introduction to data compression, 4th ed. Waltham, MA: Morgan Kaufmann, 2012. [9] S. Sakr, “Investigate state-of-the-art XML compression techniques,” 2011. [Online]. Available: https://www.ibm.com/developerworks/xml/ library/x-datacompression/ [10] ——, “XML compression techniques,” Journal of Computer and System Sciences, vol. vol. 75, no. issue 5, pp. 303–322, 2009. [Online]. Available: http://linkinghub.elsevier.com/retrieve/pii/S0022000009000142 33 Studentská konference Zvůle 2014 Aircraft Wiring and Transients Caused by Lighting David Krutílek Dept. of Radio Electronics Brno University of Technology Brno, Czech Republic xkruti01@stud.feec.vutbr.cz Abstract— This paper deals with simulations of lightning effects on the aircraft. They can be divided into two broad categories: direct and indirect. Furthermore, production of suitable simulative models for explored object, semi-composite aircraft EV-55, will be discussed. Selected computational methods and real measurement results will be compared as well. Keywords— Simulation; EMC; aircraft; CEM; Direct and indirect effects of lightning I. INTRODUCTION The development of electronics in several last decades, especially the beginning of microprocessor technologies, personal computers and electronic communication systems led to a development of the new scientific and technical discipline – Electro Magnetic Compatibility (EMC) [1], [2] in the sixties of the 20th century. In present, this branch has become an inseparable part of everyday life. Concrete reasons, why this branch is more and more frequently discussed are especially medical, safety, technical and economical. Nonobservance and breach of EMC requirements may lead to various consequences, from user unacceptable situations – e. g. interference of radio receiver by mobile phone signals – to disasters, such as the concrete case that happened in 2002, when aircraft L-410 was holding 15 km from the destination airport at a height of 2 500 ft AGL (Altitude above ground level), due to bad weather at Anjouan (island in the Indian Ocean that forms part of the Union of the Comoros). When conditions improved clearance was obtained for the final approach. The airplane was struck by lightning. A go-around was attempted, but the airplane lost the artificial horizons and gyrocompasses as a result of the lightning strike. Control was lost and the airplane crashed [3]. II. EM THREATS This part is focused on environment and situations affecting the aircraft that may appear during normal operating conditions. We would like to discuss especially the environment with high intensities of electromagnetic field, which is abbreviated as HIRF. Illumination of the aircraft by such kind of electromagnetic field is shown in Fig. 1. Flight through a HIRF environment is more common than being struck by lightning. However, we cannot exclude that the aircraft will not be struck by lightning at all. The effects of lightning can be either direct or indirect. The direct and indirect effects of lightning usually appear when the aircraft becomes a part of lightning channel. Fig. 1 Surface current at 400 MHz. A. Direct effect of lighting During a lighting strike it is possible that the aircraft may be mechanically and thermally damaged and sparking as well as arc may appear at seams of its conductive parts. 
Sparking is dangerous primarily in the area of the fuel tanks, and an arc may weld and damage door locks, preventing passengers from leaving the plane in the case of an emergency landing. To prevent such unexpected situations, electrical bridging is used as the most suitable solution. Lightning diverters are used for the protection of composite parts so that the lightning current is diverted to the airframe. In fact, these are copper strips, which are shown in Fig. 2.

Fig. 2 Lightning diverters on EV-55.

The aircraft is divided into so-called zones. This means that the most critical places on the aircraft are determined; these are the places where natural lightning statistically most frequently enters and leaves the aircraft [4]. These places must be protected in a special way.

B. Indirect effects of lightning

Concerning the indirect effects of lightning, especially the susceptibility of airborne systems is examined. Under certain circumstances, a lightning strike may cause dangerous transients in cabling and therefore damage airborne devices, including a threat of disruption of a safe flight. The levels of interference caused by lightning are determined by European standards [5], [6]. Cabling can be protected by shielding.

III. LIGHTNING SIMULATION

An aircraft is struck by lightning several times a year. Simulation of a lightning stroke into the aircraft is useful especially during the development of the aircraft, when a functional prototype has not yet been produced. These simulations locate the places through which the lightning current flows, or eventually the construction may be modified so that it complies with the given requirements. This solution saves development and certification costs and improves flight safety. A numerical model of the EV-55 aircraft is used for the simulations. This aircraft is being developed by Evektor Ltd [7]. The following chapters are concerned with the creation of the numerical model and the verification of the calculated results.

Fig. 3 The process of model creation: a) CAD data, b) numerical model, c) flying prototype.

A. Creation of the numerical model

The original CAD data of the EV-55 aircraft can be seen in Fig. 3a. It is a very detailed model, which places great demands on the calculation. That is the reason why simplified numerical models are created, in which rivets and small holes are neglected and the particular parts have zero thickness. Such a simplified model is shown in Fig. 3b. The functional flying prototype of the EV-55 aircraft is in Fig. 3c. The characterization of materials is also simplified: metal parts are considered to be a PEC material and layered materials are replaced by analytical models with zero thickness.

B. Verified Method

Speaking about simulation models, it is important to verify the results. The results from the HIRF SE project [8] were applied to the verification of the computed results. The Low Level Direct Drive (LLDD) method is used for the measurement of the aircraft at low frequencies. We have measured surface currents and also transients on cabling. A return grid is created around the aircraft and the fuselage of the aircraft is excited as shown in Fig. 4. An analogy of this test excitation of the aircraft can be found in a coaxial cable.

Fig. 4 Excitation LLDD, simulation.

If we look at the results of a simulation of the real measurement configuration, it is obvious that a very good correspondence occurs (Fig. 6). The red line corresponds to the simulation and the blue one to the measurements. The position of the surface current probe is marked in the sketch by the blue rectangle between the wings (Fig. 5).
Platform: EV-55; Test point: SC12b; Excitation: Direct Drive; SC sensor orientation: Along fuselage; Frequency band: 10 kHz – 20 MHz

Fig. 5 Position of SC probe.

Fig. 6 Data comparison of the measured and simulated surface current at test point SC12b [9].

IV. CONCLUSION

To conclude, the creation of the electromagnetic model of the EV-55 aircraft has shown that simplifying, repairing and cleaning the CAD models before their meshing is a key element in the whole process of model preparation for the simulation. These procedures depend on manual operations and they are time consuming. Their preparation usually takes about 85 percent of the total amount of time, which needs to be taken into consideration. The analysis and simulation of a properly prepared model therefore represents 15 percent of the necessary time. The LLDD method was selected for measurement in a frequency spectrum similar to that of lightning. The good conformity of the results at low frequencies follows from the fact that the direct drive excitation of the fuselage is similar to a lightning strike. This is a good precondition for lightning strike simulation.

ACKNOWLEDGMENT
Research presented in this paper was financially supported by the European FP7 project "High Intensity Radiated Field Synthetic Environment". The project is solved in cooperation with EVEKTOR Ltd., Kunovice, Czech Republic.

REFERENCES
[1] SVAČINA, J. Elektromagnetická kompatibilita. Lectures. Brno, Dept. of Radio Electronics.
[2] PAUL, C. R. Introduction to Electromagnetic Compatibility. John Wiley, New York, 1992. 784 p. ISBN 978-0471549277.
[3] AVIATION SAFETY NETWORK [online] - [30 Jul 2014] available at: http://aviation-safety.net/database/record.php?id=20021227-0
[4] EUROCAE ED-91 / SAE ARP 5414 Aircraft Lightning Zoning Standard.
[5] EUROCAE ED-14D / RTCA DO-160G (December 2010) – Section 22.
[6] EV55/CRI F02 Lightning Protection - Indirect Effects (IEL), EASA 2007.
[7] EVEKTOR [online]. Kunovice: EVEKTOR spol. s r. o., 2014 - [30 Jul 2014] available at: http://www.evektor.cz/.
[8] HIRF Synthetic Environment [online] - [30 Jul 2014] available at: http://ec.europa.eu/research/transport/projects/items/hirf_se_en.htm.
[9] TOBOLA, P., TK7.2-EV55-LLSF-1 RESULT ANALYSES REPORT, internal document created by the project HIRF SE, 2013, GA 205594 (FP7).

Equivalent Circuits of Three–Element Filtering Antenna Array Fed by Apertures

Martin Kufa, Zbynek Raida
Dept. of Radio Electronics
Brno University of Technology
Brno, Czech Republic
martin.kufa@phd.feec.vutbr.cz, raida@feec.vutbr.cz

Jordi Mateu
Dept. of Signal Theory and Communication
Universitat Politècnica de Catalunya (UPC)
Castelldefels, Spain
jmateu@tsc.upc.edu

Abstract—The paper is focused on a design of a three-element filtering patch antenna array using an equivalent-circuit approach. The array is requested to exhibit a prescribed frequency response of the realized gain which corresponds to the output signal of a band-pass filter. When implementing the antenna, no filter elements are allowed to be used. Such an antenna can become a good candidate for the design of filtering antennas (filtennas). We present an equivalent circuit of such an antenna array. The equivalent circuit is validated by a full-wave model of the antenna. The equivalent circuit is transformed to a low-pass prototype filter. The low-pass prototype filter is intended to be exploited for the synthesis of filtering antenna arrays.
Keywords—Filtering antenna array; filtenna; pass-band filter; low-pass prototype filter; low-pass transformation

I. INTRODUCTION

Today's wireless communication systems are expected to be small to provide easy mobility. Therefore, all components of a transmitter and a receiver should have minimal dimensions. The minimal dimensions of the transmitter or the receiver can be obtained by an integration of the filter into the antenna. The topic has been discussed in several papers:

• In [1], the authors presented a hairpin band-pass filter connected to a patch antenna. The patch played the role of the last resonator of the filter. That way, the dimensions of the whole structure were partially reduced.
• In [2], attention was turned to evolving the filter antenna from a three-pole filter consisting of square-loop resonators. The output port of the filter was replaced by a Γ-shaped antenna.
• In [3], a five-pole filter in a substrate integrated waveguide (SIW) was integrated with a six-element printed Yagi antenna. The SIW filter played the role of a balun at the same time.
• In [4], a four-element patch antenna was completed by a three-pole planar filter. The first pole was formed by a power divider, the second pole was created by a balun and the last pole was generated by rectangular patch antennas.

This brief overview of the latest development in the field of filtering antennas shows that many designs implement both filtering and radiation by a single, compact planar layout. On the other hand, all the discussed designs were driven intuitively.

In this paper, we analyze a three-element filtering patch antenna array fed by apertures. The filtering array is represented by an equivalent circuit. Then, the equivalent circuit is transformed to a low-pass prototype filter. Since the low-pass prototype representative of an aperture-fed patch array is available, a design procedure starting with a low-pass prototype and resulting in a requested filtering radiating structure can be applied. The described approach can be a good candidate for the design of an antenna array which behaves like a filtering antenna (filtenna) without applying any filter.

II. THREE-ELEMENT FILTERING PATCH ARRAY FED BY APERTURES

We designed a three-element filtering patch array fed by apertures. The array did not exhibit parasitic resonances and the main lobe direction stayed perpendicular to the aperture within the operation band. The array was designed for the substrate ARLON 25N (relative permittivity εr = 3.38, thickness h = 1.524 mm, neglected losses). Fig. 1 shows that the bottom layer is created by a 50 Ω transmission line. The ground plane with rectangular apertures is located in the middle layer and the patch antennas are placed on the top layer. The distance between neighboring patches is approximately one wavelength. In-phase feeding of the patches ensures that the main lobe direction is perpendicular to the substrate. The length of the patches is L = 13.2 mm, and the width of the patches equals W = 12.5 mm. The apertures are La = 6.5 mm long and Wa = 0.6 mm wide. The microstrip feeder is designed to exhibit the characteristic impedance Z0 = 50 Ω. The width of the transmission line is w = 3.3 mm, and the length of the open end lo equals a quarter of the wavelength, approximately. The designed antenna array fed by apertures was analyzed by CST Microwave Studio.

(Research described in this paper was financially supported by the Czech Science Foundation under grant no. P102/12/1274. The research is a part of COST Action IC1301 (grant no. LD14057 provided by the Czech Ministry of Education). Measurements and simulations were performed at the SIX Research Center (grant no. CZ.1.05/2.1.00/03.0072).)

Fig. 1. Three-element patch array fed by apertures.

Fig. 2 shows that the array is designed for the frequency 5.64 GHz. The 10 dB frequency bandwidth of the array is about 3 % and the reflection coefficient is better than –22 dB. No parasitic resonances appear in the operation band. The frequency response of the normalized realized gain creates an equivalent of a band-pass filter (green line). For the gain, the 3 dB frequency bandwidth is 7.4 % wide, and the maximal realized gain is 10.7 dB.

Fig. 2. Frequency response of reflection coefficient (blue) and frequency response of normalized realized gain in the direction perpendicular to the substrate (green) of the three-element filtering patch array fed by apertures.

The selectivity of the equivalent band-pass filter is better than 39 dB/GHz. The suppression in the stop band equals 20 dB. Fig. 2 also shows that the three-element filtering patch array fed by apertures radiates in the frequency range from 5.47 GHz to 5.89 GHz only. The main lobe direction varies from –5° to 0°.

III. EQUIVALENT CIRCUIT OF THREE-ELEMENT FILTERING PATCH ARRAY FED BY APERTURES

Fig. 3 shows an equivalent circuit of the three-element filtering patch array fed by apertures. The equivalent circuit is composed of three parallel RLC resonant circuits, three J-inverters and four parts of a transmission line. The parallel RLC resonant circuits simulate the behavior of the patches. In order to meet the outputs of full-wave simulations, the parameters of the RLC resonator were set to R = 53.0 Ω, L = 58.6 pH, and C = 14.2 pF. The J-inverter simulates the coupling between the microstrip transmission line and the individual patch. The coupling equals 0.0189. The width of the microstrip feeder is 3.3 mm, and its characteristic impedance equals 50 Ω. The length of the first part of the feeder (from the source to the first patch) is 20 mm. The lengths of the second and the third part of the feeder equal 30 mm. The length of the last part of the feeder (from the last patch to the open end) is 8.8 mm.

Fig. 3. Equivalent circuit of the three-element filtering patch array fed by apertures: ANSYS Designer model.

The equivalent circuit was verified in the circuit simulator ANSYS Designer and in MATLAB by a script exploiting ABCD matrices [5], [6]. The ABCD matrix of the transmission line follows

\begin{bmatrix} A & B \\ C & D \end{bmatrix} = \begin{bmatrix} \cosh(\gamma l) & Z_c \sinh(\gamma l) \\ Y_c \sinh(\gamma l) & \cosh(\gamma l) \end{bmatrix}   (1)

Here, Zc is the characteristic impedance of the feeder, Yc is the characteristic admittance of the feeder, l is the length of the feeder and γ is the complex propagation constant. For the lossless transmission line

\gamma = j\beta   (2)

where β is the phase constant. Following [6], equation (1) can be rewritten to

\begin{bmatrix} A & B \\ C & D \end{bmatrix} = \begin{bmatrix} \cos(\beta l) & jZ_c \sin(\beta l) \\ jY_c \sin(\beta l) & \cos(\beta l) \end{bmatrix}   (3)

The ABCD matrix of the J-inverter is

\begin{bmatrix} A & B \\ C & D \end{bmatrix} = \begin{bmatrix} 0 & \pm 1/(jJ) \\ \mp jJ & 0 \end{bmatrix}   (4)

and the ABCD matrix of the parallel RLC resonant circuit equals

\begin{bmatrix} A & B \\ C & D \end{bmatrix} = \begin{bmatrix} 1 & \dfrac{j\omega R L}{R - \omega^2 R L C + j\omega L} \\ 0 & 1 \end{bmatrix}   (5)

Using equations (4) and (5) and considering the condition for the short end of a one-port network [5], the ABCD matrix of a parallel combination of the resonant circuit and the J-inverter (a parallel branch) can be obtained:

\begin{bmatrix} A & B \\ C & D \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ \dfrac{j\omega R L J^2}{R - \omega^2 R L C + j\omega L} & 1 \end{bmatrix}   (6)

In the next step, we can calculate the total ABCD matrix of the equivalent circuit of the whole antenna structure by multiplying equations (1) and (6):

M_{TOTAL} = M_{TLs} \cdot M_{PB1} \cdot M_{TL1} \cdot M_{PB2} \cdot M_{TL2} \cdot M_{PB3} \cdot M_{TLo}   (7)

Here, M_{TLs} is the ABCD matrix of the transmission line from the source to the first patch and M_{TLo} is the ABCD matrix of the transmission line from the last patch to the open end of the transmission line. M_{TL1} is the ABCD matrix of the transmission line between the first patch and the second patch, and M_{TL2} is the ABCD matrix of the transmission line between the second patch and the third patch. M_{PB1}, M_{PB2} and M_{PB3} are the ABCD matrices of the parallel branches (patches). The reflection coefficient of the equivalent circuit may be calculated from the total ABCD matrix (7) by using [5], [6]

S_{11} = \frac{A + B Y_0 - C Z_0 - D}{A + B Y_0 + C Z_0 + D}   (8)
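To make the use of equations (1) to (8) concrete, the following minimal Python/NumPy sketch cascades the ABCD matrices of the feeder sections and of the J-inverter-coupled RLC branches and converts the total matrix to the reflection coefficient according to (8). The effective permittivity of the microstrip line is an assumed illustrative value, not a parameter taken from the paper; the ANSYS Designer and MATLAB models described above remain the reference.

# Sketch of the ABCD-matrix evaluation of the equivalent circuit (eqs. (3)-(8)).
# eps_eff is an assumed value; element values follow the text above.
import numpy as np

Z0 = 50.0                                   # line and reference impedance [ohm]
R, L, C = 53.0, 58.6e-12, 14.2e-12          # parallel RLC of one patch
J = 0.0189                                  # J-inverter (coupling) value
lengths = [0.020, 0.030, 0.030, 0.0088]     # feeder sections [m]
eps_eff = 2.6                               # assumed effective permittivity
c0 = 299792458.0

def abcd_line(l, f):
    beta = 2 * np.pi * f * np.sqrt(eps_eff) / c0         # eq. (3), lossless line
    return np.array([[np.cos(beta * l), 1j * Z0 * np.sin(beta * l)],
                     [1j * np.sin(beta * l) / Z0, np.cos(beta * l)]])

def abcd_branch(f):
    w = 2 * np.pi * f
    z_rlc = 1j * w * R * L / (R - w**2 * R * L * C + 1j * w * L)   # parallel RLC
    return np.array([[1, 0], [J**2 * z_rlc, 1]])          # eq. (6), shunt branch

def s11(f):
    m = abcd_line(lengths[0], f)
    for i in range(3):                                    # eq. (7), cascade
        m = m @ abcd_branch(f) @ abcd_line(lengths[i + 1], f)
    A, B, Cm, D = m.ravel()
    return (A + B / Z0 - Cm * Z0 - D) / (A + B / Z0 + Cm * Z0 + D)   # eq. (8)

freqs = np.linspace(5.0e9, 6.5e9, 301)
s = np.array([s11(f) for f in freqs])
print("minimum |S11| = %.1f dB" % (20 * np.log10(np.abs(s)).min()))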
Figure 4 compares the results of the full-wave analysis in CST Microwave Studio (green line), the results of the circuit analysis in ANSYS Designer (red line) and of the simulations in MATLAB (blue line). Obviously, the frequency responses of the reflection coefficient computed in ANSYS Designer and by the MATLAB script agree well. The frequency shift between the full-wave model and the equivalent circuit from MATLAB is 63 MHz, approximately. The matching of the full-wave model is about 5.3 dB worse compared to the MATLAB model. Hence, the equivalent circuit model is proven to be able to replace the full-wave model in faster, less accurate calculations.

Fig. 4. Comparison of the full-wave model implemented in CST Microwave Studio (green line), the equivalent-circuit model implemented in ANSYS Designer (red line), and the equivalent-circuit model implemented in MATLAB (blue line).

IV. LOW-PASS TRANSFORMATION

In this chapter, we present a transformation of the equivalent-circuit model of the patch array fed by apertures to a normalized low-pass prototype filter. The frequency transformation is given by the relation

\Omega = \frac{\Omega_C}{FBW}\left(\frac{\omega}{\omega_0} - \frac{\omega_0}{\omega}\right)   (9)

The transformation from the low-pass filter to the band-pass filter can be calculated using

j\Omega\, g = j\frac{\Omega_C}{FBW}\left(\frac{\omega}{\omega_0} - \frac{\omega_0}{\omega}\right) g = j\omega\,\frac{\Omega_C\, g}{FBW\,\omega_0} + \frac{1}{j\omega}\,\frac{\Omega_C\,\omega_0\, g}{FBW}   (10)

Here, ΩC is the cutoff frequency of the low-pass prototype filter, ω0 denotes the center angular frequency, FBW is the fractional bandwidth and g is the normalized value of the low-pass prototype filter. After several mathematical operations, equation (10) results in the relationship for the transformation from the band-pass filter to the low-pass prototype filter

g = \frac{\omega_0\, FBW\, C}{\Omega_C}   (11)

Using the above described process and considering (11), we can calculate the frequency response of the reflection coefficient of the low-pass prototype filter in MATLAB (see Fig. 5).

Fig. 5. Frequency response of the reflection coefficient of the low-pass prototype filter.

V. MEASUREMENT

The three-element filtering antenna array fed by apertures was verified and measured. Figure 6 shows a comparison of the simulated frequency response of the reflection coefficient (blue line) and the measured results of the three-element filtering antenna array fed by apertures (red line). As can be seen, the simulated and measured results are in good agreement.
39 Studentská konference Zvůle 2014 Fig. 6. Frequency response of the simulated reflection coefficient (blue line) and the measured one (red line). Fig. 7 shows a confrontation of the simulated results of the frequency response of the realized gain (blue line) and measured results of the three–element filtering antenna array fed by apertures (red line). As can be seen, simulated results and measured one are in good agreement. Fig. 7. Frequency response of the simulated realized gain (blue line) and the measured one (red line). VI. CONCLUSIONS In the paper, we analyzed the three-element filtering patch array using the full-wave solver of CST Microwave Studio. The antenna shows properties of a filtering antenna (filtenna) without an explicit filter. The frequency bandwidth of the antenna (S11 < 10 dB) is 3 %, approximately. Reflection coefficient at the input of the structure is lower than –22 dB at the frequency 5.64 GHz and the maximal realized gain equals to 10.7 dB. The shift of the main lobe in the working band is from –5° to 0°. The frequency response of the realized gain is similar to the output signal of the band-pass filter. We therefore represented the patch array by the equivalent band-pass filter with the same center frequency and bandwidth 7.4 % (3 dB decrease of transmission) and the selectivity 39 dB/GHz. The suppression in the stop band is better than –20 dB. The equivalent circuit of the structure was verified in the circuit simulator of ANSYS Designer and the script in MATLAB. Results are in good agreement with the full-wave model. The equivalent circuit was transformed to the low-pass prototype filter. This procedure can create a basis for the design of filtering antenna array using the conventional filter-synthesis approach. In the next step, we will develop the design procedure for the filtering antenna array (filtenna) using the low-pass prototype filter including the calculation of the frequency response of the realized gain from the equivalent-circuit model. REFERENCES [1] J. Verdu, J. Perruisseau-Carrier, C. Collado, J. Mateu and A. Hueltes. “Microstrip patch antenna integration on a bandpass filter topology.” In proc. 12th Mediterranean Microwave Symposium (MMS2012), no. EPFL-CONF-179874. 2012. [2] W. J. Wu, Y. Z. Yin, S. L. Zuo, Z. Y. Zhang and J. J. Xie. “A New Compact Filter-Antenna for Modern Wireless Communication Systems,” IEEE Antennas and Wireless Propagation Letters, vol. 10., pp. 1131–1134, 2011. [3] S. Yu, W. Hong, Ch. Yu, H. Tang, J. Chen and Z. Kuai. “Integrated Millimeter wave filtenna for q-linkpan Application,” 2012 6th European Conference on Antennas and Propagation (EUCAP). [4] Ch. K. Lin and S. J. Chung, “A Filtering Microstrip Antenna Array”, IEEE Transactions on Microwave Theory and Techniques, vol. 59, no. 11,, pp. 2856 – 2863, november 2011. [5] J. S. Hong and M. Lancaster. “Microstrip filters for RF/microwave applications”, New York: John Wiley, 2001. ISBN 04-713-8877-7. [6] D. M Pozar. “Microwave engineering”, 3rd ed. Hoboken: John Wiley, 2005. ISBN 978-0-471-44878-5. 
USRP Setup for Energy Detection-Based Cooperative Spectrum Sensing for Cognitive Radio Networks

Demian Lekomtcev
Department of Radio Electronics
Brno University of Technology
Brno, Czech Republic
xlekom00@stud.feec.vutbr.cz

Roman Marsalek
Department of Radio Electronics
Brno University of Technology
Brno, Czech Republic
marsaler@feec.vutbr.cz

Abstract—The cognitive radio technology allows solving one of the main issues of current wireless communication technologies, namely the deficit of vacant spectrum. The dynamic spectrum access used in cognitive radio networks (CRN) gives the ability to access unused spectrum in real time. Cooperative spectrum sensing is the most effective method for detecting spectrum holes; it combines the sensing information of multiple cognitive radio users. The aim of this paper is to present experiments with cooperative spectrum sensing making use of two USRP2 software defined radio devices. In this work we present a setup for the evaluation of cooperative spectrum sensing using the Universal Software Radio Peripheral (USRP) devices synchronized through a MIMO cable, with further processing in the GNU Radio and Matlab software. The paper presents the first results of the evaluation of several cooperative schemes on IEEE 802.22-like signals gathered with this setup.

Keywords—Energy Detection; Fusion Rules; Receiver Operating Characteristic (ROC); GNU Radio; USRP; MIMO.

I. INTRODUCTION

In 2014, the IEEE Standards Association announced the creation of the "IEEE forms study group to explore standardization for spectrum occupancy sensing technology" [1]. The main objective of this working group is to optimize the use of the radio spectrum for wireless broadband services within the IEEE 802.22 standard. That is why, despite the fact that there are many investigations aimed at studying spectrum sensing, this question is still open. In our work, we focus on three hard decision methods for cooperative sensing because these techniques are effective, simple and cheap to implement.

The aim of this paper is to present the use of an experimental workplace consisting of synchronized Universal Software Radio Peripheral (USRP) devices for the evaluation of cooperative spectrum sensing techniques and to demonstrate its capabilities on the selected OR, AND and MAJORITY fusion rules for sensing of IEEE 802.22-like signals. The performance of these fusion rules is evaluated on the example of sensing of a real digital TV transmission. The data are processed in the Matlab and GNU Radio software environments.

The rest of this paper is organized as follows. In Section 2, we introduce the theoretical model for energy detection in CRN. Section 3 presents three hard fusion rules for cooperative spectrum sensing. In Section 4, we provide the description of the experimental setup. Section 5 presents the first results and Section 6 concludes the paper.

II. SPECTRUM SENSING USING THE ENERGY DETECTION

The detection of a signal within a noisy measurement over a specific frequency band is the key problem associated with spectrum sensing. The solution of this problem is to decide between two hypotheses:

r(t) = \begin{cases} n(t), & H_0 \\ s(t) + n(t), & H_1 \end{cases}   (1)

where r(t) is the received signal at the SU, s(t) represents the transmitted signal of the PU observed at the SU, and n(t) is the additive white Gaussian noise (AWGN). In this paper a band of the spectrum ∆f (470–478 MHz, the twenty-first TV channel) is observed.
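As a small illustration of the signal model in (1), the following hedged Python/NumPy sketch generates noise-only (H0) and signal-plus-noise (H1) segments; modelling the PU signal as a Gaussian process of variance δs² is an assumption made only for this example.

# Sketch of eq. (1): segments of AWGN only (H0) and of a PU signal in AWGN (H1).
import numpy as np

rng = np.random.default_rng(0)
N = 50                                      # samples per segment (Nsamples)
snr_db = 0.0
delta_n2 = 1.0                              # noise variance
delta_s2 = delta_n2 * 10 ** (snr_db / 10)   # from SNR = delta_s^2 / delta_n^2

def segment(h1):
    n = rng.normal(0.0, np.sqrt(delta_n2), N)             # n(t)
    if not h1:
        return n                                          # H0: r(t) = n(t)
    s = rng.normal(0.0, np.sqrt(delta_s2), N)             # s(t) observed at the SU
    return s + n                                          # H1: r(t) = s(t) + n(t)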
The spectrum sensing is performed as follows. First, fifty thousand samples of the received signal from the investigated TV channel are received by each SU. Then this selected part of the signal is divided into segments of fifty elements (Nsamples), whereby a time-segmented signal is obtained. Subsequently, the signal energy of each segment is calculated. For our energy detector, the calculations are based on the Neyman-Pearson criterion, namely:

Y = \frac{1}{N}\sum_{n=1}^{N}\big(r(t)\big)^2   (2)

where Y is the output of the energy detector which serves as the test statistic. To make a decision about the presence or absence of a signal, we introduce γ as the threshold, which varies depending on the noise variance δn². The probability of false alarm (Pfa) and the detection probability (Pd) provide a way to characterize the performance of an energy detector, as given below [2].

P_{fa} = P(Y > \gamma \mid H_0) = Q\!\left(\frac{\gamma - \delta_n^2}{\delta_n^2\sqrt{2/N}}\right)   (3)

P_{d} = P(Y > \gamma \mid H_1) = Q\!\left(\frac{\gamma - \delta_n^2(SNR + 1)}{\delta_n^2\sqrt{2(2\,SNR + 1)/N}}\right)   (4)

where Q(x) is the complementary distribution function of the standard Gaussian, given as

Q(x) = \frac{1}{\sqrt{2\pi}}\int_x^{\infty} e^{-u^2/2}\,\mathrm{d}u   (5)

SNR is the signal-to-noise ratio, which can be defined in terms of the signal and noise variances (δs² and δn²) as

SNR = \frac{\delta_s^2}{\delta_n^2}   (6)

Therefore, for a constant false alarm rate (Pfa) the threshold is given by [2]

\gamma = \delta_n^2\left(Q^{-1}(P_{fa})\sqrt{2/N} + 1\right)   (7)

Values of Pd are calculated using the following equation:

P_d = \frac{\text{Number of segments which have } Y > \gamma}{\text{Number of observed segments}}   (8)

According to the equations presented above, each SU calculates the values of Pd and Pfa. These calculations are then used for the different fusion rules, which are discussed in the next section.

III. DECISION FUSION RULES

To improve signal detection in the CRNs, cooperative spectrum sensing is used. In this instance there are M SUs that sense the PU. Each of them makes its own decision regarding the presence or absence of the PU, and forwards the binary decision (1 or 0) to the FC for further processing. The SUs are located negligibly far from each other compared to the distance from them to the PU; thus the primary signal received by all the SUs has the same local mean signal power. For simplicity we have assumed that the noise, fading statistics and average SNR are the same for each SU, and that the channels between the SUs and the FC are ideal (i.e. there is no loss of information). A final decision on the presence of the PU made by k out of M SUs can be described by a binomial distribution based on Bernoulli trials, where each trial represents the decision process of one SU. The generalized formulas for the probabilities of false alarm and detection at the fusion center are given by [3], [4]:

P_{fa} = \sum_{l=k}^{M}\binom{M}{l} P_{fa,i}^{\,l}\,(1 - P_{fa,i})^{M-l}   (9)

P_{d} = \sum_{l=k}^{M}\binom{M}{l} P_{d,i}^{\,l}\,(1 - P_{d,i})^{M-l}   (10)

where [4]

\binom{M}{l} = \frac{M!}{l!\,(M-l)!}   (11)

Pd,i and Pfa,i are the probabilities of detection and false alarm, respectively, for each SU as defined by (3) and (4). In this paper three rules of the hard combination scheme are discussed.

A. Logical OR-Rule

In this rule, a decision that the PU is present is made if any of the SUs detects the PU. The cooperative probabilities of false alarm and detection using the OR fusion rule can be evaluated by setting k = 1 in eqs. (9) and (10):

P_{fa} = 1 - \prod_{l=1}^{M}\big(1 - P_{fa,i}\big)   (12)

P_{d} = 1 - \prod_{l=1}^{M}\big(1 - P_{d,i}\big)   (13)
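A minimal sketch of the per-SU processing given by (2), (7) and (8) follows, assuming the threshold expression reconstructed above. It reuses the segment() helper from the previous sketch; SciPy's inverse survival function of the standard normal distribution plays the role of Q⁻¹.

# Sketch of the energy detector: test statistic (2), threshold (7), empirical Pd (8).
import numpy as np
from scipy.stats import norm

def energy(r):
    return np.mean(r ** 2)                                # eq. (2): test statistic Y

def threshold(p_fa, delta_n2, N):
    # eq. (7): gamma = delta_n^2 * (Q^-1(P_fa) * sqrt(2/N) + 1)
    return delta_n2 * (norm.isf(p_fa) * np.sqrt(2.0 / N) + 1.0)

def empirical_pd(segments, gamma):
    # eq. (8): fraction of observed segments whose energy exceeds the threshold
    return float(np.mean([energy(r) > gamma for r in segments]))

# Example (values hypothetical): 1000 H1 segments at the settings sketched above
# segs = [segment(True) for _ in range(1000)]
# print(empirical_pd(segs, threshold(0.1, delta_n2, N)))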
B. Logical AND-Rule

In this rule, a decision that the PU is present is made if all SUs have detected the PU. The cooperative probabilities of false alarm and detection using the AND fusion rule can be evaluated by setting k = M in eqs. (9) and (10):

P_{fa} = \prod_{l=1}^{M} P_{fa,i}   (14)

P_{d} = \prod_{l=1}^{M} P_{d,i}   (15)

C. Logical MAJORITY-Rule

In this rule, a decision that the PU is present is made if half or more of the SUs detect the PU. The cooperative probabilities of false alarm and detection using the MAJORITY fusion rule can be evaluated by setting k = ⌈M/2⌉ in eqs. (9) and (10):

P_{fa} = \sum_{l=\lceil M/2\rceil}^{M}\binom{M}{l} P_{fa,i}^{\,l}\,(1 - P_{fa,i})^{M-l}   (16)

P_{d} = \sum_{l=\lceil M/2\rceil}^{M}\binom{M}{l} P_{d,i}^{\,l}\,(1 - P_{d,i})^{M-l}   (17)

IV. USRP SETUP DESCRIPTION

Our measurement setup is presented in Fig. 1.

Fig. 1. Experimental setup with the signal of real commercial TV transmission.

It consists of one personal computer (PC), which is connected to a USRP2 (carrying out the SU1 role) through a Gigabit Ethernet port. For correct operation of the cooperative scheme, precise synchronization between the SUs must be realized. Ettus Research provides several convenient solutions for synchronization. For example, two USRPs can be synchronized using a MIMO cable. It is also possible to synchronize more than two units using the Ettus Research OctoClock. Optional GPS-disciplined oscillators provide the capability to synchronize devices to the GPS standard over a large geographic area [5]. In this setup, the second USRP2 (which carries out the SU2 role) is connected to the first USRP through the MIMO cable, which builds an easily synchronized 2X2 system. In this experiment the SBX daughterboard is used for both USRPs. Each SU receives the signal from the ether by its own antenna, placed as shown in Fig. 2. One SU had its antenna behind the window, connected through an amplifier to enhance the received signal; the other had it on the laboratory table, and the distance between them is about 4 meters. With such a scenario, two received-signal replicas are created. The PC is running Fedora 16; the signal processing is done using GNU Radio version 3.7.2.1, which is open source software. Each of the SUs sensed this channel. Using GNU Radio, the sensing results are recorded into data files for subsequent processing. After that, based on the recorded data files, the values of Pd and Pfa are calculated in Matlab for each SU.

Fig. 2. Experimental setup location on the floor plan with the signal of real commercial TV transmission.
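Before turning to the results, the following short sketch shows how the k-out-of-M combining of (9) to (11) yields the OR (k = 1), AND (k = M) and MAJORITY (k = ⌈M/2⌉) rules, assuming identical and independent per-SU probabilities as in Section III.

# Sketch of the hard-decision fusion rules of Section III.
from math import ceil, comb

def k_out_of_m(p_i, M, k):
    # eqs. (9)-(11): cooperative probability when each SU reports p_i independently
    return sum(comb(M, l) * p_i ** l * (1 - p_i) ** (M - l) for l in range(k, M + 1))

def fuse(p_i, M, rule):
    k = {"OR": 1, "AND": M, "MAJORITY": ceil(M / 2)}[rule]
    return k_out_of_m(p_i, M, k)

# Example with two SUs: OR combining of identical per-SU detection probabilities
# fuse(0.8, 2, "OR") -> 1 - (1 - 0.8)**2 = 0.96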
This is due to the fact that for OR rule FC decides the PU is present when at least one SU detects it. While all SUs and half or more SUs must detect PU signal for AND and MAJORITY rule respectively. Fig. 3. ROC for hard fusion rules under real TV channel. As is known, for the IEEE 802.22 standard Pd should be 90% or higher at Pfa = 0.1 [7]. In our experiment, OR rule meets these requirements under other equal conditions for all hard decision rules (SNR, number of SUs, Nsamples). Thus, it's appropriate to use OR rule in our further investigations for cooperative spectrum sensing. VI. CONCLUSION In this paper a measurement setup for the evaluation of the performance of the cooperative spectrum sensing using the commercial software defined radios USRP 2, synchronized through the MIMO cable is presented. Using this setup, the comparison of hard fusion rules for cooperative sensing is presented. It is shown that OR rule outperform AND or MAJORITY rules, but at high Pd values this difference becomes lower. In our future work, we will use the investigated setup for wireless cooperative sensing in a city area on complete available TV channels. Different reception schemes like a 4X4, 8X8 MIMO or a solution with a GPS disciplined oscillators will be investigated and compared. Such the extended setup would be in the near future also used for the in-depth comparison of various spectrum sensing techniques, e.g. cyclostacionary or cyclic prefix correlation-based methods with the simple energy detector in the real scenarios of IEEE 802.22 signal detection. 43 Studentská konference Zvůle 2014 ACKNOWLEDGMENT The research published in this submission was financially supported by the Brno University of Technology Internal Grant Agency under project no. FEKT-S-14-2177 (PEKOS). The described research was performed in laboratories supported by the SIX project; the registration number CZ.1.05/2.1.00/03.0072, the operational program Research and Development for Innovation. The participation in the collaborative COST IC1004 action was made possible through the MEYS of the Czech Republic project LD12006 (CEEC). REFERENCES [1] IEEE 802.22 Working Group on Wireless Regional Area Networks. Enabling Rural Broadband Wireless Access Using Cognitive Radio Technology in TV Whitespaces. Recipient of the IEEE SA Emerging Technology Award. [Online] Available: http://grouper.ieee.org/groups/802/22/ (May 31, 2014). [2] P.R. Nair, A.P. Vinod and A.K. Krishna, “An adaptive threshold based energy detector for spectrum sensing in cognitive radios at low SNR,” IEEE International Conference on Communication Systems (ICCS), pp. 574 - 578, 2010. [3] Yang, X. and Fei, H., Cognitive Radio Networks, 2nd Ed. Taylor & Francis Group, LLC, 2009. [4] D. Ruilong, et al., “Energy-Efficient Cooperative Spectrum Sensing by Optimal Scheduling in Sensor-Aided Cognitive Radio Networks,” IEEE Transactions on Vehicular Technology, vol. 61, pp. 716 - 725, 2012. [5] Application Note Synchronization and MIMO Capability with USRP Devices Ettus Research. [Online] Available: http://www.ettus.com/content/files/kb/mimo_and_sync_with_usrp.pdf (May 31, 2014). [6] D., Teguig, et al., “Data fusion schemes for cooperative spectrum sensing in cognitive radio networks,” Military Communications and Information Systems Conference (MCC), pp. 1 - 7, 2012. 
[7] "IEEE Std 802.22™-2011 IEEE Standard for Information Technology Telecommunications and information exchange between systems Wireless Regional Area Networks (WRAN) - Specific requirements Part 22: Cognitive Wireless RAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications: Policies and Procedures for Operation in the TV Bands", 2011. 44 Studentská konference Zvůle 2014 Surveillance Face Recognition: Challenges and Solutions Tobias Malach EBIS, spol. s r.o. and Department of Radioelectronics, Brno University of Technology Email: tmalach@phd.feec.vutbr.cz Abstract—This paper focuses on face recognition in surveillance camera systems. Face recognition basics are explained at the beginning of the paper. Challenging problems of face recognition such as face pose or face misalignment are outlined. A promising solutions to contemporary problems are mentioned. The rest of the paper is focused on one of promising solutions - face template creation. Several methods for face template creation are outlined. The paper concludes with finding, that face template creation methods are worth researching, as an appropriate template creation method can enhance face recognition performance. Keywords—Face Recognition; Surveillance Camera Systems; Face Template Creation; Face Pose; Illumination. I. INTRODUCTION Machine and computer vision has become an important tool for automation, robotics and many other disciplines. Machine or computer vision common tasks are for example object detection, object inspection in details of which a human is not able or a general image description. Image acquisition devices and processing algorithms are being gradually improved to a level when machines are able of reliable operation in some applications. This suggests that computer or machine vision may be utilized for other, more challenging tasks. An example of such a task is face recognition. Recognizing humans according to their faces has a broad range of use in various applications such as computer logging on; human identity verification in banks; at cash dispensers; or in security and surveillance applications in casinos or facilities with limited access and many others. Face recognition has shown the potential to become an effective approach to human identification and verification. However, there are several issues to be improved in order to maintain reliable face recognition system operation in real-world applications. This paper is focused on face recognition application in surveillance camera systems. The first section of the paper explains a general algorithm for face recognition. In the second section a challenging problems of surveillance face recognition are outlined. The following section describes some of contemporary promising solutions. Subsequently, one of promising solutions - face template creation is discussed. II. FACE RECOGNITION The face recognition is a task that should result in assigning a particular identifier to an unknown person e.g. a name or an ID. A human is able to visually recognize a known person in various poses and distances, according to body shape, face, other features or their combinations. Human recognition based on faces appears to be the best approach for machine vision. A. How Does it work? Face recognition is a complex task consisting of many sub tasks, which are natural for humans e.g. image preprocessing or face detection (face localization) in an image. These sub tasks have to be satisfactorily carried out to enable successful face recognition. 
The whole face recognition process consist of two stages (please see Fig. 1): • Training stage (please see training stage in Fig. 1). Face images of persons that are to be recognized are labeled according person’s identity. Face images are input to a training algorithm. These face images are processed, then face templates are computed and stored in a template database. These templates are used as a reference during the recognition stage. • The recognition stage (please see recognition stage in Fig. 1). The recognition stage begins with an image acquisition and continues with face detection (face localization) and face alignment. Face alignment determines the image area to be cropped and used for recognition. Faces are described and characterized by features. These features aim to extract as much information about the face as possible. Features are subsequently compared by a classifier with all face templates obtained during the previous training stage. The classifier determines which face template is the most similar to a features representing an unknown face. The unknown person is assigned the identity of the most similar face template. III. CHALLENGES IN FACE RECOGNITION The face recognition process has been described in the previous section. In this section a discussion on single recognition steps is given together with an outline of problematic aspects. An important aspect is the practical application of face recognition. Different applications enable acquisition of an image in a different quality, which requires different image preprocessing and different methods for recognition. An example of different applications could be face recognition at passport control and surveillance face recognition at an airport using surveillance cameras. It can be assumed, that face recognition becomes a challenging task when a face is captured in a natural environment with variable lightning conditions or 45 Studentská konference Zvůle 2014 Fig. 1. Block scheme of the face recognition process. Fig. 2. Several sample images from surveillance camera system database and the traditional LFW dataset [1]. when a face is captured in a various poses (non-frontal view to a camera) etc., such conditions are refered to as an uncontrolled environment. As this paper is focused on face recognition in surveillance camera systems, face recognition challenges in an uncontrolled environment will be outlined. Image quality. Despite contemporary high quality IP cameras the image quality may be very low. Capturing a moving person under low illumination results in blurred images or images severely affected by noise. Low image quality introduces severe problems to face recognition process. Face detection and alignment. Face images have to be aligned which maintains that features always represent the same face area. For rough estimation of face position a Viola-Jones detector is being used as it is fast, robust and reliable algorithm [5]. An exact face location is determined by various alignment methods. These methods may be based on the detection of face features such as eyes and nose etc. Face detection and alignment algorithms can determine a face position only with limited accuracy, so there is a certain amount of misaligned faces, which causes face recognition failure. Face pose, face expression. People are captured by camera system in a natural position and with natural expression. In surveillance face recognition there is an extra issue. Surveillance cameras are usually mounted at a height of approx. 
three meters above the ground; faces are thus captured from above and a person's direct view to the camera is very rare. The face pose is considered to be the most challenging problem of surveillance face recognition. In general face recognition applications, face pose is an issue too, but not to the extent seen in surveillance face recognition (as can be seen in Fig. 2). The major challenges of face recognition are: low image quality; issues of face detection; and problems introduced by face pose. The difference between a general face recognition application and surveillance face recognition may be roughly expressed by the performance of face recognition. There are two graphs in Fig. 3 and Fig. 4. They depict ROC (Receiver Operating Characteristic) curves, which describe the performance of different face recognition systems tested on the LFW dataset (Fig. 3) and on a surveillance camera dataset (Fig. 4). The ROC curve expresses the dependency of correct recognition (true positive rate or correct classification rate) on false classification of a person without a face template to a template of another person (1). For details on ROC curves please see [6]. Fig. 3. ROC curves of a state-of-the-art algorithm on the LFW dataset [1]. Fig. 4. ROC curves of algorithms on the surveillance camera dataset. The results obtained by tests on the traditional LFW dataset in Fig. 3 are better compared to the results obtained by tests conducted on the surveillance camera dataset depicted in Fig. 4. The performance difference between tests conducted on the LFW dataset and the surveillance camera dataset roughly expresses the influence of the uncontrolled - natural - environment on face recognition. IV. CURRENT SOLUTIONS AND RESEARCH ORIENTATION A discussion of challenging problems in the field of face recognition is given in the previous section. This section outlines possible solutions and contemporary research orientations. (1) Tests of face recognition are conducted on two sets. One set contains images/video sequences of persons who have face templates; tests on this set produce the number of correct recognitions. The second set contains images/video sequences of persons who do not have face templates - impostors; tests on this set produce the number of persons without a template falsely classified as persons with a template. Fig. 5. Marked face landmarks that can be detected by the flandmark detector [9]. Face alignment. Face alignment becomes a challenging and indefinite problem when a person's view is diverted from the camera. A tool detecting several significant face landmarks [9] has been developed for face alignment (see Fig. 5). Based on the positions of these landmarks, the face pose can be estimated, which supports correct face alignment. However, this approach does not solve the issue of hidden or displaced face parts when the face is diverted. Face description and characterization. Features are used for face description. Features are of various natures and properties. A huge effort has been made to find optimal features with the following properties: high discrimination power; high robustness; low computational complexity; and low storage demands. State-of-the-art features gradually approach these optimal properties. From the surveillance face recognition point of view, the robustness (i.e. illumination, face pose and face expression independence) of features remains an unsolved issue. Face template creation. As can be seen in Fig.
1, face templates serve as references for every individual. If the reference (face template) is appropriately determined, the negative influence of face pose and expression, illumination and face alignment may be reduced. The face template creation methods are described further in this paper. V. TEMPLATE CREATION The process of template creation has been proven to influence face recognition performance [2]. Even though face template creation may contribute to increasing face recognition performance, it has received little attention so far. For both of these reasons we thoroughly investigate the issue of template creation. Traditional approaches to template creation are two simple methods. The first one is based on a simple extraction of features from the training database. These features are used directly as templates, i.e. if the training algorithm is provided with n training images of one person, then n face templates of the given person are produced. The first approach thus produces exactly the same number of face templates as the number of training images; it actually leaves out any face processing. The second method, proposed in [3], uses the K-Means algorithm (please see [8]), which produces a selected number of face templates. These templates are computed based on features extracted from the training images. Both approaches may produce several face templates per person. These face templates have to be stored in a template database. The number of persons in a database is high in the case of surveillance face recognition systems. If several face templates per person are stored, a long classification time is needed. Classification time is the time needed for the comparison of all face templates with the features extracted from a single face image of an unknown person. Surveillance recognition has to operate in near real time. In order to keep the classification time within acceptable bounds, we search for appropriate face template creation methods producing one representative face template per person. The very first method was the Centroid method [2]. The Centroid method is based on the idea that the features representing training images of one individual may be represented by one point in space - the Centroid. The ultimate face template is computed as the mean of one individual's feature vectors. The face template Ti,j is determined as follows: Ti,j = (1/ni) Σn Fi,j,n, (1) where Fi,j,n is the value of the nth feature vector of the ith individual in the jth dimension, and ni is the total number of feature vectors for the ith individual. Though the Centroid method is a simple approach to face template creation, it has been proven to perform very well. We have proposed several more sophisticated methods in order to outperform the Centroid method; however, they failed [4]. The following methods failed to outperform the Centroid: 1) A method based on fitting a Gaussian Mixture Model on the histogram of one feature [4]. The template was computed as the sum of the components' mean values weighted by the components' significance. 2) A method that computed the template as a weighted centroid [4]. Initially, a centroid template is computed according to Eq. 1. Distances between the centroid template and the feature vectors are computed, and the feature vectors are assigned weights according to these distances: the bigger the distance, the smaller the weight, and vice versa.
The ultimate template Twi,j is computed according to the following equation: Twi,j = Ti,j + wi,n(Fi,j,n − Ti,j), (2) where Ti,j is the centroid template, for individual ith individual in the jth dimension, wi,n is weight of nth feature vector and Fi,j,n is a nth feature value of ith individual in jth dimension. 3) A method that modeled characteristics of features by fitting various probability density functions on feature’s histogram. The template was determined as a mean of fitted probability density function. Although several fails, a novel method, which computes template as a quantile of a probability density function fitted on a feature’s histogram, appears to outperform the Centroid method. A performance improvement reached by using of templates determined as probability density function quantiles is approximately 3-5% of correct classification rate. This fact encourages our effort in further investigation of template creation methods. VI. CONCLUSION This paper deals with face recognition deployment to surveillance camera systems. At the beginning, basic face recognition principles are described. The purpose of training and recognition stages is explained. The application of face recognition in camera systems introduces several problems such as low image quality, diverted face views from a camera, illumination issues and imperfect face detection and alignment. State of the art approaches reducing negative influence of real world operating conditions are outlined. Among possible solutions reducing negative influence of a face pose, illumination, and a face alignment belong creation of representative and general face templates. Our previous research has shown that differently created face templates influence the face recognition performance. Therefore a research at this field is being carried out. The paper briefly describes five methods for face template creation. The Centroid method has been presented as a very first method but it has proved itself to perform very well. Several methods has been proposed in order to outperform the Centroid method but they failed. Finally, a method using quantile of fitted probability density function to a feature’s histogram appears to outperform the Centroid method. The improvement is approximately 3-5%. Such an success suggests further investigation of face template creation methods. ACKNOWLEDGMENT This paper was supported by the Ministry of Industry and Trade of the Czech Republic under grant IVECS, No. FR-TI3/170 and supported by the SIX project CZ.1.05/2.1.00/03.0072, the operational program Research and Development for Innovation. REFERENCES [1] Gary B. Huang and Manu Ramesh and Tamara Berg and Erik LearnedMiller”Labeled Faces in the Wild: A Database for Studying Face Recognition in Unconstrained Environments”, University of Massachusetts, Amherst, Technical Report 07-49, 2007. [2] Tobias Malach and Jiri Prinosil”Face templates creation surveillance face recognition system”, in Proceedings of the 3rd International Conference on Pattern Recognition Applications and Methods. Portugal: SCITEPRESS, 2014, s. 724-729. ISBN 978-989-758-018-5. [3] Johannes Stallkamp and Hazim Kemal Ekenel and Rainer Stiefelhagen”Video-based Face Recognition on Real-World Data”, In IEEE 11th International Conference on Computer Vision. Rio de Janeiro, 14th to 21st October 2007. pp: 1-8. [4] Tobias Malach and Jitka Pomenkova”Face Template Creation: Is Centroid Method a suitable approach?”, In Proceedings of 24th International Conference Radioelektronika 2014. 
Bratislava, Slovak Republic, 15th to 16st April 2014. pp: 105-108. [5] Paul Viola and Michael Jones”Robust Real-Time Object Detection”, International Journal of Computer Vision. 2001. [6] Tom Fawcett”An introduction to ROC analysis”, In Pattern Recognition Letters 27. 2006. pp. 861874. [7] P. J. Phillips, H. Moon, P. J. Rauss, and S. Rizvi, ”The FERET evaluation methodology for face recognition algorithms”, In IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 22, No. 10, October 2000. [8] Konstantinos Koutroumbas and Sergios Theodoridis, ”Pattern Recognition”, Academic Press, 2008. [9] Michal Uricar, Vojtech Franc and Vaclav Hlavac, ”Detector of Facial Landmarks Learned by the Structured Output SVM”, In Proceedings of the 7th International Conference on Computer Vision Theory and Applications, 2012, pp. 547-556. 48 Studentská konference Zvůle 2014 A New Software Tool for Physical Protection System Effectiveness Evaluation Tereza Malachová EBIS, spol. s r.o. Brno, Czech Republic Abstract—The paper describes a new tool HUSFO for Physical Protection System effectiveness evaluation. The paper also summarizes approaches and metrics in Physical Protection employed in the nuclear sector that are generally applicable in critical infrastructure high-risk facilities. The main contribution of this paper and the R&D project - that develops the tool HUSFO - is to significantly enhance and refine the effectiveness evaluation process. Keywords—Physical Protection System; Modelling; Effectiveness, Security. I. INTRODUCTION The events of 9/11 and other terrorist attacks on civil targets changed the world in many ways. Security of critical infrastructure has been one of the key areas that came to the fore. The need to protect critical infrastructure facilities against malevolent acts is ever-increasing. The paper describes current approaches and metrics in Physical Protection Systems effectiveness modelling. Furthermore, it focuses on the development of a new SW tool HUSFO that makes it feasible to evaluate PPSs effectiveness. The tool has been initially designed for nuclear facilities, however it is generally applicable to high-risk critical infrastructure facilities. The approaches presented in this article are based on conservative attitudes from the nuclear industry, where the level of potential consequences may very high. II. PPS MODELLING The objective of PPS is to prevent an adversary from achieving an undesirable or unacceptable event. PPS integrates people, procedures (administrative and other measures) and equipment. The main PPS functions are detection, delay and response [3]. The ability of a PPS to withstand a possible attack and prevent an adversary from achieving their objectives is called Physical Protection System Effectiveness PE. It is defined as the product of the Probability of Interruption PI and the Probability of Neutralization PN of the adversary by response forces [1], [3]. The reasons to evaluate the PPS effectiveness are as follows:  verification that the design and the implementation fulfil requirements,  identification of possible deficiencies in the design or the implementation,  analysis of upgrades to improve system performance,  comparison of upgrade or design costs in relation to improvements in the system performance [3]. The system effectiveness should be evaluated regularly in order to analyse possible changes to physical protection and to update it to a current threat. 
The process of the design and the evaluation of a PPS includes several steps: the definition of PPS requirements, the design of the PPS and the overall evaluation of PPS effectiveness. The process is depicted in Fig. 2 in more detail [3]. A. The Process of Evaluating PPS Effectiveness Various analysis and modelling tools are used to evaluate the system effectiveness. These are based on the following approaches: Path Analysis, Multi-Path Analysis, Scenario Analysis, Neutralization Analysis, Insider Analysis, etc. [1], [2], [4]. The first step in the system effectiveness evaluation is to create a mathematical model that represents a PPS. The model consists of protective elements and is characterized by metrics that describe the ability to perform the functions of the PPS (detection, delay and response). The metrics' values for the models are acquired by performing physical tests or by eliciting expert opinion. The metrics for the PPS functions are described in Chapter 3 in more detail. Fig. 1. PPS components and its functions. The next step in the process is Path Analysis. A path describes in space and time the way an adversary follows (usually overcoming protective elements) to reach his target and complete his objective - theft or sabotage [1], [2]. Path Analysis includes determining the most vulnerable paths. The main result of a Path Analysis is the Probability of Interruption PI [2]. Scenarios are developed and analysed in the next step. Scenario Analysis is a PPS effectiveness evaluation technique based on adversary attack scenarios. The scenario is described by a detailed timeline of all events and the associated performance of protective elements during an attack - adversary actions, delays, detection of those actions, communication, assessment, response force actions, etc. The paths selected for the Scenario Analysis are those identified in the previous step with the lowest Probability of Interruption [1]. B. Risk Approach The main goal of an effective PPS is to lower the risk of an adversary attack to an acceptable level and prevent unacceptable consequences. Risk models are used to link PPS effectiveness to the consequences. The risk is defined as the likelihood of a defined set of consequences. The risk is the product R = P · C, (1) where P is the probability of consequence occurrence and C is the level of consequences. An important factor of the PPS evaluation is the probability of a successful attack on a facility (assuming such an attack occurs). The equation may therefore be expressed as follows [1]: R = PA · P(S|A) · C, (2) where PA is the probability of an attack occurrence and P(S|A) is the conditional probability of a successful attack (assuming the attack occurs). The probability that an attack will be successful decreases as the PPS effectiveness increases. The conditional probability that the attack will be successful, assuming the attack occurs, might be expressed as follows [1]: P(S|A) = 1 - PE, (3) where PE is the PPS effectiveness. Equation 2 might then be expressed as follows [1], [2]: R = PA · (1 - PE) · C. (4) The probability that malevolent acts may occur at facilities with high potential consequences - such as nuclear - cannot rely on statistical data due to the lack of such data and the difficulties of predicting human behaviour. The conservative approach assumes that the facility will at some time be subject to an attack (it is only a matter of time), therefore PA = 1.
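The risk model of Equations 1-4 reduces to a few arithmetic operations. The following minimal Python sketch only evaluates these relations for illustrative, made-up probabilities (it does not use data from any real facility); PE is computed as the product PI · PN introduced in Section II.

def pps_effectiveness(p_interruption, p_neutralization):
    # PE = PI * PN
    return p_interruption * p_neutralization

def risk(p_attack, p_effectiveness, consequence):
    # R = PA * (1 - PE) * C, since P(S|A) = 1 - PE  (Eqs. 2-4)
    return p_attack * (1.0 - p_effectiveness) * consequence

# Illustrative values only.
p_e = pps_effectiveness(p_interruption=0.85, p_neutralization=0.90)   # 0.765
# Conservative assumptions PA = 1 and C = 1 reduce the risk to R = 1 - PE.
print(risk(p_attack=1.0, p_effectiveness=p_e, consequence=1.0))       # 0.235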
According to the conservative approach, which is accepted in the nuclear sector, for example, a possible successful attack is likely to result in a high consequence, therefore C = 1. In reference to these assumptions, Equation 4 will be expressed as follows [1]: R = 1 - PE. (5) The higher PE is, the lower the risk. The PPS effectiveness can be expressed as the product of the Probability of Interruption and the Probability of Neutralization [1]: PE = PI · PN, (6) where PI is the Probability of Interruption, i.e. the probability that a response force will be deployed at the right place in time to interrupt the adversary's actions, and PN is the probability that a response force will gain complete physical control of the adversary. III. METRICS OF PPS FUNCTIONS AND PROTECTIVE ELEMENTS IN THE PPS MODEL A PPS consists of protective elements whose main function is to detect and delay an adversary. Fig. 2. PPS design and evaluation process. Protective elements are described by metrics that characterize the PPS model. The quantitative expression of PPS effectiveness (for a specific threat and the metrics values of the protective elements) is the value of PE. The key metrics of a PPS (used by the most frequently used model - SAVI, for example) are described below [5]. A. Detection The Probability of Detection PD is the metric of detection. PD describes the probability of detecting an adversary's action (covert or overt) [1], [2]. A sensor (e.g. a Passive Infrared Sensor) detects abnormal activity and initiates an alarm. The alarm information is then transferred to the central alarm station and alarm assessment is performed. An alarm without an assessment cannot be considered as a detection [1]. The Probability of Detection can therefore be expressed as PD = f(PS, TC, NAR, PAS), (7) where PS is the Probability of Sensing, TC is the Time for Communication and Assessment, NAR is the Nuisance Alarm Ratio and PAS is the Probability of Assessment. B. Delay The main goal of delay is to slow down an adversary on his way to the target. Delay can be effected by mechanical barriers, personnel, activated delays, etc. The metric for the delay is the time needed by an adversary to defeat a protective element. Any delay before detection is ineffectual from the point of view of PPS effectiveness evaluation [1], [2]. C. Response Response is defined (from the point of view of physical protection) as an action of response forces that leads to attack interruption and adversary neutralization. Neutralization is defined as an action by the response force that kills or captures an adversary or makes the adversary abandon the attack and flee. Response includes the following activities: communication of an alarm, preparation and deployment of forces, attack interruption and neutralization of an adversary [1]. To interrupt an attack, response forces need to be fast enough to deploy before the Critical Detection Point (CDP). The CDP is a point on the adversary's path to the target where there is still sufficient time for the response force to interrupt the adversary. Beyond this point, the response force cannot prevent the adversary from reaching the target. To neutralize an adversary, sufficient strength of the response force (e.g. number of response force members, weapons, training, etc.) is required to be deployed to an appropriate location.
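The timely-detection idea behind the CDP can be illustrated with a deliberately simplified, deterministic sketch: given the delay that each protective element imposes on the adversary (ordered from the site boundary towards the target) and the Response Force Time, the CDP is the last point at which detection still leaves the response force enough time to interrupt. The numbers below are invented for illustration; real tools such as EASI or SAVI treat detection and delay probabilistically rather than with this simple check.

def critical_detection_point(delays, rft):
    # delays[i] = time the adversary needs to defeat element i;
    # at element i the remaining delay to the target is sum(delays[i:]).
    # Detection at element i is "timely" if that remaining delay >= RFT.
    remaining = sum(delays)
    cdp = None
    for i, d in enumerate(delays):
        if remaining >= rft:
            cdp = i          # detection here still allows interruption
        remaining -= d       # the adversary defeats this element and moves on
    return cdp

# Illustrative values: fence, door and vault delays in seconds, 150 s RFT.
print(critical_detection_point([60, 90, 120], rft=150))   # -> 1 (the door)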
The metric for response is the Response Force Time (RFT), which is given by the communication to and deployment of the response forces [1]: RFT = f(TC, TT, ..., TD), (8) where TC is the time of communication to the response force, TT is the time of deployment to the adversary's location and TD is the time to deploy. When modelling neutralization, the data describing the threat and the response force ability are compared. These include skills, capabilities and training, equipment, strategies, transport and the number of adversaries or response force members. Various methods are used for model creation, including: table-top analysis, numerical calculations, Markov chains, expert opinion elicitation and Monte Carlo simulation. The tools currently used in the nuclear sector include AVERT or JCATS [1], [4]. Fig. 1. AVERT Tool. D. Current Tools for PPS Effectiveness Evaluation Modelling The first work on PPS effectiveness evaluation was done in the Sandia National Laboratories (in the 1970s). A methodology was elaborated dealing with the design and the evaluation of PPS based on one-dimensional models, e.g. EASI, which is still a standard for single path analysis. Multipath evaluation was effected using SAVI, with its simplified version MP VEASI used for teaching purposes. The above-mentioned models are based on the principle of Timely Detection and the Probability of Attack Interruption and Neutralization [5]. Other currently-used models include the two-dimensional model SAPE developed in Korea, VEGA which is used in the Russian Federation, EVA developed in France, and AVERT developed in the USA, which utilizes the Monte Carlo method and a 3D modelling environment. More details about current methods can be found in the article Evaluation of Physical Protection System Effectiveness [4], [5]. The EASI and SAVI tools have been used in the Czech (Czechoslovak) nuclear facilities since the mid-1980s. These models and additional tools are still used these days, mainly due to the ability to compare the results. That is necessary in order
The input data for the analysis is of crucial importance, since the HUSFO tool utilizes its own input data acquired from the physical tests performed and expert opinion evaluation. The tool will make it feasible to depict a potential attacker's path in a 3D model. The tool respects all the current best practices and approaches in evaluating the effectiveness of the physical protection system. It provides added-value mainly in the graphical representation of results, particularizes the model (input data, metrics, threat capabilities, etc.), provides the analyst with additional extensive features and offers an up-todate user interface. Fig. 2. 3D hypothetical facility for PPS modelling. Fig. 3. An example of a 2D analysis in the tool HUSFO. V. CONCLUSION The main goal of the paper was to present an overall overview of PPS effectiveness evaluation modelling and to summarize current approaches used in the nuclear sector that are generally applicable on the mission-critical facilities of critical infrastructure. Physical protection of nuclear facilities is a very specific field where most of the information and data is classified. Therefore, various countries develop their own tools. Moreover, they have to create their own databases describing the characteristics of protective elements and adversary interaction with them. The data is the most valuable, costly and difficult-to-transfer part of the Physical Protection Effectiveness Evaluation. Within the Project mentioned Acknowledgement, several physical tests were conducted together with expert opinion elicitation to obtain data for modelling. ACKNOWLEDGMENT This paper was supported by the Ministry of Interior of the Czech Republic and EBIS, spol. s r.o. in the project “The Evaluation of Physical Protection System Effectiveness based on its Modelling”, VG 20112015039. REFERENCES [1] International Atomic Energy Agency, Regional Training Course on the Physical Protection of Nuclear Materials and Facilities. Beijing: International Atomic Energy Agency, 2010. [2] Mary Lynn Garcia, Vulnerability Assessment of Physical Protection Systems. Boston: Elsevier Butterworth-Heinemann, 2006. [3] Mary Lynn Garcia, The design and evaluation of physical protection systems. Boston: Elsevier Butterworth-Heinemann, 2008. [4] World Institute For Nuclear Security, Modelling and Simulation for Nuclear Security Planning and Assessment: A WINS International Best Practice Guide. Vienna: World Institute For Nuclear Security, 2011. [5] Z. Vintr, M. Vintr and J. Malach, "Evaluation of Physical Protection System Effectiveness," In: Proceedings - 46th Annual IEEE International Carnahan Conference on Security Technology. Piscataway: IEEE, 2012, pp. 15-21. T. Malachová, "Hodnocení účinnosti systémů fyzické ochrany ve vztahu k hrozbě," In: Sborník abstraktů mezinárodní konference Bezpečnostní management a společnost. Brno: Univerzita obrany, 2013 52 Studentská konference Zvůle 2014 Influence of M2M Communication on LTE Networks Pavel Masek Department of Telecommunications Brno University of Technology Brno, Technicka 3058/10 Email: xmasek12@phd.feec.vutbr.cz Jiri Hosek Department of Telecommunications Brno University of Technology Brno, Technicka 3058/10 Email: hosek@feec.vutbr.cz Marek Dubrava Department of Telecommunications Brno University of Technology Brno, Technicka 3058/10 Email: xdubra01@stud.feec.vutbr.cz Abstract—Machine-to-Machine (M2M) communication becomes in these days one of the most discussed research topics. 
This technology enables in comparison with traditional Humanto-Human (H2H) sending the data between individual nodes or between nodes and central node (gateway) without human interaction. Everyone realizes the big potential (technical, financial) of this research area. M2M should be used e.g. in these fields: Smart metering, Intelligent transport systems or E-healthcare. For telecommunication operators and their cellular mobile networks, M2M communication represents a big challenge. In most cases the M2M data will be transmitted over LTE mobile network. LTE network is designed and optimized as IP based architecture. In comparison with broadband type of applications (H2H), the M2M traffic presents a real challenge. It is a consequence of small packet size of the M2M data which is transmitted as irregular bursts by a large number of end stations. In this paper we focus on two most important questions for telecommunication operators. The first question is what is the maximum number of end stations which will be served by the base station in the LTE network without overloading of this base station. The second question is dealing with the maximum amount of data transmitted across the LTE network without overloading the base station. On the base of the obtained results from simulation made in OPNET Modeler, we present some improvements and recommendations to avoid or at least minimize overloading of the base station in LTE network. I. INTRODUCTION The ubiquitous wireless technologies are considered as an integral part of the current modern life-style. The number of mobile devices is still growing, therefore there is no wonder that full attention from the telecommunication operators is devoted to the research and development of wireless technologies in recent years. In these days the deployed 3G mobile networks are successfully replaced by 4G networks (LTE networks) with promising benefits for customers. The biggest challenge for telecommunication operators lies in a rapidly growing number of mobile devices accessing the current 3G cellular networks and causing overloading of these networks. In accordance with forecasts to year 2018 from many researches and companies [1], [2] there will be over 10 billion mobile-connected devices. The total mobile data traffic throughout the world will exceed 10 exabytes per month, with a compound annual growth rate of 61% from 2013 to 2018. In recent years, a major percentage of (mobile) data traffic have been generated by human controlled devices [1]. Nowadays, on the other hand, the Internet of Things (IoT) is a new paradigm for devices that are becoming connected to the Internet and are able to communicate with each other without human interaction. According to [3] we can assume that the IoT will extend the existing Internet with a various type of devices in different fields of communication as Smart metering, Intelligent transport systems or E-healthcare [4], see Fig 1. In consequence, the cellular broadband connectivity have reached the flat rates for end customers and ubiquitous over the last few years. We can assume that the Machine-to-Machine (M2M) devices will be widely deployed in the near future [5]. Networked industries Networked society Networked consumer electronics First wave Secondwave Third wave Fig. 1. The three waves of connected device development Furthermore, LTE is expected as a primary technology for providing M2M services [1]. 
In comparison with Human-to-Human (H2H) communication, M2M applications are expected to generate a diverse range of services, including narrowband applications which will transmit data infrequently [6]. The LTE network is primarily developed as an IP-based architecture for broadband types of applications; hence the narrowband M2M applications, with rather low data rates and small packet sizes, might have a considerable impact on these LTE networks. From the information given above it is clear that the proper integration of M2M services into LTE networks will be crucial for the telecommunication operators [7]. The most frequently asked questions are how many end stations should be handled by a base station in the LTE network and what is the maximum amount of transmitted data before an overload of the base station occurs [1]. In this paper we present simulations which focus on the identification of the critical network parameters: overloading of the base station via network traffic congestion or via the number of connected end stations, and traffic prioritization. Based on the obtained results, optimizations for solving the problems that M2M communication causes in today's LTE networks will be presented. II. LONG TERM EVOLUTION (LTE) Long Term Evolution (LTE) represents the next major step in mobile radio network communications. This new standard is introduced in 3rd Generation Partnership Project (3GPP) Release 8 and uses Orthogonal Frequency Division Multiplexing (OFDM) as a radio access technology together with the advanced antenna technology MIMO (Multiple-Input and Multiple-Output) at both the radio transmitter and the radio receiver to improve the communication performance. LTE is an IP-based flat network architecture (see Fig. 2) which was designed for transfer rates up to 100 Mb/s [8] and which evolved from the previous mobile networks [9] (1). Fig. 2. LTE network architecture (UE, eNodeB, S-GW, MME, PDN-GW, PCRF, HSS, IMS and other service domain entities). (1) The data rate depends on the MIMO technology used. The data rate is 100 Mb/s for 10 MHz with 2x2 MIMO; if 4x4 MIMO is used, the capacity would increase to 403.2 Mb/s. A detailed description of the entities mentioned above is given in [10]. Nowadays, the Service Domain (SD) entities, which include various sub-systems, are of great interest to the telecommunication operators. The SD in turn may contain several logical nodes which can be divided into two sections [11]: • IP Multimedia Subsystem (IMS) services, • Non-IMS services. Telecommunication operators try to use IMS, which is mandatory for the LTE network, and enable their customers to re-use "old" communication devices. The concept of interconnecting all home devices (simple PSTN (Public Switched Telephone Network) alarms; advanced alarm systems which send events in the form of SIP messages to the ARC (Alarm Receiving Centre) or directly to end users; IP-based services such as Smart meters) is known as Smart Home Gateways [12], [13]. All services are running on one device (SH-GW) which enables access to the Internet through the operator's network [12]. III. MACHINE-TO-MACHINE (M2M) Machine-to-Machine devices, or so-called machines, are devices with the ability to communicate with other devices without human interaction [14]. The value of the information from individual sensors rapidly increases when the devices (M2M devices) are networked. Then they are able to exchange information between themselves or send it to the central node (server or gateway).
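To illustrate how small a typical M2M payload is compared with H2H traffic, the sketch below packs a hypothetical sensor report (device identifier, timestamp and a temperature reading) into a few bytes of binary data, the kind of message such a device might push to its gateway; the field layout is invented purely for illustration.

import struct, time

def make_report(device_id, temperature_c):
    # Pack a hypothetical M2M report: 4-byte id, 4-byte timestamp, 2-byte temperature
    # in hundredths of a degree Celsius - only 10 bytes in total.
    return struct.pack("!IIh", device_id, int(time.time()), int(round(temperature_c * 100)))

msg = make_report(device_id=42, temperature_c=21.5)
print(len(msg), msg.hex())   # 10 <hex payload>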
M2M services are in majority cases based on Global System for Mobile communications / General Packet Radio Service (GSM/GPRS) or on the 802.11 standards which fulfil the requirements of these applications. The massive growth of M2M devices/communication is expected [1], therefore there is a need from the side of telecommunication operators to properly set up the LTE networks which will be the primary communication technology for M2M. In comparison with H2H communication, the M2M traffic follows some specific traffic patterns [14]: • Amount of data per transmitted packet: Transmitted packet is usually very small (several bytes). The amount of bytes refers to the nature of the generated data which contains data from sensors (temperature, humidity) or e.g. the status of the alarm system. • Large number of M2M messages: M2M devices are/will be deployed in transport vehicles, buildings, emergency and eventually in human body would transmit large number of messages. As mentioned above, the size of the message could be a few bits (value of actual temperature) or megabytes when using very specific scenarios such as video from alarm systems. As mentioned above, in most cases the value of M2M data will be small (several bytes). For getting more accurate point of view on this issue, we have to know what is the smallest possible unit in LTE. In LTE networks, the smallest unit which can be allocated to a UE is Physical Resource Block (PRB). Each resource block is of 180 kHz wide in the frequency domain and sub carrier spacing is ∆f= 15 kHz. A PRB consists of the 12 sub carries and each sub carries has 7 OFDM symbols per 0,5 ms slot [8]. Regarding the used modulation scheme [8], a PRB can transmit several kilobytes of data. In comparison with the minimal value of data (e.g. value of temperature), the spectral efficiency of the LTE network could decline severely. The transmitted data in LTE network could have also different requirements for delivering (delay, jitter, throughput) so the Quality of Service (QoS) is important topic when the M2M messages are transmitted through the LTE network. IV. SIMULATION PARAMETERS AND RESULTS Based on the M2M requirements for the LTE network the simulation model which should be able to perform the evaluation of the performance of base station (eNodeB) was proposed. OPNET Modeler was chosen as a simulation environment. The simulation model is divided into two individual scenarios. The first scenario focuses on the maximum number of end stations which are served by the base station in the LTE network without overloading of this base station. The second scenario deals with the question what is the maximum amount of data transmitted across the LTE network without overloading the base station. A. Overloading of the eNodeB due to the amount of transmitted data in LTE network This simulation was performed under the settings displayed in Table I. From the beginning the simulation is carried out 54 Studentská konference Zvůle 2014 by considering 100 static end stations which are in range of eNodeB. In simulation time of 100 seconds these end stations start downloading the file (100 kB) from server. Next 100 mobile end stations are connected to the network in simulation time of 200 seconds and simultaneously with the static end stations try again to download the file from server. 
When the first group (static end stations) was downloading the file, the transmission delay was in the range from 12 ms to 33 ms and there were no requests for a connection re-establishment between the end devices and the eNodeB. Fig. 3. Delay and requests for a re-connection due to depleted network capacity. In the second phase (200 connected end stations simultaneously), the delay ranged between 21 and 42 ms, but some requests for a re-connection due to depleted network capacity were recognized (see Fig. 3). The value 1 on the right vertical axis means a request for a re-connection.
TABLE I. SIMULATION PARAMETERS FOR FIRST SCENARIO
Parameter | Setting
Cell Layout | 1 eNodeB, 1 sector
Duplex Format | LTE-FDD
System Bandwidth | 5 MHz (~25 PRBs)
Pathloss Model | Free space
UE Velocity | 80 km/h
Number of UE | 100, 200
Frequency Reuse Factor | 1
Scheduling Mode | Link Adaptation and Channel Dependent
MIMO Transmission Technique | Spatial Multiplexing, 2 Codewords, 2 Layers
M2M File Transfer Traffic Model - File Size | 100 000 bytes
M2M File Transfer Traffic Model - File Inter-request Time | Constant (120 s)
Based on the results obtained from the first scenario, the maximum number of end stations which should be allowed to download the file from the eNodeB simultaneously without overloading the eNodeB is about 180 end stations. B. Overloading of the eNodeB due to the amount of connected mobile devices The second scenario was created by a modification of the first one. This scenario focuses on finding the maximal number of connected end stations simultaneously communicating via the eNodeB without overloading this base station. The simulation parameters for this scenario are displayed in Table II. Considering the fact that the eNodeB does not have an exactly given maximum number of simultaneously connected end stations, in the first phase of this scenario we made a theoretical estimate based on this equation [11]: maxNumber = RB * 12 * 75% * CELL, (1) where the symbols represent: • maxNumber - the maximal number of connected end stations. • RB - Resource Blocks (RB). The number for a bandwidth of 5 MHz is 25 RB; for a bandwidth of 10 MHz there are 50 RB. • 12 - the number of subcarriers per resource block. • 75% - coding protection for the transmitted data. • CELL - the number of cells/sectors in the eNodeB. When we use the parameters from Table II and substitute them into Equation 1, we obtain the theoretical value of the maximum number of end stations for one eNodeB (see Eqn. 2): maxNumber = 25 * 12 * 0.75 * 1 = 225. (2)
TABLE II. SIMULATION PARAMETERS FOR SECOND SCENARIO
Parameter | Setting
Cell Layout | 1 eNodeB, 1 sector
Duplex Format | LTE-FDD
System Bandwidth | 5 MHz (~25 PRBs)
Pathloss Model | Free space
UE Velocity | 80 km/h
Number of UE | 150, 200, 250, 350
Frequency Reuse Factor | 1
Scheduling Mode | Link Adaptation and Channel Dependent
MIMO Transmission Technique | Spatial Multiplexing, 2 Codewords, 2 Layers
M2M File Transfer Traffic Model - File Size | 10 000 bytes
M2M File Transfer Traffic Model - File Inter-request Time | Constant (120 s)
The simulation was made for 150, 200, 250 and 350 end stations. In the case of 150, 200 or 250 mobile devices connected to the eNodeB, the LTE network was able to handle the traffic for all devices without any connection loss. When 350 mobile devices attempted to connect to the eNodeB at the same time (180 seconds after the beginning of the simulation), the eNodeB was not able to handle all requirements.
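Equation 1 is easy to re-evaluate for other configurations. The short Python sketch below simply reproduces this back-of-the-envelope estimate (it is not a substitute for the OPNET simulation): 225 stations for 25 PRBs and one sector, as in Eq. 2, and the corresponding values for three sectors or a 10 MHz (50 PRB) channel.

def max_end_stations(resource_blocks, sectors, coding_protection=0.75):
    # Eq. 1: maxNumber = RB * 12 subcarriers * coding protection * number of sectors
    return int(resource_blocks * 12 * coding_protection * sectors)

print(max_end_stations(25, sectors=1))   # 225  (5 MHz, 1 sector)
print(max_end_stations(25, sectors=3))   # 675  (5 MHz, 3 sectors)
print(max_end_stations(50, sectors=1))   # 450  (10 MHz, 1 sector)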
The rejected connections were performing the re-connection procedure, which can be seen in Fig. 4 (the value 1 on the vertical axis means a request for a re-connection). Fig. 4. Attempts of re-connection performed by mobile clients. V. PROPOSED IMPROVEMENTS As mentioned in Section III (Machine-to-Machine), one of the biggest issues for the eNodeB is the time interval at which the M2M devices send data. In most cases the interval is set to a whole hour. When thousands of M2M devices attempt to send some information at the same time, overloading of the eNodeB will occur (2). (2) The telecommunication operators can financially reward random access to the network in preference to scheduled access. The first proposed improvement is therefore to send the data irregularly. This assumption was proved by simulation (see Fig. 5). Fig. 5. Reducing the LTE network load by spreading the end stations' requests for sending M2M data in time. The second proposed improvement is to increase the number of cells/sectors in the eNodeB. If we change the number of sectors to 3, then according to Eq. 1 the maximum number of simultaneously connected end stations should be 675. At a simulation time of 180 seconds, 500 mobile end stations attempt to connect to the LTE network. As displayed in Fig. 6, in the case of 1 sector the overloading of the eNodeB occurred. The eNodeB is not able to handle that number of end stations and the network is congested by control data (user data is not transmitted). In comparison with 1 sector, when 3 sectors were used, the throughput on the MAC (Media Access Control) layer approached the maximum theoretical value of throughput [8], since the eNodeB is not overloaded and could transfer user data (with 3 sectors the share of control data was much smaller than with 1 sector). Fig. 6. Improvement of the maximum throughput after setting the number of sectors in the eNodeB to 3. VI. CONCLUSION AND FUTURE WORK For the service providers and network operators, emerging M2M services represent a very promising business case. On the other hand, they also bring several issues, of which the most important is overloading of the eNodeB (the base station in the LTE network). Therefore, it is important to analyse the impact of deploying M2M services and to find the capacity limits or weak points in order to avoid ineffective expenditures. This paper presents a simulation-based study of the capacity performance of the LTE network. In two independent scenarios, we tried to overload the eNodeB and find the limits for a specific M2M file downloading scenario and for the maximum number of end stations connected simultaneously to the eNodeB. Based on the results from the simulation, we can conclude that we identified the scenario where the eNodeB was not able to handle the network traffic generated by the mobile clients and the network resources were exhausted. Another important finding is that in our second scenario the eNodeB is able to serve up to 250 end stations (1 sector) or up to 675 end stations when 3 sectors are used in the eNodeB. ACKNOWLEDGMENT This research work was supported by the project CZ.1.07/2.3.00/30.0005 of Brno University of Technology.
REFERENCES [1] Cisco, “Global Mobile Data Traffic Forecast Update, 2013 2018,” p. 40, 2014. [Online]. Available: http://goo.gl/KQ9cxi [2] “Data, data everywhere,” 2010. [Online]. Available: http://www. economist.com/node/15557443 [3] L. Coetzee and J. Eksteen, “The Internet of Things Promise for the Future ? An Introduction,” pp. 1–9, 2011. [Online]. Available: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=\&arnumber=6107386 [4] G. Wu, S. Talwar, K. Johnsson, N. Himayat, and K. D. Johnson, “M2M: From mobile to embedded internet,” Communications Magazine, IEEE, vol. 49, no. 4, pp. 36–43, 2011. [5] Ericsson, “More than 50 billion connected devices taking connected devices to mass market and profitability,” 2011. [6] E. Omar and H. Olivier, M2M communications, 1st ed., D. Boswarthick, Ed. Chichester: John Wiley: John Wiley, 2012. [7] M.-Y. Cheng, G.-Y. Lin, H.-Y. Wei, and C.-C. Hsu, “Performance evaluation of radio access network overloading from machine type communications in LTE-A networks,” in Wireless Communications and Networking Conference Workshops (WCNCW), 2012, pp. 248–252. [8] D. Erik, P. Stefan, and S. Johan, 4G LTE/LTE-Advanced for Mobile Broadband. [9] Ericsson, “Overhead Impacts on Long-Term Evolution Radio Networks,” 2007. [10] S. Martin, 3G, 4G AND BEYONDBRINGING NETWORKS, DEVICES AND THE WEB TOGETHER, 2nd ed. [11] R. Nossenson, “Long-term evolution network architecture,” in Microwaves, Communications, Antennas and Electronics Systems, 2009. COMCAS 2009. IEEE International Conference on, LTE overview, Ed., 2009, pp. 1–4. [12] P. Masek, J. Hosek, M. Ries, D. Kovac, M. Bartl, F. Kr¨opfl, and A. Wien, “Use Case Study on Embedded Systems Serving as Smart Home Gateways 2 Alarm Use Case Concept.” [13] J. Hosek, P. Masek, D. Kovac, M. Ries, and F. Kropfl, “Universal Smart Energy Communication Platform,” in Proceedings of 1st International Conference on Intelligent Green Building and Smart Grid (IGBSG 2014), 2014, pp. 134–137. [14] D. Boswarthick, O. Elloumi, and O. Hersent, M2M Communications: A Systems Approach. Wiley, 2012. [Online]. Available: http: //books.google.cz/books?id=7Xdz3ryx0TIC 56 Studentská konference Zvůle 2014 Roman Mego Department of Radio Electronics Brno University of Technology Brno, Czech Republic roman.mego@phd.feec.vutbr.cz Abstract—This paper deals with the hardware acceleration in the form of processor peripheral. The main goal is to show, how this type of acceleration behaves in the real-time operating system. The demonstration is made on the field-programmable gate array with implemented soft processor. The accelerated function is the AES encryption algorithm. The performance is measured in the instruction cycles needed encrypt or decrypt data. The running tasks are visualized to show the process of switching between tasks which uses the same accelerator. Keywords—FPGA; hardware acceleration; Nios II; AES; FreeRTOS I. INTRODUCTION Hardware acceleration is one of the best ways how to increase the performance of the cryptographic, signal processing of other computationally intensive functions. This approach could bring faster execution in many applications compared to the software implementation. The main advantage is, that the hardware unit could execute many operations simultaneously, what is commonly impossible on CPU, or it is strongly limited. The hardware accelerators could be realized as special peripherals on CPU connected thorough the communication bus, mapped in the memory space, or implemented as special instructions of the CPU. 
The advantages of the hardware accelerators are obvious. The question is how this approach behaves on real-time operating systems (RTOS). In many cases, the system contains only one hardware accelerator. This means that there is a possibility that this hardware will be used by two different tasks. This paper is divided as follows. Section 2 describes the system, its parts and software. Section 3 deals with the software structure and its modification for use with the RTOS. Section 4 shows the measured times and the behavior of the program when multiple tasks are created. Section 5 concludes the results. II. TESTED SYSTEM Field-programmable gate arrays (FPGA) are suitable for creating an entire digital system with a CPU, integrated peripherals and hardware accelerators on one chip. This chapter will describe the tested hardware and software components. A. Altera Cyclone IV The Cyclone IV E [1] FPGAs are designed for low-cost and low-power applications. The devices can contain up to 115k logic elements (LE), 4 Mb of embedded memory, 266 embedded multipliers and 4 phase-locked loops (PLL). The device under test is the EP4CE40. An overview of the tested FPGA is shown in Table I.
TABLE I. EP4CE40 OVERVIEW
LEs | 39 600
M9K memory blocks | 126
Embedded memory | 1 134 kb
18-bit x 18-bit multipliers | 116
PLLs | 4
B. Nios II embedded processor The Nios II processor [2] is a soft-core processor that can be implemented in all Altera's FPGAs. The architecture is 32-bit RISC. Soft-core processors allow the designer to modify their features. The Nios II can be extended with custom instructions, a memory protection unit (MPU), a memory management unit (MMU), or with custom peripherals. The peripherals are connected through the Avalon Interface [3]. C. Avalon AES ECB-Core The Avalon AES ECB-Core [4] was chosen for the demonstration of the hardware acceleration. This core is a hardware implementation of the Advanced Encryption Standard (AES) [5] block cipher able to work with key lengths of 128, 192 and 256 bits. This accelerator has the Avalon Interface, so it can be used with the Nios II processor. The AES core is mapped into the processor memory space. This memory space is divided into the input data, output data, key and control register (Table II). The research published in this submission was financially supported by the Brno University of Technology Internal Grant Agency (FEKT-S-14-2177). The described research was performed in laboratories supported by the SIX project; the registration number CZ.1.05/2.1.00/03.0072, the operational program Research and Development for Innovation. Behavior of Hardware Acceleration on Real-time Operating Systems
TABLE II. MEMORY SPACE OF THE AES CORE
Offset | Description
0x00 – 0x07 | Key
0x08 – 0x0B | Input data
0x10 – 0x13 | Result
0x14 – 0x1E | Reserved
0x1F | Control and status
D. FreeRTOS For evaluating the hardware accelerator under an RTOS, FreeRTOS [6] was chosen. It is an RTOS for embedded devices with available ports for 34 architectures, starting with 8-bit microcontrollers. There is also support for the Nios II processor. FreeRTOS provides tasks, mutexes, semaphores, queues and software timers. The scheduler can be set to preemptive or cooperative operation mode. FreeRTOS offers additional packages such as a TCP/IP stack, a file system or a POSIX-like interface for peripheral drivers. One of these packages is FreeRTOS+Trace [7], which is a tool for analyzing and visualizing the runtime behavior of the entire application. E.
System configuration The Nios II/s version of the processor is used in the final design. This means that the processor has an instruction cache, branch prediction and a hardware multiplier and divider. The instruction cache could be set up to 64 kB, but in this case it is set to 4 kB. The processor's clock speed is 48 MHz. The program and data memory is in the external 16 MB SDRAM with a 100 MHz clock signal. Fig. 1 shows the block diagram of the system. The system clock tick rate for the FreeRTOS kernel is set to 100 Hz. Fig. 1. Block diagram of the system. III. SOFTWARE REALIZATION This section will describe the structure of the evaluated software. The hardware accelerator will also be compared to the software implementation of the AES algorithm, which was taken from the OpenSSL library [8]. All software is compiled without any optimization. A. Performance of the individual parts The performance of the software implementation and the hardware accelerator is evaluated as a bare-metal application. This solution was chosen because the scheduler of FreeRTOS is preemptive and could interrupt the current program execution. Altera's performance counter peripheral [9] was used for time measurement. The OpenSSL AES functions are divided into two groups. The first group expands the key into the separate round keys. The second group is the encryption or decryption of one block itself. The hardware AES core behaves as a standard peripheral. The only thing to do is to copy the key and data into the memory space of the peripheral and run the encryption or decryption by setting the corresponding bit. B. FreeRTOS modifications If the peripheral is used in an RTOS, it has to be ensured that only one task at a time can access the peripheral. For this reason, a mutex was added into the code. The mutex is wrapped around the processing of one data block, not around the whole processing chain. This solution was chosen for the case that a task with a higher priority requires access to the AES peripheral. This solution needs to check whether the AES key was changed by another task; this is realized by storing the ID of the last task which accessed the peripheral. The AES core is also able to generate an interrupt after the end of the operation. This feature is used to wait for the end of the operation through a semaphore instead of polling the status register of the AES peripheral. The software implementation of AES does not require any modification, because each task creates its own copy of the variables on the stack. IV. MEASURED RESULTS Tables III and IV show the measured execution time of the individual parts in the bare-metal application. The OpenSSL implementation of AES encryption takes nearly 19 000 instruction cycles, including key processing.
TABLE III. OPENSSL PERFORMANCE
Function | Time [s] | Cycles
Encryption key processing | 0.00015 | 7109
Decryption key processing | 0.00014 | 6689
Data block encryption | 0.00024 | 11629
Data block decryption | 0.00024 | 11462
TABLE IV. AVALON AES ECB-CORE PERFORMANCE
Function | Time [s] | Cycles
Copy key | 0.00001 | 578
Copy data | 0.00001 | 512
Encryption | 0.00001 | 66
Copy result | 0.00001 | 511
The hardware implementation brings a significant improvement. The encryption itself takes only 66 cycles. The whole encryption process for one block takes less than 1700 instruction cycles, which is more than 10 times better than the software implementation. Fig. 1 (block diagram): FPGA with Nios II core and AES core clocked at 48 MHz via PLL, external SDRAM clocked at 100 MHz. Tables V and VI
show the performance of the AES implementations called from one task of the FreeRTOS. The software implementation is now slower in compare with the bare-metal application, because there are some overheads of the operating system. The performance degradation is even stronger for the hardware implementation. The first factor, which causes it, is the mutex checking. The next and most significant factor is the semaphore controlled in the interrupt routine. TABLE V. OPENSSL CHAIN PERFORMANCE Function Encrypt Decrypt Time [s] Cycles Time [s] Cycles AES 128 ECB 16b 0.00040 19087 0.00076 36358 AES 128 ECB 32b 0.00060 28627 0.00095 45799 AES 128 CBC 16b 0.00045 21729 0.00084 40090 AES 128 CBC 32b 0.00071 33877 0.00110 52849 AES 128 CTR 16b 0.00046 22175 0.00045 21643 AES 128 CTR 32b 0.00071 34286 0.00070 33787 TABLE VI. AVALON AES ECB-CORE CHAIN PERFORMANCE Function Encrypt Decrypt Time [s] Cycles Time [s] Cycles AES 128 ECB 16b 0.00196 93937 0.00195 93549 AES 128 ECB 32b 0.00348 166867 0.00348 166932 AES 128 CBC 16b 0.00197 94634 0.00197 94709 AES 128 CBC 32b 0.00351 168354 0.00352 169040 AES 128 CTR 16b 0.00198 94808 0.00196 94263 AES 128 CTR 32b 0.00351 168661 0.00350 168129 Fig. 2 shows the AES processing in time captured by FreeRTOS+ Trace. There can be seen, that the semaphore is taken after the 906 µs, which is approximately half of the overall time. This is caused by context switching when the program is entering and leaving the interrupt. Fig. 2. Kernel objects usage in one task Fig. 3 shows the situation when the AES functions are called from two tasks with same priority. In this case, one peripheral is shared using the mutex. The tasks are now blocked for 1.5 ms in average. Fig. 3. Kernel objects usage in multiple tasks V. CONCLUSION The hardware acceleration has its place in embedded systems. In this paper, the AES algorithm was used for demonstration. The bare-metal application shows, that the hardware accelerator can rapidly increase the final performance of the application. For this case, it was more than 10-times. When the accelerator is used, the right usage should be considered. In this paper, every possibility for the peripheral was used, specifically the mutex releasing after every data block processing and waiting for the end of the operation using semaphore controlled in the interrupt routine. This approach is not always ideal. In this case, the system overheads and context switching takes more time than the algorithm itself. The better solution would be to use the mutex only around the whole data processing instead of the wrapping it around the individual blocks. The interrupt should be also disabled and control the end of the operation through the status register. REFERENCES [1] Altera Corporation. Cyclone IV FPGA Family: Lowest Cost, Lowest Power, Integrated Transceivers [online]. Available: http://www.altera.com/devices/fpga/cyclone-iv/cyiv-index.jsp 59 Studentská konference Zvůle 2014 [2] Altera Corporation. Nios II Processor: The World's Most Versatile Embedded Processor [online]. Available: http://www.altera.com/devices/processor/nios2/ni2-index.html [3] Altera Corporation. Avalon Interface Specifications [online]. Available: http://www.altera.com/literature/manual/mnl_avalon_spec.pdf [4] Thomas Ruschival. Avalon AES ECB-Core [online]. Available: http://opencores.org/project,avs_aes [5] National Institute of Standards and Technology. Federal Information Processing Standard Publication 197 - Advanced Encryption Standard [online]. 
Available: http://csrc.nist.gov/publications/fips/fips197/fips- 197.pdf [6] Real Time Engineers. FreeRTOS [online]. Available: http://www.freertos.org/ [7] Percepio AB. FreeRTOS+ Trace [online]. Available: http://percepio.com/docs/FreeRTOS/manual/ [8] The OpenSSL Project. Cryptography and SSL/TLS Toolkit [online]. Available: http://www.openssl.org/ [9] Altera Corporation. Performance Counter Core [online]. Available: http://www.altera.co.jp/literature/hb/nios2/qts_qii55001.pdf 60 Studentská konference Zvůle 2014 Antennas for Radio Telescopes Michal Mrnka Brno University of Technology, Department of Radio Electronics Brno, Czech Republic xmrnka01@stud.feec.vutbr.cz Abstract—The paper deals with the very basics of instrumentation for radio astronomy. The attention is mainly directed towards fundamental antenna structures for both single dish and interferometric radio telescopes. The advantages and disadvantages of different systems are outlined. The concept of focal plane arrays is introduced. Keywords—radio astronomy; radio telescope; parabolic antenna; feed antenna. I. INTRODUCTION Radio astronomy is the study of radio radiation from celestial sources in frequency bands approximately up to 1 THz. Above these frequencies the domain of far infrared astronomy begins. In ground-based observations, the lower frequency limit is given by the ionospheric plasma frequency that is approximately 10 MHz. Due to the broad frequency range of interest, there are no telescopes operating continuously in the entire 10 MHz - 1 THz band. In lower frequency regions, up to several hundreds of MHz, systems like LOFAR (Low-Frequency Array) [1] based on arrays of electrically small antennas are used. From several hundreds of MHz up to tens of GHz, parabolic reflectors with multiple cryogenic receivers are mainly used. Millimeter and sub-millimeter bands call for antennas with extremely high surface accuracy and very specific locations (i.e. high altitudes, low air humidity) [2], [3]. An example of such system is ALMA (Atacama Large Millimeter/Submillimeter Array) [4]. This short review paper focuses on antennas for latter two radio astronomical systems. First, an examples of celestial radio sources are briefly discussed. In the following chapters, main antenna concepts are explained and certain current and future radio astronomical projects outlined. II. CELESTIAL SOURCES OF RADIO EMISSION The sources of radio emission can be divided into three categories [5], [6]: A. Blackbody (thermal) The representatives of blackbody radiation are microwave background radiation, radiation of stars, planets etc. B. Continuum sources (non-thermal) Continuum non-thermal sources comprise phenomena like bremsstralung and synchrotron emission (e.g. pulsars). Both of above mentioned source types emit continuous radiation. C. Spectral line sources On the other hand, spectral line sources emit radiation only on certain wavelengths, which are characteristic for specific physical phenomena (e.g. electron transitions in atoms/molecules). III. RADIO TELESCOPE Radio telescope is a special type of radio receiver equipped with a high directivity antenna (most frequently a parabolic dish) [7], [8]. The spatial resolution of the telescope is given by the dish size. It follows, that by increasing the dish diameter, its spatial resolution improves. Nevertheless, this approach has its limits - the costs of huge and extremely accurate antennas are immense. 
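Before turning to aperture synthesis, it may help to quantify the limit just mentioned. The standard diffraction-limit relation (not stated explicitly in the paper) ties the achievable angular resolution to the dish diameter D and the wavelength λ; the numbers below are an illustrative example only.

```latex
% Diffraction-limited angular resolution of a filled circular aperture
% (standard approximation; the numerical example is illustrative, not from the paper)
\theta \approx 1.22\,\frac{\lambda}{D}
\;\;\Longrightarrow\;\;
\theta \approx 1.22\,\frac{0.06\ \mathrm{m}}{100\ \mathrm{m}}
\approx 7.3\times10^{-4}\ \mathrm{rad} \approx 2.5'
\qquad (\lambda = 6\ \mathrm{cm},\ f = 5\ \mathrm{GHz},\ D = 100\ \mathrm{m})
```

Even a 100 m dish therefore resolves only a few arcminutes at centimetre wavelengths, which is why interferometric arrays are needed for finer detail.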
A much more feasible approach turns out to be aperture synthesis, where many single dish antennas operate as one, thus enabling exceptional spatial resolution. Such an array of radio telescopes is called an interferometric array. The block diagram of a single dish radio telescope based on the heterodyne receiver principle is depicted in Fig. 1.

Fig. 1. Block diagram of radio telescope

IV. ANTENNAS

Considering a single dish radio telescope, the antenna is composed of two elements: the large reflective surface (mostly paraboloidal) and a feed element that illuminates the reflector appropriately. A secondary hyperboloidal mirror (i.e. subreflector), placed in the primary focal point, can be utilized in order to focus the radiation into the secondary focus. The bulky receivers can in this case be located behind the main reflector. Several feed horns, operating at different wavelengths, polarizations etc., can be used with one parabolic reflector.

A. The primary reflector

The main reflector is mostly designed in a rotationally symmetric paraboloidal shape, as in the case of the 100 m telescope at Effelsberg in Germany [9], [10]. The advantages of this geometry are a simpler design and low levels of cross polarization. On the other hand, due to the blockage area (decreased aperture efficiency) and relatively high side lobe levels, offset mirror configurations are being deployed in new telescopes (e.g. the Greenbank telescope). Another positive feature of new telescopes is the utilization of an active surface for the main reflector. In such a case, deformations of the dish due to gravity are compensated by adjusting the position of numerous surface panels (several thousands). In older telescopes, such as the Effelsberg telescope, another approach called homology was used [10]. In a homologous design, the dish and the supporting structure are designed together in such a way that at any elevation angle the deformed surface of the main reflector reshapes into a slightly different paraboloid. The new focal point must then be found and the position of the secondary reflector altered.

B. The secondary reflector (subreflector)

Most radio telescopes use secondary subreflectors in the shape of a hyperboloid to focus radiation into the secondary focal point F2. Based on the shape of the subreflector we distinguish Cassegrain (convex) and Gregorian (concave) configurations. The Cassegrain configuration is widely adopted for its mechanical stability, but if the primary focus is to be used as well, the Gregorian configuration is necessary. In such a case part of the receivers can be located in the cabin behind the subreflector and, during observation, the feed horn is inserted in front of the subreflector (to the point F1) through the opening in the subreflector surface. If the receivers in the secondary focus are used, the opening in the subreflector is simply closed.

C. Focal plane array

Multiple feed elements can be utilized in the focal plane of the telescope, thus forming a so-called focal plane array. In this array type, the elements work independently, as opposed to phased and interferometric arrays, where the radiation pattern is formed by all the elements together. Such arrays are sometimes referred to as multibeam systems due to the nature of their operation (i.e. several independent radiation patterns of the elements).

Fig. 2. Effelsberg 100-m radio telescope [8]. Fig. 3.
Offset (non-symmetrical) Greenbank telescope [8]. Fig. 4. Cassegrain (left) and Gregorian (right) configuration of reflector antennas [8]. 62 Studentská konference Zvůle 2014 Every element is connected to the separate receiver and the whole feed can be considered as microwave camera with several pixels. When using n feed elements, the scanning speed of the telescope is increased n-times. Since the most costly part of the telescope is the large primary reflector, focal plane arrays became very convenient tool for maximizing deployment of the main reflector. Currently, the research endeavors focus on maximizing the packing density of antenna feed elements in order to obtain so called diffraction limited system. In this system, the resolution is given by the aperture diffraction on the main reflector and not by the coarse resolution of the focal plane array. Corrugated horns are therefore substituted by e.g. Vivaldi antennas. V. CONCLUSION Very brief introduction into the field of radio astronomy instrumentation was given. The attention was particularly drawn to parabolic antennas of large radio telescopes operating in lower GHz up to THz spectral regions. Pros and cons of different structures were given and the concept of focal plane array was introduced. REFERENCES [1] R.C. Vermeulen, M. Van Haarlem, “The international LOFAR telescope (ILT),” General Assembly and Scientific Symposium, 2011 XXXth URSI, pp. 13-20 August 2011. [2] J. E. Carlstrom, J. Zmuidzinas, “Millimeter and Submillimeter Techniques.” Reviews of Radio Science 1993-1995, The Oxford University Press, 1996. [3] SOFIA Science Center - Stratospheric Observatory for Infrared Astronomy. 15 July 2014. Available from //www.sofia.usra.edu/. [4] Atacama Large Millimetre/submillimetre Array. 15 July 2013. Available from www.almaobservatory.org/. [5] B. F. Burke, F. Graham-Smith, An Introduction to Radio Astronomy. Cambridge University Press, 2010. [6] Introduction to Radio Astronomy - lecture at Cornell University. 20 July 2014. Available from http://egg.astro.cornell.edu/alfalfa/ugradteam/pdf12/radio_lecture_jess_u at12.pdf [7] T. L. Wilson, K. Rohlfs, S. Hüttemeister, Tools of Radio Astronomy. Springer, 2009. [8] T. Sasao, A. B. Fletcher, Radio Telescope antennas. 25 July 2014, Available from www.ipa.nw.ru/smu/files/lib/kchap2.pdf [9] R. Wielebinski, N. Junkes, B. H. Grahl, “Effelsberg 100-m Radio Telescope: Construction and Forty Years of Radio Astronomy. ” Journal of Astronomical History and Heritage, Vol. 14, No. 1, pp. 3 - 21, 2011. [10] A. Nothnagel, J. Pietzner, Ch. Eling, C. Hering, “Homologous Deformation of the Effelsberg 100-m Telescope Determined with a Total Station. ” IVS General Meeting Proceedings, pp.123–127, 2010. 63 Studentská konference Zvůle 2014 Broken Bar Analysis of the Squirrel Cage Machine Lukáš Nekolný Department of Electromechanics and Power Electronics University of West Bohemia, Faculty of Electrical Engineering Pilsen, Czech Republic neky@kev.zcu.cz Abstract— This paper presents mechanical problems of the squirrel cage which can occur during a start-up process with high moment of inertia, high load on the shaft, etc. These conditions can lead to problems of the squirrel cage such as broke of a bar or damage of an end ring. This effect is modeled by a method of mesh currents with a subsequent verification by a FEM model. Keywords—induction motor; squirrel cage; squirrel cage model, broken squirrel cage bar; method of mesh currents; FEM. I. 
INTRODUCTION Induction motors are the most worldwide used machines thanks to their high efficiency, simple rotor structure, stability, and low production cost. Despite of their reliability, several faults still threaten. One of the fairest rotor damage is a broken bar of a squirrel cage which happens because of mechanical, electromagnetic or thermal stresses during their operation. Especially during transients, there exists significant increase in their probability. The damaged cage negatively affects torque, current and speed of the shaft as a consequence of the torque pulsation in the air gap. Therefore, determining of faults in the early stage is essential to improve productivity and prevent a larger destruction of the motor. Prevention can minimize a machine damage and cost of repairs at the same time. For this reasons technical departments of factories require a protection which reveals a failure in the initial stage. According to a survey in Fig. 1 which is prepared by EPRI, it is obvious that 5 percent of engine failures are caused by problems with the squirrel cage. Fig. 1. Probability of fault for squirrel cage induction motors The most common disorder of the rotor is a crack or break between the rotor bar and the end ring which is caused by thermal, electromagnetic or mechanical stress. This failure most often occurs under these conditions:  Heavy start-up procedure, where the squirrel cage is not structurally designed to withstand increased mechanical and thermal stress  Variable mechanical loads such as compressors, coal crushers, pumps or motors with a gearbox  Inadequate production, where the squirrel cage defect has passed a final inspection When a bar breaks, through neighboring bars will flow more current than they are designed for. Neighboring bars will be thermal overloaded and a destruction of the squirrel cage continues. These problems with breaking rotor bars mainly threaten in high power or heavy duty working machines such as locomotives, mining drives, etc. Unlike mass-produced induction machines where the squirrel cage is most frequent made of aluminum by a pressured casting, the squirrel cage of traction machines is made of massive copper bars and end rings which are connected via soldering. Whole manufacturing process starts with caulking bars into iron slots of the rotor until firmly grip, after that end rings are briefly inductively soldered to bars. For this reason, it is necessary to maintain a technological gap between the end ring and the pack of iron due to thermal expansibility of different materials. If there as a result of the bar breakdown occurs to a torque pulsation, this imposition causes differences of kinetic energy between the rotor with bars and end rings. This energy is absorbed by bars between the iron pack and the end ring and it leads to a rupture close to their connection. II. PROBLEM STATEMENT In publications [1], [2] and [3] is presented a sophisticated analysis for different combinations of rotor defects with comparing of their mutual influences. From our point of view, it is unnecessary to deal with defects no more than one of the rotor bar, because one broken bar is a state of emergency for the machine. From the principle of squirrel cage induction machines, it is clear that a defect of the rotor bar leads to a local deformation of the electromagnetic field in interruption, and subsequently causes torque pulsation in the air gap. The primary factor is an armature current of a machine which could be evaluated. 
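As a pointer to the mesh-current formulation used in Section III below (the system A · x = b of Eq. (2)), one common textbook way to write the loop equation of a healthy cage — given here only for orientation, and possibly differing in detail from the matrix of Fig. 3 — is:

```latex
% Mesh equation of loop k (bars k and k+1 plus the two end-ring segments);
% Z_t = bar impedance, Z_k = ring-segment impedance, U_{i,k} = induced voltage.
\left(2Z_t + 2Z_k\right) I_k \;-\; Z_t\,I_{k-1} \;-\; Z_t\,I_{k+1} \;=\; U_{i,k},
\qquad k = 1,\dots,Q_2 \ \ (\text{indices taken modulo } Q_2)
```

A broken bar is then modelled by removing the corresponding bar impedance, which merges the two adjacent loops and reduces the system by one equation; this is consistent with the Q2+1 versus Q2 equation counts mentioned in Section III.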
With the help of a numerical model current densities in the squirrel cage for a case of interruption between the bar and the end ring are calculated. 64 Studentská konference Zvůle 2014 Furthermore, we would like point out that a broken bar is not only one possible fault of the squirrel cage. Another one, but less frequent, is interruption of the end ring. Due to the squirrel cage design, this would be a failure which leads to more severe consequences than is a repair of the motor. Due to high centrifugal forces, that could probably leads to a separation from the rest of bars as well as a damage of the stator winding. But this is a very rare fault which goes beyond a scope of the article and for this reason it was not considered further. III. ANALYTICAL MODEL Parameters for simulation such as a resistance and a reactance were obtained from the short-circuit test [4] and geometrical dimensions of the squirrel cage. An analytical simulation was carried out using by the method of mesh currents. The calculation was conducted by an m-file script according to the equation 2. In Fig. 2 is an equivalent circuit for squirrel cage [5], where impedances Zk represent the resistance and the reactance of the end ring between two bars, Zt represent the resistance and the reactance of the squirrel cage bar and current sources are represented by induced voltage Ui that was calculated according to the formula 1. Fig. 2. Equivalent circuit of the squirrel cage An equivalent circuit is described by a matrix, which is given in Fig. 3, and vectors as you can see in Fig. 4. The system of linear equations for the healthy squirrel cage has Q2+1 of equations and for the squirrel cage with one broken bar has Q2 of equations, where Q2 is a number of squirrel cage bars. A difference is only in the loop Is4 where the armature reaction changed and the loop area, which is coupled with the magnetic flux, also increased. Final solution is based on an obtaining of currents in short circles which can be easily converted into squirrel cage bars and end ring currents flowing between bars. This way, we have gained a complete diagram of currents in bars and the end ring. The analytical model results were verified by numerical calculations which including analysis of real, imaginary and total currents in bars, which display Fig. 5, and currents in parts of the end ring as you can see in Fig. 6. (1) A · x = b (2) Fig. 3. Impedance matrix of the squirrel cage for fourth broken bar Fig. 4. Vectors of currents and inducted voltages Fig. 5. Currents in bars for broken bar no. 18 Fig. 6. Ring currents between bars for broken bar no. 18 IV. NUMERICAL MODEL A calculation of a numerical model is based on a 2D model of the induction motor SIEMENS 1LA7 which is created in the program FEMM. Geometrical dimensions and parameters of the stator winding and the rotor squirrel cage were obtained from a catalog. In Fig. 8 is given the current distribution for individual bars. The stator winding is powered by a nominal current and a magnetic circuit is saturated. It means that the operating point moves in a nonlinear part of the magnetizing 65 Studentská konference Zvůle 2014 characteristic and therefore waveforms of currents are deformed by higher time harmonics (5th, 7th, 11th and 13th). In Fig. 9, the differences between one broken bar and the healthy squirrel cage can be clearly seen. The most important thing for the broken bar is that neighboring bars must carry 128 percent of the current which flows through the healthy squirrel cage. 
It causes an additional thermal and mechanical straining and a continuing of the destruction of the squirrel cage. For the numerical modeling a symmetric three phase current source was considered. It should be noted that the calculation will necessarily be affected by errors, because the supply current will be unbalanced due to a failure in the rotor, but this imprecision cannot be deleted due to the model topology. In order to take this fact into account, we must use another more complicated way of a modeling. Unlike a modeling by using the pushed current, we would have to use voltage sources and the transient analysis. However, this method must be modeled in 3D which greatly complicates the calculation itself. Due to high current amplitudes, this error is almost negligible. Fig. 7. FEM current density distribution Fig. 8. Current distribution in rotor bars The numerical model is not completely accurate, because does not consider problems such as parasitic currents, currents flowing through iron bars and effects of broken bars on stator currents. Fig. 9. Comparison between healthy and damaged squirrel cage V. CONCLUSION This paper presents an analytical model of the squirrel cage induction machine with a subsequent numerical model comparison. For this purpose each rotor bar is axially subdivided into bar sections which are tangentially connected by end ring conductors. Very important is to mention that the analytical model strongly depends on a precision of parameters determination. For this reason a sensitivity analysis was used in order to obtain more accurate results. This way a discretized rotor topology model is created. The obtained parameters confirm that the proposed analytical model leads to reasonable results and accurately reflects the real squirrel cage machine behavior under broken rotor bar conditions. The resulting difference between analytical and numerical models is no more than 8 percent which suggesting good simulation results. REFERENCES [1] Bellini, A., Filippetti, F., Franceschini, G., Tassoni, C., Kliman, G. B.: Quantitative evaluation of induction motor broken bars by means of electrical signature analysis, IEEE Transactions on Industry Applications, pp. 1248-55, 2001. [2] Bonnett, A. H., Soukup, G. C.: Analysis of Rotor Failures in SquirrelCage Induction Motors, IEEE Transactions on Industry Applications, vol. 24, pp. 1124–1130, 1988. [3] Schoen, R. R., Habetler, T. G.: Effects of Time-Varying Loads on Rotor Fault Detection in Induction Machines, IEEE Transactions on Industry Applications, vol. 31, pp. 900–906, 1995. [4] Hruška, K.: Speciální klece asynchronních motorů, dizertační práce, ZČU, 2011. [5] Kindl, V.: Analýza dynamických silových účinků na rotor asynchronního stroje velkého výkonu, dizertační práce, ZČU, 2011. 66 Studentská konference Zvůle 2014 Multifunctional non-differential Controllable 2nd-order Frequency filter Josef Polak Faculty of Electrical Engineering and Communication Brno University of Technology Brno 616 00, Technicka 12 E-mail: xpolak24@stud.feec.vutbr.cz Lukas Langhammer Faculty of Electrical Engineering and Communication Brno University of Technology Brno 616 00, Technicka 12 E-mail: xlangh01@stud.feec.vutbr.cz Jan Jerabek Faculty of Electrical Engineering and Communication Brno University of Technology Brno 616 00, Technicka 12 E-mail: jerabekj@feec.vutbr.cz Abstract— In this paper one new circuit of the 2nd-order multifunctional frequency filter designed using signal-flow graphs method is presented. 
New implementation of active element transimpedance amplifier (TIA) in non-differential form is used in this filter. Other active elements are multiple-output current follower (MO-CF) and multiple output transconductance amplifier (MOTA). The pole frequency and quality factor of the filter can be controlled mutually independently. Parameters of MOTA are used to control pole frequency and parameter of TIA is used for controlling of quality factor. This filter is designed as a multiple input multiple output (MIMO) with two inputs and six outputs. Functions of proposed filter are verified using PSpice simulation. Simulation results of proposed filter are included in this paper. Keywords—transimpedance amplifier; current follower; transconductance amplifier; frequency filter; MIMO I. INTRODUCTION Several methods how to approach to the design of frequency filters are known. One of the main filter-design methods frequently used for the proposal of new filters is method of autonomous circuit design [1]. This method of frequency filters design presents circuit as connection of active and passive elements, which are not excited by a source signal and there is no sensing of voltage or current response. We know only the characteristic equation that represents the determinant of admittance matrix of analyzed circuit. According to the type of characteristic equation we can recognize, which order of frequency filter can be implemented by the autonomous circuit. The most difficult is to find appropriate structure of autonomous circuit that can easily be done by connecting the number of active elements to full admittance network. Using this method, we can obtain all the possible connections of autonomous circuit. If we have a greater number of active elements, this method can be timeconsuming. More appropriate is to use method of expanding autonomous circuits [2]. Known autonomous circuit which was found previously is used in this method. Autonomous circuit is expanded by the other active and passive elements. Another method useful for filter design is method of synthetic elements [2], [3]. It is based on the principle of serial or parallel shifting of elementary circuits of D and/or E synthetic elements as described in [4]. The third method mentioned in this article and used to design the filter presented herein is a method of signal-flow graphs (SFGs) [5], [6]. In general, the signal-flow graphs form the basis of circuit theory and their use is also possible in the field of automatic control and data communications. For the synthesis and analysis of electric circuits are used the simplified and mixed Mason-Coates (M-C) flow graphs [5]. M-C graphs express the relationships between the variables. Each graph consists of nodes representing individual variables and branches that define their relationship. II. DEFINITION OF ACTIVE ELEMENTS This section describes three active elements used to design presented filter. Each active element is presented by schematic symbol, signal-flow graph and implementation using the universal current conveyor (UCC) [7, 8]. The first of the active elements is multiple-output current follower (MO-CF) [9], which has a low-impedance input and in case of implementation by UCC it gets four high-impedance outputs. Fig. 1 shows the schematic symbol, expressing MOCF using a signal flow graph and implementation using the UCC. The relationships between input and outputs are described in the following equations: IOUT1 = IOUT3 = +IIN, (1) IOUT2 = IOUT4= -IIN. 
(2) _ _ + MO-CF _ + IOUT1 IOUT2 IOUT3 IOUT4 IIN 1 1 1 -1 -1 UCC X Z1+ Z1– Z2+ Z2– Y1+ Y2– Y3+ IOUT1 IOUT2 IOUT3 IOUT4 IIN a) b) c) Fig. 1. Multiple-output current follower (MO-CF) a) schematic symbol, b) signal-flow graph, c) implementation using the UCC 67 Studentská konference Zvůle 2014 The second of the active elements is multiple-output transconductance amplifier (MOTA) [10], which has inverting and non-inverting high impedance inputs. Implementation using UCC has two inverting and two non-inverting high impedance outputs. Fig 2 shows the schematic symbol, its signal-flow graph and implementation using the UCC with resistor R used to set value of transconductance (gm = 1/R) of this active element. The relationships between inputs and outputs are shown by following equations: IOUT1 = - IOUT2 = IOUT3 = - IOUT4 = gm (UIN+ - UIN-). (3) Fig. 2. Multiple-output transconductance amplifier (MOTA) a) schematic symbol, b) signal-flow graph, implementation using the UCC The third of the active elements is transimpedance amplifier (TIA), which has one current input and one voltage output. TIA could be formed by current follower (CF) and current conveyor of second generation (CCII). Resistor RT inserted between CF and CCII is used to control transresistance (actually transimpedance) of this element as is shown on Figure 3c. The relationship between input and output is: vw =RT if , when vf = 0. (4) TIA f if vf iw RT w vw b) 1 RT a) c) UCC X Z1+ Z1– Z2+ Z2– Y1+ Y2– Y3+ X Z Y RT BUFFER (CCII) if vf iw vw Fig. 3. Transimpedance amplifier (TIA) a) schematic symbol, b) signal-flow graph, implementation using the UCC III. PROPOSAL OF MULTIFUNCTION FREQUENCY FILTER Frequency filter presented in this paper is designed using SFGs method [5], [6]. The filter is designed as a multifunctional (highpass HP±, lowpass LP±, bandpass BP ±, bandstop BS±) 2nd-order filter with independent control of the pole frequency and quality factor. Transconductance of two active elements (MOTAs) are used to control the pole frequency of the filter, transresistance of TIA is used to control the quality factor. Fig. 4 shows schematic diagram of frequency filter and Fig. 5 shows its signal-flow graph f RT w C1 MOTA1 + _ gm1 C2 MOTA2 + _ gm2 MO-CF R _ + _ + _+ + _+ IIN1 IIN2 IOUT1 IOUT2 IOUT3 IOUT4 IOUT5 IOUT6 TIA _ Fig. 4. Scheme of 2nd order multifunctional frequency filter pC1 pC2 1 1 11 IOUT1 1IIN1 1 1 RT +gm2+gm1 IOUT5 IIN21 -gm1 +gm 1 IOUT3 -gm2 R -1 1 IOUT2 -gm1 IOUT4 -gm2 1 Fig. 5. Signal-flow graph of 2nd-order multifunctional frequency filter The characteristic equation was calculated by SFGs method and verified in SNAP program. The equation (5) is characteristic equation of this proposed filter. D = p2 C1C2R + pC2gm1RT + gm1gm2R. (5) Calculations of pole frequency, quality factor are based on the equation (5) and are as follows: (6) (7) where f0 is the pole frequency and Q is the quality factor of the filter. All of the filter transfer functions are taken from highimpedance outputs of active elements. Schematic diagram on Figure 4 also includes two low-impedance inputs, HP± (OUT1, OUT2), LP± (OUT5, OUT6), BS± (OUT1, 2, OUT5, 6) are taken when source current is connected to IN1 and BP± (OUT3, OUT4) when IN2 is used. 
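The bodies of equations (6) and (7) are missing from this copy of the text. The expressions that follow from the characteristic equation (5) in the usual second-order normalization, and that reproduce the f0 and Q values reported later in Tables I and II, are:

```latex
% Reconstruction of Eqs. (6) and (7) from D = p^2 C_1 C_2 R + p C_2 g_{m1} R_T + g_{m1} g_{m2} R,
% normalized to p^2 + p\,\omega_0/Q + \omega_0^2.
f_0 = \frac{1}{2\pi}\sqrt{\frac{g_{m1}\,g_{m2}}{C_1 C_2}},
\qquad
Q = \frac{R}{R_T}\sqrt{\frac{g_{m2}\,C_1}{g_{m1}\,C_2}}
```

With C1 = 470 pF, C2 = 47 pF, R = 112 Ω, gm1 = gm2 = 1/3750 S and RT = 500 Ω, these give f0 ≈ 285 kHz and Q ≈ 0.708, in agreement with the values used in Section IV.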
The following equations, based on equation (4) represent the transfer functions of the individual filter functions: _ _ + MOTA + IOUT1 IOUT2 IOUT3 IOUT4 IIN+ 1 -gm -gm gm UCC X Z1+ Z1– Z2+ Z2– Y1+ Y2– Y3+ IOUT1 IOUT2 IOUT3 IOUT4 a) b) c) _ + UIN+ UINIIN- gm -1 gm IIN+ IINR = 1/gm ISET 68 Studentská konference Zvůle 2014 (8) (9) (10) (11) IV. SIMULATIONS To verify appropriate functions of proposed filter simulation program OrCAD was used with macro-models of used elements [11]. All simulations were carried out in frequency range from 10 kHz to 10 MHz. The values of passive elements (capacitors and resistor R) are constant and have values C1 = 470 pF, C2 = 47 pF, R = 112 Ω. Fig. 6 shows the output responses of the individual filter functions for the theoretical pole frequency of 285 kHz, calculated according to equation (6). Values of transconductances in this case are obvious from following: R1 = 1/gm1=1/gm2 = R2 = 3750 Ω. Quality factor Q = 0.707 (Butterworth approximation) is obtained when resistor RT is equal to 500 Ω. Filtering functions HP±, LP±, BS± have pass-band gain equal to 0 dB when input IN1 is considered (eq. 8, 9, 11). If IN2 is used, only BP± has unity-gain in pass-band. Figure 6 shows that the pole frequency is around 270 kHz and slope of individual curves corresponds to the requirements of the filter of 2ndorder. In Fig. 7 can be seen phase responses of particular filter functions. Control of pole frequency is possible by changing values of transconductances (actually resistors R1, R2). Fig. 8 shows control of pole frequency f0 in case of BP filter. Comparison of calculated values according to equation (6) with values from simulation for a BP filter is shown in Table I. for four values of the resistors R1, R2. The change of the quality factor Q can be done by changing value of the resistor RT. The value of the resistor RT is calculated according to equation (7) for four values of quality factor shown in Table II. Example of change of the quality factor of the BP filter is shown in Figure 9. TABLE I. COMPARISON OF CALCULATED AND SIMULATED VALUES OF F0 R1=1/gm1=1/gm2 = R2 [] f0 [kHz] Calculated Simulation 5750 186 175 4750 225 209 3750 285 274 2750 390 407 TABLE II. COMPARISON OF CALCULATED AND SIMULATED VALUES OF Q RT [] Q [-] Calculated Simulation 750 0.472 0.364 500 0.708 0.606 250 1.42 1.17 100 3.54 2.81 10 4 10 5 10 6 10 7 -70 -60 -50 -40 -30 -20 -10 0 10 Frequency [Hz] Gain[dB] HP (OUT2) BP (OUT3) LP (OUT6) BS (OUT2+OUT6) Fig. 6. Output responses of high pass, low pass, band pass and band pass reject 10 4 10 5 10 6 10 7 -500 -400 -300 -200 -100 0 100 200 Frequency [Hz] Phase[°] HP+ BP- LP+ BSFig. 7. Phase responses of high pass, low pass, band pass, band stop 69 Studentská konference Zvůle 2014 10 4 10 5 10 6 10 7 -45 -40 -35 -30 -25 -20 -15 -10 -5 0 5 Frequency [Hz] Gain[dB] f0 = 407 kHz f0 = 274 kHz f0 = 209 kHz f0 = 175 kHz Fig. 8. Demonstration of controlling of the pole frequency f0 in case of BP 10 4 10 5 10 6 10 7 -50 -40 -30 -20 -10 0 10 Frequency [Hz] Gain[dB] Q = 2.81 Q = 1.17 Q = 0.606 Q = 0.364 Fig. 9. Demonstration of controlling of the quality factor Q of BP filter V. CONCLUSION In this paper a new design of non-differential 2nd-order filter using signal-flow graphs method is presented. The filter is designed as a multi-functional. All the filtering function, except all pass filter, could be obtained by proper connecting of the inputs and outputs. 
The filter is designed so that pole frequency and the quality factor of the filter can be changed independently. Pole frequency can be changed with two active elements MOTA, which are realized in this implementation by UCC. Changing of appropriate resistors of MOTA, their transconductance is changed and the pole frequency of the filter is changed too. Quality factor can be changed by transimpedance active element TIA, which is for the time being and in this form designed and tested only in non-differential form. In this active element changing value of the resistor RT, quality factor of proposal frequency filter is changed too. Functions of the proposed filter and the possibility of controlling of the pole frequency and the quality factor were verified using PSpice simulations. ACKNOWLEDGEMENT Research described in the paper was supported by Czech Science Foundation project under No. 14-24186P and by internal grant No. FEKT-S-14-2352. The described research was performed in laboratories supported by the SIX project; the registration number CZ.1.05/2.1.00/03.0072, the operational program Research and Development for Innovation. REFERENCES [1] J. Koton, K. Vrba, “Designing of Frequency Filters Using Autonomous Circuit With a Full Admittance Network,” (Návrh kmitočtových filtrů pomocí autonomního obvodu s úplnou sítí admitancí), Elektrorevue Internet Journal (http://www.elektrorevue.cz), No. 33, 2005. [2] J. Koton, K. Vrba, “Zobecněné metody návrhu kmitočtových filtrů,” Elektrorevue - Internetový časopis (http://www.elektrorevue.cz), 2008, roč. 2008, č. 26, s. 1-17. ISSN: 1213-1539. [3] P. Brandstetter, L. Klein, “Design of Frequency Filters by Method of Synthetic Immittance Elements with Current Conveyors,” In Proc. International Conference Applied Electronics (AE), Pilsen, Czech Republic, Sep. 2012, pp. 37 - 40. [4] R. SPONAR, “High-Order Imittance Synthetic One-port Elements in Frequency Filters with Current Conveyors,” (Syntetické dvojpólové prvky s imitancemi vyšších řádů v kmitočtových filtrech s proudovými konvejory), Elektrorevue – Internet Journal(http://www.elektrorevue.cz), 2004, No. 13, ISSN 1213-1539. [5] J. Jerabek, K. Vrba, “The Proposal of Tunable Frequency Filter With Current Active Elements Using Signal-Flow Graphs Method,” (Návrh přeladitelného kmitočtového filtru s proudovými aktivními prvky za pomoci metody grafu signálových toků), Elektrorevue - Internet Journal (http://www.elektrorevue.cz), No. 42, 2009, pp. 42-1 - 42-7. [6] K. Intawichai, W. Tangsrirat, “Signal-Flow Graph Realization of nthOrder Current-Mode Allpass Filters Using CFTAs,” In Proc. The 10th International Conference Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON), Krabi, Thailand, May 2013, pp. 1 - 6. [7] R. Sponar, K. Vrba, “Measurements and Behavioral Modelling of Modern Conveyors,” International Journal of Computer Science and Network Security, Vol. 3A, Issue 6, 2006, pp. 57-63. [8] K. Vrba, J. Jerabek, “Vybrané vlastnosti universálního proudového konvejoru - ukázka návrhu aplikace,” Elektrorevue - Internetový časopis (http://www.elektrorevue.cz), 2006, roč. 2006, č. 33, s. 1-4. ISSN: 1213- 1539. [9] K. Vrba, J. Jerabek, “Filters Based on Active Elements with Current Mirrors and Inverters,” International Transactions Communication and Signal Processing, Vol. 1, Issue 8, 2006. [10] J. Jerabek, K. 
VRBA, “Design of Fully Differential Filters with Basic Active Elements Working in the Current Mode,“ Elektrorevue Internetový časopis (http://www.elektrorevue.cz), 2010, roč. 2010, č. 87, s. 1-5. ISSN: 1213- 1539. [11] J. Jerabek, “Kmitočtové filtry s proudovými aktivními prvky,“ doktorská práce, Brno: Vysoké učení technické v Brně, Fakulta elektroniky a komunikačních technologií, Ústav telekomunikací, 2011. 147 s. 70 Studentská konference Zvůle 2014 Measurement of Optical Signals Emitted by the Energetic Materials during Detonation Martin Pospisil, Roman Marsalek, Ales Prokes Department of Radio electronics Brno University of Technology Technicka 12, Brno, Czech Republic Email: xpospi29stud.feec.vutbr.cz Jiri Pachman, Jakub Selesovsky Institute of Energetic Materials Faculty of Chemical Technology University of Pardubice Studentska 95, Pardubice, Czech Republic Email: jiri@pachman.eu Abstract—This paper deals with the measurement of the optical signals generated by the energetic materials during their detonation. The optical signal is sensed using a set of optical fibers positioned at different places along the measured material. The paper presents the analysis and first steps of design of the optical-electrical converters. The measured results for selected materials are finally presented together with the analyzed velocity of detonation of selected material and the comparison of results with the observations from a high-speed camera. I. INTRODUCTION The velocity of detonation (VOD) is one of the key parameters of the energetic materials, such as explosives. There are several methods to measure the VOD. The possible methods can be classified as continuous [1] or discontinuous. One of the used methods is a discontinuous method, based on the set of optical probes placed at defined positions along the expected direction of detonation. The VOD measurement is based on the estimation of time necessary to pass the detonation wave between neighboring sensors. Several commercial instruments for VOD measurements using the optical method are currently available in market and some papers on this topic can be found in the literature, such as [2] or [3]. For the correct function of any measurement instrument, it is necessary to know how the signals from the probes look like. The aim of the presented research was thus to design a simple optical-electrical converter suitable for the presented application and observe the real measured signals on the oscilloscope in order to get a knowledge on the character of the optical information and potential parasitic effects of various explosives. This paper is structured as follows. In the section II, we present possible optical-electronic components and designed converter. In section III the experiment is described together with selected results from the measurements and short comparison of the estimated VOD with the results from high-speed camera. The section IV rounds up the paper. II. OPTICAL-ELECTRICAL CONVERTER BOARD In order to choose the best concept for the optical-electrical converter board, three solutions have been tested. The first used an optical receiver OPF 562 while the others used the PIN photodiods OPF482 or SFH250 in combination with a transimpedance aplifier AD 8015, see the basic block schematic in Fig. 1. The parameters of several selected optical receivers and TABLE I. 
BASIC PARAMETERS OF SELECTED FIBER OPTIC RECEIVERS Component housing fiber connector λ BW [nm] [MHz] OPF 2416 plastic 62.5 (50)/125 µm ST 850 125 OPF 2418 plastic 62.5 (50)/125 µm ST 850 155 OPF 561 metal 62.5 (50)/125 µm SMA 840 125 OPF 562 metal 62.5 (50)/125 µm ST 840 125 TABLE II. BASIC PARAMETERS OF SELECTED PIN PHOTODIODS Component fiber connector λ tr R [nm] [ns] [A/W] SFH 250 V POF 980/1000/2200 µm V-housing 850 3 0.4 IF-D91 POF 1000 µm V-housing 920 5 0.5 OPF 482 50(62.5)/125 µm ST 850 1 0.55 PIN photodiods are summarized in Table I and II, respectively. The equivalent optical noise input power of the receiwers is -43 dBm, dynamic range 34.7 dB. For the photodiods, tr is the output rise time and R the responsivity. The optional lowpass filters at the output of converters are designed to limit the signal bandwidth to selected value. A design of first testing PCB is shown in Fig. 2. A dynamic range of the first testing prototype with the amplifiers AD8051 was not sufficient (this device is supposed to be used for high-speed data transmission where the nonlinearity is not an issue), so the boards have then be redesigned using the AD 8021 amplifiers. A photograph of the converter board used in the practical experiments is shown in Fig. 3. III. MEASUREMENT AND RESULTS A. Measurement setup In the first setup, the signal from three optical fibers have been converted to the electrical signals using the designed optical-electrical converter board and the electrical signals have been recorded using a 4-channel oscilloscope Tektronix DP0 7254. The setup is schematically depicted in Fig. 4. B. Selected results From a wide set of measured materials, two example signals have been chosen and are shown in Fig. 5 and in Fig. 6. As the detonation wave travels along the measured material, it subsequenttly generates the optical peaks at the optical fiber position. For some of the materials, e.g. Tetryl 71 Studentská konference Zvůle 2014 Fig. 1. Basic schematic of first version of opto-electric detectors Fig. 2. A PCB of the first test version of opto-electric detectors (Fig. 5), the measured signals contain clean peaks. On the contrary, the signals generated by some materials contain the additional unwanted peaks, as can be seen on the example of RDX/Al material (Fig. 6). Although the most of the available optical receivers and PIN diodes work with the wavelength of 850 nm., we also wanted to compare the optical signals in the 850 nm. with the 1550 nm. band. For that, the oscilloscope probes Tektronix 6701 (500-950 nm.) and 6703 (1100-1700 nm.) have been used. The selected results of such experiment (with Tetryl as material under test) are shown on Fig. 7. In this case, the optical power for 500-950 nm. band was twice higher than for 1100-1700 nm. band. As the alternative to the method of VOD measurement described above, the VOD has also been determined using the ultra high speed camera UHSi 12/24 produced by the Invisible Vision company that can capture the images with a nanosecond resolution. A snapshot of 24 frames (possible limit) has been used. The comparison is shown in Fig. 8 for three detonations of Semtex 1A material. On the same picture, the low and high limits of the VOD determined using the above mentioned Fig. 3. A photo of new version of the optical-electrical converter board used during the practical measurements Fig. 4. A block schematic of a measurement setup method are marked with the horizontal lines. 
Except some deviations at the beginning of the record, the both methods results are very close. IV. CONCLUSIONS This paper presents the concept of simple optical-electrical converter for the measurement of signals emitted by the energetic materials during detonation. Using the receiver, the measurement of several energetic materials has been performed. It has been verified that some of the materials (e.g. Tetryl, detonation cord etc.) generate quite clean optical impulses, while it is not a case for other materials as e.g. RDX/Al or Semtex 1A. The measurement for two different wavelengths have also been compared and for a selected material, the optical power in 500-950 nm. band was twice higher than for 1100-1700 nm. band. The measured signals have then been used to compute the velocity of detonation of examined materials. The results are in good agreement with the velocities determined from the ultra high speed camera records. 72 Studentská konference Zvůle 2014 Fig. 5. Measured signals at three different positions for Tetryl material Fig. 6. Measured signals at three different positions for RDX/Al material ACKNOWLEDGMENT The research described is a part of the project funded by the Technology agency of the Czech Republic TA02010923 OPTIMEX Optical measurement of explosions. A design of electrical components was performed in the laboratories of the SIX research center, reg. no. CZ.1.05/2.1.00/03.0072. Thanks also for a support by the internal project FEKT-S-14-2177 (PEKOS). REFERENCES [1] Benterou J. et al., In-Situ Continuous Detonation Velocity Measurements Using Fiber-optic Bragg Grating Sensors, EuroPyro 2007, 34th International Pyrotechnics Seminar, 2007, Beaune, France. Fig. 7. Measured signals by the oscilloscope probes for 850 nm (top) and 1550 nm (bottom) Fig. 8. Comparison of the VOD measured by the high-speed camera and VOD measured from the signals using the optical probes (low and high limit marked using the yellow lines) for SEMTEX 1A material [2] Wang Xiaoyan, Zhao Hui, Wu Jian, Wang Gao, Design of the fiber Detonation velocity measuring system based on the FPGA,Electronics and Optoelectronics (ICEOE), 2011 International Conference on , vol.4, no., pp.V4-29,V4-32, 29-31 July 2011 [3] Chan, E. M., Lee, V., Mickan, S. P. Davies, P. J., Low-cost optoelectronic devices to measure velocity of detonation, Small Structures, Devices, and Systems II, Proceedings of SPIE Vol. 5649 (SPIE, Belingham, WA, 2005), p. 586-594 [4] Tete A.D., et al., Velocity of detonation (VOD) measurement techniques practical approach, International Journal of Engineering and Technology, Vol. 2 (3), 2013, p. 259-265. 73 Studentská konference Zvůle 2014 of Stereoscopic Video Formats with half resolution (Side-by-Side, Below-Under, etc.) Martin Šindelář Department of applied electronics and telecommunication University of West Bohemia Pilsen, Czech Republic sindel23@kae.zcu.cz Abstract— This article is focused mainly on errors, which are caused usually during recording of stereoscopic videos and which are for example: incorrect synchronization between two cameras, unequal configuration of both cameras, etc. But here are described some errors, which are caused during compression and displaying, too. Among this errors belongs mainly not using 3D specified codecs (most frequently used MPEG-2, MPEG-4, WMV, etc.) and breaking of recommended viewing condition. 
Key words— Stereoscopic video, errors; recording of stereoscopic video; compression of stereoscopic video; synchronisation betweeen two channels; time delay; compression pitfalls; viewing conditions I. INTRODUCTION Since 1832, when principle of stereoscopy was discovered by Charles Wheatstone, a lot of issues were solved, how to stereoscopically (three-dimensional) record the desired scene as good as possible. And how to reproduce later this scene the best way, so that the resulting three-dimensional effect were most authenticable. In this period until the end of 19th century, however were solved only issues related to static image. With the beginning of color photography in the mid of 20th century and later with the invention of Anaglyph, questions still increased. And there was no other choice than with help of many and many series of subjective tests, searching the most precise answers to all these questions. In this article, we will seek answers to two questions concerning the recording of stereoscopic videos which are: “The need and dependency of synchronization between two cameras (recording methods with two recording equipment)” and “The need for the same initial setup of the two cameras”. Then here will be outlined also the answer to the question concerning the processing of stereoscopic video. It is sometimes assumed too much that stereoscopic videos stored so-called "in one frame" (side-byside, below-under, etc.) They can be cut, modified and sometimes even compressed by using current methods, which have been developed for the needs of the current "2D" image, and their use in combination with 3D video can clog into the picture a lot of bugs and disruptive artifacts. II. SYNCHORNISATION BETWEEN TWO CAMERAS Mutual synchronization of two recording equipment, with which we want to record the three-dimensional non-static scene is one of the most critical requirements, which is put on every stereo system based on a stereo camera pair. And therefore it is very necessary to know the size of the acceptable delay, which can occur between the start of recording on both cameras depending on the speed of motion of the scene (generally on dynamic in the image). In professional recording devices the synchronization is mostly secured by a trigger generator, from which impulses lead to the synchronization inputs of both cameras. And these can be executed with precision in range of ms. On the assumption of recording in current (home) conditions, we do not usually have cameras with a synchronization input and the start of recording can be solved either completely separately, or for example by using the remote controls, which commercial cameras are mostly equipped with (it is expected the possibility of controlling of both cameras using only one remote control). In this case a mutual random delay occurs, which is influenced by several factors. For example: If the signal on the IR sensor elements of both cameras arrived from the controller directly, using one or more reflections, what was the difference in the camera is turned on etc. In order to measure the maximum size of a delay and in order to determine its effect on the perceived quality of the stereoscopic video a total of 6 tests was prepared, which were divided into three categories according to the distance between the camera system and the object. This object was moving vehicle, which always moved with the same speed, concretely 50kmph and always from the left to right and the other way around. 
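For orientation, a back-of-the-envelope figure not computed in the paper: the horizontal displacement the test vehicle undergoes during a one-frame synchronization error indicates how differently the two cameras capture the scene.

```latex
% Displacement of a 50 km/h object during a one-frame (40 ms) synchronization error
% (illustrative arithmetic, not taken from the paper)
\Delta s = v\,\Delta t = \frac{50\ \mathrm{km/h}}{3.6}\times 0.040\ \mathrm{s}
\approx 13.9\ \mathrm{m/s}\times 0.040\ \mathrm{s} \approx 0.56\ \mathrm{m}
```

At shooting distances above about 15 m this shift acts mostly as an unintended change of the effective stereo base, which matches the observation below that a small random delay occasionally even improved the MOS.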
Because during several consecutive starts of recording the size of non-synchronization was only in the range 0 - 50 ms, as we can see in Table I., each of tests were appended by videos, that their non-synchronization was in a graphics editor artificially increased to a value of one and two frames (40ms a 80ms). Each of the tests thus contained a reference sequence (with minimal delay), the sequence with random delay in 74 Studentská konference Zvůle 2014 Errors in Recording and Compression range 0 – 50ms, and sequence with artificially created delay of 1, 2 respectively 3 frames. TABLE I. TIME DELAY BETWEEN STARTS OF RECORDING Number of start Delay t [ms] t [frames] 1 0.32 0.008 2 -21.11 -0.52775 3 5.22 0.1305 4 17.32 0.433 5 50.60 1.265 6 -17.62 -0.4405 7 0.80 0.02 8 33.16 0.829 9 -21.50 -0.5375 10 -5.41 -0.13525 A negative value shows the delay of the right channel towards the left channel. From the following graph, which shows us the results of subjective evaluation, we can see that the Mean Opinion Score (MOS) decreases according to the assumption with the growing mutual delay. It can be stated that at speeds up to 50kmph and distances greater than 15 meters, the acceptable delay is up to 40ms (in our case one image). From the chart it is also visible that in some cases was evaluated better a video with random time delay then the movie without delay. This indicates that the stereo-base setup stationary at 7.5 cm was not ideal for greater distances and its increasing using time shift resulted in improved MOS. Fig. 1. MOS depending on mutual delay between two cameras III. CONFIGURATION OF TWO CAMERAS On the assumption of the same recording assembly (with two cameras) as in Chapter II., another problem occurs with perfectly same behavior on both cameras mounted on tripod throughout all times of the recording. For example, error in differences of excitation of sensor element amplifier, alias the automatic correction of brightness, which may have very unpleasant effect, when the observer perceives the first scene brighter than the second. It results in reduction in the level of visual comfort and in fatigue of the observer´s eye. In Fig. 2, we can see an example of this error that occurs mostly when lighting changes (dimming, cloudy sky, etc.). Fig. 2. Error caused by autocorrection of brightness Another error which we can very often encounter is reflected in the image as unpleasant freezing of moving objects. This is mostly caused by one of the many types of stabilization, which by setting a horizontal displacement in the image (the stereo-base) reacts differently. This error affects the overall impression of the record, especially on the quality of visual comfort, defined by ITU-T.BT-2021. The remaining two parameters, image quality and quality of depth are reduced by that the least. And their greater falling happens only in the case of compression thus taken stereoscopic records. The first prerequisite to prevent these and several other potential error is to always use the same recording devices (one producer, one type and if is it possible one series). The second rule then is not using any automated process directly in the recording equipment and also not using stabilization. It is advisable indeed, not to use either auto focus, due to their different responses, there may be a momentary blur in each channel differently. IV. ERRORS CAUSED DURING COMPRESSION Another group of errors which we will discuss are errors caused during video compression. 
Especially errors caused by using compression, which does not expect stereoscopic video processing. Very often we see that stereoscopic records, imposed the so-called "in one frame" (eg.: Side-by-Side, Below-Under, etc.) are compressed by using existing compression algorithms used in distribution network. Yes, this has an advantage that the distribution network will remain the same for both systems. Also stereoscopic signal is restored from “one-frame” arrangement and further processed in the terminal equipment by the viewer. But are these errors caused by that negligible? Are they really not perceived by a viewer? The answer to this question brings us the following subjective test, which consists of a total of 10 videos. These 10 videos are different partly in the contents. In the first video one channel contains only pure white image and the second channel contains only pure black image. And in second video the content was a striped wallpaper behind 3D object. These movies were compressed by using MPEG-2 codec into the resulting bit rates of 512kbps, 1Mbps, 1.5Mbps, 2Mbps a 2.5Mbps. 75 Studentská konference Zvůle 2014 Fig. 3. Crossover between left and rigth frame Each of evaluators had then tasked to focus on margins of screen and search at them unnatural artefacts and their occurrence then evaluate by using of classical five-point scale, which is defined in recommendation ITU-T.BT-2021. The results of the evaluation, which are relatively straightforward to determine show, that in static scene not be seen no specific artefacts at the margins of screen caused by improper type of compression. Or these artefacts are hidden behind the frame in front of the display. Influence of compression and followed blurring of margins, which is shown in detail in Figure 3, has resulted in only a faulty automatically detect of 3D signal. That was in the test TVs Philips 42PFL7666T/12 and Panasonic TX-L42ET50 practically impossible. However, the question remains whether this conclusion is valid for dynamic scenes, too. And therefore was prepared one more subjective test, which contain already only one type of content (again moving vehicle) and this was presented again in five different levels of compression (500kbps – 2.5Mbps). Fig. 4. MOS in depend on bitrate (evaluated the crossover between frames) From the evaluation of this test, which is graphically illustrated in Fig. 4, we can see that image quality on the margins of dynamic scenes is strongly dependent on the bit rate. While exceeding down the limit 1Mbps, on the margins of the image began to appear distracting artefacts caused by the diffusion of the left and right channel at the center of the image (ordering Side-by-Side). In Fig. 5, where these artifacts are shown, we can see that when the blue car moving in the left channel from left to right achieve transition border, so cause visible distortion in the right channel. Fig. 5. Errors on crossover between left and rigth frame It is therefore important before carrying out any compression of stereoscopic images, know the principle, the resulting compression rate and behavior towards to the stereoscopic image. For MPEG-2 there can be also set a limit 1Mbps, that will be respected. There will be no distortion at the crossing between frames either none or hardly noticeable to the naked eye. V. CONCLUSION Using a series of subjective tests it was found how to influence several possible mistakes, which we often commit during the recording and processing of stereoscopic video. 
V. CONCLUSION

A series of subjective tests was used to examine the influence of several mistakes that are often made during the recording and processing of stereoscopic video. The first investigated error was the time shift between the starts of two cameras mounted on a stereoscopic tripod. When recording is started with the remote control, this delay is usually at most 50 ms. It can be reduced, for example in any graphical editor, by aligning the two recordings according to the audio track, down to at most half of the duration of one frame – with 25 fps only 20 ms – which, for object speeds below 50 km/h and shooting distances greater than 15 m, has no excessive influence on the quality of the stereoscopic video. The next part of the text highlighted several possible errors resulting from an incorrect initial setup of the two cameras; to prevent them, it is generally recommended to turn off all automatic corrections and image stabilization and to focus manually. The final part of the text measured the influence of using an unsuitable codec for processing stereoscopic video, which manifests itself mainly as false artefacts at the crossing between the left and right image. Subjective tests demonstrated that at higher compression ratios the effect is appreciable and degrades not only the visual comfort but also the overall image quality.

REFERENCES
[1] M. Benoit, "Digital Stereoscopy – Scene to Screen 3D Production Workflow", ISBN 978-1-48015709-5, 2013.
[2] M. Šindelář, "Subjektivní hodnocení kvality stereoskopického obrazu", Diploma thesis, University of West Bohemia in Pilsen, 2013.
[3] L. Beran, "Měření synchronizace stereoskopických kamer", Bachelor thesis, University of West Bohemia in Pilsen, 2014.

Basic Time Synchronization Methods in Smart Grids

Ladislav Šťastný, Zdeněk Bradáč
Department of Control and Instrumentation
BUT, Faculty of Electrical Engineering and Communication
Brno, Czech Republic
ladislav.stastny@phd.feec.vutbr.cz, bradac@feec.vutbr.cz

Abstract—This paper discusses the need for time synchronization and its importance in distributed systems such as Smart Grids. Time synchronization covers two main goals, timestamping and planning, which are used to ensure proper operation of every distributed system. Three basic synchronization methods typical for Smart Grids (1-PPS, IRIG-B, GPS) are described together with their main features, advantages and disadvantages.

Keywords—time synchronization, Smart Grids, 1-PPS, IRIG-B, GPS

I. INTRODUCTION

Smart Grids were developed as the next step in the evolution of power distribution networks. A Smart Grid consists of a power distribution network and a communication network; this connection allows the production and consumption of electricity to be regulated. Smart Grids can be considered a nice demonstration of a distributed system: they are based on separate elements whose intelligence and mutual communication ensure the operation, maintenance, monitoring and overall stability of the distribution network. The main parts of Smart Grids are therefore smart devices acting as sensors, actuators, or both. These devices vary in complexity, from simple sensors and actuators with an efficient 8-bit microcontroller and a super-loop program, to multi-core servers with an operating system for wide-area data processing [1].

Besides the technological aspect, the use of Smart Grids also has an economic one. Management and optimization of production and consumption allows more effective operation of power distribution. This can be achieved by offering customers different tariffs depending on the excess or lack of energy.
The old "once-a-year billing" model is slowly being replaced by models based on monthly payments for the actual energy consumption, or by the "prepaid" model popular abroad. The benefits are the possibility of lower prices for the customer, and optimization of the consumption curve and network stability for the distributor. Smart Grids also offer better control of the network itself, faster detection of unauthorized consumption and interference, connecting/disconnecting networks in island mode, better handling of critical events, blackout prevention and a faster black start of the network for the distributor. To utilize all the benefits of Smart Grids, it is important to realize that Smart Grids, like any other distributed system, depend on the mutual coordination of all elements. To achieve this, all elements need some kind of mutual "time awareness".

II. TIME SYNCHRONIZATION

The concept of time synchronization is known in particular from the field of distributed systems, which consist of autonomous nodes that communicate with each other. Each of these nodes has its own local time, generally derived from an internal clock oscillator. However, this oscillator is produced with a certain tolerance and is also temperature dependent. Consequently, the synchronization process cannot be a one-time event. The main objectives of time synchronization are:
- establishment of a common time base across the distributed system,
- maintaining the nodes of the distributed system in synchrony.

A. Reasons for time synchronization

Time synchronization is used in a wide range of applications as a source of the accurate time information needed to keep the system consistent. It is also part of systems which are not normally considered real-time systems but where accurate time information is necessary for their functioning; this includes, for example, file systems, database systems, cash transactions, stock trading, cryptographic operations and planning. In automation, time synchronization is used for time measurement, control and planning of events, coordination and synchronization of parallel processes, system tuning and post-mortem analysis in case of failure. Time information can be used in many different ways, but in general the reasons for time synchronization can be summarized in two main categories:
- timestamps - when an event occurs,
- planning - in which order actions will be done.

Before using any time information, however, it is necessary to consider its quality. This can be done by analyzing the quality and the type of the synchronization method.
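To illustrate why synchronization cannot be a one-time event, consider two nodes with free-running crystal oscillators specified at ±20 ppm (a common, purely illustrative tolerance, not a value taken from any particular device). In the worst case their clocks drift apart by twice the tolerance times the elapsed time, i.e. by up to roughly 144 ms per hour. A short sketch of this back-of-the-envelope calculation:

# Worst-case divergence of two free-running clocks (illustrative values only).
TOLERANCE_PPM = 20           # assumed crystal tolerance of each node
HOUR_S = 3600

worst_case_drift = 2 * TOLERANCE_PPM * 1e-6 * HOUR_S    # seconds per hour
print(f"{worst_case_drift * 1000:.0f} ms per hour")      # -> 144 ms per hour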
B. Quality of synchronization

The quality of synchronization can be viewed from two different angles: as the type of synchronization and as its quality parameters.

1) Type of synchronization: There are two basic types of synchronization:
- local (internal) - characterized by the maximum difference of time between any pair of nodes,
- global (external) - characterized by the maximum difference between the reference time and the time of any node.

Local synchronization is used in applications where the key requirement is that the nodes are mutually synchronized with each other. It is therefore not necessary for this time to reflect real time (such as UTC) in any way; it is sufficient that it is the same at all nodes. In the case of global synchronization, on the other hand, the time information of the nodes must correspond to some external (reference) time, for example UTC or another "human" time system.

2) Quality parameters: The second way of expressing the quality of synchronization is known as "precision and accuracy", i.e. how stable and how accurate the obtained results are. This is best explained in Fig. 1.

Fig. 1. Quality of time synchronization [2]

Every time synchronization deals with three basic challenges that also affect the assessment of its overall quality:
- precision and accuracy,
- reliability - fault models, the number of tolerable errors, ...
- efficiency - the number of nodes, the number of required messages, ...

III. TIME SYNCHRONIZATION METHODS IN SMART GRIDS

Time or clock synchronization can be achieved by various methods. These methods have evolved over time, as have the requirements. There is therefore a wide range of methods which differ in the maximum achievable accuracy, computational cost, demands on the communication channel and other parameters. The choice of the appropriate synchronization method must be subordinated to its purpose and the options for its deployment. The following section describes the methods, and their basic features, which have found application in the area of Smart Grids.

A. 1-PPS

One of the oldest methods of clock synchronization is 1-PPS - one pulse per second. This signal is characterized by pulses shorter than one second (typically 100 ms) with sharp rising and falling edges, repeating exactly every second; an example of a 1-PPS signal is shown in Fig. 2. Although it is a simple method that does not carry absolute time information, it is especially popular for local clock synchronization. Thanks to its simple implementation, this method can be found in various frequency standards, radio beacons, GPS receivers and other types of precision oscillators.

Fig. 2. Example of a 1-PPS signal

Main features of 1-PPS [3]:
+ simple
+ smallest jitter of the mentioned methods
- does not provide absolute time information
- does not compensate the propagation time of the signal
- requires a cable / additional channel

Nowadays this method is usually found in combination with precise time obtained from GPS. A GPS receiver provides information about the absolute time over a serial interface, but serial communication can, in principle, achieve only limited accuracy. Therefore, high-quality GPS receivers are equipped with a dedicated output pin for the 1-PPS signal, whose edges mark the starts of seconds with minimal jitter. This allows the GPS receiver to provide absolute and also very accurate time information. When cheaper GPS receivers with a larger jitter on the PPS output are used, the jitter can be reduced by various techniques, as discussed in [4]. Reference [3], which also deals with synchronization methods in substation automation, states that thanks to the minimal jitter of this method it is possible to obtain a standard deviation of less than 2 ns over a 1000 m optical cable.
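How a 1-PPS signal is typically used can be illustrated by a small simulation: a local clock running slightly fast is re-aligned at every PPS edge, so its error never exceeds roughly the drift accumulated over one second plus the jitter of the edge. This is only a conceptual sketch with made-up numbers, not a model of any particular receiver or device.

import numpy as np

rng = np.random.default_rng(1)
drift = 30e-6            # local oscillator assumed to run 30 ppm fast
pps_jitter_s = 100e-9    # assumed 100 ns RMS jitter of the PPS edge

local_error = 0.0        # local clock error against the reference time [s]
max_error = 0.0
for second in range(3600):                 # one hour of operation
    local_error += drift * 1.0             # error accumulated over 1 s
    max_error = max(max_error, abs(local_error))
    # at the PPS edge the clock is re-aligned, limited only by edge jitter
    local_error = rng.normal(0.0, pps_jitter_s)

print(f"maximum error between corrections: {max_error * 1e6:.1f} us")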
B. IRIG-B

IRIG, the Inter Range Instrumentation Group, has standardized various time formats; one of the latest is the IRIG standard 200-04. This standard defines the time codes A, B, D, E, G and H, which mainly differ in the time frame length, ranging from 0.1 s up to 1 hour. These codes standardize a serial form of date and time and thus allow time synchronization between devices from different manufacturers. The most common time code of this standard is IRIG-B, which may be used at logic level (unmodulated) or as an amplitude-modulated (AM) signal on a 1 kHz carrier wave. Each bit is represented by a 10 ms period starting with a high level: for bit 0 the high level lasts 2 ms, for bit 1 it lasts 5 ms, and for the P bit 8 ms. The P bit is used as a separator of the individual items within a single frame. The time code contains the seconds, minutes, hours and the number of days of the year, all in BCD format; it also contains the number of seconds of the day and control bits, but these are in binary form.

However, the synchronization accuracy of this method depends heavily on its implementation in the devices. Reference [3] compares 1-PPS, IRIG-B and PTP; two IRIG-B master devices were used during the tests and various jitters were observed. The reference also states that the standard deviation is about 120 times greater for IRIG-B than for the 1-PPS signal over the same transmission medium. IRIG-B has also found application in the area of Smart Grids [3], [5]. In practice, one often finds a solution with an IRIG-B master device equipped with a GPS receiver to obtain a reference time. The time is then distributed to the individual devices using the IRIG-B standard, which is supported by many manufacturers of substation automation equipment. Its disadvantage is the inability to provide information about the origin of the time, which is required by IEC 61850 used in distribution networks.
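The pulse-width coding described above (2 ms for bit 0, 5 ms for bit 1, 8 ms for a position marker, each within a 10 ms slot) can be sketched for a single BCD-coded field as follows. This toy encoder only demonstrates the bit-to-pulse mapping stated in the text; it deliberately does not reproduce the complete IRIG-B frame layout, and all names are our own.

def bcd_bits(value, digits):
    """Encode an integer as BCD, least significant digit first,
    4 bits per digit, least significant bit first within each digit."""
    bits = []
    for _ in range(digits):
        digit = value % 10
        bits.extend((digit >> i) & 1 for i in range(4))
        value //= 10
    return bits

def pulse_widths_ms(bits):
    """Map bits to IRIG-B style high-level durations within 10 ms slots."""
    return [5 if b else 2 for b in bits]

# Example: a seconds value as two BCD digits, followed by a position marker.
seconds = 37
widths = pulse_widths_ms(bcd_bits(seconds, digits=2)) + [8]
print(widths)   # high-time in ms for each 10 ms slot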
C. GNSS (GPS)

GNSS, the Global Navigation Satellite System, is, as the name suggests, a system of satellites with global coverage which provides information about location and precise time. The two best known GNSS are the American GPS and the Russian GLONASS. The time accuracy achieved via GPS is better than 100 ns, and recent results are even better than 10 ns [5]. This accuracy is more than sufficient for the majority of devices and also for industrial use, including Smart Grids [6]. GPS is the preferred source of precise time in substations and devices for electricity distribution; for these devices there is no pressure on price, so they are usually equipped with a GPS receiver to ensure precise time synchronization.

GPS might seem like a perfect solution for any time synchronization need, but the reality is different. GPS is suitable for installations where accurate time is absolutely critical for proper operation; in such cases the higher price of the GPS receiver and the necessity of placing the antenna so that it can "see" the sky are acceptable. Antenna placement is often problematic for devices located in buildings, basements, or even underground. The solution in this situation is to use an appropriately positioned GPS receiver to obtain precise time and then distribute this information by a different, wire-based synchronization method (IRIG-B, NTP, PTP, ...). In substation automation, for example, it is often the IRIG-B signal that is distributed to the individual devices.

IV. CONCLUSION

This paper discussed three basic time/clock synchronization methods used in Smart Grids: 1-PPS, IRIG-B and GPS. 1-PPS is the simplest method with minimal jitter, but it does not provide absolute time synchronization, and because it lacks propagation delay compensation it is suitable only for shorter distances. IRIG-B is a standardized time code that represents time and date in serial form and is accepted by different manufacturers. As the cited reference states, the quality of this synchronization depends on its implementation in the device, but in general both the standard deviation and the jitter are worse than for 1-PPS. At first sight, GPS may look like the perfect time synchronization method for any situation, but it has two disadvantages: the price of the GPS receiver and the need for an almost direct view of the sky to receive the signal. These are problems for mass deployment in Smart Grid devices. Future work will therefore focus on a suitable synchronization method for end devices, which usually use power-line communication.

ACKNOWLEDGEMENT

This work was supported also by the project "TA02010864 - Research and development of motorized ventilation for the human protection against chemical agents, dust and biological agents" and the project "TA03020907 - REVYT - Recuperation of the lift loss energy for the lift idle consumption" granted by the Technology Agency of the Czech Republic (TA ČR). Part of the work was supported by the project "FR-TI4/642 - MISE Employment of Modern Intelligent MEMS Sensors for Buildings Automation and Security" granted by the Ministry of Industry and Trade of the Czech Republic (MPO).

REFERENCES
[1] L. Franek, L. Šťastný, P. Fiedler, "Operating systems for smart metering devices," in International Interdisciplinary PhD Workshop 2013, 2013, pp. 227–231, ISBN 978-80-214-4759-2.
[2] J. R. Vig, "Quartz crystal resonators and oscillators for frequency control and timing applications – a tutorial," in 2004 IEEE International Frequency Control Symposium Tutorials, 2004.
[3] D. Ingram, P. Schaub, D. Campbell, "Evaluation of precision time synchronisation methods for substation applications," in Precision Clock Synchronization for Measurement Control and Communication (ISPCS), 2012 International IEEE Symposium on, Sept 2012, ISSN 1949-0305, pp. 1–6, doi: 10.1109/ISPCS.2012.6336630.
[4] L. Gasparini, O. Zadedyurina, G. Fontana, "A digital circuit for jitter reduction of GPS-disciplined 1-PPS synchronization signals," in Advanced Methods for Uncertainty Estimation in Measurement, 2007 IEEE International Workshop on, July 2007, pp. 84–88, doi: 10.1109/AMUEM.2007.4362576.
[5] J. Aweya, N. Al Sindi, "Role of time synchronization in power system automation and Smart Grids," in Industrial Technology (ICIT), 2013 IEEE International Conference on, Feb 2013, pp. 1392–1397, doi: 10.1109/ICIT.2013.6505875.
[6] A. Carta, N. Locci, C. Muscas, "A flexible GPS-based system for synchronized phasor measurement in electric distribution networks," Instrumentation and Measurement, IEEE Transactions on, Nov 2008, pp. 2450–2456, ISSN 0018-9456, doi: 10.1109/TIM.2008.924930.

Comparison of computational methods in FEKO software

Jan Velim
Department of Radio Electronics, Brno University of Technology
Email: velim@phd.feec.vutbr.cz

Abstract—This paper aims to introduce the fundamental differences between full-wave methods and asymptotic methods in the FEKO software. The first part gives a theoretical description of the Method of Moments and of Geometrical Optics. Next, the available settings of both methods in the FEKO software are shown. In the last part the achieved results are discussed. Only simple numerical models are used for the computations.

I. INTRODUCTION

In past centuries, scientists tried to describe natural phenomena mathematically. Many equations were related to electrical and magnetic phenomena. James Clerk Maxwell unified the previous formulas, and so a new field named electromagnetism was created. In his honor, we call its governing equations Maxwell's equations.
Nowadays we try to go in the opposite direction: the aim is to determine the behavior of the electromagnetic field around given geometric objects. Many kinds of methods have been developed for the computation of electromagnetic fields. Methods based directly on Maxwell's equations are collectively called full-wave methods (e.g. the Method of Moments, the time-domain finite-element method, the finite-difference time-domain method, ...). Because today's computers still have limited computational performance, it is necessary to use methods based on approximations for the computation of electrically large objects. These methods are collectively called asymptotic methods (e.g. Geometrical Optics, Physical Optics, ...).

II. THEORY

The following parts of the article deal with the settings of Geometrical Optics and the Method of Moments as well as with the results obtained by these methods. Therefore a brief introduction to the theory of both methods is given first.

A. Method of Moments

The Method of Moments (MoM) is also known as the method of weighted residuals. We start from an analytical expression of the physical effect [1]

    L(f) = g,    (1)

where L is typically an integro-differential operator, f is the unknown function (charge, current) and g is the known excitation source [1]. The unknown function f is replaced by the formal approximation f_a in (2), which is a sum of N known basis functions f_n weighted by N unknown coefficients a_n:

    f_a = \sum_{n=1}^{N} a_n f_n.    (2)

After substituting f_a for f in (1), we obtain (3); the difference between the left-hand and right-hand side of (3) is called the residue R, see (4):

    \sum_{n=1}^{N} a_n L(f_n) \approx g,    (3)

    R = \sum_{n=1}^{N} a_n L(f_n) - g.    (4)

Depending on the dimensionality of the solved problem, the residue R is defined over a surface or a volume Ω. To minimize the residue, the integral equation is multiplied by an appropriate weighting function W, and a linear system of equations is obtained by evaluating the integral equation (5):

    \int_{\Omega} W \cdot R \, d\Omega = 0.    (5)

B. Geometrical Optics

Geometrical Optics (GO) is a long-known method that has been used extensively in the optical industry [2]. Even after the establishment of Maxwell's equations this method was not displaced completely; it was used in cases where the wavelength could be considered as approaching zero in the limit. Today's numerical solvers use GO to decrease computational demands. Unfortunately, there is no rigorous mathematical bridge between GO and Maxwell's equations yet, and therefore the accuracy of GO is difficult to evaluate. The energy between two arbitrary points is transferred by a tube of rays, as illustrated in Fig. 1. If we assume a lossless environment, formula (6) must hold, where S denotes the radiation density at a given distance and dA the cross-sectional area of the tube:

    S_0 \cdot dA_0 = S \cdot dA.    (6)

If the wave impedance Z_W of the medium is known, the radiation density S and the electric field intensity E are related by (7) [3], [4]:

    S = \frac{1}{2 Z_W} |E|^2.    (7)

Fig. 1. Tube of rays for a spherically radiated wave. [3]

III. PROPERTIES OF SOLVERS

To apply the numerical methods described above, it is possible to use one's own solver or commercial software. The advantage of commercial solvers is their universality, but on the other hand it is necessary to set the solver up for the specific model. Therefore this part of the text shows the basic settings of the FEKO software. The main advantage of this software is the possibility of combining a full-wave method with an asymptotic method in the same model.
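Before turning to the FEKO settings, the weighted-residual procedure of Section II-A can be made concrete with the classic electrostatic example of a thin straight wire held at a constant potential, solved with pulse basis functions and point matching. This is an elementary illustration only, not the formulation used internally by FEKO; here L is the static integral operator relating the line charge density to the potential, the a_n are the unknown segment charges and the matching points act as the weighting functions.

import numpy as np

eps0 = 8.854e-12
L_wire, a = 1.0, 1e-3        # wire length [m] and radius [m]
N = 40                        # number of pulse basis functions (segments)
V = 1.0                       # potential forced on the wire [V]

edges = np.linspace(0.0, L_wire, N + 1)
centres = 0.5 * (edges[:-1] + edges[1:])     # point-matching locations

# Z[m, n]: potential at centre m due to a unit charge density on segment n,
# using the analytic integral of 1/sqrt((x_m - x')^2 + a^2) over the segment.
Z = np.empty((N, N))
for m in range(N):
    u1 = centres[m] - edges[:-1]
    u2 = centres[m] - edges[1:]
    Z[m, :] = (np.arcsinh(u1 / a) - np.arcsinh(u2 / a)) / (4 * np.pi * eps0)

rho = np.linalg.solve(Z, np.full(N, V))      # charge density per segment
total_charge = np.sum(rho * np.diff(edges))
print(f"total charge: {total_charge:.3e} C")  # equals capacitance times 1 V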
A. Method of Moments

In FEKO the Method of Moments is set as the default. To use other methods it is necessary to change the method manually in the Solver settings window, as shown in Fig. 2. If high frequencies are computed, the low-frequency stabilization should be disabled to decrease memory demands. Since the geometry of the modeled structures is diverse, it is sometimes better to use a higher-order approximation of the basis functions over the element. FEKO enables the use of functions of order 0.5, 1.5, 2.5 and 3.5, which can be set globally or locally on selected areas. If the user does not want to guess which order of the basis functions to choose, the Auto option can be selected; with this choice it can be specified whether low, normal or high orders of the basis functions are preferred.

B. Geometrical Optics

As mentioned above, in GO the energy is transferred in a tube of rays. The density of rays can be set manually in FEKO. A model may contain a point source or a plane wave as the source; according to the source type, either the angle or the distance between neighboring rays is set. The chosen value should depend on the wavelength of the source. In some cases it is necessary to consider reflections and diffractions of the rays; these effects share a common setting named Maximum number of ray interactions.

Fig. 2. General tab of the Solver settings. [5]

IV. PRACTICAL TESTS

Simulation results are compared in this section. The Multilevel Fast Multipole Method (MLFMM), which is derived from the MoM, and GO are used for the computation.

A. Obstacle between antennas

The first model has an obstacle between the transmitting and receiving antennas (Fig. 3). The obstacle is made of FR4, which has a relative permittivity of 4.8 and a dielectric loss tangent of 0.017; the conductivity is not considered. Its dimensions are 0.1 x 0.1 m and 0.2 x 0.2 m, respectively. Half-wave dipoles designed for a frequency of 60 GHz are used as antennas. The full antennas are not actually inserted into the model, only the far fields of the antennas, with a resolution of 2° in both angular directions. The transmitting antenna is fed with 1 W of power and there are no mismatch losses in the simulations. The distance between the antennas is 2 m. The power levels received by the receiving antenna are shown in Table I. According to theory, the results from the MLFMM are more accurate than the results obtained by GO. We also tested the influence of the ray density on the accuracy; in this configuration the accuracy improves with a higher number of rays.

Fig. 3. Configuration with the obstacle between the antennas.

TABLE I. OUTPUT POWER OF THE RECEIVING ANTENNA WITH THE OBSTACLE BETWEEN THE ANTENNAS
Dimensions of obstacle   MLFMM [dBW]   GO (0.1°) [dBW]   GO (0.01°) [dBW]
FR4 0.1 x 0.1 m           -72.9775       -73.1354           -72.9745
FR4 0.2 x 0.2 m           -80.3234       -80.2366           -80.3358

TABLE II. OUTPUT POWER OF THE RECEIVING ANTENNA WITH THE OBSTACLE BESIDE THE ANTENNAS
Dimensions of obstacle   MLFMM [dBW]   GO (0.1°) [dBW]   GO (0.01°) [dBW]
FR4 0.1 x 0.1 m           -69.5500       -69.7330           -69.7624
FR4 0.2 x 0.2 m           -70.9232       -69.5269           -69.5481

Fig. 4. Configuration with the obstacle beside the antennas.

B. Obstacle beside the antennas

In the second model the position of the antennas is not changed, but the obstacle is placed beside the antennas; the distance between the obstacle and the notional connecting line is 0.5 m (Fig. 4). It is now apparent that, besides the diffraction on the edges, the reflections also influence the accuracy of the result, because the propagating and reflected waves interfere with each other. In this case the results do not improve with a higher number of rays, as shown in Table II.
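For orientation, the received power levels in Table I and Table II can be compared with a simple free-space estimate. Assuming ideal half-wave dipoles (gain about 2.15 dBi), 1 W of transmitted power and a 2 m separation at 60 GHz, the Friis equation gives roughly -70 dBW, which is of the same order as the values obtained when the obstacle only slightly disturbs the direct path. The short calculation below is our own cross-check and is not part of the FEKO models.

import math

c = 3.0e8
f = 60e9                      # operating frequency [Hz]
d = 2.0                       # antenna separation [m]
pt_dbw = 0.0                  # 1 W of transmitted power
gain_dbi = 2.15               # ideal half-wave dipole gain, assumed for both

path_db = 20 * math.log10(c / f / (4 * math.pi * d))    # 20*log10(lambda/(4*pi*d))
pr_dbw = pt_dbw + 2 * gain_dbi + path_db
print(f"free-space received power: {pr_dbw:.1f} dBW")   # about -69.7 dBW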
V. CONCLUSION

Although asymptotic methods use approximations that raise doubts, the results in Table I and Table II prove that it is possible to obtain acceptable values. It can be said that the accuracy depends on the geometry of the simulated model and on the parameter settings of the used software. For electrically large objects, Geometrical Optics can considerably reduce the computational requirements and time.

REFERENCES
[1] GIBSON, Walton C. The Method of Moments in Electromagnetics. United States of America: Chapman & Hall/CRC, 2008. ISBN 9781420061451.
[2] KLINE, Morris. Electromagnetic Theory and Geometrical Optics [Research report]. New York University, 1962.
[3] BALANIS, Constantine A. Advanced Engineering Electromagnetics. Canada: John Wiley & Sons, 1989. ISBN 0-471-62194-3.
[4] LACIK, Jaroslav, Zbynek LUKES and Zbynek RAIDA. On Using Ray-Launching Method for Modeling Rotational Spectrometer. 2008, Vol. 17, No. 2.
[5] EM SOFTWARE & SYSTEMS-S.A. (PTY) LTD. FEKO Comprehensive Electromagnetic Solution. South Africa, 2012.

Scene Change Based GOP for HTTP Adaptive Streaming Utilizing High Efficiency Video Coding

Ondrej Zach
Faculty of Electrical Engineering and Communication
Department of Radio Electronics
Brno University of Technology
Technicka 12, 616 00, Brno, Czech Republic
Email: ondrej.zach@phd.feec.vutbr.cz

Abstract—The paper describes the influence of the segment lengths in HTTP Adaptive Streaming utilizing the new High Efficiency Video Coding (HEVC) standard on the objective video quality. The segment lengths varied from 2 s to 10 s. In a special case the segments were created according to the scene changes in the video; these segments obviously had different lengths, but each segment contained only frames of one scene. We investigated the influence on the objectively measured quality using the typical video quality metrics PSNR and SSIM, and also the influence on the coding efficiency.

Keywords—Adaptive GOP; HTTP adaptive streaming; HEVC

I. INTRODUCTION

The distribution of video content has changed significantly during the last ten years. Physical media such as DVDs or Blu-ray discs serve more as a means of data storage than as a way to get the content to the consumer. The use of the Internet and web-based services for video distribution constantly increases, and video data make up a large part of the Internet traffic. Distribution of video content over data networks can use both known transfer protocols, UDP and TCP. UDP, however, cannot be used in best-effort networks (e.g. the Internet), because such a network cannot guarantee the delivery of the data; UDP is therefore usually used in local networks, where the provider controls the network and its properties completely. For delivering video content over the Internet, TCP is used. One of the possibilities of transferring video data using web services (e.g. YouTube) is HTTP Adaptive Streaming (HAS). A HAS-based service adapts the bitrate of the multimedia content to the bandwidth available to the consumer in order to achieve the best possible quality. To do so, the data need to be split into small segments which are coded separately. The length of the segments may vary according to the specific HAS implementation. This article deals with the influence of the segment length on the objective quality; the new H.265/HEVC standard was used for video coding. The paper is organized as follows: Sections II and III briefly describe HTTP Adaptive Streaming (HAS) and the High Efficiency Video Coding standard (HEVC), respectively. Section IV describes the preparation of the experiment and Section V summarizes its results.
II. HTTP ADAPTIVE STREAMING

HTTP Adaptive Streaming is a means of video distribution in which the bitrate of the received data changes according to the actual condition of the network in order to utilize the bandwidth effectively and to obtain the best possible quality. An extensive survey of the Quality of Experience in HAS can be found in [1]. The main principle of HAS is that the data are divided into small segments which are processed separately and are available in different quality levels. The length of these segments depends on the HAS implementation used; typical values range from 2 seconds to 10 seconds. The segment length then determines the intervals at which a quality change can occur. Shorter segments allow more frequent changes and therefore better utilization of the bandwidth; on the other hand, they may reduce the efficiency of the coding. The adaptation of the quality can be performed in three different domains: framerate, resolution and bitrate of the coded video. This paper deals with the bitrate adaptation only, therefore the framerate and resolution were fixed in our experiment. Different HAS implementations may use different video coding algorithms; practically all of them support H.264/AVC. We focus on the possibility of using the new video coding standard, High Efficiency Video Coding (H.265/HEVC). At this time, the only HAS implementation capable of using HEVC is MPEG-DASH [1], because it places no limitation on the codecs used.

III. HIGH EFFICIENCY VIDEO CODING

HEVC (also known as H.265 or MPEG-H Part 2) is the successor of the previous H.264/AVC video coding standard. HEVC was designed to obtain the same perceived video quality as the H.264/AVC algorithm using just 50% of its bitrate. The main difference in comparison with the previous standard is the new QuadTree structure, which allows bigger block sizes (up to 64x64 for the luma component) that can then be divided into smaller blocks according to the content. HEVC uses both intra-prediction and inter-prediction and introduces 33 directional orientations for the intra-prediction. The motion estimation algorithm is more evolved, and the motion compensation can be calculated with quarter-pixel accuracy for the luma component. As entropy coding, HEVC uses Context-Adaptive Binary Arithmetic Coding (CABAC). CABAC was introduced already in H.264/AVC, but HEVC uses an improved algorithm in order to achieve better computation speed. A detailed treatment of HEVC lies beyond the scope of this paper; an overview of the HEVC algorithm can be found e.g. in [2].

A. HEVC Encoders

For HEVC encoding, several implementations are available for free. The first is the HM reference model, currently in version 14 [3]. Other implementations are x265, an open-source project available under the GNU GPL 2 license [4], and the DivX HEVC Community Encoder of the DivX company [5]. The HM reference implementation offers a wide variety of settings; however, its computation speed is very low. Encoding with the DivX implementation is much less time consuming, but the settings of the encoder are too limited for our purpose. Therefore we used the x265 implementation. As previous tests showed, the x265 encoder offers a relatively wide variety of settings and still has a very good computational speed [6].
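For illustration, a fixed segment/GOP length can be enforced in the x265 command-line encoder by pinning the keyframe interval and disabling its own scene-cut decisions. The sketch below shows one possible invocation for a 2 s GOP at 25 fps and 300 kbps, i.e. parameters of the kind used in the experiment described in the next section; the option names follow the publicly documented x265 CLI (the exact switches supported by the particular 1.0 build used here may differ) and the file names are placeholders.

import subprocess

fps = 25
gop_seconds = 2
keyint = fps * gop_seconds       # 50 frames per closed GOP

cmd = [
    "x265",
    "--input", "elephants_dream_360p.yuv",    # placeholder raw YUV input
    "--input-res", "640x360",
    "--fps", str(fps),
    "--preset", "medium",
    "--bitrate", "300",                        # target bitrate in kbps
    "--keyint", str(keyint),
    "--min-keyint", str(keyint),
    "--no-scenecut",                           # keyframes only at GOP borders
    "--output", "segment_2s_300kbps.hevc",
]
subprocess.run(cmd, check=True)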
IV. EXPERIMENT SETUP

In our experiment, we used six different GOP setups/lengths. In one case we divided the video into blocks of variable length according to the scene changes in the content; in the other cases, we used GOP lengths in the range from 2 s to 10 s. The block scheme of the proposed experiment can be seen in Fig. 2. The dashed block called "Scene Detection" was used only in the case of the scene-change-dependent GOP length. For scene change detection and the follow-up cut we used the algorithm presented in [8], which we implemented in the Matlab environment. The algorithm proved to be fast and accurate enough for our purpose.

A. Tested material

As our test material, we chose the Elephants Dream movie [7]. This is a freely available animated short film created using the Blender software under the Creative Commons license. The movie is available in a wide variety of video qualities and resolutions; for the purpose of this paper we use the 360p (640x360 pixels) version in uncompressed YUV 4:2:0 format. Although HEVC is designed mostly for HD and higher resolutions, we chose 360p in order to obtain preliminary results on the behavior of different GOP lengths. One frame of the used movie is shown in Fig. 1.

B. Encoding

As previously mentioned, we used the x265 encoder for the encoding. The encoder was in version 1.0+139a5998df9b12e and was built under MS Windows using MS Visual Studio 2010. The encoder was configured according to the medium profile of x265; more information can be found on the project's website [4]. The GOP lengths used in our test were the following: 2 s, 3 s, 5 s, 7 s, 10 s and scene-change adaptive. The range from 2 s to 10 s covers most of the segment lengths used in HAS-based services. All segments were then encoded using four bitrates in the range from 100 kbps to 300 kbps; these bitrates are sufficient for our resolution when using HEVC.

Fig. 1. One frame of the Elephant's Dream movie.

Fig. 2. Block scheme of the compression with variable GOP lengths (input video → scene detection → segments → HEVC encoding → quality metric).

C. Objective metrics

For quality evaluation, we used two widely used pixel-based quality metrics, the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index (SSIM). Both are very simple and easy-to-calculate metrics; however, they may not correspond to the subjective quality as perceived by a real viewer. On the other hand, they offer basic information about the quality, and for the purpose of this paper the results obtained by PSNR and SSIM are satisfactory.

V. RESULTS

After encoding, we calculated the commonly used quality metrics PSNR and SSIM for each HRC. The values shown in Table I represent the overall quality of the whole sequence, which was calculated from the segments taking their variable lengths into account. The data show an interesting result: the objectively measured quality is usually higher for longer segments. A special case is the variable GOP length, where the segments were created according to the scene changes (cuts) in the video.
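One way to obtain such a sequence-level value from per-segment measurements while respecting the different segment lengths is to pool the mean squared error weighted by frame count and convert to decibels only at the end (averaging PSNR values in dB directly would weight short segments too strongly). The following is a generic sketch of that kind of pooling, not the exact script used to produce Table I.

import math

def pooled_psnr(segments, max_value=255.0):
    """segments: list of (frame_count, mean squared error) tuples, one per
    segment. Returns the PSNR of the whole sequence, weighting each segment
    by its length in frames."""
    total_frames = sum(n for n, _ in segments)
    pooled_mse = sum(n * mse for n, mse in segments) / total_frames
    return 10.0 * math.log10(max_value ** 2 / pooled_mse)

# Example with three segments of different lengths (hypothetical numbers)
print(pooled_psnr([(50, 2.1), (120, 3.4), (75, 1.8)]))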
TABLE I. RESULTS OF OBJECTIVE METRICS
                        PSNR [dB]                          SSIM [-]
GOP length     300      200      150      100       300     200     150     100    (bitrate [kbps])
Adaptive     45.859   43.628   42.494   41.141     0.964   0.934   0.922   0.905
2 s          43.970   42.358   41.943   39.724     0.948   0.933   0.915   0.901
3 s          44.256   42.518   41.272   39.934     0.948   0.931   0.916   0.900
5 s          44.737   43.052   41.823   40.350     0.950   0.932   0.919   0.902
7 s          44.976   43.279   42.026   41.578     0.947   0.931   0.917   0.899
10 s         45.113   43.471   42.287   40.745     0.949   0.933   0.919   0.901

A graphical representation of the results is shown in Fig. 3 and Fig. 4. The PSNR results show the dependence between quality and segment length very clearly: the PSNR values are practically always higher for longer video segments. The best results were achieved with the variable segment length, where the segments were created according to the cuts in the video. As these segments contained frames of one scene only, it was easier for the encoder to process them, and this is also the reason for the higher measured quality. The coding efficiency is also higher for this type of GOP.

Fig. 3. PSNR values of the encoded segments.

Fig. 4. SSIM values of the encoded segments.

VI. CONCLUSION

In this paper, we evaluated the possible influence of the segment lengths in HTTP Adaptive Streaming using HEVC. In accordance with our expectations, the results showed that longer segments may offer better objectively measured quality than shorter ones. The best option seems to be the variable length based on scene change detection. However, this may produce very long segments, which may not be desirable in HAS systems; a segment length of 10 seconds may also be unsuitable. Further testing utilizing more settings and more of the influences which may occur in a HAS-based system will therefore be necessary.

ACKNOWLEDGMENT

The research described in this paper has been supported by the Czech Ministry of Education, Youth and Sports under grant no. LD12005 - Quality of Experience Aspects of Broadcast and Broadband Multimedia Services (QUALEXAM) and performed in laboratories supported by the SIX project, no. CZ.1.05/2.1.00/03.0072, the operational program Research and Development for Innovation.

REFERENCES
[1] Seufert, M., Egger, S., Slanina, M. et al. A Survey on Quality of Experience of HTTP Adaptive Streaming. Research report. University of Würzburg. [Online] Cited 2014-06-20. Available at: www3.informatik.uni-wuerzburg.de/TR/tr490.pdf. April 2014.
[2] G. J. Sullivan, J. Ohm, Woo-Jin Han and T. Wiegand, "Overview of the High Efficiency Video Coding (HEVC) Standard," Circuits and Systems for Video Technology, IEEE Transactions on, vol. 22, no. 12.
[3] Heinrich Hertz Institute. (2014). HEVC Software Repository. [online]. https://hevc.hhi.fraunhofer.de/svn/svn_HEVCSoftware/
[4] x265.org (2014). x265 - open-source HEVC implementation. [online]. http://www.x265.org
[5] DivX LLC. (2014). DivX HEVC Community Encoder. [online]. http://labs.divx.com/divx265
[6] Zach, O., Slanina, M. A Comparison of H.265/HEVC Implementations. In Proceedings ELMAR 2014. Zagreb: University of Zagreb (in press).
[7] Blender Foundation. Elephant's Dream, the Orange Open Movie Project. 2006.
[8] Xiaoquan Yi, Nam Ling. "Fast pixel-based video scene change detection," Circuits and Systems, 2005. ISCAS 2005. IEEE International Symposium on, pp. 3443-3446, Vol. 4, 23-26 May 2005.