Prof. Dr. Yo-Sung Ho, Gwangju Institute of Science and Technology, Korea
Abstract
In recent years, various multimedia services have become available and
the demand for realistic multimedia systems is growing rapidly. A
number of three-dimensional (3D) video technologies, such as
holography, two-view stereoscopic systems with special glasses, 3D
wide-screen cinema, and multi-view video, have been studied. Among them,
multi-view video coding (MVC) is the key technology for various
applications including free-viewpoint video (FVV), free-viewpoint
television (FVT), 3DTV, immersive teleconference, and surveillance
systems. Traditional video is a two-dimensional (2D) medium that
offers viewers only a passive way to observe a scene. MVC, in
contrast, can offer arbitrary viewpoints of dynamic scenes and thus
allows a more realistic viewing experience. A multi-view video
comprises video sequences captured by multiple cameras at the same
time but at different positions. However, because of the increased
number of cameras, a multi-view video contains a very large amount of
data. Since this data volume severely limits distribution
applications, such as broadcasting, network streaming services, and
other commercial applications, we need to compress multi-view
sequences efficiently without significantly sacrificing visual quality.
In this tutorial lecture, we are going to cover both the basics and the
current state-of-the-art technologies for multi-view video coding.
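To make the data-volume problem concrete, the following back-of-envelope calculation sketches the raw bit rate of a multi-view capture rig. The camera count, resolution, frame rate and pixel format below are illustrative assumptions, not figures from the lecture:

```python
# Back-of-envelope raw data rate for a hypothetical multi-view rig.
cameras = 8
width, height = 1024, 768
fps = 30
bits_per_pixel = 12  # YUV 4:2:0: 8 bits luma + 4 bits chroma per pixel

bits_per_second = cameras * width * height * bits_per_pixel * fps
print(f"{bits_per_second / 1e9:.2f} Gbit/s uncompressed")
```

Even at this modest resolution, the rig produces on the order of gigabits per second of raw video, which is why efficient multi-view compression is essential for broadcasting and streaming.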
Bio
Dr. Yo-Sung Ho received the B.S. and M.S. degrees in electronic
engineering from Seoul National University, Seoul, Korea, in 1981 and
1983, respectively, and the Ph.D. degree in electrical and computer
engineering from the University of California, Santa Barbara, in 1990.
He joined ETRI (Electronics and Telecommunications Research Institute),
Daejeon, Korea, in 1983. From 1990 to 1993, he was with Philips
Laboratories, Briarcliff Manor, New York, where he was involved in
development of the Advanced Digital High-Definition Television
(AD-HDTV) system. In 1993, he rejoined the technical staff of ETRI and
was involved in development of the Korean DBS Digital Television and
High-Definition Television systems. Since 1995, he has been with
Gwangju Institute of Science and Technology (GIST), where he is
currently a Professor in the Department of Information and
Communications. Since August 2003, he has been Director of the
Realistic Broadcasting Research Center at GIST in Korea. Since
September 2005, he has been a visiting scholar at the University of
Washington, Seattle, USA. He has given several
tutorial lectures at various international conferences, including the
IEEE Region Ten Conference (TenCon) in 1999 and 2000, the Pacific-Rim
Conference on Multimedia (PCM) and the IEEE Pacific-Rim Symposium on
Image and Video Technology (PSIVT) in 2006. He is presently serving as
an Associate Editor of IEEE Transactions on Multimedia. His research
interests include Digital Image and Video Coding, Three-dimensional
Image Modeling and Representation, and Advanced Source Coding
Techniques.
Dr. M. Marković, Security Department, Banca Intesa ad Beograd, Belgrade, Serbia
Abstract
In this Tutorial, the main cryptographic aspects of modern TCP/IP
computer networks are addressed: digital signature technology based on
asymmetric cryptographic algorithms, data confidentiality achieved
with symmetric cryptographic systems, and Public Key Infrastructure
(PKI). The Tutorial is thus devoted to an emerging topic in the domain
of modern e-business systems: computer network security based on PKI
systems. First, we consider possible vulnerabilities of TCP/IP
computer networks and techniques to eliminate them. We emphasize that
only a general, multi-layered security infrastructure can cope with
possible attacks on computer network systems. We evaluate security
mechanisms at the application, transport and network layers of the
ISO/OSI reference model and give examples of the most popular security
protocols applied at each of these layers (e.g. S/MIME, SSL and
IPSec). Specifically, we recommend a secure computer network
architecture that combines security mechanisms at three different
ISO/OSI reference model layers: application-layer security
(end-to-end security) based on strong user authentication, digital
signatures, confidentiality protection, digital certificates and
hardware tokens (e.g. smart cards); transport-layer security based on
the establishment of a cryptographic tunnel (symmetric cryptography)
between network nodes together with a strong node-authentication
procedure; and network (IP) layer security providing bulk security
mechanisms between network nodes, protecting against external network
attacks. These layers are designed so that a vulnerability at one
layer cannot compromise the other layers, leaving the system as a
whole secure. Strong user authentication procedures based on digital
certificates and PKI systems are especially emphasized. We also
evaluate and highlight the differences between software-only,
hardware-only, and combined software-and-hardware security systems;
accordingly, ubiquitous smart cards and hardware security modules are
considered. Hardware security modules (HSMs) represent a very
important security aspect of modern computer networks. The main
purposes of an HSM are twofold: increasing overall system security and
accelerating cryptographic functions (asymmetric and symmetric
algorithms, key generation, etc.). HSMs are intended mainly for use in
server applications and, optionally, on the client side in specialized
information systems (government, military, police). For individual
use, smart cards are more suitable hardware security modules. For
large-scale use, however, the best approach combines software (SW) and
smart-card solutions: the smart card increases security, while
software increases the total processing speed. In this sense, the most
suitable large-scale solution consists of software for bulk symmetric
data encryption/decryption and a smart card for digital envelope
retrieval and digital signature generation.
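The division of labour just described can be sketched in a few lines of Python using the third-party pyca/cryptography library. Key sizes, algorithm choices and the message are illustrative assumptions; in the recommended solution, the private-key operations marked below would run on the smart card rather than in software:

```python
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Key pairs; in practice each public key is bound to an X.509v3 certificate.
sender_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

message = b"payment order"
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

# 1. Bulk symmetric encryption in software (the fast path).
session_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, message, None)

# 2. Digital envelope: session key wrapped with the recipient's public key.
envelope = recipient_key.public_key().encrypt(session_key, oaep)

# 3. Digital signature with the sender's private key
#    (a private-key operation the smart card would perform).
signature = sender_key.sign(message, pss, hashes.SHA256())

# Recipient side: unwrap the session key (on the recipient's smart card),
# decrypt in software, and verify the signature; verify() raises on failure.
recovered_key = recipient_key.decrypt(envelope, oaep)
plaintext = AESGCM(recovered_key).decrypt(nonce, ciphertext, None)
sender_key.public_key().verify(signature, plaintext, pss, hashes.SHA256())
```

The asymmetric operations touch only the short session key and signature, so keeping them on the card adds security at little cost, while the bulk AES work stays in fast software.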
At the end, we give a brief description of the main components of PKI
systems, emphasizing the Certification Authority and its role in
establishing a cryptographically unique identity for valid system
users based on ITU-T X.509v3 digital certificates. Public-key
cryptography uses a combination of public and private keys, digital
signatures, digital certificates, and trusted third-party
Certification Authorities (CAs) to meet the major requirements of
e-business security. Before applying these security mechanisms, you
need answers to the following questions: Who is your CA? Where do you
store your private key? How do you know that the private key of the
person or server you want to talk to is secure? Where do you find
certificates? A public-key infrastructure (PKI) provides the answers
to these questions. In the sense of the ITU-T X.509 standard, a PKI
system is defined as the set of hardware, software, roles and
procedures needed to create, manage, store, distribute and revoke
certificates based on public-key cryptography. A PKI system provides a
reliable organizational, logical and technical security environment
for realizing the four main security functions of e-business systems:
authenticity, data integrity protection, non-repudiation and data
confidentiality protection. A PKI system consists of the following
components:
- Certification Authority (CA), responsible for issuing and revoking
  certificates;
- Registration Authorities (RAs), responsible for acquiring
  certificate requests and checking the identity of certificate
  holders;
- systems for certificate distribution, responsible for delivering
  certificates to their holders;
- certificate holders (subjects): the people, machines or software
  agents that have been issued certificates;
- the CP, CPS, user agreements and other basic CA documents;
- systems for publishing issued certificates and Certificate
  Revocation Lists (CRLs);
- PKI applications (secure Web transactions, secure e-mail, secure
  FTP, VPNs, secure Internet payment, secure document management
  systems such as secure digital archives, etc.).
Finally, we give a brief overview of the legal aspects of using
digital signatures, emphasizing the EU Directive on electronic
signatures and the corresponding Electronic Signature Laws at the
national level in Europe. We also consider the possible use of
qualified signatures, which have the same legal effect as handwritten
signatures; the different accreditation and supervision schemes for
CAs; some aspects of using Secure Signature Creation Devices (SSCDs);
the necessary conditions for CAs issuing qualified certificates; etc.
Bio
Milan Marković received B.S.E.E., M.S.E.E., and Ph.D. degrees in
electrical engineering from Faculty of Electrical Engineering,
University of Belgrade, Belgrade, Serbia, in 1989, 1992, and 2001,
respectively. He is a leading researcher at the Mathematical Institute
SANU, Belgrade, and currently lectures at the Military Technical
Academy, the Faculty of Business Informatics in Belgrade and the
Computer Faculty in Belgrade, teaching the “Secure Computer Networks”
and “PKI Systems” courses. His research interests are in cryptographic
algorithms, public key infrastructure, combined SW/HW security
solutions, smart cards, robust speech analysis, coding and
recognition, statistical pattern recognition, signal processing,
multimedia communication, wireless communications and wearable
computing. He has been involved in sophisticated security projects,
including PKI systems for the National Bank of Serbia, several
commercial banks and the Ministries of Internal and Foreign Affairs,
as well as the PKI system for the ongoing Serbian smart-card ID
project. He is currently with Banca Intesa ad Beograd as an ICT
Security Officer, involved in projects on developing security
policies, PKI consolidation in the bank for internal and external
users, and the issuing of EMV DDA MasterCards with PKI applications on
them.
Dr. A. E. Mahdi, Department of Electronic & Computer Engineering, University of Limerick, Limerick, Ireland
Abstract
Due to fierce and growing market competition, quality of service (QoS)
is becoming increasingly important in the telecommunications industry.
For voice communication networks, the quality of the communicated
speech is one of the most important measures of QoS. Thus, the ability to
continuously monitor and design for this quality has become a top
priority to maintain customers’ satisfaction. Voice quality
refers to the clarity of a speaker’s voice as perceived by
a listener. Voice quality measurement (VQM) is a relatively new
discipline which offers a means of adding the human,
end-user’s perspective to traditional ways of performing
network management evaluation of voice telephony services. The most
reliable method for obtaining true measurement of users’
perception of speech quality is to perform properly designed Subjective
Listening tests, whereby subjects hear speech recordings processed
through different network conditions, and rate them using a simple
opinion scale such as the ITU-T 5-point listening quality scale. The
average score of all ratings registered by the subjects for a given
condition is termed the Mean Opinion Score (MOS). Subjective tests are,
however, slow and expensive to conduct, making them accessible only to a
small number of laboratories and unsuitable for real-time monitoring of
live networks. Hence, numerous objective voice quality measures, which
provide automatic assessment of voice communication systems without the
need for human listeners, have been made available over the last two
decades. These objective measures are becoming widely used particularly
to supplement subjective test results. This tutorial examines some
of the technicalities associated with VQM and presents an up-to-date
review of current state-of-the-art voice quality measurement
methods and tools for telecommunication applications. The tutorial
begins with a broad discussion of what voice quality is, how to
measure it, and the need for such measurement. Definitions of the two
main categories of metrics used for evaluating voice quality, namely
subjective and objective metrics, are then provided, with a detailed
account of the various methods in each category. Target applications
of these methods and their advantages/disadvantages will also be
discussed. The presentation will be accompanied by demonstrations of
VQM for samples of degraded speech recordings using a number of methods
including ITU-T standardised algorithms, such as the PESQ and the 3SQM
(P.563).
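The MOS computation itself is simple to illustrate: for each tested network condition, it is the arithmetic mean of all listeners' ratings on the 5-point scale. The condition names and scores below are made-up examples, not real test data:

```python
# ITU-T 5-point listening quality scale:
# 5 = excellent, 4 = good, 3 = fair, 2 = poor, 1 = bad.
# One list of listener ratings per tested network condition
# (conditions and scores are hypothetical).
ratings = {
    "clean reference": [5, 4, 5, 4, 5, 4],
    "codec, 3% loss":  [3, 4, 3, 3, 2, 3],
    "codec, 10% loss": [2, 2, 3, 1, 2, 2],
}

def mean_opinion_score(scores):
    """MOS for one condition: the arithmetic mean of all its ratings."""
    return sum(scores) / len(scores)

for condition, scores in ratings.items():
    print(f"{condition}: MOS = {mean_opinion_score(scores):.2f}")
```

Objective measures such as PESQ and 3SQM (P.563) are designed to predict exactly this per-condition mean score without running the listening test.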
Bio
Abdulhussain Mahdi is a Senior Lecturer at the Department of Electronic
& Computer Engineering, University of Limerick –
Ireland. He is a Chartered Engineer (CEng), Member of the Institution
of Engineering and Technology - UK (MIET), Member of the Engineering
Council - UK, and Founder Member of the International Compumag Society
(ICS). Dr Mahdi is a graduate in Electrical Engineering from University
of Basrah (BSc 1st Class Hon. 1978) and earned his PhD in Electronic
Engineering at University of Wales – Bangor, UK in 1990. He
is also a SEDA-UK Accredited Teacher of Higher Education (University of
Plymouth, UK 1998). His research interests include: speech processing
and applications in telecom and rehabilitation, domain transformation
and time-frequency analysis. He has authored and co-authored more than
82 refereed journal articles, book chapters and international
conference papers, and has edited one book. His published work has been
cited in more than 40 journal articles.