General information about our project

Grant Contract no. 21PCCDI/2018

Project code: PN-III-P1-1.2-PCCDI-2017-0917

Project Name: Hybrid platform for visible light communications and augmented reality for the development of intelligent systems for active vehicle assistance and safety

Acronym: CAR Safe

Component Subproject 2

Subproject Name: Effective smart-based communications in augmented reality interactive scenarios for cars

Description: The project addresses the development of an in-vehicle augmented reality (IV-AR) system, focusing on the interaction between smart personal devices and in-vehicle information systems (IVIS).

Our Activities

Phase 1

Design and development of an augmented reality vehicle software infrastructure (IV-AR)

Phase 2

Design and implementation of digital data transfer techniques between the user's smart personal devices and In-Vehicle Information Systems (IVIS)

Phase 3

Design and implementation of digital data transfer techniques between IVIS and the user leaving the vehicle

Results

Phase 1 - 2018

Reports

Design report (abstract)

Tests report (abstract)

Applications

Euphoria - A Scalable, Event-driven Architecture for Designing Flexible, Heterogeneous, Asynchronous Interactions in Smart Environments

Short description:

Euphoria is a new software architecture design and implementation that enables prototyping and evaluation of flexible, asynchronous interactions between users, personal devices, and public installations, systems, and services within smart environments, such as new user interactions for smart in-vehicle environments. A demonstration of Euphoria is available here!
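To illustrate the kind of event-driven, publish/subscribe message flow that Euphoria mediates between producers and consumers, here is a minimal, self-contained Python sketch. The class, topic, and field names are our own illustrations, not Euphoria's actual API.

```python
# Minimal sketch of an event-driven publish/subscribe bus, in the spirit of
# the producer/consumer message flow that Euphoria mediates. Names are
# illustrative only; they are not Euphoria's actual API.
import asyncio
import json


class EventBus:
    """Routes JSON-serializable events from producers to subscribed consumers."""

    def __init__(self):
        self._subscribers = {}  # topic -> list of async callbacks

    def subscribe(self, topic, callback):
        self._subscribers.setdefault(topic, []).append(callback)

    async def publish(self, topic, event):
        payload = json.dumps(event)  # events travel as JSON objects
        for callback in self._subscribers.get(topic, []):
            await callback(json.loads(payload))


async def main():
    bus = EventBus()

    # A consumer (e.g., an in-vehicle display) subscribes to notifications.
    async def on_notification(event):
        print(f"[IVIS display] {event['title']}: {event['body']}")

    bus.subscribe("notifications", on_notification)

    # A producer (e.g., a smartphone app) publishes an event asynchronously.
    await bus.publish("notifications",
                      {"title": "Weather", "body": "Light rain in 10 minutes"})


asyncio.run(main())
```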

GenericProducer & GenericConsumer - prototype implementations of client software components for the Euphoria software infrastructure

Short description:

The GenericProducer application currently includes three types of content producers: a generic, customizable producer that dynamically delivers notifications/messages of configurable size; a producer that generates content from weather information messages, as an example of a service based on a phone sensor; and a producer that picks up the device's regular mobile notifications.
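As a rough illustration of the three producer types described above, the following sketch shows how each might assemble the JSON message it sends towards Euphoria. The functions and field names are hypothetical; they do not mirror the actual GenericProducer code.

```python
# Hypothetical sketch of the three content-producer types described above.
# Field names and functions are illustrative only.
import json
import time


def make_message(source, title, body):
    """Common JSON envelope sent to the Euphoria infrastructure."""
    return json.dumps({"source": source, "title": title,
                       "body": body, "timestamp": time.time()})


def generic_producer(title, body):
    """Customizable producer: delivers a user-configured notification."""
    return make_message("generic", title, body)


def weather_producer(temperature_c, condition):
    """Sensor/service-based producer: wraps weather data as a message."""
    return make_message("weather", "Weather update",
                        f"{condition}, {temperature_c} deg C")


def notification_producer(app_name, text):
    """Producer that relays the device's regular mobile notifications."""
    return make_message("device-notification", app_name, text)


print(generic_producer("Reminder", "Meeting at 10:00"))
print(weather_producer(18, "Light rain"))
print(notification_producer("Messages", "New message from Ana"))
```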

Short description:

The GenericConsumer application receives JSON objects from Euphoria and presents information about them on the consumer display. At the moment, the display serves to test the reliability of this approach. When rendering the information produced by the GenericProducer, we first considered the visibility of the information under different environmental conditions, as well as minimizing the distraction of traffic participants in general, and of the driver in particular, caused by displaying this information.
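The sketch below illustrates one way a consumer could receive such a JSON object and decide how prominently to show it so as to limit driver distraction. The priority rules and thresholds are our own example, not the logic of the actual GenericConsumer.

```python
# Illustrative sketch of a consumer that receives a JSON message and decides
# how to present it with minimal driver distraction. The priority rules are
# an example only, not the actual GenericConsumer logic.
import json

LOW_PRIORITY_SOURCES = {"device-notification"}


def render(message_json, vehicle_speed_kmh):
    msg = json.loads(message_json)
    if vehicle_speed_kmh > 30 and msg["source"] in LOW_PRIORITY_SOURCES:
        return None  # defer non-essential content while driving fast
    # Keep the on-screen text short and high-contrast for visibility.
    return f"{msg['title'][:20]}: {msg['body'][:60]}"


incoming = json.dumps({"source": "weather", "title": "Weather",
                       "body": "Light rain in 10 minutes"})
print(render(incoming, vehicle_speed_kmh=50))
```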

Our demo movie is here:

Articles - direct results

Euphoria: A Scalable, Event-driven Architecture for Designing Flexible, Heterogeneous, Asynchronous Interactions in Smart Environments

Ovidiu-Andrei SCHIPOR, Radu-Daniel VATAVU, Jean VANDERDONCKT
Information and Software Technology, Elsevier IF: 2.627, 5-year IF: 2.768 (under review)

Augmenting Selection by Intention for In-Vehicle Control and Command

Catalin DIACONESCU, Dragos Florin SBURLAN and Dorin-Mircea POPOVICI
Proceedings of the International Conference on Human-Computer Interaction (RoCHI2018), ISSN: 2501-9422, pg. 107-110, Ed. MatrixRom, 2018.

Abstract:

Our paper proposes an in-vehicle interactive environment that allows the driver to interact with a computer-based (multimedia) system through intentional gestures. To this end we used two complementary gesture detection technologies: Leap Motion, responsible for detecting the user's hand gestures, and Myo, a second interaction device that gives the driver the possibility to switch between launched applications and supplementary controls. Based on the preliminary results of the usability test we conducted for our solution, we discuss the advantages and disadvantages of using two different gesture recognition technologies for in-vehicle interactive environments. We conclude with the main issues identified so far and some future directions for our efforts.
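As a rough sketch of how two gesture sources can be combined in the way the paper describes, the code below routes hand events (Leap Motion-style) to in-application control and arm events (Myo-style) to application switching. All gesture names and the controller class are hypothetical and are not taken from either vendor's SDK.

```python
# Hypothetical sketch of combining two gesture sources for in-vehicle control:
# hand gestures (Leap Motion-style) act within the active application, while
# arm gestures (Myo-style) switch between launched applications.
APPLICATIONS = ["media", "navigation", "phone"]


class InVehicleController:
    def __init__(self):
        self.active_app = 0

    def on_hand_gesture(self, gesture):
        """Hand gestures act within the currently active application."""
        return f"{APPLICATIONS[self.active_app]}: handle '{gesture}'"

    def on_arm_gesture(self, gesture):
        """Arm gestures switch between launched applications."""
        if gesture == "wave_right":
            self.active_app = (self.active_app + 1) % len(APPLICATIONS)
        elif gesture == "wave_left":
            self.active_app = (self.active_app - 1) % len(APPLICATIONS)
        return f"active application: {APPLICATIONS[self.active_app]}"


controller = InVehicleController()
print(controller.on_hand_gesture("swipe_up"))   # e.g., volume up in 'media'
print(controller.on_arm_gesture("wave_right"))  # switch to 'navigation'
print(controller.on_hand_gesture("circle"))     # e.g., zoom in 'navigation'
```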

Speech & Speaker recognition for Romanian Language

Eugeniu VEZETEU, Dragos SBURLAN and Elena PELICAN
Proceedings of the International Conference on Human-Computer Interaction (RoCHI2018), ISSN: 2501-9422, pg. 54-62, Ed. MatrixRom, 2018.

Abstract:

The present paper illustrates the main methods that can be employed to build a speech and speaker recognition system for the Romanian language. To this aim, we start by presenting the classical approach of extracting Mel Frequency Cepstral Coefficient (MFCC) features from a dataset of speech signals (representing words/phrases in Romanian). Recognition is performed either with Dynamic Time Warping (DTW) or by training a Convolutional Neural Network. A comparison between these models is presented and discussed. Once such a system is developed, we proceed further by implementing an application that listens for and executes predefined commands. In our setup, the system performs two main tasks: it recognizes the user by voice and executes the task corresponding to the vocal command.
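To make the classical pipeline concrete, here is a minimal sketch of MFCC extraction followed by a DTW comparison between a test utterance and stored templates. It uses librosa for the MFCC step and a plain NumPy DTW, and illustrates the general approach rather than the authors' implementation; the random arrays stand in for real recordings.

```python
# Minimal sketch of the classical MFCC + DTW pipeline described above.
# Uses librosa for feature extraction and a plain NumPy DTW; it illustrates
# the general approach, not the authors' implementation.
import numpy as np
import librosa


def mfcc_features(signal, sr):
    # 13 MFCCs per frame; transpose to shape (frames, coefficients)
    return librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13).T


def dtw_distance(a, b):
    """Dynamic Time Warping distance between two MFCC sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]


# Example: compare a recorded command against stored templates and pick the
# closest one (smallest DTW distance).
sr = 16000
test = mfcc_features(np.random.randn(sr), sr)          # stand-in for a recording
templates = {"porneste": mfcc_features(np.random.randn(sr), sr),
             "opreste": mfcc_features(np.random.randn(sr), sr)}
best = min(templates, key=lambda w: dtw_distance(test, templates[w]))
print("recognized command:", best)
```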

Articles - related results

Filter Application on Facial Features in a Smartphone App

Sofia MORAR, Elena PELICAN and Dorin-Mircea POPOVICI
Proceedings of the International Conference on Human-Computer Interaction (RoCHI2018), ISSN: 2501-9422, pg. 5-11, Ed. MatrixRom, 2018.

Abstract:

Filter application on facial features is a rather new field that has quickly become essential in social network applications. Fast and accurate filter application is still a field to be explored. In this paper, an automatic application of filters for different makeup products (lipstick and blush) is developed on the facial features of interest (lips and cheeks). Facial features are recognized and extracted with the help of a machine learning API, the Google Mobile Vision API. The filters are rendered using Bezier curves and the basic principles of computer graphics for the correct layering of overlays. The filters are implemented in an iOS app with two functionalities: application on a picture, as well as in real time through the phone's camera, where we also use a Convolutional Neural Network (CNN) to recognize the user's face, which brings us into the field of Augmented Reality (AR) and Deep Learning.
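As a small illustration of how a Bezier curve can outline a facial feature before the region is filled with a makeup color, the following sketch evaluates a cubic Bezier curve between landmark points. The landmark coordinates are made up for the example, and this is not the app's actual rendering code.

```python
# Illustrative sketch: evaluate a cubic Bezier curve through lip landmarks to
# outline the region where a lipstick filter would be drawn. The landmark
# coordinates are made up; this is not the app's actual rendering code.
import numpy as np


def cubic_bezier(p0, p1, p2, p3, n=50):
    """Sample n points on a cubic Bezier curve defined by 4 control points."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)


# Hypothetical landmarks for the upper lip (in image pixel coordinates).
left_corner = np.array([120.0, 200.0])
upper_ctrl_1 = np.array([140.0, 180.0])
upper_ctrl_2 = np.array([180.0, 180.0])
right_corner = np.array([200.0, 200.0])

outline = cubic_bezier(left_corner, upper_ctrl_1, upper_ctrl_2, right_corner)
print(outline[:3])  # first few points of the curve to be filled with color
```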

From Exploration of Virtual Replica to Cultural Immersion through Natural Gestures

Catalin DIACONESCU, Matei-Ioan POPOVICI and Dorin-Mircea POPOVICI
Proceedings of the 1st International Conference on VR Technologies in Cultural Heritage 2018, ISBN: 978-3-030-05819-7

Abstract:

We investigate in this work the potential of multimodal rendering for assisting users during culturally-related navigation and manipulation tasks inside virtual environments. We argue that natural gestures play an important role in engaging users in experiencing the cultural dimension of a given environment. To this end, we propose an open system for multi-user visualization and interaction that enables users to employ natural gestures. We explored different configurations and controls in order to achieve the most accurate and natural user experience: one switches between the navigation and manipulation modes based on distance and orientation towards different points of interest, while the other relies on interacting with a virtual UI used for switching between the two modes. We implemented both a single-user and a multi-user version. The single-user version, which uses a conventional computer-monitor point of view, is better suited for accurate and detailed viewing; in this version the user wears the Myo armband and also uses the Leap Motion for a more immersive experience. The multi-user version is based on a holographic pyramid with four perspectives: one for the Myo user, one for the Leap Motion user, and two for the spectators' points of view. Finally, we discuss findings on the users' perceptions of the experienced cultural immersion.
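A minimal sketch of the first control scheme mentioned above (switching between navigation and manipulation based on distance and orientation towards a point of interest) could look as follows; the thresholds and vector math are our own example, not the system's actual code.

```python
# Sketch of distance- and orientation-based switching between navigation and
# manipulation modes, as described above. Thresholds are illustrative only.
import numpy as np

DISTANCE_THRESHOLD = 2.0   # metres: close enough to manipulate
ANGLE_THRESHOLD = 30.0     # degrees: user must roughly face the point of interest


def interaction_mode(user_pos, user_forward, poi_pos):
    to_poi = poi_pos - user_pos
    distance = np.linalg.norm(to_poi)
    cos_angle = np.dot(user_forward, to_poi) / (np.linalg.norm(user_forward) * distance)
    angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    if distance < DISTANCE_THRESHOLD and angle < ANGLE_THRESHOLD:
        return "manipulation"
    return "navigation"


user = np.array([0.0, 0.0, 0.0])
forward = np.array([0.0, 0.0, 1.0])     # user is looking along +Z
artifact = np.array([0.3, 0.0, 1.5])    # virtual replica nearby, roughly ahead
print(interaction_mode(user, forward, artifact))  # -> "manipulation"
```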

Meet The Team

Ovidius University of Constanta - The Research in Virtual and Augmented Reality Lab (CeRVA)

Dorin-Mircea POPOVICI

P2 project leader / researcher
UEF-ID: U-1700-039W-6468

Virtual and augmented reality, mixed environments for education, training and cultural heritage

Dragos-Florin SBURLAN

Researcher

Dragos Sburlan is Associate Professor of Computer Science at the Ovidius University of Constanta. His main interests regard the development of new computing paradigms and algorithms in the field of theoretical computer science.

Crenguta-Madalina PUCHIANU

Researcher
UEF-ID: U-1700-035G-1876

Associate Professor at the Faculty of Mathematics and Computer Science of the Ovidius University of Constanta, Romania. Her main competencies are in the areas of semantic web and information and software systems engineering.

Elena BAUTU

Researcher
UEF-ID: U-1700-039R-1944

Elena Bautu is Senior Lecturer of Computer Science at the Ovidius University of Constanta. Her main research results concern the development of novel hybrid evolutionary methods for regression and classification problems.

Emanuela BRAN

Research Assistant, PhD Student
UEF-ID: U-1900-061Y-9866

TO DO

Stefan cel Mare University of Suceava - The Machine Intelligence and Information Visualization Lab (MINTVIZ)

Radu-Daniel VATAVU

Partner leader / Researcher

Radu is a Professor of Computer Science at the University of Suceava and the scientific leader of the MintViz lab. He is interested in applying AI techniques to design useful and usable interactions between humans, computers, and environments.

Stefan-Gheorghe PENTIUC

Researcher

Professor of Computer Science at the University of Suceava, and Dean of the Faculty of Electrical Engineering and Computer Science. His research interests are pattern recognition, image processing, and distributed and mobile computing.

Ovidiu-Andrei SCHIPOR

Researcher

Ovidiu is an Associate Professor of Computer Science at the University of Suceava. He is interested in speech processing and human movement analysis for intelligent user interfaces.

Laura-Bianca BILIUS

Research Assistant, PhD Student

Laura is a PhD student in Computer Science at the University of Suceava, advised by Stefan-Gheorghe Pentiuc. She is interested in pattern recognition, tensor mathematics, gesture input, and image processing.

Get in Touch


Contact Info

Ovidius University of Constanta
124 Mamaia Bd, 900527
Constanta, Romania
Phone: +40 241 606 467