General information about our project

Grant Contract no. 21PCCDI/2018

Project code: PN-III-P1-1.2-PCCDI-2017-0917

Project Name: Hybrid platform for visible light communications and augmented reality for the development of intelligent systems for active vehicle assistance and safety

Acronym: CAR Safe

Component Subproject 2

Subproject Name: Effective smart-based communications in augmented reality interactive scenarios for cars

Description: The project addresses the development of in-vehicle augmented reality (IV-AR) systems, focusing on the interaction between smart devices and In-Vehicle Information Systems (IVIS).

Our Activities

Phase 1

Design and development of an augmented reality vehicle software infrastructure (IV-AR)

Phase 2

Design and implementation of digital data transfer techniques between the user's smart personal devices and In-Vehicle Information Systems (IVIS)

Phase 3

Design and implementation of digital data transfer techniques between IVIS and the user leaving the vehicle

Results

Phase 2 - 2019

Reports

Design report manual transfer (abstract)

Design report automatic transfer (abstract)

Design report augmented reality techniques (abstract)

Tests report (abstract)

Application

Prototypes of software components that implement data transfer between the driver's smart personal devices and the IVIS system

Short description:

A first software component connects a smartwatch (Samsung Gear Fit 2) to a tablet installed in the car that runs a three-component user interface (UI): UI1 presents user/driver information automatically transferred for display inside the vehicle, UI2 shows the list of the user's favorite songs stored on the smartwatch, and UI3 shows the to-do list, also stored on the smartwatch.

We demonstrate the transfer of digital content between a smart device (the driver's watch) and the system running inside the car in several use cases: the type of transferred content can be configured (left and middle images) and, after the transfer, the content becomes interactive (right image).
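The transfer flow described above can be sketched as a simple JSON message exchange between the smartwatch and the in-car tablet. The field names, content-type identifiers, and routing logic below are illustrative assumptions, not the project's actual protocol:

```python
import json

# Illustrative content types matching the three UI components (assumed names).
CONTENT_TYPES = {"profile": "UI1", "favorite_songs": "UI2", "todo_list": "UI3"}

def build_transfer_message(content_type: str, payload) -> str:
    """Wrap smartwatch content in a JSON envelope for the in-car tablet."""
    if content_type not in CONTENT_TYPES:
        raise ValueError(f"unknown content type: {content_type}")
    return json.dumps({
        "source": "smartwatch",            # e.g. the Samsung Gear Fit 2
        "target_ui": CONTENT_TYPES[content_type],
        "type": content_type,
        "payload": payload,
    })

def route_message(message: str) -> str:
    """Return the UI component that should display the transferred content."""
    return json.loads(message)["target_ui"]
```

A configurable content type, as in the use cases above, then simply maps to a different `target_ui` value in the envelope.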

Short description:

The second software component implements several multimodal interaction techniques based on voice commands and gesture recognition. We developed a voice command recognition module that runs on a smart device with the Android operating system (such as a phone, tablet, or watch) connected to the WiFi network. Because speech recognition is generally affected by ambient noise, we also designed complementary gesture-based interaction techniques, using Leap Motion technology, for two generic actions: accepting and rejecting notifications.
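The voice/gesture complementarity described above can be sketched as a small fusion rule: prefer the voice channel in quiet conditions and fall back to the gesture channel when ambient noise is high. The function name and the noise threshold are illustrative assumptions, not the project's implementation:

```python
def accept_or_reject(voice_result, gesture_result, noise_level_db: float,
                     noise_threshold_db: float = 65.0):
    """Fuse the two input modalities for the two generic actions.

    Each result is "accept", "reject", or None (nothing recognized).
    In quiet conditions the voice command wins; under loud ambient noise,
    where speech recognition degrades, the Leap Motion gesture wins.
    """
    if noise_level_db < noise_threshold_db and voice_result is not None:
        return voice_result
    return gesture_result
```

For example, with a recognized voice command at 40 dB the voice result is used, while the same command at 80 dB defers to the gesture channel.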

Multimodal interaction using gestures captured by the Leap Motion controller and voice commands issued via an Android smartwatch.

Articles - direct results

Euphoria: A Scalable, Event-driven Architecture for Designing Interactions across Heterogeneous Devices in Smart Environments

Ovidiu-Andrei Schipor, Radu-Daniel Vatavu, and Jean Vanderdonckt
Information and Software Technology 109 (May 2019). Elsevier, 43-59. http://dx.doi.org/10.1016/j.infsof.2019.01.006

Abstract:

Context: From personal mobile and wearable devices to public ambient displays, our digital ecosystem has been growing with a large variety of smart sensors and devices that can capture and deliver insightful data to connected applications, creating thus the need for new software architectures to enable fluent and flexible interactions in such smart environments. Objective: We introduce EUPHORIA, a new software architecture design and implementation that enables easy prototyping, deployment, and evaluation of adaptable and flexible interactions across heterogeneous devices in smart environments. Method: We designed EUPHORIA by following the requirements of the ISO/IEC 25010:2011 standard on Software Quality Requirements and Evaluation applied to the specific context of smart environments. Results: To demonstrate the adaptability and flexibility of EUPHORIA, we describe three application scenarios for contexts of use involving multiple users, multiple input/output devices, and various types of smart environments, as follows: (1) wearable user interfaces and whole-body gesture input for interacting with public ambient displays, (2) multi-device interactions in physical-digital spaces, and (3) interactions on smartwatches for a connected car application scenario. We also perform a technical evaluation of EUPHORIA regarding the main factors responsible for the magnitudes of the request-response times for producing, broadcasting, and consuming messages inside the architecture. We deliver the source code of EUPHORIA free to download and use for research purposes.
Conclusion: By introducing EUPHORIA and discussing its applicability, we hope to foster advances and developments in new software architecture initiatives for our increasingly complex smart environments, but also to readily support implementations of novel interactive systems and applications for smart environments of all kinds.
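The produce/broadcast/consume pipeline evaluated in the abstract can be illustrated with a minimal in-process event bus. This is a toy analogue for intuition only, not Euphoria's actual API (which is distributed and asynchronous across heterogeneous devices):

```python
from collections import defaultdict

class EventBus:
    """Minimal publish/subscribe bus: producers publish messages on topics,
    consumers register callbacks per topic and receive each broadcast."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Broadcast the message to every consumer of this topic.
        for callback in self._subscribers[topic]:
            callback(message)

# Usage: a wearable producer broadcasting to an in-car display consumer.
bus = EventBus()
received = []
bus.subscribe("car/display", received.append)
bus.publish("car/display", {"type": "notification", "text": "Low fuel"})
```

The request-response times measured in the paper correspond to the full path of such a message from `publish` to every consumer callback, over the network rather than in-process.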

A Design Space for Vehicular LifeLogging to Support Creation of Digital Content in Connected Cars

Adrian Aiordăchioae, Radu-Daniel Vatavu, and Dorin Mircea Popovici
Proceedings of EICS ’19 (2019), the 11th ACM SIGCHI Symposium on Engineering Interactive Computing Systems. New York, NY, USA: ACM Press, Article No. 9, 6 Pages. http://dx.doi.org/10.1145/3319499.3328234

Abstract:

Connected cars can create, store, and share a wide variety of data reported by in-vehicle sensors and systems, but also by mobile and wearable devices, such as smartphones, smart-watches, and smartglasses, operated by the vehicle occupants. This wide variety of driving- and journey-related data creates ideal premises for vehicular logs with applications ranging from driving assistance to monitoring driving performance and to generating content for lifelogging enthusiasts. In this paper, we introduce a design space for vehicular lifelogging consisting of five dimensions: (1) nature and (2) source of the data, (3) actors, (4) locality, and (5) representation. We use our design space to characterize existing vehicular lifelogging systems, but also to inform the features of a new prototype for the creation of digital content in connected cars using a smartphone and a pair of smartglasses.

Towards Interactions with Augmented Reality Systems in Hyper-Connected Cars

Ovidiu-Andrei Schipor, Radu-Daniel Vatavu
Proceedings of HCI Engineering 2019 (2019), the 2nd Workshop on Charting the Way Towards Methods and Tools for Advanced Interactive Systems (in conjunction with ACM EICS '19), ISSN: 1613-0073, pp. 76-82.

Abstract:

Hyper-connected cars can store, process, and share a large amount and variety of digital content, which creates opportunities for using high-definition Augmented Reality (AR) and live video streaming to enhance current in-vehicle driving assistance and navigation systems. However, several challenges must be overcome to make such systems viable and efficient, such as dealing effectively with a variety of smart devices, platforms, and in-vehicle standards and technologies or delivering dynamic digital content to users in interactive time. In this paper, we propose a solution to these challenges by modeling the smart car as a distinct type of a smart environment. This model enables us to introduce a five-layer software architecture proposal based on Euphoria, a recent high-performing event-driven software architecture design for supporting effective communications between heterogeneous I/O devices in generic smart environments. We discuss the ways in which Euphoria can provide effective solutions to our identified challenges and hope that our contributions will stimulate interesting discussions towards defining a practical roadmap of engineering interactions with AR systems and high-definition video for hyper-connected cars.

In-Vehicle System for Adaptive Filtering of Notifications

Elena BAUTU, Carmen I. TUDOSE, Crenguta M. PUCHIANU
Proceedings of the International Conference on Human-Computer Interaction (RoCHI2019), ISSN: 2501-9422, Ed. MatrixRom, pp. 145-151.

Abstract:

Nowadays, there is a growing interest among drivers and car passengers in smartphone applications that adapt to the driving context of the car. In this paper we present a software system that filters notifications launched by the mobile phone depending on the driver's or co-driver's interests, needs, or preferences. The filtering is adaptive in the sense that it changes according to the driving conditions, which the system determines using data received from the smartphone's GPS and accelerometer. The system has been developed using the "separation of concerns" principle, which guarantees an iterative and incremental development of the system. It has so far been tested by 28 users, who, judging by their answers to the usability test, appreciated the system's quality.
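The adaptive filtering idea in the abstract above can be sketched as a rule that combines notification priority with the driving context inferred from GPS speed and accelerometer readings. The thresholds and priority scale are illustrative assumptions, not those of the published system:

```python
def should_deliver(notification_priority: int, speed_kmh: float,
                   accel_magnitude_ms2: float) -> bool:
    """Decide whether to deliver a notification given the driving context.

    priority: 0 (lowest) .. 3 (highest); speed from GPS; acceleration
    magnitude from the accelerometer (illustrative thresholds).
    """
    if speed_kmh < 5:                      # parked or stopped: deliver everything
        return True
    if accel_magnitude_ms2 > 3.0:          # hard braking/maneuvering: block all
        return False
    return notification_priority >= 2      # while driving, high priority only
```

The adaptivity lies in re-evaluating this rule as the sensor readings change, so the same notification may be delivered at a red light but suppressed on the highway.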

Articles - related results

Cultural Heritage Interactive Dissemination through Natural Interaction

Emanuela BRAN, Elena BAUTU, Dorin-Mircea POPOVICI, Vasili BRAGA, Irina COJUHARI
Proceedings of the International Conference on Human-Computer Interaction (RoCHI2019), pp. 156-161, ISSN: 2501-9422, Ed. MatrixRom.

Abstract:

Virtual Heritage is widely used in education, enhancing the learning process through motivation by providing a natural experience. New ways for users to explore cultural heritage virtual environments represent a dynamic research area, as they build on the latest technologies such as mobile, wearable, or ubiquitous interfaces. We have designed SNAIP, a distributed system based on natural interaction, for exploring a virtual heritage environment populated with interactive agents.

Phase 1 - 2018

Reports

Design report (abstract)

Tests report (abstract)

Applications

Euphoria - A Scalable, Event-driven Architecture for Designing Flexible, Heterogeneous, Asynchronous Interactions in Smart Environments

Short description:

Euphoria is a new software architecture design and implementation that enables prototyping and evaluation of flexible, asynchronous interactions between users, personal devices, and public installations, systems, and services within smart environments, such as new user interactions for smart in-vehicle environments. A demonstration of Euphoria is available here!

GenericProducer & GenericConsumer - prototype implementations of client software components for Euphoria software infrastructure

Short description:

The GenericProducer application currently includes three types of content producers: a generic, customizable producer that dynamically delivers notifications/messages of configurable size; a producer that generates messages from a weather-information service, as an example of using a service based on a phone sensor; and a producer that picks up the usual notifications from the mobile device.

Short description:

The GenericConsumer application receives JSON objects from Euphoria and displays information about them on the consumer's display. At this stage, the purpose of the display was to test the reliability of the approach. When displaying the information produced by the GenericProducer, we first considered the visibility of the information under different environmental conditions, as well as minimizing distraction of the attention of traffic participants in general, and of the driver in particular.
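A consumer of this kind can be sketched as a function that parses an incoming JSON message and picks display parameters addressing the two concerns above: visibility under different lighting conditions and limiting driver distraction. The field names, truncation length, and lux threshold are illustrative assumptions, not the GenericConsumer's actual implementation:

```python
import json

def render_for_driver(raw: str, ambient_lux: float) -> dict:
    """Parse a producer's JSON message and choose display parameters:
    high-contrast rendering in bright light, and a shortened body to
    limit the driver's glance time (illustrative thresholds)."""
    msg = json.loads(raw)
    text = msg.get("text", "")
    return {
        "text": text[:40] + ("..." if len(text) > 40 else ""),
        "high_contrast": ambient_lux > 10000,   # roughly direct sunlight
    }
```

Truncating the body and switching contrast are only two of the possible adaptations; the same hook could also adjust font size or defer the message entirely.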

Our demo movie is here:

Articles - direct results

Euphoria: A Scalable, Event-driven Architecture for Designing Flexible, Heterogeneous, Asynchronous Interactions in Smart Environments

Ovidiu-Andrei SCHIPOR, Radu-Daniel VATAVU, Jean VANDERDONCKT
Information and Software Technology, Elsevier IF: 2.627, 5-year IF: 2.768 (under review)

Augmenting Selection by Intention for In-Vehicle Control and Command

Catalin DIACONESCU, Dragos Florin SBURLAN and Dorin-Mircea POPOVICI
Proceedings of the International Conference on Human-Computer Interaction (RoCHI2018), ISSN: 2501-9422, pp. 107-110, Ed. MatrixRom, 2018.

Abstract:

Our paper proposes an in-vehicle interactive environment that enables the driver to interact with a computer-based (multimedia) system through intentional gestures. To this end we used two complementary gesture detection technologies: Leap Motion, responsible for user gesture detection, and Myo, a second interaction device that gives the driver the possibility to switch between launched applications and supplementary controls. Based on preliminary results of the usability test we conducted for our solution, we discuss the advantages and disadvantages of using two different gesture recognition technologies for in-vehicle interactive environments. We conclude with the main issues identified so far and some future directions for our efforts.

Speech & Speaker recognition for Romanian Language

Eugeniu VEZETEU, Dragos SBURLAN and Elena PELICAN
Proceedings of the International Conference on Human-Computer Interaction (RoCHI2018), ISSN: 2501-9422, pp. 54-62, Ed. MatrixRom, 2018.

Abstract:

The present paper illustrates the main methods that can be employed to build a speech and speaker recognition system for the Romanian language. To this aim, we start by presenting the classical approach of extracting Mel Frequency Cepstral Coefficients features from a dataset of speech signals (which represent some words/phrases in the Romanian language). The recognition is done either by using Dynamic Time Warping (DTW) or by training a Convolutional Neural Network. A comparison between these models is presented and commented on. Once such a system is developed, we proceed further by implementing an application that listens for and executes some predefined commands. In our setup, the system performs two main tasks: it recognizes the user by his voice and executes a task corresponding to the vocal command.

Articles - related results

Filter Application on Facial Features in a Smartphone App

Sofia MORAR, Elena PELICAN and Dorin-Mircea POPOVICI
Proceedings of the International Conference on Human-Computer Interaction (RoCHI2018), ISSN: 2501-9422, pp. 5-11, Ed. MatrixRom, 2018.

Abstract:

Filter application on facial features is a rather new field and it has quickly become essential in all the social network applications. Fast and accurate filter application is still a field to be explored. In this paper, an automatic application of filters of different makeup products (lipstick and blush) is developed on the facial features of interest (lips and cheeks). Facial features are recognized and extracted with the help of a Machine Learning API (Application Programming Interface), Google Mobile Vision API. The application of the filters is developed using Bezier curves and the basic principles of graphics for the correct rendering of layers. The application of the filters is developed in an iOS App, with two functionalities, application on a picture, as well as in real-time through the phone’s camera, where we also use a Convolutional Neural Network (CNN) in order to recognize the user’s face, which introduces us into the field of Augmented Reality (AR) and Deep Learning.

From Exploration of Virtual Replica to Cultural Immersion through Natural Gestures

Catalin DIACONESCU, Matei-Ioan POPOVICI and Dorin-Mircea POPOVICI
Proceedings of the 1st International Conference on VR Technologies in Cultural Heritage 2018, ISBN: 978-3-030-05819-7

Abstract:

We investigate in this work the potential of multimodal rendering for assisting users during culturally-related navigation and manipulation tasks inside virtual environments. We argue that natural gestures play an important role in engaging users in experiencing the cultural dimension of a given environment. To this end, we propose an open system for multi-user visualization and interaction that enables users to employ natural gestures. We explored different configurations and controls in order to achieve the most accurate and natural user experience: one switches between the navigation and manipulation modes based on distance and orientation towards different points of interest, and the other is based on interacting with a virtual UI used for switching between the two modes. We also implemented both a single-user and a multi-user version. The single-user version, with a normal, computer-monitor-based point of view, is better suited for a more accurate and detailed viewing experience; in this version the user wears the Myo armband and also uses the Leap Motion for a more immersive experience. The multi-user version is based on a holographic pyramid which offers two user perspectives, one for the Myo user and the other for the Leap Motion user, and two for the spectators' points of view. Finally, we discuss findings on the users' perceptions of the experienced cultural immersion.

Meet The Team

OVIDIUS University of Constanta - The Research in Virtual and Augmented Reality Lab (CeRVA)

Dorin-Mircea POPOVICI

P2 project leader / researcher
UEF-ID: U-1700-039W-6468

Virtual and augmented reality, mixed environments for education, training and cultural heritage

Dragos-Florin SBURLAN

Researcher
UEF-ID: U-1700-030V-4710

Dragos Sburlan is Associate Professor of Computer Science at the Ovidius University of Constanta. His main interests regard the development of new computing paradigms and algorithms in the field of theoretical computer science.

Crenguta-Madalina PUCHIANU

Researcher
UEF-ID: U-1700-035G-1876

Associate Professor at the Faculty of Mathematics and Computer Science of the Ovidius University of Constanta, Romania. Her main competencies are in the following areas: semantic web, and information and software systems engineering.

Elena BAUTU

Researcher
UEF-ID: U-1700-039R-1944

Elena Bautu is Senior Lecturer of Computer Science at the Ovidius University of Constanta. Her main research results concern the development of novel hybrid evolutionary methods for regression and classification problems.

Emanuela BRAN

Research Assistant, PhD Student
UEF-ID: U-1900-061Y-9866

TO DO

Stefan cel Mare University of Suceava - The Machine Intelligence and Information Visualization Lab (MINTVIZ)

Radu-Daniel VATAVU

Partner leader / Researcher

Radu is a Professor of Computer Science at the University of Suceava and the scientific leader of the MintViz lab. He is interested in applying AI techniques to design useful and usable interactions between humans, computers, and environments.

Stefan-Gheorghe PENTIUC

Researcher

Professor of Computer Science at the University of Suceava, and Dean of the Faculty of Electrical Engineering and Computer Science. His research interests are pattern recognition, image processing, and distributed and mobile computing.

Ovidiu-Andrei SCHIPOR

Researcher

Ovidiu is an Associate Professor of Computer Science at the University of Suceava. He is interested in speech processing and human movement analysis for intelligent user interfaces.

Laura-Bianca BILIUS

Research Assistant, PhD Student

Laura is a PhD student in Computer Science at the University of Suceava, advised by Stefan-Gheorghe Pentiuc. She is interested in pattern recognition, tensor mathematics, gesture input, and image processing.

Get in Touch


Contact Info

Ovidius University of Constanta
124 Mamaia Bd, 900527
Constanta, Romania
Phone: +40 241 606 467