<!-- CALL FOR SPECIAL SESSIONS -->
{| id="mp-upper" style="width: 100%; margin:4px 0 0 0; background:none; border-spacing: 0px;"
<div id="top">
<!-- CFP Download -->
| style="border:1px solid transparent;" |<br />
|-

<!-- List of Special Sessions of MFI 2016 -->
{| id="mp-upper" style="width: 100%; margin:4px 0 0 0; background:none; border-spacing: 0px;"
| class="MainPageBG" style="width:100%; border:1px solid #f2ea7e; background:#ffffe8; vertical-align:top; color:#000;" |
{| id="mp-left" style="width:100%; vertical-align:top; background:#ffffe8;"
| style="padding:2px;" | <h2 id="mp-tfa-h2" style="margin:3px; background:#fff7bd; font-family:inherit; font-size:120%; font-weight:bold; border:1px solid #f2ea7e; text-align:left; color:#000; padding:0.2em 0.4em;">List of Special Sessions of MFI 2016</h2>
|-
| style="color:#000;" | <div id="mp-tfa" style="padding:2px 5px">
As additional special sessions are announced, the title of each confirmed special session will be added to the topic list in the paper submission interface. Authors who wish to submit to a special session should delay their submission until that session becomes available in the interface. <!--If the special session you wish to submit to is not available yet, consider delaying your submission until it becomes available. In case you find a new special session that appears to be a good match for your paper and would like your paper to be considered for presentation in that session, you can use the “Edit Submission” tool to add that session to the topics associated with your paper.-->

* [[Special_Sessions#ss1| SS1 Multi-Sensor Data Fusion for Autonomous Vehicles]]
* [[Special_Sessions#ss2| SS2 Kalman Filters in Nonlinear State Estimation]]
* [[Special_Sessions#ss3| SS3 Data Fusion Methods for Indoor Localization of People and Objects]]
* [[Special_Sessions#ss4| SS4 Multimodal Image Processing and Fusion]]
* [[Special_Sessions#ss5| SS5 Homotopy Methods in State Estimation]]
* [[Special_Sessions#ss6| SS6 Data Fusion in Sensor-based Sorting]]
* [[Special_Sessions#ss7| SS7 Multi-Robot Systems and Mobile Sensor Networks]]
* [[Special_Sessions#ss8| SS8 Multisensor Fusion Methods for Radiation Source Localization]]
* [[Special_Sessions#ss9| SS9 Multiple (Extended) Object Tracking]]
* [[Special_Sessions#ss10| SS10 Neurorobotics - a promising perspective on synergies between neuroscience and robotics]]
</div>
|}
| style="border:1px solid transparent;" |<br />
|-

<!-- MFI 2016 Accepted Special Sessions -->
{| id="mp-upper" style="width: 100%; margin:4px 0 0 0; background:none; border-spacing: 0px;"
<div id="ss1"></div>
<!-- SS1 Multi-Sensor Data Fusion for Autonomous Vehicles -->
| class="MainPageBG" style="width:100%; border:1px solid #d6bdde; background:#f7eff7; vertical-align:top; color:#000;" |
{| id="mp-left" style="width:100%; vertical-align:top; background:#f7eff7;"
| style="padding:2px;" | <h2 id="mp-tfa-h2" style="margin:4px; background:#e7deef; font-family:inherit; font-size:120%; font-weight:bold; border:1px solid #d6bdde; text-align:left; color:#000; padding:0.2em 0.4em;">SS1 Multi-Sensor Data Fusion for Autonomous Vehicles</h2>
|-
| style="color:#000;" | <div id="mp-tfa" style="padding:2px 5px">
'''Description:''' Automotive transportation is currently evolving at an unprecedented rate. Future vehicles operating in a highly assisted and autonomous mode will need the ability to function on any possible road, in a safe, legal, and socially acceptable manner and without access to high-precision, precompiled maps. Therefore, at the heart of any autonomous functionality will be the ability of a vehicle to sense its environment, produce a map of static objects in the environment, and track both itself and other dynamic targets within that environment. For autonomous vehicles to be commercially viable, they must achieve this high level of situational awareness using only commercially viable sensors. Multi-sensor data fusion offers the ability to greatly reduce the uncertainty of state estimates and to estimate physical states which might otherwise be unobservable.

In addition to the challenge of fusing sensors organic to the vehicle, many future infrastructure projects envisage a concept in which vehicles are connected with each other and with sensors in the infrastructure to further improve situational awareness. This presents many challenges, as there may be poorly understood correlations or limited trust in externally shared information, along with constraints in terms of bandwidth and computational capacity. Furthermore, there exist both challenges and opportunities for understanding and characterising the interaction between the driver and the vehicle.
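
The following sketch is an illustrative aside rather than part of the session description: covariance intersection is one well-known recipe for fusing two estimates whose cross-correlation is unknown, as in the vehicle-to-infrastructure setting above. All variable names and numbers are invented for the example.
<pre>
# Illustrative sketch only: covariance intersection (CI) fuses two estimates
# (x_a, P_a) and (x_b, P_b) without knowing their cross-correlation.
import numpy as np

def covariance_intersection(x_a, P_a, x_b, P_b, omega=0.5):
    """Fuse two estimates conservatively; omega in [0, 1] weights the sources."""
    P_a_inv = np.linalg.inv(P_a)
    P_b_inv = np.linalg.inv(P_b)
    P_fused = np.linalg.inv(omega * P_a_inv + (1.0 - omega) * P_b_inv)
    x_fused = P_fused @ (omega * P_a_inv @ x_a + (1.0 - omega) * P_b_inv @ x_b)
    return x_fused, P_fused

# Example: fuse an on-board lidar-based and an infrastructure-based position estimate.
x_lidar, P_lidar = np.array([10.2, 4.1]), np.diag([0.5, 0.5])
x_v2x,   P_v2x   = np.array([10.6, 3.8]), np.diag([1.0, 0.3])
x_f, P_f = covariance_intersection(x_lidar, P_lidar, x_v2x, P_v2x, omega=0.6)
print(x_f, np.diag(P_f))
</pre>
The weight omega trades off the two sources; any choice in [0, 1] yields a consistent, if conservative, fused covariance even when the correlation between the estimates is unknown.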

'''Organizers:''' [mailto:d.s.clarke@cranfield.ac.uk Daniel Clarke], [mailto:michael.fiegert@siemens.com Michael Fiegert], [mailto:zhang@fortiss.org Feihu Zhang], [mailto:gulati@fortiss.org Dhiraj Gulati], [mailto:benjamin.noack@kit.edu Benjamin Noack], [mailto:florian.faion@kit.edu Florian Faion]

</div>
|-
|}
| style="border:1px solid transparent;" |<br />
|-

<!-- MFI 2016 Accepted Special Sessions -->
{| id="mp-upper" style="width: 100%; margin:4px 0 0 0; background:none; border-spacing: 0px;"
<div id="ss2"></div>
<!-- SS2 Kalman Filters in Nonlinear State Estimation -->
| class="MainPageBG" style="width:100%; border:1px solid #f36766; background:#f9d6c9; vertical-align:top; color:#000;" |
{| id="mp-left" style="width:100%; vertical-align:top; background:#f9d6c9;"
| style="padding:2px;" | <h2 id="mp-tfa-h2" style="margin:3px; background:#f5baa3; font-family:inherit; font-size:120%; font-weight:bold; border:1px solid #f36766; text-align:left; color:#000; padding:0.2em 0.4em;">SS2 Kalman Filters in Nonlinear State Estimation</h2>
|-
| style="color:#000;" | <div id="mp-tfa" style="padding:2px 5px">
'''Description:''' Nonlinear state estimation is an important component in navigation, robotics, object tracking, and many other current research fields. Besides popular but computationally expensive particle filters, variants of nonlinear Kalman filters or LMMSE estimators are widely used methods for state estimation. Such filters include, for example, the Unscented Kalman Filter, the Divided Difference Filter, and iterated Kalman filters. This session aims to cover recent advances in the area of nonlinear Kalman filters, with an emphasis on sampling and sigma-point set design, linearization techniques, and applications of Kalman filters in nonlinear state estimation scenarios.
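
Purely for illustration (not part of the session description), a minimal sketch of the unscented transform that underlies sigma-point Kalman filters; the measurement function, the kappa parameter, and the numbers are example assumptions.
<pre>
# Illustrative sketch only: the unscented transform at the core of sigma-point
# Kalman filters -- propagate a Gaussian (mean, covariance) through a nonlinear g(x).
import numpy as np

def unscented_transform(g, x_mean, P, kappa=0.0):
    n = x_mean.size
    S = np.linalg.cholesky((n + kappa) * P)                 # matrix square root
    sigma = np.vstack([x_mean, x_mean + S.T, x_mean - S.T]) # 2n+1 sigma points
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    y = np.array([g(s) for s in sigma])                     # propagate each point
    y_mean = w @ y
    Y = y - y_mean
    P_y = (w[:, None] * Y).T @ Y                            # transformed covariance
    return y_mean, P_y

# Example: polar-to-Cartesian measurement function.
g = lambda x: np.array([x[0] * np.cos(x[1]), x[0] * np.sin(x[1])])
m, C = unscented_transform(g, np.array([10.0, 0.5]), np.diag([0.2, 0.01]), kappa=1.0)
</pre>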

'''Organizers:''' [mailto:jannik.steinbring@kit.edu Jannik Steinbring], [mailto:uwe.hanebeck@kit.edu Uwe D. Hanebeck]
</div>
|-
|}
| style="border:1px solid transparent;" |<br />
|-

<!-- MFI 2016 Accepted Special Sessions -->
{| id="mp-upper" style="width: 100%; margin:4px 0 0 0; background:none; border-spacing: 0px;"
<div id="ss3"></div>
<!-- SS3 Data Fusion Methods for Indoor Localization of People and Objects -->
| class="MainPageBG" style="width:100%; border:1px solid #a3babf; background:#f5fdff; vertical-align:top; color:#000;" |
{| id="mp-left" style="width:100%; vertical-align:top; background:#f5fdff;"
| style="padding:2px;" | <h2 id="mp-tfa-h2" style="margin:3px; background:#ceecf2; font-family:inherit; font-size:120%; font-weight:bold; border:1px solid #a3babf; text-align:left; color:#000; padding:0.2em 0.4em;">SS3 Data Fusion Methods for Indoor Localization of People and Objects</h2>
|-
| style="color:#000;" | <div id="mp-tfa" style="padding:2px 5px">
'''Description:''' Indoor positioning has gained great importance as technology allows for affordable real-time sensing and processing systems. Researchers and developers can take advantage of the pervasiveness of WSNs (e.g., in the form of WLAN) and mobile sensors (such as smartphones) to obtain more accurate results by exploiting already existing infrastructure. Applications for indoor positioning include pedestrian navigation in public buildings and shops, location-based services, safety for the elderly and impaired, museum guides, surveillance tasks, and also tracking products in manufacturing, warehousing, etc. Unlike outdoor environments, which are covered by GNSS to a satisfying extent, indoor navigation faces additional challenges depending on the underlying measurement system, such as occlusions, reflections, and attenuation. While there is a great variety of sensors and measuring principles, in practice every single measuring technique suffers from deficits. While RF and (ultra-)sound are subject to multipath propagation, optical systems are intolerant to NLOS conditions. Some systems require setting up beacons, while others are self-calibrating and easy to install. Data fusion can overcome these limitations by combining complementary and redundant sensing techniques with the application of algorithmic methods such as stochastic filtering.
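
As a small, purely illustrative sketch (not part of the session description): fusing a pedestrian dead-reckoning prediction with a noisier WLAN position fix through a linear Kalman update, with invented positions and noise levels.
<pre>
# Illustrative sketch only: fuse a step-counting/heading prediction with a
# WLAN-based position fix via a linear Kalman update (identity measurement model).
import numpy as np

def kalman_position_update(x_pred, P_pred, z_wlan, R_wlan):
    """x_pred, z_wlan: 2D positions; P_pred, R_wlan: 2x2 covariances."""
    S = P_pred + R_wlan                      # innovation covariance (H = I)
    K = P_pred @ np.linalg.inv(S)            # Kalman gain
    x_upd = x_pred + K @ (z_wlan - x_pred)
    P_upd = (np.eye(2) - K) @ P_pred
    return x_upd, P_upd

x_pred = np.array([12.3, 7.9])               # prediction from dead reckoning
P_pred = np.diag([0.4, 0.4])
z_wlan = np.array([13.5, 7.1])               # WLAN fingerprinting fix (noisier)
R_wlan = np.diag([4.0, 4.0])
x, P = kalman_position_update(x_pred, P_pred, z_wlan, R_wlan)
</pre>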

'''Organizers:''' [mailto:antonio.zea@kit.edu Antonio Zea], [mailto:florian.faion@kit.edu Florian Faion], [mailto:uwe.hanebeck@kit.edu Uwe D. Hanebeck]
</div>
|-
|}
| style="border:1px solid transparent;" |<br />
|-

<!-- MFI 2016 Accepted Special Sessions -->
{| id="mp-upper" style="width: 100%; margin:4px 0 0 0; background:none; border-spacing: 0px;"
<div id="ss4"></div>
<!-- SS4 Multimodal Image Processing and Fusion -->
| class="MainPageBG" style="width:100%; border:1px solid #bdd6c6; background:#e7f7e7; vertical-align:top; color:#000;" |
{| id="mp-left" style="width:100%; vertical-align:top; background:#e7f7e7;"
| style="padding:2px;" | <h2 id="mp-tfa-h2" style="margin:3px; background:#d6efd6; font-family:inherit; font-size:120%; font-weight:bold; border:1px solid #bdd6c6; text-align:left; color:#000; padding:0.2em 0.4em;">SS4 Multimodal Image Processing and Fusion</h2>
|-
| style="color:#000;" | <div id="mp-tfa" style="padding:2px 5px">
'''Description:''' Since the launch of the first version of the Microsoft Kinect in 2010, setting up networks based on multimodal image sensors has become extremely popular. The novelty of these devices includes the availability of not only color information, but also infrared and depth information of a scene, at a price affordable to laymen. The combination of multiple sensors and image modalities has many advantages, such as simultaneous coverage of large environments, increased resolution, redundancy, multimodal scene information, and robustness against occlusion. However, in order to exploit these benefits, multiple challenges also need to be addressed: synchronization, calibration, registration, multi-sensor fusion, large amounts of data, and last but not least, sensor-specific stochastic and set-valued uncertainties. This Special Session addresses fundamental techniques, recent developments and future research directions in the field of multimodal image processing and fusion.
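
A brief illustrative aside (not part of the session description): registering a depth pixel from an RGB-D sensor into the color image, one instance of the registration step mentioned above; the intrinsics, extrinsics, and sample pixel below are invented values.
<pre>
# Illustrative sketch only: back-project a depth pixel, transform it into the
# color camera frame, and project it into the color image.
import numpy as np

K_depth = np.array([[580.0, 0, 320.0], [0, 580.0, 240.0], [0, 0, 1.0]])
K_color = np.array([[530.0, 0, 320.0], [0, 530.0, 240.0], [0, 0, 1.0]])
R = np.eye(3)                     # rotation depth camera -> color camera
t = np.array([0.025, 0.0, 0.0])   # 2.5 cm baseline between the cameras

def depth_pixel_to_color_pixel(u, v, depth_m):
    """Map a depth-image pixel (u, v) at range depth_m to color-image coordinates."""
    p_depth = depth_m * np.linalg.inv(K_depth) @ np.array([u, v, 1.0])
    p_color = R @ p_depth + t
    uvw = K_color @ p_color
    return uvw[:2] / uvw[2]

print(depth_pixel_to_color_pixel(400, 260, 1.8))
</pre>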

'''Organizers:''' [mailto:antonio.zea@kit.edu Antonio Zea], [mailto:florian.faion@kit.edu Florian Faion], [mailto:uwe.hanebeck@kit.edu Uwe D. Hanebeck]
</div>
|-
|}
| style="border:1px solid transparent;" |<br />
|-

<!-- MFI 2016 Accepted Special Sessions -->
{| id="mp-upper" style="width: 100%; margin:4px 0 0 0; background:none; border-spacing: 0px;"
<div id="ss5"></div>
<!-- SS5 Homotopy Methods in State Estimation -->
| class="MainPageBG" style="width:100%; border:1px solid #f2ea7e; background:#ffffe8; vertical-align:top; color:#000;" |
{| id="mp-left" style="width:100%; vertical-align:top; background:#ffffe8;"
| style="padding:2px;" | <h2 id="mp-tfa-h2" style="margin:3px; background:#fff7bd; font-family:inherit; font-size:120%; font-weight:bold; border:1px solid #f2ea7e; text-align:left; color:#000; padding:0.2em 0.4em;">SS5 Homotopy Methods in State Estimation</h2>
|-
| style="color:#000;" | <div id="mp-tfa" style="padding:2px 5px">
'''Description:''' In Bayesian state estimation, the inherent uncertainties of the underlying system and measurements are represented as probability density functions. Given new measurements, the predicted system state is updated to incorporate the new information. Traditionally, this information is introduced directly, which can lead to problems in recursive applications. In the case of discrete particle representations of the densities, for example, the problem of particle degeneration is well known and has to be corrected for. Another approach is to gradually incorporate the new information through homotopy methods, allowing for a smooth transition of the underlying densities.

This special session is concerned with theoretical and practical aspects of homotopy methods in the context of state estimation, and all works pertaining to fundamental techniques, recent developments, and future research directions in this field are invited.
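
For illustration only (a generic sketch, not the specific algorithms solicited here): a progressive, homotopy-style particle update in which the likelihood is introduced in small fractional steps rather than all at once; the step schedule and noise levels are example assumptions.
<pre>
# Illustrative sketch only: gradual measurement update for a particle set.
# The likelihood is applied as p(z|x)^d1 * ... * p(z|x)^dK with sum(d) = 1,
# resampling between steps to counter particle degeneration.
import numpy as np

def progressive_update(particles, log_likelihood, deltas):
    assert abs(sum(deltas) - 1.0) < 1e-9
    n = len(particles)
    for delta in deltas:
        log_w = delta * log_likelihood(particles)   # partial likelihood step
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        idx = np.random.choice(n, size=n, p=w)      # resample
        particles = particles[idx]
    return particles

# Example: scalar state, Gaussian prior particles, Gaussian likelihood around z = 2.
particles = np.random.normal(0.0, 1.0, size=1000)
loglik = lambda x: -0.5 * ((x - 2.0) / 0.5) ** 2
posterior = progressive_update(particles, loglik, deltas=[0.1] * 10)
</pre>
Splitting the update into ten steps of weight 0.1 keeps the intermediate weight distributions flatter than a single full update, which is the basic motivation behind homotopy-based filtering.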

'''Organizers:''' [mailto:martin.pander@kit.edu Martin Pander], [mailto:uwe.hanebeck@kit.edu Uwe D. Hanebeck]
</div>
|-
|}
| style="border:1px solid transparent;" |<br />
|-
<!-- MFI 2016 Accepted Special Sessions -->
{| id="mp-upper" style="width: 100%; margin:4px 0 0 0; background:none; border-spacing: 0px;"
<div id="ss6"></div>
<!-- SS6 Data Fusion in Sensor-based Sorting -->
| class="MainPageBG" style="width:100%; border:1px solid #d6bdde; background:#f7eff7; vertical-align:top; color:#000;" |
{| id="mp-left" style="width:100%; vertical-align:top; background:#f7eff7;"
| style="padding:2px;" | <h2 id="mp-tfa-h2" style="margin:4px; background:#e7deef; font-family:inherit; font-size:120%; font-weight:bold; border:1px solid #d6bdde; text-align:left; color:#000; padding:0.2em 0.4em;">SS6 Data Fusion in Sensor-based Sorting</h2>
|-
| style="color:#000;" | <div id="mp-tfa" style="padding:2px 5px">
'''Description:''' Sensor-based sorting is an established technology for sorting various products according to quality aspects. Fields of application include food processing, recycling, and industrial mineral processing. The selection of a sensor suitable for material characterization typically depends on the product under inspection as well as on the sorting task itself. However, in many cases increased sorting performance is achieved by combining information retrieved from multiple different sensors, for example line-scan and area-scan cameras, near-infrared cameras, X-ray, 3D sensors, or hyperspectral cameras. Hence, sensor data needs to be fused to increase performance by putting information into a temporal and/or spatial context. This applies to systems including several sensors of the same kind as well as to heterogeneous combinations. Additionally, sensor-based sorting systems are typically restricted in terms of the time available to derive a sorting decision. Therefore, real-time capable information fusion methods are required.

'''Organizers:''' [mailto:georg.maier@iosb.fraunhofer.de Georg Maier], [mailto:Robin.Gruna@iosb.fraunhofer.de Robin Gruna]
</div>
|-
|}
| style="border:1px solid transparent;" |<br />
|-

<!-- MFI 2016 Accepted Special Sessions -->
{| id="mp-upper" style="width: 100%; margin:4px 0 0 0; background:none; border-spacing: 0px;"
<div id="ss7"></div>
<!-- SS7 Multi-Robot Systems and Mobile Sensor Networks -->
| class="MainPageBG" style="width:100%; border:1px solid #f36766; background:#f9d6c9; vertical-align:top; color:#000;" |
{| id="mp-left" style="width:100%; vertical-align:top; background:#f9d6c9;"
| style="padding:2px;" | <h2 id="mp-tfa-h2" style="margin:3px; background:#f5baa3; font-family:inherit; font-size:120%; font-weight:bold; border:1px solid #f36766; text-align:left; color:#000; padding:0.2em 0.4em;">SS7 Multi-Robot Systems and Mobile Sensor Networks</h2>
|-
| style="color:#000;" | <div id="mp-tfa" style="padding:2px 5px">
'''Description:''' The objective of this special session is to provide an international forum for the discussion of recent developments and advances in the field of multi-robot systems and mobile sensor networks. In-depth discussions of relevant theories and applications related to multi-robot systems and mobile sensor networks are expected, including the presentation of results of applications to real-world land, sea, underwater, aerial, and space multi-vehicle systems, as well as strong theoretical contributions. Additionally, the special session welcomes papers that explore new ways of using visual sensors to solve problems in robotics.

'''Organizers:''' [mailto:Joachim.Horn@hsu-hh.de Joachim Horn], [mailto:hla@unr.edu Hung M. La], [mailto:gronemem@hsu-hh.de Marcus Gronemeyer], [mailto:adang@hsu-hh.de Anh Duc Dang]
</div>
|-
|}
| style="border:1px solid transparent;" |<br />
|-

<!-- MFI 2016 Accepted Special Sessions -->
{| id="mp-upper" style="width: 100%; margin:4px 0 0 0; background:none; border-spacing: 0px;"
<div id="ss8"></div>
<!-- SS8 Multisensor Fusion Methods for Radiation Source Localization -->
| class="MainPageBG" style="width:100%; border:1px solid #a3babf; background:#f5fdff; vertical-align:top; color:#000;" |
{| id="mp-left" style="width:100%; vertical-align:top; background:#f5fdff;"
| style="padding:2px;" | <h2 id="mp-tfa-h2" style="margin:3px; background:#ceecf2; font-family:inherit; font-size:120%; font-weight:bold; border:1px solid #a3babf; text-align:left; color:#000; padding:0.2em 0.4em;">SS8 Multisensor Fusion Methods for Radiation Source Localization</h2>
|-
| style="color:#000;" | <div id="mp-tfa" style="padding:2px 5px">
'''Description:''' Locating a dangerous source of penetrating (i.e., gamma and neutron) radiation in an urban environment is a critical mission for nuclear counterterrorism and emergency response. Traditional methods for localizing a radiation source have primarily relied on individual, non-networked radiation sensors whose responses are used by loosely coordinated operators to collaboratively locate the source. Recently, significant progress has been made in developing rigorous methods for simultaneously analyzing the response of a network of radiation sensors. This session will present recent work on the analysis of radiation sensor networks to optimize the resources and time required to locate a dangerous radiation source in an urban environment.
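
As a purely illustrative sketch (not part of the session description or any particular method presented in it): a simple maximum-likelihood grid search that fuses Poisson count data from a small detector network under an inverse-square model; the geometry, source strength, and background rate are invented example values.
<pre>
# Illustrative sketch only: fuse count measurements from four detectors to
# locate a point source, assuming inverse-square attenuation and a known
# background rate, by maximizing the joint Poisson log-likelihood on a grid.
import numpy as np
from scipy.stats import poisson

detectors = np.array([[0.0, 0.0], [50.0, 0.0], [0.0, 50.0], [50.0, 50.0]])  # m
counts = np.array([120, 35, 40, 18])       # observed counts per detector
background, dwell = 5.0, 1.0               # background rate (1/s), dwell time (s)

def expected_counts(src_xy, strength):
    d2 = np.sum((detectors - src_xy) ** 2, axis=1) + 1.0   # avoid divide-by-zero
    return dwell * (strength / d2 + background)

# Grid search over candidate source positions for a fixed assumed strength.
xs = ys = np.linspace(0.0, 50.0, 101)
best, best_ll = None, -np.inf
for x in xs:
    for y in ys:
        ll = poisson.logpmf(counts, expected_counts(np.array([x, y]), 1.0e4)).sum()
        if ll > best_ll:
            best, best_ll = (x, y), ll
print("ML source estimate:", best)
</pre>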

'''Organizers:''' [mailto:john_mattingly@ncsu.edu John Mattingly]
</div>
|-
|}
| style="border:1px solid transparent;" |<br />
|-

<!-- MFI 2016 Accepted Special Sessions -->
{| id="mp-upper" style="width: 100%; margin:4px 0 0 0; background:none; border-spacing: 0px;"
<div id="ss9"></div>
<!-- SS9 Multiple (Extended) Object Tracking -->
| class="MainPageBG" style="width:100%; border:1px solid #bdd6c6; background:#e7f7e7; vertical-align:top; color:#000;" |
{| id="mp-left" style="width:100%; vertical-align:top; background:#e7f7e7;"
| style="padding:2px;" | <h2 id="mp-tfa-h2" style="margin:3px; background:#d6efd6; font-family:inherit; font-size:120%; font-weight:bold; border:1px solid #bdd6c6; text-align:left; color:#000; padding:0.2em 0.4em;">SS9 Multiple (Extended) Object Tracking</h2>
|-
| style="color:#000;" | <div id="mp-tfa" style="padding:2px 5px">
'''Description:''' Autonomous driver safety functions are standard in many modern cars, and semi-automated systems (e.g., traffic jam assist) are becoming more and more common. Construction of a driverless vehicle requires solutions to many different problems, among them multiple object tracking. The multiple object tracking problem is defined as keeping track of an unknown number of moving objects; historically, the focus has been on so-called point objects, which give at most one detection per time step. However, modern sensors have increasingly high resolution, meaning that it is common to see multiple detections per object. For example, this is the case when automotive radar or lidar sensors are used. In order to be able to use point object algorithms for these sensors, heuristic clustering algorithms are applied to the raw measurements to obtain object hypotheses. In challenging scenarios, the hard decisions of the clustering algorithms affect the performance of the tracking algorithm due to the associated loss of information. Consequently, so-called extended object tracking algorithms, which are capable of handling several measurements per object, are required. This special session addresses recent results in the area of multiple object tracking for both point objects and especially extended objects.
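
A toy illustration (not part of the session description): summarizing an extended object by the centroid and scatter of its detection cluster, recursively smoothed over scans; this is a simplified stand-in for the random-matrix-style extent models used in the literature, with invented example data.
<pre>
# Illustrative sketch only: with high-resolution sensors an object produces a
# whole cluster of detections per scan.  A simple extended-object summary is
# the empirical centroid and scatter (extent) of those detections, updated
# here with a forgetting factor across scans.
import numpy as np

class ExtendedObjectEstimate:
    def __init__(self, dim=2, forgetting=0.9):
        self.centroid = np.zeros(dim)
        self.extent = np.eye(dim)        # ellipsoidal extent estimate
        self.forgetting = forgetting
        self.initialized = False

    def update(self, detections):
        """detections: (m, dim) array of measurements gated to this object."""
        z_bar = detections.mean(axis=0)
        D = detections - z_bar
        Z = D.T @ D / max(len(detections) - 1, 1)   # scatter of the cluster
        if not self.initialized:
            self.centroid, self.extent, self.initialized = z_bar, Z, True
        else:
            a = self.forgetting
            self.centroid = a * self.centroid + (1 - a) * z_bar
            self.extent = a * self.extent + (1 - a) * Z
        return self.centroid, self.extent

# Example: five lidar detections from one car in a single scan.
obj = ExtendedObjectEstimate()
scan = np.array([[10.1, 2.0], [10.4, 2.1], [10.8, 1.9], [11.2, 2.2], [11.5, 2.0]])
c, X = obj.update(scan)
</pre>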

'''Organizers:''' [mailto:karl.granstrom@chalmers.se Karl Granström], [mailto:stephan.reuter@uni-ulm.de Stephan Reuter], [mailto:marcus.baum@cs.uni-goettingen.de Marcus Baum]
</div>
|-
|}
| style="border:1px solid transparent;" |<br />
|-

<!-- MFI 2016 Accepted Special Sessions -->
{| id="mp-upper" style="width: 100%; margin:4px 0 0 0; background:none; border-spacing: 0px;"
<div id="ss10"></div>
<!-- SS10 Neurorobotics - a promising perspective on synergies between neuroscience and robotics -->
| class="MainPageBG" style="width:100%; border:1px solid #f2ea7e; background:#ffffe8; vertical-align:top; color:#000;" |
{| id="mp-left" style="width:100%; vertical-align:top; background:#ffffe8;"
| style="padding:2px;" | <h2 id="mp-tfa-h2" style="margin:3px; background:#fff7bd; font-family:inherit; font-size:120%; font-weight:bold; border:1px solid #f2ea7e; text-align:left; color:#000; padding:0.2em 0.4em;">SS10 Neurorobotics - a promising perspective on synergies between neuroscience and robotics</h2>
|-
| style="color:#000;" | <div id="mp-tfa" style="padding:2px 5px">
'''Description:''' Neurorobotics is a research field that supports both the understanding of basic functionalities of the brain and the use of its basic principles for the sensorimotor control of robots and other artifacts. Neuroscience focuses on the basic brain mechanisms that support intelligent behavior in biological systems, while neurorobotics attempts to apply these mechanisms to build adaptive controllers for bio-inspired machines. Spiking neurons are used to implement basic sensorimotor controls capable of learning basic functionalities, exploiting synaptic variety with bio-inspired, fine-grained spiking neural computing techniques. Neurorobotics in the context of robot control has been studied for decades. By taking advantage of observed neuroscientific data and knowledge, spiking neural controls for robotic systems enable robustness, adaptivity, and sensor data fusion, as well as some features of intelligent behavior. On the other hand, recent developments in robotics and machine learning allow robots to be used as experimental platforms in neuroscience for testing artificial brain models. For several decades, nervous system functionalities based on spiking neural networks have been investigated both to understand biological systems and to contribute to future technical applications in artifacts. Recently, a number of projects such as the US BRAIN Initiative and the European Human Brain Project have taken up the challenge by combining efforts from the fields of neuroscience and computer science to enable large-scale modeling and simulation of biological neural networks with millions of spiking neurons. Special hardware and adequate software have been made available to address real-time experiments related to robot control, vision mimicking the retina, haptics, and their coupling with motoric neural control structures. This special session addresses advances in neuroscientific models for cognition and new perspectives in control for robotic applications based on both biologically inspired and artificial spiking neural networks. The final goal is to bring together researchers from both theoretical and experimental robotics interested in cybernetics, neurorobotics, and sensor-actor fusion processes.

'''Organizers:''' [mailto:ruediger.dillmann@kit.edu Rüdiger Dillmann]
</div>
|-
|}
| style="border:1px solid transparent;" |<br />
|-

{{Organisation}}
__NOTOC____NOEDITSECTION__