* [[Special_Sessions#ss2| SS2 Kalman Filters in Nonlinear State Estimation]]
* [[Special_Sessions#ss3| SS3 Data Fusion Methods for Indoor Localization of People and Objects]]
* [[Special_Sessions#ss4| SS4 Multimodal Image Processing and Fusion]]
* [[Special_Sessions#ss5| SS5 Homotopy Methods in State Estimation]]
* [[Special_Sessions#ss6| SS6 Data Fusion in Sensor-based Sorting]]
* [[Special_Sessions#ss7| SS7 Multi-Robot Systems and Mobile Sensor Networks]]
* [[Special_Sessions#ss8| SS8 Multisensor Fusion Methods for Radiation Source Localization]]
* [[Special_Sessions#ss9| SS9 Multiple (Extended) Object Tracking]]
* [[Special_Sessions#ss10| SS10 Neurorobotics - a proposing perspective on synergies between neuroscience and robotics]]
</div>
|}
| class="MainPageBG" style="width:100%; border:1px solid #bdd6c6; background:#e7f7e7; vertical-align:top; color:#000;" | | | class="MainPageBG" style="width:100%; border:1px solid #bdd6c6; background:#e7f7e7; vertical-align:top; color:#000;" | | ||
{| id="mp-left" style="width:100%; vertical-align:top; background:#e7f7e76;" | {| id="mp-left" style="width:100%; vertical-align:top; background:#e7f7e76;" | ||
− | | style="padding:2px;" | <h2 id="mp-tfa-h2" style="margin:3px; background:#d6efd6; font-family:inherit; font-size:120%; font-weight:bold; border:1px solid #bdd6c6; text-align:left; color:#000; padding:0.2em 0.4em;">SS4 Multimodal Image Processing and Fusion </h2> | + | | style="padding:2px;" | <h2 id="mp-tfa-h2" style="margin:3px; background:#d6efd6; font-family:inherit; font-size:120%; font-weight:bold; border:1px solid #bdd6c6; text-align:left; color:#000; padding:0.2em 0.4em;">SS4 Multimodal Image Processing and Fusion</h2> |
|- | |- | ||
| style="color:#000;" | <div id="mp-tfa" style="padding:2px 5px"> | | style="color:#000;" | <div id="mp-tfa" style="padding:2px 5px"> | ||
Line 132: | Line 137: | ||
| class="MainPageBG" style="width:100%; border:1px solid #f2ea7e; background:#ffffe8; vertical-align:top; color:#000;" | | | class="MainPageBG" style="width:100%; border:1px solid #f2ea7e; background:#ffffe8; vertical-align:top; color:#000;" | | ||
{| id="mp-left" style="width:100%; vertical-align:top; background:#ffffe8;" | {| id="mp-left" style="width:100%; vertical-align:top; background:#ffffe8;" | ||
− | | style="padding:2px;" | <h2 id="mp-tfa-h2" style="margin:3px; background:#fff7bd; font-family:inherit; font-size:120%; font-weight:bold; border:1px solid #f2ea7e; text-align:left; color:#000; padding:0.2em 0.4em;">SS5 Homotopy Methods in State Estimation </h2> | + | | style="padding:2px;" | <h2 id="mp-tfa-h2" style="margin:3px; background:#fff7bd; font-family:inherit; font-size:120%; font-weight:bold; border:1px solid #f2ea7e; text-align:left; color:#000; padding:0.2em 0.4em;">SS5 Homotopy Methods in State Estimation</h2> |
|- | |- | ||
| style="color:#000;" | <div id="mp-tfa" style="padding:2px 5px"> | | style="color:#000;" | <div id="mp-tfa" style="padding:2px 5px"> | ||
Line 145: | Line 150: | ||
| style="border:1px solid transparent;" |<br /> | | style="border:1px solid transparent;" |<br /> | ||
|- | |- | ||
<!-- MFI 2016 Accepted Special Sessions -->
{| id="mp-upper" style="width: 100%; margin:4px 0 0 0; background:none; border-spacing: 0px;"
<div id="ss6"></div>
<!-- SS6 Data Fusion in Sensor-based Sorting -->
| class="MainPageBG" style="width:100%; border:1px solid #d6bdde; background:#f7eff7; vertical-align:top; color:#000;" |
{| id="mp-left" style="width:100%; vertical-align:top; background:#f7eff7;"
| style="padding:2px;" | <h2 id="mp-tfa-h2" style="margin:3px; background:#e7deef; font-family:inherit; font-size:120%; font-weight:bold; border:1px solid #d6bdde; text-align:left; color:#000; padding:0.2em 0.4em;">SS6 Data Fusion in Sensor-based Sorting</h2>
|-
| style="color:#000;" | <div id="mp-tfa" style="padding:2px 5px">
'''Description:''' Sensor-based sorting is an established technology for sorting various products according to quality criteria. Fields of application include food processing, recycling, and industrial mineral processing. The choice of a sensor suitable for material characterization typically depends on the product under inspection as well as on the sorting task itself. In many cases, however, increased sorting performance is achieved by combining information retrieved from multiple different sensors, for example line-scan and area-scan cameras, near-infrared cameras, X-ray sensors, 3D sensors, or hyperspectral cameras. Hence, the sensor data needs to be fused, which increases performance by putting information into a temporal and/or spatial context. This applies to systems comprising several sensors of the same kind as well as to heterogeneous combinations. Additionally, sensor-based sorting systems are typically restricted in the time available to derive a sorting decision, so real-time capable information fusion methods are required.
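A toy sketch may make the temporal-context aspect concrete; it is not material from the session, and the sensor names, data layout, and tolerance value are illustrative assumptions:

<pre>
# Hypothetical sketch: pairing per-object feature vectors from two sensor
# streams of a sorting line by timestamp before a joint sorting decision.
import numpy as np

def fuse_detections(rgb_events, nir_events, max_dt=0.005):
    """Pair RGB and NIR events whose timestamps differ by at most max_dt
    seconds and concatenate their feature vectors.

    Both inputs are lists of (timestamp, feature_vector) tuples sorted by
    timestamp; a single forward pass keeps the fusion real-time capable.
    """
    fused, j = [], 0
    for t_rgb, f_rgb in rgb_events:
        # advance the NIR pointer past events older than the tolerance window
        while j < len(nir_events) and nir_events[j][0] < t_rgb - max_dt:
            j += 1
        if j < len(nir_events) and abs(nir_events[j][0] - t_rgb) <= max_dt:
            fused.append(np.concatenate([f_rgb, nir_events[j][1]]))
    return fused
</pre>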

'''Organizers:''' [mailto:georg.maier@iosb.fraunhofer.de Georg Maier], [mailto:Robin.Gruna@iosb.fraunhofer.de Robin Gruna]
</div>

|-
|}
| style="border:1px solid transparent;" |<br />
|-

<!-- MFI 2016 Accepted Special Sessions -->
{| id="mp-upper" style="width: 100%; margin:4px 0 0 0; background:none; border-spacing: 0px;"
<div id="ss7"></div>
<!-- SS7 Multi-Robot Systems and Mobile Sensor Networks -->
| class="MainPageBG" style="width:100%; border:1px solid #f36766; background:#f9d6c9; vertical-align:top; color:#000;" |
{| id="mp-left" style="width:100%; vertical-align:top; background:#f9d6c9;"
| style="padding:2px;" | <h2 id="mp-tfa-h2" style="margin:3px; background:#f5baa3; font-family:inherit; font-size:120%; font-weight:bold; border:1px solid #f36766; text-align:left; color:#000; padding:0.2em 0.4em;">SS7 Multi-Robot Systems and Mobile Sensor Networks</h2>
|-
| style="color:#000;" | <div id="mp-tfa" style="padding:2px 5px">
'''Description:''' The objective of the special session is to provide an international forum for the discussion of recent developments and advances in the field of multi-robot systems and mobile sensor networks. In-depth discussions of relevant theories and applications related to multi-robot systems and mobile sensor networks are expected, including the presentation of results of applications to real-world land, sea, underwater, aerial, and space multi-vehicle systems, as well as strong theoretical contributions. Additionally, the special session welcomes papers that explore new ways of using visual sensors to solve problems in robotics.

'''Organizers:''' [mailto:Joachim.Horn@hsu-hh.de Joachim Horn], [mailto:hla@unr.edu Hung M. La], [mailto:gronemem@hsu-hh.de Marcus Gronemeyer], [mailto:adang@hsu-hh.de Anh Duc Dang]
</div>

|-
|}
| style="border:1px solid transparent;" |<br />
|-

<!-- MFI 2016 Accepted Special Sessions -->
{| id="mp-upper" style="width: 100%; margin:4px 0 0 0; background:none; border-spacing: 0px;"
<div id="ss8"></div>
<!-- SS8 Multisensor Fusion Methods for Radiation Source Localization -->
| class="MainPageBG" style="width:100%; border:1px solid #a3babf; background:#f5fdff; vertical-align:top; color:#000;" |
{| id="mp-left" style="width:100%; vertical-align:top; background:#f5fdff;"
| style="padding:2px;" | <h2 id="mp-tfa-h2" style="margin:3px; background:#ceecf2; font-family:inherit; font-size:120%; font-weight:bold; border:1px solid #a3babf; text-align:left; color:#000; padding:0.2em 0.4em;">SS8 Multisensor Fusion Methods for Radiation Source Localization</h2>
|-
| style="color:#000;" | <div id="mp-tfa" style="padding:2px 5px">
'''Description:''' Locating a dangerous source of penetrating (i.e., gamma and neutron) radiation in an urban environment is a critical mission for nuclear counterterrorism and emergency response. Traditional methods for localizing a radiation source have primarily relied on individual, non-networked radiation sensors whose responses are used by loosely coordinated operators to collaboratively locate the source. Recently, significant progress has been made in developing rigorous methods for simultaneously analyzing the response of a network of radiation sensors. This session will present recent work on the analysis of radiation sensor networks to optimize the resources and time required to locate a dangerous radiation source in an urban environment.
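As a rough illustration of what joint analysis of a detector network can look like (a sketch under assumed values, not an algorithm from the session), the following Python fragment grid-searches the maximum-likelihood position of a point source from Poisson-distributed counts under an inverse-square intensity model; the detector positions, counts, source strength, and background rate are all invented:

<pre>
# Hypothetical sketch: maximum-likelihood localization of a point source
# from a network of counting detectors, assuming Poisson statistics and an
# inverse-square intensity model with known source strength.
import numpy as np
from scipy.stats import poisson

det_pos = np.array([[0.0, 0.0], [50.0, 0.0], [0.0, 50.0], [50.0, 50.0]])  # m
counts = np.array([120, 30, 25, 12])   # counts observed in one interval
strength, background = 1.0e5, 10.0     # assumed source strength, background

def log_lik(src):
    # expected counts at each detector: inverse-square falloff plus background
    d2 = np.sum((det_pos - src) ** 2, axis=1)
    lam = strength / np.maximum(d2, 1.0) + background
    return poisson.logpmf(counts, lam).sum()

# brute-force grid search over candidate source positions
xs = ys = np.linspace(-10.0, 60.0, 71)
best = max(((x, y) for x in xs for y in ys), key=lambda p: log_lik(np.array(p)))
print("ML source estimate:", best)
</pre>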

'''Organizers:''' [mailto:john_mattingly@ncsu.edu John Mattingly]
</div>

|-
|}
| style="border:1px solid transparent;" |<br />
|-

<!-- MFI 2016 Accepted Special Sessions -->
{| id="mp-upper" style="width: 100%; margin:4px 0 0 0; background:none; border-spacing: 0px;"
<div id="ss9"></div>
<!-- SS9 Multiple (Extended) Object Tracking -->
| class="MainPageBG" style="width:100%; border:1px solid #bdd6c6; background:#e7f7e7; vertical-align:top; color:#000;" |
{| id="mp-left" style="width:100%; vertical-align:top; background:#e7f7e7;"
| style="padding:2px;" | <h2 id="mp-tfa-h2" style="margin:3px; background:#d6efd6; font-family:inherit; font-size:120%; font-weight:bold; border:1px solid #bdd6c6; text-align:left; color:#000; padding:0.2em 0.4em;">SS9 Multiple (Extended) Object Tracking</h2>
|-
| style="color:#000;" | <div id="mp-tfa" style="padding:2px 5px">
'''Description:''' Autonomous driver safety functions are standard in many modern cars, and semi-automated systems (e.g., traffic jam assist) are becoming more and more common. Construction of a driverless vehicle requires solutions to many different problems, among them multiple object tracking. The multiple object tracking problem is defined as keeping track of an unknown number of moving objects; historically, it has focused on so-called point objects, which give rise to at most one detection per time step. However, modern sensors have increasingly high resolution, so it is common to see multiple detections per object, for example when automotive radar or lidar sensors are used. To be able to use point object algorithms with these sensors, heuristic clustering algorithms are applied to the raw measurements to obtain object hypotheses. In challenging scenarios, the hard decisions of the clustering algorithms degrade the performance of the tracking algorithm due to the associated loss of information.
Consequently, so-called extended object tracking algorithms, which are capable of handling several measurements per object, are required. This special session addresses recent results in the area of multiple object tracking for both point objects and especially extended objects.
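To make the distinction concrete, here is a minimal sketch (with invented numbers, not code from the session) of the core idea behind random-matrix style extended object models: several detections from one object are treated as draws from a Gaussian whose mean is the object centre and whose spread encodes the spatial extent.

<pre>
# Illustrative sketch: estimating centre and extent of one extended object
# from a single frame of multiple noisy detections.
import numpy as np

rng = np.random.default_rng(0)
true_centre = np.array([10.0, 5.0])
true_extent = np.diag([4.0, 1.0])           # elongated object
detections = rng.multivariate_normal(true_centre, true_extent, size=25)

centre_hat = detections.mean(axis=0)        # kinematic (centre) estimate
extent_hat = np.cov(detections, rowvar=False)  # extent estimate
print("centre:", centre_hat)
print("extent eigenvalues:", np.linalg.eigvalsh(extent_hat))
</pre>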

'''Organizers:''' [mailto:karl.granstrom@chalmers.se Karl Granström], [mailto:stephan.reuter@uni-ulm.de Stephan Reuter], [mailto:marcus.baum@cs.uni-goettingen.de Marcus Baum]
</div>

|-
|}
| style="border:1px solid transparent;" |<br />
|-

+ | |||
+ | <!-- MFI 2016 Accepted Special Sessions --> | ||
+ | {| id="mp-upper" style="width: 100%; margin:4px 0 0 0; background:none; border-spacing: 0px;" | ||
+ | <div id="ss10"></div> | ||
+ | <!-- SS10 Neurorobotics - a proposing perspective on synergies between neuroscience and robotics --> | ||
+ | | class="MainPageBG" style="width:100%; border:1px solid #f2ea7e; background:#ffffe8; vertical-align:top; color:#000;" | | ||
+ | {| id="mp-left" style="width:100%; vertical-align:top; background:#ffffe8;" | ||
+ | | style="padding:2px;" | <h2 id="mp-tfa-h2" style="margin:3px; background:#fff7bd; font-family:inherit; font-size:120%; font-weight:bold; border:1px solid #f2ea7e; text-align:left; color:#000; padding:0.2em 0.4em;">SS10 Neurorobotics - a proposing perspective on synergies between neuroscience and robotics</h2> | ||
+ | |- | ||
+ | | style="color:#000;" | <div id="mp-tfa" style="padding:2px 5px"> | ||
'''Description:''' Neurorobotics is a research field that allows both understanding basic functionalities of the brain and using its underlying principles for the sensorimotor control of robots and other artifacts. Neuroscience focuses on the basic brain mechanisms supporting the behavior of intelligent biological systems, while neurorobotics attempts to apply these mechanisms to building adaptive controllers for bio-inspired machines. Spiking neurons are used to implement basic sensorimotor controls that can learn elementary functionalities, exploiting synaptic variety with bio-inspired, fine-grained spiking neural computing techniques. Neurorobotics in the context of robot control has been studied for decades. Drawing on observed neuroscientific data and knowledge, spiking neural controllers for robotic systems enable robustness, adaptivity, and sensor data fusion, as well as some features of intelligent behaviour. Conversely, recent developments in robotics and machine learning allow robots to be used in neuroscience research as experimental platforms for testing artificial brain models.

For several decades, nervous system functionalities based on spiking neural networks have been investigated both to understand biological systems and to contribute to future technical applications in artifacts. Recently, a number of projects such as the US BRAIN Initiative and the European Human Brain Project have taken up this challenge by combining efforts from neuroscience and computer science to enable large-scale modeling and simulation of biological neural networks with millions of spiking neurons. Special hardware and suitable software have been made available for real-time experiments on robot control, vision mimicking the retina, and haptics coupled with motor neural control structures. This special session addresses advances in neuroscientific models of cognition and new perspectives on control for robotic applications based on both biologically inspired and artificial spiking neural networks. The final goal is to bring together researchers from theory and experimental robotics interested in cybernetics, neurorobotics, and sensor-actor fusion processes.
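Since the session centres on spiking neural computation, a minimal leaky integrate-and-fire neuron may serve as a reference point; this is a generic textbook sketch with arbitrary parameters, not a model from the session:

<pre>
# Minimal leaky integrate-and-fire (LIF) neuron with a step input; all
# parameters (time step, time constant, voltages, drive) are arbitrary.
import numpy as np

dt, tau = 1e-3, 20e-3                     # time step and membrane constant, s
v_rest, v_th, v_reset = -65.0, -50.0, -70.0   # resting/threshold/reset, mV
v, spikes = v_rest, []
input_drive = np.r_[np.zeros(100), 20.0 * np.ones(400)]  # step drive, mV

for step, drive in enumerate(input_drive):
    # leaky integration of the membrane potential towards rest plus input
    v += dt / tau * (v_rest - v + drive)
    if v >= v_th:                         # threshold crossing emits a spike
        spikes.append(step * dt)
        v = v_reset                       # reset after the spike
print(len(spikes), "spikes")
</pre>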

'''Organizers:''' [mailto:ruediger.dillmann@kit.edu Rüdiger Dillmann]
</div>

|-
|}
| style="border:1px solid transparent;" |<br />
|-

{{Organisation}}
__NOTOC____NOEDITSECTION__