Thermal imaging, or thermography, consists of measuring and imaging the thermal radiation emitted by every object above absolute zero. As this radiation is temperature-dependent, the recorded infrared images can be converted into temperature maps, or thermograms, allowing valuable information to be retrieved about the object under investigation. Thermal imaging has been known since the middle of the 20th century, and recent technological advances in infrared imaging devices, together with the development of new procedures based on transient thermal emission measurements, have revolutionized the field. Nowadays, thermography is a method whose advantages are undisputed in engineering. It is routinely used for the non-destructive testing of materials, to investigate electronic components, or in the photovoltaic industry to detect defects in solar cells.
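As a brief reminder of the underlying physics (standard radiometry, not material specific to this lecture), the total power radiated per unit area by a surface of emissivity $\varepsilon$ at absolute temperature $T$ follows the Stefan-Boltzmann law,

$$ M = \varepsilon \sigma T^{4}, \qquad \sigma \approx 5.67 \times 10^{-8}\ \mathrm{W\,m^{-2}\,K^{-4}}, $$

which is why a calibrated infrared camera can convert the measured radiance of a scene into a temperature map.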
Despite early interest, thermal imaging is currently rarely used for biomedical applications and even less in clinical settings. One reason is probably the disappointing initial results obtained solely with static measurement procedures, in which the sample is investigated in its steady state, using cumbersome, performance-limited first-generation infrared cameras. In addition, the retrieval of quantitative data using dynamic thermal imaging procedures often requires complex mathematical modelling of the sample, which can be demanding in biomedical applications due to the large variability intrinsic to the life sciences.
The goal of this lecture is to a) demonstrate the potential of dynamic thermal imaging for biomedical applications and b) give the reader the necessary background to successfully translate the technology to their specific biomedical applications.
In a first step, the basics of thermal radiation and thermal imaging device technology will be reviewed. Rather than giving an exhaustive description of the technology, we aim to familiarize the reader with key concepts that allow selecting an optimal infrared camera for a specific application. In a subsequent part, we will present the foundations of thermodynamics needed to understand and mathematically model the heat transfer processes occurring inside the sample under investigation and between the sample and its environment; these thermal exchanges determine the sample's surface temperature. Next, we will present in detail and compare the different procedures used in dynamic thermal imaging. In dynamic thermal imaging, the thermal emission of the sample surface is monitored in its transient state, which offers superior capabilities compared to passive thermal imaging. The thermal stimulation can be achieved with different modalities depending on the sample under investigation (lasers or flash lamps to investigate thin coatings, alternating magnetic fields to detect magnetic material, microwaves to heat water, or ultrasound to monitor cracks). Various procedures are possible: stepped and pulsed thermal imaging, pulsed-phase and lock-in thermal imaging. Each approach exhibits specific characteristics in terms of signal-to-noise ratio and measurement duration.
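As a minimal sketch of the pulsed-phase idea (illustrative only; the array layout and frame rate are assumptions, not the lecture's reference implementation), phase and amplitude images can be obtained by applying an FFT to each pixel's cooling transient recorded after a flash excitation:

```python
import numpy as np

def pulsed_phase_images(frames: np.ndarray, fs: float):
    """Pulsed-phase thermography: per-pixel FFT of the cooling transient.

    frames: (n_frames, height, width) image stack recorded after a flash pulse
    fs:     camera frame rate in Hz
    """
    spectrum = np.fft.rfft(frames, axis=0)             # FFT along the time axis
    freqs = np.fft.rfftfreq(frames.shape[0], d=1.0 / fs)
    amplitude = np.abs(spectrum)                        # amplitude image per frequency bin
    phase = np.angle(spectrum)                          # phase images are less sensitive
                                                        # to non-uniform heating
    return freqs, amplitude, phase
```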
As an illustration, we will demonstrate how lock-in thermal imaging can be used to advantage to build extremely sensitive instruments for detecting and characterizing stimuli-responsive nanoparticles (both plasmonic and magnetic) in complex environments such as cell cultures, tissue, or food. In this example, we will present the research instrument in detail, including the choice of the various components, the digital lock-in demodulation implemented, the mathematical modelling of the sample required to extract quantitative information, and the performance of the resulting setup. The goal is to enable the reader to translate the dynamic thermal imaging measurement principles to their own biomedical application.
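To make the digital lock-in demodulation concrete, here is a minimal per-pixel sketch (variable names and the averaging scheme are illustrative assumptions; the actual research instrument differs in detail). Each pixel's time trace is correlated with reference cosine and sine waves at the modulation frequency of the heat source:

```python
import numpy as np

def lockin_demodulate(frames: np.ndarray, fs: float, f_mod: float):
    """Per-pixel digital lock-in demodulation of a thermal image stack.

    frames: (n_frames, height, width) image stack
    fs:     camera frame rate in Hz
    f_mod:  modulation frequency of the heat source in Hz
    Returns amplitude and phase images of the temperature oscillation.
    """
    n = frames.shape[0]
    t = np.arange(n) / fs
    ref_cos = np.cos(2 * np.pi * f_mod * t)[:, None, None]
    ref_sin = np.sin(2 * np.pi * f_mod * t)[:, None, None]
    # In-phase (I) and quadrature (Q) components, averaged over the record
    i_img = 2.0 * np.mean(frames * ref_cos, axis=0)
    q_img = 2.0 * np.mean(frames * ref_sin, axis=0)
    amplitude = np.hypot(i_img, q_img)
    phase = np.arctan2(q_img, i_img)
    return amplitude, phase
```

Averaging over an integer number of modulation periods suppresses signal components at other frequencies, which is what gives the lock-in approach its sensitivity.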
Impedance spectroscopy is a measurement method used in many fields of science and technology, including chemistry, medicine, and materials science. The ability to measure complex impedance over a wide frequency range offers interesting opportunities for separating different physical effects, carrying out accurate measurements, and measuring otherwise inaccessible quantities. In sensors especially, multifunctional measurement can be realized so that more than one quantity is measured at the same time and measurement accuracy and reliability are significantly improved.
In order to realize impedance spectroscopy-based solutions, several aspects should be carefully addressed, such as measurement procedures, modelling, signal processing, and parameter extraction. Developing suitable impedance models and extracting target information by optimization techniques is one of the most widely used approaches for calculating target quantities.
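For illustration only, the sketch below fits a simple Randles-type equivalent circuit (a series resistance plus a charge-transfer resistance in parallel with a double-layer capacitance) to measured complex impedance data by least-squares optimization; the circuit topology and parameter names are assumptions chosen for the example, not a prescription for any particular application:

```python
import numpy as np
from scipy.optimize import least_squares

def randles_impedance(params, omega):
    """Rs in series with (Rct parallel to Cdl): a simple equivalent circuit."""
    rs, rct, cdl = params
    z_parallel = rct / (1.0 + 1j * omega * rct * cdl)
    return rs + z_parallel

def fit_impedance(omega, z_measured, initial_guess=(1.0, 100.0, 1e-6)):
    """Extract circuit parameters from a measured complex impedance spectrum."""
    def residuals(params):
        diff = randles_impedance(params, omega) - z_measured
        return np.concatenate([diff.real, diff.imag])   # real-valued residual vector
    result = least_squares(residuals, initial_guess, bounds=(0, np.inf))
    return result.x   # estimated (Rs, Rct, Cdl)
```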
Different presentations can be provided on specific topics to show the opportunities for applying this method in the fields of battery diagnosis, bioimpedance, sensors, and materials science. The aim is to enable scientists to apply impedance spectroscopy in different fields of instrumentation and measurement in an adequate way.
Optical instrumentation, computer vision, and augmented reality are powerful platform technologies. In this lecture, we will discuss how these technologies can be used for medical applications. I will give an overview of the technologies and of current challenges relevant to medical and surgical settings. Recent advances in image acquisition, computer vision, photonics, and instrumentation present the scientific community with the opportunity to develop new systems that impact healthcare. With an integrated design, the advantages of hardware and software approaches can be combined and their shortcomings compensated. I will present new approaches to fluorescence imaging for surgical applications. We will discuss hardware instrumentation, algorithm development, and system deployment. New developments in multimodal imaging and image registration will also be discussed. For example, the combination of real-time intraoperative optical imaging and CT-based surgical navigation represents a promising approach for clinical decision support. Integration of 3D imaging and augmented reality provides surgeons with an intuitive way to visualize surgical data. In addition to technological development, I will discuss the clinical translation of systems and cross-disciplinary collaboration. Interdisciplinary approaches to solving complex problems in surgically relevant settings will be described.
Industry 4.0 is considered the great revolution of the past few years. New technologies, the Internet of Things, and the possibility of monitoring everything from everywhere have changed both plants and approaches to industrial production. Medicine is considered a slowly changing discipline: a model of the human body is a difficult concept to develop. Nevertheless, we can identify stages at which medicine can be compared to industry. Four major changes revolutionized medicine:
Medicine 1.0: James Watson and Francis Crick described the structure of DNA. This was the beginning of research in the field of molecular and cellular biology.
Medicine 2.0: Sequencing of the human genome. This made it possible to trace the origin of diseases.
Medicine 3.0: The convergence of biology and engineering. The biologist's experience can now be combined with the technology of engineers, enabling new approaches and new forms of analysis.
Medicine 4.0: Digitalization of medicine: IoT devices and techniques, AI to perform analyses, machine learning for diagnoses, brain-computer interfaces, smart wearable sensors.
Medicine 4.0 is definitely a great revolution in patient care. New horizons are possible today. COVID-19 has highlighted problems that have existed for a long time. Delocalization of services means remote monitoring and remote diagnosis without direct contact between the doctor and the patient. Hospitals are freed from routine tests that can be performed by patients at home and reviewed by doctors over the internet. Potentially dangerous conditions can be prevented. During the COVID emergency, everybody could check their own condition and ask for a medical visit (swab) only when really necessary. This is true telemedicine. It is not a WhatsApp chat between an elderly person and a doctor; it is a smart device able to measure objective vital parameters and send them to a health care center. Of course, Medicine 4.0 requires new technologies for smart sensors. These devices need to be very easy to use, fast, reliable, and low cost, and they must be accepted by both patients and doctors.
In this talk we will examine the meaning of telemedicine and e-health. E-health is the key to allowing people to self-monitor their vital signs. Some devices already exist, but a new approach will allow everybody (especially older people with cognitive difficulties) to use these systems in a friendly way. Telemedicine will be the new approach to the concept of the hospital: a virtual hospital, without any physical contact but with an objective measurement of every parameter. A final remote discussion between the doctor and the patient is still required for the patient to feel comfortable, but the doctor will have all the vital signs recorded, allowing a diagnosis based on reliable data.
Another important aspect of Medicine 4.0 is the possibility of using AI both to perform parameter measurement and to manage the monitoring of multiple patients. New image processing based on artificial neural networks allows doctors to carry out better and faster analyses. AI algorithms are also able to manage intensive care rooms with several patients, reducing the number of doctors involved in the overall monitoring of the situation.
Medicine today has advanced technologies and new diagnostic devices at its disposal. Telemedicine offers a new scenario that allows remote diagnosis, control, and treatment of patients at home without physical contact with the doctor. Routine checkups can be outsourced to small care facilities or even to the patient's own home. In Europe the elderly outnumber the young, yet funding for health systems is decreasing; the medicine paradigm must be rethought. E-health can support the delocalization of some medical services: new micro- and nano-electronic circuits, IoT for pervasive and efficient communication, and artificial intelligence to solve problems where models are not easy to apply but a lot of data is available. The ability to combine the power of AI algorithms with data from different sensors and databases can greatly increase the reliability of the final choice of the right therapy. This is the new Medicine 4.0. The digitalization of processes and the improvement of technology allow the human body to be interfaced with computers, while artificial intelligence makes it possible to work with large amounts of data (big data) and to identify unknown correlations between parameters, enabling new diagnoses. Several new perspectives will be discussed in this presentation.
We will investigate both new technologies, showing wearable devices that can be used to monitor patients at home (a topic that became very important during COVID-19), and artificial intelligence applied to medical image processing to perform remote diagnoses (used, for example, to distinguish pneumonia from lung problems due to COVID-19).
After this difficult period, Medicine 4.0 will change several aspects of the interface between doctors and patients, improving the performance of national health services and reducing unnecessary costs. The future will bring a new digital hospital and a comprehensive monitoring system that integrates the interface between patients and hospitals.
• A basic introduction to the sense-plan-act challenges of autonomous vehicles
• Introduction to the most common state-of-the-art sensors used in autonomous driving (radar, camera, lidar, GPS, odometry, vehicle-2-x) in terms of benefits and disadvantages, along with mathematical models of these sensors
Autonomous driving is seen as one of the pivotal technologies that will considerably shape our society, influence future transportation modes and quality of life, and alter the face of mobility as we experience it today. Many benefits are expected, ranging from reduced accidents, optimized traffic, improved comfort, social inclusion, and lower emissions to better road utilization through efficient integration of private and public transport. Autonomous driving is a highly complex sensing and control problem. State-of-the-art vehicles include many different compositions of sensors, including radar, cameras, and lidar. Each sensor provides specific information about the environment at a particular level and has inherent uncertainty and accuracy. Sensors are the key to perceiving the outside world in an autonomous driving system, and their combined performance directly determines the safety of such vehicles. The ability of one isolated sensor to provide accurate, reliable data about its environment is extremely limited, as the environment is usually not very well defined. Beyond the sensors needed for perception, the control system needs some basic measure of its position in space and its surrounding reality. Real-time capable sensor processing techniques used to integrate this information have to manage the propagation of inaccuracies, fuse information to reduce uncertainties and, ultimately, offer levels of confidence in the produced representations that can then be used for safe navigation decisions and actions.
• Overview of different sensor data fusion taxonomies as well as different ways to model the environment (dynamic object tracking vs. occupancy grid) in the Bayesian framework, including uncertainty quantification
• Exploring potential problems of sensor data fusion, e.g. data association, outlier treatment, anomalies, bias, correlation, or out-of-sequence measurements
• Propagation of uncertainties from object recognition to decision making based on selected examples, e.g. real-time vehicle pose estimation from uncertain measurements of different sources (GPS, odometry, lidar), including a discussion of fault detection and localization (sensor drift, breakdown, outliers, etc.)
Sensor fusion overcomes the drawbacks of current sensor technology by combining information from many independent sources of limited accuracy and reliability, making the system less vulnerable to random and systematic failures of a single component. Multi-source information fusion avoids the perceptual limitations and uncertainties of a single sensor and forms a more comprehensive perception and recognition of the environment, including static and dynamic objects. Through sensor fusion we combine readings from different sensors, remove inconsistencies, and merge the information into one coherent structure. This kind of processing is a fundamental feature of all animal and human navigation, where multiple information sources such as vision, hearing, and balance are combined to determine position and plan a path to a destination. In addition, several readings from the same sensor are combined, making the system less sensitive to noise and anomalous observations. In general, multi-sensor data fusion can achieve increased classification accuracy of objects, improved state estimation accuracy, improved robustness (for instance in adverse weather conditions), increased availability, and an enlarged field of view. Emerging applications such as autonomous driving systems, which are in direct contact and interaction with the real world, require reliable and accurate information about their environment in real time.
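As a toy illustration of uncertainty-weighted fusion (a minimal sketch under simple Gaussian assumptions; the sensor values below are invented), two independent measurements of the same quantity can be fused by inverse-variance weighting, which is the static special case of the Kalman update:

```python
def fuse_gaussian(mean_a: float, var_a: float, mean_b: float, var_b: float):
    """Fuse two independent Gaussian estimates of the same quantity.

    Inverse-variance weighting: the more certain sensor gets the larger weight,
    and the fused variance is never larger than either input variance.
    """
    fused_var = 1.0 / (1.0 / var_a + 1.0 / var_b)
    fused_mean = fused_var * (mean_a / var_a + mean_b / var_b)
    return fused_mean, fused_var

# Example: radar reports 20.4 m (variance 0.25), lidar reports 20.1 m (variance 0.04);
# the fused estimate lies close to the lidar value with reduced uncertainty.
mean, var = fuse_gaussian(20.4, 0.25, 20.1, 0.04)
```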
The electromagnetic properties (permittivity and permeability) of a material determine how the material interacts with an electromagnetic field. The knowledge of these properties and of their frequency and temperature dependence is of great importance in various areas of science and engineering, in both basic and applied research. These properties have always been important quantities for electrical engineers and physicists involved in the design and application of circuit components. Over the past several decades, knowledge of the electromagnetic properties has also become important to scientists and engineers involved in the design of stealth vehicles, applications most often associated with the defense industry.

Besides these traditional applications, knowledge of the electromagnetic properties has become increasingly important to agricultural engineers, biological engineers, and food scientists. The most obvious application is in microwave and RF heating of food products, where the electromagnetic properties determine how long a food item needs to be exposed to the RF or microwave energy for proper cooking. For prepackaged food items, the electromagnetic properties of the packaging materials are also important, since the interaction with the packaging material also affects the cooking time. Beyond these obvious applications there are numerous not-so-obvious ones. Electromagnetic properties can often be related to a physical parameter of interest: a change in the molecular structure or composition of a material results in a change in its electromagnetic properties. It has been demonstrated that material properties such as moisture content, fruit ripeness, bacterial content, mechanical stress, tissue health, and other seemingly unrelated parameters are related to the dielectric properties, or permittivity, of the material. Many key parameters of colloids, such as structure, consistency, and concentration, are directly related to the electromagnetic properties. Yeast concentration in a fermentation process, bacterial count in milk, and the detection and monitoring of microorganisms are a few examples on which research has been performed. Diseased tissue has different electromagnetic properties than healthy tissue. Accurate measurements of these properties provide scientists and engineers with valuable information that allows them to properly use the material in its intended application or to monitor a process for improved quality control.

Measurement techniques typically involve placing the material in an appropriate sample holder and determining the permittivity from measurements made on the sample holder. The sample holder can be a parallel plate or coaxial capacitor, a resonant cavity, or a transmission line. These structures are used because the relationship between the electromagnetic properties and the measurements is fundamental and well understood. One disadvantage of these types of sample holders is that many materials cannot be easily placed in them; sample preparation is almost always required, which limits their use in real-time monitoring of processes. Another disadvantage is that several of these sample holders are usable only over a narrow frequency range, whereas extracting physical properties from electromagnetic property measurements often requires measurements over a wide frequency range.
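As a concrete example of such a fundamental, well-understood relationship, an ideal parallel-plate sample holder with plate area $A$ and separation $d$, measured at angular frequency $\omega$ as a capacitance $C$ with parallel loss resistance $R_p$, gives (neglecting fringing fields and electrode effects)

$$ \varepsilon_r' = \frac{C\,d}{\varepsilon_0 A}, \qquad \tan\delta = \frac{1}{\omega R_p C}. $$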
Techniques for which this relationship between electromagnetic properties and measurements is not as straightforward have also been employed. One of these is the open-ended coaxial-line probe. This technique has attracted much attention because of its applicability to nondestructive testing over a relatively broad frequency range. It can be used to measure a wide variety of materials, including liquids, solids, and semisolids. These attributes make it a very attractive technique for measuring biological, agricultural, and food materials. In its simplest form, it consists of a coaxial cable with no connector attached to one end; this end is inserted into the material being measured. All of these measurement techniques will be reviewed. Together they cover the frequency range from DC to 1 THz.
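To illustrate why the probe relationship is less straightforward, one widely used low-frequency approximation models the open aperture as a material-loaded fringing capacitance; with $\Gamma$ the measured complex reflection coefficient, $Z_0$ the line impedance, and $C_0$, $C_f$ capacitances determined by calibration, the permittivity follows approximately as

$$ \varepsilon_r \approx \frac{1 - \Gamma}{\mathrm{j}\,\omega Z_0 C_0 (1 + \Gamma)} - \frac{C_f}{C_0}, $$

which already hints at the calibration burden and the need for higher-order models at higher frequencies.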
The permittivity (dielectric properties) of a material is one of the factors that determine how the material interacts with an electromagnetic field. Knowledge of the dielectric properties of materials and of their frequency and temperature dependence is of great importance in various areas of science and engineering, in both basic and applied research. Permittivity has always been an important quantity for electrical engineers and physicists involved in the design and application of circuit components. Over the past several decades, it has also become an important property to scientists and engineers involved in the design of stealth vehicles, applications most often associated with the defense industry. For the typical electrical engineer, permittivity is a number that is needed to solve Maxwell's equations. One purpose of this presentation is to explain why a material has a particular permittivity; the short answer is that it does so because of its molecular structure. Another is to show how permittivity can be related to other physical material properties.

Knowledge of permittivity has become increasingly important to agricultural engineers, biological engineers, and food scientists. The most obvious application is in microwave and RF heating of food products, where the dielectric properties determine how long a food item needs to be exposed to the RF or microwave energy for proper cooking. For prepackaged food items, the dielectric properties of the packaging materials are also important, since the interaction with the packaging material also affects the cooking time. Beyond these obvious applications there are numerous not-so-obvious ones. Dielectric properties can often be related to a physical parameter of interest: a change in the molecular structure or composition of a material results in a change in its permittivity. It has been demonstrated that material properties such as moisture content, fruit ripeness, bacterial content, mechanical stress, tissue health, and other seemingly unrelated parameters are related to the dielectric properties, or permittivity, of the material. Many key parameters of colloids, such as structure, consistency, and concentration, are directly related to the dielectric properties. Yeast concentration in a fermentation process, bacterial count in milk, and the detection and monitoring of microorganisms are a few examples on which research has been performed. Diseased tissue has a different permittivity from healthy tissue. Accurate measurements of these properties provide scientists and engineers with valuable information that allows them to properly use the material in its intended application or to monitor a process for improved quality control. Measurement techniques will be reviewed that cover the frequency range from DC to 1 THz.
Metrology is at the very basis of acquiring scientific knowledge. In today's interdependent world, ensuring uniform metrology within and across national boundaries is an important enabling factor of both national and international trade. In electric power systems, measurements of electrical and non-electrical quantities are necessary for control, protection, and safe and reliable operation. Another significant application of metrology is in electric energy trade, i.e. in revenue metering for both industrial and residential customers, but also between countries. The impact of distributed power generation, renewable energy resources, and the deregulation of electrical power utilities introduced in many countries will be discussed. An attempt will be made to address the question of what Smart Grids really are, and how they relate to smart metering, synchrophasor measurements, energy storage, and other power system technologies. The role of National Measurement Institutes will be highlighted. New instrumentation and measurement methods for both highest-accuracy and industrial applications of AC electrical power and energy, including high-voltage and high-current calibrations and applications, will be addressed.
Rogowski coils have been used for a long time for monitoring or measurements of high, impulse, and transient currents. Rogowski coils are used for monitoring and control, protective relaying, power distribution switches, electric arc furnaces, electromagnetic launchers, core testing of large rotating electrical machines, partial-discharge measurements in high-voltage cables, power electronics, resistance welding in the automotive industry, and plasma physics. Since their nonmagnetic cores do not saturate, they can operate over wide current ranges with inherent linearity. The applications entail low and high accuracy coils, measuring currents from a few amperes to tens of MA, at frequencies from a fraction of hertz to hundreds of MHz. The increased interest in Rogowski coils over the last decades has led to significant improvements in their design and performance. Their development has included innovative designs, new materials, machining techniques, and printed circuit board structures. This presentation will cover the principles of operation, design, calibration, standards, and applications of Rogowski coils.
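For reference, the measurement principle can be stated compactly: an ideal Rogowski coil with $n$ turns per unit length, each of cross-sectional area $A$, wound on a nonmagnetic former encircling a conductor carrying current $i(t)$, produces the output voltage

$$ v(t) = -\mu_0\, n A\, \frac{\mathrm{d}i(t)}{\mathrm{d}t}, $$

so the current is recovered by analog or digital integration of $v(t)$; the air core behind this relation is what gives the coil its inherent linearity and wide dynamic range.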
Estimating latency between network nodes in the Internet can play a significant role in improving the performance of many applications and services that use latency to make routing decisions. A popular example is peer-to-peer (P2P) networks, which need to build an overlay between peers in a manner that minimizes the delay of exchanging data among peers. Latency between peers is therefore a critical parameter that directly affects the quality of applications such as video streaming, gaming, file sharing, content distribution, server farms, and massively multiuser virtual environments (MMVE) or massively multiplayer online games (MMOG). However, acquiring latency information requires a considerable number of measurements to be performed at each node in order for that node to keep a record of its latency to all other nodes. Moreover, the measured latency values are frequently subject to change, and the measurements need to be repeated regularly to keep up with network dynamics. This has motivated techniques that alleviate the need for a large number of empirical measurements and instead try to predict the entire network latency matrix from a small set of latency measurements. Coordinate-based approaches are the most popular solutions to this problem. The basic idea behind coordinate-based schemes is to model the latency between each pair of nodes as the virtual distance between those nodes in a virtual coordinate system.
In this talk, we will cover the basics of how to measure latency in a distributed manner without the need for a bottleneck central server. We will start with an introduction and background to the field, then briefly explain measurement approaches such as the Network Time Protocol, the Global Positioning System, and the IEEE 1588 Standard, before moving to coordinate-based measurement approaches such as GNP (Global Network Positioning), CAN (Content Addressable Network), Lighthouse, Practical Internet Coordinates (PIC), VIVALDI, and Pcoord. Finally, we also propose a new decentralized coordinate-based solution with higher accuracy, mathematically proven convergence, and a locality-aware design for lower delay.
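To give a flavor of the coordinate-based approach, the sketch below shows one simplified spring-relaxation update in the spirit of VIVALDI (fixed step size, no adaptive error weighting; coordinates and RTT values are illustrative assumptions):

```python
import numpy as np

def vivaldi_update(own_coord: np.ndarray, peer_coord: np.ndarray,
                   measured_rtt: float, step: float = 0.05) -> np.ndarray:
    """One simplified spring-relaxation step of a Vivaldi-style coordinate update.

    The node moves along the line to/from the peer so that the Euclidean
    distance between coordinates better matches the measured latency.
    """
    direction = own_coord - peer_coord
    dist = np.linalg.norm(direction)
    if dist == 0.0:                      # coincident coordinates: pick a random direction
        direction = np.random.randn(*own_coord.shape)
        dist = np.linalg.norm(direction)
    error = measured_rtt - dist          # positive: coordinates too close, push apart
    return own_coord + step * error * (direction / dist)

# Repeatedly applying vivaldi_update with RTT samples to different peers lets each
# node estimate its virtual coordinates without any central server.
```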
The target audience of this tutorial is practitioners, scientists, and engineers who work with networking systems and applications where there is a need to measure and estimate delay among network nodes, possibly a massive number of nodes (thousands, tens of thousands, or even hundreds of thousands of nodes).
A Massively Multiuser Virtual Environment (MMVE) sets out to create an environment in which thousands, tens of thousands, or even millions of users simultaneously interact with each other as in the real world. For example, Massively Multiuser Online Games (MMOGs), now a very profitable sector of the industry and the subject of academic and industrial research, are a special case of MMVEs in which hundreds of thousands of players simultaneously play games with each other. Although originally designed for gaming, MMOGs are now widely used for socializing, business, commerce, scientific experimentation, and many other practical purposes. One could say that MMOGs are the "killer app" that brings MMVEs into the realm of the eSociety. This is evident from the fact that virtual currencies such as the Linden dollar (L$) in Second Life are already being exchanged for real-world money. Similarly, virtual goods and virtual real estate are being bought and sold with real-world money. Massive numbers of users spend their time with fellow players in online games such as EverQuest, Half-Life, World of Warcraft, and Second Life. World of Warcraft, for example, has over twelve million users, with a peak of over 500,000 players online at a given time. There is no doubt that MMOGs and MMVEs have the potential to be the cornerstone of any eSociety platform in the near future because they bring the massiveness, awareness, and interpersonal interaction of real society into the digital realm.
In this talk, we focus on approaches for supporting the massive number of users in such environments, consisting of scalability methods, zoning techniques, and areas of interest management. The focus will be on networking and system support and architectures, as well as research challenges still remaining to be solved.
The heart is a complex organic engine that converts chemical energy into work. Each heartbeat begins with an electrically released pulse of calcium, which triggers force development and cell shortening, at the cost of energy and oxygen, and with the dissipation of heat. My group has developed new instrumentation systems to measure all of these processes simultaneously while subjecting isolated samples of heart tissue to realistic contraction patterns that mimic the pressure-volume-time loops experienced by the heart with each beat. These devices are, in effect, 'dynamometers' for the heart that allow us to measure the performance of the heart and its tissues, much as you might test the performance of your motor vehicle on a 'dyno'.
This demanding undertaking has required us to develop our own actuators, force transducers, heat sensors, and optical measurement systems. Our instruments make use of several different measurement modalities which are integrated in a robotic hardware-based real-time acquisition and control environment and interpreted with the aid of a computational model. In this way, we can now resolve (to within a few nanoWatts) the heat released by living cardiac muscle fibers as they perform work at 37 °C.
Muscle force and length are controlled and measured to microNewton and nanometer precision by a laser interferometer, while the muscle is scanned in the view of an optical microscope equipped with a fluorescent calcium imaging system. Concurrently, the changing muscle geometry is monitored in 4D by a custom-built optical coherence tomograph, and the spacing of muscle-proteins is imaged in real-time by transmission-microscopy and laser diffraction systems. Oxygen consumption is measured using fluorescence-quenching techniques.
Equipped with these unique capabilities, we have probed the mechano-energetics of failing hearts from rats with diabetes. We found that the peak stress and peak mechanical efficiency of tissues from these hearts were normal, despite a prolonged twitch duration. We have thus shown that the compromised mechanical performance of the diabetic heart arises from a reduced period of diastolic filling and does not reflect either diminished mechanical performance or diminished efficiency of its tissues. In another program of research, we have demonstrated that, despite claims to the contrary, dietary supplementation with fish oils has no effect on heart muscle efficiency. Neither of these insights was fully revealed until the development of this instrument.
Optical sensors and techniques are used widely in many areas of instrumentation and measurement. Optical sensors are often, conveniently, 'non-contact', and thus impose negligible disturbance on the parameter undergoing measurement. Valuable information can be represented and recorded in space, time, and optical wavelength. They can provide exceptionally high spatial and/or temporal resolution, high bandwidth, and a large measurement range. Moreover, optical sensors can be inexpensive and relatively simple to use.
At the Bioinstrumentation Lab at the Auckland Bioengineering Institute, we are particularly interested in developing techniques for measuring parameters from both inside and outside the body. Such measurements help us to quantify physiological performance, detect and treat disease, and develop novel medical and scientific instruments. In making such measurements we often draw upon and develop our own optical sensing and measurement methods, from interferometry, fluorimetry, and diffuse light imaging to area-based and volume-based optical imaging and processing techniques.
In this talk, I will overview some of the interesting new optically based methods that we have recently developed for use in bioengineering applications. These include 1) diffuse optical imaging methods for monitoring the depth of a drug as it is rapidly injected through the skin, without requiring a needle; 2) stretchy, soft optical sensors for measuring strains of up to several hundred percent during movement; 3) multi-camera image registration techniques for measuring the 3D shape and strain of soft tissues; 4) optical coherence tomography techniques for detecting the 3D shape of deforming muscle tissues; and 5) polarization-sensitive imaging techniques for classifying the optical and mechanical properties of biological membranes.
While these sensors and techniques have been motivated by applications in bioengineering, the underlying principles have broad applicability to other areas of instrumentation and measurement.
Nowadays, scientists, researchers, and practicing engineers face an unprecedented explosion in the richness and complexity of the problems to be solved. Besides spatial and temporal complexity, common tasks usually involve non-negligible uncertainty or even lack of information, strict requirements concerning the timing, continuity, robustness, and reliability of outputs, and further expectations such as adaptivity and the capability to handle atypical and crisis situations efficiently.
Model-based computing plays an important role in achieving these goals because it integrates the available knowledge about the problem at hand, in a proper form, into the procedure to be executed, where it acts as an active component during operation. Unfortunately, classical modeling methods often fail to meet the requirements of robustness, flexibility, adaptivity, learning, and generalization. Even soft computing based models may fail to be effective enough because of their high (exponentially increasing) complexity. To satisfy the time, resource, and data constraints associated with a given task, hybrid methods and new approaches are needed for the modeling, evaluation, and interpretation of the problems and results. A possible solution to the above challenges is offered by the combination of soft computing techniques with novel approaches to anytime and situational modeling and operation.
Anytime processing is very flexible with respect to the available input information, computational power, and time. It is able to generalize from previous input information and to provide a short response time if the required reaction time is significantly shortened due to failures or an alarm appearing in the modeled system, or if one has to make decisions before sufficient information arrives or before processing can be completed. The aim of the technique is to ensure continuous operation under (dynamically) changing circumstances and to provide optimal overall performance for the whole system. In case of a temporary shortage of computational power and/or loss of some data, operation is continued while maintaining the overall performance "at a lower price", i.e., information processing based on algorithms and/or models of lower complexity provides outputs of acceptable quality so that the complete system can continue to operate. The accuracy of the processing may temporarily be lower, but it may still be enough to produce data for qualitative evaluations and to support further decisions.
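As a small, assumption-laden toy example of the anytime principle (not taken from the lecture), an iterative computation can be wrapped so that it always returns its best result so far when the time budget expires:

```python
import time

def anytime_mean(stream, time_budget_s: float):
    """Anytime estimate of the mean of a data stream.

    The computation can be cut short at any point: whatever has been
    accumulated so far is returned, and the accuracy improves
    monotonically as more samples are processed.
    """
    deadline = time.monotonic() + time_budget_s
    total, count = 0.0, 0
    for sample in stream:
        total += sample
        count += 1
        if time.monotonic() >= deadline:    # time budget exhausted: stop early
            break
    return (total / count) if count else None   # best available result so far
```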
Situational modeling has been designed for the modeling and control of complex systems where traditional cybernetic models have not proved sufficient because the characterization of the system is incomplete or ambiguous due to unique, dynamically changing, and unforeseen situations. Typical cases are alarm situations, structural failures, the starting and stopping of plants, etc. The goal of situational modeling is to handle the contradiction arising from the existence of a large number of situations and the limited number of processing strategies by grouping the possible situations into a treatable (finite) number of model classes of operational situations and by assigning certain processing algorithms to the defined processing regimes. This technique, similarly to anytime processing, offers a tradeoff between resource (including time and data) consumption and output quality.
The presentation gives an overview of the basics of anytime and situational approaches. Besides summarizing theoretical results and pointing out open questions that arise (e.g. accuracy measures, data interpretation, transients), the author highlights some possibilities offered by these new techniques by showing successful applications taken from the fields of signal and image processing, control and fault diagnosis of plants, analysis, and expert systems. Some of the discussed topics are: