Industry 4.0 is considered the great revolution of the past few years. New technologies, the Internet of Things, and the possibility to monitor everything from everywhere have changed both plants and the approaches to industrial production. Medicine, in contrast, is considered a slowly changing discipline, since the human body is a difficult system to model. Yet we can identify stages in which the evolution of medicine can be compared to that of industry. Four major changes revolutionized medicine:
Medicine 1.0: James Watson and Francis Crick described the structure of DNA, marking the beginning of research in molecular and cellular biology.
Medicine 2.0: The sequencing of the human genome, which made it possible to trace the origin of diseases.
Medicine 3.0: The convergence of biology and engineering. The biologist's experience can now be combined with the technology of the engineers, enabling new approaches and new forms of analysis.
Medicine 4.0: The digitalization of medicine: IoT devices and techniques, AI-assisted analysis, machine learning for diagnosis, brain-computer interfaces, and smart wearable sensors.
Medicine 4.0 is definitely a great revolution in patient care, and new horizons are possible today. COVID-19 has highlighted problems that have existed for a long time. The relocation of services means remote monitoring and remote diagnosis without direct contact between doctor and patient. Hospitals are freed from routine tests that can be performed by patients at home and reviewed by doctors over the internet, and potentially dangerous conditions can be prevented. During the COVID emergency, everybody could check their own condition and ask for a medical visit (a swab) only when really necessary. This is true telemedicine: not an elderly person trying to chat with a doctor on WhatsApp, but a smart device able to measure objective vital parameters and send them to a health care center. Of course, Medicine 4.0 requires new technologies for smart sensors. These devices need to be very easy to use, fast, reliable, and low cost, and they must be accepted by both patients and doctors.
In this talk we will look at the meaning of telemedicine and e-health. E-health is the key to allowing people to self-monitor their vital signs. Some devices already exist, but a new approach will allow everybody (especially older people with cognitive difficulties) to use these systems in a friendly way. Telemedicine will be a new approach to the concept of the hospital: a virtual hospital, without any physical contact but with an objective measurement of every parameter. A final remote discussion between doctor and patient is still required so that both feel comfortable, but the doctor will have all the vital signs recorded, allowing a diagnosis based on reliable data.
Another important aspect of Medicine 4.0 is the possibility of using AI both to perform parameter measurements and to manage the monitoring of multiple patients. New image processing based on artificial neural networks gives doctors better and faster analyses, and AI algorithms are also able to manage intensive care units with several patients, reducing the number of doctors involved in monitoring the overall situation.
• A basic introduction to the sense-plan-act challenges of autonomous vehicles
• Introduction to the most common state-of-the-art sensors used in autonomous driving (radar, camera, lidar, GPS, odometry, vehicle-2-x) in terms of benefits and disadvantages along with mathematical models of these sensors
Autonomous driving is seen as one of the pivotal technologies that will considerably shape our society, influence future transportation modes and quality of life, and alter the face of mobility as we experience it today. Many benefits are expected, ranging from reduced accidents, optimized traffic, improved comfort, social inclusion, and lower emissions to better road utilization through the efficient integration of private and public transport. Autonomous driving is a highly complex sensing and control problem. State-of-the-art vehicles include many different compositions of sensors, including radar, cameras, and lidar. Each sensor provides specific information about the environment at a different level of detail and has its own inherent uncertainty and accuracy. Sensors are the key to perceiving the outside world in an autonomous driving system, and how well they work together directly determines the safety of such vehicles. The ability of one isolated sensor to provide accurate, reliable data about its environment is extremely limited, as the environment is usually not very well defined. Beyond the sensors needed for perception, the control system needs some basic measure of its position in space and of its surrounding reality. Real-time-capable sensor processing techniques used to integrate this information have to manage the propagation of their inaccuracies, fuse information to reduce the uncertainties and, ultimately, offer levels of confidence in the produced representations that can then be used for safe navigation decisions and actions.
• Overview of different sensor data fusion taxonomies as well as different ways to model the environment (dynamic object tracking vs. occupancy grid) in the Bayesian framework including uncertainty quantification
• Exploring potential problems of sensor data fusion, e.g. data association, outlier treatment, anomalies, bias, correlation, or out-of-sequence measurements
• Propagation of uncertainties from object recognition to decision making based on selected examples, e.g. the real-time vehicle pose estimation based on uncertain measurements of different sources (GPS, odometry, lidar) including the discussion of fault detection and localization (sensor drift, breakdown, outliers etc.)
Sensor fusion overcomes the drawbacks of current sensor technology by combining information from many independent sources of limited accuracy and reliability, making the system less vulnerable to random and systematic failures of any single component. Multi-source information fusion avoids the perceptual limitations and uncertainties of a single sensor and forms a more comprehensive perception and recognition of the environment, including static and dynamic objects. Through sensor fusion we combine readings from different sensors, remove inconsistencies, and merge the information into one coherent structure. This kind of processing is a fundamental feature of all animal and human navigation, where multiple information sources such as vision, hearing, and balance are combined to determine position and plan a path to a destination. In addition, several readings from the same sensor are combined, making the system less sensitive to noise and anomalous observations. In general, multi-sensor data fusion can achieve increased object classification accuracy, improved state estimation accuracy, improved robustness (for instance in adverse weather conditions), increased availability, and an enlarged field of view. Emerging applications such as autonomous driving systems, which are in direct contact with and interact with the real world, require reliable and accurate information about their environment in real time.
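As a minimal, hedged illustration of the gain promised by fusion (not code from the talk; the sensor labels and noise levels are illustrative assumptions), inverse-variance weighting of two independent, noisy range readings already yields an estimate whose variance is lower than that of either sensor alone:

```python
# Minimal sketch: fusing two independent Gaussian measurements of the same
# quantity by inverse-variance weighting. Sensor names and noise levels are
# illustrative assumptions, not values from the talk.
import numpy as np

def fuse(z1, var1, z2, var2):
    """Bayesian fusion of two independent Gaussian measurements."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)            # always <= min(var1, var2)
    return fused, fused_var

rng = np.random.default_rng(0)
true_range = 25.0                           # metres to a leading vehicle (assumed)
radar = true_range + rng.normal(0, 0.5)     # radar reading, sigma = 0.5 m
lidar = true_range + rng.normal(0, 0.2)     # lidar reading, sigma = 0.2 m

est, var = fuse(radar, 0.5**2, lidar, 0.2**2)
print(f"radar={radar:.2f} m, lidar={lidar:.2f} m, fused={est:.2f} m, sigma={var**0.5:.2f} m")
```

A full perception stack would replace this static average with a recursive estimator such as a Kalman filter, which applies the same weighting idea over time to track moving objects and the vehicle's own pose.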
Over the past three decades a wide range of electrostatic sensors have been developed and utilized for the continuous monitoring and measurement of various industrial processes. Electrostatic sensors enjoy simplicity in structure, cost-effectiveness and suitability for a variety of process conditions. They either provide unique solutions to some measurement challenges or offer more cost-effective or complementary options to established sensors such as those based on acoustic, capacitive, electromagnetic or optical principles. The established or potential applications of electrostatic sensors appear wide ranging, but the underlying sensing principle and system characteristics are very similar. This presentation will review the recent advances in electrostatic sensors and associated signal processing algorithms for industrial measurement and monitoring applications. The fundamental sensing principle and characteristics of electrostatic sensors will be introduced. A number of practical applications of electrostatic sensors will be presented. These include pulverized fuel flow metering, linear and rotational speed measurement, condition monitoring of mechanical systems, and advanced flame monitoring. Results from recent experimental and modelling studies as well as industrial trials of electrostatic sensors will be reported.
The electromagnetic properties (permittivity and permeability) of a material determine how the material interacts with an electromagnetic field. The knowledge of these properties and their frequency and temperature dependence is of great importance in various areas of science and engineering in both basic and applied research. These properties have always been important quantities to electrical engineers and physicists involved in the design and application of circuit components. Over the past several decades they have also become important to scientists and engineers involved in the design of stealth vehicles. These applications are most often associated with the defense industry. Besides these traditional applications, the knowledge of the electromagnetic properties has become increasingly important to agricultural engineers, biological engineers and food scientists. The most obvious application of this knowledge is in microwave and RF heating of food products. Here the knowledge of the electromagnetic properties is important in determining how long a food item needs to be exposed to the RF or microwave energy for proper cooking. For prepackaged food items, the knowledge of the electromagnetic properties of the packaging materials is also important. The interaction with the packaging material also determines the cooking time. Besides these obvious applications there are also numerous not-so-obvious applications. Electromagnetic properties can often be related to a physical parameter of interest. A change in the molecular structure or composition of a material results in a change in its electromagnetic properties. It has been demonstrated that material properties such as moisture content, fruit ripeness, bacterial content, mechanical stress, tissue health and other seemingly unrelated parameters are related to the dielectric properties or permittivity of the material. Many key parameters of colloids such as structure, consistency and concentration are directly related to the electromagnetic properties. Yeast concentration in a fermentation process, bacterial count in milk, and the detection and monitoring of microorganisms are a few examples on which research has been performed. Diseased tissue has different electromagnetic properties than healthy tissue.
Accurate measurements of these properties can provide scientists and engineers with valuable information that allows them to properly use the material in its intended application or to monitor a process for improved quality control. Measurement techniques typically involve placing the material in an appropriate sample holder and determining the permittivity from measurements made on the sample holder. The sample holder can be a parallel-plate or coaxial capacitor, a resonant cavity or a transmission line. These structures are used because the relationship between the electromagnetic properties and the measurements is fundamental and well understood. One disadvantage of these types of sample holders is that many materials cannot be easily placed in them. Sample preparation is almost always required. This limits their use in real-time monitoring of processes. Another disadvantage is that several of these sample holders are usable only over a narrow frequency range. Extracting physical properties from electromagnetic property measurements often requires measurements made over a wide frequency range. Techniques for which this relationship, between electromagnetic properties and measurements, is not as straightforward have also been employed. One of these techniques is the open-ended coaxial-line probe. This technique has attracted much attention because of its applicability to nondestructive testing over a relatively broad frequency range. It can be used to measure a wide variety of materials including liquids, solids and semisolids. These attributes make it a very attractive technique for measuring biological, agricultural and food materials. In its simplest form, it consists of a coaxial cable without a connector attached to one end. This end is inserted into the material being measured. All of these measurement techniques will be reviewed. These techniques cover the frequency range from DC to 1 THz.
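As a hedged sketch of the simplest of these sample-holder relationships, the complex relative permittivity can be extracted from the measured capacitance and conductance of an idealized parallel-plate holder (fringing fields neglected); the geometry and readings below are illustrative assumptions, not data from the talk:

```python
# Sketch: complex relative permittivity (eps' - j*eps'') from the measured
# capacitance C and conductance G of a filled, idealized parallel-plate holder.
import math

EPS0 = 8.854e-12  # permittivity of free space, F/m

def complex_permittivity(C, G, area, gap, freq):
    """Return (eps_real, eps_imag) for an ideal parallel-plate sample holder."""
    omega = 2 * math.pi * freq
    eps_real = C * gap / (EPS0 * area)              # C = eps0 * eps' * A / d
    eps_imag = G * gap / (omega * EPS0 * area)      # G = omega * eps0 * eps'' * A / d
    return eps_real, eps_imag

# Illustrative example: 20 mm x 20 mm plates, 1 mm gap, measured at 1 MHz
eps_r, eps_i = complex_permittivity(C=9.0e-12, G=2.0e-7,
                                    area=20e-3 * 20e-3, gap=1e-3, freq=1e6)
print(f"eps' = {eps_r:.2f}, eps'' = {eps_i:.3f}, tan(delta) = {eps_i / eps_r:.4f}")
```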
The permittivity (dielectric properties) of a material is one of the factors that determine how the material interacts with an electromagnetic field. The knowledge of the dielectric properties of materials and their frequency and temperature dependence is of great importance in various areas of science and engineering in both basic and applied research. Permittivity has always been an important quantity to electrical engineers and physicists involved in the design and application of circuit components. Over the past several decades it has also become important to scientists and engineers involved in the design of stealth vehicles. These applications are most often associated with the defense industry. For the typical electrical engineer, permittivity is a number that is needed to solve Maxwell’s equations. One of the purposes of this presentation is to explain why a material has a particular permittivity; the short answer is that a material has a particular permittivity because of its molecular structure. Another purpose is to show how the permittivity can be related to other physical material properties.
The knowledge of permittivity has become increasingly important to agricultural engineers, biological engineers and food scientists. The most obvious application of this knowledge is in microwave and RF heating of food products. Here the knowledge of the dielectric properties is important in determining how long a food item needs to be exposed to the RF or microwave energy for proper cooking. For prepackaged food items, the knowledge of the dielectric properties of the packaging materials is also important. The interaction with the packaging material also determines the cooking time. Besides these obvious applications there are also numerous not-so-obvious applications. Dielectric properties can often be related to a physical parameter of interest. A change in the molecular structure or composition of a material results in a change in its permittivity. It has been demonstrated that material properties such as moisture content, fruit ripeness, bacterial content, mechanical stress, tissue health and other seemingly unrelated parameters are related to the dielectric properties or permittivity of the material. Many key parameters of colloids such as structure, consistency and concentration are directly related to the dielectric properties. Yeast concentration in a fermentation process, bacterial count in milk, and the detection and monitoring of microorganisms are a few examples on which research has been performed. Diseased tissue has a different permittivity from healthy tissue. Accurate measurements of these properties can provide scientists and engineers with valuable information that allows them to properly use the material in its intended application or to monitor a process for improved quality control. The available measurement techniques will be reviewed; they cover the frequency range from DC to 1 THz.
Metrology lies at the very basis of acquiring scientific knowledge. In today’s interdependent world, ensuring uniform metrology within and across national boundaries is a very important enabling factor of both national and international trade. In electric power systems, measurements of electrical and non-electrical quantities are necessary for their control, protection, and safe and reliable operation. Another very significant application of metrology is in electric energy trade, i.e. in revenue metering for both industrial and residential customers, but also between countries. The impact of distributed power generation, renewable energy resources, and the deregulation of electrical power utilities introduced in many countries will be discussed. An attempt will be made to address the question of what Smart Grids really are, and how they relate to smart metering, synchrophasor measurements, energy storage, and other power system technologies. The role of National Measurement Institutes will be highlighted. New instrumentation and measurement methods for both highest-accuracy and industrial applications for AC electrical power and energy, including high-voltage and high-current calibrations and applications, will be addressed.
Rogowski coils have long been used for monitoring and measuring high, impulse, and transient currents. Rogowski coils are used for monitoring and control, protective relaying, power distribution switches, electric arc furnaces, electromagnetic launchers, core testing of large rotating electrical machines, partial-discharge measurements in high-voltage cables, power electronics, resistance welding in the automotive industry, and plasma physics. Since their nonmagnetic cores do not saturate, they can operate over wide current ranges with inherent linearity. The applications entail low- and high-accuracy coils, measuring currents from a few amperes to tens of MA, at frequencies from a fraction of a hertz to hundreds of MHz. The increased interest in Rogowski coils over the last decades has led to significant improvements in their design and performance. Their development has included innovative designs, new materials, machining techniques, and printed circuit board structures. This presentation will cover the principles of operation, design, calibration, standards, and applications of Rogowski coils.
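As a minimal sketch of the principle of operation (the mutual inductance and waveform are illustrative assumptions, not values from the presentation), the coil output is proportional to the derivative of the primary current, v(t) = -M di/dt, so the current is recovered by integrating the output:

```python
# Sketch: reconstructing the primary current from a Rogowski coil output by
# numerical integration. M and the test waveform are illustrative assumptions.
import numpy as np

M = 1.0e-7                       # mutual inductance of the coil, H (assumed)
fs = 1.0e6                       # sampling rate, Hz
t = np.arange(0, 0.02, 1 / fs)   # 20 ms observation window

i_true = 1000.0 * np.sin(2 * np.pi * 50 * t)   # 1 kA, 50 Hz primary current
v_coil = -M * np.gradient(i_true, t)           # coil output: v = -M di/dt

# Recover the current with a cumulative trapezoidal integrator: i = -(1/M) * integral(v dt)
integral_v = np.concatenate(([0.0], np.cumsum((v_coil[1:] + v_coil[:-1]) / 2) / fs))
i_est = -integral_v / M

print(f"peak reconstruction error: {np.max(np.abs(i_est - i_true)):.3f} A")
```

In practice the integration is performed by an analog integrator or a digital filter, and handling drift and offset dominates the design; the sketch above only shows the underlying relationship.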
Estimating latency between network nodes in the Internet can play a significant role in improving the performance of many applications and services that use latency to make routing decisions. A popular example is peer-to-peer (P2P) networks, which need to build an overlay between peers in a manner that minimizes the delay of exchanging data among peers. Measurement of latency between peers is therefore a critical parameter that directly affects the quality of applications such as video streaming, gaming, file sharing, content distribution, server farms, and massively multiuser virtual environments (MMVE) or massively multiplayer online games (MMOG). However, acquiring latency information requires a considerable number of measurements to be performed at each node in order for that node to keep a record of its latency to all the other nodes. Moreover, the measured latency values are frequently subject to change, and the measurements need to be repeated regularly to stay up to date against network dynamics. This has motivated the use of techniques that alleviate the need for a large number of empirical measurements and instead try to predict the entire network latency matrix from a small set of latency measurements. Coordinate-based approaches are the most popular solutions to this problem. The basic idea behind coordinate-based schemes is to model the latency between each pair of nodes as the virtual distance between those nodes in a virtual coordinate system.
In this talk, we will cover the basics of how to measure latency in a distributed manner, without the need for a bottleneck central server. We will start with an introduction and background to the field, then briefly explain measurement approaches such as the Network Time Protocol, the Global Positioning System, and the IEEE 1588 Standard, before moving to coordinate-based measurement approaches such as GNP (Global Network Positioning), CAN (Content Addressable Network), Lighthouse, Practical Internet Coordinates (PIC), VIVALDI, and Pcoord. In the end, we also propose a new decentralized coordinate-based solution with higher accuracy, mathematically proven convergence, and a locality-aware design for lower delay.
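As a hedged sketch of the coordinate-based idea (a heavily simplified, Vivaldi-style update with a constant step size and no height vector or confidence weighting; the RTT matrix is an illustrative assumption), each node repeatedly nudges its virtual coordinate so that Euclidean distances in the coordinate space track measured RTTs:

```python
# Simplified, Vivaldi-style coordinate update (spring relaxation).
# Constant step size; the RTT values are illustrative assumptions.
import numpy as np

def coord_update(xi, xj, rtt, delta=0.25):
    """Nudge node i's coordinate based on one RTT measurement to node j."""
    diff = xi - xj
    dist = np.linalg.norm(diff)
    if dist < 1e-9:                       # coincident nodes: pick a random direction
        diff = np.random.randn(xi.size)
        dist = np.linalg.norm(diff)
    error = rtt - dist                    # >0 means the nodes sit too close in the map
    return xi + delta * error * diff / dist

rng = np.random.default_rng(1)
coords = {n: rng.uniform(0.0, 100.0, 2) for n in range(4)}   # 2-D virtual coordinates
rtts = {(0, 1): 30, (0, 2): 80, (0, 3): 50, (1, 2): 60, (1, 3): 45, (2, 3): 70}
pairs = list(rtts.items())

for _ in range(2000):                     # simulated rounds of pairwise probes
    (i, j), rtt = pairs[rng.integers(len(pairs))]
    coords[i] = coord_update(coords[i], coords[j], rtt)
    coords[j] = coord_update(coords[j], coords[i], rtt)

for (i, j), rtt in pairs:
    pred = np.linalg.norm(coords[i] - coords[j])
    print(f"pair ({i},{j}): measured {rtt} ms, predicted {pred:.1f} ms")
```

The published Vivaldi algorithm additionally adapts the step size to each node's confidence in its own coordinate, which speeds convergence and damps oscillation.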
The target audiences of this tutorial are practitioners, scientists, and engineers who work with networking systems and applications where there is a need to measure and estimate delay among network nodes, possibly a massive number of nodes (thousands, tens of thousands, or even hundreds of thousands of nodes).
A Massively Multiuser Virtual Environment (MMVE) sets out to create an environment for thousands, tens of thousands, or even millions of users to simultaneously interact with each other as in the real world. For example, Massively Multiuser Online Games (MMOGs), now a very profitable sector of the industry and the subject of academic and industrial research, are a special case of MMVEs in which hundreds of thousands of players simultaneously play games with each other. Although originally designed for gaming, MMOGs are now widely used for socializing, business, commerce, scientific experimentation, and many other practical purposes. One could say that MMOGs are the “killer app” that brings MMVEs into the realm of the eSociety. This is evident from the fact that virtual currencies such as the Linden (or L$) in Second Life are already being exchanged for real-world money. Similarly, virtual goods and virtual real estate are being bought and sold with real-world money. Massive numbers of users spend their time with their fellow players in online games like EverQuest, Half-Life, World of Warcraft, and Second Life. World of Warcraft, for example, has over twelve million users with a peak of over 500,000 players online at a given time. There is no doubt that MMOGs and MMVEs have the potential to be the cornerstone of any eSociety platform in the near future because they bring the massiveness, awareness, and inter-personal interaction of the real society into the digital realm.
In this talk, we focus on approaches for supporting the massive number of users in such environments, consisting of scalability methods, zoning techniques, and areas of interest management. The focus will be on networking and system support and architectures, as well as research challenges still remaining to be solved.
Nowadays, scientists, researchers, and practical engineers face a previously unseen explosion of the richness and the complexity of problems to be solved. Besides the spatial and temporal complexity, common tasks usually involve non-negligible uncertainty or even lack of information, strict requirements concerning the timing, continuity, robustness, and reliability of outputs, and further expectations like adaptivity and capability of handling atypical and crisis situations efficiently.
Model-based computing plays an important role in achieving these goals because it integrates the available knowledge about the problem at hand, in a proper form, into the procedure to be executed, where it acts as an active component during operation. Unfortunately, classical modeling methods often fail to meet the requirements of robustness, flexibility, adaptivity, learning, and generalizing ability. Even soft-computing-based models may fail to be effective enough because of their high (exponentially increasing) complexity. To satisfy the time, resource, and data constraints associated with a given task, hybrid methods and new approaches are needed for the modeling, evaluation, and interpretation of the problems and results. A possible solution to the above challenges is offered by the combination of soft computing techniques with novel approaches of anytime and situational modeling and operation.
Anytime processing is very flexible with respect to the available input information, computational power, and time. It is able to generalize previous input information and to provide a short response time when the required reaction time is significantly shortened due to failures or an alarm appearing in the modeled system, or when decisions have to be made before sufficient information arrives or the processing can be completed. The aim of the technique is to ensure continuous operation under (dynamically) changing circumstances and to provide optimal overall performance for the whole system. In case of a temporary shortage of computational power and/or loss of some data, operation continues while maintaining the overall performance “at a lower price”, i.e., information processing based on algorithms and/or models of lower complexity provides outputs of acceptable quality so that the complete system can continue operating. The accuracy of the processing may become temporarily lower, but it is usually still sufficient to produce data for qualitative evaluations and to support further decisions.
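An illustrative sketch of the anytime idea (not taken from the presentation): an iterative estimator that can be interrupted at any moment and still returns a usable, gradually improving answer. Here a Monte Carlo estimate of pi stands in for any refinable model evaluation:

```python
# Sketch: an interruptible, progressively refining computation. The task
# (estimating pi) and the time budgets are illustrative assumptions.
import random
import time

def anytime_pi(deadline_s):
    """Refine an estimate of pi until the time budget runs out."""
    t_end = time.monotonic() + deadline_s
    inside = total = 0
    estimate = 0.0
    while time.monotonic() < t_end:        # stop whenever the deadline arrives
        x, y = random.random(), random.random()
        inside += (x * x + y * y) <= 1.0
        total += 1
        estimate = 4.0 * inside / total    # best available answer so far
    return estimate, total

for budget in (0.001, 0.01, 0.1):          # tighter budget -> rougher answer
    est, n = anytime_pi(budget)
    print(f"budget {budget * 1000:5.1f} ms: pi ~ {est:.4f}  ({n} samples)")
```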
Situational modeling has been designed for the modeling and control of complex systems where traditional cybernetic models have not proved sufficient because the characterization of the system is incomplete or ambiguous due to unique, dynamically changing, and unforeseen situations. Typical cases are alarm situations, structural failures, the starting and stopping of plants, etc. The goal of situational modeling is to handle the contradiction arising from the existence of a large number of situations and the limited number of processing strategies, by grouping the possible situations into a treatable (finite) number of model classes of operational situations and by assigning certain processing algorithms to the defined processing regimes. This technique, similarly to anytime processing, offers a tradeoff between resource (including time and data) consumption and output quality.
The presentation gives an overview of the basics of anytime and situational approaches. Besides summarizing theoretical results and pointing out the open questions that arise (e.g. accuracy measures, data interpretation, transients), the author highlights some of the possibilities offered by these new techniques through successful applications taken from the fields of signal and image processing, control and fault diagnosis of plants, analysis, and expert systems.
The heart is a complex organic engine that converts chemical energy into work. Each heartbeat begins with an electrically-released pulse of calcium, which triggers force development and cell shortening, at the cost of energy and oxygen, and the dissipation of heat. My group has developed new instrumentation systems to measure all of these processes simultaneously while subjecting isolated samples of heart tissue to realistic contraction patterns that mimic the pressure-volume-time loops experienced by the heart with each beat. These devices are effective 'dynamometers' for the heart, which allow us to measure the performance of the heart and its tissues, much in the same way that you might test the performance of your motor vehicle on a 'dyno.'
This demanding undertaking has required us to develop our own actuators, force transducers, heat sensors, and optical measurement systems. Our instruments make use of several different measurement modalities, which are integrated in a robotic hardware-based real-time acquisition and control environment and interpreted with the aid of a computational model. In this way, we can now resolve (to within a few nanowatts) the heat released by living cardiac muscle fibers as they perform work at 37 °C.
Muscle force and length are controlled and measured to micronewton and nanometer precision by a laser interferometer, while the muscle is scanned in the view of an optical microscope equipped with a fluorescent calcium imaging system. Concurrently, the changing muscle geometry is monitored in 4D by a custom-built optical coherence tomograph, and the spacing of muscle proteins is imaged in real time by transmission-microscopy and laser diffraction systems. Oxygen consumption is measured using fluorescence-quenching techniques.
Equipped with these unique capabilities, we have probed the mechano-energetics of failing hearts from rats with diabetes. We have found that the peak stress and peak mechanical efficiency of tissues from these hearts were normal, despite prolonged twitch duration. We have thus shown that the compromised mechanical performance of the diabetic heart arises from a reduced period of diastolic filling and does not reflect either diminished mechanical performance or diminished efficiency of its tissues. In another program of research, we have demonstrated that, despite claims to the contrary, dietary supplementation with fish oils has no effect on heart muscle efficiency. Neither of these insights was fully revealed until the development of this instrument.
Optical sensors and techniques are used widely in many areas of instrumentation and measurement. Optical sensors are often, conveniently, ‘non-contact’, and thus impose negligible disturbance of the parameter undergoing measurement. Valuable information can be represented and recorded in space, time, and optical wavelength. They can provide exceptionally high spatial and/or temporal resolution, high bandwidth, and range. Moreover, optical sensors can be inexpensive and relatively simple to use.
At the Bioinstrumentation Lab at the Auckland Bioengineering Institute, we are particularly interested in developing techniques for measuring parameters from inside and outside the body. Such measurements help us to quantify physiological performance, detect and treat disease, and develop novel medical and scientific instruments. In making such measurements we often draw upon and develop our own optical sensing and measurement methods – from interferometry, fluorimetry and diffuse light imaging, to area-based and volume-based optical imaging and processing techniques.
In this talk, I will overview some of the new and interesting optically-based methods that we have recently developed for use in bioengineering applications. These include 1) diffuse optical imaging methods for monitoring the depth of a drug as it is rapidly injected through the skin, without requiring a needle; 2) stretchy soft optical sensors for measuring strains of up to several hundred percent during movement; 3) multi-camera image registration techniques for measuring the 3D shape and strain of soft tissues; 4) optical coherence tomography techniques for detecting the 3D shape of deforming muscle tissues; and 5) polarization-sensitive imaging techniques for classifying the optical and mechanical properties of biological membranes.
While these sensors and techniques have been motivated by applications in bioengineering, the underlying principles have broad applicability to other areas of instrumentation and measurement.
Semiconductor chip manufacturing cost consists of die cost, package cost, and test cost. The trends of increasing design complexity, increasing quality needs, and new process nodes and defect models are pushing test cost to the forefront. This is especially true for high-resolution data converters, whose accurate testing requires expensive instruments and is extremely time-consuming. As a result, linearity test of data converters often dominates the overall test cost of SoCs. This talk will present several recently developed techniques for reducing linearity test cost by dramatically reducing measurement time and dramatically relaxing instrumentation requirements.
The IEEE standard for ADC linearity test requires the stimulus signal to be at least 10 times more accurate than the ADC under test. To relax this stringent requirement, the SEIR (stimulus error identification and removal) algorithm was developed to accurately test high-resolution ADCs using nonlinear stimuli. Industry demonstrations have shown that more than 16 bits of ADC test accuracy can be achieved using 7-bit-linear ramps instead of the 20-bit-linear ramps required by the IEEE standard, a relaxation of well over 1000 times in the instrumentation accuracy requirement.
The biggest contributor to test cost is the long measurement time. The recently developed uSMILE (ultrafast Segmented Model Identification for Linearity Errors) algorithm can dramatically reduce the measurement time needed for ADC linearity test. With a system identification approach using a segmented model for the integral nonlinearity, the algorithm can reduce the test time by a factor of over 100 and still achieve test accuracies superior to the standard histogram test method. This method has been extensively validated by industry and has been adopted for production test for multiple product families.
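For context, a hedged sketch of the conventional ramp-histogram test against which uSMILE is compared (the simulated 8-bit ADC and its error shape are illustrative assumptions): DNL follows from the code hit counts under a slow linear ramp, and INL from their running sum:

```python
# Sketch of the standard histogram (ramp) linearity test. The simulated ADC
# transfer-curve error below is an illustrative assumption.
import numpy as np

N_BITS = 8
CODES = 2 ** N_BITS
rng = np.random.default_rng(0)

ramp = np.linspace(0.0, 1.0, 200 * CODES)              # slow full-scale ramp
nonlinear = ramp + 0.004 * np.sin(2 * np.pi * ramp)    # ADC transfer-curve error
codes = np.clip((nonlinear * CODES + rng.normal(0, 0.1, ramp.size)).astype(int),
                0, CODES - 1)

hist = np.bincount(codes, minlength=CODES).astype(float)
h = hist[1:-1]                                          # exclude the two end codes
dnl = h / h.mean() - 1.0                                # DNL in LSB
inl = np.cumsum(dnl)                                    # INL in LSB

print(f"max |DNL| = {np.abs(dnl).max():.3f} LSB, max |INL| = {np.abs(inl).max():.3f} LSB")
```

The weakness this illustrates is sample count: resolving a 16-bit ADC this way needs many hits per code across 65,536 codes, which is exactly the measurement time that the segmented-model approach attacks.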
By combining the salient features of both SEIR and uSMILE, the ultrafast stimulus error removal and segmented model identification of linearity errors (USER-SMILE) algorithm was developed. The USER-SMILE algorithm uses two nonlinear signals as input to the ADC under test; one signal is shifted by a constant voltage with respect to the other. By subtracting the two sets of output codes, the input signal is canceled and the nonlinearity of the ADC, modeled by a segmented non-parametric INL model, is identified with the least-squares method.
A completely on-chip ADC BIST circuit based on the USER-SMILE algorithm has been developed and demonstrated on a 28 nm CMOS automotive microcontroller. The ADC test subsystem includes a nonlinear DAC as a signal generator, a built-in voltage-shift generator, a BIST computation engine, and dedicated memory cells. Silicon measurements show accurate test results. The INL test results are further used to correct ADC linearity errors, thus providing a method for reliably calibrating the ADC. Measurement results demonstrated that the BIST-based calibration method achieved >10 dB THD/SFDR improvements over the existing calibration method used by industry.
In this talk, we discuss how computer vision can facilitate the interpretation of medical imaging data, or help to make inferences based on models of such data. To illustrate this presentation, several applications of medical imaging measurements and modeling are discussed, focusing on areas such as the correction of imaging artifacts that may occlude visual information, and tumor detection, modeling, and measurement in different imaging modalities.
When interpreting medical imaging data with computer vision, we are usually trying to describe anatomic structures (or medical phenomena) using one or more images, and to reconstruct some of their properties based on imaging data (such as shape, texture, or color). This is an ill-posed problem that humans can learn to solve effortlessly, but computer algorithms are often prone to errors. Nevertheless, in some cases, computers can surpass humans and interpret medical images more accurately, given the proper choice of models, as we will show in this talk.
Reconstructing interesting properties of real-world objects or phenomena from captured imaging data involves solving an inverse problem, in which we seek to recover some unknowns given insufficient information to specify a unique solution. Therefore, we disambiguate between possible solutions by relying on models based on physics, mathematics, or statistics. Modeling the real world in all its complexity is still an open problem. However, if we know the phenomenon or object of interest, we can construct detailed models using specialized techniques and domain-specific representations that are efficient at reliably describing the measurements (or, in some cases, at obtaining measurements). In this talk, we briefly overview some challenging problems in computer vision for medical imaging and measurements, with illustrations and insights about model selection and model-based prediction. Some of the applications discussed in this talk are: modeling tumor shape and size, and making inferences about future growth or shrinkage; modeling relevant details in the background of medical images to discriminate them from useless background noise; and modeling shading artifacts to minimize their influence when detecting and measuring skin lesions in standard camera images.
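A minimal sketch of how a prior disambiguates an ill-posed inverse problem (toy one-dimensional data, not from the talk): Tikhonov (ridge) regularization trades agreement with blurred, noisy measurements against a penalty on the solution, and the regularization weight controls that trade-off:

```python
# Sketch: Tikhonov-regularized deblurring of a toy 1-D profile. The blur
# kernel, noise level, and "lesion" profile are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 100
x_true = np.zeros(n)
x_true[30:60] = 1.0                                     # unknown "lesion" profile

# Forward model: Gaussian blur (ill-conditioned) plus measurement noise
A = np.array([[np.exp(-0.5 * ((i - j) / 3.0) ** 2) for j in range(n)] for i in range(n)])
y = A @ x_true + rng.normal(0, 0.01, n)

def tikhonov(A, y, lam):
    """Closed-form minimizer of ||Ax - y||^2 + lam * ||x||^2."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)

for lam in (1e-8, 1e-3, 1e-1):
    x_hat = tikhonov(A, y, lam)
    print(f"lambda = {lam:g}: reconstruction error = {np.linalg.norm(x_hat - x_true):.2f}")
```

With almost no regularization the noise is amplified by the near-singular blur operator; with too much, the reconstruction is over-smoothed, which mirrors the model-complexity trade-off discussed at the end of the talk.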
Medical images contain a wealth of information, which makes modeling of medical images a challenging task. Therefore, medical images are often segmented into multiple elementary parts, simplifying their representation and changing the image model into something that is more meaningful, or easier to analyze and measure (e.g. by describing the boundaries of an object by lines or curves, or image segments by their textures, colors, etc.). Nevertheless, these simpler image elements may be easy to perceive visually but difficult to describe. For example, the texture of a skin lesion may not have an identifiable texture element or a model known a priori, yet skin lesion detection must still be accurate and precise. The segmentation and analysis of medical imaging data are still open questions, and some current directions are discussed in this talk.
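As one elementary, hedged example of such a simplifying segmentation step (the synthetic image below is an assumption, not a clinical example), an Otsu threshold splits an image into two classes by maximizing the between-class variance of its gray-level histogram:

```python
# Sketch: Otsu thresholding as a basic foreground/background segmentation
# before measurement. The synthetic "lesion" image is an illustrative assumption.
import numpy as np

def otsu_threshold(img, bins=256):
    """Return the gray level that maximizes the between-class variance."""
    hist, edges = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                       # cumulative class-0 probability
    mu = np.cumsum(p * centers)             # cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1 - w0))
    return centers[np.nanargmax(sigma_b)]

rng = np.random.default_rng(0)
img = rng.normal(0.3, 0.05, (64, 64))       # background intensities
img[20:40, 20:40] += 0.4                    # brighter "lesion" region
t = otsu_threshold(img)
mask = img > t
print(f"threshold = {t:.2f}, segmented area = {mask.sum()} pixels (true area 400)")
```

Real lesion segmentation needs far more than a global threshold, but the sketch shows how segmentation converts raw pixel intensities into a region whose area, boundary, and texture can then be measured.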
Computer vision and modeling are interrelated. Modeling imaging measurements often involves errors, and estimating the expected error of a model can be important in applications (e.g. estimating a tumor's size and its potential growth, or shrinkage, in response to treatment). This issue can be approached by adapting machine learning and pattern recognition techniques to solve problems in medical imaging measurements. Typically, a model has tuning parameters, and these tuning parameters may change the model complexity. We wish to minimize both the modeling errors and the model complexity; in other words, to get the ‘big picture’ we often sacrifice some of the small details. For example, estimating tumor growth (or shrinkage) in response to treatment requires modeling the tumor shape and size, which can be challenging for real tumors, and simplified models may be justifiable if the predictions obtained are informative (e.g. to evaluate the treatment effectiveness). To conclude this talk, we outline the current trends in computer vision in medical imaging measurements and discuss some open problems.
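A toy sketch of that complexity trade-off (simulated data and illustrative polynomial degrees, not from the talk): increasing a tuning parameter keeps shrinking the fitting error, but past some point the error on held-out data grows again, which signals that further complexity only captures noise:

```python
# Sketch: choosing model complexity (polynomial degree) by validation error.
# The simulated measurements and the candidate degrees are assumptions.
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(-1, 1, 60))
y = np.sin(np.pi * x) + rng.normal(0, 0.2, x.size)           # noisy "measurements"
x_tr, y_tr = x[::2], y[::2]                                   # training half
x_va, y_va = x[1::2], y[1::2]                                 # held-out half

for degree in (1, 3, 7, 12):
    coeffs = np.polyfit(x_tr, y_tr, degree)
    rmse_tr = np.sqrt(np.mean((np.polyval(coeffs, x_tr) - y_tr) ** 2))
    rmse_va = np.sqrt(np.mean((np.polyval(coeffs, x_va) - y_va) ** 2))
    print(f"degree {degree:2d}: train RMSE {rmse_tr:.3f}, validation RMSE {rmse_va:.3f}")
```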