Semi-autonomous and fully autonomous assembly and manipulation of micro-objects is a complex process. As part of the MINIMAN micro-robot project, a vision subsystem is required which can recognise and track objects both under an optical microscope and within a scanning electron microscope, as part of a larger system for assembling and manipulating micro-objects. The two operating environments pose many challenges for a generic vision system owing to the vast differences in image quality. This paper provides a detailed description of this new vision system, together with a discussion and analysis of its flexibility and extensibility. Recent results on the system's ability to recognise rigid objects robustly under camera noise and object occlusion are given, and the adaptability of the system to recognising biological objects is discussed. Finally, an overview is provided of the communication strategy between the vision subsystem and the micro-robot control system.
This paper describes recent work in the field of computer vision and relates the results to the much broader class of smart sensors. The paper begins with an overview of recent work on combining chemical sensors with neural networks. Such devices allow classification of samples into distinct states, forming, to give one example, an electronic nose; they often require a broad selectivity and are formed from small arrays of sensor elements. This is followed by a description of two aspects of the authors' own work in the field of computer vision. One is an automatic control system for a micro-robot-based microassembly station using computer vision. The other concerns the automatic recognition of objects regardless of the scale of the object. For the latter, we have shown that when sensor input noise is taken into consideration, a conventional CCD array is unable to provide a robust representation of an object regardless of scale; in contrast, biologically based retinal arrays are able to achieve this. The paper concludes with the perspective that all sensor systems are data dependent. This is of little concern if the sensor consists of a single element, but becomes more important as larger arrays are fabricated. These sensor arrays may have to emulate biological systems, in a manner analogous to a retinal camera.
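The scale-robustness argument for retina-like sampling can be illustrated with a toy log-polar example: when samples are taken at exponentially spaced radii, uniformly scaling the object becomes a pure shift along the sample index, which a recogniser can absorb far more easily than a rescaling. This is only a sketch of the general principle; the sampling ratio, ring count and test object below are illustrative assumptions, not the authors' retinal array.

```python
import math

RATIO = 1.2    # illustrative ratio between successive sampling radii
R0 = 1.0       # innermost sampling radius
N_RINGS = 20   # number of retina-like sampling rings

def radial_intensity(r):
    """Toy radially symmetric object: a smooth bump centred at r = 4."""
    return math.exp(-((r - 4.0) ** 2))

def retinal_samples(scale=1.0):
    """Sample the object, uniformly scaled by `scale`, at radii R0 * RATIO**k.

    Scaling the object by RATIO**m shifts the sample vector by exactly m
    indices, so scale changes become translations in sample space.
    """
    return [radial_intensity((R0 * RATIO ** k) / scale) for k in range(N_RINGS)]

original = retinal_samples()
scaled = retinal_samples(scale=RATIO ** 2)  # equals `original` shifted by 2 rings
```

A uniform CCD grid offers no such invariance: scaling there resamples the object onto a different set of pixels, which is where the noise-sensitivity argument above bites.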
KEYWORDS: Visualization, Model-based design, Information visualization, 3D modeling, Environmental sensing, Sensors, Motion models, Distance measurement, Mobile robots, Data modeling
The two most popular methods used by current researchers for tackling the problem of autonomous mobile robot map building and navigation are (1) model-based methods, which construct 3D models of the environment, and (2) view-based, or topological, methods, in which the environment is represented in some non-topographical manner which the robot can typically relate directly to its sensors. Model-based methods have been shown to be good for local navigation, yet unreliable and computationally intensive for map building. View-based methods have been shown to be good for map building, but poor at local navigation.
This paper presents the results of recent research which yields a solution utilising the benefits of each of these two methods to overcome the inadequacies of the other. The result is a new approach to map building and path planning, and one which is robust and extensible. In particular, the solution is largely unaffected by the size or complexity of the environment being learnt, and thus forms a key milestone in the evolution of AMR autonomy. It is shown that an AMR can successfully use this method to navigate around an unknown environment. Learning is accomplished by segmenting the environment into discrete ‘locations’ based upon visual similarity. Navigation is achieved by storing minimal distance-to-visual-feature information at all visually distinct locations. No self-consistent geometric map is stored, yet the geometric information stored can be used for navigation between visually distinct locations and for local navigation tasks such as obstacle avoidance.
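The core of the approach described above, segmenting the environment into discrete locations by visual similarity and attaching only local distance-to-feature readings to each, might be sketched as follows. All names, the signature representation and the similarity threshold are illustrative assumptions, not the authors' implementation:

```python
import math

SIM_THRESHOLD = 0.9  # assumed: minimum visual similarity to match a known location

def cosine_similarity(a, b):
    """Cosine similarity between two visual signature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

class TopologicalMap:
    """View-based map: discrete locations, each holding a visual signature
    and minimal distance-to-visual-feature readings; no global geometry."""

    def __init__(self):
        self.locations = []  # list of (signature, feature_distances) pairs

    def update(self, signature, feature_distances):
        """Match the current view to a stored location, or create a new one."""
        for i, (sig, _) in enumerate(self.locations):
            if cosine_similarity(signature, sig) >= SIM_THRESHOLD:
                return i  # visually similar enough: same discrete location
        self.locations.append((signature, feature_distances))
        return len(self.locations) - 1
```

Two views of the same place map to one location, while a visually distinct view spawns a new one; the per-location `feature_distances` then support local tasks such as obstacle avoidance without any self-consistent global map.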
The rapidly growing market for new microproducts is placing increasing demands on industry and the research community for graduates with both interdisciplinary skills and specialized knowledge. To meet this challenge, a range of courses in microsystems technology have been and are being developed by universities, research centers and companies worldwide. In this paper the general characteristics of training courses and programs are outlined from a UK perspective. One example is given of a university-based modular masters degree course which gives students a basic understanding of silicon and non-silicon fabrication technologies and of the major design and assembly issues. Design and simulation are introduced in a practical way, via commercial finite element analysis and electromagnetic/electrostatic computer-aided design exercises. The importance of design-for-manufacture is implicit and is a central theme in the course project. Another course, currently at the design stage, which has a slightly different emphasis but similar underlying principles, is also outlined. A mechanism is also described by which training for industry can be facilitated through knowledge and technology transfer to companies by means of an industry-academic network.
Vapor detection has been realized both by the shift of the whole surface plasmon resonance (SPR) curve under dynamic adsorption conditions and by measuring the SPR reflectivity signal at a fixed angle of incidence. Selective, fast and reversible adsorption of the vapor molecules has been observed. The increase of both film thickness and refractive index of the spun films during adsorption is found to correspond to the behavior of calixarenes and may be explained by the capture of guest molecules in the film matrix, followed by their condensation. A model of the vapor registration system has been established, and we also report in this paper on the extent of the selectivity, leading towards the establishment of a sensor array.
We present a novel composite sensing agent consisting of calix[4]resorcinarene and the conducting polymer polyorthomethoxyaniline, and propose different sensing mechanisms that can take advantage of its nanoporosity and unique complexation reactions.
The main idea of the present work is to combine different enzymes and pH-sensitive organic dyes in the same film in order to form an optical transducer. Decomposition of substrate molecules, catalyzed by enzymes, is usually accompanied by pH changes in the local surroundings, which can be registered by spectral transformations of the indicator molecules. This idea was realized by using cyclotetrachromotropylene (Chromo1) as an indicator together with an enzyme such as urease.
This paper describes a novel, electromagnetically levitated, micromachined gyroscope or yaw rate sensor. The device uses a rotor spun at high speed and has the potential for several orders of magnitude improvement in yaw rate sensitivity when compared to other micromachined sensors. The first prototype rotates at just over 1,000 revolutions per minute, with a predicted sensitivity of approximately 0.5 degrees per second. The model indicates that this rotation speed may be increased by orders of magnitude, with corresponding increases in the sensitivity, as the speed of rotation is only limited by the drag force in air. The device also has the advantage that it is extremely simple to fabricate.