Saturday, November 29, 2008

DLP technology

DLP technology is based on an optical semiconductor, called a Digital Micromirror Device (DMD), which uses mirrors made of aluminum to reflect light to make the picture. The DMD is often referred to as the DLP chip. The chip can be held in the palm of your hand, yet it can contain more than 2 million mirrors, each measuring less than one-fifth the width of a human hair. The mirrors are laid out in a matrix, much like a photo mosaic, with each mirror representing one pixel.
When you look closely at a photo mosaic, each piece of it holds a tiny, square photograph. As you step away from the mosaic, the images blend together to create one large image. The same concept applies to DMDs. If you look closely at a DMD, you would see tiny square mirrors that reflect light, instead of the tiny photographs. From far away (or when the light is projected on the screen), you would see a picture.
The number of mirrors corresponds to the resolution of the screen. DLP 1080p technology delivers more than 2 million pixels for true 1920x1080p resolution, the highest available.
In addition to the mirrors, the DMD unit includes:
• A CMOS DDR SRAM chip, which is a memory cell that will electrostatically cause the mirror to tilt to the on or off position, depending on its logic value (0 or 1)
• A heat sink
• An optical window, which allows light to pass through while protecting the mirrors from dust and debris

DLP Projection
Before any of the mirrors switch to their on or off positions, the chip will rapidly decode a bit-streamed image code that enters through the semiconductor. It then converts the data from interlaced to progressive, allowing the picture to fade in. Next, the chip sizes the picture to fit the screen and makes any necessary adjustments to the picture, including brightness, sharpness and color quality. Finally, it relays all the information to the mirrors, completing the whole process in just 16 microseconds.
The mirrors are mounted on tiny hinges that enable them to tilt either toward the light source (ON) or away from it (OFF) up to +/- 12°, and as often as 5,000 times per second. When a mirror is switched on more than off, it creates a light gray pixel. Conversely, if a mirror is off more than on, the pixel will be a dark gray.
The light they reflect is directed through a lens and onto the screen, creating an image. The mirrors can reflect pixels in up to 1,024 shades of gray to convert the video or graphic signal entering the DLP into a highly detailed grayscale image. DLPs also produce the deepest black levels of any projection technology by keeping mirrors in the off position.
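To make the on/off duty-cycle idea concrete, here is a minimal, hypothetical Python sketch of how a binary mirror could approximate 1,024 gray levels by varying how long it stays ON during a frame. The frame rate and timing numbers are assumptions for illustration, not TI's actual modulation scheme.

```python
# Illustrative sketch (not TI's actual modulation scheme): a DMD mirror is a
# binary device, so intermediate gray levels come from how long it spends in
# the ON state during each frame. All numbers here are assumed for illustration.

FRAME_TIME_S = 1 / 60          # one video frame (assumed 60 Hz refresh)
GRAY_LEVELS = 1024             # shades of gray mentioned in the text

def on_time_for_gray(level: int) -> float:
    """Return how long (in seconds) the mirror stays ON this frame
    to approximate the requested gray level (0 = black, 1023 = white)."""
    if not 0 <= level < GRAY_LEVELS:
        raise ValueError("gray level out of range")
    duty_cycle = level / (GRAY_LEVELS - 1)
    return duty_cycle * FRAME_TIME_S

# Example: a mid-gray pixel keeps its mirror ON for roughly half the frame.
print(f"level 512 -> ON for {on_time_for_gray(512) * 1e3:.2f} ms of a "
      f"{FRAME_TIME_S * 1e3:.2f} ms frame")
```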
To add color to that image, the white light from the lamp passes through a transparent, spinning color wheel, and onto the DLP chip. The color wheel, synchronized with the chip, filters the light into red, green and blue. The on and off states of each mirror are coordinated with these three basic building blocks of color. A single chip DLP projection system can create 16.7 million colors. Each pixel of light on the screen is red, green or blue at any given moment. The DLP technology relies on the viewer's eyes to blend the pixels into the desired colors of the image. For example, a mirror responsible for creating a purple pixel will only reflect the red and blue light to the surface. The pixel itself is a rapidly alternating flash of blue and red light. Our eyes blend these flashes in order to see the intended hue of the projected image.
A DLP Cinema projection system has three chips, each with its own color wheel that is capable of producing no fewer than 35 trillion colors. In a 3-chip system, the white light generated from the lamp passes through a prism that divides it into red, green and blue. Each chip is dedicated to one of these three colors. The colored light that the mirrors reflect is then combined and passes through the projection lens to form an image.
Texas Instruments has created a new system called BrilliantColor, which uses system level enhancements to give truer, more vibrant colors. Customers can expand the color scope beyond red, green and blue to add yellow, white, magenta and cyan, for greater color accuracy.
Once the DMD has created a picture out of the light and the color wheel has added color, the light passes through a lens. The lens projects the image onto the screen.

mathematical tools for modeling

A mathematical model uses mathematical language to describe a system. Mathematical models are used not only in the natural sciences and engineering disciplines (such as physics, biology, earth science, meteorology, and electrical engineering) but also in the social sciences (such as economics, psychology, sociology and political science); physicists, engineers, computer scientists, and economists use mathematical models most extensively.
Eykhoff (1974) defined a mathematical model as 'a representation of the essential aspects of an existing system (or a system to be constructed) which presents knowledge of that system in usable form'.
Mathematical models can take many forms, including but not limited to dynamical systems, statistical models, differential equations, or game theoretic models. These and other types of models can overlap, with a given model involving a variety of abstract structures.
Building blocks
There are six basic groups of variables: decision variables, input variables, state variables, exogenous variables, random variables, and output variables. Since there can be many variables of each type, the variables are generally represented by vectors.
Decision variables are sometimes known as independent variables. Exogenous variables are sometimes known as parameters or constants. The variables are not independent of each other as the state variables are dependent on the decision, input, random, and exogenous variables. Furthermore, the output variables are dependent on the state of the system (represented by the state variables).
Objectives and constraints of the system and its users can be represented as functions of the output variables or state variables. The objective functions will depend on the perspective of the model's user. Depending on the context, an objective function is also known as an index of performance, as it is some measure of interest to the user. Although there is no limit to the number of objective functions and constraints a model can have, using or optimizing the model becomes more involved (computationally).
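As a compact illustration of how these groups of variables fit together (the notation below is my own shorthand, not taken from the text), the relationships are often written in a state-space style: the state is driven by the decision, input, exogenous, and random variables; the outputs are derived from the state; and the objective and constraints are expressed over outputs and states.

```latex
% Illustrative notation only: x = state, u = decision, i = input,
% p = exogenous (parameters), w = random, y = output variables.
\begin{align}
  \dot{\mathbf{x}}(t) &= f\!\left(\mathbf{x}(t),\, \mathbf{u}(t),\, \mathbf{i}(t),\, \mathbf{p},\, \mathbf{w}(t)\right) \\
  \mathbf{y}(t)       &= g\!\left(\mathbf{x}(t)\right) \\
  \min_{\mathbf{u}}\; J(\mathbf{y}, \mathbf{x})
      &\quad \text{subject to } c(\mathbf{y}, \mathbf{x}) \le 0
\end{align}
```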
Classifying mathematical models
Many mathematical models can be classified in some of the following ways:
• Linear vs. nonlinear: Mathematical models are usually composed of variables, which are abstractions of quantities of interest in the described systems, and operators that act on these variables, which can be algebraic operators, functions, differential operators, etc. If all the operators in a mathematical model exhibit linearity, the resulting mathematical model is defined as linear; otherwise, the model is considered nonlinear. The question of linearity and nonlinearity is dependent on context, and linear models may have nonlinear expressions in them. For example, in a statistical linear model, it is assumed that a relationship is linear in the parameters, but it may be nonlinear in the predictor variables. Similarly, a differential equation is said to be linear if it can be written with linear differential operators, but it can still have nonlinear expressions in it. In a mathematical programming model, if the objective functions and constraints are represented entirely by linear equations, then the model is regarded as a linear model; if one or more of the objective functions or constraints are represented with a nonlinear equation, then the model is known as a nonlinear model. Nonlinearity, even in fairly simple systems, is often associated with phenomena such as chaos and irreversibility. Although there are exceptions, nonlinear systems and models tend to be more difficult to study than linear ones. A common approach to nonlinear problems is linearization, but this can be problematic if one is trying to study aspects such as irreversibility, which are strongly tied to nonlinearity. (A small example of the linear-in-parameters distinction follows this list.)
• Deterministic vs. probabilistic (stochastic): A deterministic model is one in which every set of variable states is uniquely determined by parameters in the model and by sets of previous states of these variables. Therefore, deterministic models perform the same way for a given set of initial conditions. Conversely, in a stochastic model, randomness is present, and variable states are not described by unique values, but rather by probability distributions.
• Static vs. dynamic: A static model does not account for the element of time, while a dynamic model does. Dynamic models typically are represented with difference equations or differential equations.
• Lumped vs. distributed parameters: If the model is homogeneous (consistent state throughout the entire system), the parameters are lumped. If the model is heterogeneous (varying state within the system), then the parameters are distributed. Distributed parameters are typically represented with partial differential equations.
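Here is a small sketch of the "linear in the parameters, nonlinear in the predictors" case mentioned above: a quadratic curve is nonlinear in x, yet fitting it is an ordinary linear least-squares problem because the model is a linear combination of the parameters. The data are synthetic and purely illustrative.

```python
# A statistical model can be nonlinear in the predictor (here, x and x**2)
# yet still "linear" because it is linear in the parameters a, b, c.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 5, 50)
y = 1.0 + 2.0 * x - 0.5 * x**2 + rng.normal(0, 0.2, x.size)  # synthetic data

# Design matrix: columns are (possibly nonlinear) functions of x,
# but the model itself is a linear combination of them.
X = np.column_stack([np.ones_like(x), x, x**2])
params, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimated a, b, c:", params)
```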
A priori information
Mathematical modeling problems are often classified into black-box or white-box models, according to how much a priori information is available about the system. A black-box model is a system of which there is no a priori information available. A white-box model (also called glass box or clear box) is a system where all necessary information is available. Practically all systems are somewhere between the black-box and white-box models, so this concept works only as an intuitive guide to the approach.
Usually it is preferable to use as much a priori information as possible to make the model more accurate. Therefore, white-box models are usually considered easier, because if you have used the information correctly, then the model will behave correctly. Often the a priori information comes in the form of knowing the type of functions relating different variables. For example, if we make a model of how a medicine works in a human system, we know that usually the amount of medicine in the blood is an exponentially decaying function. But we are still left with several unknown parameters: how rapidly does the medicine amount decay, and what is the initial amount of medicine in the blood? This example is therefore not a completely white-box model. These parameters have to be estimated through some means before one can use the model.
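A minimal sketch of that medicine example, assuming the exponential-decay form is the a priori knowledge and the decay rate and initial amount are the unknowns to estimate; the measurement values are invented for illustration.

```python
# The a priori knowledge is the functional form: amount(t) = A0 * exp(-k * t).
# What remains is to estimate A0 (initial amount) and k (decay rate) from data.
import numpy as np
from scipy.optimize import curve_fit

def amount(t, a0, k):
    return a0 * np.exp(-k * t)

# Synthetic "measurements" of drug concentration (illustrative numbers only).
t = np.array([0.0, 1.0, 2.0, 4.0, 8.0, 12.0])          # hours after the dose
measured = np.array([10.2, 7.3, 5.6, 3.1, 0.9, 0.3])   # e.g. mg/L

(a0_hat, k_hat), _ = curve_fit(amount, t, measured, p0=(10.0, 0.3))
print(f"estimated initial amount: {a0_hat:.2f}, decay rate: {k_hat:.3f} per hour")
```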
In black-box models one tries to estimate both the functional form of the relations between variables and the numerical parameters in those functions. Using a priori information we could end up, for example, with a set of functions that probably could describe the system adequately. If there is no a priori information, we would try to use functions as general as possible to cover all different models. An often-used approach for black-box models is neural networks, which usually do not make assumptions about incoming data. The problem with using a large set of functions to describe a system is that estimating the parameters becomes increasingly difficult as the number of parameters (and different types of functions) increases.

Cluster computing for nanotechnology parameters calculation

A computer cluster is a group of linked computers, working together closely so that in many respects they form a single computer. The components of a cluster are commonly, but not always, connected to each other through fast local area networks. Clusters are usually deployed to improve performance and/or availability over that provided by a single computer, while typically being much more cost-effective than single computers of comparable speed or availability.

Cluster categorizations
High-availability (HA) clusters (also known as failover clusters) are implemented primarily for the purpose of improving the availability of services which the cluster provides. They operate by having redundant nodes, which are then used to provide service when system components fail. The most common size for an HA cluster is two nodes, which is the minimum requirement to provide redundancy. HA cluster implementations attempt to manage the redundancy inherent in a cluster to eliminate single points of failure.
There are many commercial implementations of High-Availability clusters for many operating systems. The Linux-HA project is one commonly used free software HA package for the Linux OSs.
Load-balancing clusters
Load-balancing clusters operate by distributing a workload evenly over multiple back end nodes. Typically the cluster will be configured with multiple redundant load-balancing front ends.
Grid computing
Grids are more like a computing utility than a single computer. In addition, grids typically support more heterogeneous collections of nodes than are commonly supported in clusters.
Grid computing is optimized for workloads which consist of many independent jobs or packets of work, which do not have to share data between the jobs during the computation process. Grids serve to manage the allocation of jobs to computers which will perform the work independently of the rest of the grid cluster. Resources such as storage may be shared by all the nodes, but intermediate results of one job do not affect other jobs in progress on other nodes of the grid.
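A toy sketch of this workload pattern, using a local process pool as a stand-in for a grid scheduler; the job function and counts are made up, and the point is only that jobs run independently and never exchange intermediate results.

```python
# Grid-style workload: many independent jobs, no shared intermediate results.
# A local process pool stands in for the grid's job scheduler (illustration only).
from multiprocessing import Pool

def run_job(job_id: int) -> tuple[int, int]:
    """One self-contained unit of work; it never talks to other jobs."""
    result = sum(i * i for i in range(10_000 + job_id))
    return job_id, result

if __name__ == "__main__":
    jobs = range(20)                    # 20 independent work packets
    with Pool(processes=4) as pool:     # 4 "nodes"
        for job_id, result in pool.imap_unordered(run_job, jobs):
            print(f"job {job_id} finished with result {result}")
```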

Quantum encryption

A Quantum Leap in Data Encryption
With security on the Internet, there's always some nagging doubt. Can you ever be absolutely certain, for example, that the e-mail you're sending with some confidential business information attached isn't going to be intercepted and read as it travels the digital highways and byways?
Using the Internet for anything sensitive requires some faith that everything in place to ensure the security of the information you're working with—all the encryption, passwords, and security policies—will, in fact, work. But as with most things in life, nothing is certain except uncertainty itself.
But uncertainty can be useful. For years, security researchers have been experimenting with harnessing one of the underlying rules of quantum physics, known as the Uncertainty Principle, which states that at the quantum level, where objects are extraordinarily small, it's impossible to measure electrons, photons, and other similarly tiny particles without affecting them.
How does quantum physics apply to the world of security? The idea is to harness this inherent uncertainty to create a data-encryption scheme that's essentially unbreakable. It's called quantum cryptography, and already some governments and private companies are using it to build absolutely secure lines for data and voice communications.
One Step Beyond
Today, companies and governments encrypt their ultra-sensitive secrets by using mathematical formulas to make the info look like gibberish to anyone who may be intercepting it. Currently, most data are encrypted using the Advanced Encryption Standard, a method first approved for government use by the National Institute of Standards & Technology in 2002 and then widely adopted in the private sector. So far, AES is serving its purpose. It's hard to break—at least for now. But that may not always be the case.
The next step in security is quantum cryptography, and a few companies are developing encryption products using it. New York-based MagiQ Technologies is one of them. It builds boxes that harness the properties of quantum physics to create encryption keys it claims can't be broken.
Why is MagiQ so confident? Uncertainty. MagiQ's gear generates particles of light called photons, which are so small that the conventional rules of classical physics no longer apply to them. In 1927 the German physicist Werner Heisenberg found that merely observing a particle as small as a photon alters it. Once you look at it, it's never the same again.
Checkpoints
This is known as Heisenberg's Uncertainty Principle, and it turns out that if you use the state of a photon to generate an encryption key—essentially a secret set of random numbers—it's easy to determine whether anyone else has looked at it while trying to get a copy of the key you used.
"Uncertainty is the principle we exploit," says Mike LaGasse, MagiQ's vice-president for engineering. "It's fundamentally impossible to observe the key, because the photon can be measured once and only once. An eavesdropper can't measure it, and so can't get the key."
MagiQ's system combines a computer, a finely tuned laser, a photon detector, and a fiber-optic line. The laser inside the MagiQ QPN box is adjusted to produce single photons, which are then sent over the fiber-optic cable to a second QPN box, which detects them and notes their precise time of arrival.
The two boxes then compare how the photon appeared when it left the first box to how it appeared when it arrived at the second. If they match, the photon is used to generate a key, which is used to encrypt the data. If they don't match, the photon is ignored. The observations of each good photon are saved and used as needed to generate keys. This process repeats itself hundreds of times a second.
Once the key is generated, it's a relatively simple matter to encrypt the data you want to send, whether it's a voice conversation or corporate strategic plan. But since the keys are impregnable, the data that's encrypted are too. Further complicating the problem for eavesdroppers is the fact that keys are generated hundreds of times a second, so the chance of getting enough information about the key to generate a copy and thus break the encryption is essentially zero.
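A toy simulation of the underlying idea, loosely modeled on the well-known BB84 protocol rather than MagiQ's proprietary system: when an eavesdropper measures photons in a randomly chosen basis and re-sends them, roughly a quarter of the sifted key bits no longer match, and the two endpoints can detect this by comparing a sample of their key.

```python
# Toy BB84-style illustration (not MagiQ's actual product): measuring a photon
# in the wrong basis randomizes the result, so an eavesdropper leaves errors
# that the endpoints can detect by comparing a sample of their key bits.
import random

def measure(bit: int, prep_basis: int, meas_basis: int) -> int:
    """Matching bases reproduce the bit; a mismatched basis gives a random result."""
    return bit if prep_basis == meas_basis else random.randint(0, 1)

def error_rate(n_photons: int = 5000, eavesdrop: bool = False) -> float:
    """Fraction of sifted key bits where the receiver's bit differs from the sender's."""
    errors = sifted = 0
    for _ in range(n_photons):
        alice_bit = random.randint(0, 1)
        prep_basis = alice_basis = random.randint(0, 1)
        bit = alice_bit
        if eavesdrop:
            eve_basis = random.randint(0, 1)
            bit = measure(bit, prep_basis, eve_basis)   # Eve's measurement disturbs the photon
            prep_basis = eve_basis                      # ...and she re-sends it in her own basis
        bob_basis = random.randint(0, 1)
        bob_bit = measure(bit, prep_basis, bob_basis)
        if bob_basis == alice_basis:                    # keep only matching-basis bits (sifting)
            sifted += 1
            errors += bob_bit != alice_bit
    return errors / sifted

print("error rate, no eavesdropper:  ", round(error_rate(eavesdrop=False), 3))  # ~0.0
print("error rate, with eavesdropper:", round(error_rate(eavesdrop=True), 3))   # ~0.25
```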

Monday, October 27, 2008

WSN using CNT... just have a look at the proposal

The combination of recent technological advances in electronics, nanotechnology, wireless communications, computing, networking, and robotics has enabled the development of Wireless Sensor Networks (WSNs), a new form of distributed computing where sensors (tiny, low-cost and low-power nodes, colloquially referred to as "motes") deployed in the environment communicate wirelessly to gather and report information about physical phenomena. WSNs have been successfully used for a variety of applications, such as environmental monitoring, object and event detection, or military surveillance. The increasing expectations and demands for greater functionality and capabilities from these devices often result in greater software complexity for applications. Sensor and actor programming is a tedious, error-prone task usually carried out from scratch. In order to facilitate the software development of this kind of system, new tools and methodologies must be proposed.
Middleware can simplify the creation and configuration of wireless sensor and actor network (WSAN) applications by offering a set of high-level services. The component-oriented paradigm seems to be a good alternative for defining middleware. Software components offer several features (reusability, adaptability, ...) very suitable for WSANs, which are dynamic environments with rapidly changing situations. Moreover, there is an increasing need to abstract and encapsulate the different middleware and protocols used to perform the interactions between nodes. However, the classical designs of component models and architectures (.NET, COM, JavaBeans) either suffer from extensive resource demands (memory, communication bandwidth, ...) or dependencies on the operating system, protocol or middleware. For this reason we propose to use UM-RTCOM, a previous development focused on software components for real-time distributed systems, adapted to the unique characteristics of WSANs. The model is platform independent and uses light-weight components, which makes the approach very appropriate for this kind of system. In addition, UM-RTCOM uses an abstract model of the components which allows different analyses to be performed, including real-time analysis.
In order to facilitate the development of WSAN applications we propose a novel middleware called MWSAN. It is specified with UM-RTCOM, allowing us to define real-time characteristics such as timing requirements (priority, periods, ...) in the services and thereby improve the temporal behavior of applications, attending to the most critical events first. MWSAN is composed of several components that provide a set of primitives related to sensor/actor interconnection, quality of service (QoS) and actor coordination. These issues are very important and necessary, and we think that high-level primitives will simplify the work of designers. The middleware is highly configurable in order to fit in with the limitations of sensors and actors. For example, the middleware for motes does not include the component for actor coordination, thereby reducing the required memory space. In order to implement our proposals, automatic tools will map the UM-RTCOM specification of MWSAN to RT Java source code using an implementation called jRate for actors. jRate allows us to meet the timing specifications of UM-RTCOM, implementing a priority-based application where the highest-priority events received by actors will be executed first. In the case of motes, due to the limited resources, tools will map MWSAN to nesC, a component language for sensors on TinyOS.
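As a hypothetical illustration of the "most critical events first" behavior described above (this is not UM-RTCOM, MWSAN, or jRate code), a priority-ordered event queue for an actor might look like this:

```python
# Illustrative only: a priority-ordered event queue of the kind described for
# actors (the highest-priority pending event is handled first). This is a
# hypothetical sketch, not the UM-RTCOM / MWSAN / jRate implementation.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Event:
    priority: int                       # lower number = more critical
    seq: int                            # tie-breaker keeps FIFO order within a priority
    description: str = field(compare=False)

class ActorEventQueue:
    def __init__(self):
        self._heap, self._seq = [], 0

    def post(self, priority: int, description: str) -> None:
        heapq.heappush(self._heap, Event(priority, self._seq, description))
        self._seq += 1

    def dispatch_all(self) -> None:
        while self._heap:
            event = heapq.heappop(self._heap)
            print(f"handling (priority {event.priority}): {event.description}")

queue = ActorEventQueue()
queue.post(5, "periodic temperature report")
queue.post(1, "fire alarm from sensor cluster 3")   # most critical, served first
queue.post(3, "low-battery warning")
queue.dispatch_all()
```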
Some approaches have incorporated the component-oriented paradigm to deal with WSAN programming. The most widely used is possibly nesC, an event-driven component-oriented programming language for networked embedded systems. Our proposal, UM-RTCOM, presents a higher-level concurrency model and a higher-level shared resource access model. In addition, real-time requirements of WSAN applications can be directly supported by means of the model primitives and the abstract model provided, reflecting the component behavior, which allows real-time analysis to be performed. Other languages such as Esterel, Signal, and Lustre target embedded, hard real-time control systems with strong time guarantees, but they are not general-purpose programming languages. Much work has targeted the development of coordination models and the supporting middleware in the effort to meet the challenges of WSNs. However, since the above-listed requirements impose stricter constraints, they may not be suitable for application to WSANs. UM-RTCOM has also been used to specify a coordination model based on tuple channels; the use of this component model allowed us to include real-time characteristics in the tuple channels, improving application performance.
Operational Setting
Our reference operational setting is based on an architecture where there is a dense deployment of stationary sensors forming clusters, each one governed by a (possibly mobile) actor. Communication between a cluster actor and the sensors is carried out in a single-hop way. Although single-hop communication is inefficient in WSNs due to the long distance between sensors and the base station, in WSANs this may not be the case, because actors are close to sensors. Our approach, however, does not preclude the possibility of having one of these sensors acting as the "root" of a sensor "sub-network" whose members communicate in a multi-hop way. This root sensor can then send not only its own measurements to the actor but also the information collected from the multi-hop sub-network. On the other hand, several sensors may form a redundancy group inside a cluster. The zone of the cluster monitored by a redundancy group will still be covered in spite of failures or sleeping periods of the group members. Several actors may form a (super-)cluster which is governed by one of them, the so-called cluster leader actor. It takes centralized decisions related to task assignment or QoS for the actors. Moreover, clustering together with the single-hop communication scheme minimizes the event-transmission time from sensors to actors, which helps to support the real-time communication required in WSANs.

Wednesday, October 1, 2008

Three-Dimensional Electrostatic Effects of CNTs - my research work

Single-wall carbon nanotubes (CNTs) are of great interest for future electron device applications because of their excellent electrical properties. Electrostatics are an important factor in transistor performance and therefore need to be carefully studied. It is well known that the electrostatics of CNT devices can be significantly different from bulk devices due to the one-dimensional (1-D) channel geometry. Previous theoretical studies of CNTFET electrostatics assumed a coaxial geometry. These studies thoroughly described the role of the two-dimensional (2-D) coaxial environment on the electrostatics of the 1-D channel. The coaxial geometry provides good gate control with subthreshold swings very close to 60 mV/decade, and the oxide thickness and dielectric constant both play an important role by determining the gate capacitance of the device. The contact geometry also plays an important role in the transfer of charge from the metal contact to the CNT. For low Schottky barriers (SB), a large charge transfer can be achieved if the contact has a large surface area and the oxide dielectric constant is large. Thin oxides and small contact areas can achieve very short electrostatic scaling lengths (the distance by which the source and drain fields penetrate into the channel) and therefore good electrostatic behavior.
While a coaxial geometry provides important qualitative insights into the behavior of experimental devices (which typically have a planar top- or bottom-gated geometry), a full three-dimensional (3-D) treatment of planar devices is needed. In this paper, we performed a careful study of the 3-D electrostatics of planar-gated CNTFETs. Our objective is to provide both qualitative and quantitative insights valid for realistic devices. Several of the results are analogous to those that occur in a coaxial geometry, but we show quantitatively how they play out in a realistic planar geometry. The results should be useful for interpreting experiments and for designing high-performance CNTFETs.
Among the several techniques available to treat 3-D electrostatics, we find the method of moments (also known as the boundary element method) well suited for simulating planar-gate CNTFETs. This method is computationally inexpensive because it uses grid points only on the surfaces where charge exists and not in the entire 3-D domain. The computational domain is thus reduced to only the important device regions: the channel, the contacts, and the gate. The problem of boundary conditions in the open areas and termination of the simulation domain does not appear at all. Since all the boundary elements are assumed to be point charges, the electrostatic potential of the system decays to zero at infinity, where both the potential and electric field are zero. In this way, the method of moments inherently assumes a zero-field boundary condition as the distance from the device tends to infinity, which facilitates the simulation of devices with electrostatically open boundaries (e.g., the back-gated CNTFET).
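A heavily simplified sketch of the method-of-moments idea, assuming point charges only on two small electrode surfaces and a bare Coulomb kernel; the geometry, voltages, and regularization are invented for illustration, and this is not the 3-D CNTFET solver used in the work described above.

```python
# Highly simplified method-of-moments illustration: discretize two conductor
# surfaces into point charges, impose the known electrode potentials, and solve
# for the charges. Only the surfaces carry unknowns -- no 3-D volume grid.
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

# Boundary "elements": points on a gate electrode (held at 1 V) and on a
# grounded contact (0 V). Geometry is invented for illustration.
gate    = np.array([[x * 1e-9, 0.0, 0.0] for x in range(10)])     # 10 points along x
contact = np.array([[x * 1e-9, 20e-9, 0.0] for x in range(10)])   # 20 nm away
points  = np.vstack([gate, contact])
v_known = np.concatenate([np.ones(len(gate)), np.zeros(len(contact))])

# Coulomb kernel: V_i = sum_j q_j / (4*pi*eps0*|r_i - r_j|). Self-terms use a
# small effective "radius" so the diagonal stays finite (a crude regularization).
dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
np.fill_diagonal(dist, 0.5e-9)
kernel = 1.0 / (4 * np.pi * EPS0 * dist)

charges = np.linalg.solve(kernel, v_known)   # solve K q = V for the unknown charges
print("total charge on the gate electrode: %.3e C" % charges[:len(gate)].sum())
```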

We examine the effect of the oxide thickness, the oxide dielectric constant, and the contact geometry for two different device geometries, the bottom-gated (BG) and top-gated (TG) devices shown in Fig. 1. We will show that for a CNTFET with either of these two geometries, the scaling length is mostly determined by the gate oxide thickness. The geometry of the source and drain contacts can also play an important role. The BG device is more sensitive to the contact width than to the contact height, because a wider contact more effectively screens the gate field and prevents it from terminating on the CNT. We will also show that high-k dielectric materials do not offer a significant advantage for the BG device because the oxide thickness plays the dominant role. Both the effects of contact height and high-k materials are, however, more pronounced for the TG device, where the high-k dielectric in the upper region increases not only the gate-to-CNT coupling, but also the contact-to-CNT coupling and the contact-to-gate parasitics. When the contacts are thick and can screen the gate field and prevent it from terminating on the channel, high-k dielectrics can actually degrade the electrostatic performance of the device. A careful geometry optimization for the contacts of planar short-channel devices is, therefore, important. A properly designed TG device with a very thin gate oxide can provide near-ideal subthreshold behavior, similar to what is predicted for coaxial geometries. Finally, we find that one-dimensional "needle-like" contacts offer the best electrostatic performance. In practice, however, this advantage would have to be balanced against the increased series resistance.

WSN and nanotechnology

In recent years, advances in miniaturization, low-power circuit design, simple and reasonably efficient wireless communication equipment, and improved small-scale energy supplies have combined with reduced manufacturing costs to make a new technological vision possible: wireless sensor networks. These networks combine simple wireless communication infrastructure, minimal computation facilities, and minimally invasive sensors to form a network that can be deeply embedded in our physical environment to create an information world. Typical sensing tasks for such a device include temperature, light, vibration, sound, and radiation. The desired size would be a few cubic millimeters or even smaller. The target price should be less than US$1, including radio front end, microcontroller, power supply, and the actual sensor. All these components are integrated together in a single device to form a "sensor node." While these networks of sensor nodes share many commonalities with existing ad hoc network concepts, there are also a number of fundamental differences and specific challenges. While designing and deploying a wireless sensor network, we found several major limitations that must be addressed.

BACKGROUND AND APPLICATIONS

Wireless sensor networks have benefited from advances in both microelectromechanical systems (MEMS) and networking technologies. Such environments may have many inexpensive wireless nodes, each capable of collecting, storing, and processing environmental information, and communicating with neighboring nodes. A sensor node is made up of four basic components, as shown in Fig. 1, namely 1) a "sensing unit," 2) a "processing unit," 3) a "transceiver unit," and 4) a "power unit." Nodes may also have additional application-dependent components such as a "location finding system," "power generator," and "mobilizer."
Sensing units are usually composed of two subunits, namely 1) sensors and 2) analog-to-digital converters (ADCs). The analog signals produced by the sensors in response to the observed phenomenon are converted to digital signals by the ADC and then fed into the processing unit. The processing unit, which is generally associated with a small storage unit, manages the procedures that make the sensor node collaborate with the other nodes to carry out the assigned sensing tasks. A transceiver unit connects the node to the network. One of the most important components of a sensor node is the power unit. Power units may be supported by power-scavenging units such as solar cells. There are also other subunits that are application dependent. Most sensor network routing techniques and sensing tasks require knowledge of location with high accuracy, so it is common for a sensor node to have a location finding system. A mobilizer may sometimes be needed to move sensor nodes when required to carry out their assigned tasks. All of these subunits may need to fit into a matchbox-sized module or even smaller.
In the past, sensors were connected by wire lines. Today, this environment is combined with novel ad hoc networking and wireless technologies to facilitate intersensor communication, which greatly improves the flexibility of installing and configuring a sensor network. Sensor nodes coordinate among themselves to produce high-quality information about the physical environment. A base station (the sink) may be a fixed node or a mobile node capable of connecting the sensor network to an existing communications infrastructure or to the Internet, where a user can access the reported data.
Networking unattended sensor nodes may have profound effects on the efficiency of many military and civil applications such as target field imaging, intrusion detection, weather monitoring, security, tactical surveillance, and distributed computing; on detecting ambient conditions such as temperature, movement, sound, and light or the presence of certain objects; and on inventory control and disaster management. Deployment of a sensor network in these applications can be random (e.g., dropped from an airplane) or planned manually (e.g., fire alarm sensors in a facility). For example, in a disaster management application, a large number of sensors can be dropped from a helicopter. These networked sensors can assist rescue operations by locating survivors, identifying risky areas, and making the rescue team more aware of the overall situation in a disaster area such as a tsunami or earthquake. Sensor nodes are densely deployed in close proximity to, or embedded within, the medium to be observed. Therefore, they usually work unattended in remote geographic areas.
They may be working in the interior of large machinery, at the bottom of an ocean, in a biologically or chemically contaminated field, in a battlefield beyond enemy lines, or in a home or large building, often continuously. Driven by all these exciting and demanding applications of wireless sensor networks, several critical requirements have been actively addressed from the network perspective to support viable deployments, as follows:

• long operational lifetime;
• noninvasive form factor;
• optimal sensing coverage and connectivity;
• high sensing resolution.
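To tie the node architecture described above to these constraints, here is a minimal, hypothetical sketch of the sense-convert-process-transmit path of a single node; all class names, the ADC resolution, and the energy numbers are assumptions for illustration only.

```python
# Minimal sketch of the sensor-node data path described above: sense an analog
# value, convert it with an ADC, do a little local bookkeeping, then hand the
# reading to the transceiver. All names and numbers are hypothetical.
import random

class SensingUnit:
    def read_analog(self) -> float:
        return 20.0 + random.uniform(-2.0, 2.0)      # fake temperature reading

class ADC:
    def __init__(self, bits: int = 10, v_min: float = 0.0, v_max: float = 50.0):
        self.levels, self.v_min, self.v_max = 2 ** bits, v_min, v_max

    def convert(self, analog: float) -> int:
        frac = (analog - self.v_min) / (self.v_max - self.v_min)
        return max(0, min(self.levels - 1, int(frac * (self.levels - 1))))

class Transceiver:
    def send(self, payload: dict) -> None:
        print("radio ->", payload)                   # stand-in for an RF link

class SensorNode:
    def __init__(self, node_id: str):
        self.node_id = node_id
        self.sensor, self.adc, self.radio = SensingUnit(), ADC(), Transceiver()
        self.battery_mj = 1000.0                     # crude power-unit bookkeeping

    def duty_cycle(self) -> None:
        sample = self.adc.convert(self.sensor.read_analog())
        self.battery_mj -= 0.5                       # assumed cost of sense + transmit
        self.radio.send({"node": self.node_id, "sample": sample,
                         "battery_mj": round(self.battery_mj, 1)})

node = SensorNode("mote-01")
for _ in range(3):
    node.duty_cycle()
```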

Thursday, September 4, 2008

Nanotechnology Products

You might be surprised to find out how many products on the market are already benefiting from nanotechnology.

  • Sunscreen - Many sunscreens contain nanoparticles of zinc oxide or titanium oxide. Older sunscreen formulas use larger particles, which is what gives most sunscreens their whitish color. Smaller particles are less visible, meaning that when you rub the sunscreen into your skin, it doesn't give you a whitish tinge.
  • Self-cleaning glass - A company called Pilkington offers a product they call Activ Glass, which uses nanoparticles to make the glass photocatalytic and hydrophilic. The photocatalytic effect means that when UV radiation from light hits the glass, nanoparticles become energized and begin to break down and loosen organic molecules on the glass (in other words, dirt). Hydrophilic means that when water makes contact with the glass, it spreads across the glass evenly, which helps wash the glass clean.
  • Clothing - Scientists are using nanoparticles to enhance your clothing. By coating fabrics with a thin layer of zinc oxide nanoparticles, manufacturers can create clothes that give better protection from UV radiation. Some clothes have nanoparticles in the form of little hairs or whiskers that help repel water and other materials, making the clothing stain-resistant.
  • Scratch-resistant coatings - Engineers discovered that adding aluminum silicate nanoparticles to scratch-resistant polymer coatings made the coatings more effective, increasing resistance to chipping and scratching. Scratch-resistant coatings are common on everything from cars to eyeglass lenses.
  • Antimicrobial bandages - Scientist Robert Burrell created a process to manufacture antibacterial bandages using nanoparticles of silver. Silver ions block microbes' cellular respiration. In other words, silver smothers harmful cells, killing them.
  • Swimming pool cleaners and disinfectants - EnviroSystems, Inc. developed a mixture (called a nanoemulsion) of nano-sized oil drops mixed with a bactericide. The oil particles adhere to bacteria, making the delivery of the bactericide more efficient and effective.

Sunday, August 31, 2008

My IDEA and DESIGN about Micromirror

Micro-Electro-Mechanical Systems, or MEMS, are integrated micro devices or systems combining electrical and mechanical components. They are fabricated using integrated circuit (IC) batch processing techniques and can range in size from micrometers to millimeters. These systems can sense, control and actuate on the micro scale, and function individually or in arrays to generate effects on the macro scale. This design presents an overview of MEMS technology with emphasis on optical applications. Applications of MEMS devices span many fields, including automotive transducers, biomedical technologies, communication systems, robotics, aerospace, micro-optics, and industrial sensors and actuators.
The applications of MEMS in optics include display systems, optical switching, optical communication, optical data storage, optical processing and interconnection, and adaptive optics.

This design focuses on the state-of-the-art of technologies for the design, fabrication and applications of MEMS micro-mirrors. I will discuss the major design issues, considerations, calculations and restrictions of micro mirrors. The materials employed for the reflective surfaces are described along with their properties. The materials used for the static and dynamic MEMS micro structures as well as the different configurations are also presented and summarized. The two most widely used techniques for actuation, namely, Electrostatic and Magnetic are presented along with the formulas, tables and curves used to design the movable structures. The different options of fabrication processes are presented and discussed. Finally, the applications of micro mirrors are described.

Micro mirrors are the centerpieces of MEMS optical switches. They are tiny mirrors fabricated in silicon using MEMS technology. The switching function is performed by changing the position of a micro mirror to deflect an incoming light beam into the appropriate outgoing optical fiber. The three important properties of micro mirrors are reflectivity, light transmission, and surface roughness. Coating its surface with metal can increase the reflectivity of a micro mirror.

MIRRORS are an important component in an optical system. Mirrors are found in almost every system that makes use of light sources and lenses. Their ability to reflect light and change its direction of propagation has been used for several centuries. Mirrors in the macroscopic world are very well understood and present no major challenges in their design and calculation. However, when the size of the mirror is on the order of a few microns, several issues and challenges come into play. The advances of MEMS technology in the last decade have made it possible to design and fabricate micro mirrors that are actually used in real industrial and commercial applications. MEMS micro-mirrors are used today in fiber optic communications as switching devices, in digital displays and projectors, in laser scanners, printers and barcode readers, and in bioengineering for laser surgery.

The area in which MEMS has seen one of its biggest commercial breakthroughs lately is micromachined mirrors for optical switching in both fiber optic communications and data storage applications. Optical switches are to optical communications what transistors are to electronic signaling. What makes single-crystal Si attractive in this case is the optical quality of the Si surface. The quality of the mirror surface is paramount to obtaining very low insertion loss, even after multiple reflections. Wet bulk micromachining has an advantage here over deep reactive ion etching (DRIE), as the latter leaves lossier Si mirror surfaces due to the inevitable ripples.

In a MEMS optical switch, the optical signals passing through the optical fibers at the input port are switched independently by gimbal-mounted MEMS mirrors with two-axis tilt control and then focused onto the optical fibers at the output ports. In the switch, any connection between input and output fibers can be accomplished by controlling the tilt angle of each mirror. As a result, the switch can handle several channels of optical signals directly, without costly optical-electrical or electrical-optical conversion.

MEMS micro-mirrors can be used in making optical sensors and displays, both of which involve controlling and directing beams of light. Today, information is transferred to people from electronic devices through display technologies like the Cathode Ray Tube (CRT) and the Liquid Crystal Display (LCD). In the future, MEMS-based micro-mirror arrays are a likely candidate to replace them as the dominant display technology, due to the low cost and high performance of the micro-mirrors. Furthermore, because similar processes and facilities are used in the fabrication of MEMS micro-mirrors, it is relatively easy to integrate them with their controlling IC chip onto a single silicon substrate.

I have used the angle of the rigid mirror to control the location of a reflected beam of light. This allows the mirror to act as an optical switch for optical fiber networks. The mirror is made of a material with sufficient stiffness to prevent bending. It is attached to a thin beam that is anchored in place at the opposite end; the beam is free to twist and acts as a torsion spring. In this design, the mirror is attached to two beams, one on each side, rather than a single beam. Thermopneumatic, thermoelastic, electrostatic, magnetic, or piezoelectric forces may be used to make the mirror move. Here, the use of electrostatic force is envisioned. A pair of electrodes resides below the mirror. When a potential difference is applied between either of the electrodes and the mirror, a torque is exerted, causing the desired rotation.

When electrostatic forces are used to actuate the system, the range of operation is limited by the pull-in instability. The device is actuated by applying a potential difference between the mirror and one of the ground electrodes. This causes a torque on the system, which is countered by the effect of a torsion spring. For this system, the pull-in instability occurs once rotation through a critical angle has taken place. That is, the range of angular motion is limited by the pull-in instability.
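A crude numerical sketch of this pull-in behavior, using a lumped one-degree-of-freedom model in which the electrostatic torque grows as V squared over the square of the remaining gap angle; the stiffness, gap, and coupling constants are invented, so the voltages and angles printed are illustrative and not a real device calculation.

```python
# Crude 1-DOF sketch of electrostatic pull-in for a torsion mirror. The torque
# model (electrostatic torque ~ V^2 / (gap_angle - theta)^2) and every constant
# below are simplified stand-ins, not the design equations of a real device.
import numpy as np

K_SPRING = 1e-9        # torsional stiffness, N*m/rad (assumed)
GAP_ANGLE = 0.2        # rotation (rad) at which the mirror would touch the electrode (assumed)
COUPLING = 1.3e-15     # lumped electrostatic constant (assumed)

def net_torque(theta, volts):
    """Electrostatic torque pulling the mirror down minus the restoring spring torque."""
    return COUPLING * volts**2 / (GAP_ANGLE - theta) ** 2 - K_SPRING * theta

def stable_equilibrium(volts):
    """Return the stable tilt angle at this voltage, or None once pull-in has occurred."""
    thetas = np.linspace(0.0, GAP_ANGLE * 0.999, 20000)
    torque = net_torque(thetas, volts)
    # A stable equilibrium is where the net torque crosses from >= 0 to < 0.
    crossings = np.where((torque[:-1] >= 0) & (torque[1:] < 0))[0]
    return thetas[crossings[0]] if crossings.size else None

for v in np.arange(0.0, 45.1, 5.0):
    eq = stable_equilibrium(v)
    status = f"tilt = {np.degrees(eq):.2f} deg" if eq is not None else "PULL-IN (no stable angle)"
    print(f"V = {v:4.1f} V -> {status}")
```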

Micro-mirror arrays are currently being developed by a number of companies for optical switching. To bring these products to the market quickly, a rapid development cycle is needed, leaving little time for multiple fabrication runs. CAD tools for MEMS can reduce the number of fabrication iterations run by allowing prototyping to occur within the virtual environment. Here, many models can be tested quickly and the design can be optimized prior to initial fabrication.

Modeling of micro-mirrors requires unique simulation capabilities beyond those required for traditional MEMS devices. First, the device itself has unique features which require that CAD tools offer additional capabilities beyond typical electro-mechanical analysis. Second, the entire array must be studied, from both a fabrication and an optical perspective. The output of a MEMS CAD tool is fed into an optical ray-tracing program to evaluate the optical system performance of the complete MEMS micro-mirror and to provide non-sequential ray-tracing of a micro-mirror array.

Friday, August 22, 2008

Nanotechnology Investments and Insurance

Because this is an emerging field, it requires a great deal of investment, and investors face the risk that their investments may fail to produce any fruitful output. This gives rise to two related terms: nanotechnology investment and nanotechnology insurance. Investment and insurance are separate matters, but the main point is that fund managers must make sure investors are not left exposed when there is a significant chance that an investment may go under. Some investors therefore insure their funds, which secures them to some extent, so that any losses that do occur are not critical for them. Insurance in this field is thus quite important, and nanotechnology investment is as important as nanotechnology insurance.

Wednesday, August 13, 2008

MEMS and Nanotechnology Applications

There are numerous possible applications for MEMS and Nanotechnology. As a breakthrough technology, allowing unparalleled synergy between previously unrelated fields such as biology and microelectronics, many new MEMS and Nanotechnology applications will emerge, expanding beyond that which is currently identified or known. Here are a few applications of current interest:

Biotechnology

MEMS and Nanotechnology is enabling new discoveries in science and engineering such as the Polymerase Chain Reaction (PCR) microsystems for DNA amplification and identification, micromachined Scanning Tunneling Microscopes (STMs), biochips for detection of hazardous chemical and biological agents, and microsystems for high-throughput drug screening and selection.

Communications

High frequency circuits will benefit considerably from the advent of the RF-MEMS technology. Electrical components such as inductors and tunable capacitors can be improved significantly compared to their integrated counterparts if they are made using MEMS and Nanotechnology. With the integration of such components, the performance of communication circuits will improve, while the total circuit area, power consumption and cost will be reduced. In addition, the mechanical switch, as developed by several research groups, is a key component with huge potential in various microwave circuits. The demonstrated samples of mechanical switches have quality factors much higher than anything previously available.

Reliability and packaging of RF-MEMS components seem to be the two critical issues that need to be solved before they receive wider acceptance by the market.

Accelerometers

MEMS accelerometers are quickly replacing conventional accelerometers for crash air-bag deployment systems in automobiles. The conventional approach uses several bulky accelerometers made of discrete components mounted in the front of the car with separate electronics near the air-bag; this approach costs over $50 per automobile. MEMS and Nanotechnology has made it possible to integrate the accelerometer and electronics onto a single silicon chip at a cost between $5 to $10. These MEMS accelerometers are much smaller, more functional, lighter, more reliable, and are produced for a fraction of the cost of the conventional macroscale accelerometer elements.

Thursday, August 7, 2008

Applications of nanotechnology-Energy

Energy

The majority of the world’s energy comes from fossil fuels – primarily coal, oil, and natural gas. All three were formed on Earth about 360 million years ago during the Carboniferous Period and long before the age of the dinosaurs.

We rely on fossil fuels for much more than gasoline to power our cars. For example, tremendous amounts of oil are required to produce all plastics, all computers and high tech devices. According to the American Chemical Society, it takes 3.5 pounds of fossil fuels to make a single 32 megabyte DRAM computer chip, and the construction of a single desktop computer consumes ten times its weight in fossil fuels. Our food is produced by high-tech, oil-powered industrial methods of agriculture, and in the US each piece of food travels about 1,500 miles before it reaches the grocery store. Pesticides are made from oil, and commercial fertilizers are made from ammonia, which is made from natural gas. Fossil fuels are needed to make many medical devices and supplies such as life-support systems, anesthesia bags, catheters, dishes, drains, gloves, heart valves, needles, syringes, and tubes.

There is a limited supply of fossil fuels and they are nonrenewable. Today, we are using fossil fuels faster than we are finding them. In fact, the Oil Depletion Analysis Center (ODAC) predicts that in the near future the demand for fossil fuels will far exceed the Earth’s supply.

Researchers are exploring ways in which nanotechnology could help us accomplish the following two goals:

(1) Access and use fossil fuels much more efficiently so that we can get more energy out of current reserves.

(2) Develop new ways to generate energy.

One example of processes being developed to use fossil fuels more efficiently is the current research to design zeolite catalysts at the nanoscale.

A natural zeolite collected in 2001.

A zeolite is an inorganic porous material which works as a kind of sieve - allowing some molecules to pass through while excluding or breaking down others. Zeolites can be either natural or synthetic, and new zeolites are still being discovered and invented. In 1960, Charles Plank and Edward Rosinski developed a process to use zeolites to speed up chemical reactions. Plank and Rosinski’s process used zeolites to break down petroleum into gasoline more quickly and efficiently. Today, researchers are working to design zeolite catalysts at the nanoscale. By adjusting the size of the zeolite pores on the nanoscale, they can control the size and shape of molecules that can enter. In the case of gasoline production, this technique could mean that we would get more and cleaner gasoline from every barrel of oil.

New Energy Producers

Solar Power

Researchers around the world are working on the development of new energy sources. What are the alternatives, and what role could nanotechnology play? We already generate energy through hydropower (i.e., water); dams have been built on most of the major waterways. Energy generated by wind turbines could help to lessen some of our dependence on fossil fuels. Although wind turbines are in use today, their energy output is far less than what is needed. Harnessing the sun's rays to make solar power is another alternative, but with current technology, solar panels could only make a limited impact.

Of these three methods, solar energy holds the most promise. Right now you need silicon wafers to make solar panels, and silicon wafers, like computer chips, require a large amount of fossil fuels for production. However, researchers like Paul Alivisatos at the University of California, Berkeley hope to use nanotechnology to develop nano solar cells that would be far less energy-intensive and far less expensive to make. Researchers at the University of Toronto are using nanotechnology to develop solar panels capable of harnessing not only the visible light from the sun, but the infrared spectrum as well, thus doubling the energy output. What's more, these new solar cells could be sprayed on surfaces like paint, making them highly portable. Researchers at Rice University want to take solar energy research even further. They hope to someday build a solar power station in space capable of catching the solar energy that bypasses the Earth every day and providing about nine times the efficiency of solar cells on Earth.

In these and other energy-producing advances, nanotechnology will play a critical role.

New Energy Producers II

Fuel Cells

Another area where nanotechnology is expected to play a major role is in the development of new fuel cells.

The first fuel cell (or "gas battery") was invented by William Robert Grove in 1839, only 39 years after Alessandro Volta invented the battery (the voltaic cell). Unfortunately, the materials that Grove used to make his fuel cell were unstable and the technology was unusable. One hundred and twenty years later, NASA revived fuel cell technology with new materials and used it on manned space flights. These new fuel cells were quiet, reliable, and clean, and produced water as a by-product. It was an ideal scenario. The fuel cells produced both power and drinking water for the astronauts.

Researchers around the world are using nanotechnology to make new fuel cell membranes that would substantially increase the energy output. Nanomaterials are being developed to take the place of the highly expensive platinum parts in current fuel cells, and nanotubes hold promise as hydrogen delivery systems.

All of this research could someday pave the way for better energy solutions.

Nanotech Instruments

Molecules with more than 10 atoms fall into the nanometer range. Scientists were not able to visualize individual molecules and nanoscale objects until the invention of some very powerful microscopes. These microscopes don’t use light to create an image of a nanoscale object.

Believe it or not, waves of light are too large – visible light has a wavelength between 400 and 750 nanometers. That’s much larger than many nanoscale objects and definitely larger than most molecules. Instead the specialized microscopes use very small probes or electrons.

With these microscopes, a very small, very sharp tip on the end of a lever is dragged across a nanoscale object. The movement of the lever is monitored with a computer, which creates an image. The method is much like a person moving their fingers over words written in Braille to read. This tip-scanning method is known as Scanning Probe Microscopy (SPM). Electron microscopes are similar to light microscopes except instead of directing light to a sample, they direct electrons to the sample.
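A toy sketch of that tip-scanning idea: step a probe across a synthetic surface row by row, record a "deflection" reading at each point, and the resulting grid of readings becomes the image. The surface function and scan size are invented for illustration.

```python
# Toy illustration of scanning-probe imaging: step a "tip" across a synthetic
# surface row by row, record the height (lever deflection) at each point, and
# the resulting grid of numbers is the image. Purely illustrative.
import numpy as np

def surface_height(x: float, y: float) -> float:
    """A made-up nanoscale surface: a gentle ripple plus one bump (a 'molecule')."""
    ripple = 0.2 * np.sin(x / 5.0) * np.cos(y / 5.0)
    bump = 2.0 * np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / 40.0)
    return ripple + bump                              # nanometres

SIZE = 64                                             # 64 x 64 pixel scan
image = np.zeros((SIZE, SIZE))
for row in range(SIZE):                               # slow scan axis
    for col in range(SIZE):                           # fast scan axis
        image[row, col] = surface_height(col, row)    # one "deflection" sample

print(f"scan complete: {image.shape[0]}x{image.shape[1]} pixels, "
      f"height range {image.min():.2f} to {image.max():.2f} nm")
```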

What Is Nano?

Over the past decade a new term has entered our vocabulary and that word is “nano.” We hear the word in movies. It’s mentioned on television and in newspapers and magazines. Futurists say it will pave the way for unimaginable new possibilities. Pessimists are unsure.

There are many different opinions about where this new field will take us, but everyone agrees that this science and the new technologies that come from it have the possibility of significantly impacting our world.



Thursday, July 24, 2008

Reliability and Statistics

Reliability and statistics are among the major areas to be studied while working with MEMS, since to date little can be said with certainty about the reliability of MEMS devices.

MEMS MICROMIRROR

This is an image of a MEMS-based micromirror.
MEMS micromirrors can be used in a variety of applications, including optical switching in fiber optic networks, maskless EUV lithography, adaptive optics, and projection display devices. For telecommunications and EUV lithography applications, micromirrors will need two degrees of freedom (2-DOF) and must be actuated to extremely precise positions. Micromirror prototypes publicized to date do not include advanced control or features to guarantee that these precision requirements will be satisfied. A great deal of R&D is therefore under way to design 2-DOF micromirrors that offer simpler dynamics (less coupling due to electrostatic interference), which will aid in eventually implementing a feedback control system.

We are investigating such 2-DOF micromirror designs ourselves; the designs are modeled in FEM software and fabricated by a two-structural-polysilicon-layer micromachining process for testing in our laboratory.

MEMS (Micro-Electro-Mechanical Systems) is a growing market with the capability of improving many technologies by simplification and cost reduction. By coupling MEMS with the available and developing integrated circuit (IC) technology, microscopic devices can be built on silicon wafers with on-chip controls. The applications for MEMS can be found in many fields from biomedical to defense. MEMS devices including sensors and actuators such as accelerometers, pressure sensors, pollution monitoring devices, gyroscopes, and internal drug-delivery systems have been created using micrometer sized components. MEMS micromirrors have been developed for use in projection display devices and adaptive optics for telescopes. The proposed research will investigate the use of optical MEMS, specifically micromirrors, for telecommunication applications.