Saturday, November 29, 2008

DLP technology

DLP technology is based on an optical semiconductor called a Digital Micromirror Device (DMD), which uses mirrors made of aluminum to reflect light and form the picture. The DMD is often referred to as the DLP chip. The chip can be held in the palm of your hand, yet it can contain more than 2 million mirrors, each measuring less than one-fifth the width of a human hair. The mirrors are laid out in a matrix, much like a photo mosaic, with each mirror representing one pixel.
When you look closely at a photo mosaic, each piece of it holds a tiny, square photograph. As you step away from the mosaic, the images blend together to create one large image. The same concept applies to DMDs. If you looked closely at a DMD, you would see tiny square mirrors that reflect light instead of tiny photographs. From far away (or when the light is projected on the screen), you would see a picture.
The number of mirrors corresponds to the resolution of the screen. DLP 1080p technology delivers more than 2 million pixels for true 1920x1080p resolution, the highest available.
In addition to the mirrors, the DMD unit includes:
- A CMOS DDR SRAM chip, a memory cell that electrostatically causes the mirror to tilt to the on or off position, depending on its logic value (0 or 1)
- A heat sink
- An optical window, which allows light to pass through while protecting the mirrors from dust and debris
DLP Projection
Before any of the mirrors switch to their on or off positions, the chip rapidly decodes a bit-streamed image code that enters through the semiconductor. It then converts the data from interlaced to progressive scanning, allowing the picture to fade in. Next, the chip sizes the picture to fit the screen and makes any necessary adjustments to the picture, including brightness, sharpness and color quality. Finally, it relays all the information to the mirrors, completing the whole process in just 16 microseconds.
The mirrors are mounted on tiny hinges that enable them to tilt up to +/- 12°, either toward the light source (on) or away from it (off), as often as 5,000 times per second. When a mirror is switched on more often than off, it creates a light gray pixel; when it is off more often than on, the pixel is a darker gray.
The light they reflect is directed through a lens and onto the screen, creating an image. The mirrors can reflect pixels in up to 1,024 shades of gray, converting the video or graphic signal entering the DLP into a highly detailed grayscale image. DLPs also produce the deepest black levels of any projection technology, achieved by holding mirrors in the off position.
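The gray shades come from how much of each frame a mirror spends in the on position. As a rough illustrative sketch (the slot count is an assumption, not the chip's real timing), a 10-bit gray level maps to an on/off schedule like this:

```python
# Illustrative only: map a 10-bit gray level (0-1023) to mirror on-time
# within one frame, assuming 1,024 possible mirror flips per frame.

FLIPS_PER_FRAME = 1024  # assumed number of on/off slots in a frame

def mirror_schedule(gray_level):
    """Return how many slots the mirror spends on vs. off in one frame."""
    if not 0 <= gray_level <= 1023:
        raise ValueError("gray level must be in 0..1023")
    on_slots = gray_level               # brighter pixel -> more time on
    off_slots = FLIPS_PER_FRAME - on_slots
    return on_slots, off_slots

# A mid-gray pixel spends roughly half the frame on:
print(mirror_schedule(512))   # (512, 512)
```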
To add color to that image, the white light from the lamp passes through a transparent, spinning color wheel and onto the DLP chip. The color wheel, synchronized with the chip, filters the light into red, green and blue. The on and off states of each mirror are coordinated with these three basic building blocks of color. A single-chip DLP projection system can create 16.7 million colors. Each pixel of light on the screen is red, green or blue at any given moment. DLP technology relies on the viewer's eyes to blend the pixels into the desired colors of the image. For example, a mirror responsible for creating a purple pixel reflects only the red and blue light to the surface. The pixel itself is a rapidly alternating flash of blue and red light, and our eyes blend these flashes to see the intended hue of the projected image.
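In the same illustrative spirit, the color the eye perceives is just the time average of the red, green and blue flashes. A toy example (the 8-bit scaling and duty-cycle numbers are assumptions, not the real wheel timing):

```python
# Toy model of field-sequential color: each color field contributes in
# proportion to its mirror duty cycle; the eye averages the flashes.

def perceived_color(red_duty, green_duty, blue_duty):
    """Duty cycles are the fraction of each color field the mirror spends on."""
    return (round(255 * red_duty),
            round(255 * green_duty),
            round(255 * blue_duty))

# The purple pixel from the text: on during red and blue fields, off during green.
print(perceived_color(red_duty=1.0, green_duty=0.0, blue_duty=1.0))  # (255, 0, 255)
```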
A DLP Cinema projection system has three chips and is capable of producing no fewer than 35 trillion colors. In a 3-chip system, the white light generated by the lamp passes through a prism that divides it into red, green and blue. Each chip is dedicated to one of these three colors. The colored light that the mirrors reflect is then combined and passes through the projection lens to form an image.
Texas Instruments has created a new system called BrilliantColor, which uses system-level enhancements to give truer, more vibrant colors. Customers can expand the color palette beyond red, green and blue to add yellow, white, magenta and cyan, for greater color accuracy.
Once the DMD has created a picture out of the light and the color wheel has added color, the light passes through a lens. The lens projects the image onto the screen.

Mathematical tools for modeling

A mathematical model uses mathematical language to describe a system. Mathematical models are used not only in the natural sciences and engineering disciplines (such as physics, biology, earth science, meteorology, and electrical engineering) but also in the social sciences (such as economics, psychology, sociology and political science); physicists, engineers, computer scientists, and economists use mathematical models most extensively.
Eykhoff (1974) defined a mathematical model as 'a representation of the essential aspects of an existing system (or a system to be constructed) which presents knowledge of that system in usable form'.
Mathematical models can take many forms, including but not limited to dynamical systems, statistical models, differential equations, or game theoretic models. These and other types of models can overlap, with a given model involving a variety of abstract structures.
Building blocks
There are six basic groups of variables: decision variables, input variables, state variables, exogenous variables, random variables, and output variables. Since there can be many variables of each type, the variables are generally represented by vectors.
Decision variables are sometimes known as independent variables. Exogenous variables are sometimes known as parameters or constants. The variables are not independent of each other as the state variables are dependent on the decision, input, random, and exogenous variables. Furthermore, the output variables are dependent on the state of the system (represented by the state variables).
Objectives and constraints of the system and its users can be represented as functions of the output variables or state variables. The objective functions will depend on the perspective of the model's user. Depending on the context, an objective function is also known as an index of performance, as it is some measure of interest to the user. Although there is no limit to the number of objective functions and constraints a model can have, using or optimizing the model becomes more involved (computationally).
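As a hedged, self-contained sketch of how those groups fit together (the dynamics, names and numbers below are invented purely for illustration):

```python
import random

# Toy discrete-time model illustrating the six variable groups.

def step(state, decision, inputs, exogenous):
    """State variables depend on decision, input, random and exogenous variables."""
    noise = random.gauss(0.0, exogenous["noise_std"])   # random variable
    next_state = state + exogenous["gain"] * decision + inputs - noise
    output = 2.0 * next_state                            # output depends on the state
    return next_state, output

def objective(outputs):
    """An objective function: some measure of interest computed from the outputs."""
    return sum(outputs) / len(outputs)

state, outputs = 0.0, []
for t in range(10):
    state, y = step(state, decision=0.5, inputs=1.0,
                    exogenous={"gain": 0.8, "noise_std": 0.1})
    outputs.append(y)
print(objective(outputs))
```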
Classifying mathematical models
Many mathematical models can be classified in some of the following ways:
Linear vs. nonlinear: Mathematical models are usually composed of variables, which are abstractions of quantities of interest in the described systems, and operators that act on these variables, which can be algebraic operators, functions, differential operators, etc. If all the operators in a mathematical model exhibit linearity, the resulting model is defined as linear; otherwise it is considered nonlinear. The question of linearity and nonlinearity is dependent on context, and linear models may have nonlinear expressions in them. For example, in a statistical linear model it is assumed that a relationship is linear in the parameters, but it may be nonlinear in the predictor variables. Similarly, a differential equation is said to be linear if it can be written with linear differential operators, but it can still have nonlinear expressions in it. In a mathematical programming model, if the objective functions and constraints are represented entirely by linear equations, the model is regarded as a linear model; if one or more of the objective functions or constraints are represented with a nonlinear equation, the model is known as a nonlinear model. Nonlinearity, even in fairly simple systems, is often associated with phenomena such as chaos and irreversibility. Although there are exceptions, nonlinear systems and models tend to be more difficult to study than linear ones. A common approach to nonlinear problems is linearization, but this can be problematic if one is trying to study aspects such as irreversibility, which are strongly tied to nonlinearity.
Deterministic vs. probabilistic (stochastic): A deterministic model is one in which every set of variable states is uniquely determined by parameters in the model and by sets of previous states of these variables; deterministic models therefore perform the same way for a given set of initial conditions. Conversely, in a stochastic model, randomness is present, and variable states are described not by unique values but by probability distributions (see the sketch after this list).
Static vs. dynamic: A static model does not account for the element of time, while a dynamic model does. Dynamic models are typically represented with difference equations or differential equations.
Lumped vs. distributed parameters: If the model is homogeneous (consistent state throughout the entire system), the parameters are lumped. If the model is heterogeneous (varying state within the system), the parameters are distributed. Distributed parameters are typically represented with partial differential equations.
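To make the deterministic-versus-stochastic distinction concrete, here is a small toy example (the logistic growth rule and noise level are chosen purely for illustration):

```python
import random

# Deterministic logistic growth: the same initial condition gives the same
# trajectory on every run.
def deterministic_step(x, r=3.5):
    return r * x * (1 - x)

# Stochastic variant: the state is perturbed by a random term, so repeated
# runs from the same initial condition give a distribution of trajectories.
def stochastic_step(x, r=3.5, noise_std=0.01):
    x = r * x * (1 - x) + random.gauss(0.0, noise_std)
    return min(max(x, 0.0), 1.0)   # keep the state in [0, 1]

x_det = x_sto = 0.2
for _ in range(20):
    x_det = deterministic_step(x_det)
    x_sto = stochastic_step(x_sto)
print(x_det, x_sto)
```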
A priori information
Mathematical modeling problems are often classified into black-box or white-box models, according to how much a priori information is available about the system. A black-box model is a system for which no a priori information is available. A white-box model (also called a glass box or clear box) is a system where all necessary information is available. Practically all systems fall somewhere between the black-box and white-box extremes, so this concept works only as an intuitive guide to the approach.
Usually it is preferable to use as much a priori information as possible to make the model more accurate. Therefore, white-box models are usually considered easier, because if you have used the information correctly, the model will behave correctly. Often the a priori information comes in the form of knowing the type of functions relating different variables. For example, if we make a model of how a medicine works in a human system, we know that usually the amount of medicine in the blood is an exponentially decaying function. But we are still left with several unknown parameters: how rapidly does the medicine amount decay, and what is the initial amount of medicine in the blood? This example is therefore not a completely white-box model. These parameters have to be estimated through some means before one can use the model.
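A minimal sketch of that estimation step, assuming we have a handful of noisy concentration measurements and fit the decay rate and initial amount with a log-linear least-squares fit (numpy is assumed; the numbers are invented):

```python
import numpy as np

# Hypothetical measurements: concentration c(t) = c0 * exp(-k * t) plus noise.
t = np.array([0.5, 1.0, 2.0, 4.0, 8.0])    # hours after the dose
c = np.array([9.1, 8.3, 6.7, 4.4, 1.9])    # invented concentrations

# Taking logs turns the model into a straight line, ln c = ln c0 - k * t,
# so an ordinary least-squares fit recovers the two unknown parameters.
slope, intercept = np.polyfit(t, np.log(c), 1)
k_est, c0_est = -slope, np.exp(intercept)
print(f"decay rate k ~ {k_est:.3f} per hour, initial amount c0 ~ {c0_est:.2f}")
```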
In black-box models one tries to estimate both the functional form of the relations between variables and the numerical parameters in those functions. Using a priori information we could end up, for example, with a set of functions that probably could describe the system adequately. If there is no a priori information, we would try to use functions as general as possible to cover all different models. A frequently used approach for black-box models is neural networks, which usually do not make assumptions about incoming data. The problem with using a large set of functions to describe a system is that estimating the parameters becomes increasingly difficult as the number of parameters (and different types of functions) increases.

Cluster computing for nanotechnology parameter calculations

A computer cluster is a group of linked computers, working together closely so that in many respects they form a single computer. The components of a cluster are commonly, but not always, connected to each other through fast local area networks. Clusters are usually deployed to improve performance and/or availability over that provided by a single computer, while typically being much more cost-effective than single computers of comparable speed or availability.

Cluster categorizations
High-availability (HA) clusters (also known as failover clusters) are implemented primarily for the purpose of improving the availability of services which the cluster provides. They operate by having redundant nodes, which are then used to provide service when system components fail. The most common size for an HA cluster is two nodes, which is the minimum requirement to provide redundancy. HA cluster implementations attempt to manage the redundancy inherent in a cluster to eliminate single points of failure.
There are many commercial implementations of high-availability clusters for many operating systems. The Linux-HA project is one commonly used free software HA package for the Linux operating system.
Load-balancing clusters
Load-balancing clusters operate by distributing a workload evenly over multiple back-end nodes. Typically the cluster will be configured with multiple redundant load-balancing front ends.
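As a hedged illustration of the idea, not of any particular product, a toy round-robin dispatcher looks like this:

```python
from itertools import cycle

# Toy round-robin load balancer: requests are spread evenly over back-end nodes.
backends = ["node-1", "node-2", "node-3"]   # hypothetical back-end nodes
next_backend = cycle(backends)

def dispatch(request):
    node = next(next_backend)
    return f"request {request!r} -> {node}"

for r in range(6):
    print(dispatch(r))
```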
Grid computing
Grids are more like a computing utility than a single computer. In addition, grids typically support more heterogeneous collections of machines than are commonly supported in clusters.
Grid computing is optimized for workloads which consist of many independent jobs or packets of work, which do not have to share data between the jobs during the computation process. Grids serve to manage the allocation of jobs to computers which will perform the work independently of the rest of the grid cluster. Resources such as storage may be shared by all the nodes, but intermediate results of one job do not affect other jobs in progress on other nodes of the grid.
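For example, a sweep of independent nanostructure parameter calculations maps naturally onto this model. A hedged sketch using only Python's standard multiprocessing (the simulate function is a stand-in for a real calculation, and each job would run on a separate node in a real grid):

```python
from multiprocessing import Pool

# Stand-in for an expensive, independent per-parameter calculation.
def simulate(radius_nm):
    return radius_nm, sum((radius_nm * i) ** 0.5 for i in range(1, 100_000))

if __name__ == "__main__":
    radii = [1.0, 2.0, 5.0, 10.0, 20.0]     # hypothetical parameter sweep
    with Pool(processes=4) as pool:
        # Jobs share no intermediate results, so they can run anywhere, in any order.
        for radius, value in pool.map(simulate, radii):
            print(f"radius {radius} nm -> result {value:.1f}")
```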

Quantum encryption

A Quantum Leap in Data Encryption
With security on the Internet, there's always some nagging doubt. Can you ever be absolutely certain, for example, that the e-mail you're sending with some confidential business information attached isn't going to be intercepted and read as it travels the digital highways and byways?
Using the Internet for anything sensitive requires some faith that everything in place to ensure the security of the information you're working with—all the encryption, passwords, and security policies—will, in fact, work. But as with most things in life, nothing is certain except uncertainty itself.
But uncertainty can be useful. For years, security researchers have been experimenting with harnessing one of the underlying rules of quantum physics, known as the Uncertainty Principle, which states that at the quantum level, where objects are vanishingly small, it's impossible to measure electrons, photons, and other similarly tiny particles without affecting them.
How does quantum physics apply to the world of security? The idea is to harness this inherent uncertainty to create a data-encryption scheme that's essentially unbreakable. It's called quantum cryptography, and already some governments and private companies are using it to build absolutely secure lines for data and voice communications.
One Step Beyond
Today, companies and governments encrypt their ultra-sensitive secrets by using mathematical formulas to make the info look like gibberish to anyone who may be intercepting it. Currently, most data are encrypted using the Advanced Encryption Standard, a method first approved for government use by the National Institute of Standards & Technology in 2002 and then widely adopted in the private sector. So far, AES is serving its purpose. It's hard to break—at least for now. But that may not always be the case.
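To give a sense of what that looks like in code, here is a hedged example using the third-party Python cryptography package's AES-GCM interface (assumed to be installed; the message and key handling are purely illustrative):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Illustrative AES-256-GCM encryption; distributing the key securely is the
# hard part that quantum key distribution (below) aims to solve.
key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)                      # must never be reused with the same key
aesgcm = AESGCM(key)

ciphertext = aesgcm.encrypt(nonce, b"confidential business information", None)
plaintext = aesgcm.decrypt(nonce, ciphertext, None)
print(plaintext)
```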
The next step in security is quantum cryptography, and a few companies are developing encryption products using it. New York-based MagiQ Technologies is one of them. It builds boxes that harness the properties of quantum physics to create encryption keys it claims can't be broken.
Why is MagiQ so confident? Uncertainty. MagiQ's gear generates particles of light called photons, which are so small that the conventional rules of classical physics don't apply to them. In 1927, the German physicist Werner Heisenberg found that merely observing a particle as small as a photon alters it. Once you look at it, it's never the same again.
Checkpoints
This is known as Heisenberg's Uncertainty Principle, and it turns out that if you use the state of a photon to generate an encryption key—essentially a secret set of random numbers—it's easy to determine whether anyone else has looked at it while trying to get a copy of the key you used.
"Uncertainty is the principle we exploit," says Mike LaGasse, MagiQ's vice-president for engineering. "It's fundamentally impossible to observe the key, because the photon can be measured once and only once. An eavesdropper can't measure it, and so can't get the key."
MagiQ's system combines a computer, a finely tuned laser, a photon detector, and a fiber-optic line. The laser inside the MagiQ QPN box is adjusted to produce single photons, which are then sent over the fiber-optic cable to a second QPN box, which detects them and notes precisely their time of arrival.
The two boxes then compare how the photon appeared when it left the first box to how it appears when it arrives at the second. If they match, the photon is used to generate a key, which is used to encrypt the data. If they don't match, the photon is ignored. The observations of each good photon are saved and used as needed to generate keys. This process repeats itself hundreds of times a second.
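The compare-and-keep step is in the spirit of quantum key distribution protocols such as BB84. Below is a heavily simplified classical simulation (illustrative only: it models random basis choices and the mismatches an eavesdropper introduces, not real photons):

```python
import random

# Simplified BB84-style simulation. Sender and receiver choose random bases per
# photon; bits measured in matching bases form the raw key. An eavesdropper
# measuring in the wrong basis randomizes the bit, which shows up as errors
# when a sample of the kept bits is compared.

def quantum_exchange(n_photons, eavesdropper=False):
    kept, errors = 0, 0
    for _ in range(n_photons):
        bit = random.randint(0, 1)
        basis = random.choice("+x")
        if eavesdropper and random.choice("+x") != basis:
            bit_seen = random.randint(0, 1)      # disturbed by the eavesdropper
        else:
            bit_seen = bit
        if random.choice("+x") == basis:         # receiver guessed the right basis
            kept += 1
            if bit_seen != bit:
                errors += 1
    return kept, errors

print("no eavesdropper:", quantum_exchange(2000))
print("with eavesdropper:", quantum_exchange(2000, eavesdropper=True))
```

Without an eavesdropper the kept bits agree; with one, roughly a quarter of them disagree, which is how the intrusion is detected.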
Once the key is generated, it's a relatively simple matter to encrypt the data you want to send, whether it's a voice conversation or a corporate strategic plan. And since the keys are impregnable, the data they encrypt are too. Further complicating the problem for eavesdroppers, keys are generated hundreds of times a second, so the chance of gathering enough information about a key to copy it and break the encryption is essentially zero.