
Thursday, December 23, 2010

DIAMOND CHIP

Electronics without silicon sounds unbelievable, but it will come true with the evolution of the diamond, or carbon, chip. Nowadays we use silicon for the manufacture of electronic chips. It has many disadvantages when used in power electronic applications, such as bulky size and slow operating speed. Carbon, silicon and germanium belong to the same group in the periodic table and have four valence electrons in their outer shell. Pure silicon and germanium are semiconductors at normal temperature, so in the earlier days both were widely used for manufacturing electronic components. It was later found that germanium has many disadvantages compared to silicon, such as large reverse current and less stability towards temperature, so the industry focused on developing electronic components using silicon wafers.
Researchers have now found that carbon is more advantageous than silicon. By using carbon as the manufacturing material, we can achieve smaller, faster and stronger chips, and smaller prototypes of the carbon chip have already been made. A major component built from carbon is the carbon nanotube (CNT), which is being explored for use in microprocessors and will be a major component of the diamond chip.
WHAT IS IT?
In a single definition, a diamond chip, or carbon chip, is an electronic chip manufactured on a diamond-structured carbon wafer; it can also be defined as an electronic component manufactured using carbon as the wafer. The major carbon component is the carbon nanotube (CNT), a nano-dimensional structure made of carbon with many unique properties.
HOW IS IT POSSIBLE?
Pure diamond-structured carbon is non-conducting in nature. In order to make it conducting, we have to perform a doping process, using boron as the p-type doping agent and nitrogen as the n-type doping agent. The doping process is similar to the one used in silicon chip manufacturing, but it takes more time than for silicon because it is very difficult to diffuse dopants through the strongly bonded diamond structure. The carbon nanotube (CNT) is already a semiconductor.
ADVANTAGES OF DIAMOND CHIP
1 SMALLER COMPONENTS ARE POSSIBLE
As the carbon atom is smaller than the silicon atom, it is possible to etch much finer lines through diamond-structured carbon. We can realize a transistor one-hundredth the size of a silicon transistor.
2 IT WORKS AT HIGHER TEMPERATURE
Diamond is a very strongly bonded material and can withstand higher temperatures than silicon. At very high temperatures the crystal structure of silicon will collapse, but a diamond chip can function well at these elevated temperatures. Diamond is also a very good conductor of heat, so any heat dissipated inside the chip is transferred very quickly to the heat sink or other cooling mechanism.
3 FASTER THAN SILICON CHIP
A carbon chip works faster than a silicon chip. The mobility of electrons in doped diamond-structured carbon is higher than in silicon. Because the silicon atom is larger than the carbon atom, the chance of electrons colliding with the larger silicon atoms increases; with the smaller carbon atom, the chance of collision decreases. So the mobility of charge carriers is higher in doped diamond-structured carbon than in silicon.
4 LARGER POWER HANDLING CAPACITY
Silicon is used for power electronics applications, but it has many disadvantages such as bulky size, slow operating speed, lower efficiency and a lower band gap, and at very high voltages the silicon structure will collapse. Diamond has a strongly bonded crystal structure, so a carbon chip can work in a high power environment. It is expected that a carbon transistor will deliver one watt of power at a rate of 100 GHz. Nowadays all power electronic circuits use interface circuits such as relays or MOSFET interconnection circuits (inverter circuits) to connect a low power control circuit to a high power circuit. If we use a carbon chip this interface is not needed; we can connect the high power circuit directly to the diamond chip.

NIGHT VISION Technology

Night Vision technology consists of two major types: image intensification (light amplification) and thermal imaging (infrared). Most consumer night vision products are light amplifying devices. Light amplification technology takes the small amount of light, such as moonlight or starlight, that is in the surrounding area, and converts the light energy (scientists call it photons), into electrical energy (electrons). These electrons pass through a thin disk that's about the size of a quarter and contains over 10 million channels. As the electrons travel through and strike the walls of the channels, thousands more electrons are released. These multiplied electrons then bounce off of a phosphor screen which converts the electrons back into photons and lets you see an impressive nighttime view even when it's really dark.
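To make the amplification chain above concrete, here is a toy numerical sketch in Python. The quantum efficiency, microchannel-plate gain and phosphor efficiency used are illustrative placeholder figures, not the specifications of any real image-intensifier tube.

# Toy model of an image-intensifier tube's light amplification chain.
# The efficiency and gain figures below are illustrative placeholders,
# not specifications of any real device.

def intensifier_output_photons(input_photons: float,
                               photocathode_qe: float = 0.2,     # photons -> electrons
                               mcp_gain: float = 10_000.0,       # electron multiplication
                               phosphor_efficiency: float = 0.5  # electrons -> photons
                               ) -> float:
    """Estimate output photons for a given number of input photons."""
    electrons = input_photons * photocathode_qe
    multiplied = electrons * mcp_gain
    return multiplied * phosphor_efficiency

if __name__ == "__main__":
    # A faint scene delivering 100 photons to the tube
    print(f"Output photons: {intensifier_output_photons(100):,.0f}")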
All image-intensified night vision products on the market today have one thing in common: they produce a green output image. There are three important attributes for judging performance: sensitivity, signal-to-noise ratio, and resolution. As the customer, you need to know about these three characteristics to determine the performance level of a night vision system.
Sensitivity, or photoresponse, is the image tube's ability to detect available light; it is usually measured in microamperes per lumen (µA/lm). A sensitive tube is why many products do not come with standard IR illuminators: in many applications illuminators aren't necessary, and some manufacturers add IR illuminators to their products only to get acceptable performance under low-light conditions. The signal-to-noise ratio also plays a key role in night vision performance. A microchannel plate is used to transfer the signal from input to output, and a high signal-to-noise ratio yields a cleaner image, just as high-end stereo equipment gives you quality sound.
Resolution is the third major consideration when purchasing night vision: the ability to resolve detail in the image. Some manufacturers put magnified optics in their systems to give the illusion of a high-resolving system; the trade-off is that field of view is sacrificed. Some models offer higher magnification as an option, so you can have it if you want it, not because the system needs it to function effectively. Some products use a specially formulated phosphor to create higher-contrast images and therefore higher effective resolution.

VIRTUAL KEYBOARD

A virtual keyboard is actually a key-in device, roughly the size of a fountain pen, which uses highly advanced laser technology to project a full-sized keyboard onto a flat surface. Since the invention of computers they have undergone rapid miniaturization; disks and components grew smaller, but one component remained the same for decades - the keyboard. Since miniaturization of a traditional keyboard is very difficult, we go for a virtual keyboard. Here, a camera tracks the finger movements of the typist to determine the correct keystroke. A virtual keyboard is a keyboard that a user operates by typing on or within a wireless or optically detectable surface or area rather than by depressing physical keys.

Since their invention, computers have undergone rapid miniaturization from being a 'space saver' to 'as tiny as your palm'. Disks and components grew smaller, but one component still remained the same for decades - the keyboard. Miniaturization of the keyboard has proved a nightmare for users; users of PDAs and smartphones are annoyed by the tiny size of the keys. The new innovation, the Virtual Keyboard, uses advanced technologies to project a full-sized computing keyboard onto any surface. This device has become the solution for mobile computer users who prefer touch-typing to cramping over tiny keys. Typing information into mobile devices usually feels about as natural as a linebacker riding a Big Wheel; the Virtual Keyboard is a way to eliminate finger cramping. All that's needed to use the keyboard is a flat surface. Using laser technology, a bright red image of a keyboard is projected from a device such as a handheld. Detection technology based on optical recognition allows users to tap the images of the keys, so the virtual keyboard behaves like a real one. It's designed to support any typing speed.

Keyboard
The part of the computer (also that of PDAs, smart phones etc.) that we come into most contact with is probably the piece that we think about the least. But the keyboard is an amazing piece of technology. For instance, did you know that the keyboard on a typical computer system is actually a computer itself?
Virtual Keyboard
A virtual keyboard is a keyboard that a user operates by typing (moving fingers) on or within a wireless or optical-detectable surface or area rather than by depressing physical keys. In one technology, the keyboard is projected optically on a flat surface and, as the user touches the image of a key, the optical device detects the stroke and sends it to the computer. In another technology, the keyboard is projected on an area and selected keys are transmitted as wireless signals using the short-range Bluetooth technology. With either approach, a virtual keyboard makes it possible for the user of a very small smart phone or a wearable computer to have full keyboard capability.
Theoretically, with either approach, the keyboard can be in space and the user can type by moving fingers through the air! The regular QWERTY keyboard layout is provided. All that's needed to use the keyboard is a flat surface. Using laser technology, a bright red image of a keyboard is projected from a device such as a handheld. Detection technology based on optical recognition allows users to tap the images of the keys so the virtual keyboard behaves like a real one. It's designed to support any typing speed. Several products have been developed that use virtual keyboard to mean a keyboard that has been put on a display screen as an image map. In some cases, the keyboard can be customized. Depending on the product, the user (who may be someone unable to use a regular keyboard) can use a touch screen or a mouse to select the keys.
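As a rough illustration of how an optically detected tap can be turned into a keystroke, the following Python sketch maps a fingertip position on the projected surface to a key. The row layout, key pitch and row offsets are invented values for illustration only; real virtual-keyboard products use their own calibrated geometry and detection algorithms.

from typing import Optional

# Minimal sketch: map a fingertip tap position (in millimetres on the projected
# keyboard surface) to a key. The layout and key pitch are assumptions for
# illustration, not the geometry of any actual virtual-keyboard product.
ROWS = ["QWERTYUIOP", "ASDFGHJKL", "ZXCVBNM"]
KEY_WIDTH_MM = 18.0                # assumed horizontal key pitch
KEY_HEIGHT_MM = 18.0               # assumed vertical key pitch
ROW_OFFSET_MM = [0.0, 9.0, 27.0]   # assumed stagger of each row

def key_at(x_mm: float, y_mm: float) -> Optional[str]:
    """Return the key under the tap position, or None if the tap misses."""
    row = int(y_mm // KEY_HEIGHT_MM)
    if not 0 <= row < len(ROWS):
        return None
    col = int((x_mm - ROW_OFFSET_MM[row]) // KEY_WIDTH_MM)
    if not 0 <= col < len(ROWS[row]):
        return None
    return ROWS[row][col]

if __name__ == "__main__":
    print(key_at(40.0, 5.0))    # a tap near the top row -> 'E'
    print(key_at(100.0, 40.0))  # a tap on the bottom row -> 'B'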
Advantages Of Virtual Keyboard
· Portability
· Accuracy
· Speed of text entry
· Lack of need for flat or large typing surface
· Ability to minimize the risk for repetitive strain injuries
· Flexibility
· Keyboard layouts can be changed by software, allowing for foreign or alternative keyboard layouts.

IPV6...

The Internet is one of the greatest revolutionary innovations of the twentieth century. It made the 'global village utopia' a reality in a rather short span of time. It is changing the way we interact with each other, the way we do business, the way we educate ourselves and even the way we entertain ourselves. Perhaps even the architects of the Internet would not have foreseen the tremendous growth rate of the network being witnessed today. With the advent of the Web and multimedia services, the technology underlying the Internet has been under stress.
It cannot adequately support many services being envisaged, such as real time video conferencing, interconnection of gigabit networks with lower bandwidths, high security applications such as electronic commerce, and interactive virtual reality applications. A more serious problem with today's Internet is that it can interconnect a maximum of four billion systems only, which is a small number as compared to the projected systems on the Internet in the twenty-first century.
Each machine on the net is given a 32-bit address. With 32 bits, a maximum of about four billion addresses is possible. Though this is a large number, soon the Internet will have TV sets and even pizza machines connected to it, and since each of them must have an IP address, this number becomes too small. The revision of IPv4 was taken up mainly to resolve the address problem, but in the course of refinement several other features were also added to make it suitable for the next generation Internet.
This version was initially named IPng (IP next generation) and is now officially known as IPv6. IPv6 supports 128-bit addresses, the source address and the destination address each being 128 bits long. (The designation IPv5 had already been assigned to an experimental stream protocol and was never widely deployed, so it was skipped.) Presently, most routers run software that supports only IPv4. To switch over to IPv6 overnight is an impossible task, and the transition is likely to take a very long time.
However, to speed up the transition, an IPv4-compatible IPv6 addressing scheme has been worked out. Major vendors are now writing software for various computing environments to support IPv6 functionality. Incidentally, software development for different operating systems and router platforms will offer major job opportunities in the coming years.
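As a small illustration of the address-space arithmetic and of one IPv4-to-IPv6 transition idea, the following Python snippet (using the standard ipaddress module) compares the two address spaces and shows an IPv4-mapped IPv6 address. The specific address used is a documentation example, not one from any real deployment.

import ipaddress

# IPv4 offers 2**32 addresses; IPv6 offers 2**128.
print(f"IPv4 addresses: {2**32:,}")     # about 4.3 billion
print(f"IPv6 addresses: {2**128:.3e}")  # about 3.4e38

# One transition mechanism embeds an IPv4 address inside an IPv6 address.
# Python's standard ipaddress module can show the IPv4-mapped form.
v4 = ipaddress.IPv4Address("192.0.2.1")            # documentation example address
mapped = ipaddress.IPv6Address("::ffff:" + str(v4))
print(mapped)               # the IPv6 form of the mapped address
print(mapped.ipv4_mapped)   # recovers 192.0.2.1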

BLU-RAY DISC

Optical discs make up a major share of secondary storage devices. Blu-ray Disc is a next-generation optical disc format. The technology uses a blue laser diode operating at a wavelength of 405 nm to read and write data. Because it uses a blue laser, it can store far larger amounts of data than was ever possible before.
Data is stored on Blu-ray discs in the form of tiny ridges on the surface of an opaque 1.1-millimetre-thick substrate, which lies beneath a transparent 0.1 mm protective layer. With the help of Blu-ray recording devices it is possible to record up to 2.5 hours of very high quality audio and video on a single BD.
Blu-ray also promises added security, making way for copyright protection: Blu-ray discs can have a unique ID written on them and copyright protection embedded inside the recorded streams. The Blu-ray disc takes DVD technology one step further, just by using a laser with a nicer colour.
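A rough back-of-envelope calculation shows why the shorter 405 nm wavelength matters. Recording density scales approximately with (NA/wavelength) squared; the DVD figures of 650 nm and NA 0.60, and the Blu-ray NA of 0.85, are commonly quoted nominal values, and the result is only an approximation.

# Back-of-envelope: optical recording density scales roughly with (NA / wavelength)^2,
# since the laser spot diameter is proportional to wavelength / NA.
# Wavelengths and numerical apertures below are commonly quoted nominal values.
dvd_wavelength_nm, dvd_na = 650, 0.60
bd_wavelength_nm,  bd_na  = 405, 0.85

density_ratio = (bd_na / bd_wavelength_nm) ** 2 / (dvd_na / dvd_wavelength_nm) ** 2
print(f"Approximate density gain over DVD: {density_ratio:.1f}x")

# With a single-layer DVD at 4.7 GB, this ratio lands in the neighbourhood of
# the ~25 GB quoted for a single-layer Blu-ray disc.
print(f"Implied single-layer capacity: {4.7 * density_ratio:.1f} GB")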
History of Blu-ray Disc
First Generation
When the CD was introduced in the early 80s, it meant an enormous leap from traditional media. Not only did it offer a significant improvement in audio quality, its primary application, but its 650 MB storage capacity also meant a giant leap in data storage and retrieval. For the first time, there was a universal standard for pre-recorded, recordable and rewritable media, offering the best quality and features consumers could wish for themselves, at very low costs.
Second Generation

Although the CD was a very useful medium for the recording and distribution of audio and some modest data applications, demand for a new medium offering higher storage capacity rose in the 90s. These demands led to the evolution of the DVD specification and a five- to ten-fold increase in capacity. This enabled high quality, standard definition video distribution and recording. Furthermore, the increased capacity accommodated more demanding data applications. At the same time, the DVD spec used the same form factor as the CD, allowing for a seamless migration to the next generation format and offering full backwards compatibility.

HDV (High Definition Video)
This high-resolution, 16:9 ratio, progressive scan format can now be recorded to standard miniDV cassettes. Consumer high definition cameras are becoming available, but this is currently an expensive, niche market. It is also possible to capture video using inexpensive webcams, which normally connect to a computer via USB. While they are much cheaper than DV cameras, webcams offer lower quality and less flexibility for editing purposes, as they do not capture video in DV format. Digital video is available on many portable devices, from digital stills cameras to mobile phones. This is contributing to the emergence of digital video as a standard technology used and shared by people on a daily basis.

MPEG
MPEG, the Moving Picture Experts Group, overseen by the International Organization for Standardization (ISO), develops standards for digital video and digital audio compression. MPEG-1, with a default resolution of 352x240, was designed specifically for Video CD and CD-i media and is often used on CD-ROMs.

MPEG-1 Audio Layer 3 (MP3) compression evolved from early MPEG work. MPEG-1 is an established, medium quality format (similar to VHS) supported by all players and platforms. Although not the best quality, it will work well on older specification machines.
MPEG-2 compression (as used for DVD movies and digital television set-top boxes) is an excellent format for distributing video, as it offers high quality and smaller file sizes than DV. Due to the way it compresses video, MPEG-2-encoded footage is more problematic to edit than DV footage. Despite this, MPEG-2 is becoming more common as a capture format. MPEG-2 uses variable bit rates, allowing frames to be encoded with more or less data depending on their contents. Most editing software now supports MPEG-2 editing. Editing and encoding MPEG-2 requires more processing power than DV and should be done on well-specified machines. It is not suitable for internet delivery.

BIO-MOLECULAR COMPUTING

Molecular computing is an emerging field to which chemistry, biophysics, molecular biology, electronic engineering, solid state physics and computer science all contribute. It involves the encoding, manipulation and retrieval of information at a macromolecular level, in contrast to current techniques, which accomplish these functions via the miniaturization of bulk IC devices. Biological systems have unique abilities such as pattern recognition, learning, self-assembly and self-reproduction, as well as high speed and parallel information processing. The aim of this article is to exploit these characteristics to build computing systems that have many advantages over their inorganic (Si, Ge) counterparts.
DNA computing began in 1994, when Leonard Adleman proved that DNA computing was possible by finding a solution to a real problem - a Hamiltonian Path Problem, a close relative of the well-known Traveling Salesman Problem - with a molecular computer. In theoretical terms, some scientists say the actual beginnings of DNA computation should be attributed to Charles Bennett's work. Adleman, now considered the father of DNA computing, is a professor at the University of Southern California and spawned the field with his paper, "Molecular Computation of Solutions to Combinatorial Problems." Since then, Adleman has demonstrated how the massive parallelism of a trillion DNA strands can simultaneously attack different aspects of a computation to crack even the toughest combinatorial problems.
Adleman's Hamiltonian Path Problem:
The objective is to find a path from start to end that passes through all the points only once. This problem is difficult for conventional computers to solve because it is a "non-deterministic polynomial time" problem. Such problems, when they involve large numbers of points, are intractable on conventional computers, but can be solved using massively parallel computers like DNA computers. The Hamiltonian Path problem was chosen by Adleman because it is a well-known problem.

The following algorithm solves the Hamiltonian Path problem (a small simulation sketch follows the list):
1. Generate random paths through the graph.
2. Keep only those paths that begin with the start city (A) and conclude with the end city (G).
3. If the graph has n cities, keep only those paths with n cities (here n = 7).
4. Keep only those paths that enter all cities at least once.
5. Any remaining paths are solutions.
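The following Python sketch simulates this generate-and-filter strategy in software on a small, made-up directed graph with cities A through G; it is not Adleman's actual instance, and the brute-force random generation merely stands in for the massive parallelism of the DNA reactions.

import random

# In-silico sketch of the generate-and-filter strategy above on a small,
# made-up directed graph (not Adleman's actual experimental instance).
EDGES = {("A", "B"), ("B", "C"), ("A", "C"), ("C", "D"),
         ("D", "E"), ("E", "F"), ("C", "F"), ("F", "G"), ("D", "G")}
CITIES = "ABCDEFG"
START, END = "A", "G"

def random_path(length: int) -> list[str]:
    """Step 1: generate a random walk along the graph's edges."""
    path = [random.choice(CITIES)]
    while len(path) < length:
        nxt = [b for (a, b) in EDGES if a == path[-1]]
        if not nxt:
            break
        path.append(random.choice(nxt))
    return path

def is_solution(path: list[str]) -> bool:
    """Steps 2-4: keep paths that start at A, end at G, and visit all 7 cities once."""
    return (path[0] == START and path[-1] == END
            and len(path) == len(CITIES) and set(path) == set(CITIES))

solutions = {tuple(p) for p in (random_path(len(CITIES)) for _ in range(100_000))
             if is_solution(p)}
print(solutions)  # step 5: any remaining paths are solutions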

The key was using DNA to perform the five steps of the above algorithm. Adleman's first step was to synthesize DNA strands of known sequences, each strand 20 nucleotides long. He represented each vertex of the graph by a separate strand, and further represented each edge between two consecutive vertices, such as 1 to 2, by a DNA strand consisting of the last ten nucleotides of the strand representing vertex 1 plus the first ten nucleotides of the vertex 2 strand. Then, through the sheer number of DNA molecules (3x10^13 copies for each edge in this experiment!) joining together in all possible combinations, many random paths were generated. Adleman used well-established techniques of molecular biology to weed out the Hamiltonian path: the one that entered every vertex, starting at the first and ending at the last. After generating the numerous random paths in the first step, he used the polymerase chain reaction (PCR) to amplify and keep only the paths that began at the start vertex and ended at the end vertex. The next two steps kept only those strands that passed through all the vertices, entering each vertex at least once. At this point, any paths that remained would code for a Hamiltonian path, thus solving the problem.
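The strand encoding itself can be sketched in a few lines of Python. The vertex sequences below are randomly generated stand-ins, not the sequences Adleman actually synthesized; only the construction rule (an edge strand is the last ten nucleotides of one vertex strand plus the first ten of the next) follows the description above.

import random

random.seed(0)
BASES = "ACGT"

# Assign each vertex a random 20-nucleotide strand (invented sequences,
# not the ones Adleman actually used).
vertices = {v: "".join(random.choice(BASES) for _ in range(20)) for v in "ABCDEFG"}

def edge_strand(u: str, v: str) -> str:
    """Edge u->v: last 10 nucleotides of u's strand + first 10 of v's strand."""
    return vertices[u][10:] + vertices[v][:10]

print(vertices["A"])
print(edge_strand("A", "B"))  # a 20-nt strand overlapping half of A and half of B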

4G SYSTEMS

A fourth generation wireless system is a packet-switched wireless system with wide area coverage and high throughput. It is designed to be cost effective and to provide high spectral efficiency. 4G wireless uses Orthogonal Frequency Division Multiplexing (OFDM), Ultra Wide Band (UWB) radio and millimetre-wave wireless. A data rate of 20 Mbps is employed, and mobile speeds of up to 200 km/h are supported. The high performance is achieved by the use of long-term channel prediction in both time and frequency, scheduling among users, and smart antennas combined with adaptive modulation and power control. The frequency band is 2-8 GHz. It gives the ability for worldwide roaming, with access to the cell from anywhere.
Wireless mobile communications systems are uniquely identified by "generation" designations. Introduced in the early 1980s, first generation (1G) systems were marked by analog frequency modulation and used primarily for voice communications. Second generation (2G) wireless communications systems, which made their appearance in the late 1980s, were also used mainly for voice transmission and reception. The wireless system in widespread use today goes by the name of 2.5G, an "in between" service that serves as a stepping stone to 3G. Whereas 2G communications is generally associated with Global System for Mobile (GSM) service, 2.5G is usually identified as being "fueled" by General Packet Radio Service (GPRS) on top of GSM. 3G systems, making their appearance in late 2002 and in 2003, are designed for voice and paging services as well as interactive media uses such as teleconferencing, Internet access, and other services. The problem with 3G wireless systems is bandwidth: these systems provide only WAN coverage ranging from 144 kbps (for vehicle mobility applications) to 2 Mbps (for indoor static applications). Segue to 4G, the "next dimension" of wireless communication. 4G wireless uses Orthogonal Frequency Division Multiplexing (OFDM), Ultra Wide Band (UWB) radio, millimetre-wave wireless and smart antennas. A data rate of 20 Mbps is employed, mobile speeds of up to 200 km/h are supported, and the frequency band is 2-8 GHz. It gives the ability for worldwide roaming, with access to the cell from anywhere.
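To give a flavour of how OFDM builds a transmit signal, here is a minimal Python/NumPy sketch that maps bits to QPSK symbols, applies an inverse FFT across the subcarriers and prepends a cyclic prefix. The subcarrier count, prefix length and modulation are illustrative choices, not the parameters of any 4G standard.

import numpy as np

# Minimal OFDM transmit sketch: map bits to QPSK symbols, place them on
# subcarriers, take an inverse FFT, and prepend a cyclic prefix.
# The subcarrier count and prefix length are illustrative choices only.
N_SUBCARRIERS = 64
CYCLIC_PREFIX = 16

def ofdm_symbol(bits: np.ndarray) -> np.ndarray:
    assert bits.size == 2 * N_SUBCARRIERS, "2 bits per QPSK subcarrier"
    # Simple QPSK mapping: (b0, b1) -> (+/-1) + j(+/-1), normalised
    b = bits.reshape(-1, 2)
    symbols = ((1 - 2 * b[:, 0]) + 1j * (1 - 2 * b[:, 1])) / np.sqrt(2)
    time_domain = np.fft.ifft(symbols, n=N_SUBCARRIERS)
    return np.concatenate([time_domain[-CYCLIC_PREFIX:], time_domain])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    bits = rng.integers(0, 2, size=2 * N_SUBCARRIERS)
    tx = ofdm_symbol(bits)
    print(tx.shape)  # (80,) = 64 samples + 16-sample cyclic prefix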
Features:
o Support for interactive multimedia, voice, streaming video, Internet, and other broadband services
o IP based mobile system
o High speed, high capacity, and low cost per bit
o Global access, service portability, and scalable mobile services
o Seamless switching, and a variety of Quality of Service driven services
o Better scheduling and call admission control techniques
o Ad hoc and multi hop networks (the strict delay requirements of voice make multi hop network service a difficult problem)
o Better spectral efficiency
o Seamless networking across multiple protocols and air interfaces (since 4G will be all-IP, look for 4G systems to be compatible with all common network technologies, including 802.11, WCDMA, Bluetooth, and HiperLAN).
o An infrastructure to handle pre-existing 3G systems along with other wireless technologies, some of which are currently under development.

Face Detection and Recognition technology

Humans are very good at recognizing faces and complex patterns, a task that is still hard for computers. Even the passage of time doesn't affect this capability much, and it would therefore help if computers could become as robust as humans at face recognition. Machine recognition of human faces from still or video images has attracted a great deal of attention in the psychology, image processing, pattern recognition, neural science, computer security, and computer vision communities. Face recognition is probably one of the most non-intrusive and user-friendly biometric authentication methods currently available; a screensaver equipped with face recognition technology can automatically unlock the screen whenever the authorized user approaches the computer.
Face is an important part of who we are and how people identify us. It is arguably a person's most unique physical characteristic. While humans have had the innate ability to recognize and distinguish different faces for millions of years, computers are just now catching up.
Visionics, a company based in New Jersey, is one of many developers of facial recognition technology. The twist to its particular software, FaceIt, is that it can pick someone's face out of a crowd, extract that face from the rest of the scene and compare it to a database full of stored images. In order for this software to work, it has to know what a basic face looks like. Facial recognition software is designed to pinpoint a face and measure its features. Each face has certain distinguishable landmarks, which make up the different facial features. These landmarks are referred to as nodal points. There are about 80 nodal points on a human face. Here are a few of the nodal points that are measured by the software:

· Distance between eyes
· Width of nose
· Depth of eye sockets
· Cheekbones
· Jaw line
· Chin

These nodal points are measured to create a numerical code, a string of numbers that represents the face in a database. This code is called a faceprint. Only 14 to 22 nodal points are needed for the FaceIt software to complete the recognition process.
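As a simplified illustration of the idea of a faceprint, the following Python sketch represents a face as a vector of nodal-point measurements and compares two faces by Euclidean distance. The feature names, values and threshold are invented for illustration; FaceIt's actual faceprint encoding and matching algorithm are proprietary.

import math

# Illustrative "faceprint": a vector of nodal-point measurements (in arbitrary
# normalised units). The feature names, values, and matching threshold are
# invented for illustration only.

def faceprint(measurements: dict[str, float]) -> list[float]:
    """Turn named nodal-point measurements into a fixed-order numeric vector."""
    order = ["eye_distance", "nose_width", "eye_socket_depth",
             "cheekbone_width", "jaw_line_length", "chin_height"]
    return [measurements[name] for name in order]

def distance(a: list[float], b: list[float]) -> float:
    """Euclidean distance between two faceprints; smaller means more similar."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

enrolled = faceprint({"eye_distance": 6.2, "nose_width": 3.1, "eye_socket_depth": 2.4,
                      "cheekbone_width": 13.0, "jaw_line_length": 11.5, "chin_height": 4.0})
probe = faceprint({"eye_distance": 6.3, "nose_width": 3.0, "eye_socket_depth": 2.5,
                   "cheekbone_width": 12.9, "jaw_line_length": 11.6, "chin_height": 4.1})

THRESHOLD = 0.5  # invented acceptance threshold
print("match" if distance(enrolled, probe) < THRESHOLD else "no match")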
Software
Facial recognition software falls into a larger group of technologies known as biometrics. Biometrics uses biological information to verify identity. The basic idea behind biometrics is that our bodies contain unique properties that can be used to distinguish us from others. Besides facial recognition, biometric authentication methods also include:

" Fingerprint scan
" Retina scan
" Voice identification

Facial recognition methods generally involve a series of steps that serve to capture, analyze and compare a face to a database of stored images. The basic processes used by the FaceIt system to capture and compare images are:
1. Detection - When the system is attached to a video surveillance system, the recognition software searches the field of view of a video camera for faces. If there is a face in the view, it is detected within a fraction of a second. A multi-scale algorithm is used to search for faces in low resolution. The system switches to a high-resolution search only after a head-like shape is detected.
2. Alignment - Once a face is detected, the system determines the head's position, size and pose. A face needs to be turned at least 35 degrees toward the camera for the system to register it.
3. Normalization -The image of the head is scaled and rotated so that it can be registered and mapped into an appropriate size and pose. Normalization is performed regardless of the head's location and distance from the camera. Light does not impact the normalization process.
4. Representation - The system translates the facial data into a unique code. This coding process allows for easier comparison of the newly acquired facial data to stored facial data.
5. Matching - The newly acquired facial data is compared to the stored data and (ideally) linked to at least one stored facial representation.

An INITIATIVE...

Hi friends, this is your friend from RISE GROUPS OF INSTITUTIONS, who has taken the initiative to present you with a wide collection of seminar topics related to branches like CSE, ECE, CIVIL, EEE, etc...
I request you to utilise this information for the construction of your life...

NEED FOR PRESENTATIONS...

We will be awarded a certificate whether we win or lose... Those certificates will be useful during your search for a JOB. These certificates can make a better impression on the interviewer and make your path clearer for you...