A computer network is an interconnection of a group of computers. Networks may be classified by the network layer at which they operate, according to basic reference models that serve as industry standards, such as the four-layer Internet Protocol Suite model. While the seven-layer Open Systems Interconnection (OSI) reference model is better known in academia, the majority of networks use the Internet Protocol Suite as their network model.
January 16, 2008
Anyone paying attention to the Sine Nomine demo two weeks ago now knows that Solaris is inevitably coming to System z. In the wake of all of this, I have heard from a number of you – and from people throughout the industry – about, you guessed it: “What’s next, Windows?” I’m guessing many of you have the same question. To really get at the dynamics of this, we have to look to the past. Back in the early 90’s, Windows was not just on the x86 architecture (at the time, only on Intel – remember Wintel?). It was also running on the MIPS, Alpha and PowerPC architectures. There’s something a bit different about x86, though. This gets a little technical, but x86 stores multi-byte values in Little Endian byte order, while the RISC architectures behind the popular UNIX platforms, and IBM’s mainframe, use Big Endian. Solaris and Linux were written in a way that makes byte order transparent, but Microsoft decided long ago that Windows would run only in Little Endian mode. Back in 1994, there was a skunk works within the IBM mainframe division that looked at running Windows NT as a native operating system on what was then a 10-way S/390 platform. It figured out how to boot the machine as a Little Endian server, and it could have run Windows across those 10 processors. But guess what? No hypervisor or virtualization capability would have been available – those had all been written for Big Endian mode. So it would have been an entire mainframe dedicated to a single instance of Windows. My palm can do that. IBM realized then that this had no future in terms of consolidation value. In turn, Microsoft decided to support only the x86 architecture, and the Alpha and MIPS implementations of Windows died a rather quick death.
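To make the endianness point concrete, here is a minimal Python sketch (my illustration, not from the original story) showing the same 32-bit value laid out in memory under each byte order:

```python
import struct

# The 32-bit value 0x01020304 laid out in memory under each byte order.
value = 0x01020304

big = struct.pack(">I", value)     # Big Endian: most significant byte first
little = struct.pack("<I", value)  # Little Endian: least significant byte first

print(big.hex())     # 01020304
print(little.hex())  # 04030201
```

Software that interprets raw memory or on-disk structures bakes in one of these layouts, which is why an operating system written for one byte order does not simply boot on hardware running in the other.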
Next up was to bring some Windows portability to the mainframe. Working with Bristol Technology (now a subsidiary of HP), IBM looked at getting a set of 32-bit Windows APIs, along with OLE and COM capabilities, onto OS/390. This was just after IBM had announced its intention to brand OS/390 as a UNIX operating system. Bristol Technology had a license to the Microsoft source code to facilitate that. Well, Microsoft must have grown afraid of the possibility of Windows applications running easily on the mainframe, so they took away Bristol’s software license. Bristol, in turn, sued them for unfair trade practices and won. But by then, Microsoft’s approach had driven these types of developers from their platform. Today, Mainsoft Corporation provides a Windows portability layer across UNIX systems and z/OS, but we’ll never see a day when Windows runs natively on the mainframe.
So what are the implications for the mainframe? Let’s start with development tools for creating new applications. If you use only Microsoft .Net development tooling, those solutions will be relegated to the Wintel platform. If those applications need to interoperate with the mainframe, there are a variety of connectors that enable interoperability with both 3270 and SOAP/Web service based applications, as well as distributed data requests. As mentioned earlier, you can also use Mainsoft’s technology to translate .Net code into Java byte codes and run it on z/OS and Linux for System z. That way you get some developer synergy, plus deployment options beyond Wintel platforms.
If you really want cross-platform deployment from the Windows desktop environment, Eclipse (eclipse.org) is an open standards group, comprising a number of leading tooling vendors, that facilitates rapid application development and good tool integration while providing a flexible choice of platforms to which applications can be deployed. Leveraging this tool set facilitates exploitation of mainframe technology and is highly recommended for delivering the best qualities of service for software running on System z. IBM’s Rational Developer for z is an implementation of the Eclipse capabilities for System z.
January 16, 2008
Mainframes (often colloquially referred to as Big Iron) are computers used mainly by large organizations for critical applications, typically bulk data processing such as census, industry and consumer statistics, ERP, and financial transaction processing.
The term probably originated from the early mainframes, as they were housed in enormous, room-sized metal boxes or frames.  Later the term was used to distinguish high-end commercial machines from less powerful units which were often contained in smaller packages.
Today in practice, the term usually refers to computers compatible with the IBM System/360 line, first announced in 1964. (IBM System z9 is IBM’s latest incarnation.) Systems with similar functionality but not based on the IBM System/360 are otherwise referred to as “servers.” However, “server” and “mainframe” are not synonymous (see client-server).
Some non-System/360-compatible systems derived from or compatible with older (pre-Web) server technology may also be considered mainframes. These include the Burroughs large systems, the UNIVAC 1100/2200 series systems, and the pre-System/360 IBM 700/7000 series. Most large-scale computer system architectures were firmly established in the 1960s, and most large computers remained based on architectures established during that era until the advent of Web servers in the 1990s. (Interestingly, the first Web server running anywhere outside Switzerland ran on an IBM mainframe at Stanford University as early as 1991. See History of the World Wide Web for details.)
There were several minicomputer operating systems and architectures that arose in the 1970s and 1980s, but minicomputers are generally not considered mainframes. (UNIX arose as a minicomputer operating system; Unix has scaled up over the years to acquire some mainframe characteristics.)
Modern mainframe computers have abilities not so much defined by their single task computational speed (flops or clock rate) as by their redundant internal engineering and resulting high reliability and security, extensive input-output facilities, strict backward compatibility for older software, and high utilization rates to support massive throughput. These machines often run for years without interruption, with repairs and even software and hardware upgrades taking place during normal operation. For example, ENIAC remained in continuous operation from 1947 to 1955. More recently, there are several IBM mainframe installations that have delivered over a decade of continuous business service as of 2007, with upgrades not interrupting service. Mainframes are defined by high availability, one of the main reasons for their longevity, as they are used in applications where downtime would be costly or catastrophic. The term Reliability, Availability and Serviceability (RAS) is a defining characteristic of mainframe computers.

In the 1960s, most mainframes had no interactive interface. They accepted sets of punch cards, paper tape, and/or magnetic tape and operated solely in batch mode to support back office functions, such as customer billing. Teletype devices were also common, at least for system operators. By the early 1970s, many mainframes acquired interactive user interfaces and operated as timesharing computers, supporting hundreds or thousands of users simultaneously along with batch processing. Users gained access through specialized terminals or, later, from personal computers equipped with terminal emulation software. Many mainframes supported graphical terminals (and terminal emulation) by the 1980s (if not earlier).
Nowadays most mainframes have partially or entirely phased out classic user terminal access in favor of Web user interfaces.

Historically, mainframes acquired their name in part because of their substantial size and their requirements for specialized HVAC and electrical power. Those requirements ended by the mid-1990s, with CMOS mainframe designs replacing the older bipolar technology. In fact, in a major reversal, IBM now touts the mainframe’s ability to reduce data center energy costs for power and cooling, along with its reduced physical space requirements compared to server farms.
Characteristics of mainframes:
Nearly all mainframes have the ability to run (or host) multiple operating systems and thereby operate not as a single computer but as a number of virtual machines. In this role, a single mainframe can replace dozens or even hundreds of smaller servers, reducing management and administrative costs while providing greatly improved scalability and reliability.

Mainframes can add system capacity nondisruptively and granularly. Modern mainframes, notably the IBM zSeries and System z9 servers, offer three levels of virtualization: logical partitions (LPARs, via the PR/SM facility), virtual machines (via the z/VM operating system), and through their operating systems (notably z/OS with its key-protected address spaces and sophisticated goal-oriented workload scheduling, but also Linux and Java). This virtualization is so thorough, so well established, and so reliable that most IBM mainframe customers run no more than two machines: one in their primary data center, and one in their backup data center – fully active, partially active, or on standby – in case there is a catastrophe affecting the first building. All test, development, training, and production workload for all applications and all databases can run on a single machine, except for extremely large demands where the capacity of one machine might be limiting. Such a two-mainframe installation can support continuous business service, avoiding both planned and unplanned outages.

Mainframes are designed to handle very high volume input and output (I/O) and emphasize throughput computing. Since the mid-1960s, mainframe designs have included several subsidiary computers (called channels or peripheral processors) which manage the I/O devices, leaving the CPU free to deal only with high-speed memory. It is common in mainframe shops to deal with massive databases and files. Giga-record or tera-record files are not unusual.
Compared to a typical PC, mainframes commonly have hundreds to thousands of times as much data storage online, and can access it much faster.
Mainframe return on investment (ROI), like that of any other computing platform, depends on the platform’s ability to scale, support mixed workloads, reduce labor costs, and deliver uninterrupted service for critical business applications, along with several other risk-adjusted cost factors. Some argue that the modern mainframe is not cost-effective. Hewlett-Packard and Dell unsurprisingly take that view, at least at times, and so do a few independent analysts. Sun Microsystems used to take that view but, beginning in mid-2007, started promoting its new partnership with IBM, including probable support for the company’s OpenSolaris operating system running on IBM mainframes. The general consensus (held by Gartner and other independent analysts) is that the modern mainframe often has unique value and superior cost-effectiveness, especially for large scale enterprise computing. In fact, Hewlett-Packard also continues to manufacture what is arguably its own mainframe, the NonStop system originally created by Tandem. Logical partitioning is now found in many UNIX-based servers, and many vendors are promoting virtualization technologies, in many ways validating the mainframe’s design accomplishments.
Mainframes also have unique execution integrity characteristics for fault tolerant computing. System z9 servers execute each instruction twice, compare results, and shift workloads “in flight” to functioning processors, including spares, without any impact on applications or users. This feature, also found in HP’s NonStop systems, is known as lock-stepping, because both processors take their “steps” (i.e. instructions) together. Not all applications absolutely need the assured integrity that these systems provide, but many do, such as financial transaction processing.
Despite these differences, the IBM mainframe, in particular, is still a general purpose business computer in terms of its support for a wide variety of popular operating systems, middleware, and applications.
Several manufacturers produced mainframe computers from the late 1950s through the 1970s. The group was first known as “IBM and the Seven Dwarfs”: IBM plus Burroughs, UNIVAC, NCR, Control Data, Honeywell, General Electric and RCA. Later, as the group shrank, it was referred to as IBM and the BUNCH. IBM’s dominance grew out of its 700/7000 series and, later, the development of the 360 series mainframes. The latter architecture has continued to evolve into the current zSeries/z9 mainframes which, along with the then Burroughs and now Unisys MCP-based mainframes, are among the few mainframe architectures still extant that can trace their roots to this early period. That said, while they can still run 24-bit System/360 code, the 64-bit zSeries and System z9 CMOS servers have nothing physically in common with the older systems. Notable manufacturers outside the USA were Siemens and Telefunken in Germany, ICL in the United Kingdom, and Fujitsu, Hitachi, Oki, and NEC in Japan. The Soviet Union and Warsaw Pact countries manufactured close copies of IBM mainframes during the Cold War; the Strela is an example of an independently designed Soviet computer.

Shrinking demand and tough competition caused a shakeout in the market during the 1970s and early 1980s – RCA sold out to UNIVAC and GE also left; Honeywell was bought out by Bull; UNIVAC became a division of Sperry, which later merged with Burroughs to form Unisys Corporation in 1986. In 1991, AT&T briefly owned NCR. During the same period, companies found that servers based on microcomputer designs could be deployed at a fraction of the acquisition price and offer local users much greater control over their own systems given the IT policies and practices at that time. Terminals used for interacting with mainframe systems were gradually replaced by personal computers. Consequently, demand plummeted and new mainframe installations were restricted mainly to financial services and government.
In the early 1990s, there was a consensus among industry analysts that the mainframe was a dying market as mainframe platforms were increasingly replaced by personal computer networks.

That trend started to turn around in the late 1990s as corporations found new uses for their existing mainframes and as the price of data networking collapsed in most parts of the world. The growth of e-business also dramatically increased the number of back-end transactions processed by mainframe software, as well as the size and throughput of databases. Another factor currently increasing mainframe use is the development of the Linux operating system, which can run on many mainframe systems, typically in virtual machines. Linux allows users to take advantage of open source software combined with mainframe hardware RAS. Rapid expansion and development in emerging markets, particularly China, is also spurring major mainframe investments to solve exceptionally difficult computing problems, e.g. providing unified, extremely high volume online transaction processing databases for 1 billion consumers across multiple industries (banking, insurance, credit reporting, government services, etc.).
Mainframes vs. supercomputers:
The distinction between supercomputers and mainframes is not a hard and fast one, but supercomputers generally focus on problems which are limited by calculation speed while mainframes focus on problems which are limited by input/output and reliability (“throughput computing”) and on solving multiple business problems concurrently (mixed workload). The differences and similarities include:
- Both types of systems offer parallel processing. Supercomputers typically expose it to the programmer in complex ways, while mainframes typically use it to run multiple tasks. One result of this difference is that adding processors to a mainframe often speeds up the entire workload transparently.
- Supercomputers are optimized for complicated computations that take place largely in memory, while mainframes are optimized for comparatively simple computations involving huge amounts of external data. For example, weather forecasting is suited to supercomputers, and insurance business or payroll processing applications are more suited to mainframes.
- Supercomputers are often purpose-built for one or a very few specific institutional tasks (e.g. simulation and modeling). Mainframes typically handle a wider variety of tasks (e.g. data processing, warehousing). Consequently, most supercomputers can be one-off designs, whereas mainframes typically form part of a manufacturer’s standard model lineup.
- Mainframes tend to have numerous ancillary service processors assisting their main central processors (for cryptographic support, I/O handling, monitoring, memory handling, etc.) so that the actual “processor count” is much higher than would otherwise be obvious. Supercomputer design tends not to include as many service processors since they don’t appreciably add to raw number-crunching power.
There has been some blurring of the term “mainframe,” with some PC and server vendors referring to their systems as “mainframes” or “mainframe-like.” This is not widely accepted and the market generally recognizes that mainframes are genuinely and demonstrably different.
Speed and performance:
The CPU speed of mainframes has historically been measured in millions of instructions per second (MIPS). MIPS have been used as an easy comparative rating of the speed and capacity of mainframes. The smallest System z9 IBM mainframes today run at about 26 MIPS and the largest about 17,801 MIPS. IBM’s Parallel Sysplex technology can join up to 32 of these systems, making them behave like a single, logical computing facility of as much as about 569,632 MIPS.

The MIPS measurement has long been known to be misleading and has often been parodied as “Meaningless Indicator of Processor Speed.” The complex CPU architectures of modern mainframes have reduced the relevance of MIPS ratings to the actual number of instructions executed. Likewise, the modern “balanced performance” system designs focus both on CPU power and on I/O capacity, and virtualization capabilities make comparative measurements even more difficult. See benchmark (computing) for a brief discussion of the difficulties in benchmarking such systems. IBM has long published a set of LSPR (Large System Performance Reference) ratio tables for mainframes that take into account different types of workloads and are a more representative measurement. However, these comparisons are not available for non-IBM systems. It takes a fair amount of work (and maybe guesswork) for users to determine what type of workload they have and then apply only the LSPR values most relevant to them.

To give some idea of real world experience, it is typical for a single mainframe CPU to execute the equivalent of 50, 100, or even more distributed processors’ worth of business activity, depending on the workloads. Merely counting processors to compare server platforms is extremely perilous.
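The Parallel Sysplex figure quoted above is simply the largest single-system rating multiplied by the maximum number of joined systems; a quick sketch of the arithmetic:

```python
# Approximate System z9 figures quoted above (MIPS is only a rough comparative rating).
largest_single_system_mips = 17_801
sysplex_max_systems = 32

combined = largest_single_system_mips * sysplex_max_systems
print(combined)  # 569632
```

As the section notes, MIPS comparisons are perilous; treat such totals as capacity-planning shorthand rather than a benchmark result.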
January 16, 2008
Nanotechnology refers broadly to a field of applied science and technology whose unifying theme is the control of matter on the atomic and molecular scale, normally 1 to 100 nanometers, and the fabrication of devices with critical dimensions that lie within that size range.
It is a highly multidisciplinary field, drawing from fields such as applied physics, materials science, interface and colloid science, device physics, supramolecular chemistry (which refers to the area of chemistry that focuses on the noncovalent bonding interactions of molecules), self-replicating machines and robotics, chemical engineering, mechanical engineering, biological engineering, and electrical engineering. Much speculation exists as to what may result from these lines of research. Nanotechnology can be seen as an extension of existing sciences into the nanoscale, or as a recasting of existing sciences using a newer, more modern term.
Two main approaches are used in nanotechnology. In the “bottom-up” approach, materials and devices are built from molecular components which assemble themselves chemically by principles of molecular recognition. In the “top-down” approach, nano-objects are constructed from larger entities without atomic-level control. The impetus for nanotechnology comes from a renewed interest in Interface and Colloid Science, coupled with a new generation of analytical tools such as the atomic force microscope (AFM), and the scanning tunneling microscope (STM). Combined with refined processes such as electron beam lithography and molecular beam epitaxy, these instruments allow the deliberate manipulation of nanostructures, and led to the observation of novel phenomena.
Examples of nanotechnology in modern use are the manufacture of polymers based on molecular structure, and the design of computer chip layouts based on surface science. Despite the great promise of numerous nanotechnologies such as quantum dots and nanotubes, real commercial applications have mainly used the advantages of colloidal nanoparticles in bulk form, such as suntan lotion, cosmetics, protective coatings, drug delivery, and stain resistant clothing.
Origins:

The first use of the concepts of ‘nano-technology’ (though predating use of that name) was in “There’s Plenty of Room at the Bottom,” a talk given by physicist Richard Feynman at an American Physical Society meeting at Caltech on December 29, 1959. Feynman described a process by which the ability to manipulate individual atoms and molecules might be developed, using one set of precise tools to build and operate another proportionally smaller set, and so on down to the needed scale. In the course of this, he noted, scaling issues would arise from the changing magnitude of various physical phenomena: gravity would become less important, surface tension and Van der Waals attraction would become more important, etc. This basic idea appears plausible, and exponential assembly enhances it with parallelism to produce a useful quantity of end products.

The term “nanotechnology” was defined by Tokyo Science University Professor Norio Taniguchi in a 1974 paper (N. Taniguchi, “On the Basic Concept of ‘Nano-Technology’,” Proc. Intl. Conf. Prod. London, Part II, British Society of Precision Engineering, 1974) as follows: “‘Nano-technology’ mainly consists of the processing of, separation, consolidation, and deformation of materials by one atom or by one molecule.” In the 1980s the basic idea of this definition was explored in much more depth by Dr. K. Eric Drexler, who promoted the technological significance of nano-scale phenomena and devices through speeches and the books Engines of Creation: The Coming Era of Nanotechnology (1986) and Nanosystems: Molecular Machinery, Manufacturing, and Computation, and so the term acquired its current sense.

Nanotechnology and nanoscience got started in the early 1980s with two major developments: the birth of cluster science and the invention of the scanning tunneling microscope (STM). These developments led to the discovery of fullerenes in 1985 and carbon nanotubes a few years later.
In another development, the synthesis and properties of semiconductor nanocrystals were studied; this led to a rapidly increasing number of semiconductor nanoparticles, known as quantum dots, as well as metal oxide nanoparticles. The atomic force microscope (AFM) was invented five years after the STM.
One nanometer (nm) is one billionth, or 10⁻⁹, of a meter. For comparison, typical carbon-carbon bond lengths, or the spacing between these atoms in a molecule, are in the range 0.12–0.15 nm, and a DNA double helix has a diameter of around 2 nm. On the other hand, the smallest cellular life forms, the bacteria of the genus Mycoplasma, are around 200 nm in length. To put that scale into context, the comparative size of a nanometer to a meter is the same as that of a marble to the size of the Earth. Or another way of putting it: a nanometer is the amount a man’s beard grows in the time it takes him to raise the razor to his face.

Larger to smaller: a materials perspective:
A number of physical phenomena become noticeably pronounced as the size of the system decreases. These include statistical mechanical effects, as well as quantum mechanical effects, for example the “quantum size effect” where the electronic properties of solids are altered with great reductions in particle size. This effect does not come into play by going from macro to micro dimensions. However, it becomes dominant when the nanometer size range is reached. Additionally, a number of physical (mechanical, electrical, optical, etc.) properties change when compared to macroscopic systems. One example is the increase in surface area to volume ratio, altering the mechanical, thermal and catalytic properties of materials. Novel mechanical properties of nanosystems are of interest in nanomechanics research. The catalytic activity of nanomaterials also opens potential risks in their interaction with biomaterials.

Materials reduced to the nanoscale can suddenly show very different properties compared to what they exhibit on a macroscale, enabling unique applications. For instance, opaque substances become transparent (copper); inert materials become catalysts (platinum); stable materials turn combustible (aluminum); solids turn into liquids at room temperature (gold); insulators become conductors (silicon). A material such as gold, which is chemically inert at normal scales, can serve as a potent chemical catalyst at nanoscales. Much of the fascination with nanotechnology stems from these unique quantum and surface phenomena that matter exhibits at the nanoscale.
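The surface-area-to-volume effect mentioned above is easy to quantify for an idealized spherical particle; a short Python sketch (my illustration, not from the original article):

```python
import math

def sa_to_v(radius_m: float) -> float:
    """Surface-area-to-volume ratio of a sphere; algebraically this reduces to 3 / r."""
    area = 4 * math.pi * radius_m ** 2
    volume = (4 / 3) * math.pi * radius_m ** 3
    return area / volume

# Shrinking a particle from 1 mm to 1 nm raises SA/V by six orders of magnitude,
# which is why surface-dominated behavior (catalysis, reactivity) takes over.
for r in (1e-3, 1e-6, 1e-9):
    print(f"r = {r:.0e} m  ->  SA/V = {sa_to_v(r):.1e} per meter")
```

Since the ratio scales as 3/r, every thousand-fold reduction in radius multiplies the fraction of atoms sitting at the surface a thousand-fold, which is the root of the changed catalytic and thermal properties described above.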
Simple to complex: a molecular perspective:
Modern synthetic chemistry has reached the point where it is possible to prepare small molecules to almost any structure. These methods are used today to produce a wide variety of useful chemicals such as pharmaceuticals or commercial polymers. This ability raises the question of extending this kind of control to the next-larger level, seeking methods to assemble these single molecules into supramolecular assemblies consisting of many molecules arranged in a well defined manner.

These approaches utilize the concepts of molecular self-assembly and/or supramolecular chemistry so that components arrange themselves automatically into some useful conformation – a bottom-up approach. The concept of molecular recognition is especially important: molecules can be designed so that a specific conformation or arrangement is favored due to non-covalent intermolecular forces. The Watson-Crick basepairing rules are a direct result of this, as is the specificity of an enzyme being targeted to a single substrate, or the specific folding of a protein itself. Thus, two or more components can be designed to be complementary and mutually attractive so that they make a more complex and useful whole.
Such bottom-up approaches should, broadly speaking, be able to produce devices in parallel and much cheaper than top-down methods, but could potentially be overwhelmed as the size and complexity of the desired assembly increases. Most useful structures require complex and thermodynamically unlikely arrangements of atoms. Nevertheless, there are many examples of self-assembly based on molecular recognition in biology, most notably Watson-Crick basepairing and enzyme–substrate interactions. The challenge for nanotechnology is whether these principles can be used to engineer novel constructs in addition to natural ones.
Molecular nanotechnology: a long-term view:
Molecular nanotechnology, sometimes called molecular manufacturing, is a term given to the concept of engineered nanosystems (nanoscale machines) operating on the molecular scale. It is especially associated with the concept of a molecular assembler, a machine that can produce a desired structure or device atom-by-atom using the principles of mechanosynthesis. Manufacturing in the context of productive nanosystems is not related to, and should be clearly distinguished from, the conventional technologies used to manufacture nanomaterials such as carbon nanotubes and nanoparticles.
When the term “nanotechnology” was independently coined and popularized by Eric Drexler (who at the time was unaware of an earlier usage by Norio Taniguchi) it referred to a future manufacturing technology based on molecular machine systems. The premise was that molecular-scale biological analogies of traditional machine components demonstrated that molecular machines were possible: from the countless examples found in biology, it is known that sophisticated, stochastically optimised biological machines can be produced.
It is hoped that developments in nanotechnology will make possible their construction by some other means, perhaps using biomimetic principles. However, Drexler and other researchers have proposed that advanced nanotechnology, although perhaps initially implemented by biomimetic means, ultimately could be based on mechanical engineering principles, namely, a manufacturing technology based on the mechanical functionality of these components (such as gears, bearings, motors, and structural members) that would enable programmable, positional assembly to atomic specification (PNAS-1981). The physics and engineering performance of exemplar designs were analyzed in Drexler’s book Nanosystems.
But Drexler’s analysis is very qualitative and does not address very pressing issues, such as the “fat fingers” and “sticky fingers” problems. In general it is very difficult to assemble devices on the atomic scale, as all one has to position atoms with are other atoms of comparable size and stickiness. Another view, put forth by Carlo Montemagno, is that future nanosystems will be hybrids of silicon technology and biological molecular machines. Yet another view, put forward by the late Richard Smalley, is that mechanosynthesis is impossible due to the difficulties in mechanically manipulating individual molecules.
This led to an exchange of letters in the ACS publication Chemical & Engineering News in 2003. Though biology clearly demonstrates that molecular machine systems are possible, non-biological molecular machines are today only in their infancy. Leaders in research on non-biological molecular machines are Dr. Alex Zettl and his colleagues at Lawrence Berkeley Laboratories and UC Berkeley. They have constructed at least three distinct molecular devices whose motion is controlled from the desktop with changing voltage: a nanotube nanomotor, a molecular actuator, and a nanoelectromechanical relaxation oscillator.
An experiment indicating that positional molecular assembly is possible was performed by Ho and Lee at Cornell University in 1999. They used a scanning tunneling microscope to move an individual carbon monoxide molecule (CO) to an individual iron atom (Fe) sitting on a flat silver crystal, and chemically bound the CO to the Fe by applying a voltage.
Tools and techniques:
The first observations and size measurements of nanoparticles were made during the first decade of the 20th century. They are mostly associated with Zsigmondy, who made a detailed study of gold sols and other nanomaterials with sizes down to 10 nm and less, and who published a book on the subject in 1914. He used an ultramicroscope, which employs a dark-field method to see particles much smaller than the wavelength of light.

There are traditional techniques, developed during the 20th century in Interface and Colloid Science, for characterizing nanomaterials. These are widely used for the first-generation passive nanomaterials specified in the next section.
These methods include several different techniques for characterizing particle size distribution. This characterization is imperative because many materials that are expected to be nano-sized are actually aggregated in solution. Some of these methods are based on light scattering; others apply ultrasound, such as ultrasound attenuation spectroscopy for testing concentrated nano-dispersions and microemulsions.
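Dynamic light scattering, one of the light-scattering methods mentioned above, infers particle size from the measured diffusion coefficient via the Stokes–Einstein relation. The sketch below is a minimal illustration of that relation; the diffusion coefficient and solvent values are assumed for the example, not measured data:

```python
import math

def hydrodynamic_radius(D, T=298.0, viscosity=0.89e-3):
    """Stokes-Einstein relation: r = kT / (6 * pi * eta * D).

    D         -- translational diffusion coefficient in m^2/s
    T         -- absolute temperature in K
    viscosity -- solvent dynamic viscosity in Pa*s (water near 25 C)
    Returns the hydrodynamic radius in metres.
    """
    k_B = 1.380649e-23  # Boltzmann constant, J/K
    return k_B * T / (6 * math.pi * viscosity * D)

# Illustrative value: a measured D of 4.9e-12 m^2/s in water at 25 C
# corresponds to a hydrodynamic radius of roughly 50 nm.
r = hydrodynamic_radius(4.9e-12)
print(f"{r * 1e9:.1f} nm")
```

Note that an aggregated sample yields a larger apparent radius than the primary particles, which is exactly why the characterization step above matters.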
There is also a group of traditional techniques for characterizing the surface charge or zeta potential of nanoparticles in solution. This information is required for proper stabilization of the system, preventing its aggregation or flocculation. These methods include microelectrophoresis, electrophoretic light scattering and electroacoustics. The last one, for instance the colloid vibration current method, is suitable for characterizing concentrated systems.
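In microelectrophoresis, for example, the zeta potential is commonly estimated from the measured electrophoretic mobility using the Smoluchowski approximation (valid for thin electric double layers). A minimal sketch, with an assumed illustrative mobility value rather than real instrument output:

```python
def zeta_potential(mobility, viscosity=0.89e-3, rel_permittivity=78.5):
    """Smoluchowski approximation: zeta = eta * mu / epsilon.

    mobility         -- electrophoretic mobility in m^2/(V*s)
    viscosity        -- solvent dynamic viscosity in Pa*s (water near 25 C)
    rel_permittivity -- relative permittivity of the solvent (water)
    Returns the zeta potential in volts.
    """
    eps0 = 8.854e-12  # vacuum permittivity, F/m
    return viscosity * mobility / (rel_permittivity * eps0)

# Illustrative mobility of -2.0e-8 m^2/(V*s) gives roughly -26 mV.
# As a common rule of thumb, magnitudes above ~25-30 mV suggest enough
# electrostatic repulsion to resist aggregation or flocculation.
zeta_mV = zeta_potential(-2.0e-8) * 1e3
print(f"{zeta_mV:.1f} mV")
```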
The next group of nanotechnological techniques includes those used for fabrication of nanowires; those used in semiconductor fabrication such as deep ultraviolet lithography, electron beam lithography, focused ion beam machining, nanoimprint lithography, atomic layer deposition, and molecular vapor deposition; and molecular self-assembly techniques such as those employing di-block copolymers. However, all of these techniques preceded the nanotech era: they are extensions of earlier scientific advances rather than techniques devised solely for creating nanotechnology or resulting from nanotechnology research.
There are several important modern developments. The atomic force microscope (AFM) and the Scanning Tunneling Microscope (STM) are two early versions of the scanning probes that launched nanotechnology. There are other types of scanning probe microscopy, all flowing from the ideas of the scanning confocal microscope developed by Marvin Minsky in 1961 and the scanning acoustic microscope (SAM) developed by Calvin Quate and coworkers in the 1970s, that made it possible to see structures at the nanoscale. The tip of a scanning probe can also be used to manipulate nanostructures (a process called positional assembly). The feature-oriented scanning–positioning methodology suggested by Rostislav Lapshin appears to be a promising way to implement these nanomanipulations in automatic mode. However, this is still a slow process because of the low scanning velocity of the microscope. Various techniques of nanolithography, such as dip pen nanolithography, electron beam lithography and nanoimprint lithography, were also developed. Lithography is a top-down fabrication technique in which a bulk material is reduced in size to a nanoscale pattern.
The top-down approach anticipates nanodevices that must be built piece by piece in stages, much as manufactured items are currently made. Scanning probe microscopy is an important technique both for characterization and for synthesis of nanomaterials. Atomic force microscopes and scanning tunneling microscopes can be used to look at surfaces and to move atoms around. By designing different tips for these microscopes, they can be used for carving out structures on surfaces and to help guide self-assembling structures. Using, for example, the feature-oriented scanning–positioning approach, atoms can be moved around on a surface with scanning probe microscopy techniques. At present this is expensive and time-consuming for mass production but very suitable for laboratory experimentation.
In contrast, bottom-up techniques build or grow larger structures atom by atom or molecule by molecule. These techniques include chemical synthesis, self-assembly and positional assembly. Another variation of the bottom-up approach is molecular beam epitaxy, or MBE. Researchers at Bell Telephone Laboratories, including John R. Arthur, Alfred Y. Cho, and Art C. Gossard, developed and implemented MBE as a research tool in the late 1960s and 1970s. Samples made by MBE were key to the discovery of the fractional quantum Hall effect, for which the 1998 Nobel Prize in Physics was awarded. MBE allows scientists to lay down atomically precise layers of atoms and, in the process, build up complex structures. Important for research on semiconductors, MBE is also widely used to make samples and devices for the newly emerging field of spintronics.
Newer techniques such as Dual Polarisation Interferometry are enabling scientists to measure quantitatively the molecular interactions that take place at the nano-scale.
Although there has been much hype about the potential applications of nanotechnology, most current commercialized applications are limited to the use of “first generation” passive nanomaterials. These include titanium dioxide nanoparticles in sunscreen, cosmetics and some food products; silver nanoparticles in food packaging, clothing, disinfectants and household appliances; zinc oxide nanoparticles in sunscreens and cosmetics, surface coatings, paints and outdoor furniture varnishes; and cerium oxide nanoparticles as a fuel catalyst. The Woodrow Wilson Center for International Scholars’ Project on Emerging Nanotechnologies hosts an inventory of consumer products which now contain nanomaterials. However, further applications which require actual manipulation or arrangement of nanoscale components await further research. Though technologies currently branded with the term ‘nano’ are sometimes little related to, and fall far short of, the most ambitious and transformative technological goals of the sort found in molecular manufacturing proposals, the term still connotes such ideas. Thus there may be a danger that a “nano bubble” will form, or is forming already, from the use of the term by scientists and entrepreneurs to garner funding, regardless of interest in the transformative possibilities of more ambitious and far-sighted work.
The National Science Foundation (a major source of funding for nanotechnology in the United States) funded researcher David Berube to study the field of nanotechnology. His findings are published in the monograph “Nano-Hype: The Truth Behind the Nanotechnology Buzz”. This published study (with a foreword by Mihail Roco, Senior Advisor for Nanotechnology at the National Science Foundation) concludes that much of what is sold as “nanotechnology” is in fact a recasting of straightforward materials science, which is leading to a “nanotech industry built solely on selling nanotubes, nanowires, and the like” which will “end up with a few suppliers selling low margin products in huge volumes.”
Another large and beneficial outcome of nanotechnology is the production of potable water by means of nanofiltration. Much of the developing world lacks access to reliable water sources, and nanotechnology may alleviate these problems pending further testing of the kind already performed in countries such as South Africa. It is important that solute levels in treated water are maintained at the levels needed to provide necessary nutrients to people. Further testing would also be pertinent to check for any signs of nanotoxicity and negative effects on living organisms.