Ready for a mind-bending news story that will forever change your perception of life? Quantum physicists in Israel have successfully entangled two photons that don’t exist at the same time. They create one photon and measure its polarization, destroying it — they then create another photon, and though it never coexisted with the first, it always has the exact opposite polarization, proving they’re entangled.
Don’t worry if you have a little trouble trying to bend your head around this: Quantum mechanics, almost by definition, is completely different from our own perceptions and experiences, which are governed by classical mechanics. Believe it or not, quantum mechanics actually has no problem with the behavior demonstrated by the Israeli physicists — entanglement was never a tangible, physical property, and this experiment is a perfect example of why it’s sometimes very naive to boil quantum ideas down into classical analogies.
Entanglement is a condition in which the states of two quantum particles (photons, for example) are intrinsically and absolutely linked. Quantum particles, due to a principle called quantum superposition, exist in every theoretically possible state at the same time. A photon, for example, spins horizontally and vertically (different polarizations) at the same time. When you measure a quantum particle, though, it fixes on a single state. With entanglement, when you measure one half of the entangled pair, the other half instantly assumes the exact opposite state. If you measure one photon and it’s vertically polarized, its entangled sibling will be horizontally polarized.
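The flavor of this anti-correlation can be sketched in a few lines of code. This is deliberately a classical stand-in (a shared coin flip), not a quantum simulation; it only reproduces the measurement statistics described above:

```python
import random

# Toy bookkeeping model (NOT real quantum mechanics): we only track the
# prediction described above, that measuring one photon of an entangled
# pair fixes its partner to the opposite polarization.
def measure_entangled_pair():
    first = random.choice(["horizontal", "vertical"])    # outcome is random
    second = "vertical" if first == "horizontal" else "horizontal"
    return first, second

# Every trial shows perfectly anti-correlated outcomes.
results = [measure_entangled_pair() for _ in range(1000)]
assert all(a != b for a, b in results)
```

What the toy model cannot capture, of course, is that neither photon has a definite polarization until one of them is measured.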
Quantum entanglement, between photons that never coexist [Image credit: Science]
As for how the Israelis entangled two photons that never coexist, the technique is rather complex. They start by producing two photons (1 & 2) and entangling them. The first photon (1) is immediately measured, destroying it and fixing the state of the second photon (2). Now a second pair of entangled photons (3 & 4) is created. They then use a technique called “projection measurement” to entangle 2 and 3 — which, by association, entangles 1 and 4. Even though photons 1 and 4 never coexisted, they know the state of 4 is the exact opposite of 1.
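The bookkeeping behind the swap can be sketched classically, encoding polarization as a bit. This is an analogy for the correlations only; the real protocol requires genuine quantum states and a Bell-type projection measurement:

```python
import random

# Classical bookkeeping sketch of the swapping protocol (an analogy only;
# the actual experiment manipulates full quantum states).
# Encode polarization as a bit; an entangled pair is anti-correlated.
def entangled_pair():
    a = random.randint(0, 1)
    return a, 1 - a

p1, p2 = entangled_pair()      # create and entangle photons 1 & 2
# photon 1 is measured (and destroyed) here; its value p1 is recorded
p3, p4 = entangled_pair()      # create entangled photons 3 & 4

# The projection measurement on photons 2 & 3 reveals their relation.
parity_23 = p2 ^ p3

# Because 1 = NOT 2 and 4 = NOT 3, the 1-4 relation follows for free:
parity_14 = p1 ^ p4
assert parity_14 == parity_23  # knowing 2 & 3 tells you how 4 relates to 1
```

The arithmetic shows why the 2–3 measurement is enough: it pins down the relationship between photons 1 and 4 even though they never coexisted.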
As we’ve covered before, entanglement seems to occur instantly, even if the particles are on opposite ends of the universe. This experiment shows how entanglement exists through time, as well as space — or, in scientific terms, the non-locality of quantum mechanics in spacetime.
Does this experiment have any implications, beyond its use as a sublime example of the weirdness of quantum mechanics? As always with quantum entanglement, there is a possibility that “projection measurement” could be used in quantum networks. Instead of waiting for one half of the entangled pair to arrive at its destination (along a normal fiber optic network), this two-pair approach would allow the sender to manipulate his photon instantly. As Anton Zeilinger, a quantum physicist not involved with the study, tells Science: “This sort of thing opens up people’s minds and suddenly somebody has an idea to use it in quantum computing or something.”
Research paper: arXiv:1209.4191 - “Entanglement Between Photons that have Never Coexisted”
Two European theoretical physicists have shown that it may be possible to build a near-perfect, entangled quantum battery. In the future, such quantum batteries might power the tiniest of devices — or provide power storage that is much more efficient than state-of-the-art lithium-ion battery packs.
To understand the concept of quantum batteries, we need to start (unsurprisingly) at a very low level. Today, most devices and machines that you interact with are governed by the rules of classical mechanics (Newton’s laws, friction, and so on). Classical mechanics are very accurate for larger systems, but they fall apart as we begin to analyze microscopic (atomic and sub-atomic) systems — which led to a new set of laws and theories that describe quantum mechanics.
In recent years, as our ability to observe and manipulate quantum systems has grown — thanks to machines such as the Large Hadron Collider and scanning tunneling electron microscopes — physicists have started theorizing about devices and machines that use quantum mechanics, rather than classical. In theory, these devices could be much smaller, more efficient, or simply act in rather surprising ways. In this case, Robert Alicki of the University of Gdansk in Poland, and Mark Fannes of the University of Leuven in Belgium, have defined a battery that stores and releases energy using quantum mechanics.
The increasing amount of energy that can be extracted from a quantum battery, as you increase the number of entangled copies. This graph probably won’t make much sense unless you’re a quantum physicist.
A quantum system (say, the single proton and electron in a hydrogen atom) has a quantum state, defined by the electron’s movements. (Quick aside: In our previous discussions of spintronics and quantum computing, it is the spin of the electron (clockwise, counterclockwise, etc.) that is converted into a qubit value). Some quantum states have a very small amount of energy that can be extracted, returning it to a passive, neutral state. In theory, according to Alicki and Fannes, it should be possible to build a quantum battery that is full of energy-rich quantum states — and then, somehow, recharge it when you run out of juice.
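As a toy illustration of “extractable work,” here is a single-qubit sketch. The paper’s setting is far more general; the function and numbers below are illustrative assumptions, not the authors’ formalism:

```python
# Toy single-qubit version of "extractable work": energy above the
# passive state (the one with the larger population in the lower level)
# can in principle be extracted. Populations and energies are assumed.
def qubit_ergotropy(p_excited, level_energy):
    p_ground = 1 - p_excited
    # the passive state puts the larger population in the lower level
    passive_energy = min(p_ground, p_excited) * level_energy
    current_energy = p_excited * level_energy
    return current_energy - passive_energy

print(qubit_ergotropy(0.9, 1.0))  # mostly excited: ~0.8 units extractable
print(qubit_ergotropy(0.3, 1.0))  # mostly ground: already passive, 0.0
```

Alicki and Fannes’ point is that entangling many such systems lets collective operations extract more of this energy than operating on each copy separately.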
Better yet, the physicists also theorize that quantum entanglement could be used to create an even more efficient quantum battery. In essence, Alicki and Fannes say that you can link together any number of quantum batteries, allowing you to extract all of the stored energy in one big gulp (pictured above). Their research paper goes on to say that with enough entanglement, these batteries would be perfect — with no energy lost/wasted during charge or discharge.
As the Physics Arxiv Blog notes, such perfect energy transfer readily occurs in nature, such as during photosynthesis — but no one knows why. It’s just one possible explanation, of course, but maybe Gaea has a bit of a head start on quantum batteries.
Research paper: arXiv:1211.1209 “Extractable work from ensembles of quantum batteries. Entanglement helps.”
The National Ignition Facility in California has become the first fusion power facility to create a fusion reaction that generates more power than it requires to get the reaction started. This is perhaps the most important step ever towards the always-just-out-of-reach realization of clean, self-sustaining, limitless fusion power.
The NIF, operated by the Lawrence Livermore National Laboratory (LLNL), creates a fusion reaction by focusing the world’s most powerful laser (some 500 terawatts), split into 192 separate beams, onto a small gold cylinder (called a hohlraum) that holds a capsule containing a mix of deuterium and tritium (isotopes of hydrogen), situated in a fusion chamber (pictured above). The lasers strike the hohlraum (pictured bottom) with precision timing, bathing the capsule in X-rays, which creates a massive reaction force that causes the deuterium/tritium fuel to perfectly and uniformly implode — hopefully starting a fusion reaction. This process is called inertial confinement fusion, as opposed to magnetic confinement fusion, which before today was generally considered to be a more mature technology.
For the past half-century, fusion power has always remained tantalizingly out of reach. Whenever we think we’re getting close, another roadblock pushes us back a few years. This has led to the coining of the phrase, “fusion is always 20 years away.” The irritating thing is, we have an almost complete understanding of how fusion should work, but transporting those theoretical ideas into the physical universe has proven to be surprisingly difficult, usually due to various system or material inefficiencies that weren’t evident until they were put under enormous strain.
With this latest breakthrough, it seems that the NIF overcame an inefficiency in how energy from the laser beams is transferred to the D/T (deuterium/tritium) fuel. Previously, the ablator — a plastic shell that surrounds the D/T fuel — had been breaking up improperly and interfering with the implosion. By changing the shape of the laser pulse, the scientists reduced the asymmetry of the explosion/implosion and the break-up of the ablator, thus increasing overall efficiency. The end result is that the fusion of the D/T fuel produced more energy than the energy delivered to the hohlraum — but the overall energy consumption, measured at the (large) wall outlet, was still higher than the energy produced. This is due to other inefficiencies in the system.
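The energy accounting can be made concrete with a quick sketch. All numbers below are round, assumed values for illustration, not NIF’s published figures:

```python
# Illustrative energy bookkeeping (round assumed numbers, not NIF data).
# The milestone compares fusion output to laser energy delivered to the
# target; wall-plug gain is far lower because the laser is inefficient.
laser_to_target_MJ = 1.8      # energy the 192 beams deliver (assumed)
fusion_output_MJ   = 2.0      # energy released by the D/T fuel (assumed)
wall_plug_input_MJ = 400.0    # grid energy to charge the laser (assumed)

scientific_gain = fusion_output_MJ / laser_to_target_MJ
wall_plug_gain  = fusion_output_MJ / wall_plug_input_MJ

print(f"target gain:    {scientific_gain:.2f}")   # > 1: the milestone
print(f"wall-plug gain: {wall_plug_gain:.4f}")    # << 1: why this isn't a power plant
```

The gap between those two ratios is exactly the “other inefficiencies in the system” the article mentions.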
The next step, of course, is actually achieving the NIF’s eponymous goal: ignition. To do this, the facility will have to develop a system that is efficient enough that the fusion reaction actually creates enough energy to sustain itself. Realistically, we still have some way to go — and even if NIF does reach ignition, it’s not set up to act as a fusion power plant; it’s a research facility, nothing more. As far as actual, usable fusion power goes, the European ITER fusion reactor in France is probably our best bet, with a tentative timeline of 2027 for the first D/T fusion.
For a lot more information about the National Ignition Facility, hit up our feature story, Inside California’s star power fusion facility. Not only is it full of data on fusion power, but there’s tons of beautiful photos, too. If that doesn’t slake your thirst for pretty photos, 500MW from half a gram of hydrogen: The hunt for fusion power heats up has a few more that you might like, too.
One of the main impediments to making electric vehicles (EVs) a viable option for mainstream adoption continues to be the weight of batteries. Even with next-generation li-ion cells like those used in Tesla vehicles, it’s very difficult and expensive to get enough juice packed into such a small space. Volvo has been toying with EVs for a few years now, but the Swedish auto manufacturer has just announced a potentially revolutionary approach to designing electric vehicles. Volvo wants to replace some of the steel body panels in its cars with carbon fiber composite materials that can store power like a battery. Using a standard Volvo S80 as a test platform, the company replaced the trunk lid, door panels, and hood with this new material. The rechargeable panels are composed of multiple layers of carbon fiber, which are insulated from each other by fiberglass inserts. The layers of polymer-infused carbon fiber act as the electrodes of supercapacitors built into the car’s skin. The result is a structural component that can be charged like the battery in any other EV, either with regenerative braking or by plugging the vehicle into the power grid.
Because the battery panels in this test car are replacing heavy structural components while doing away with the centralized battery, the total weight is reduced dramatically. Volvo estimates that replacing an EV’s entire power system with battery-infused panels like this would reduce the weight by 15%. To put that in context, Tesla’s entry-level battery pack weighs over 1,000 pounds. Perhaps even more importantly, the weight will be evenly distributed to allow for better handling and less engineering hassle.
Beyond the realm of fully electric vehicles, even replacing a few parts of conventional cars could do away with the need for heavy 12-volt batteries that power the starter, lights, and other components. In the middle ground, hybrid vehicles could also benefit from rechargeable paneling.
This approach to powering EVs is not without its drawbacks, though. While carbon fiber is very strong, some parts of a car are designed to collapse in the event of an accident. By absorbing some of the energy from impact, the crumple zones can protect your squishy human body from the worst of it. That means more damage to the power paneling in places like the hood and trunk, which will be expensive to repair. Emergency crews could also face a new challenge trying to rescue people from such a vehicle after a crash — they would essentially be trying to fish someone out of a giant damaged battery.
Volvo didn’t say when it might use the nanomaterial in a production vehicle, but was keen to point out the sustainability of such an approach. If Volvo does put structural batteries into production, it will need to work on lowering the cost of carbon fiber, which is still quite high.
Ending silicon’s central role in transistors could maintain the march of Moore’s Law
The transistor isn’t shrinking the way it used to. The best ones we have today are a patchwork of fixes and kludges: speed-boosting materials that push or pull on the silicon center, exotic insulators added to stanch leaks, and a new geometry that pops things out of the plane of the chip and into the third dimension. Now, to keep Moore’s Law going, chipmakers are eyeing another monumental change in transistor architecture.
This time, they’re taking aim at the current-carrying channels at the very heart of the device, replacing the silicon there with germanium and compound semiconductors known as III-Vs. If all goes well, these materials could usher in a new generation of speedier, less power-hungry transistors, allowing for denser, faster, cooler-running chips.
But for alternate transistor channels to be accepted, engineers must find a way to build them on industry-standard silicon wafers. That’s no small feat. The atoms in the alternative semiconductors are spaced farther apart than in silicon, making the crystals difficult to grow without creating device-killing defects.
Still, industry experts say, it is quite possible that silicon fabs will ramp up production of these transistors as early as 2017. One promising approach, under development in Belgium, saves on materials and minimizes defects by precisely depositing the new materials into nanometer-scale trenches etched into standard silicon wafers. The resulting chips could trim energy consumption at data centers, boost the battery life of mobile devices, and help keep Moore’s Law going well into the next decade.
Modern transistors are built into silicon wafers through the addition of trace amounts of other materials, called dopants. Dopant atoms alter the electronic properties of the material in order to form the three core parts of the transistor: the source and drain regions, which spit out and receive charge carriers, and the current-carrying channel, which runs between them.
For decades, chipmakers could speed up their microprocessors simply by shrinking the transistors and packing more of them onto a chip. They relied on a basic rule: Smaller transistors switch faster and consume less energy in the process. But in the late 1990s, this rule started to break down. As chips got more and more dense, power consumption began to put circuits at risk of overheating.
One way to tackle this heat problem is to lower the supply voltage—the voltage that is applied to the drain to pull charge carriers across the channel. This reduces power consumption, but it also means that less current is available to charge capacitors down the line, ultimately resulting in less speedy circuits.
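The trade-off follows from the classic dynamic-power relation for CMOS circuits, P = αCV²f (activity factor, capacitance, supply voltage, clock frequency). The component values below are illustrative assumptions:

```python
# Sketch of the dynamic-power relation P = a * C * V^2 * f for CMOS.
# Values are illustrative, not taken from any particular chip.
def dynamic_power(activity, capacitance_F, v_supply, freq_Hz):
    return activity * capacitance_F * v_supply**2 * freq_Hz

p_high = dynamic_power(0.1, 1e-9, 1.1, 3e9)
p_low  = dynamic_power(0.1, 1e-9, 0.9, 3e9)

# Dropping the supply from 1.1 V to 0.9 V cuts dynamic power by ~33%...
print(f"power saved: {1 - p_low / p_high:.0%}")
# ...but with less voltage pulling carriers across the channel, less
# current is available to charge the next stage, so circuits run slower.
```

The quadratic dependence on voltage is why supply-voltage reduction is such a tempting lever, and why the resulting loss of drive current is the price chipmakers keep trying to buy back.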
Indeed, by the mid-2000s, CPU clock speeds began to stall. Companies began to work around the problem at the processor level by introducing multiple cores. But heat problems have persisted, and with each successive jump in transistor density, the fraction of transistors that may be active at any one time has gotten smaller.
At the same time, chipmakers have devised new ways to boost performance without adding more heat. One early strategy, debuted by Intel in 2002, was to mix silicon with germanium in the source and drain regions of the transistor. Atoms in this alloy are spaced differently than in pure silicon. The resulting strain alters the crystal properties—and thus the electrical properties—of the silicon channel, boosting the speed with which an electron or a hole (the absence of an electron that responds to an electric field as if it were a positive charge) could be tugged through the device. This hike in mobility resulted in faster-switching transistors that can carry more current for a given voltage, which makes for faster circuits, too.
Now chipmakers are adapting this basic strategy to make a more drastic change: the wholesale replacement of the silicon channel. A few materials have emerged as front-runners for the two kinds of transistors needed for logic circuits. For the positive-channel field-effect transistor (pFET), which carries holes across the channel, the leading candidate is germanium, which sits just below silicon on the periodic table and can transport charge four times as fast. For the negative-channel FET, or nFET, which depends on the movement of electrons, engineers are considering a mix of elements from groups III and V of the periodic table. One of the most promising is indium gallium arsenide (InGaAs), which boasts an electron mobility of about 10 000 square centimeters per volt second, more than six times that of silicon.
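A quick comparison puts these mobility figures side by side. The InGaAs number comes from the text; the silicon and germanium values are typical textbook bulk figures (assumptions, since actual channel mobilities differ with strain and geometry):

```python
# Rough mobility comparison behind the channel-material choices.
# InGaAs value from the article; Si and Ge values are typical bulk
# textbook numbers (assumptions; real channel mobilities differ).
mobility_cm2_per_Vs = {
    "Si electrons":     1400,
    "InGaAs electrons": 10000,   # "more than six times that of silicon"
    "Si holes":         450,
    "Ge holes":         1900,    # ~4x silicon's hole mobility
}

e_ratio = mobility_cm2_per_Vs["InGaAs electrons"] / mobility_cm2_per_Vs["Si electrons"]
h_ratio = mobility_cm2_per_Vs["Ge holes"] / mobility_cm2_per_Vs["Si holes"]
print(f"InGaAs/Si electron mobility: {e_ratio:.1f}x")
print(f"Ge/Si hole mobility:         {h_ratio:.1f}x")
```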
Intel, which has traditionally led the industry in transistor design changes, has already done some work on alternative transistor channel materials. In 2009, the company reported it had made InGaAs devices with a gate length of 80 nanometers. Although twice as long as what was then state of the art for plain silicon chips, they were shown to perform just as well with less power. The company has since incorporated the materials into new 3-D devices, called FinFETs, which have channels that pop out of the plane of the wafer.
But to build its InGaAs transistors, Intel had to blanket an entire silicon wafer with a fairly thick layer of the III-V material, then etch away the unneeded areas. That’s too expensive for high-volume production, says Richard Hill of the U.S.-based nonprofit Sematech, a chip industry research consortium.
The future, Hill says, lies in the alternative approach pioneered by Imec, a research outfit headquartered in Leuven, Belgium. There, a team of engineers, now 50 strong, has been working for more than 10 years on a way to grow each of the billions of transistor channels on a silicon chip in trenches just tens of nanometers across.
The approach is so attractive that last year Sematech abandoned its own wafer-blanketing approach to follow suit. And although Imec cannot disclose which industry heavyweights may want to use the approach, there are strong indications of interest. “[It’s] a very valuable option that we are taking into consideration,” says Lukas Czornomaz, a researcher in the Advanced Functional Materials Group at IBM Research–Zurich.
Imec’s work is based on a simple axiom of crystal growth: The right geometry can make all the difference. The Polish chemist Jan Czochralski discovered this in 1916 when he showed that it’s possible to make nearly perfect crystals by drawing a seed crystal from a bath of molten metal. A key lesson was that growing material in narrow columns limits defects. The most common defect occurs when an atom fails to adhere to the right spot, causing an entire plane of atoms to go missing down the line. Fortunately, these defects tend to propagate at an angle of around 45 degrees to the direction of growth, and if crystal growth starts with a long, narrow neck, the dislocation will generally propagate for just a short distance before reaching the edge, where it terminates.
Growing each of the billions of nanometer-scale transistor channels in a tiny vat to make a chip would be impractical. But engineers can still take advantage of this geometric “necking” effect when growing crystals in vapor-filled reactors. The pioneer of this approach was Eugene Fitzgerald, a professor in the materials science and engineering department at MIT. In the 1990s, while based at Bell Laboratories, he showed that small patches of III-V material could be built on silicon if the “neck” that begins the crystal is built into the bottom of a rectangular trench that’s about twice as deep as it is wide. By the time the material is flush with the surrounding silicon surface, most of the defects have ended at one of the trench’s sidewalls [see illustration, “Where the Defects Stop”].
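The geometry of this “necking” rule can be sketched numerically. With defects propagating at roughly 45 degrees, a defect starting at the trench bottom travels about one trench-width upward before terminating at a sidewall, which is why a roughly 2:1 depth-to-width ratio leaves clean material on top. The trench width below is an assumed value:

```python
import math

# Geometry of aspect-ratio trapping: defects propagate at ~45 degrees to
# the growth direction, so the highest point a defect can reach before
# hitting a sidewall is about one trench-width above the bottom.
def defect_trap_height(trench_width_nm, defect_angle_deg=45.0):
    # a defect crosses the trench after rising width / tan(angle);
    # with 45-degree defects, that is exactly one trench width
    return trench_width_nm / math.tan(math.radians(defect_angle_deg))

width = 20.0                        # nm, assumed trench width
depth = 2 * width                   # "about twice as deep as it is wide"
print(defect_trap_height(width))    # defects die out in the bottom ~20 nm
print(depth - defect_trap_height(width))  # leaving ~20 nm of clean crystal on top
```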
Matty Caymax, a chemist who specializes in postsilicon device fabrication at Imec, set out with his colleagues to see if they could make this approach fast, robust, and reliable enough to work in silicon fabs. Trenches themselves aren’t new to the semiconductor industry: For 15 years, fabs have etched away silicon and then refilled the trenches with silicon dioxide. Such “shallow trench isolation” creates stretches of insulating substrate between transistors so they can be packed closer together with minimal electrical interference.
Because silicon dioxide is noncrystalline, it can be packed into a trench without regard to where each individual atom ends up. Filling troughs with materials that have a high charge-carrier mobility is another matter. To work properly, they must form high-quality crystals, even though the spacing between their atoms is quite different from that of the silicon they are grown on. Germanium atoms are spaced, on average, 0.566 nanometers apart, compared with 0.543 nm for silicon atoms. InGaAs is even worse, with a spacing of 0.59 nm. The basic mismatch easily results in stacking errors.
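The mismatch figures quoted above work out as follows:

```python
# Lattice mismatch relative to silicon, using the spacings quoted above.
si, ge, ingaas = 0.543, 0.566, 0.59   # nm

def mismatch(spacing_nm, substrate_nm=si):
    return (spacing_nm - substrate_nm) / substrate_nm

print(f"Ge on Si:     {mismatch(ge):.1%}")      # ~4.2%
print(f"InGaAs on Si: {mismatch(ingaas):.1%}")  # ~8.7%
# Even a few percent of mismatch is enough to seed stacking faults once
# the film grows more than a few atomic layers thick.
```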
When Caymax and his group began working on alternate channels in 2002, they decided to focus on giving a speed boost to pFETs. The pFET was a natural place to start. Holes don’t move as fast through silicon as electrons do. Without straining the crystal, a silicon pFET might carry only about a quarter as much current as an nFET can, Caymax says. Introducing a higher-mobility material can address that imbalance.
Growing pure germanium on pure silicon was a big jump, so Imec first started working with mixtures of silicon and germanium, and then began experimenting with growing a layer of pure germanium on top of the SiGe mix. The SiGe layer helped ease the mismatch in atomic spacing, reducing the number of defects in the Ge. But Caymax and his colleagues also realized this approach gave them an extra knob to turn. By fine-tuning the ratio of silicon and germanium, the team could compress the germanium channel that lies above it and slightly change the spacing between atoms. Hit the sweet spot—enough silicon to boost mobility but not so much that it degrades crystal quality—and germanium hole mobility could in theory jump by as much as a factor of six.
In 2008, Imec’s engineers reported a record current for a germanium FET with a 65-nm gate length, a dimension that was a few years behind the state of the art for silicon. But then progress ground to a halt. Part of the delay came as the team transitioned from a 200-millimeter wafer line to a 300-mm line. But they also found they had to tackle an unexpected issue: excess leakage.
There seemed to be too much strain on the pure germanium channel, so the engineers resumed work on silicon-germanium. They built a ring oscillator that can switch 25 percent faster than silicon equivalents at today’s standard operating voltage, 1.1 volts, Caymax says. At 0.9 V, the performance gap grows to 40 percent. The group also demonstrated an 8-bit multiplier that can operate well at 0.6 V, a level where silicon-based circuits struggle.
The only drawback is the size: The channel in the VLSI device is 200 nm wide. The transistors on today’s chips boast channels that are about a tenth as wide, and even smaller ones will be required for next-generation CMOS. But Caymax is optimistic. “There are no obvious showstoppers to going further down after this first run,” he says, adding that he and his colleagues have achieved “good filling” with germanium in trenches that are just 11 to 12 nm wide. If the team can halve that, they will be in the ballpark needed for the devices at the 7-nm node, about the point at which industry watchers expect alternate channel materials will be needed.
Building the nFET has turned out to be trickier. The speediest materials for electrons are III-V compounds. Caymax’s group opted to make III-V transistors that were a mix of two materials: a trench filled with indium phosphide topped with a thin layer of ultraspeedy InGaAs. Filling the bulk of the trench with InP helps cut down on losses. Current tends to leak across a transistor in the deepest part of the transistor channel, the area farthest from the gate. By making the bulk of the trench out of InP, this avenue can be eliminated, because electrons moving through InGaAs don’t have enough energy to jump into that material.
But filling a trench with InP is challenging. If the atoms are not ordered correctly, they will form metallic bonds that can short out a device. This wouldn’t be a problem if the bottom of a trench were perfectly flat. But there are often atom-scale variations in surface height. This creates steps that can alter the orientation of a crystal built on top, resulting in planes of indium-indium and phosphorus-phosphorus bonds that are especially conductive. “If you used these materials for electrical applications, the devices would simply short-circuit,” Caymax says.
His team found they could eradicate these bonds by first growing a little germanium in a trench etched to form a concave base and then baking the wafer. The surface rearranges to steps two atoms high, cutting out the geometric defect.
Although the quality of the InGaAs material making up the channel is much higher than that of the underlying InP, it is still riddled with defects—a square centimeter would have hundreds of millions of them, about 100 times as many as are present in Imec’s germanium layers and a million or so times more than you would historically find in a patch of silicon wafer. Such a high defect density would likely horrify many within the silicon industry; the number of defects is directly linked to yield and reliability.
But Caymax notes that many of the recent modifications to transistor architecture, such as the introduction of strained silicon, also create a lot of defects. Intel’s chips aren’t defect free; they’re more like “quasi-perfect,” Caymax says. His team has set up a program to see how much they must reduce the InGaAs defect density in order to make competitive devices.
There are still more challenges: Any overhaul of the channels will probably require changes in other places, too. New materials may also need to be introduced into the source and drain portions of the transistor, and a layer of insulation will be needed to separate the channels from the gate electrode. Germanium channels should be able to use the standard insulation—a thin silicon dioxide layer capped with a thicker film of hafnium oxide. But this approach won’t work for InGaAs. Charge carriers tend to get trapped at the junction between InGaAs and silicon dioxide. Engineers are still working to identify an alternate material that performs well.
At the same time, researchers still haven’t shown they can make high-quality transistors small enough for introduction at the 7-nm node, which is slated to go into mass production by 2017. And size isn’t the only concern. Alternate materials must also be built to whatever structure is on the books. That could mean FinFETs. But the chip industry may instead decide to move in a different direction—toward nanowires, which offer the possibility of controlling the channel from all sides with a wraparound gate. Chances are, these will first emerge with silicon-based channels.
One big stumbling block in the adoption of III-V materials is the concern over contamination of fab equipment. Arsenic can drastically alter the electronic properties of silicon, and it must be carefully accounted for. “The biggest challenge, even at this stage of R&D, is the stigma, the perception, that the fabs have with respect to arsenic cross-contamination,” says Errol Sanchez, a crystal-growth specialist for the equipment vendor Applied Materials.
Finally, there is still a fair amount of uncertainty over the fabrication method. IBM and Imec are exploring a backup should the trenching strategy fall through: Grow the channel materials on separate wafers, then bond them to another silicon wafer, leaving behind a very thin film of either germanium or III-V. This method promises good crystal quality, but it is also expected to be more expensive, since it requires blanketing large wafers with a lot of material that will ultimately be etched away.
Such stumbling blocks are nothing new. The industry faced many challenges as it worked to push strained silicon channels and FinFETs into production, says Chenming Hu, coinventor of the FinFET and TSMC Distinguished Professor of the Graduate School at the University of California, Berkeley. “The challenges will pale compared to what will be faced by the introduction of a very different material,” Hu says.
Still, he’s convinced silicon’s days are numbered. “I’m certain our children or grandchildren will not be using silicon,” he says. “The world is large; there must be a better material.”
This article originally appeared in print as “Changing the Channel.”
If you, like me, took some offline time this weekend, you’re a bit late to the latest slap fight in the world of Windows RT. Until recently, there was only one functional player in the Windows RT space – Microsoft, and its Surface 2 tablet – but Nokia has stepped into the ring, and one of its suppliers is talking a little trash.
No shame in that, of course. Bragging is as old as language. But how Qualcomm – the supplier of the Nokia Lumia 2520 Windows RT tablet’s processor – is taking the Surface 2 to task is interesting.
Both the Nokia 2520 and the Surface 2 run Windows RT, so when it comes to software, they are at parity. Certainly, you could argue that the Surface 2 might behave better with Windows RT than rival devices, given that Microsoft builds both, but that’s edge work.
Qualcomm, as quoted by CNet, thinks that the Lumia 2520 is “bigger, faster, [and] lower power” than Microsoft’s rival Surface 2 tablet. Ok.
The kicker to this is that, for the Surface line of tablet hybrids, the hardware component of the devices has largely not been the point of complaint raised by reviewers and users. Instead, it’s been the software that the Surface devices run – Windows 8 at first, and now Windows 8.1 – that has been the sticking point. Windows 8 was not ready at launch. And Windows 8.1 has yet to be tested against consumer demand.
Why Qualcomm is trumpeting the “speeds and feeds” of the Lumia 2520 is simple: It provides the silicon that powers the device. Microsoft’s Surface 2 runs on Nvidia chips.
Keep in mind, however, that Microsoft is in the process of buying the Nokia assets that built the Lumia 2520, so we could see reconciliation. For now, however, Nokia’s tablet does directly challenge its future brother. Microsoft recently reported that Surface unit volume doubled in its most recent quarter, compared with the sequentially preceding quarter. Surface revenue totaled $400 million for that period.
Here’s the question: Will the Windows 8.1 and Windows RT 8.1 markets become akin to the Android realm, where OEMs race to best the hardware specifications of their rivals in their devices?
For the first time, beams of “twisted” light have been used to transfer data through optical fiber. A team of researchers from Boston University and the University of Southern California succeeded in transmitting 1.6 terabits per second through one kilometer of optical fiber. Last June, a team of researchers transmitted data through the air at 2.56 Tb/s using twisted light, but this is the first time the method has been used to send data through optical fiber.
Twisted light is made of photons carrying a quantum property called orbital angular momentum (OAM). Photons with OAM have electric and magnetic fields that corkscrew rather than oscillate in a plane. There is a theoretically infinite number of OAM values, and multiple beams with different orbital angular momenta can occupy the same fiber, allowing more data to be transferred.
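The capacity argument is simple multiplication: every (OAM mode, wavelength) pair can carry an independent data channel. Here is a back-of-envelope sketch in Python; the mode and wavelength counts are illustrative assumptions, not the experiment's actual parameters.

```python
# Back-of-envelope sketch of mode-division multiplexing capacity.
# The channel counts below are illustrative, not the exact
# parameters of the Boston/USC experiment.

def aggregate_rate_tbps(oam_modes, wavelengths, per_channel_gbps):
    """Total throughput when every (mode, wavelength) pair
    carries an independent data channel."""
    channels = oam_modes * wavelengths
    return channels * per_channel_gbps / 1000.0  # Gb/s -> Tb/s

# e.g. 4 OAM modes x 4 wavelengths x 100 Gb/s each
print(aggregate_rate_tbps(4, 4, 100))  # 1.6 Tb/s
```

Adding OAM modes multiplies capacity on top of wavelength-division multiplexing, which is why researchers hope it can push past the bandwidth ceiling mentioned below.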
"For several decades since optical fibers were deployed, the conventional assumption has been that OAM-carrying beams are inherently unstable in fibers," said Ramachandran. "Our discovery, of design classes in which they are stable, has profound implications for a variety of scientific and technological fields that have exploited the unique properties of OAM-carrying light, including the use of such beams for enhancing data capacity in fibers."
With data high in demand, the number of wavelengths we can use to increase bandwidth has started to reach its limits. Researchers hope that OAM will allow bandwidth to expand even further.
Twisted light technology isn’t only for optical wavelengths. Researchers in Italy and Sweden were successful in applying OAM to radio waves. They were able to transmit two “twisted” radio beams a distance of 442 meters by bending a dish antenna. But some radio communication scientists aren’t convinced that this will actually increase capacity. Some argue that OAM radio frequencies mimic existing multiple input multiple output (MIMO) technologies.
A MEMS microgyroscope mimics a 19th-century instrument's mechanism to boost abilities of inertial guidance systems
Photo: Alexander Trusov/University of California, Irvine
A new type of microscopic gyroscope could lead to better inertial guidance systems for missiles, better rollover protection in automobiles, and balance-restoring implants for the elderly.
Researchers from the MicroSystems Laboratory at the University of California, Irvine (UCI), described what they’re calling a Foucault pendulum on a chip at last week’s IEEE 2011 conference on microelectromechanical systems (MEMS) in Cancun, Mexico. A Foucault pendulum is a large but simple mechanism used to demonstrate Earth’s rotation. The device the UCI engineers built is a MEMS gyroscope made of silicon that is capable of directly measuring angles faster and more accurately than current MEMS-based gyroscopes.
”Historically it has been very pie-in-the-sky to do something like this,” says Andrei Shkel, professor of mechanical and aerospace engineering at UCI.
Today’s MEMS gyroscopes don’t measure angles directly. Instead, they measure angular velocity, then perform a calculation to figure out the actual angle. When something is in motion, such as a spinning missile, keeping track of its orientation requires many measurements and calculations, and each new calculation introduces more error. Shkel says his gyroscope is more accurate because it measures the angle directly and skips the calculation. ”You’re pretty much eliminating one step,” he says.
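The error accumulation Shkel describes is easy to simulate. The sketch below uses made-up bias and noise figures to show how a small constant error in a rate gyroscope's output integrates into an ever-growing angle error.

```python
import random

# Sketch: why integrating angular velocity accumulates error.
# A rate gyro measures omega with a small bias plus noise; the angle
# is recovered by summing omega*dt, so bias error grows with time.
# All figures here are illustrative, not from the UCI device.
random.seed(0)

dt = 0.01          # 100 Hz sampling
bias = 0.05        # deg/s constant sensor bias (illustrative)
noise = 0.1        # deg/s white-noise amplitude (illustrative)
true_rate = 10.0   # deg/s actual rotation

angle_est, angle_true = 0.0, 0.0
for step in range(10000):  # 100 seconds of motion
    measured = true_rate + bias + random.uniform(-noise, noise)
    angle_est += measured * dt   # numerical integration
    angle_true += true_rate * dt

drift = angle_est - angle_true
print(round(drift, 2))  # roughly bias * 100 s ~= 5 degrees of error
```

The noise largely averages out, but the bias term never does: after 100 seconds the estimate is off by about five degrees, which is exactly the step a direct angle-measuring gyroscope skips.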
The gyroscope works on the same principle as the Foucault pendulum you’d find in many museums demonstrating Earth’s spin. The plane in which the pendulum oscillates stays in one position relative to the fixed stars in the sky, but its path over the floor gradually rotates as the world turns. Similarly, the oscillation of a mass in the gyroscope stays the same with respect to the universe at large, while the gyroscope spins around it.
Of course, the pendulum in Shkel’s two-dimensional device is not a bob on a string. Instead, four small silicon masses, each a few hundred micrometers wide, sit at the meeting point of two silicon springs set at right angles to each other. A small electric current starts the masses vibrating in unison. As the gyroscope spins, the direction of the vibrational energy precesses the same way a swinging pendulum’s would.
The gyroscope operates with a bandwidth of 100 hertz and has a dynamic range of 450 degrees per second, meaning it can track as much as a rotation and a quarter every second. Many conventional microgyroscopes (at least those of the ”mode matching” variety) operate at only 1 to 10 Hz and have a range of only 10 degrees per second. But inertial guidance systems—such as those that stabilize an SUV when it hits a curb or keep a rapidly spinning missile on track—require both high dynamic range and high measurement bandwidth to accurately and quickly measure directional changes in moving objects.
Shkel described and patented the concept for a chip-scale Foucault pendulum back in 2002, but that device’s architecture requires such precise balance among its elements that it has proved too hard to manufacture, even nine years later. Last week, however, Shkel’s colleague Alexander Trusov presented a new design, which Shkel says is more complicated in concept but easier to make, requiring standard silicon processes and only a single photolithographic mask.
But it’s just one possible design. Shkel is on leave from his academic post and currently working with the U.S. Defense Advanced Research Projects Agency (DARPA), which has launched a program to create angle-measuring gyroscopes for better inertial guidance systems. Three-dimensional designs that use concepts other than the one behind his 2-D device might be preferable for DARPA’s needs because they’ll take up less space, Shkel says. He hopes the DARPA program will also improve manufacturing processes in general, giving conventional microgyroscopes higher precision for applications that don’t require the bandwidth and dynamic range of a chip-scale Foucault pendulum.
”We will have a new class of devices,” he says, ”but we will also help existing devices.”
Exploring the Future of High Tech Glass
Specializing in glass and ceramics, Corning made this video called A Day Made of Glass. It explores the possibility of glass integrated into our everyday technology, including but not limited to mirrors, stove tops, phones, and interactive surfaces. I, for one, cannot wait for some of this technology to become a part of everyday life.

Corning Incorporated (NYSE:GLW) announced the commercial launch of Corning Lotus™ Glass, an environmentally friendly, high-performance display glass developed to enable cutting-edge technologies, including organic light-emitting diode (OLED) displays and next-generation liquid crystal displays (LCDs). The company made the announcement at FPD International, an industry trade show in Pacifico Yokohama, Japan, on Oct. 26. Corning Lotus Glass helps support the demanding manufacturing processes of both OLED and liquid crystal displays for high-performance, portable devices such as smartphones, tablets, and notebook computers.
Corning formulated Corning Lotus Glass to perform exceptionally well in low-temperature poly-silicon (LTPS) and oxide thin-film transistor (TFT) backplane manufacturing environments.
“Corning Lotus Glass has a high annealing point that delivers the thermal and dimensional stability our customers require to produce high-performance displays,” said Andrew Filson, worldwide commercial director, Display Technologies, and vice president, Corning Holding Japan GK. “Because of its intrinsic stability, it can withstand the thermal cycles of customer processing better than conventional LCD glass substrates. This enables tighter design rules in advanced backplanes for higher resolution and faster response time.”
The thermal consistency of Corning Lotus Glass allows it to retain its shape and surface quality during high-temperature processing. This helps guard against thermal sag and warp, which improves the integration of components onto the glass.
Corning Lotus Glass has been qualified and is in production.
Produced using Corning’s proprietary fusion process, Corning Lotus Glass offers the advantages of the company’s other industry-leading glass substrates – including a pristine surface and advanced thickness control.
The end result is a thin, portable display device that consumes less power while delivering superior picture quality.
“Corning will continue to develop innovative glass compositions to enable the high-performance displays that will drive tomorrow’s consumer electronics,” Filson said.
So now we can expect an amazing future.
First Computer Made From Carbon Nanotubes Debuts
The days of silicon’s reign may be numbered. A team has built a computer using carbon nanotubes.
The computer is rudimentary by modern standards: it contains just 178 carbon-nanotube-based transistors, compared with the billions of silicon-based switches in modern chips. It operates on only 1 bit of information, where today we rely on 32- and 64-bit machines. And it clocks in at just 1 kHz, about a million times slower than the application processors we find in modern smartphones.
Still, researchers say, the machine, developed at Stanford by a team led by professors H.-S. Philip Wong and Subhasish Mitra and described this week in Nature, is the first of its kind and an important step for a material that has long shown promise as an alternative to silicon.
"This is the first time that anybody has been able to put together a complete working computer based on any beyond-CMOS technology," says Naresh Shanbhag, a professor at the University of Illinois at Urbana-Champaign. (Shanbhag directs a chip research consortium called SONIC that includes Stanford University.)
Researchers are hunting for an alternative to silicon because the transistor is no longer shrinking like it used to. The switches leak current, and the circuits based on them get hot. The problem is only going to get worse as transistors get smaller and circuits get denser.
Carbon nanotubes, which are essentially rolled up, hollow sheets of carbon, have long shown promise as an alternative material. They are small, nanoscale structures that can theoretically be packed quite close together, and they exhibit very attractive electrical properties: current flows easily across them, and they can be switched on and off fairly easily.
The first carbon nanotube transistors emerged in 1998 out of research groups based at IBM and Delft University of Technology. Since then researchers have succeeded in building smaller circuits, but developing very large scale integration (VLSI) processes that could be used to mass produce chips based on the material has been slow going. “Everybody wanted to build a commercial or large-scale digital system using these materials, but they could not do it because of the substantial imperfections that are inherent in carbon nanotubes,” says Mitra.
There are two imperfections that have proved particularly problematic, say Mitra and his colleagues. One is that, although nanotubes can be grown fairly easily using chemical vapor deposition, the process tends to produce nanotubes with a mix of electrical properties: semiconducting and metallic. Metallic nanotubes are undesirable as they act as wires and can short out circuits.
The other stumbling block is one of alignment. Carbon nanotubes can be grown on a pre-patterned wafer so they form parallel tubes. But a small fraction inevitably are misaligned, and can cut across and connect with neighbors.
To make a process that could work in a commercial chip fab, the Stanford team developed a set of techniques they dub imperfection-immune design. They first eliminated the metallic nanotubes on their wafer by turning off all of the semiconducting nanotubes and driving a high current through the circuits. This caused the metallic nanotubes to overheat, oxidize, and ultimately vaporize into small puffs of carbon dioxide. The team then etched away sections of the mat of remaining nanotubes to form circuits. This etching was guided by an algorithm, based on graph theory, that can produce circuits that are mathematically guaranteed to work regardless of any imperfections, Mitra says, by cutting off at least part of any possible cross-cutting nanotube.
The computer, built by graduate student Max Shulaker, isn’t one we’d recognize today. It uses only p-type metal-oxide-semiconductor (PMOS) logic. PMOS transistors switch on when a negative voltage is applied; NMOS switches work with positive voltage. Although PMOS logic was the first to emerge in computing, today’s logic chips use both types to make complementary (CMOS) logic. The disadvantage of PMOS is that it is effectively always on: it is a sort of network of resistors that relies on transistors of variable widths to regulate the flow of current and create voltage drops.
But Shulaker and his colleagues show the computer is capable of everything you’d expect from a general purpose processor. It can run a basic operating system and multitask. It can perform counting and number sorting. They also demonstrate more modern silicon capabilities, by running 20 different instructions from the commercial MIPS instruction set.
How does the computer work? It uses just one instruction to perform all computations: SUBNEG (subtract and branch if negative). As Franz Kreupl, a professor of electrical engineering at the University of Munich in Germany, writes in an accompanying commentary:
SUBNEG takes the content of a first memory address, subtracts it from the content of a second memory address and stores the result in the second memory address. If the result of this subtraction is negative, it goes to a third memory address. Because the instruction contains this conditional statement, it guarantees Turing completeness -- that is, it can make any calculation if the computer has enough memory available.
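A SUBNEG machine is simple enough to sketch in a few lines. The interpreter and toy program below are my own illustration of the one-instruction principle, not the Stanford team's actual instruction encoding or memory layout.

```python
# Minimal SUBNEG (subtract-and-branch-if-negative) interpreter.
# This is an illustrative sketch of the one-instruction principle,
# not the Stanford machine's actual encoding.

def run_subneg(mem, pc=0, max_steps=1000):
    """Each instruction occupies three cells: (a, b, target).
    Compute mem[b] -= mem[a]; if the result is negative, jump to
    target (a negative target halts), otherwise fall through."""
    for _ in range(max_steps):
        a, b, target = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]
        if mem[b] < 0:
            if target < 0:
                break          # halt
            pc = target        # taken branch
        else:
            pc += 3            # fall through to next instruction
    return mem

# Toy program: compute 3 + 4 by subtracting a negated value.
# Cells 0-8 hold three instructions; cells 9-11 hold X=3, Y=4, tmp=0.
program = [9, 11, 3,    # tmp -= X  (tmp = -3, negative: jump to 3)
           11, 10, 6,   # Y -= tmp  (Y = 4 - (-3) = 7, fall through)
           9, 11, -1,   # tmp -= X  (negative, target -1: halt)
           3, 4, 0]     # data: X, Y, tmp
mem = run_subneg(program)
print(mem[10])  # 7
```

Even addition has to be phrased as "subtract the negation," which shows both why SUBNEG is Turing complete and why real programs for such machines are long.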
Rudimentary as the computer is, Kreupl says it is a notable advance in the hunt for silicon alternatives. The new work could do a lot to draw more attention to carbon nanotubes, which have lost the spotlight in recent years to graphene. “A circuit as complex as this one has never been devised,” Kreupl says. “I think it’s an important step forward.”
There are, of course, still a number of open questions. One is how well this approach will scale down. Working in an academic fab, the Stanford team was limited to an optical lithography resolution of just 1 micrometer, which limited the length of the carbon nanotube transistors to about that distance (today, silicon transistor dimensions are measured in the tens of nanometers). The density of the nanotubes patterned on the wafer was also low: just 5 per micrometer. That density would have to be boosted to 100 to 200 per micrometer, if not more, to increase speed and to make the process cost effective.
Much progress will depend on how uniformly the transistors can be laid down. "If the density varies too much, that would have an effect on the energy efficiency of the final circuit," Mitra says. There is work being done to address that issue. In the meantime, he says, we've reached a notable crossing point. "People said you couldn’t build anything big with carbon nanotubes," he says. "Now that question has been resolved."
Image: Butch Colyear
Movea’s Data Fusion Transforms Sensors Into Indoor Navigation
Software company Movea helps app developers corral the data from the sensors that pack today’s mobile devices
Today’s smartphones and other mobile devices are jam-packed with sensors. Micromechanical gyroscopes track the phone’s tilt. Magnetometers figure out which way the phone is pointing. Accelerometers detect motion. GPS receivers track location. Pressure sensors help determine altitude. Add in ambient light detectors, air sensors, temperature sensors, microphones, and cameras, and it all means that app developers can make mobile devices do amazing things.
But tapping into the power of all these sensors isn’t easy. There are no industry-wide standards today for how sensors operate and communicate. Mobile device operating systems access sensor data in a variety of ways. And even the way sensors are built into phones differs: some architectures use sensor hubs, with a dedicated processor controlling the various sensors; others rely on the mobile device’s main processor to do this control.
Movea, a 2007 spin-off of Leti based in Grenoble, thinks app developers could do a lot more interesting things if they had a helper to round up the data from the sensors and make it easy to work with—sort of a sensor wrangler.
So Movea has introduced MotionCore, an application programming interface (API) for Android and Windows 8 devices. MotionCore, built into a phone either as software on the main processor or firmware on a sensor hub, is sensor agnostic. That is, it will allow apps to connect with sensors from any manufacturer. That means mobile-device manufacturers will be able to mix sensors from different sensor manufacturers more easily, and older apps will be able to migrate to newer phones without being rewritten to talk to the new mix of sensors.
Movea doesn’t plan to produce those apps itself, preferring to leave that in the hands of the creative app development community. But the company did build a few demonstration apps to show off the power of its software, including a simple “air signature” to unlock a phone and an indoor navigation tool that can identify the floor as well as the location inside a hotel by sensing altitude changes. And, to show just how powerful its API can be, Movea engineers took 15 sensor pods and, instead of putting them in a phone, attached them to a dancer, processing their outputs in real time to generate hypnotic graphics.
I met with Dave Rothenberg, director of marketing at Movea, in January at the International Consumer Electronics Show, in Las Vegas, to see just how these demos worked. Rothenberg had a little trouble demonstrating the air password, though the fault was not necessarily that of Movea’s software, but of his memory; he’s just not used to an air password yet.
The navigation demo was set up to “know” that it was inside the LVH Hotel; in a commercial application, an app would get this initial geographic information from the phone’s GPS receiver. But once that general location was set, the phone did not use any outside signals (like GPS, Wi-Fi, or cellphone locators) to find its way around. Instead, it relied on its sensors to spot pressure and direction changes and to count steps. The software checked the sensor data against a map of the hotel to self-correct, for example, by understanding that a user is more likely to walk into an elevator than through a wall. The body area network on the dancer, Movea says, cost under US $10 000 in sensors and computer hardware to pull together, and is as powerful as professional motion-capture systems that today run at more than $50 000.
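Spotting floor changes from a phone's pressure sensor comes down to the standard barometric formula: pressure falls predictably with altitude. Here is a sketch of how such an app might count floors; the formula is the conventional one, and the floor height and pressure readings are generic assumptions, not Movea's implementation.

```python
# Sketch of floor detection from a phone's pressure sensor, as an
# indoor-navigation app might do it. The barometric formula is the
# standard international one; the floor height and readings are
# illustrative assumptions, not Movea's code.

def altitude_m(pressure_hpa, sea_level_hpa=1013.25):
    """International barometric formula: pressure -> altitude."""
    return 44330.0 * (1.0 - (pressure_hpa / sea_level_hpa) ** (1.0 / 5.255))

def floors_climbed(p_start_hpa, p_end_hpa, floor_height_m=3.5):
    """Rough floor count from the pressure change between readings."""
    dh = altitude_m(p_end_hpa) - altitude_m(p_start_hpa)
    return round(dh / floor_height_m)

# Pressure drops roughly 0.12 hPa per metre of ascent near sea level,
# so an elevator ride up about five floors shows up as ~2 hPa.
print(floors_climbed(1013.25, 1011.2))
```

Real implementations would also filter out weather-driven pressure drift and things like doors opening, which is where checking against a building map helps.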
Tekla S. Perry: Smartphones are filled with an amazing array of sensors that measure movement, altitude, pressure, and much more. But for that sensor data to be useful, it must be processed. Software company Movea creates APIs [application program interfaces] that allow developers to turn that data into an amazing variety of useful things, like indoor navigation apps and smart sports equipment.
Dave Rothenberg: So we can look at a very basic orientation. And if you take the data coming from these three sensors and you fuse it together, the resulting estimation of orientation is very robust.
These are all MEMS [microelectromechanical systems]-based sensors now: accelerometer, a gyroscope, and a magnetometer. And with the S3 there’s also a pressure sensor or a barometer in the phone. And we’re going to be using all three of those—all four of those sensors for our indoor navigation demo.
We ask the user to enter a little bit of information. The real key piece of information though is the height because there is a correlation between height and step length. And step length is a key input to our indoor navigation engine. So what you see is a map of the first floor of the lobby of the LVH hotel and at this point we’re going to start walking and let the app guide us to our meeting suite. So it’s detected that we’re by the elevator and now it’s asking us to go up to the sixth floor.
The motion and awareness come from sensors. They sense what we’re doing. They sense the environment.
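The fusion Rothenberg describes is commonly done with a complementary filter: trust the gyroscope over short timescales, and let the accelerometer's gravity reading correct the gyro's long-term drift. Below is a minimal sketch with synthetic sensor values; this is a generic textbook filter, not Movea's MotionCore algorithm.

```python
import math

# Minimal complementary-filter sketch of gyro + accelerometer fusion.
# Generic textbook technique with synthetic data, not Movea's
# MotionCore algorithm.

def fuse_tilt(gyro_rate_deg_s, accel_x_g, accel_z_g, prev_angle_deg,
              dt=0.01, alpha=0.98):
    """One filter step: blend the integrated gyro angle (fast but
    drifting) with the accelerometer tilt (slow but drift-free)."""
    gyro_angle = prev_angle_deg + gyro_rate_deg_s * dt
    accel_angle = math.degrees(math.atan2(accel_x_g, accel_z_g))
    return alpha * gyro_angle + (1.0 - alpha) * accel_angle

# Device held still at a true 30-degree tilt while the gyro reports
# a small spurious rate (bias): the estimate still settles near the
# accelerometer's 30 degrees instead of drifting away.
angle = 0.0
ax, az = math.sin(math.radians(30)), math.cos(math.radians(30))
for _ in range(5000):  # 50 seconds at 100 Hz
    angle = fuse_tilt(gyro_rate_deg_s=0.2, accel_x_g=ax, accel_z_g=az,
                      prev_angle_deg=angle)
print(round(angle, 1))  # settles near 30 degrees
```

Adding a magnetometer extends the same idea to heading, which is why fusing all three sensors gives the robust orientation estimate Rothenberg demonstrates.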
Hospital To Use Microfluid Prototype For Diagnosing Tumors
Lucas Laursen
Chemist Emmanuel Delamarche held a thin slice of human thyroid tissue on a glass slide between his fingers. The tissue poses a mystery: does it contain a tumor or not? Delamarche, who works at IBM Research in Zurich, Switzerland, turned the slide around in his hand as he explained that the normal method of diagnosing a tumor involves splashing chemical reagents, some of which are expensive, onto the uneven surface of the tissue and watching for them to react with disease markers. A pathologist "looks at them under a microscope, and he's using his expertise, his judgment, and looks at what chemical he used, what type of color he can see and what part and he has to come up with a diagnosis," Delamarche says, "he has a very, very hard job, OK?"
IBM is already good at precise application of materials to flat surfaces such as computer chips. Human tissue, sliced thin enough, turns out to be receptive to the company's bag of tricks too. Delamarche, turning to one of three machines on lab benches, explained that a few years ago his team began trying to deliver reagents with more precision. University Hospital Zurich will be testing the results over the next few months.
The idea was that instead of a sprawling blot occupying most of a tissue sample, a tiny tube something like an inkjet printer could deliver many droplets onto the tissue. Pathologists might put multiple reagents on a single fingernail-sized tissue sample, saving them the need for more samples and surgery. They might make better-informed diagnoses because the printer-like machine would allow them to control how much reagent to place on the tissue and where it goes. Pathologists could also compare the effects of well-measured doses on suspected cancerous parts. "We are interested in maybe thinking about technology to go from qualitative info to more quantitative information," Delamarche says.
But that precise delivery of the reagents proved elusive: some of it spilled outside the target area. In 2011, Delamarche and colleagues announced a vertical microfluidic probe that, unlike previous microfluidic probes, was not parallel to the target surface. It consisted of a glass-and-silicon wafer about one square centimeter in size, with one channel about a micrometer across that shot liquid at the target and another channel that vacuumed up any excess. "The trick, or the invention actually, that we had was to put a second aperture that continuously re-aspirates what we inject," Delamarche says. Today the team can create spots just 50 micrometers across, though he says the sweet spot for diagnoses may be more like a few hundred micrometers.
The microfluidic machine is part of a trend toward keeping samples put and moving the thing that analyzes them, according to a recent review in Lab on a Chip.
The technology is attractive both to pathologists, such as those at University Hospital Zurich, and to basic researchers, with whom Delamarche and mechanical engineer Govind Kaigala can share a larger, more customizable version in their lab.