Get in on the ground floor as we look at the most exciting crowdfunded tech projects out there right now. This week: Heatworks, an alternative to the traditional water heater that promises not only previously unknown longevity but also a replacement for the mechanical flow switch and new, state-of-the-art electrodes that should deliver instant, super-accurate temperature adjustments.

Traditional water heating equipment uses fuel as an energy source to heat a rod-like element in the water tank. The element then warms the water.
If you've ever reheated a hot beverage by placing it in the microwave, you'll have an idea as to the alternatives. Microwave ovens heat liquids by moving molecules so that the molecules hit each other. This action creates heat -- a bit like rubbing your hands on a cold day.
Heatworks Model 1
ISI's Heatworks Model 1 water heater, a project looking for funding on Kickstarter at the moment, pitches itself somewhere in between these two existing technologies.
ISI's product uses electronics to directly energize water molecules instead of using a heating element rod, the company says. In other words, the appliance is designed to use water's natural resistance to heat itself.

Technical Details

Graphite electrodes -- rather than a traditional convective heating element -- are installed in an efficient, tankless water heater. Existing tankless water heaters are generally more efficient than water heaters with tanks because only the required water is heated; tanks heat more than is needed.
The acoustically quiet graphite electrodes in ISI's tank produce two gallons of hot water per minute, and tanks can be installed in series.
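As a rough sanity check on what two gallons per minute implies for electrical service, here's a back-of-the-envelope calculation; the 40-degree temperature rise is our assumption, not an ISI specification.

```python
# Back-of-the-envelope power estimate for a tankless heater delivering
# 2 gallons of hot water per minute. The 40 C temperature rise is an
# assumed figure, not a spec from ISI.
GALLON_LITERS = 3.785
flow_lpm = 2 * GALLON_LITERS            # liters per minute
mass_flow_kg_s = flow_lpm / 60          # ~0.126 kg/s (1 liter of water ~ 1 kg)
specific_heat = 4186                    # J/(kg*K) for water
delta_t = 40                            # assumed rise, e.g. 10 C inlet to 50 C outlet

power_watts = mass_flow_kg_s * specific_heat * delta_t
print(f"Required heating power: {power_watts / 1000:.1f} kW")   # roughly 21 kW
```

A draw in that range is typical of whole-house electric tankless heaters, which is why they generally need dedicated high-amperage circuits regardless of how the water is actually heated.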
The tank measures 12 1/2 inches long by 6 inches in diameter and weighs about 16 lbs.
A WiFi module is planned that will allow for remote measurement and control of power and temperature.
Tagline: "Your next water heater."

The Numbers

ISI Technology currently has more than 300 backers for its water heater project pledging more than US$80,000 of a $125,000 goal. The funding period ends Feb. 16, 2014.
A pledge of $225 gets you a Heatworks Model 1. Note that a $395 retail price is proposed for the final product. Estimated delivery is in May 2014.

The Upsides

Theoretically, this equipment should provide efficiencies -- not least because minerals in water bind themselves to the heating rods in classic heaters, thus reducing rod life. If there's no rod, the heater should last longer.
The arguments in favor of this product are compelling, and if you've ever had to call a plumber to replace a failing domestic water heater for no apparent reason -- the rods have in fact gotten gunked-up with plating deposits, or have burnt out -- you'll know what we mean.
This water heater promises to introduce previously unknown longevity into this arena. Plus, the traditional water heater hasn't changed much, in technological terms, since the days of pot-on-fire.
What we haven't seen before is a replacement for the mechanical flow switch -- which ISI promises to deliver through microprocessors -- or new, state-of-the-art electrodes that should deliver instant, super-accurate temperature adjustments. Then, too, there's smartphone interactivity, which this product also promises.

The Downsides

The creator needs to be a bit clearer about exactly how its product "directly energizes and heats the water molecule," as opposed to simply heating water ultra-super-efficiently through use of graphite and microprocessors with split-second accuracy. Or is this just marketing gobbledygook?
The promise of no annual maintenance and a three-year warranty is a lofty one.
While we don't have any reason to doubt ISI's claims, prototyping and testing, we'd like to see some more concrete numbers before ripping out our existing water heating kit and replacing it with this gear -- seductive though it is.

The Conclusion

Graphite electrode technology is commonly used in arc furnace steel manufacturing. It's an efficient, responsive technology that can provide high levels of heat along with good electrical conductivity. We look forward to hearing how this fast-funding Kickstarter project plays out in real-world use.

'Cheap' organic transistors to transform your TV
Researchers have invented the world's fastest and cheapest thin-film organic transistors.
NEW YORK: In ground-breaking research for the fast-growing global electronics industry, researchers have invented the world's fastest and cheapest thin-film organic transistors. 

This new technology has the potential to achieve the high performance needed for high-resolution TV screens and similar electronic devices in an inexpensive way, said the researchers. 

Engineers from University of Nebraska-Lincoln (UNL) and Stanford University created thin-film organic transistors that could operate more than five times faster than previous examples of this experimental technology. 

The team led by Zhenan Bao, professor of chemical engineering at Stanford, and Jinsong Huang, assistant professor of mechanical and materials engineering at UNL, used the process to make cheaper organic thin-film transistors with electronic characteristics comparable to those found in expensive, curved-screen TV displays based on silicon technology. 

They achieved their speed boost by altering the basic process for making thin film organic transistors, said the study published in the journal Nature Communications. 

The researchers called this improved method 'off-centre spin coating'. 

Even at this initial stage, 'off-centre spin coating' produced transistors with a range of speeds far above those of previous organic semiconductors and comparable to the performance of the polysilicon materials used in today's high-end electronics, claimed the study. 

Further improvements to this experimental process could lead to the development of inexpensive, high-performance electronics built on transparent substrates such as glass and, eventually, clear and flexible plastics, the study said.

The Brooklyn-based 3D printing company introduces new models, including the Makerbot Replicator Mini and Z18.

Bre Pettis at CES 2014
Makerbot Chief Executive Bre Pettis unveiled something "epic" at the Consumer Electronics Show on Monday.
Like a showman, he removed a black covering to reveal a mammoth 3D printer. And it's for making big, epic things, he said. The CEO announced the Replicator Z18 for printing large objects -- up to 12 by 12 by 18 inches tall.

"If you've been hampered with how big you can make things, then no more," said Pettis. The industrial strength printer was one of three new models he unveiled today. The other two are a new Replicator "prosumer" machine, and a Replicator Mini.

The idea of the press conference was simple: try to make something for everyone.

"It's not, are you going to get a 3D printer, it's which Makerbot printer are you going to get?" he said.

Price-wise, the Mini is $1,399, the new Replicator is $2,899, and the big bot Z18 is $6,499. While the Mini is intended to be the entry-level machine, it's still on the pricier side at more than a grand. All three machines will tap into the Makerbot 3D printing platform. Pettis touted the Mini's one-touch printing capability, Wi-Fi connectivity, and camera. It also has a new "smart extruder" that snaps on to the device magnetically.
The company also announced a line of apps that includes a desktop app with MakerWare printing software, a direct integration with Makerbot's online sharing community, Thingiverse, and the ability for a user to organize his or her designs with a cloud library. Pettis also announced a Makerbot mobile app for iOS.

Some other announcements from the presentation: Pettis introduced the Makerbot digital store, a new retail front end where people can buy high-quality designs made by pros. For example, one collection is called Chunky Trucks, a set of toy construction vehicles and workers. He also announced a partnership with SoftKinetic, a 3D sensor company, though other details were scant.
For many, Makerbot has become the de facto steward of 3D printing, with an outspoken CEO and a slick flagship store in New York City. The company, founded in 2009 by Pettis, was acquired in June by Stratasys for $403 million. Pettis boasts that there are more than 44,000 Makerbots in the world. The company has a number of other projects going, including Robo-Hand, which allows for 3D printing of prosthetics for children.
Though the technology has been around for some time, 3D printing has had quite the coming-out party recently. It stoked controversy when the world's first fully 3D-printed gun was made last May. Other companies like Shapeways, a 3D printing marketplace, have gotten the attention of investors like Andreessen Horowitz, a prominent Silicon Valley venture capital firm. And just like any good flag bearer for a nascent technology, Makerbot is leading the didactic push toward ubiquity -- recently announcing that it hopes to get 3D printers into every school.

Image: Parrot
We've always liked how Parrot manages to take some of the latest research-inspired technology and stuff it into its affordable (and fun!) consumer robots. Two new robots showed up at CES this year, sporting some capabilities that, until now, we've only spotted in research labs: there's an awesome little wheeled robot with a clever jumping mechanism, along with a new quadcopter that comes with a pair of giant wheels that allow it to move along the ground while doubling as a sort of protective roll cage.

Software redistributes tasks among networked data centers to optimize energy efficiency

The computing cloud may feel intangible to users, but it has a definite physical form and a corresponding carbon footprint. Facebook’s data centers, for example, were responsible for the emission of 298 000 metric tons of carbon dioxide in 2012, the equivalent of roughly 55 000 cars on the road. Computer scientists at Trinity College Dublin and IBM Research Dublin have shown that there are ways to reduce emissions from cloud computing, although their plan would likely cause some speed reductions and cost increases. By developing a group of algorithms, collectively called Stratus, the team was able to model a worldwide network of connected data centers and predict how best to use them to keep carbon emissions low while still getting the needed computing done and data delivered.
“The overall goal of the work was to see load coming from different parts of the globe [and] spread it out to different data centers to achieve objectives like minimizing carbon emissions or having the lowest electricity costs,” says Donal O’Mahony, a computer science professor at Trinity.
For the simulation, the scientists modeled a scenario inspired by Amazon’s Elastic Compute Cloud (EC2) data center setup that incorporated three key variables—carbon emissions, cost of electricity, and the time needed for computation and data transfer on a network. Amazon EC2 has data centers in Ireland and the U.S. states of Virginia and California, so the experimental model placed data centers there too, and it used queries from 34 sources in different parts of Europe, Canada, and the United States as tests.
The researchers then used the Stratus algorithms to optimize the workings of the network for any of the three variables. With the algorithms they were able to reduce the EC2 cloud’s emissions by 21 percent over a common commercial scheme for balancing computing loads. The key to the reduction, scientists found, was in routing requests to the Irish data center more than to those in California or Virginia. Ireland also tended to have faster-than-average service request times, so even when Stratus was tuned to reduce carbon, it shaved 38 milliseconds off the average time taken to request and receive a response from the data centers.
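Stratus itself isn't public code, but the core idea -- score each data center on carbon intensity, electricity price, and network latency, then send each request to the site with the best weighted score -- can be sketched in a few lines of Python. The figures and weights below are purely illustrative, not numbers from the study.

```python
# Illustrative sketch of carbon-aware request routing in the spirit of Stratus.
# The per-site figures and the weights are invented for demonstration; the real
# system optimizes against live carbon, price, and latency data.
DATA_CENTERS = {
    "ireland":    {"carbon_g_kwh": 350, "price_kwh": 0.18, "latency_ms": 40},
    "virginia":   {"carbon_g_kwh": 480, "price_kwh": 0.10, "latency_ms": 90},
    "california": {"carbon_g_kwh": 300, "price_kwh": 0.15, "latency_ms": 120},
}

def route(weights):
    """Return the data center with the lowest weighted cost."""
    def score(site):
        m = DATA_CENTERS[site]
        return (weights["carbon"] * m["carbon_g_kwh"]
                + weights["price"] * m["price_kwh"] * 1000
                + weights["latency"] * m["latency_ms"])
    return min(DATA_CENTERS, key=score)

# Tune for greenness, cost, or speed by shifting the weights.
print(route({"carbon": 1, "price": 0, "latency": 0}))  # lowest-carbon site
print(route({"carbon": 0, "price": 1, "latency": 0}))  # cheapest electricity
print(route({"carbon": 0, "price": 0, "latency": 1}))  # fastest response
```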
The researchers stress that the results have more value in representing trends than in predicting real-world numbers for quantities like carbon savings. Some of the key inputs were necessarily inexact. As an example, for some geographic locations, such as Ireland, it was easy to find real-time carbon intensity data or real-time electricity pricing data, but in other areas, including the United States, only seasonal or annual averages were available. “If we had the real-time data for California and Virginia, the simulations might look quite different,” says Joseph Doyle, a networks researcher at Trinity who worked with O’Mahony and IBM’s Robert Shroten on Stratus.
Christopher Stewart, who researches sustainable cloud computing at Ohio State University, says that although Stratus and other recent work have made significant progress toward modeling effective load balancing, data storage is another important factor [PDF] to consider. In order to handle requests, you have to have data stored on-site, he says. “With data growing rapidly, storage capacity is a major concern now, too, and that may limit your flexibility in terms of being able to route requests from one data center to another.”
The researchers hope that the easier it is to achieve load balancing and optimization in cloud computing, the more it will be implemented by environmentally conscious companies, or those just looking to save money. “A company like Twitter might have lots of options in how it decides that all the Twitter traffic is going to get served around the world,” O’Mahony says. “If they decided that greenness was one of the things that was most important to them, they could structure their load balancing accordingly. Or if getting it done as cheaply as possible was important, they could structure it that way. Or they could do anything in the middle.”
This article originally appeared in print as "Reducing the Carbon Cost of Cloud Computing."

A MEMS microgyroscope mimics a 19th-century instrument's mechanism to boost abilities of inertial guidance systems

Photo: Alexander Trusov/University of California, Irvine
1 February 2011—A new type of microscopic gyroscope could lead to better inertial guidance systems for missiles, better rollover protection in automobiles, and balance-restoring implants for the elderly.
Researchers from the MicroSystems Laboratory at the University of California, Irvine (UCI), described what they’re calling a Foucault pendulum on a chip at last week’s IEEE 2011 conference on microelectromechanical systems (MEMS) in Cancun, Mexico. A Foucault pendulum is a large but simple mechanism used to demonstrate Earth’s rotation. The device the UCI engineers built is a MEMS gyroscope made of silicon that is capable of directly measuring angles faster and more accurately than current MEMS-based gyroscopes.
”Historically it has been very pie-in-the-sky to do something like this,” says Andrei Shkel, professor of mechanical and aerospace engineering at UCI.
Today’s MEMS gyroscopes don’t measure angles directly. Instead, they measure angular velocity, then perform a calculation to figure out the actual angle. When something is in motion, such as a spinning missile, keeping track of its orientation requires many measurements and calculations, and each new calculation introduces more error. Shkel says his gyroscope is more accurate because it measures the angle directly and skips the calculation. ”You’re pretty much eliminating one step,” he says.
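To see why skipping that step matters, here's a toy numerical illustration (our own, not the UCI team's model): integrating an angular-rate signal with even a small bias accumulates angle error steadily, while a sensor that reads the angle directly never performs the integration at all.

```python
# Toy drift comparison (our illustration, not UCI's model): integrating a
# slightly biased angular-rate signal drifts over time, whereas a direct
# angle measurement has no integration step to accumulate that error.
true_rate = 90.0      # degrees per second of actual rotation
bias = 0.5            # degrees per second of rate-sensor bias
dt = 0.01             # 100 Hz sampling
steps = 60 * 100      # one minute of samples

integrated_angle = 0.0
for _ in range(steps):
    measured_rate = true_rate + bias          # what a rate gyro reports
    integrated_angle += measured_rate * dt    # angle reconstructed by integration

true_angle = true_rate * dt * steps
print(f"Drift after one minute: {integrated_angle - true_angle:.1f} degrees")  # 30.0
```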
The gyroscope works on the same principle as the Foucault pendulums you'd find in many museums demonstrating Earth’s spin. The plane on which the pendulum oscillates stays in one position relative to the fixed stars in the sky, but its path over the floor gradually rotates as the world turns. Similarly, the oscillation of a mass in the gyroscope stays the same with respect to the universe at large, while the gyroscope spins around it.
Of course, the pendulum in Shkel’s two-dimensional device is not a bob on a string. Instead, four small masses of silicon a few hundred micrometers wide sit at the meeting point of two silicon springs that are at right angles to each other. A small electric current starts the masses vibrating in unison. As the gyroscope spins, the direction of the vibrational energy precesses the same way a swinging pendulum would.
The gyroscope operates with a bandwidth of 100 hertz and has a dynamic range of 450 degrees per second, meaning it can track as much as a rotation and a quarter per second. Many conventional microgyroscopes (at least those of the ”mode matching” variety) operate at only 1 to 10 Hz and have a range of only 10 degrees per second. But inertial guidance systems—such as those that stabilize an SUV when it hits a curb or keep a rapidly spinning missile on track—require both high dynamic range and high measurement bandwidth to accurately and quickly measure directional changes in such moving objects.
Shkel described and patented the concept for a chip-scale Foucault pendulum back in 2002, but the device’s architecture requires such precise balance among its elements that it is too hard to manufacture, even nine years later. But last week, Shkel’s colleague Alexander Trusov presented a new design, which Shkel says is more complicated in concept but easier to make, requiring standard silicon processes and only a single photolithographic mask.
But it’s just one possible design. Shkel is on leave from his academic post and currently working with the U.S. Defense Advanced Research Projects Agency (DARPA), which has launched a program to create angle-measuring gyroscopes for better inertial guidance systems. Three-dimensional designs that use concepts other than the one behind his 2-D device might be preferable for DARPA’s needs because they’ll take up less space, Shkel says. He hopes the DARPA program will also improve manufacturing processes in general, giving conventional microgyroscopes higher precision for applications that don’t require the bandwidth and dynamic range of a chip-scale Foucault pendulum.
”We will have a new class of devices,” he says, ”but we will also help existing devices.”
Northrop Grumman’s MQ-4C Triton unmanned aircraft system. Photo: Alan Radecki/Northrop Grumman
A new drone with the mammoth wingspan of a Boeing 757 is set to give the U.S. Navy some serious surveillance power.
Northrop Grumman and the Navy say they’ve just completed the ninth flight trial of the Triton unmanned aircraft system (UAS), an improvement upon its predecessor in the Air Force, the Global Hawk.
With its 130-foot wingspan, Triton will provide high-altitude, real-time intelligence, surveillance and reconnaissance (ISR) from a sensor suite that supplies a 360-degree view at a radius of over 2,000 nautical miles, allowing monitoring from higher and farther away than any of its competitors.
But should a closer look be necessary, unique de-icing and lightning protection capabilities allow Triton to plunge through the clouds to get a closer view and automatically classify ships. And in recent tests, the drone was able to easily recover from perturbations in its flight path caused by turbulence.
Although Triton has a higher degree of autonomy than most drones, operators on the ground will be able to obtain high-resolution imagery, use radar for target detection and provide information-sharing capabilities to other military units.
Thus far, Triton has completed flights of up to 9.4 hours at altitudes of 50,000 feet out of the company’s manufacturing facility in Palmdale, California. According to Northrop Grumman, Triton could support missions of up to 24 hours.
Northrop Grumman reported earlier that Triton had demonstrated the structural strength of the drone’s wing — a key capability that will allow the aircraft to descend from high altitudes to make positive identification of targets during surveillance missions — even when it was subjected to a load 22 percent above the Navy’s requirement.
“During surveillance missions using Triton, Navy operators may spot a target of interest and order the aircraft to a lower altitude to make positive identification,” said Mike Mackey, Northrop Grumman’s Triton UAS program director, in a statement. “The wing’s strength allows the aircraft to safely descend, sometimes through weather patterns, to complete this maneuver.”
Under an initial contract of $1.16 billion in 2008, the Navy has ordered 68 of the MQ-4C Triton drones with expected delivery in 2017 — a slip from the initial anticipated date of December 2015.

Samsung, Micron, and SK Hynix bet that transistor redesigns and chip stacking will make memory smaller and faster


This article is part of the “2014 Top Tech to Watch” series, IEEE Spectrum’s annual prediction of technologies that will make headlines in the coming year.
A 3-D revolution is slowly making its way across the chip industry. Intel set it off in 2011 when it debuted logic chips bearing transistors that pop out of the plane of the chip. This year, memory makers are joining the game with two innovations of their own.
If you upgrade your smartphone in 2014, chances are you won’t see either of these technologies inside it. They will appear first in high-performance (and high-margin) processors and solid-state drives. But analysts say it’s only a matter of time before these 3-D memories migrate to consumer gadgets. And that could mean big gains in speed and storage space.
One of the 3-D memory movements centers on NAND flash, a memory that’s nonvolatile—that is, it holds on to information even when it’s powered down. This flash memory is already used to store data in smartphones, tablets, and many laptops and is supplanting hard drives inside data centers.
Flash stores data in transistors, by injecting or draining electrons from a conductive patch called a floating gate. The value of a gate can be read because the electrons inside it alter the conductivity of an adjacent current-carrying channel.
But flash, which celebrated its 25th anniversary in 2012, is now showing its age. As chip features shrink, cells sit closer and closer to one another, increasing interference and the chance of corrupted data. What’s more, fewer electrons—mere dozens in today’s most advanced versions—can be fit inside any cell. As a result, cells are more liable to leak charge and be affected by tiny changes.
Memory designers reckon the solution lies in the third dimension. And the first company with a fix is top memory maker Samsung, which announced in August that it had already started production on a 128-gigabit “vertical” NAND chip. SK Hynix, another South Korea-based firm, and Micron, based in Boise, Idaho, should ship 3-D NAND chips this year.
The companies are expected to turn each line of memory cells on its side, stringing them vertically in a forest of pillars. This will allow the memory manufacturers to take what’s essentially a right turn around Moore’s Law: They’ll pack more bits together not by shrinking features but by layering cells.
The change in architecture is expected to drive down the cost per bit and relax lithographic printing requirements. Indeed, Samsung’s new 3-D cells are likely made with a 30- to 40-nanometer process, a few generations behind the current, 20-nm class, says Dee Robinson, a senior analyst at IHS in El Segundo, Calif. With bigger cells and more electrons, Robinson says, “it’s actually a better performing chip.” She adds that it’s still unclear how quickly the cost of the new technology will decline to match that of traditional 2-D flash. But IHS estimates that 3-D flash will make up more than half of the NAND market by 2017.
The move carries some risk. “It involves technologies that have never been put into production before,” says analyst Jim Handy of Objective Analysis in Los Gatos, Calif. While some flash manufacturers are moving to 3-D, others, such as Intel, SanDisk, and Toshiba, are expected to stick with planar NAND for the moment, by taking advantage of new insulating materials. But they will inevitably be forced to switch to 3-D, Handy says.
A second 3-D technology—the Hybrid Memory Cube, or HMC—will also be ramping up in 2014. This effort focuses not on storage but on the computer’s memory workhorse: dynamic RAM.
The HMC won’t be any denser, smaller, or cheaper than an ordinary DRAM chip—it’ll be faster. It’s designed to surmount the “memory wall,” a communications bottleneck that has developed between multicore CPUs and memory, says Mike Black, a technology strategist at Micron. “The latest generations of high-performance CPUs are not capable of getting access to enough memory bandwidth,” Black says.
Micron developed the HMC in collaboration with SK Hynix and Samsung as well as more than 100 other semiconductor firms, research institutions, and potential customers. Their aim was to change the way systems handle DRAM signals. Instead of forcing DRAM chips to drive communications straight to a processor, the HMC off-loads most of that responsibility to a high-speed logic chip. DRAM dies are stacked atop this logic layer and are connected using thousands of copper wires called through-silicon vias (TSVs).
These vias allow the DRAM chip broad access to its bus; the logic layer cuts the number of connections that information must traverse on its way to the CPU. “This is the first commercial memory product offering [TSVs] as a standard part of the construction,” Black says.
Micron announced in September that it had begun shipping the first samples of its 2-GB memory cube. They’re expected to be produced at volume later this year, along with a 4-GB version. A single cube can offer 160 gigabytes per second of bandwidth, Micron says, compared with about 12 GB/s for current DRAM and 20 for the next generation, DDR4.
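Using the figures above, a quick calculation shows what the bandwidth gap means for a memory-bound job; the 8-GB working set is just an example size, not a number from Micron.

```python
# Time to stream an example 8 GB working set at the quoted bandwidths:
# ~12 GB/s for current DRAM, ~20 GB/s for DDR4, and 160 GB/s for one HMC.
working_set_gb = 8   # illustrative working-set size

for name, bandwidth_gb_s in [("current DRAM", 12), ("DDR4", 20), ("HMC", 160)]:
    seconds = working_set_gb / bandwidth_gb_s
    print(f"{name:12s}: {seconds * 1000:.0f} ms to stream {working_set_gb} GB")
```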
The HMC is quite fast, says Handy of Objective Analysis, but its success will hinge on finding a sufficiently large market to drive down costs. A large player such as Intel could sway the technology’s fate. But “so far,” he says, “Intel’s been playing it close to the vest.”
This article originally appeared in print as “Memory In the Third Dimension.”
Image: Stef Simmons
A physical state crucial for quantum computing has managed to survive at room temperature for 39 minutes in a record-breaking experiment. The new study gives a huge boost to quantum computing's prospects of storing information under normal conditions for long periods.
The quantum state of superposition allows quantum bits (qubits) of information to exist as both 1s and 0s simultaneously—unlike classical computing bits that exist as either 1 or 0. That makes superposition one of the main keys to unlocking quantum computing's potential of performing calculations much faster than classical computers. But past experiments had only succeeded in maintaining superposition at room temperature for mere seconds, compared to the latest record-breaking run of 39 minutes at 25 degrees C. The longer a quantum state can last, the more quantum computing calculations can be performed with it.
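In the standard notation, a qubit in superposition is written as a weighted combination of the two classical states, with the weights' squared magnitudes summing to one:

```latex
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
\qquad |\alpha|^2 + |\beta|^2 = 1
```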
The international team from Canada, the UK, and Germany that created the qubit set another benchmark by maintaining superposition for three hours at a cryogenic temperature of -269 degrees C (four degrees above absolute zero). They also showed how to maintain superposition while cycling from -269 degrees C to 25 degrees C and back again. All the study's achievements were detailed in the 15 November edition of the journal Science.
"These lifetimes are at least ten times longer than those measured in previous experiments," said Stephanie Simmons, a junior research fellow in materials science at Oxford University, in a press release.
Such results seem especially promising because the team used silicon as one of its hardware materials. Future quantum computers based on silicon could leverage the manufacturing processes of the existing semiconductor industry that gave "Silicon Valley" its name. The latest study also raises the tantalizing possibility of a silicon-based quantum computer operating at room temperature, Simmons told CBC News.
The big key to the team's success was the use of ionized phosphorus atoms implanted in the silicon. The nuclear spin states of the phosphorus atoms acted as the bits of information that the team could manipulate with magnetic fields—a spin state can point up to represent a 0 bit, down to represent a 1 bit, or any angle in between when in superposition. 
Other teams, especially in Australia, have tested the combination of phosphorus and silicon for quantum computing experiments before. But Simmons and her colleagues took advantage of recent studies showing how ionized phosphorus atoms—with missing electrons—could maintain their states of superposition for much longer than ordinary neutral phosphorus atoms.
The researchers also stabilized the quantum states of the phosphorus atoms by using isotopically enriched silicon to get rid of possible interference from impurities that arise in natural silicon samples. (Natural silicon is a mix of isotopes, including spin-carrying silicon-29, but the team used crystals of pure silicon-28.) And they applied a method called "dynamic decoupling" that uses electromagnetic pulses to help refocus the stability of a spin state.
Physicists still have a long way to go with quantum computing. The latest study manipulated the nuclear spins of about 10 billion phosphorus atoms so that they existed in the same quantum state—a simple way to run an endurance test.  But Simmons and her colleagues plan to take the next big step of testing different qubits in different quantum states.

China’s new top-ranked supercomputer is at the top of the heap for some needs, but not for the kind of data sifting the NSA and Amazon.com do

This week, China reclaimed the distinction of running the world’s fastest supercomputer; it last held that first-place ranking for eight months starting in October 2010 with its Tianhe-1A machine. Its new Tianhe-2 (“Milky Way–2”) computer is putting its competition to shame, however, performing calculations about as fast as the most recent No. 2 and No. 3 machines combined, according to Jack Dongarra, one of the curators of the list of 500 most powerful supercomputers and professor of computer science at the University of Tennessee, Knoxville.
But in its ability to search vast data sets, says Richard Murphy—senior architect of advanced memory systems at Micron Technology, in Boise, Idaho—the 3-million-core machine is not nearly as singular. And it’s that ability that’s required to solve many of the most important and controversial big data problems in high-performance computing, including those at the recently revealed Prism program, at the U.S. National Security Agency (NSA).
Murphy chairs the executive committee of an alternative supercomputer benchmark called Graph 500. Named for the mathematical theory behind the study of networks and numbered like its long-established competitor, Top500, Graph 500 quantifies a supercomputer’s speed when running needle-in-a-haystack search problems.
While China’s Tianhe-2 claims a decisive victory in the newest Top500 list, the same machine doesn’t even place in the top 5 on the most recent Graph 500 list, Murphy says. (Both rankings are recompiled twice a year, in June and November.)
It’s no mark against the Tianhe-2’s architects that a No. 1 ranking in Top500 doesn’t yield the highest Graph 500 score too. As computers gain in processing power, very often their ability to quickly access their vast memory banks and hard drives decreases. And it’s this latter ability that’s key to data mining.
“If you look at large-scale commercial problems, the data is growing so fast compared to the improvement on performance you get from Moore’s Law,” says Murphy, who previously worked for Sandia National Laboratories, in New Mexico.
The big data sets that supercomputers must traverse today can be staggering. They can be as all-encompassing as the metadata behind every call or text you send and receive, as the latest revelations about the NSA’s operations suggest. On a slightly less Orwellian note, other big data sets in our digital lives include every purchase on Amazon.com or every bit of content viewed on Netflix—which is constantly combed through to discover new connections and recommendations.
Each of these everyday calculation problems is just another example of the general problem of searching the graph of connections between every point in a data set. Facebook, in fact, called out the problem by name earlier this year when it launched its Facebook Graph Search, a feature that enables users to find, say, restaurants or music that friends like, or to make other, more unusual connections. 
The difference between such tasks and the simulations supercomputers typically run boils down to math, of course. Dongarra says that at its heart, the Top500 benchmark tracks a computer’s ability to race through a slew of floating-point operations, while Graph 500 involves rapid manipulation of integers—mostly pointers to memory locations.
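The Graph 500 kernel is essentially a breadth-first search over an enormous graph, and even a minimal version shows why the work is dominated by chasing pointers through memory rather than by arithmetic. The sketch below is our own illustration, not the official reference code.

```python
# Minimal breadth-first search over an adjacency list -- the style of kernel
# Graph 500 measures. Nearly every operation is an integer index or a memory
# lookup; there is no floating-point math at all.
from collections import deque

def bfs(adjacency, source):
    """Return the BFS parent of every vertex reachable from source."""
    parent = {source: source}
    frontier = deque([source])
    while frontier:
        vertex = frontier.popleft()
        for neighbor in adjacency[vertex]:   # scattered, hard-to-predict accesses
            if neighbor not in parent:
                parent[neighbor] = vertex
                frontier.append(neighbor)
    return parent

# Tiny example graph; the benchmark runs this on graphs with billions of edges.
graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
print(bfs(graph, 0))
```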
But Murphy adds that the bottleneck for Graph 500 tests is often not in the processor but instead in the computer’s ability to access its memory. (Moving data in and out of memory is a growing problem for all supercomputers.)
“Graph 500 is more challenging on the data movement parts of the machine—on the memory and interconnect—and there are strong commercial driving forces for addressing some of those problems,” Murphy says. “Facebook is directly a graph problem, as is finding the next book recommendation on Amazon, as [are] certain problems in genomics, or if you want to do [an] analysis of pandemic flu. There’s just a proliferation of these problems.”
Both the Top500 and Graph 500 benchmark rankings are being released this week at the International Supercomputing Conference 2013, in Leipzig, Germany.

After a year of outside investigation, questions remain about a controversial quantum computer

When in 1935 physicist Erwin Schrödinger proposed his thought experiment involving a cat that could be both dead and alive, he could have been talking about D-Wave Systems. The Canadian start-up is the maker of what it claims is the world’s first commercial-scale quantum computer. But exactly what its computer does and how well it does it remain as frustratingly unknown as the health of Schrödinger’s poor puss. D-Wave has succeeded in attracting big-name customers such as Google and Lockheed Martin Corp. But many scientists still doubt the long-term viability of D-Wave’s technology, which has defied scientific understanding of quantum computing from the start.
D-Wave has spent the last year trying to solidify its claims and convince the doubters. “We have the world’s first programmable quantum computer, and we have third-party results to prove it computes,” says Vern Brownell, CEO of D-Wave.
But some leading experts remain skeptical about whether the D-Wave computer architecture really does quantum computation and whether its particular method gives faster solutions to difficult problems than classical computing can. Unlike ordinary computing bits that exist as either a 1 or a 0, the quantum physics rule known as superposition allows quantum bits (qubits) to exist as both 1 and 0 at the same time. That means quantum computing could effectively perform a huge number of calculations in parallel, allowing it to solve problems in machine learning or figure out financial trading strategies much faster than classical computing could. With that goal in mind, D-Wave has built specialized quantum-computing machines of up to 512 qubits, the latest being a D-Wave Two computer purchased by Google for installation at NASA’s Ames Research Center in Moffett Field, Calif.
D-Wave has gained some support from independent scientific studies that show its machines use both superposition and entanglement. The latter phenomenon allows several qubits to share the same quantum state, connecting them even across great distances.
But the company has remained mired in controversy by ignoring the problem of decoherence—the loss of a qubit’s quantum state, which causes errors in quantum computing. “They conjecture you don’t need much coherence to get good performance,” says John Martinis, a professor of physics at the University of California, Santa Barbara. “All the rest of the scientific community thinks you need to start with coherence in the qubits and then scale up.”
Most academic labs have painstakingly built quantum-computing systems—based on a traditional logic-gate model—with just a few qubits at a time in order to focus on improving coherence. But D-Wave ditched the logic-gate model in favor of a different method called quantum annealing, also known as adiabatic quantum computing. Quantum annealing aims to solve optimization problems that resemble landscapes of peaks and valleys, with the lowest valley representing the optimum, or lowest-energy, answer.
Classical computing algorithms tackle optimization problems by acting like a bouncing ball that randomly jumps over nearby peaks to reach the lower valleys—a process that can end up with the ball getting trapped when the peaks are too high.
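That classical bouncing-ball picture corresponds roughly to simulated annealing. The sketch below is our own toy illustration on a one-dimensional landscape, not D-Wave's quantum annealer: the ball hops randomly, sometimes uphill while the "temperature" is high, and can still end up trapped behind a tall peak.

```python
# Toy simulated annealing on a 1-D landscape -- an illustration of the classical
# approach described above, not D-Wave's quantum hardware.
import math
import random

def energy(x):
    # Two valleys separated by a peak; the valley near x = +2 is slightly lower.
    return (x - 2) ** 2 * (x + 2) ** 2 - x

def simulated_annealing(steps=10_000, temp=5.0, cooling=0.999):
    x = -2.0                                       # start in the shallower valley
    for _ in range(steps):
        candidate = x + random.uniform(-0.5, 0.5)  # random local jump
        delta = energy(candidate) - energy(x)
        # Always accept downhill moves; accept uphill moves with a probability
        # that shrinks as the temperature cools. This is how the "ball" hops
        # over peaks early on -- and why it can still get stuck later.
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x = candidate
        temp *= cooling
    return x

print(f"Best x found: {simulated_annealing():.2f}")  # usually near +2, sometimes trapped near -2
```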
Quantum annealing takes a different and much stranger approach. The quantum property of superposition essentially lets the ball be everywhere at once at the start of the operation. The ball then concentrates in the lower valleys, and finally it can aim for the lowest valleys by tunneling through barriers to reach them.
That means D-Wave’s machines should perform best when their quantum-annealing system has to tunnel only through hilly landscapes with thin barriers, rather than those with thick barriers, Martinis says.
Independent studies have found suggestive, though not conclusive, evidence that D-Wave machines do perform quantum annealing. One such study—with Martinis among the coauthors—appeared in the arXiv e-print service this past April. Another study by a University of Southern California team appeared in June in Nature Communications.
But the research also shows that D-Wave’s machines still have yet to outperform the best classical computing algorithms—even on problems ideally suited for quantum annealing.
“At this point we don’t yet have evidence of speedup compared to the best possible classical alternatives,” says Daniel Lidar, scientific director of the Lockheed Martin Quantum Computing Center at USC, in Los Angeles. (The USC center houses a D-Wave machine owned by Lockheed Martin.)
What’s more, D-Wave’s machines have not yet demonstrated that they can perform significantly better than classical computing algorithms as problems become bigger. Lidar says D-Wave’s machines might eventually reach that point—as long as D-Wave takes the problem of decoherence and error correction more seriously.
The growing number of independent researchers studying D-Wave’s machines marks a change from past years when most interactions consisted of verbal mudslinging between D-Wave and its critics. But there’s still some mud flying about, as seen in the debate over a May 2013 paper [PDF] that detailed the performance tests used by Google in deciding to buy the latest D-Wave computer.
Catherine McGeoch, a computer scientist at Amherst College, in Massachusetts, was hired as a consultant by D-Wave to help set up performance tests on the 512-qubit machine for an unknown client in September 2012. That client later turned out to be a consortium of Google, NASA, and the Universities Space Research Association.
Media reports focused on the fact that D-Wave’s machine had performed 3600 times as fast as commercial software by IBM. But such reporting overlooked McGeoch’s own warnings that the tests had shown only how D-Wave’s special-purpose machine could beat general-purpose software. The tests had not pitted D-Wave’s machines against the best specialized classical computing algorithms.
“I tried to point out the impermanency of that [3600x] number in the paper, and I tried to mention it to every reporter that contacted me, but apparently not forcefully enough,” McGeoch says.
Indeed, new classical computing algorithms later beat the D-Wave machine’s performance on the same benchmark tests, bolstering critics’ arguments.
“We’re talking about solving the one problem that the D-Wave machine is optimized for solving, and even for that problem, a laptop can do it faster if you run the right algorithm on it,” says Scott Aaronson, a theoretical computer scientist at MIT.
Aaronson worries that overblown expectations surrounding D-Wave’s machines could fatally damage the reputation of quantum computing if the company fails. Still, he and other researchers say D-Wave deserves praise for the engineering it has done.
The debate continues to evolve as more independent researchers study D-Wave’s machines. Lockheed Martin has been particularly generous in making its machine available to researchers, says Matthias Troyer, a computational physicist at ETH Zurich. (Troyer presented preliminary results at the 2013 Microsoft Research Faculty Summit suggesting that D-Wave’s 512-qubit machine still falls short of the best classical computing algorithms.)
Google’s coalition also plans to let academic researchers use its D-Wave machine.
“The change we have seen in the past years is that by having access to the machines that Lockheed Martin leased from D-Wave, we can engage with the scientists and engineers at D-Wave on a scientific level,” Troyer says.
Flash memory is fantastic stuff. It's small, it's fast, and it's robust. It's also absurdly expensive if you want a lot of it, which is at odds with our evolving media-hungry mobile lifestyle. Google, Apple, and Amazon would like us to store everything in the cloud. But hard disk drive manufacturers have other ideas.
For a few years now, Seagate has offered wireless traditional hard drives to give mobile devices a storage boost, but at CES this year, they're showing off a prototype tablet that skips the peripheral completely. And somehow, it does so without many compromises.
Seagate doesn't have a name for this prototype tablet, and they don't intend to jump into the tablet game. It's more of a design concept, intended to illustrate the feasibility of stuffing an old-school magnetic platter hard drive into a slim tablet.
The hard drive in question is Seagate's impressively skinny "Ultra Mobile HDD," a five-millimeter-thick single system with 500GB of storage, robust power management, and drop protection. It's cheap, too: Seagate won't tell us how much, exactly, except that it's "a fraction of the cost" of even just 64GB of flash memory.
Of course there's plenty of reasons we don't already have hard drives in tablets. The compromise that immediately leaps to mind when you add a spinning hard drive is, of course, battery life. Seagate's solution in this prototype was to hybridize the storage with the addition of 8GB of flash memory. The vast majority of the time, the tablet is just running on flash, and the magnetic drive is powered off. If you want to play a movie, though, the drive will spin up, swap the movie onto the flash memory through a fast 6 Gb/s SATA interface, and then spin down again. The upshot of this is that you have 500GB that you can access whenever you want, but you're not paying for it in battery life, because it's almost never running.
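Conceptually, the hybrid behaves like a small always-on cache sitting in front of a mostly sleeping disk. The sketch below is our own simplified illustration of that policy, not Seagate's firmware, and it omits details such as eviction from the small flash tier.

```python
# Simplified sketch of the flash-plus-platter tiering described above -- our
# own illustration, not Seagate's actual firmware logic.
class HybridStorage:
    def __init__(self):
        self.flash = {}            # small, always-on flash tier (8GB in the prototype)
        self.disk = {}             # large, normally powered-down 500GB platter
        self.disk_spinning = False

    def read(self, name):
        if name in self.flash:     # common case: serve from flash, disk stays off
            return self.flash[name]
        self._spin_up()
        data = self.disk[name]     # fetch over the SATA link
        self.flash[name] = data    # stage it in flash for subsequent reads
        self._spin_down()
        return data

    def _spin_up(self):
        self.disk_spinning = True  # the drive draws real power only while spinning

    def _spin_down(self):
        self.disk_spinning = False

drive = HybridStorage()
drive.disk["movie.mp4"] = b"...video bytes..."
drive.read("movie.mp4")            # first read spins the platter up briefly
print("movie.mp4" in drive.flash)  # True: later reads come straight from flash
```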
With battery life rendered a non-issue, putting a drive like this into a tablet is almost entirely upside. You get a lot more storage, of course, and you also save a lot of money. According to Seagate, there's "no compromise" in battery life, robustness, or performance: you just get more storage for less money, and that's it. Hopefully, a manufacturer will take the plunge on this, and give us a consumer model to play with at some point in the near future.

Also: Fast, Portable Storage

The other interesting thing that Seagate had on display is something that you can buy, right now. It's called Backup Plus Fast, and it's a chubby 2.5" external USB 3.0 hard drive. It's chubby (the picture above shows it next to a regular-sized external HD) because there are actually two drives in there, set up in a striped (RAID 0) configuration. You get a staggering four terabytes of bus-powered storage that can maximize its USB 3.0 connection with transfer speeds of up to 220 MB/s, great for working with video or piles of pictures.
While the drive is currently only available in RAID 0, Seagate told us that they're looking at whether they'll put out a RAID 1 (mirrored) version at some point in the future. Personally, I'm super paranoid about irreplaceable media like pictures and videos, and I'd love to have a portable solution that offers protection against drive failure, even if it means sacrificing the capacity and speed.
The Seagate Backup Plus Fast is available now for a penny under $300.
[ Seagate ]