Thursday, November 11, 2010

Is this the first step toward a flying car?

A concept rendering of the Transformer, a roadworthy vehicle that would carry four soldiers and take off like a helicopter 

Along with the jetpack, the flying car tops the list of classic science-fiction imaginings that lead legions of fans to ask -- why don't we have this yet?
Now researchers, with some cash from the U.S. military, might be taking a step toward making these hovering vehicles -- seen in such diverse works as "Blade Runner" and "The Jetsons" -- a reality.
DARPA, the Defense Advanced Research Projects Agency, is awarding grants to scientists to help develop its Transformer program, which seeks to create a road-worthy vehicle that can take off vertically like a helicopter and fly.
This week, the robotics institute at Carnegie Mellon University was awarded a $988,000 contract to develop a flight system for the Transformer.
The institute has already worked on automated flying vehicles, which researchers say would be crucial to the success of a military craft that could go from an earthbound combat situation into the air seamlessly.
"The [Transformer] is all about flexibility of movement, and key to that concept is the idea that the vehicle could be operated by a soldier without pilot training," said Sanjiv Singh, a CMU research professor of robotics.
"In practical terms, that means the vehicle will need to be able to fly itself, or to fly with only minimal input from the operator. And this means that the vehicle has to be continuously aware of its environment and be able to automatically react in response to what it perceives."
Carnegie Mellon is one of six contractors DARPA has chosen for the Transformer, or TX, program. AAI Corp. and Lockheed Martin Co. were selected by DARPA to develop overall design concepts for the transforming vehicle.
DARPA frequently engages private-sector businesses and amateur technology buffs for ideas on innovations that could be used on the battlefield and elsewhere.
Among them are a recurring robot race and a nationwide DARPA balloon hunt that awarded prizes to players who most efficiently used online networking to hunt down 10 weather balloons.
The vehicle DARPA is considering would be able to carry four troops and up to 1,000 pounds of equipment for 250 miles, either on land or through the air.
"Its enhanced mobility would increase survivability by making movements less predictable and would make the vehicle suitable for a wide variety of missions, such as scouting, resupply and medical evacuation," Carnegie Mellon said in a written release.
This isn't Carnegie Mellon's first outing with DARPA, or in the field of automated vehicles.
The university won DARPA's 2007 Urban Challenge robot road race with a self-driving SUV called "Boss."
They've worked on a self-driving submarine, and earlier this year had an autonomous helicopter demonstration. The Carnegie Mellon contract is for 17 months.
The Carnegie Mellon prototype follows the recent news that a Florida man built a flying car that was certified by the FAA.

New probe memory could achieve user densities over 10 terabits per square inch

Researchers have proposed a new strategy for writing data for scanned-probe memories with user densities that are potentially more than twice as high as those achieved with conventional approaches. While previous research has shown that scanned-probe memories have the potential to achieve storage densities of up to 4 Tbit/in2, the new study shows how the density could be increased to 10 Tbit/in2 or more.
This image shows mark-length recorded bits with the corresponding current below.
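To put those densities in perspective, here is a rough back-of-the-envelope sketch of the bit spacing they imply. The square-grid assumption is mine, not the researchers'; only the density figures come from the article.

    import math

    # Rough sketch: what bit-to-bit spacing do the quoted areal densities imply,
    # assuming the bits sit on a simple square grid?
    NM2_PER_IN2 = (2.54e7) ** 2            # one inch is 2.54e7 nanometers

    def bit_pitch_nm(density_tbit_per_in2):
        bits_per_in2 = density_tbit_per_in2 * 1e12
        return math.sqrt(NM2_PER_IN2 / bits_per_in2)

    print(bit_pitch_nm(4))    # ~12.7 nm pitch at 4 Tbit/in^2
    print(bit_pitch_nm(10))   # ~8.0 nm pitch at 10 Tbit/in^2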

Sensors Use Building's Electrical Wiring as Antenna

This sensor sends data wirelessly to the copper wiring within a building's walls. The wiring transmits the signal to a base station plugged into an outlet.

 
Wireless sensors scattered throughout a building can monitor everything from humidity and temperature to air quality and light levels. This seems like a good idea--until you consider the hassle and cost of replacing the sensors' batteries every couple of years. The problem is that most wireless sensors transmit data in a way that drains battery power.
Researchers at the University of Washington have come up with a way to reduce the amount of power a sensor uses to transmit data by leveraging the electrical wiring in a building's walls as an antenna that propagates the signal. The approach extends a wireless sensor's range, and it means that its battery can last up to five times longer than existing sensors, say the researchers.
The technology, called Sensor Nodes Utilizing Powerline Infrastructure (SNUPI), sends a small trickle of data wirelessly at a frequency that resonates with the copper wiring in a building's walls, says Shwetak Patel, professor of computer science and electrical engineering at the University of Washington. The copper wiring, which can be up to 15 feet away from the sensors, picks up the signal and acts as a giant receiving antenna, transmitting the data at 27 megahertz to a base station plugged into an electrical outlet somewhere in the building.
"The powerline has an amplification effect," says Patel. While many low-power sensors only have a range of a few feet, he says, his prototype sensors can cover most of a 3,000-square-foot home. In most wireless sensor schemes, Patel says, walls impede transmission of sensor data, but with SNUPI, "the more walls in the home, the better our system works." A paper describing the work will be presented at the Ubiquitous Computing conference in Copenhagen, Denmark, in September.
"Most academic research on in-building sensor nodes has looked at building infrastructure as a problem," says Matt Reynolds, professor of electrical and computer engineering at Duke University. Patel's work is interesting because it "turns the problem on its head," he says. "The building's wiring is part of the solution rather than part of the problem."
Using powerlines to transmit data is not a new idea. Broadband over powerlines, or BPL, uses the power grid to provide Internet connectivity. But using powerlines to extend the range of ambient sensors, and reduce their power consumption, is novel.
The researchers' prototype uses less than one milliwatt of power when transmitting data to the powerline antenna, and less than 10 percent of that power is used for communication. Future versions, says Patel, will reduce the amount of power the sensor uses for computation, and will also include a receiving antenna for two-way communication between the sensors and the base station. This could enable the sensor to accept confirmation that all of the data has been received properly.
Patel, who founded an in-home energy-monitoring startup called Zensi that was sold to Belkin earlier this year, has launched another company to commercialize SNUPI. He suspects that the approach can be used for more than monitoring air quality in homes--it could also be used to collect data from wearable sensors or implanted medical devices. In fact, Patel says, preliminary studies have shown that the popular Fitbit pedometer, which sends data to a base station wirelessly, could last for a year on a single charge, rather than its current duration of 14 days, using the SNUPI scheme.
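The battery-life claim is easy to sanity-check with a rough calculation. The sketch below assumes a common 3-volt coin cell and illustrative average power draws; neither figure comes from the article, which only states that the prototype draws under one milliwatt while transmitting and that battery life can stretch up to five times longer.

    # Back-of-the-envelope battery-life estimate for a low-power sensor node.
    # The capacity and average-power figures are assumptions for illustration.
    BATTERY_MAH = 230                       # assumed CR2032-style coin cell
    VOLTAGE = 3.0                           # volts
    CAPACITY_MWH = BATTERY_MAH * VOLTAGE    # ~690 mWh of stored energy

    def lifetime_days(avg_power_mw):
        return CAPACITY_MWH / avg_power_mw / 24

    # A conventional node averaging ~0.10 mW vs. a SNUPI-style node averaging ~0.02 mW
    print(lifetime_days(0.10))   # ~290 days
    print(lifetime_days(0.02))   # ~1,440 days -- roughly the claimed 5x improvement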

The i-MiEV

The i-MiEV has a range of 160 kilometers and can be charged in either of two ways. Fully charging the battery from a regular 100- or 200-volt power outlet requires 14 or 7 hours, respectively. Alternatively, a dedicated quick-charging system can provide an 80 percent charge in about 30 minutes.
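Those charge times are roughly consistent with a back-of-the-envelope calculation. The sketch below assumes a 16 kWh pack and 85 percent charging efficiency; neither figure appears in the article, so treat the numbers as illustrative only.

    # Rough sanity check of the quoted charge times for an assumed 16 kWh pack.
    BATTERY_KWH = 16.0        # assumed pack capacity (not stated above)
    EFFICIENCY = 0.85         # assumed charger/battery efficiency

    def charge_hours(charger_kw, fraction=1.0):
        return BATTERY_KWH * fraction / (charger_kw * EFFICIENCY)

    print(charge_hours(1.4))       # ~13 h from a ~1.4 kW (100 V) outlet
    print(charge_hours(2.8))       # ~7 h from a ~2.8 kW (200 V) outlet
    print(charge_hours(30, 0.8))   # ~0.5 h to 80% on a ~30 kW quick charger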
Product: i-MiEV
Cost: Under $30,000
Availability: Now
Source: www.mitsubishicars.com
Company: Mitsubishi

Companies, processes, and technology behind India's UID project

India's unique identification program has started collating its database. While there is healthy debate about how moral or ethical it is to collect biometric data from citizens, the technology behind the project is what interested me. The Unique Identification Number project gained a lot of attention internationally, with companies like Microsoft and Yahoo wanting to be a part of it. Here is what is publicly known about the highly IT-dependent initiative led by India's IT czar Nandan Nilekani.
Mahindra Satyam and Morpho (part of the Safran Group) will provide the technology for issuing UIDs. According to the press release, Mahindra Satyam and Morpho will be responsible for maintaining the databases and cross-checking the information. Mahindra Satyam will provide expertise in system integration and IT, while Morpho will provide the biometric equipment. Morpho's relationship with the Indian government goes beyond this project; it will also be providing explosive detectors for the capital's Indira Gandhi International Airport.
The UIDAI website has quite a bit of information about the various equipment and how it is integrated. The biometric scanners for the fingerprint and iris scans come from L-1 Identity Solutions and Cross Match Technologies. Morpho has signed an agreement with L-1 to use its scanners for this project. In simple terms: UIDAI brings in Morpho, Morpho brings in L-1. L-1 and Cross Match will each provide one iris scanner and one fingerprint scanner, and the certified devices from both manufacturers are listed on the UIDAI website.
According to the client software readme documents available on the UIDAI website, the software is currently designed for 32-bit operating systems and requires Microsoft .NET Framework 3.5 SP1 and Microsoft SQL Server 2008 R2.
According to another technical document, titled Registrar Integration Manual, a pre-enrollment process takes place before citizens come in for the biometric scans, during which demographic data is collected and verified.

Finally, who can apply and what happens when you go to get yourself enrolled for a UID?
Any citizen of India with the documents asked for by the registrar can apply. The three phases of the process are:
  1. Verification and Data Entry
  2. Biometric Data Scans
  3. Signoff and Acknowledgement
Technology blog Technospot.In has done a detailed post on these steps.
The documents I referred to for this are available for download and reference in UIDAI’s download section.
PS: No matter what some blogs might claim, registering for a UID is optional as of now.

Indian government plans to introduce SIM cards with digital certificates

Communication over email and SMS using cell phones is widespread in India, given the country's mobile penetration. The Indian government is planning to come up with guidelines for issuing encrypted SIM cards with digital signatures unique to an individual or company.
The encrypted SIM cards will allow secure interaction over email and SMS from a cell phone. Phone banking has picked up in India, and enterprises make heavy use of email that they would prefer to keep secure to avoid any breach. Digital certificates for SIM cards would be helpful in such use cases.
The encrypted SIM card will be issued as a proxy SIM card. N Vijayaditya, Controller of Certifying Authorities, said of the project, "To make India's future in mobile banking and mobile transactions secure, we are recommending the certification of proxy SIM cards with digital certificates, which can be inserted on top of regular SIM cards. These will enable signatures for emails and even SMSes sent via mobile phones."
According to Murali Venkatesan, an enterprise specialist at Sify, a regular 160-character SMS is about 40-50 bytes of data, whereas a digitally certified SMS would be around 256KB. A bigger message means a higher cost per message, but given the ease of use of mobile phones, enterprises might still consider the option. Banks and government tenders already use encrypted keys for transactions on computers; a proxy SIM card, however, could be a hassle for the end user to manage.
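To see where the extra bytes come from, here is a minimal sketch of signing an SMS-sized message with a 2048-bit RSA key using Python's cryptography package. This only illustrates the principle of certificate-backed signatures, not the actual proxy-SIM implementation, and the message text is made up.

    # Minimal sketch: digitally signing a short message with an RSA key.
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    sms = "Transfer Rs 5000 to account 1234".encode("utf-8")   # ~32-byte payload

    signature = key.sign(sms, padding.PKCS1v15(), hashes.SHA256())
    print(len(sms), len(signature))   # 32-byte message, 256-byte signature

    # The receiving side verifies with the sender's public key / certificate;
    # verify() raises an exception if the signature does not match.
    key.public_key().verify(signature, sms, padding.PKCS1v15(), hashes.SHA256())

Even before adding the certificate chain itself, the signature alone is several times larger than the message, which is why signed SMS traffic carries a noticeable size and cost overhead.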

Intel vs. AMD: Does the CPU really matter?

With the knowledge in hand that AMD was announcing its Opteron 6000 series of CPUs this week, in response to last week's release of Intel's Xeon 5600, I started talking to the folks responsible for making the purchasing decisions for a few large SMB customers, as well as consultants to Fortune 500-size datacenter customers. I asked them one simple question: does the processor in the box affect the purchase decision?
The answer was a mixed bag, but it boiled down to this: maybe.

Unsurprisingly, much of the interest in these two new CPU releases depended on where each IT person was in the equipment replacement cycle. A common thread was a sit-back-and-watch attitude, though everyone was excited, if that is the right word, to move their 1P to 4P servers to these new, higher-core-count, higher-performance CPUs when the opportunity presented itself. These folks have the luxury of watching the market and the media and making use of the information that appears about the performance and value of the two platforms over the next few months.
One IT guy I talked to put it very succinctly: "I don't have a dog in this race. My job is to spend my budget as effectively as possible." He didn't care which processor was in the box; he only buys from top-tier server vendors, and for his area of responsibility, squeezing the last erg of performance out of a server wasn't really the concern. Stability, reliability, and meeting the less compute-intensive needs of his business unit were the driving factors.
“My job is to spend my budget as effectively as possible”
Contrast this with the director of a database server computing unit I spoke with. He was a diehard Intel fan, and his belief was that even with more cores, the AMD CPUs wouldn't deliver the performance of the new Intel processors, but he was hedging his bets. He did plan to evaluate the offerings from his vendor of choice to determine whether the larger number of cores would make a difference in his environment.
Given that AMD seems to have chosen to focus on value and energy efficiency, it is likely that his testing will still show that the Xeon 5600 series will hold an edge, in his application, over the Opteron 6000.
For really serious interest in the potential of AMD's 8- and 12-core processors, I had to step out of the large datacenter space to the people who buy only a few servers at a time. IT folks supporting smaller server groups expressed interest in seeing published performance numbers for these new CPUs and a willingness to purchase if they proved a good value for their more tightly constrained budgets.
Bigger datacenter managers didn’t really focus on the buy-in cost of their new servers. They have far more interest in the ongoing expenses related to the servers, and as most of the ones I know tend to buy from a specific server vendor, they already have an excellent idea of the projected costs of their server platforms over the usable life of the hardware. When viewed from this perspective, the price delta between the Intel and AMD offerings isn’t really significant.
Energy utilization in this scenario has the potential to be a purchasing issue, but evaluating the actual energy consumption of the servers in real-world use is going to be a much more difficult metric to define.  While the power consumption numbers of the processors are clear, the value of the power vs. workload metric, for any specific user scenario, is rarely easily seen, especially in short term testing.
One group I really haven't been able to get feedback from yet is those who use software licensed per core, as opposed to per CPU. Doubling or tripling the number of cores in their servers could have a very deleterious effect on their budget numbers, regardless of performance improvements. Users of Microsoft server OSes don't need to worry; their license is per processor, not per core. VMware licenses currently allow up to 12 cores per processor, so for the moment, VMware users are also unaffected. Following up on software that is still licensed per core will need to wait for a later post.
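As a rough illustration of why this matters, here is a toy comparison of the two licensing models; the prices and core counts are made up purely for illustration.

    # Hypothetical licensing-cost comparison; prices and core counts are invented.
    def license_cost(sockets, cores_per_socket, per_socket_price=0, per_core_price=0):
        return sockets * (per_socket_price + cores_per_socket * per_core_price)

    # Moving a 2-socket server from 6-core to 12-core CPUs:
    print(license_cost(2, 6,  per_core_price=2000))    # $24,000 under per-core licensing
    print(license_cost(2, 12, per_core_price=2000))    # $48,000 -- doubles with the cores
    print(license_cost(2, 12, per_socket_price=5000))  # $10,000 -- unchanged per-socket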
It's been a long time since the CPU was the sole deciding factor for a server platform in major business. The package delivered by the server vendor, that combination of price, support, experience, and reliability, is usually much more important than the vendor name that appears on the CPU.

Tuesday, November 9, 2010

How long will Microsoft support XP, Vista, and Windows 7?

In an ideal world, old versions of Windows would roll off Microsoft's list of supported products and be replaced by new ones at regular, predictable intervals. That upgrade cycle has been anything but smooth and predictable in recent years, however. Microsoft's support policy is still returning to normal after XP was allowed to live well past its normal retirement date and then got multiple extensions to placate customers who just said no to Vista.
I was reminded of this confusion earlier today when Matt Gardenghi asked a great question via Twitter:
Where would I find a list of supported MS OS versions? Trying to determine what’s in support and what’s out of support.
Microsoft product lifecycle policy is actually quite coherent and easy to understand, at least on paper. I wrote this two years ago in How long will Microsoft support XP and Vista?:
Microsoft has a well-documented support lifecycle for its software products. It’s part of the agreement that the company makes with everyone who installs Windows, especially business customers who want some assurance that they’ll be able to get updates and support for operating systems and applications even if they choose not to upgrade to the latest and greatest.
Now that Windows 7 is firmly entrenched in the marketplace, I'm starting to get questions about its life span (and it doesn't help when high-profile web sites and bloggers get the facts dead wrong, as they did last month with the bogus "XP in 2020" story). To help clear the air, I've put together a chart listing all of Microsoft's supported operating systems. The calculations start with the general availability (GA) date for each product. Consumer operating systems are supported for five years after their GA date, and business OSes are supported for 10 years (with the last five years classed as "extended support"). The official date of retirement for support is the second Tuesday in the first month of the quarter following that anniversary (which also happens to be Patch Tuesday), which means each support cycle typically gets a few weeks or months of extra support tacked on at the end.
For Windows 7, you can do the math yourself. The GA date for all Windows 7 editions was October 22, 2009. Five years after that date is October 22, 2014. The next calendar quarter begins in January, 2015, and the second Tuesday of that month is January 13. So, that’s when mainstream support is scheduled to end. Extended support for business editions goes an extra five years, until January 14, 2020 (the second Tuesday of the month).
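If you would rather automate that arithmetic, here is a small sketch of the rule as described above: GA date plus five or ten years, then the second Tuesday of the first month of the following calendar quarter. The dates are the ones quoted in this post; the code itself is simply my rendering of the rule, not anything Microsoft publishes.

    from datetime import date, timedelta

    def second_tuesday(year, month):
        # Patch Tuesday: the second Tuesday of the given month.
        first = date(year, month, 1)
        offset = (1 - first.weekday()) % 7      # Monday=0, Tuesday=1
        return first + timedelta(days=offset + 7)

    def end_of_support(ga, years):
        anniv = ga.replace(year=ga.year + years)
        # First month of the calendar quarter following the anniversary.
        month = ((anniv.month - 1) // 3 + 1) * 3 + 1
        year = anniv.year
        if month > 12:
            month, year = month - 12, year + 1
        return second_tuesday(year, month)

    ga = date(2009, 10, 22)                     # Windows 7 general availability
    print(end_of_support(ga, 5))                # 2015-01-13 (mainstream support)
    print(end_of_support(ga, 10))               # 2020-01-14 (extended support)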
For Windows XP, however, those calculations don’t work, because Microsoft has extended XP’s life artificially. To find XP’s end-of-support date, you should use the Microsoft Product Lifecycle Search page to get the official answer. Enter the name of the OS and click Search, and you get back a table that shows the general availability date, the retirement dates for mainstream and extended support, and retirement dates for service packs, which are governed by a separate set of rules.
Here’s the set of search results for Windows XP:

The one date that matters most on this chart is the one I’ve circled in red—April 8, 2014.
Service Packs 1 and 1a were retired back in 2006. Service Pack 2 rode off into the sunset last month, on July 13. And Service Pack 3 will be retired along with all editions of Windows XP on Patch Tuesday, April 8, 2014.
By that time, Windows 8 will probably be well past its first birthday, and Microsoft will (at least for a short time) be supporting four separate Windows versions. Here’s a table that summarizes the support policy for all of the current Windows desktop versions:

The point of having a predictable release cycle (a new Windows version every three years) is to encourage upgrades. That's especially true for consumers, who can skip one version but not two. Even so, full support will be available until the beginning of 2015. For businesses, anyone considering a Windows 7 migration can take comfort in knowing it will be supported for nearly another decade, until January 14, 2020.
Update: My Windows 7 Inside Out co-author, Carl Siechert, asks another good question: “What, exactly, is ’support’?”
For the answer, I defer to the Microsoft Support Lifecycle blog:
Generally, the minimum bar for something to be considered supported is that we provide at least one type of assisted support option and no-charge security updates. This means that, at a minimum, the customer will have some avenue to contact Microsoft for assistance and Microsoft will continue to provide security updates through channels like Windows Update and the Download Center.

Pocket-sized Sony HDR-TG1 Camcorder

WORLD’S SMALLEST FULL HD CAMCORDER FROM SONY


Nowadays, "HD camcorder" is a buzzword, and if the handycam is from Sony it appeals to us even more. Yes, I am talking about the diminutive and compact Sony HDR-TG1 camcorder, which records video in full HD resolution (in 1080i video mode). This high-definition capability influences the choice of vacationers who don't want to "travel heavy." Another advantage for any traveler is the handycam's durable titanium body, which makes it highly resistant to scratches.
The handycam packs advanced video and audio technologies into a simple, intuitive design. The Sony HDR-TG1 is equipped with a high-quality Carl Zeiss 10x optical zoom lens and a 2-megapixel CMOS sensor engineered to minimize picture noise. Audio is captured in Dolby Digital 5.1-channel surround sound, and the built-in zoom microphone homes in on the subject being recorded for clear audio to go with the video.

A Step toward Holographic Videoconferencing

A full-color holographic display system refreshes every two seconds, fast enough to send live 3-D images.
Video hologram: This display can refresh the image every two seconds. 
Researchers have made a major step toward a holographic videoconferencing system that would let people communicate with one another almost as if they were in the same room. They have developed a full-color, 3-D display that refreshes every two seconds, and they've used it to send live images of a researcher in California to collaborators in Arizona. In the coming years, the researchers hope to develop a system that refreshes at standard video rates and can compete with other 3-D displays.
"Holography makes for the best 3-D displays because it's closest to how we see our surroundings," says Nasser Peyghambarian, chair of photonics and lasers at the University of Arizona. A hologram is a display that uses an optical effect called diffraction to produce the light that would have come from an object in the image if the physical object were in front of the viewer. Holographic images appear to project out into the space in front of the display. By walking around a holographic image, it's possible to see objects in it from different angles.
Holograms don't require glasses to view, and unlike other glasses-free 3-D systems, multiple people can use them simultaneously without having to stand in a particular place. But the development of holographic displays has lagged behind that of other 3-D systems because of the difficulty in creating holographic materials that can be rapidly rewritten to refresh the image.
The first video holographic display was made at MIT's Media Lab in 1989. The volume of the hologram was just 25 cubic millimeters, smaller than a thimble. Since then, researchers have been trying to develop practical holographic systems but have come up against limitations in scaling these displays up to larger sizes. A big challenge has been the attempt to eliminate expensive optical components without sacrificing the refresh rate.
A few companies sell 3-D displays for medical and design applications, but many of these systems don't produce true holograms, and they tend to be expensive, not least because they're produced in small amounts. "Some need lasers, some need powerful computers to operate, or many displays stacked together," says Jennifer Colegrove, director of display technologies at industry research firm DisplaySearch. She notes that in 2010, such "volumetric" displays will generate $5 million in revenue, a small sliver of the $1 billion 3-D display market. Despite their expense, she says, "these displays are still primitive," and lack a combination of image quality, speed, and display size.
In collaboration with Nitto Denko Technical, the California-based research arm of a Japanese company, Peyghambarian has been working to improve the sophistication and refresh rate of holographic displays. The new displays refresh significantly faster than previous systems and are the first to be combined with a real-time camera system to show live images rather than ones recorded in advance. The new displays are based on a composite materials system developed by Nitto Denko Technical. In 2008, the groups produced a four-inch-by-four-inch red holographic display that could be rewritten every four minutes. By improving the materials used to make the display and the optical system used to encode the images, they have now demonstrated a full-color holographic display that refreshes every two seconds. This work is described today in the journal Nature.
The key to the technology is a light-responsive polymer composite layered on a 12-inch-by-12-inch substrate and sandwiched between transparent electrodes. The composite is arranged in regions called "hogels" that are the holographic equivalent of pixels. Writing data to the hogels is complex, and many different compounds in the composite play a role. When a hogel is illuminated by an interference pattern produced by two green laser beams, a compound called a sensitizer absorbs light, and positive and negative charges in the sensitizer are separated. A polymer in the composite that's much more conductive to positive charges than negative ones pulls the positive charges away.
This charge separation generates an electrical field that in turn changes the orientation of red, green, and blue dye molecules in the composite. This change in orientation changes the way these molecules scatter light. It's this scattering that generates a 3-D effect. When the hogel is illuminated with light from an LED, it will scatter the light to make up one visual point in the hologram.
Writing the data to the holographic display used to take several minutes. Part of the way the Nitto Denko researchers sped up the process was to decrease the viscosity of the dye materials so that they can change position more rapidly. The movement of the dye molecules inside the composite is analogous to the movement of liquid crystals in a conventional display, says Joseph Perry, professor of chemistry at Georgia Tech. A path to further increasing the speed of the display might be to make these materials more like liquid crystals, which can switch not just at video rates but faster than the human eye is capable of detecting.
Another boost in speed came from using a faster laser to write the data. For this to work, the researchers also had to pair the laser with polymers in the display that could respond to these faster pulses, separating charges to generate the electric fields with less delay time. In another advance over previous work, the company has developed a full set of dye molecules for red, green, and blue.
To demonstrate the relative speed of the system, the group used it as a "telepresence" system similar to the holographic communications used in sci-fi movies like Star Wars—but much choppier. Multiple cameras recorded images of an employee at Nitto Denko; these images were processed to create the data to write each hogel, and sent to the group in Arizona, where the holographic display showed a 3-D projection of their California collaborator. "Now what we can display is like a slow movie," says Peyghambarian. To make a holographic video system, they'll need to increase the display's refresh speed to at least 30 frames per second.
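Using only the figures quoted above, a quick calculation shows how far the display still has to go to reach video rates.

    # How much faster must the hologram refresh to reach standard video rates?
    seconds_per_frame_now = 2.0     # one refresh every two seconds
    target_fps = 30
    print(target_fps * seconds_per_frame_now)   # a 60x speedup is still needed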
The university and Nitto Denko groups are working with Michael Bove at MIT on improving the fidelity of the images. "What they're reporting works beautifully, without a lot of computation," Bove says. In hopes of making the imagery clearer, Bove has developed a system to render holographic video very rapidly on an ordinary computer graphics chip.

Monday, November 8, 2010

New 3D Laptops

After Avatar, people became very interested in watching 3D movies, and this led directors to make many more 3D films. You can now see that many 3D movies are being released, and recently the IPL was even shown in 3D in theatres. This also prompted many television makers to build 3D televisions so that people can watch 3D movies and 3D TV shows at home. And now, after a surplus of 3D video games, the latest buzz in the world of technology is the 3D laptop.

          Although the idea of 3D laptops in India is still at an emerging stage, with not many players competing as of now, the way the television industry is betting on 3D TV leads technology lovers to believe that laptops with 3D features will soon be launched in the Indian market.
With the DTH players also making plans to gear up for the next generation of technology, the world of 3D is simply becoming more exciting. What's more, the 3D revolution is making inroads into the world of mobile phones as well. Some players have already launched their 3D laptops, realising that the economies of scale and product innovations seem feasible.
Most 3D laptops come with a 15.6-inch or even larger display. According to experts, smaller screens may take some more time to arrive.
A 3D laptop requires a display with a refresh rate of 120 Hz or higher for high-resolution transmission, a wireless transmitter, a very powerful processor, active shutter glasses, a high-end graphics card, and anywhere between 2GB and 4GB of memory. Polarising filters in the laptop convert 2D content to 3D, which requires polarised glasses for viewing.
With a 3D laptop you can do things like digital photography, capturing images and videos, and you can also convert existing videos and photos into 3D using software that converts your stills and footage from 2D to 3D. 3D laptops are also customisable.
Are you a game lover? If so, you can really enjoy playing 3D games on a 3D laptop. A keen gamer will find 3D games even more engaging than their 2D counterparts, because the 3D effects look real to the player and add to the fun. And since a 3D laptop is portable, you can carry it anywhere you like and enjoy watching 3D movies and playing 3D games on the go.

What The Names of The Days Really Mean

Sunday: is the Lord's day for Christians, yet it is named for the sun; it is the day that stands for the sun.

Monday: is named for the moon. The pronunciation is similar to Moon-day.

Tuesday: is not named for the number two. It is named for the Germanic war god Tiu. Tiu is also the name of one of the kings of ancient Egypt, a pharaoh in Lower Egypt.

Wednesday: is named after a Germanic god, this time the god of the sky, Woden. Woden was also referred to as Odin the Wanderer in some English-speaking parts of the world.

Thursday: is named for the Scandinavian god of thunder, Thor. Thor was the most famous of the Norse gods, and stories of him were told around Europe.

Friday: is the only day of the week named for a woman. Her name was Frigga, and she was the consort of Odin.

Saturday: is named after the Roman god Saturn. It is the only day of the week in the English language that retained its Roman character.
 

OLD ANTIQUE TYPEWRITERS

Most Strange Facts of Technology

Aircraft Carrier
An aircraft carrier gets about 6 inches per gallon of fuel.


Airplanes
The first United States coast to coast airplane flight occurred in 1911 and took 49 days.

A Boeing 747's wingspan is longer than the Wright brothers' first flight (120 ft).


Aluminum
The Chinese were using aluminum to make things as early as 300 AD. Western civilization didn't rediscover aluminum until 1827.

Automobile
George Selden received a patent in 1895 for the automobile. Four years later, he sold the rights for $200,000.


Coin Operated Machine
The first coin operated machine ever designed was a holy-water dispenser that required a five-drachma piece to operate. It was the brainchild of the Greek scientist Hero in the first century AD.

Compact Discs
Compact discs read from the inside to the outside edge, the reverse of how a record works.

Computers
ENIAC, the first electronic computer, appeared in 1946. The original ENIAC was about 80 feet long, weighed 30 tons, and had 17,000 tubes. By comparison, a desktop computer today can store a million times more information than an ENIAC and run 50,000 times faster.

From the smallest microprocessor to the biggest mainframe, the average American depends on over 264 computers per day.

The first "modern" computer (i.e., general-purpose and program-controlled) was built in 1941 by Konrad Zuse. Since there was a war going on, he applied to the German government for funding to build his machines for military use, but was turned down because the Germans did not expect the war to last beyond Christmas.

The computer was launched in 1943, more than 100 years after Charles Babbage designed the first programmable device. Babbage dropped his idea after he couldn't raise capital for it. In 1998, the Science Museum in London, UK, built a working replica of the Babbage machine, using the materials and work methods available at Babbage's time. It worked just as Babbage had intended.


Electric Chair
The electric chair was invented by a dentist, Alfred Southwick.

E-Mail
The first e-mail was sent over the Internet in 1972.

Eye Glasses
The Chinese invented eyeglasses. Marco Polo reported seeing many pairs worn by the Chinese as early as 1275, 500 years before lens grinding became an art in the West.

Glass
If hot water is suddenly poured into a glass, the glass is more apt to break if it is thick than if it is thin. This is why test tubes are made of thin glass.

Hard Hats
Construction workers' hard hats were first invented and used in the building of the Hoover Dam in 1933.

Hoover Dam
The Hoover Dam was built to last 2,000 years. The concrete in it will not even be fully cured for another 500 years.

Limelight
Limelight was how we lit the stage before electricity was invented. Basically, illumination was produced by heating blocks of lime until they glowed.

Mobile (Cellular) Phones
As much as 80% of microwaves from mobile phones are absorbed by your head.

Nuclear Power
Nuclear ships are basically steamships and driven by steam turbines. The reactor just develops heat to boil the water.

Oil
The amount of oil used worldwide in one year doubles every ten years. If that rate of increase continues, and if the world were nothing but oil, all the oil would be used up in 400 years.

Radio Waves
Radio waves travel so much faster than sound waves that a broadcast voice can be heard sooner 18,000 km away than in the back of the room in which it originated.
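The claim checks out with simple arithmetic; the sketch below uses standard values for the speed of light and the speed of sound in air.

    # How far does sound travel in the time a radio wave covers 18,000 km?
    SPEED_OF_LIGHT_KM_S = 300000.0
    SPEED_OF_SOUND_KM_S = 0.343            # in air, roughly

    t = 18000 / SPEED_OF_LIGHT_KM_S        # ~0.06 seconds for the broadcast signal
    print(t)
    print(SPEED_OF_SOUND_KM_S * t * 1000)  # sound covers only ~21 meters in that time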

Rickshaw
The rickshaw was invented by the Reverend Jonathan Scobie, an American Baptist minister living in Yokohama, Japan, who built the first model in 1869 to transport his invalid wife. Today it remains a common mode of transportation in the Orient.

Ships & Boats
The world's oldest surviving boat is a simple 10-foot-long dugout dated to 7400 BC. It was discovered in Pesse, in the Netherlands.

Rock drawings from the Red Sea site of Wadi Hammamat, dated to around 4000 BC, show that Egyptian boats were made from papyrus and reeds.

The world's earliest known plank-built ship, made from cedar and sycamore wood and dated to 2600 BC, was discovered next to the Great Pyramid in 1952.

The Egyptians created the first organized navy in 2300 BC.

Oar-powered ships were developed by the Sumerians in 3500 BC.

Sails were first used by the Phoenicians around 2000 BC.


Silicon Chip
A chip of silicon a quarter-inch square has the capacity of the original 1946 ENIAC computer, which occupied a city block.

Skyscraper
The term skyscraper was first used way back in 1888 to describe an 11-story building.

Sound
Sound travels 15 times faster through steel than through the air.

Telephones
There are more than 600 million telephone lines today, yet almost half the world's population has never made a phone call.

Television
Scottish inventor John Logie Baird gave the first public demonstration of television in 1926 in Soho, London. Ten years later there were only 100 TV sets in the world.

Traffic Lights
Traffic lights were used before the advent of the motorcar. In 1868, a lantern with red and green signals was used at a London intersection to control the flow of horse buggies and pedestrians.

Transistors
More than a billion transistors are manufactured... every second.

VCRs
The first VCR, made in 1956, was the size of a piano.

Windmill
The windmill originated in Iran in AD 644. It was used to grind grain.

World Trade Center
The World Trade Center towers were designed to collapse in a pancake-like fashion, instead of simply falling over on their sides. This design feature saved hundreds, perhaps thousands of lives on Sept. 11, 2001, when they were destroyed by terrorists.

Wireless Power Harvesting for Cell Phones

Nokia hopes to create a device that could harvest enough power to keep a cell phone topped up.

A cell phone that never needs recharging might sound too good to be true, but Nokia says it's developing technology that could draw enough power from ambient radio waves to keep a cell-phone handset topped up.
Ambient electromagnetic radiation--emitted from Wi-Fi transmitters, cell-phone antennas, TV masts, and other sources--could be converted into enough electrical current to keep a battery topped up, says Markku Rouvala, a researcher from the Nokia Research Centre, in Cambridge, U.K.
Rouvala says that his group is working towards a prototype that could harvest up to 50 milliwatts of power--enough to slowly recharge a phone that is switched off. He says current prototypes can harvest 3 to 5 milliwatts.
The Nokia device will work on the same principles as a crystal radio set or radio frequency identification (RFID) tag: by converting electromagnetic waves into an electrical signal. This requires two passive circuits. "Even if you are only getting microwatts, you can still harvest energy, provided your circuit is not using more power than it's receiving," Rouvala says.
To increase the amount of power that can be harvested and the range at which it works, Nokia is focusing on harvesting many different frequencies. "It needs a wideband receiver," says Rouvala, to capture signals from between 500 megahertz and 10 gigahertz--a range that encompasses many different radio communication signals.
Historically, energy-harvesting technologies have only been found in niche markets, powering wireless sensors and RFID tags in particular. If Nokia's claims stand up, then it could push energy harvesting into mainstream consumer devices.
Earlier this year, Joshua Smith at Intel and Alanson Sample at the University of Washington, in Seattle, developed a temperature-and-humidity sensor that draws its power from the signal emitted by a 1.0-megawatt TV antenna 4.1 kilometers away. This only involved generating 60 microwatts, however.
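That figure is roughly what a textbook free-space (Friis) estimate predicts. The sketch below treats the 1.0 megawatts as effective radiated power, assumes a UHF TV frequency of about 600 MHz and a simple receiving antenna, and ignores rectifier losses; all of those are my assumptions, not details from the article.

    import math

    # Friis free-space estimate of the power available from a distant transmitter.
    def received_power_w(erp_w, freq_hz, distance_m, rx_gain=1.0):
        wavelength = 3e8 / freq_hz
        return erp_w * rx_gain * (wavelength / (4 * math.pi * distance_m)) ** 2

    p = received_power_w(1e6, 600e6, 4100)
    print(p * 1e6)   # ~94 microwatts -- the same order as the 60 uW reported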
Smith says that 50 milliwatts could require around 1,000 strong signals and that an antenna capable of picking up such a wide range of frequencies would cause efficiency losses along the way.
"To get 50 milliwatts seems like a lot," adds Harry Ostaffe, head of marketing for Pittsburgh-based company Powercast, which sells a system for recharging sensors from about 15 meters away with a dedicated radio signal.
Steve Beeby, an engineer and physicist at the University of Southampton, U.K., who has researched harvesting vibrational energy, adds, "If they can get 50 milliwatts out of ambient RF, that would put me out of business." He says that the potential could be huge because MP3 players typically use only about 100 milliwatts of power and spend most of their time in lower-power mode.
Nokia is being cagey with the details of the project, but Rouvala is confident about its future: "I would say it is possible to put this into a product within three to four years." Ultimately, though, he says that Nokia plans to use the technology in conjunction with other energy-harvesting approaches, such as solar cells embedded into the outer casing of the handset.

Nanogenerator Powers Up

A device containing piezoelectric nanowires can now scavenge enough energy to power small electronic devices
Power flex: This material contains piezoelectric nanowires. When flexed, it produces enough power to drive a liquid-crystal display.

 
Devices that harvest wasted mechanical energy could make many new advances possible—including clothing that recharges personal electronics with body movements, or implants that tap the motion of blood or organs. But making energy-harvesting devices that are compact, flexible, and, above all, efficient remains a big challenge. Now researchers at Georgia Tech have made the first nanowire-based generators that can harvest sufficient mechanical energy to power small devices, including light-emitting diodes and a liquid-crystal display.
The generators take advantage of materials that exhibit a property called piezoelectricity. When a piezoelectric material is stressed, it can drive an electrical current (applying a current has the reverse effect, making the material flex). Piezoelectrics are already used in microphones, sensors, clocks, and other devices, but efforts to harvest biomechanical energy using them have been stymied by the fact that they are typically rigid. Piezoelectric polymers do exist, but they aren't very efficient.
Zhong Lin Wang, who directs the Center for Nanostructure Characterization at Georgia Tech, has been working on another approach: embedding tiny piezoelectric nanowires in flexible materials. Wang was the first to demonstrate the piezoelectric effect at the nanoscale in 2005; since then he has developed increasingly sophisticated nanowire generators and used them to harvest all sorts of biomechanical energy, including the movement of a running hamster. But until recently, Wang hadn't developed anything capable of harvesting enough power to actually run a device.
In a paper published online last week in the journal Nano Letters, Wang's group describes using a nanogenerator containing more nanowires, over a larger area, to drive a small liquid crystal display.
To make the generator, Wang's team dripped a solution containing zinc-oxide nanowires onto a thin metal electrode sitting on a sheet of plastic, creating several layers of the wires. They then covered the material with a polymer and topped it with an electrode. The resulting device is about 1.5 by two centimeters and, when compressed 4 percent every second, it produces about two volts, enough to drive a liquid-crystal display taken from a calculator. "We were generating 50 millivolts in the past, so this is an enhancement of about 20 times," says Wang.

In a paper published in Nano Letters this summer, Wang demonstrated a nanogenerator capable of producing 11 milliwatts per cubic centimeter—enough to light up an LED. Wang notes that a pacemaker requires 5 milliwatts to run, an iPod 80 milliwatts. "We're almost there," he says.
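Taking the quoted power density at face value, a rough calculation gives a feel for the sizes involved; the assumption that output scales linearly with generator volume is a simplification of mine.

    # How much generator volume would the quoted loads need at 11 mW per cubic cm?
    POWER_DENSITY_MW_PER_CM3 = 11.0

    def volume_needed_cm3(load_mw):
        return load_mw / POWER_DENSITY_MW_PER_CM3

    print(volume_needed_cm3(5))    # pacemaker: ~0.45 cm^3
    print(volume_needed_cm3(80))   # iPod: ~7.3 cm^3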
The devices made by the Georgia Tech group are "getting into the realm where the power output is reasonable," says Michael McAlpine, professor of mechanical engineering at Princeton University and a 2010 TR35 awardee. "Getting impressive power outputs is a matter of scaling up," he adds.
Both Wang and McAlpine are looking to more efficient materials for making nanogenerators. Both have recently demonstrated making nanowires from PZT, a crystalline material that is standard in commercial piezoelectric devices. PZT, a compound that contains lead, zirconium, and titanium, is the most efficient piezoelectric material known, but making it into nanowires has been tricky because there are no good catalysts for growing PZT nanowires.
Wang and McAlpine have found different solutions to this problem. Wang treats his starting solution at high temperature and pressure, which does away with the need for an efficient catalyst. McAlpine grows a flat film of PZT, and then uses a mask to pattern nanowires through chemical etching. Energy harvesters made from PZT nanowires aren't as efficient as the zinc-oxide ones yet, but McAlpine says this is because he and Wang have only just begun to work with them.

A Hybrid Underwater Robot

Combining the best characteristics of earlier systems, researchers have built a new type of autonomous deep-ocean explorer that could revolutionize marine biology.

All wet: Tethys is a new autonomous underwater vehicle developed by the Monterey Bay Aquarium Research Institute in California. It is being tested in Monterey Bay (top), and in a tank at the institute (bottom).

A new type of underwater robot could be better at tracking marine organisms and measuring the physical and chemical properties of the ocean than previous robot designs. The vehicle, called Tethys and developed by the Monterey Bay Aquarium Research Institute (MBARI) in California, compensates for the shortcomings of current robots by merging their best qualities into one unit.
For decades, researchers have used underwater vehicles to study the biological processes and physical characteristics of the ocean. But such work has been constrained because there were only two types of underwater robots: gliders and propeller-driven vehicles. A glider drifts very slowly through the ocean, using a buoyancy system for propulsion. Its low speed makes it vulnerable to tides and currents, which can knock it off course. It also has a small payload capacity, but high endurance, so it can remain at sea for months at a time. In contrast, propeller-driven vehicles can zoom through the ocean like torpedoes. They can be up to 10 times the size of gliders, but they can remain at sea only for about 24 hours.
Tethys combines the speed of propeller-driven systems with the range and duration of gliders to create a new kind of robot. It uses a new propeller and body design to travel about four times as fast as a glider, but slightly slower than the cruising speed of high-powered vehicles. Tethys also has an efficient power management system, so it can spend many weeks to months in the ocean while carrying a large payload of sophisticated instrumentation.
"To understand the biological processes in the ocean, which change very quickly, you need a flexible system," says James Bellingham, chief technologist at MBARI and project lead for Tethys. Sometimes an interesting area is far offshore, so you need a vehicle that can get there quickly and then remain, slowly following organisms for an extended period of time, he says.
The new underwater robot fills a void in the commercial market and in oceanographic research, says David Kelly, CEO of Bluefin Robotics, a company based in Cambridge, Massachusetts, that designs and develops autonomous underwater vehicles. Others have experimented with hybrids—including the company iRobot—but Tethys is the first fully developed vehicle. Kelly says his company would be interested in using the technology.
Tethys is about two meters long and weighs 110 kilograms. MBARI researchers designed the vehicle's tube-like shape to minimize drag and optimize propulsion. The design also allows the researchers to lengthen or shorten the body to accommodate a range of payload sizes. The researchers increased the robot's performance over previous designs by building a propeller that works at two speeds: one meter per second and half a meter per second. To make Tethys consume little power, the MBARI researchers custom-built most of the onboard electronics. They also built a system that monitors the instrumentation and switches devices off during the fractions of a second when they are not in use.
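The power-gating idea is simple to sketch. The loop below is only an illustration of sub-second duty cycling, not MBARI's actual firmware, and the 5 percent duty cycle is an assumed figure.

    import time

    # Illustrative duty-cycling loop: keep an instrument powered only for the
    # fraction of each cycle in which it is actually sampling.
    class Instrument:
        def power_on(self):  print("instrument on")
        def sample(self):    return 42.0          # stand-in for a real reading
        def power_off(self): print("instrument off")

    ACTIVE_S, CYCLE_S = 0.05, 1.0                 # assumed 5% duty cycle

    sensor = Instrument()
    for _ in range(3):
        sensor.power_on()
        reading = sensor.sample()
        sensor.power_off()
        time.sleep(CYCLE_S - ACTIVE_S)            # powered down ~95% of the time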
Using Tethys is like putting a laboratory in the ocean, says Bellingham. During its autonomous journeys through the water, it occasionally surfaces to send data back to the researchers via satellite. MBARI researchers have tested the robot in California's Monterey Bay, tracking algal blooms and phytoplankton, and measuring the physical and chemical properties of the surrounding water. Eventually, the system could do much more, says Eric D'Asaro, a professor of oceanography at the University of Washington. "This could be the first vehicle to take sample materials in the water and bring them back to the lab," he says.

Sunday, November 7, 2010

A New Way to Make Stem Cells

New cell: This microscope image shows neurons (colored green) created from pluripotent stem cells using modified RNA.


A Harvard researcher has developed a way to make pluripotent stem cells that solves several of the major impediments to using them to treat human diseases.
Derrick Rossi, an assistant professor at Harvard Medical School, created pluripotent stem cells--which can turn into virtually any other type of cell in the body--from non-stem cells without using viruses to tinker with a cell's genome, as conventional methods do. This means that Rossi's method could be substantially safer for treating disease. The work is published today in the journal Cell Stem Cell.
"Rossi has figured out how to turn a skin cell into a stem cell without genetic modifications, and to do it efficiently," said Doug Melton, codirector of the Harvard Stem Cell Institute, where Rossi is a principal faculty member, at a press conference.
Rossi's innovation, which has not yet been tested in people, was to use messenger RNA instead of DNA to produce the four proteins needed to reprogram the cell. He has started a company called ModeRNA to commercialize this use of messenger RNA. He said the approach may also have potential in gene therapy, which also relies on viruses to deliver treatment, but he declined to talk further about the company or possible gene therapy applications because the work is at such an early stage.
Improving the usability of man-made stem cells is key to helping patients and ending the political morass that has slowed stem-cell research. On Tuesday, a U.S. federal appeals court allowed federal funding of embryonic stem-cell research to continue while a legal case against such funding proceeds.
The human embryonic stem cells used in research were mostly derived from embryonic tissue grown a decade ago. These are the most versatile cells in the body, and the gold standard by which man-made cells are judged.
Four years ago, Japanese researcher Shinya Yamanaka showed that regular cells could be turned into embryonic-like stem cells--called induced pluripotent stem (iPS) cells--through the introduction of four specific proteins. Theoretically, this meant that doctors could take skin cells from a sick or disabled person, transform them into stem cells, and then into a specialized cell to treat them--an insulin-producing islet cell for someone with diabetes or a nerve cell for someone who is paralyzed, for instance. Using iPS cells avoids the need to destroy embryos and, because they can be derived from the patient's own cells, means less risk of rejection.
But only one in 1,000 or one in 10,000 skin cells could be transformed into a stem cell using Yamanaka's method. It also changes a cell's genes in ways that might trigger cancer or other problems.
Rossi's idea was to produce Yamanaka's four proteins in a different way. Instead of using the DNA that holds the instructions for making proteins, he wanted to use RNA, which carries those instructions to the place in a cell where proteins are made.
His first several attempts were miserable failures. When he tried to change the messages the RNA carried, he triggered a serious immune response and most of the cells shut down or self-destructed. Rossi then tried modifying the RNA chemically and eventually figured out a way to allow his changes to escape immune detection while delivering the message. "This was key to our success," said Rossi, who is also a researcher at Children's Hospital Boston. "We could encode RNA for any protein we wanted to express and insert it into a cell."

Rossi said it was a happy coincidence that using RNA instead of changing the DNA was as much as 100-fold more efficient. He said the effect was possibly because the process more closely reflects how cells themselves transform.
Rossi successfully differentiated his stem cells into muscle cells using RNA, a process that may offer promise in gene therapy and other treatments. His method does not alter the cell's underlying genome, though Rossi admits that he does not yet understand what it does to the cell's epigenome, which controls expression of genes.
Rossi said that his cells, which he's named RiPS, for "RNA induced Pluripotent Stem" cells, are more like embryonic stem cells than traditional iPS cells because they have not been genetically altered.
Melton said the Harvard Stem Cell Institute, which includes several hundred stem-cell researchers from across Harvard University and its affiliated hospitals, will now be making its standard iPS cells with Rossi's method.
In a prepared statement, Yamanaka, now at the University of California, San Francisco, said Rossi's approach to generating stem cells seems promising, and he would like to have someone in his lab try it.
"The quality of the induced pluripotent stem cells generated by this method should be carefully examined because their characteristics vary depending on the induction methods and the origins of the resulting cells," he said. "The standard method to generate iPSCs for clinical applications has yet to be established. I think this method has the potential for it."
Jacob Hanna, a postdoctoral fellow at the Whitehead Institute for Biomedical Research in Cambridge, said he's also eager to begin working with the cells.
"I think it's a very exciting paper with a very promising method," said Hanna, who was not involved in the research. When asked if he was jealous that Rossi had developed the method first, Hanna said, "Yes, of course! It's a very nice paper," quickly adding, "Jealous in a very positive and supporting way."

Feeding the Bandwidth Beast


New cellular modems will provide broadband Internet connections with speeds that can rival those of some wired networks. 

Trying to meet the skyrocketing demand fueled by smart phones and other mobile devices, wireless service providers are striving to introduce new infrastructure while bolstering existing networks.
The new networks being rolled out come in two flavors: WiMax and Long Term Evolution (LTE). The two use similar tricks to allow significant bandwidth increases over the data links used today.
 

Saturday, November 6, 2010

A Cell-Phone Network without a License

A trial system offers calling, texting, and data by weaving signals around the chatter of baby monitors and cordless phones.
 No license required: This handset may look a little clunky, but the tech inside can steer signals through unlicensed airwaves for a novel kind of cell connection. Such technology could be built into commercial handsets.
Credit: xG Technology

A trial cell-phone network in Fort Lauderdale, Florida, gets by without something every other wireless carrier needs: its own chunk of the airwaves. Instead, xG Technology, which made the network, uses base stations and handsets of its own design that steer signals through the unrestricted 900-megahertz band used by cordless phones and other short-range devices.
It's a technique called "cognitive" radio, and it has the potential to make efficient use of an increasingly limited resource: the wireless spectrum. By demonstrating the first cellular network that uses the technique, xG hopes to show that it could help wireless carriers facing growing demand but a relatively fixed supply of spectrum.
Its cognitive radios are built into both the base stations of the trial network, dubbed xMax, and handsets made for it. Every radio scans for clear spectrum 33 times a second. If another signal is detected, the handset and base station retune to avoid the other signal, keeping the connection alive. Each of the six base stations in xG's network can serve devices in a 2.5-mile radius, comparable to an average cell-phone tower.
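That behaviour can be sketched as a simple sense-and-hop loop. The channel grid, energy threshold, and selection logic below are invented for illustration and are not xG's actual algorithm; only the 900 MHz band and the 33-scans-per-second rate come from the article.

    import random
    import time

    # Illustrative cognitive-radio loop: sense the band ~33 times per second and
    # retune whenever the current channel shows interference.
    CHANNELS_MHZ = [902 + 0.5 * i for i in range(52)]   # hypothetical 900 MHz grid
    SCAN_RATE_HZ = 33
    THRESHOLD = 0.15                                    # assumed energy threshold

    def sense_energy(channel_mhz):
        return random.random()          # stand-in for real spectrum sensing

    def pick_clear_channel():
        readings = {ch: sense_energy(ch) for ch in CHANNELS_MHZ}
        clear = [ch for ch, level in readings.items() if level < THRESHOLD]
        return random.choice(clear) if clear else min(readings, key=readings.get)

    current = pick_clear_channel()
    for _ in range(100):                                # ~3 seconds of operation
        if sense_energy(current) > THRESHOLD:           # interferer detected
            current = pick_clear_channel()              # retune to a clear channel
        time.sleep(1 / SCAN_RATE_HZ)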
"In Fort Lauderdale, our network covers an urban area with around 110,000 people, and so we're seeing wireless security cameras, baby monitors, and cordless phones all using that band," says Rick Rotondo, a vice president with xG, which is headquartered in Sarasota, Florida. "Because our radios are so agile, though, we can deliver the experience of a licensed cellular network in that unlicensed band."
While most radios can only use frequencies that are completely clear, xG's radios can unlock more free space by analyzing channels whose use varies over time, Rotondo says. Signals can then be inserted in between bursts of activity from a device using that channel.
"Where a more conventional radio would see a wall of signals, we are able to put our packets in between them and move around between those gaps," he explains. "Using that method, we find that even in an urban area, the 900-megahertz band is really only around 15 percent occupied at any time."
The company recently won a contract to install an xMax network to cover a large chunk of the U.S. Army's Fort Bliss training base in New Mexico. "They're interested in the possibility of one day being able to create cellular networks for use on their bases for everything we use cell networks for: voice, texting, e-mail, and data access," Rotondo says, "or rapidly deploying a version on the battlefield."
Craig Mathias, an analyst with the Farpoint Group, which specializes in the wireless industry, has inspected the Fort Lauderdale network. "It really is just like using a regular cellular system, even though the technology is so different," he says.
The potential for cognitive radio to make better use of spectrum has motivated many companies and academic labs to work on the technology in recent years, says Mathias. "The real advance of xG's system is that it can be deployed in exactly the same way as a conventional cell-phone network," he says. But exactly how xG will bring the technology to market is unclear. "One option may be for a carrier to use this in an area or market where they don't have spectrum, or to serve rural areas without coverage."
Rotondo says that xG wants to offer its approach as a complement to existing networks. "We are interested in having devices able to dynamically access different areas of spectrum--both licensed and unlicensed," he says. Wireless carriers like AT&T are turning to Wi-Fi hot spots to offload some of the load on their licensed spectrum, he points out. Being able to have devices switch to the 900-megahertz band at times of high load could be an attractive option, because it can perform much more like a cell network. The radios developed by xG could be built into commercial phone handsets, says Rotondo.
Alternatively, the system could augment emerging networks that operate in the unlicensed "white spaces" recently freed up by the end of analog TV broadcasts, Rotondo says. A recent study by University of California-Berkeley academics revealed how the density of TV stations in metropolitan areas could reduce the availability of white spaces in such areas.