
Solar plane lands in New York City

A solar-powered airplane completed its crossing of the United States on Saturday, landing in New York City after flying over the Statue of Liberty during its historic bid to circle the globe, the project team said. The spindly, single-seat experimental aircraft, dubbed Solar Impulse 2, arrived at New York's John F. Kennedy International Airport at about 4 a.m. local time, roughly five hours after taking off from Lehigh Valley International Airport in Pennsylvania, the team reported on the airplane's website. "Such a pleasure to land in New York! For the 14th time we celebrate sustainability," said the project's co-founder Andre Borschberg on Twitter after flying over the city and the Statue of Liberty during the 14th leg of the trip around the globe. The Swiss team flying the aircraft in a campaign to build support for clean-energy technologies hopes eventually to complete its circumnavigation in Abu Dhabi, where the journey began in March 2015. The solar cr...

Can robots reach human-level intelligence?


By 2050, some experts believe, machines will have reached human-level intelligence. Thanks in part to a new era of machine learning, computers are already learning from raw data in the same way a human infant learns from the world around her. It means we are getting machines that can, for example, teach themselves how to play computer games and get incredibly good at them (work ongoing at Google's DeepMind), and devices that can start to communicate in human-like speech, such as voice assistants on smartphones. Computers are beginning to understand the world outside of bits and bytes.

First as a PhD student and later as director of the computer vision lab at Stanford University, Fei-Fei Li has pursued this painstakingly difficult goal, with the aim of ultimately creating electronic eyes that let robots and machines see and, more importantly, understand their environment. Half of all human brainpower goes into visual processing, even though it is something we all do without apparent effort. "No one tells a child how to see, especially in the early years. They learn this through real-world experiences and examples," said Ms Li in a talk at the 2015 Technology, Entertainment and Design (Ted) conference.


"If you consider a child's eyes as a pair of biological cameras, they 
Image result for The search for a thinking machinetake one picture about every 200 milliseconds, the average time an eye movement is made. So by age three, a child would have seen hundreds of millions of pictures of the real world. That's a lot of training examples," she added. She decided to teach computers in a similar way. "Instead of focusing solely on better and better algorithms, my insight was to give the algorithms the kind of training data that a child is given through experiences in both quantity and quality."Back in 2007, Ms Li and a colleague set about the mammoth task of sorting and labelling a billion diverse and random images from the internet to offer examples of the real world for the computer - the theory being that if the machine saw enough pictures of something, a cat for example, it would be able to recognise it in real life. They used crowdsourcing platforms such as Amazon's Mechanical Turk, calling on 50,000 workers from 167 countries to help label millions of random images of cats, planes and people.

Eventually they built ImageNet - a database of 15 million images across 22,000 classes of objects organised by everyday English words. It has become an invaluable resource used across the world by research scientists attempting to give computers vision. Each year Stanford runs a competition, inviting the likes of Google, Microsoft and Chinese tech giant Baidu to test how well their systems can perform using ImageNet. In the last few years they have got remarkably good at recognising images - with around a 5% error rate. To teach the computer to recognise images, Ms Li and her team used neural networks, computer programs assembled from artificial brain cells that learn and behave in a remarkably similar way to human brains.
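The "error rate" mentioned above is simply the fraction of test images a system labels incorrectly. A minimal sketch of the idea, using made-up labels rather than real ImageNet data:

```python
# Hypothetical true and predicted labels for five test images.
true_labels      = ["cat", "plane", "cat", "person", "plane"]
predicted_labels = ["cat", "plane", "dog", "person", "plane"]

# Count mismatches and divide by the number of images.
errors = sum(t != p for t, p in zip(true_labels, predicted_labels))
error_rate = errors / len(true_labels)
print(error_rate)  # 0.2 - one mistake in five, a 20% error rate
```

A 5% error rate means the system mislabels roughly one image in twenty, which is close to human performance on the same benchmark.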

A neural network dedicated to interpreting pictures has anything from a few dozen to hundreds, thousands, or even millions of artificial neurons arranged in a series of layers. Each layer recognises different elements of the picture - one responds to the raw pixels, another picks out differences in the colours, a third determines the shape, and so on. By the time the signal reaches the top layer - and today's neural networks can contain up to 30 layers - the network can make a pretty good guess at identifying the image.
