The Meta glasses will be a wearable 3D display and computer that can be controlled using gestures or voice.
The glasses, being developed by startup Meta, will pair a 3D headset with a depth-tracking camera that can place objects in 3D space, allowing it to track hand movements in a manner similar to Microsoft's Kinect sensor.
Meta is designed to provide an augmented reality or head-up display, meaning users looking through the glasses see virtual objects on the display, giving the effect of overlaying digital objects and information onto the real world.
Unlike Google's augmented reality headset Glass, Meta will offer a 3D image. That 3D capability will allow Meta to be used for playing 3D games, or for overlaying 3D virtual objects in the user's view, which the headset's creators anticipate could have applications in architecture, engineering, medicine, film and other industries.
The Meta headset is available to order as part of a developer's kit for $750, which is due to ship in January next year. The headset that ships with the developer's kit has a resolution of 960x540 per TFT LCD screen and needs to be tethered to a Windows computer to function. On the sensor side, it includes a 720p RGB camera and 320x240 infra-red depth imaging, as well as an accelerometer, gyroscope and compass to track movement with nine degrees of freedom. Meta plans for the consumer version of the display, due to be released at a later date, to work as a standalone device.
The developer's kit also ships with various software, including a chess game, 3D sculpting software, MetaCraft (a Minecraft simulator), and a Unity 3D game engine framework for managing gestures and tracking control.
According to an interview with Meta's founders, the company plans to model itself on Apple, selling its own hardware and operating system and working with app developers to build out an ecosystem.
The Kickstarter project raised more than $190,000, almost double its $100,000 goal.
While low-cost computers like the Raspberry Pi are becoming more common, budget massively multi-core boards aimed at supercomputing are relatively rare.
Adapteva has created the Parallella, a Raspberry Pi-like board for parallel programming: the 16- or 64-core Epiphany RISC chip on the board provides a cheap parallel programming environment for developers to experiment with.
The 16-core board is only the first step; the ultimate goal of the project is to create PCIe boards with multiple 1024-core chips and 2048 GFLOPS of double-precision performance per chip.
As engineering challenges force chipmakers like Intel and AMD to increase processor performance by piling in more cores rather than ramping up clock speeds, getting more developers used to designing programs that run in parallel across multiple cores is becoming increasingly important.
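The shift described above, from making one core faster to splitting work across many cores, can be sketched in plain Python. This is a generic illustration using the standard multiprocessing module, not Adapteva's Epiphany toolchain; the function names are illustrative:

```python
# A minimal sketch of parallel decomposition: split a workload into
# independent chunks and farm them out to worker processes, one per core.
from multiprocessing import Pool


def split_chunks(data, workers):
    # Divide the input into roughly equal slices, one per worker.
    size = (len(data) + workers - 1) // workers
    return [data[i:i + size] for i in range(0, len(data), size)]


def partial_sum(chunk):
    # Each worker computes the sum of squares for its own slice.
    return sum(x * x for x in chunk)


def parallel_sum_of_squares(data, workers=2):
    # Map the chunks across a pool of processes, then combine the results.
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, split_chunks(data, workers)))


if __name__ == "__main__":
    print(parallel_sum_of_squares(list(range(1000))))  # -> 332833500
```

The key design point is that the chunks are independent: no worker needs another worker's intermediate result, so the work scales naturally as cores are added.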
Each Parallella board pairs a dual-core ARM A9 processor with a 16- or 64-core Epiphany Multicore Accelerator chip along with 1GB of RAM, a MicroSD card, two USB 2.0 ports, 10/100/1000 Ethernet and an HDMI connection. In addition to the hardware, each board will ship with a set of open-source development tools for the Epiphany chip and the Ubuntu operating system.
The low-power boards should consume around 5W and theoretically deliver around 45GHz of equivalent compute performance if all the chips on the board are maxed out.
Adapteva was looking for $750,000 in Kickstarter funding but raised $898,921. The 16-core boards are available for $99 through Adapteva's website and are expected to start shipping next month. 64-core boards will be available at a later date.
Teaching computers to recognise objects is getting even easier, thanks to devices like the Arduino Pixy.
The Pixy is a fast vision sensor that can be taught to find objects in the real world and report its findings via simple interfaces. The device is a small camera board, about half the size of a business card, that handles image recognition onboard and can be connected to an Arduino.
Teaching Pixy to identify objects requires users to place an item in front of the Pixy and press the button on top of the device. Pixy then generates a statistical model of the colours of the object that it stores in flash and later uses to identify the item.
Software in the Pixy detects objects using a hue-based colour filtering algorithm. Pixy calculates the hue and saturation of each RGB pixel from the image sensor and uses these as the primary filtering parameters for detecting objects. The hue of an object remains largely unchanged with changes in lighting and exposure, making it an effective way to detect objects.
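Pixy's firmware is not reproduced here, but the hue-based filtering idea described above can be sketched as follows. The threshold values and function names are illustrative assumptions, not Pixy's actual parameters:

```python
# Sketch of hue-based colour filtering: convert each RGB pixel to hue and
# saturation, then match against a taught hue. Hue is stable under lighting
# and exposure changes, which is why it works as the primary filter.
import colorsys


def hue_sat(r, g, b):
    # Convert 0-255 RGB to hue and saturation, each in the range 0.0-1.0.
    h, s, _v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h, s


def matches_signature(pixel, taught_hue, hue_tol=0.05, sat_min=0.3):
    # A pixel matches if its hue is close to the taught hue and it is
    # saturated enough to carry reliable colour (greys are rejected).
    h, s = hue_sat(*pixel)
    # Hue wraps around at 1.0, so measure the difference on the circle.
    diff = min(abs(h - taught_hue), 1.0 - abs(h - taught_hue))
    return s >= sat_min and diff <= hue_tol
```

Teaching an object then amounts to recording its dominant hue (the "signature" stored in flash), and detection is just running this test over every pixel in a frame.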
The makers of Pixy claim it can identify hundreds of objects in a scene at a time, using its connected components algorithm, and then report back on each object's size and location through one of its interfaces.
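The connected-components step, grouping matching pixels into distinct objects and reporting each one's size and location, can be sketched as a standard 4-connected labelling pass over a binary mask. This is a generic textbook version, not Pixy's actual implementation:

```python
# Connected-components labelling: flood-fill each unlabelled True pixel's
# region with BFS, recording a bounding box per region. The bounding boxes
# give roughly the per-object size and location the sensor reports.
from collections import deque


def connected_components(mask):
    rows, cols = len(mask), len(mask[0])
    labels = [[0] * cols for _ in range(rows)]
    boxes = []  # one (min_row, min_col, max_row, max_col) per component
    next_label = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and labels[r][c] == 0:
                next_label += 1
                min_r = max_r = r
                min_c = max_c = c
                labels[r][c] = next_label
                queue = deque([(r, c)])
                while queue:
                    y, x = queue.popleft()
                    min_r, max_r = min(min_r, y), max(max_r, y)
                    min_c, max_c = min(min_c, x), max(max_c, x)
                    # Visit the four直 neighbours: up, down, left, right.
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and labels[ny][nx] == 0):
                            labels[ny][nx] = next_label
                            queue.append((ny, nx))
                boxes.append((min_r, min_c, max_r, max_c))
    return next_label, boxes
```

Running this over the mask produced by the colour filter yields one labelled blob per detected object, each with a bounding box from which size and position follow directly.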
The device consists of an OmniVision OV9715 0.25-inch image sensor with 1280x800 resolution and an NXP LPC4330 microcontroller with a dual-core ARM processor, which can process images at 50 frames per second. The Pixy has interfaces for UART serial, SPI, I2C, and digital and analogue I/O.
An application called PixyMon allows users to see what Pixy sees, outputting either raw or processed video, and allowing users to configure Pixy, set the output port and manage colour signatures.
The Pixy is a joint development between Carnegie Mellon University and Texas-based Charmed Labs.
The Kickstarter campaign for Pixy raised more than ten times its target of $25,000, and the first 3,500 Pixy cameras are expected to ship on 14 January next year. The camera is still available to order for $75, including shipping.