IBM has released tools for developers to plug its Watson artificial intelligence into any device or thing, from robots to walls.
Its latest effort to get Watson in the hands of developers is Project Intu, an experimental framework to help developers equip devices more easily with Watson's conversation, language and visual-recognition services.
IBM wants developers to use Intu to add so-called cognitive capabilities to machines, such as getting them to understand questions and gestures, or to monitor the weather and the time of day.
The company has released software development kits for Intu on GitHub and its Intu gateway, offering the tools to build Watson features for any platform, including Raspberry Pi, Windows, Linux and macOS. Developers will need an account with IBM's Bluemix cloud to experiment with the tools.
"Intu is an architecture that enables Watson services in devices that perceive by vision, audition, olfaction, sonar, infrared energy, temperature, and vibration. Intu-enabled devices express themselves and interact with their environments, other devices, and people through speakers, actuators, gestures, scent emitters, lights, navigation, and more," IBM explains on its GitHub page.
The project is aimed at all manner of developers, whether they're hacking together a Raspberry Pi with various sensors to create robots, or businesses exploring bots for customer service.
IBM says Intu is a more advanced development platform than Node-RED, an IBM-developed flow editor that enables drag-and-drop programming for IoT devices.
"Project Intu has the ability to react to ever-changing situations and acting out the best response, whereas traditional orchestrations, think Node-Red, are optimal for command-to-action scenarios," IBM Watson exec Shantenu Agarwal said.
"When it comes to scale of both the actions and the use of the services, Project Intu's extensibility enables you to quickly add new behaviors with minimum coding rather than having to manage an ever-growing logic tree."