The Teachable Machine is an effort by Google to make Machine Learning
and AI accessible to the wider public, without requiring any specialized
training, knowledge of Computer Science or coding skills.
The site, https://teachablemachine.withgoogle.com/, reflects the current trend towards the personalization of AI: shifting the algorithms from the Cloud to the user's own space, be it their desktop, their phone or another smart device.
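To make the idea concrete, here is a minimal sketch, in TensorFlow.js, of how such in-browser teaching can work: a pre-trained MobileNet supplies image embeddings and a simple k-nearest-neighbour classifier learns from examples the user supplies on the spot. This is purely illustrative and is not Teachable Machine's actual implementation; the function and variable names are our own.

```typescript
// Minimal sketch of Teachable-Machine-style learning in the browser:
// a pre-trained MobileNet turns webcam frames into embeddings, and a
// k-nearest-neighbour classifier is taught on the user's own examples.
// Illustrative only; not Teachable Machine's actual source code.
import * as mobilenet from '@tensorflow-models/mobilenet';
import * as knnClassifier from '@tensorflow-models/knn-classifier';

async function teachableDemo(video: HTMLVideoElement) {
  const net = await mobilenet.load();          // pre-trained feature extractor
  const classifier = knnClassifier.create();   // lightweight local classifier

  // "Teach" a class: embed the current webcam frame and store it locally.
  const teach = (classId: number) => {
    const embedding = net.infer(video, true);  // embedding, not class scores
    classifier.addExample(embedding, classId);
  };

  // Classify the current frame against the user-taught classes.
  const guess = async () => {
    const result = await classifier.predictClass(net.infer(video, true));
    console.log(result.classIndex, result.confidences);
  };

  return { teach, guess };
}
```

Everything, including the "training", happens inside the browser tab; no images ever leave the machine.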
Moving AI out of the Cloud is not straightforward, though; the real issue is that training models under the common supervised learning paradigm requires massive datasets and excessive amounts of CPU power.
So, as things currently stand, the bulk of the processing is done in the Cloud by Platform-as-a-Service providers that offer Machine Learning as plug-and-play APIs wrapping the necessary pre-trained algorithms, with offerings that include tone analysis, visual recognition and conversation analysis. Prevalent examples of such PaaS are Haven OnDemand and IBM's Watson/BlueMix.
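To show what "plug and play" amounts to in practice, the sketch below posts an image to a visual-recognition endpoint and reads back the predicted labels. The URL, authorization header and response shape are hypothetical placeholders, not any particular vendor's real API.

```typescript
// Hypothetical "plug and play" cloud ML call: the model lives on the
// provider's servers, the client only ships data and reads predictions.
// Endpoint, headers and response shape are illustrative placeholders.
async function recognizeImage(imageBytes: Blob): Promise<string[]> {
  const response = await fetch('https://ml.example-paas.com/v1/visual-recognition', {
    method: 'POST',
    headers: {
      'Authorization': 'Bearer <API_KEY>',
      'Content-Type': 'application/octet-stream',
    },
    body: imageBytes,
  });
  if (!response.ok) throw new Error(`PaaS call failed: ${response.status}`);
  const result = await response.json() as { labels: { name: string; score: number }[] };
  return result.labels.map(l => l.name); // e.g. ["cat", "sofa"]
}
```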
But the tide seems to be turning. Back in August 2016, in AI Linux, we identified a divergence from this trend:
Things seem to be shifting though,
with those elaborate algorithms looking to move on to run locally on
mobile devices. That includes their training too; the pictures, notes,
data and metadata that reside in the device and which are going to be
worked upon, will also serve to train the network and aid its learning
activities such as the recognizing, ranking and classifying of objects.
The difference is that now all of that is going to happen locally.
Qualcomm's Snapdragon 820 processors
and their accompanying Snapdragon Neural Processing Engine SDK are
behind such a move which would allow manufacturers to run their own
neural network models on Snapdragon powered devices, such as smart
phones, security cameras, automobiles and drones, all without a
connection to the cloud. Common deep learning user experiences that
could be realized with the SDK, would be scene detection, text
recognition, object tracking and avoidance, gesturing, face recognition
and natural language processing.
So instead of the ML algorithms being bred on the cloud, satisfying
their hunger with user-collected data, the alternative idea is to shift
both the algorithms as well as their training offline and onto the source
generating the data in the first place.

Full article on i-programmer.info
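As a toy illustration of that idea, the sketch below fits a small classifier with TensorFlow.js on feature vectors that stay on the device. It is our own example of local training under assumed inputs, not the Snapdragon Neural Processing Engine SDK, whose API we do not reproduce here.

```typescript
// Toy illustration of on-device training: a small dense classifier is
// fitted on feature vectors that never leave the device. This is not the
// Snapdragon NPE SDK; it only shows the "train where the data lives" idea.
import * as tf from '@tensorflow/tfjs';

async function trainLocally(features: number[][], labels: number[], numClasses: number) {
  const xs = tf.tensor2d(features);  // e.g. embeddings of photos held on the device
  const ys = tf.oneHot(tf.tensor1d(labels, 'int32'), numClasses).cast('float32');

  const model = tf.sequential();
  model.add(tf.layers.dense({ inputShape: [features[0].length], units: 32, activation: 'relu' }));
  model.add(tf.layers.dense({ units: numClasses, activation: 'softmax' }));
  model.compile({ optimizer: 'adam', loss: 'categoricalCrossentropy', metrics: ['accuracy'] });

  // All gradient updates run on the local CPU/GPU; nothing is uploaded.
  await model.fit(xs, ys, { epochs: 20, batchSize: 16 });

  xs.dispose();
  ys.dispose();
  return model;  // ready for local inference
}
```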