
Google's Teachable Machine - What it really signifies

The Teachable Machine is an effort by Google to make Machine Learning and AI accessible to the wider public, without requiring any specialized training, knowledge of Computer Science, or coding skills.

The site, https://teachablemachine.withgoogle.com/, reflects the current trend towards the personalization of AI: shifting the algorithms from the Cloud to the user's own space, be it their desktop, their phone or another smart device.
Moving the algorithms is not the hard part, though; the real issue is that training models under the common supervised learning paradigm requires massive datasets and excessive amounts of CPU power.
So, as things currently stand, the bulk of the processing is done on the Cloud by Platform-as-a-Service providers which offer Machine Learning as plug-and-play APIs encapsulating the necessary pre-trained algorithms, with offerings that include tone analysis, visual recognition and conversation analysis. Prevalent examples of such PaaS are Haven OnDemand and IBM's Watson/Bluemix.
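
To make the plug-and-play point concrete, here is a minimal sketch of what consuming such a hosted model typically looks like from the client side. It is written in Python with the requests library; the endpoint URL, authentication scheme and response fields are hypothetical placeholders rather than any particular provider's actual contract:

    # Hypothetical ML-as-an-API call: the pre-trained model and all the
    # heavy compute live on the provider's cloud, behind a REST endpoint.
    import requests

    API_URL = "https://ml-paas.example.com/v1/visual-recognition"  # placeholder
    API_KEY = "your-api-key"                                       # placeholder

    def classify_image(path):
        # Upload the raw image and let the hosted, pre-trained model do the work.
        with open(path, "rb") as f:
            resp = requests.post(
                API_URL,
                headers={"Authorization": "Bearer " + API_KEY},
                files={"image": f},
            )
        resp.raise_for_status()
        # Assumed response shape, e.g. [{"label": "cat", "score": 0.97}, ...]
        return resp.json()["classes"]

The appeal is obvious: no model, no training data and no GPUs on the client side, just an HTTP round trip.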
But the tide seems to be turning. Back in August 2016, in AI Linux, we identified a divergence from this trend:
Things seem to be shifting, though, with those elaborate algorithms looking to move on and run locally on mobile devices. That includes their training too; the pictures, notes, data and metadata that reside on the device, and which are going to be worked upon, will also serve to train the network and aid its learning activities, such as recognizing, ranking and classifying objects. The difference is that now all of that is going to happen locally.
Qualcomm's Snapdragon 820 processors and their accompanying Snapdragon Neural Processing Engine SDK are behind such a move, which would allow manufacturers to run their own neural network models on Snapdragon-powered devices such as smartphones, security cameras, automobiles and drones, all without a connection to the cloud. Common deep learning user experiences that could be realized with the SDK would be scene detection, text recognition, object tracking and avoidance, gesture recognition, face recognition and natural language processing.
So instead of the ML algorithms being bred on the Cloud, satisfying their hunger with user-collected data, the alternative is to shift both the algorithms and their training offline, onto the source that generates the data in the first place.
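
For a sense of how training can become cheap enough to happen in the user's space at all, consider one common recipe, roughly what the browser-based Teachable Machine demonstrates: keep a pre-trained network frozen as a feature extractor, and "train" only a trivial k-nearest-neighbours classifier on the user's own examples. The sketch below is illustrative; embed() stands in for a real forward pass through a pre-trained network:

    # Illustrative sketch of on-device learning: "training" is just storing
    # labelled embeddings for a k-nearest-neighbours lookup -- cheap enough
    # for a phone or a browser tab.
    import numpy as np

    def embed(image):
        # Placeholder: a real system would return the penultimate-layer
        # activations of a frozen, pre-trained network.
        return np.asarray(image, dtype=np.float32).ravel()

    class LocalKNNClassifier:
        def __init__(self, k=3):
            self.k = k
            self.examples = []  # (embedding, label) pairs, kept on-device

        def add_example(self, image, label):
            # All the "training" there is: store one labelled embedding.
            self.examples.append((embed(image), label))

        def predict(self, image):
            query = embed(image)
            dists = sorted(
                (float(np.linalg.norm(query - emb)), label)
                for emb, label in self.examples
            )
            votes = [label for _, label in dists[: self.k]]
            return max(set(votes), key=votes.count)  # majority vote

A handful of labelled snapshots per class is enough for useful predictions, which is exactly what makes a train-it-yourself-in-the-browser experience feasible without Cloud-scale datasets or compute.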

full article on i-programmer.info
