Google has released a small, specialized but powerful model that can run on low-resource devices. EmbeddingGemma is a new open embedding model that delivers state-of-the-art performance for its size. Based on the Gemma 3 architecture, it is trained on 100+ languages and is small enough to run in less than 200MB of RAM with quantization. https://www.i-programmer.info/news/105-artificial-intelligence/18358-google-releases-embeddinggemma-state-of-the-art-on-device-embedding.html
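A minimal sketch of what using the model could look like through the sentence-transformers library, which supports EmbeddingGemma. The Hugging Face model ID "google/embeddinggemma-300m" is an assumption here; check the model card for the exact identifier and any access requirements.

```python
from sentence_transformers import SentenceTransformer

# Load EmbeddingGemma (model ID assumed; downloads weights on first use).
model = SentenceTransformer("google/embeddinggemma-300m")

# Embed a few short texts into dense vectors.
sentences = [
    "EmbeddingGemma is an open embedding model from Google.",
    "It is small enough to run on-device.",
    "Paris is the capital of France.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)  # (3, embedding_dim)

# Cosine-similarity matrix between the sentences; the two sentences
# about the model should score higher with each other than with the
# unrelated one.
similarities = model.similarity(embeddings, embeddings)
print(similarities)
```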
A diary of my work. Just links to the full articles on i-programmer.info



