Qualcomm and Meta announced a partnership today that will help everyone's favorite social media company's new large language model (LLM), Llama 2, run on mobile devices. While many LLMs require server farms powered by massively expensive NVIDIA GPUs, Llama 2 is intended to run on Qualcomm chips for phones and PCs as soon as next year. Meta claims that Llama 2 can be packaged into smaller applications that run on phones while doing much of the same stuff as market-leading chatbot ChatGPT.
It's important to note that Llama 2 is an open-source effort from Meta. The company published the model's "weights," the numerical parameters that govern how an AI model behaves. This is a drastically different strategy compared to other notable LLMs like Google Bard and OpenAI's GPT-4, which remain closed.
Qualcomm chips feature a tensor processing unit (TPU) that the company believes will be fit for running the calculations AI models require. Between Meta's more streamlined Llama 2 offerings and Qualcomm's desire to get more involved in the extremely hot AI sector, NVIDIA may actually face some competition if and when Llama 2 ships on mobile devices next year. This could also open the door for a lot of companies that have been sitting on the sidelines, waiting for more affordable ways to implement LLMs at a corporate or consumer level.
Meta and Qualcomm are no strangers to working together, with the Quest line of VR headsets all running on variants of Snapdragon chips.
Asif Khan posted a new article, Qualcomm & Meta partner up on bringing open source Llama 2 LLM to mobile devices.
I can't stress enough how odd it is that the supercomputers we carry in our pockets will soon have an all-knowing* oracle running on them. No internet required.
* It has approximate knowledge of nearly everything.