The Stanford Alpaca project demonstrated using a larger, more expensive AI model to train a smaller, cheaper one. The team fine-tuned Meta's 7-billion-parameter LLaMA model on roughly 52,000 instruction-following examples generated by OpenAI's text-davinci-003, and the resulting cheap model performed about as well as, and in some cases better than, the expensive model. Because the expensive model did the work of generating high-quality training data, the whole process cost under $600, roughly one thousand times less than training a comparable model from scratch. This amounts to a form of AI compression: a smaller model with, say, twenty times fewer parameters can fit onto far cheaper hardware. A sketch of the general recipe appears below.
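Here is a minimal sketch of that distillation recipe, assuming an OpenAI-style completion endpoint. The API key, seed prompts, and file names are hypothetical placeholders for illustration; this is not the actual Stanford Alpaca code.

```python
"""
Sketch of Alpaca-style distillation: query a large, expensive "teacher"
model for reference answers, then use those answers as supervised
training data for a small, cheap "student" model.
"""
import json
import requests

API_KEY = "sk-..."  # hypothetical credential placeholder

# Illustrative seed instructions; Alpaca bootstrapped ~52,000 of these
# from a small hand-written seed set using the self-instruct method.
SEED_TASKS = [
    "Explain photosynthesis to a ten-year-old.",
    "Write a polite email declining a meeting invitation.",
]

def generate_example(instruction: str) -> dict:
    """Ask the teacher model to write a reference answer for one task."""
    resp = requests.post(
        "https://api.openai.com/v1/completions",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "text-davinci-003",  # the teacher model Alpaca used
            "prompt": instruction,
            "max_tokens": 256,
        },
        timeout=60,
    )
    resp.raise_for_status()
    answer = resp.json()["choices"][0]["text"].strip()
    return {"instruction": instruction, "output": answer}

if __name__ == "__main__":
    # Each API call costs fractions of a cent, so tens of thousands of
    # examples cost a few hundred dollars, versus the millions it takes
    # to pretrain a comparable model from scratch.
    with open("distilled_train.jsonl", "w") as f:
        for task in SEED_TASKS:
            f.write(json.dumps(generate_example(task)) + "\n")
```

The resulting JSONL file then feeds a standard supervised fine-tuning loop on the small student model (Alpaca used LLaMA 7B), which is where the thousand-fold cost saving comes from: the expensive step is a few hundred dollars of API calls rather than full-scale pretraining.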
This might allow superior Tesla FSD (Full Self-Driving) performance on Hardware 3, delaying the need for customers to upgrade to the costlier Hardware 4 or Hardware 5 to achieve acceptable robotaxi performance.
Brian Wang is a Futurist Thought Leader and a popular science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked the #1 Science News Blog. It covers many disruptive technologies and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting-edge technologies, he is currently a Co-Founder of a startup and fundraiser for high-potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker and a Singularity University speaker, and a guest on numerous radio shows and podcasts. He is open to public speaking and advising engagements.