I’m usually the one saying “AI is already as good as it’s going to get, for a long while.”
This article, in contrast, quotes folks building the next generation of AI - and they’re saying the same thing.
It’s absurd that some of the larger LLMs now use hundreds of billions of parameters (e.g. Llama 3.1 with 405B).
This doesn’t seem like a smart use of resources when you need several of the largest GPUs available just to run a single conversation.
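A rough back-of-envelope check of that claim (a sketch only, assuming fp16/bf16 weights at 2 bytes per parameter and 80 GB cards, and ignoring KV cache and activation memory, which only make it worse):

```python
import math

def weight_memory_gb(params_billions: float, bytes_per_param: int = 2) -> float:
    """GB needed just to hold the weights (fp16/bf16 = 2 bytes/param)."""
    return params_billions * 1e9 * bytes_per_param / 1e9

def gpus_needed(params_billions: float, gpu_gb: int = 80) -> int:
    """Minimum count of gpu_gb-sized GPUs to fit the weights alone."""
    return math.ceil(weight_memory_gb(params_billions) / gpu_gb)

print(weight_memory_gb(405))  # 810.0 -> ~810 GB of weights at fp16
print(gpus_needed(405))       # 11 -> eleven 80 GB GPUs, weights alone
```

So a 405B model needs on the order of ten of the biggest datacenter GPUs before serving a single token, which is what makes the resource cost per conversation so striking.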