Meta reports that Muse Spark achieves its reasoning capabilities with over an order of magnitude less compute than Llama 4 ...
Can the other LLMs keep up?