News
Google’s new Gemini AI beats GPT and human experts across 57 subjects
December 10, 2023
With a score of 90.0% on MMLU (massive multitask language understanding), Gemini is the first model to outperform human experts (89.8%), as well as GPT-4 (86.4%), on the benchmark, which tests knowledge and problem-solving across 57 subjects including math, physics, history, law, medicine, and ethics. That's experts, not the average human.