Super Bowl 2024: Generative AI bots from Google, Microsoft predict results

Gemini claims that Kansas City quarterback Patrick Mahomes ran for 286 yards with two touchdowns and one interception
An undated image showing players getting ready for the NFL Super Bowl. — Pixabay

As sports fans across the US eagerly seek predictions about the outcome of the 2024 American Football Super Bowl, generative artificial intelligence has begun making up results for the much-awaited game.

Super Bowl LVIII results (predicted by Google's Gemini)

Gemini, Google’s chatbot formerly known as Bard, responded to a query by claiming that the 2024 Super Bowl had already wrapped up, backing the claim with fictional statistics, according to a Reddit post reported by TechCrunch.

The bot backed up its claim with a detailed breakdown of player statistics. For instance, it stated that Kansas City quarterback Patrick Mahomes ran for 286 yards with two touchdowns and one interception, while San Francisco's Brock Purdy ran for 253 yards and scored one touchdown.


Super Bowl LVIII results (predicted by Microsoft Copilot) 

Gemini was not alone: Microsoft's Copilot also suggested that the game had ended and offered inaccurate references to support the claim. However, it stated that the 49ers, not the Chiefs, emerged as the winners "with a final score of 24-21," possibly reflecting a bias towards San Francisco.

GenAI models lack real intelligence. They are trained on a vast number of examples typically sourced from the public web, learning the likelihood of data (such as text) based on patterns and the surrounding context.

Large language models (LLMs), used by companies like Google and Microsoft, work well at scale by generating text one piece at a time based on probabilities learned from their training data. However, they can still produce grammatically correct but nonsensical or inaccurate content.
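To illustrate the idea, here is a minimal toy sketch in Python. The vocabulary, probabilities, and scores below are invented for illustration only and do not come from any real model; the point is that sampling from learned probabilities always yields fluent text, whether or not the claim it makes is true.

```python
import random

# Toy "next-phrase" distributions: each context maps plausible continuations
# to probabilities. These numbers are made up for illustration; a real LLM
# learns billions of such statistics from web-scale text.
TOY_MODEL = {
    "The Super Bowl was won by": [("the Chiefs", 0.5), ("the 49ers", 0.5)],
    "the Chiefs": [("with a final score of 31-28.", 1.0)],
    "the 49ers": [("with a final score of 27-24.", 1.0)],
}

def generate(prompt: str) -> str:
    """Sample a continuation, phrase by phrase, from the toy distributions."""
    text, context = prompt, prompt
    while context in TOY_MODEL:
        phrases, probs = zip(*TOY_MODEL[context])
        choice = random.choices(phrases, weights=probs, k=1)[0]
        text = f"{text} {choice}"
        context = choice
    return text

# The output is always fluent, but whether it matches reality depends only on
# the statistics in the training data, not on any notion of truth.
print(generate("The Super Bowl was won by"))
```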

These language models have no harmful intent; they simply do not understand the concepts of true and false. GenAI technology is prone to mistakes, so it is important to treat the information it generates with caution, as it may not always be accurate.

This serves as a reminder to verify statements from GenAI bots, as there is a real chance they are false.