This is the problem with game reviews today. The top 15 reviews so far have all given BioShock Infinite 100 out of 100, i.e. a perfect score. By definition, this suggests it is the perfect game. Pack up and go home, games industry: BioShock Infinite is the best there will ever be.
However, each and every one of those reviews mentions negatives. So the game has been given a perfect score at the end, yet throughout the review there are flaws. We reach a curious problem where a perfect score does not mean a perfect game.
A few people have argued that this score simply represents maximum enjoyment, or that this is an industry-changing game. Neither argument holds water. If the reviewer had the maximum enjoyment they could possibly have, then why were there negatives that they admit detracted from the experience?
What about the score being representative of how innovative and industry-changing the game is? This falls straight out of the gate, breaking its neck and being put down by a vet with a shotgun in a tent on the track. You only need to look at the reviews for Call of Duty: Modern Warfare 2 to see how laughable that proposition is. A game that brought nothing new to the industry has multiple 100 scores.
So what does the score mean? Apparently nothing. If it were a scale of how good a game is, 100 would mean perfect. If it were a scale of how enjoyable a game is, a 100 score should have no negatives detracting from the experience. And if it were a scale of innovativeness, then no Call of Duty game should ever have gotten a 100.
But this problem is not limited to Metacritic. It's a far more widespread problem, as you can see in the image below:
If reviews were fair, using the scale as it was designed to be used, then these graphs should both be a perfect bell curve. The majority of games should be good, with the better ones being great, and the best being excellent. However, IGN's average score is 8.0, which according to their website is "Great": "If you play a lot of games, then you have got to play this one." Uh-uh. So the mean score across every game they have reviewed falls into the "have got to play" category. This smacks of something fundamentally wrong with the rating system.
Gamespot is better, with an average score of 7, which translates to "Good" according to their website. This is a better mean score, but the bell curve is still heavily shifted to the right, meaning the scores are heavily inflated according to their own scoring system. When scores were first introduced in Game Player's magazine, the average score was 5. Statistically this makes sense: on a 1-10 scale used in full, the typical game should land in the middle. But you can clearly see how this has shifted upwards. It's not for me to speculate as to the cause of this upward shift, but the internet always abounds with rumours of publishers heavily leaning on reviewers for a better score.
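To make the arithmetic concrete, here is a minimal sketch with made-up score lists (illustrative only, not real review data) showing how a right-shifted distribution pushes the mean well above the midpoint of the scale, while a symmetric spread centres it at 5:

```python
# Hypothetical score distributions on a 1-10 scale.
# These numbers are invented for illustration, not taken from any outlet.

# A symmetric, bell-like spread centred on the midpoint of the scale.
balanced = [2, 3, 4, 4, 5, 5, 5, 6, 6, 7, 8]

# An inflated spread like the one described above: almost everything
# lands between 7 and 9, so the "average" game reads as "Great".
inflated = [6, 7, 7, 7, 8, 8, 8, 8, 9, 9, 10]

def mean(scores):
    """Arithmetic mean of a list of review scores."""
    return sum(scores) / len(scores)

print(f"balanced mean: {mean(balanced):.1f}")  # 5.0
print(f"inflated mean: {mean(inflated):.1f}")  # 7.9
```

The point is that the mean only tells you where the scale's centre of gravity actually sits; if reviewers never use the bottom half of the scale, a "7" quietly becomes the new average.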
From the mid-90s onwards, reviewers started referring to anything below 5 as 'garbage' or 'poor'. This was the start of the 7-point average, and the skewing of the entire scale towards the higher scores. Next time you see a review, have a look at the actual content of the review and make up your own score using the full range of numbers from 1 to 10. I think you'll find an average of 5 will soon emerge.