At the time they were a graphical revolution, and many fans were delighted with them… but we have to admit that, seen with today's eyes, many first-generation 3D video games look pretty ugly.

One reason for this is the scarcity of polygons in their 3D models. The other, no less important, is the (low) quality of the textures (that is, the images that made a mere cube take on the appearance of, say, a stone wall).

So it should surprise no one that, in this era of recovering, re-appraising and remastering classic games, textures are one of the aspects being spruced up. What may surprise us more is that artificial intelligence is being used to automate the work.

A revamped ‘Zelda’ … for Chinese users only

When Nvidia launched its Nvidia Shield set-top box in China, it struck a deal with Nintendo: Chinese users (and only they) could run Wii games on their Shield.

The problem is that that Nintendo console did not support high-definition resolutions, which hurts the gaming experience on today's 4K displays.

So Nvidia decided to do something about it, and this week announced a graphical update for one of those games, 'The Legend of Zelda: Twilight Princess', which improves 4,400 textures through deep learning techniques.

As we can see, the result is not as spectacular as the HD version of the same game released by Nintendo for the Wii U, but it clearly improves on the original version of this 'Zelda' installment:

Graphical comparison: original Shield remaster VS Shield version with updated textures VS Twilight Princess HD on Wii U pic.twitter.com/xjnB0yZv8q

— Chinese Nintendo (@chinesenintendo), January 15, 2019

A technology within the reach of any AI fan

In recent times, several fan groups have set out, on their own initiative, to perform tasks similar to what Nvidia has done.

Games like 'Final Fantasy VII' have benefited from the work of developers who, armed only with a $100 piece of software called AI Gigapixel, have managed to improve their appearance.

The process is, as these things go, simple: the developers feed a GAN (a generative adversarial network) two versions of the same image, one at extremely high resolution and the other at low resolution.

Then the GAN plays cat and mouse with itself: one neural network tries to reconstruct the high-resolution image from the low-resolution one, while another evaluates the result, until it is satisfactory.
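To make that cat-and-mouse game concrete, here is a minimal sketch of what such an adversarial training loop can look like in PyTorch. This is not Nvidia's or the modders' actual code: the tiny network sizes, the 4x scale factor and the dummy image pair are assumptions purely for illustration.

```python
import torch
import torch.nn as nn

# Generator: tries to rebuild the high-resolution texture from the low-res one.
generator = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Upsample(scale_factor=4, mode="nearest"),
    nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
)

# Discriminator: judges whether a high-resolution texture looks real or generated.
discriminator = nn.Sequential(
    nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 1, 3, stride=2, padding=1),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

# Dummy training pair: a 256x256 "original" texture and its 64x64 downscaled copy.
high_res = torch.rand(1, 3, 256, 256)
low_res = torch.nn.functional.interpolate(high_res, size=(64, 64))

for step in range(100):
    # 1) Train the discriminator to tell real textures from generated ones.
    fake = generator(low_res).detach()
    loss_d = bce(discriminator(high_res), torch.ones(1, 1)) + \
             bce(discriminator(fake), torch.zeros(1, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator (and match the original).
    fake = generator(low_res)
    loss_g = bce(discriminator(fake), torch.ones(1, 1)) + \
             nn.functional.l1_loss(fake, high_res)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

In a real project the "dummy pair" would be replaced by thousands of high/low-resolution image pairs, and both networks would be far deeper, but the back-and-forth between the two losses is the essence of the technique.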

Once the system has been 'trained', all the textures of the game in question are run through it, yielding high-resolution textures based on the (low-resolution) originals.
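That batch step can be as simple as the following hedged sketch: run every low-resolution texture through the trained generator from the previous example and save the result. The folder names and the use of Pillow/torchvision here are assumptions for illustration only.

```python
from pathlib import Path

import torch
from PIL import Image
from torchvision.transforms.functional import to_tensor, to_pil_image

in_dir = Path("textures_original")   # hypothetical folder of low-res textures
out_dir = Path("textures_upscaled")  # hypothetical output folder
out_dir.mkdir(exist_ok=True)

generator.eval()  # the trained generator from the previous sketch
with torch.no_grad():
    for path in in_dir.glob("*.png"):
        low_res = to_tensor(Image.open(path).convert("RGB")).unsqueeze(0)
        high_res = generator(low_res).clamp(0, 1).squeeze(0)
        to_pil_image(high_res).save(out_dir / path.name)
```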

If you are interested, this forum reviews more examples applied to other games and provides more technical information.