  • Those are valid points, but nothing there is insurmountable with even a little bit of advancement.

    For example, here is a relatively old demo from 2021, before DALL-E 2, Stable Diffusion, or any of the video consistency models were out: https://www.youtube.com/watch?v=P1IcaBn3ej0

    Is this perfect? No, there are artifacts, and the output quality is capped by the driving dataset it was trained on, but this was an old, specialized example built on a completely different (now archaic) architecture. The newer image generation models are much better, and frame-to-frame consistency models are improving by the week; some of them are nearly there (obviously not in real time).

    About the red-on-red bleed/background separation issues: for 3D-rendered games it's relatively straightforward to pull not just the color but also depth and normal maps from the frame buffer (especially assuming this is done with the blessing of the game developers, game engines, DirectX, or other APIs). I don't know if you follow the developments, but with ControlNet on top of Stable Diffusion, for example, it is trivial to constrain the generated image on color, depth, normal maps, pose, line outlines, and plenty more. So if the character is wearing red against a red background, the two are still separated in the depth map (or by their surface normals), and the generated image will respect that same depth. You can use whatever aspects of the input game you like as constraints on the generation; a rough sketch of the depth case is below.
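
    A minimal sketch of that depth-constrained case, using Hugging Face's diffusers library (the checkpoints are the public ControlNet and SD 1.5 ones; the depth image path is an assumption, standing in for a depth buffer dumped from the game):

    ```python
    # Minimal sketch: depth-conditioned generation with ControlNet + Stable Diffusion.
    # "depth_buffer.png" is assumed to be a depth map dumped from the game beforehand.
    import torch
    from PIL import Image
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

    # Public depth ControlNet checkpoint; normal-map/pose variants load the same way.
    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        controlnet=controlnet,
        torch_dtype=torch.float16,
    ).to("cuda")

    depth_map = Image.open("depth_buffer.png")  # character and wall stay separated here
    frame = pipe(
        "photorealistic character in a red coat standing before a red wall",
        image=depth_map,
        num_inference_steps=20,
    ).images[0]
    frame.save("restyled_frame.png")
    ```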

    I am not saying we can do this right now. The generation speed for high-quality images, plus every other tool the workflow requires (understanding the context and generating captions / matching the latent space, extracting the color/depth/normal constraints, enforcing temporal consistency over the previous N frames, and producing the final image fast enough), obviously involves a ton of challenges. But it is quite possible, and I fully expect to see working non-realtime demos within a year, or a couple of years at most. The per-frame loop would look roughly like the sketch below.
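
    Purely as a hypothetical sketch of that loop's shape (every helper and object here is an invented placeholder, not a real API):

    ```python
    # Hypothetical shape of the workflow described above; all helpers are placeholders.
    def restyle_game_stream(game, generator, n_context=4):
        previous_frames = []  # generated outputs kept around for temporal consistency
        for frame in game.frames():
            constraints = {
                "color": frame.color_buffer,    # the raw rendered frame
                "depth": frame.depth_buffer,    # pulled from the engine/graphics API
                "normals": frame.normal_buffer,
            }
            caption = describe_scene(frame)     # context understanding / captioning step
            out = generator.generate(
                prompt=caption,
                constraints=constraints,
                context=previous_frames[-n_context:],  # condition on the last N outputs
            )
            previous_frames.append(out)
            yield out
    ```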

    In 2D games it may be harder due to pixelation. As you said, there are upscaling algorithms that work more or less well to increase the resolution slightly, though nothing photorealistic, obviously. There are also methods like this one that use segmentation algorithms to split the image apart and regenerate the pieces with AI generators: https://github.com/darvin/X.RetroGameAIRemaster
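
    The idea there, very roughly (this is not the linked repo's actual code; the segmenter is a placeholder, and the img2img step uses diffusers as one possible generator):

    ```python
    # Sketch of segment-then-regenerate: redraw each detected region with img2img.
    # segment_image() is a placeholder for any segmentation model returning (box, label).
    import torch
    from PIL import Image
    from diffusers import StableDiffusionImg2ImgPipeline

    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    screenshot = Image.open("retro_screenshot.png").convert("RGB")
    for box, label in segment_image(screenshot):        # placeholder segmenter
        crop = screenshot.crop(box).resize((512, 512))  # SD 1.5's native resolution
        redrawn = pipe(
            prompt=f"high resolution {label}, detailed game art",
            image=crop,
            strength=0.5,  # low strength keeps the original layout and colors
        ).images[0]
        w, h = box[2] - box[0], box[3] - box[1]
        screenshot.paste(redrawn.resize((w, h)), box)
    screenshot.save("remastered.png")
    ```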

    To be honest, to redo 2D games in a different style you can do much better, even now. Most retro games have had their sprites and backgrounds extracted from the originals already, so you can upscale them once (by the publisher, or as fan edits), and then you don't even need to worry about real-time generation. I wanted to upscale Laura Bow 2 this way, for example. One random example I just found: https://www.youtube.com/watch?v=kBFMKroTuXE
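
    For that one-time upscale pass, something as simple as OpenCV's dnn_superres module would do (a sketch; the ESPCN model file is a pretrained network downloaded separately, and the sprite directories are assumptions):

    ```python
    # One-off offline upscale of extracted sprites; no realtime generation needed afterwards.
    # Requires opencv-contrib-python and a pretrained ESPCN model file (downloaded separately).
    import glob
    import os
    import cv2

    sr = cv2.dnn_superres.DnnSuperResImpl_create()
    sr.readModel("ESPCN_x4.pb")  # assumed local path to the pretrained model
    sr.setModel("espcn", 4)      # 4x upscale

    os.makedirs("sprites_4x", exist_ok=True)
    for path in glob.glob("sprites/*.png"):  # assumed directory of extracted sprites
        sprite = cv2.imread(path)
        out_path = os.path.join("sprites_4x", os.path.basename(path))
        cv2.imwrite(out_path, sr.upsample(sprite))
    ```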

    Replacing the sprites/backgrounds won't make them truly photorealistic, with dynamic drop shadows and lighting changes, but once the sprites are at a high enough resolution you can feed them into the full-frame regeneration frame by frame. Then again, I probably don't want Guybrush to look like Gabriel Knight 2 or the other FMV games, so I'm not sure about that angle.


  • When they are ripe, especially the red-skin/red-flesh ones or the yellow-skin ones, they are really good. The problem is that most of the ones sold in the US are very mediocre and tasteless. (To be honest, that's true for most fruit; the peaches, melons, and most other fruit I find in the supermarket here are terribly bland.)

    I get the better-tasting ripe ones at Asian grocers or in Chinatown; I usually follow and ask the Asian aunties and grandmas, since they know where the best stuff is. But the red-skin/white-flesh ones are usually disappointing unless you are really lucky, so try the others.