MetalliC wrote: ↑Thu Apr 24, 2025 9:54 pm
I see, thank you.
ahh, I see, it seems opaques are discarded only during that "1st pass".
I don't know if Pro-1000 operates in passes per se, but there is this concept of a contour texture. Alpha blending also works entirely differently from what you'd expect. One theory is that it actually renders only every other pixel and then biases the rendered pixels toward visible or black.
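To illustrate that theory (and it is only a theory), here's a minimal sketch of a screen-door-style scheme. The checkerboard parity test, the scale-toward-black bias, and all the names are my own guesses at one possible interpretation, not confirmed hardware behavior:

[code]
#include <cstdint>

struct Color { uint8_t r, g, b; };

// Hypothetical "every other pixel" translucency. Half the pixels are
// skipped entirely (the background shows through); the rendered half
// are scaled between black and the full source color by alpha.
Color ShadeTranslucent(int x, int y, Color dst, Color src, uint8_t alpha)
{
    if (((x ^ y) & 1) == 0)
        return dst;  // skipped pixel: leave the framebuffer untouched

    // Bias the rendered pixel toward visible (full color) or black.
    auto bias = [alpha](uint8_t c) {
        return static_cast<uint8_t>((c * alpha) / 255);
    };
    return Color{bias(src.r), bias(src.g), bias(src.b)};
}
[/code]

On a CRT, a pattern like this would blur into something resembling real translucency, which is one reason screen-door techniques were common in hardware of that era.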
See here.
no, I'm 99.99% sure Hikaru's GPU has nothing to do with PowerVR or ImgTec.
It must be based on some other existing IP. It seems highly unlikely that Sega would have developed its own from scratch at that point in time.
why? I'd say it looks familiar in its general concepts, with the difference that Hikaru has it in the form of a "command list" for the GPU, which is probably transformed by the "Command Processor" into lower-level data for the next ASIC, which does the actual rendering.
The command list you linked looks like a set of instructions for the GPU to execute. Pro-1000 doesn't have that concept. The display lists aren't arbitrary lists of commands to follow; they're a data structure that is traversed. At the highest level, there is a linked list of viewports to render. Each viewport is described by the same struct (which sets lighting, fog, and other parameters) and points to a list of top-level nodes to render. Each of those is a hierarchical set of culling nodes that terminates in a pointer to a model to render (stored in polygon RAM or in VROM). And each node can, IIRC, point to yet another list, so it's a tree-like structure of nodes and lists of nodes. Each time you go one node deeper, you apply a transform matrix (stored elsewhere and referred to by its index), which translates pretty directly to the OpenGL matrix stack.
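In rough pseudo-C++, the shape of it is something like this. Every field name here is invented; the real display list is raw 32-bit words in polygon RAM, not C structs:

[code]
#include <cstdint>

// Invented names and layout: a sketch of the structure described above.
struct CullingNode {
    uint32_t matrixIndex;   // transform to apply when descending, by index
    uint32_t childAddr;     // -> child node, another node list, or a model
    uint32_t siblingAddr;   // -> next node in the current list
    // bounding-box and LOD words would also live in here
};

struct Viewport {
    uint32_t lightingParams;    // light vector, ambient, etc.
    uint32_t fogParams;         // fog color and density
    uint32_t nodeListAddr;      // -> first top-level CullingNode
    uint32_t nextViewportAddr;  // -> next Viewport in the linked list
};
[/code]

Rendering a frame then means walking the viewport chain and recursively descending each node tree, applying the referenced matrix at each level, much like an OpenGL matrix stack.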
It's a scene graph: each node basically specifies how to transform the children below it. And at the very end of this chain is the thing to render with those transforms applied. There is also bounding box information and LOD information (up to, I believe, 4 different models to select from depending on distance and view angle) for culling invisible models and switching to lower-fidelity ones.
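Continuing the sketch, the LOD selection might look roughly like the following. The four-slot count comes from the recollection above, but the thresholds, the distance-only test (the real hardware apparently also considers view angle), and all the names are made up:

[code]
struct Model;  // polygon data in polygon RAM or VROM

struct LODSet {
    const Model* models[4];       // highest to lowest fidelity
    float        maxDistance[4];  // farthest distance each level is used at
};

// Pick the highest-fidelity model whose range covers the view distance.
const Model* SelectLOD(const LODSet& lod, float viewDistance)
{
    for (int i = 0; i < 4; i++) {
        if (lod.models[i] && viewDistance <= lod.maxDistance[i])
            return lod.models[i];
    }
    return nullptr;  // past every threshold: cull the model entirely
}
[/code]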
But here's where things really differ from a more conventional GPU: rendering state isn't modified arbitrarily by command lists, as in Hikaru. Apart from some viewport-level things like fog and the light vector, the various shading and rendering options are configured per-polygon. Every polygon in a model contains a big header consisting of 7 words (32 bits each). These specify texturing, shading, color, blending with fog, etc. Normally, most of these would be global state parameters: you'd set them using some command, submit a bunch of polygons, change a setting, send more polygons, and so on.

State transitions are expensive on modern GPUs. I'm not exactly sure why, but I suspect they try to parallelize as much as they can, so if the state changes, you have to wait for everything in flight to finish drawing before the next batch of triangles with different state parameters can be processed. For example, Supermodel's legacy renderer breaks a single model up into two models based on transparency state (the rest of the parameters it passes to the shaders on a per-vertex basis). I think the new renderer does something similar. I think it is normal for modern rendering engines to sort meshes based on their state parameters (lighting, shader used, etc.). That wasn't necessary on Pro-1000.
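A sketch of that kind of split, loosely in the spirit of what I described Supermodel's legacy renderer doing. The header layout and the flag bit are invented stand-ins, not the real 7-word format:

[code]
#include <cstdint>
#include <vector>

// Invented stand-in for the real 7-word polygon header.
struct Polygon {
    uint32_t header[7];  // per-polygon state: texture, shading, fog, blending
    // ... vertex data follows
};

bool IsTranslucent(const Polygon& p)
{
    return (p.header[6] & 0x1) != 0;  // hypothetical transparency flag
}

// Split one model into two host-side batches so each draw call runs
// with a single fixed blend state and no mid-draw state changes.
void SplitByTransparency(const std::vector<Polygon>& model,
                         std::vector<Polygon>& opaque,
                         std::vector<Polygon>& translucent)
{
    for (const Polygon& p : model)
        (IsTranslucent(p) ? translucent : opaque).push_back(p);
}
[/code]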
On Pro-1000, all that state was encoded per-polygon. So the system isn't taking a list of commands; it's taking a scene description -- sort of like a modern-day .usd file. It then traverses all those nodes to determine what to draw, and it will even decide on its own to stop drawing if it runs out of time for the current frame. The programmer has almost no control over the process -- it's just like handing a file off to some other program to draw for you.
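Putting the pieces together, the traversal (including the give-up-when-out-of-time behavior) might look something like this. How the hardware actually measures its frame budget is unknown; the wall-clock deadline and all the helper names here are stand-ins:

[code]
#include <chrono>
#include <vector>

struct Node {            // minimal stand-in for a culling node
    int matrixIndex = 0;
    std::vector<Node*> children;
};

struct TimeBudget {      // assumed mechanism; the real one is unknown
    std::chrono::steady_clock::time_point deadline;
    bool Expired() const { return std::chrono::steady_clock::now() >= deadline; }
};

void PushMatrix(int index) { /* apply the indexed transform, a la glPushMatrix */ }
void PopMatrix()           { /* restore the previous transform */ }
void DrawModel(const Node&) { /* rasterize the model this leaf points to */ }

// Walk the scene graph; per-polygon state is read from each model as it
// is drawn, so there are no state-change commands anywhere in here.
void TraverseNode(const Node& n, const TimeBudget& budget)
{
    if (budget.Expired())
        return;                 // out of frame time: the GPU just stops
    PushMatrix(n.matrixIndex);  // one transform per level of descent
    if (n.children.empty())
        DrawModel(n);           // leaf: the actual thing to render
    for (const Node* child : n.children)
        TraverseNode(*child, budget);
    PopMatrix();
}
[/code]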