In Supermodel, a single model might be broken down into, say, 4 different meshes. They all share the same model matrix, so it is passed only once. The shader class caches all shader uniforms, so only the differences are sent to the GPU; i.e., if the only difference between the meshes is the texture ID, only that is updated with a call to glUniform.
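The caching idea can be sketched in a few lines. This is a hedged illustration, not Supermodel's actual shader class: the names are hypothetical, and a call counter stands in for the real glUniform1i call so the skip-if-unchanged behavior is visible.

```cpp
#include <string>
#include <unordered_map>

// Hypothetical sketch of per-uniform caching: the "GL call" is only made
// when the cached value differs from the new one. gpuCalls counts the
// stubbed glUniform1i invocations.
struct CachedShader {
    std::unordered_map<std::string, int> cache;  // last value sent per uniform
    int gpuCalls = 0;

    void SetUniformInt(const std::string &name, int value) {
        auto it = cache.find(name);
        if (it != cache.end() && it->second == value)
            return;              // unchanged: skip the GPU update entirely
        cache[name] = value;
        ++gpuCalls;              // stand-in for glUniform1i(location, value)
    }
};
```

Rendering four meshes that differ only in texture ID would then issue one glUniform call per change of texture ID, rather than re-sending every uniform per mesh.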
If you don't care about quad rendering, you could probably pass the entire 7-word header directly to the GPU as an integer attribute, then extract the relevant bits and do everything in the shader.
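The bit-extraction step might look like the following. Note this is only an illustration of the technique: the field names and bit positions below are invented, not the real Real3D polygon header layout, and the same shifts/masks would run in GLSL on the integer attribute rather than in C++.

```cpp
#include <cstdint>

// Hypothetical decode of one 32-bit header word. The field positions are
// made up for illustration -- the actual Real3D bit layout differs.
struct PolyAttribs {
    uint32_t texEnable;  // assumed: 1 bit, texturing on/off
    uint32_t texFormat;  // assumed: 3-bit texture format field
    uint32_t texPage;    // assumed: 8-bit texture page selector
};

PolyAttribs DecodeHeaderWord(uint32_t w) {
    PolyAttribs a;
    a.texEnable = (w >> 31) & 0x1;
    a.texFormat = (w >> 28) & 0x7;
    a.texPage   = (w >> 20) & 0xFF;
    return a;
}
```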
Model 3 Step 1/1.5 and Step 2 Video Board Differences
Re: Model 3 Step 1/1.5 and Step 2 Video Board Differences
Bart wrote: ↑Thu Apr 24, 2025 10:46 pm
> I don't know if Pro-1000 operates in passes per se, but there is this concept of a contour texture. Alpha blending also works entirely differently than expected. One theory is that it actually renders only every other pixel and then biases the rendered pixels toward visible or black. See here.

Quite interesting, thanks, and that's exactly why I'm asking questions here.

> it should be based on something Sega was already familiar with.
Why? I'd say it looks familiar in general concepts, with the difference that Hikaru has it in the form of a "command list" for the GPU, which is probably transformed by the "Command Processor" into more low-level data for the next ASIC, which does the actual rendering.
Bart wrote: ↑Thu Apr 24, 2025 10:46 pm
> The command list you linked looks like a set of instructions for the GPU to execute. Pro-1000 doesn't have that concept. The display lists aren't arbitrary lists of commands to follow. They're a data structure that is traversed. At the high level, there is a linked list of viewports to render. Each viewport is described by the exact same struct (which sets lighting, fog, and other parameters) and points to a list of top-level nodes to render. Each of these is a hierarchical set of culling nodes that terminates in a pointer to a model to render (stored in polygon RAM or in VROM). And each node can, IIRC, point to yet another list. So it's a tree-like structure of nodes and lists of nodes. Each time you go one node deeper, you apply a transform matrix (stored elsewhere and referred to by its index), which translates pretty directly to the OpenGL matrix stack.
> It's a scene graph: each node basically specifies how to transform the children below it. And at the very end of this chain is the thing to render with those transforms applied. There is also bounding box information and LOD information (up to, I believe, 4 different models to select from depending on distance and view angle) for culling invisible models and switching to lower-fidelity ones.

Okay, but if you replace the words "CALL" and "JUMP" with "node link", and "RET" with "return to the previous-level node", it's exactly the same tree/leaf graph structure you described, isn't it?
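A minimal sketch of the node traversal described above. The node layout is invented for illustration (the real structures live in polygon RAM/VROM with their own encoding), and a scalar scale factor stands in for a full 4x4 matrix referenced by index; bounding-box and LOD checks are omitted.

```cpp
#include <utility>
#include <vector>

// Hedged sketch of scene-graph traversal: each node optionally references
// a matrix by index (applied on entry, implicitly restored on return, like
// an OpenGL matrix stack) and optionally terminates in a model to draw.
struct Node {
    int matrixIdx = -1;              // -1: no transform at this node
    int modelId   = -1;              // -1: interior node, nothing to draw
    std::vector<Node> children;      // nested culling nodes / node lists
};

// Collects (modelId, accumulated transform) pairs in draw order.
void Traverse(const Node &n, const std::vector<float> &matrices,
              float current, std::vector<std::pair<int, float>> &out) {
    if (n.matrixIdx >= 0)
        current *= matrices[n.matrixIdx];    // "push": apply referenced matrix
    if (n.modelId >= 0)
        out.push_back({n.modelId, current}); // leaf: draw with this transform
    for (const auto &c : n.children)
        Traverse(c, matrices, current, out); // recursion restores on return
}
```

The recursion's call stack plays the role of glPushMatrix/glPopMatrix: a sibling never sees a transform applied deeper in another branch.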
Of course, the way Hikaru retrieves display lists is totally different from Model 3. A list can be anywhere in PCI space, and even span different RAMs within that space: usually it starts in the PCI bridge chip's local RAM, but it may then do "CALL"s, for example into the slave SH-4 CPU's RAM, where some models are stored.
The result is presumably stored in the CP's (Command Processor's) local RAM (the Antarctic ASIC on the diagram); there is 8 MB of SDRAM wired to it, which seems not to be accessible by the main CPU, since it isn't checked by the BIOS RAM test.
But it should be used for *something*, and that something is not textures, 2D-layer data, or anything similar. So it's probably for display list data (polygons, models, etc.), and of course the data there will not be as "high level" as the input commands, but closer to something like the Model 3 scene node list, with headers full of magic control bits and everything like that.
I suppose there may also be double buffering: while the CP is processing one list, it is also transferring the previous one to the GPU, which actually does the rendering.
As for matrices, the Hikaru GPU has a similar thing: at the top level you may set up 2 current matrices, then at the next level you may multiply them by another matrix, and so on; there are also push/pop operations to preserve the matrices of the upper level.
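The push/pop mechanism amounts to a small stack of saved matrix states. A minimal sketch, with a scalar standing in for a full matrix and all names hypothetical:

```cpp
#include <stack>

// Sketch of hierarchical matrix state with push/pop: going one level deeper
// multiplies the current matrix; push/pop preserves and restores the state
// of the upper level. A float stands in for a 4x4 matrix for brevity.
struct MatrixStack {
    std::stack<float> saved;
    float current = 1.0f;                      // identity

    void Push()            { saved.push(current); }
    void Multiply(float m) { current *= m; }   // descend a level
    void Pop()             { current = saved.top(); saved.pop(); }
};
```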
TLDR: just imagine adding to the Real3D Pro-1000 a "frontend" processor which reads data via PCI, parses the command/vertex lists, transforms them into a linked node list with all these polygon headers, etc., stores that to local RAM, and then pushes it to the ASIC that does the actual rendering.

Bart wrote: ↑Thu Apr 24, 2025 10:46 pm
> But here's where things really differ from a more conventional GPU: rendering state isn't modified arbitrarily by command lists, as in Hikaru. Apart from some viewport-level things like fog and light vector, the various shading and rendering options are configured per-polygon. Every polygon in a model contains a big header consisting of 7 words (32 bits each). These specify texturing, shading, color, blending with fog, etc.

Nothing unusual here; it was the same in most mid-to-late-'90s GPUs, like the PowerVR parts and others.