
I'm working on a block diagram to show how the Model 3 video board ASICs are linked together, but suffice to say that it is not the same as Hikaru.
Oh nice, I had been curious about that
gm_matthew wrote: ↑Tue Apr 08, 2025 4:37 pm
I'm working on a block diagram to show how the Model 3 video board ASICs are linked together, but suffice to say that it is not the same as Hikaru.

Well, to me it looks similar to some degree.
Bart wrote: ↑Thu Apr 24, 2025 8:47 pm
We know the gist of how it works and some details are hinted at in patents. By piecing together what the games are doing, what the official (very high level) documentation says, and hints from the Real3D SDK, more is known. For example, Ian and Matthew recently deduced that part of the Real3D culling RAM is used to buffer writes to other RAM regions before frames are processed.

I see, thank you.
There is a Pro-1000 Windows SDK that includes the Pro-1000 firmware (roughly equivalent to a Model 3 ROM set but smaller), C++ header files and libraries, etc. There are also some PDF manuals describing the Windows SDK. The Pro-1000 was connected to Windows NT workstations and commands were sent over to it via a SCSI bus. As far as we can tell, these commands closely mirror actual operations on the Pro-1000 (and the Model 3's version of it), so the Pro-1000 firmware (which runs on the PowerPC CPU on the Pro-1000 CPU board) is likely just executing the commands verbatim.
I think fragments being discarded based on translucency values is probably "accurate" (although obviously the hardware processes these differently than a GL fragment shader). There is no polygon sorting, to my knowledge, but there is culling of nodes and Ian handles transparency by performing a second pass for reasons I'm not clear on (I haven't had a chance to understand his renderer and I know a lot has changed in the last few years).
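To make that concrete, here is a minimal CPU-side sketch of what an alpha-test style discard does in a GL fragment shader. The Fragment struct and the 0.5 threshold are invented for illustration; the actual hardware processes translucency differently, as noted above.

```cpp
// Illustrative only: a CPU-side stand-in for an alpha-test style discard,
// the way a GL fragment shader might express it. The struct and threshold
// are invented; the real hardware handles translucency differently.
struct Fragment {
    float r, g, b;
    float alpha;   // translucency value for this fragment
};

// Returns true if the fragment should be written; fragments below the
// threshold are simply discarded, with no sorting or blending needed.
bool PassesAlphaTest(const Fragment& f, float threshold = 0.5f)
{
    return f.alpha >= threshold;
}
```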
No, I'm 99.99% sure Hikaru's GPU has nothing to do with PowerVR or ImgTec.
why? I'd say it looks familiar in general concepts, with the difference Hikaru have it in the form of "command list" for GPU, which is probably transformed by "Command Processor" to the form of more low level data for next ASIC which will do actual rendering.Bart wrote: ↑Thu Apr 24, 2025 8:47 pmThe command lists aren't really comparable at all. Pro-1000 processes a higher-level data structure, basically a scene graph. This is efficient in that it allows meshes (models) to be pre-stored and then manipulated just by updating transform matrices or a few other parameters, without having the CPU perform any transformations (the hardware does all transformation and lighting).
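A toy illustration of why the pre-stored approach is cheap for the CPU (the layout here is invented, not the real Real3D format): the mesh data is written once, and animating an object per frame amounts to rewriting the small transform entry its node refers to.

```cpp
// Invented layout for illustration only (not the real Real3D format).
// The mesh data is written once and never touched again; animating an
// object is just rewriting the transform entry its node refers to, and
// the hardware performs all transformation and lighting.
struct Transform { float m[4][3]; };

void AnimateObject(Transform* matrixTable, int matrixIndex,
                   const Transform& newPose)
{
    matrixTable[matrixIndex] = newPose;   // a few dozen bytes per object per frame
}
```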
Well, PowerVR Series 2 was also kind of a "last of its kind", with unique features like practically unlimited "fill rate" for opaque polygons, order-independent transparency (OIT) for translucent ones, and "modifier volume" shadows, all of which were abandoned in later generations of ImgTec's products.
I don't know if Pro-1000 operates in passes per se but there is this concept of a contour texture. Alpha blending also works entirely differently than expected. One theory is that it actually renders only every other pixel and then biases the rendered pixels toward visible or black. See here.
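That theory is essentially a form of screen-door (stippled) transparency. A rough sketch of the idea, with invented names and an invented 0.5 threshold (this is one theory, not how the hardware is known to work):

```cpp
// Sketch of the "every other pixel" theory: a translucent polygon only
// touches pixels on a checkerboard pattern, and each touched pixel is
// pushed toward fully visible or toward black rather than being smoothly
// blended. Names and the threshold are invented for illustration.
struct Color { float r, g, b; };

Color ShadeTranslucentPixel(int x, int y, Color src, Color dst, float alpha)
{
    if (((x + y) & 1) != 0)
        return dst;                     // skipped pixel keeps the background

    if (alpha >= 0.5f)
        return src;                     // biased toward fully visible
    return Color{0.0f, 0.0f, 0.0f};     // biased toward black
}
```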
They must be based on some other existing IP. Seems highly unlikely that at that point in time Sega would have developed their own from scratch.
The command list you linked looks like a set of instructions for the GPU to execute. Pro-1000 doesn't have that concept. The display lists aren't arbitrary lists of commands to follow. They're a data structure that is traversed. At the high level, there is a linked list of viewports to render. Each viewport is described by the exact same struct (which sets lighting, fog, and other parameters) that points to a list of top-level nodes to render. Each of these is a hierarchical set of culling nodes that terminate in a pointer to a model to render (stored in polygon RAM or in VROM). And each node can, IIRC, point to yet another list. So it's a tree-like structure of nodes and lists of nodes. Each time you go one node deeper, you apply a transform matrix (stored elsewhere and referred to by its index), which translates pretty directly to the OpenGL matrix stack.
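A rough sketch of that traversal, using invented, simplified types (this is the shape of the structure described above, not the actual Real3D memory layout): viewports form a list, each viewport points to lists of nodes, nodes nest and refer to a matrix by index, and leaves point at pre-stored models.

```cpp
#include <vector>

// Invented, simplified stand-ins for the structures described above;
// this is the shape of the data, not the actual Real3D memory layout.
struct Matrix { float m[4][4]; };
struct Model  {};                        // placeholder for a pre-stored mesh

struct Node {
    int matrixIndex = -1;                // transform applied when descending into this node
    const Model* model = nullptr;        // leaf: mesh stored in polygon RAM / VROM
    std::vector<Node> children;          // further nodes, or a list of nodes
};

struct Viewport {
    // lighting, fog and other parameters omitted
    std::vector<Node> topLevelNodes;
};

// Hypothetical traversal: each level deeper pushes that node's matrix,
// much like glPushMatrix/glMultMatrix, and pops it on the way back up.
void DrawNode(const Node& node, std::vector<Matrix>& matrixStack,
              const std::vector<Matrix>& matrixTable)
{
    if (node.matrixIndex >= 0)
        matrixStack.push_back(matrixTable[node.matrixIndex]);

    if (node.model) {
        // the hardware would transform, light and rasterize the model here
    }

    for (const Node& child : node.children)
        DrawNode(child, matrixStack, matrixTable);

    if (node.matrixIndex >= 0)
        matrixStack.pop_back();
}

void DrawFrame(const std::vector<Viewport>& viewports,
               const std::vector<Matrix>& matrixTable)
{
    for (const Viewport& vp : viewports) {           // linked list of viewports
        std::vector<Matrix> matrixStack;
        for (const Node& node : vp.topLevelNodes)     // list of top-level nodes
            DrawNode(node, matrixStack, matrixTable);
    }
}
```

The appeal, per the description above, is that the CPU only edits this structure (matrices, node parameters, viewport settings) each frame; the per-vertex transform and lighting work happens entirely on the Real3D hardware during traversal.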
For example, Supermodel's legacy renderer breaks up a single model into two models based on transparency state (the rest of the parameters it passes to the shaders on a per-vertex basis). I think the new renderer does something similar as well. I think it is normal for modern rendering engines to sort meshes based on their state parameters (lighting, shader used, etc.). That wasn't necessary on Pro-1000.

The vertex parameters such as colour / normal are passed as vertex attribs with the new renderer. The rest of the polygon rendering bits are used to calculate a 64-bit value which acts as a sort of bucket ID. When parsing the models, each polygon with the same bucket ID gets put in the same bucket. So this groups polygons together which have the same texture number / rendering parameters, which means they can get rendered in the same draw call. You can just create a new draw call every time you find a polygon with a different texture ID, but the bucket approach reduces draw calls by up to 30%.
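A minimal sketch of that bucketing idea (invented types and field names, not Supermodel's actual code): the render-state bits are packed into a 64-bit key, and polygons sharing a key are accumulated into the same bucket so each bucket can be issued as a single draw call.

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

// Invented stand-ins for illustration; the real renderer packs its own
// selection of polygon state bits into the 64-bit key.
struct Vertex  { float x, y, z, nx, ny, nz, r, g, b, a, u, v; };
struct Polygon {
    uint32_t textureId;
    uint32_t renderFlags;    // e.g. blending / contour / fog enables
    Vertex   verts[4];
};

// Pack the state that must match for polygons to share a draw call.
uint64_t MakeBucketId(const Polygon& p)
{
    return (uint64_t(p.textureId) << 32) | p.renderFlags;
}

// Group polygons by bucket ID; each bucket is then issued as one draw
// call instead of switching state on every polygon.
std::unordered_map<uint64_t, std::vector<const Polygon*>>
BuildBuckets(const std::vector<Polygon>& polys)
{
    std::unordered_map<uint64_t, std::vector<const Polygon*>> buckets;
    for (const Polygon& p : polys)
        buckets[MakeBucketId(p)].push_back(&p);
    return buckets;
}
```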