gm_matthew wrote: ↑Fri May 23, 2025 1:49 pm
I'm skeptical that most of the Hikaru video ASICs actually run at only 41.6 MHz. For a start, at that speed why would they even need heatsinks? Also, if "Europe"'s SDRAM is clocked at 83.3 MHz then surely "Europe" itself would have to be running that fast in order to use it. Perhaps the ASICs have internal clock multipliers?
It sounds weird and unusual, but it really is as was said above.
For example, the Europe chip has 4 clock inputs: main clock 41.6 MHz, PCI clock 33 MHz, SDRAM clock 83.3 MHz, and pixel clock ~52/32 MHz depending on video mode.
Antarctic similarly has 4 clocks: main clock, SDRAM clock, and cache RAM clock (all the same 83.3 MHz), plus a 66 MHz PCI clock.
Africa, 2x Atlantis, America, and 2x Australia are homogeneous and use only the 41.6 MHz clock, which makes me think that at some point they were designed as one device, while Antarctic and Europe were glued onto them later.
Why do they need heatsinks? Perhaps because they are somewhat outdated Fujitsu gate arrays, and even at a 41 MHz clock they produce too much heat.
Also, for 1997 that was quite a respectable GPU clock.
Only Antarctic is a NEC CMOS gate array, which is probably newer and faster, and thus able to run at 83 MHz.
gm_matthew wrote: ↑Fri May 23, 2025 1:49 pmAt least some of the Real3D ASICs must be using clock multipliers because otherwise Model 3 Step 1.5 would be limited to a maximum fillrate of 33 megapixels per second or 573,672 pixels per frame, and I know from testing that Scud Race definitely outputs more pixels than that.
...or they may use some hidden surface removal algorithm to avoid generating unneeded fillrate load.
BTW, 573K pix/frame is very good for the time. For comparison, the Dreamcast's PowerVR2 had a real fill rate of up to ~1.6 Mpix/frame @ 60 Hz, but thanks to HSR it was able to process scenes with many times more complex opaque geometry (up to 53 Mpix/frame of depth complexity).
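The per-frame figures being traded here all come from the same simple division: peak pixels per frame = fill clock / refresh rate, assuming one pixel written per clock. A quick sketch (the helper function is mine; the 100 MHz / 60 Hz PowerVR2 numbers are the commonly cited ones, and the Model 3 refresh is just back-calculated from gm_matthew's 573,672 figure):

```python
# Back-of-the-envelope fillrate arithmetic, assuming one pixel per clock.
def pixels_per_frame(fill_clock_hz: float, refresh_hz: float) -> float:
    """Peak pixels a renderer can write in one frame."""
    return fill_clock_hz / refresh_hz

# Model 3's refresh implied by the quoted 573,672 px/frame ceiling:
refresh = 33_000_000 / 573_672                     # ~57.52 Hz
print(round(pixels_per_frame(33_000_000, refresh)))  # 573672

# Dreamcast PowerVR2 for comparison (100 MHz core, 60 Hz assumed):
print(round(pixels_per_frame(100_000_000, 60)))      # 1666667 (~1.67 Mpix/frame)
```

The PowerVR2 result matches the ~1.6 Mpix/frame raw figure above; the much larger 53 Mpix/frame number is scene depth complexity that HSR discards before it ever costs fillrate.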