gm_matthew wrote: ↑Sun May 25, 2025 4:57 pm
What doesn't make sense to me is Hikaru's graphics chips running at only 41.6 MHz when the "less powerful" NAOMI/Dreamcast runs its PowerVR2 chip at 100MHz, nor Model 3's graphics chips running at only 33 MHz when the Model 2 runs at 50 MHz.

the main problem of the HOLLY chipset is its lack of hardware T&L, and that was also quite a hard task for the CPUs of the time: a 200MHz SH-4 could transform and light only about 30,000 or so vertices per scene with "good enough" lighting at 60Hz, which doesn't look impressive, right?
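to see why that vertex count is a real ceiling, here's a quick back-of-envelope check; the 30,000-vertex and 200MHz figures are from the post above, the rest is plain arithmetic:

```python
# Rough per-vertex cycle budget for software T&L on the SH-4.
cpu_hz = 200_000_000          # 200 MHz SH-4
vertices_per_frame = 30_000   # "good enough" lighting, per the post
frames_per_sec = 60

vertices_per_sec = vertices_per_frame * frames_per_sec  # 1.8 million/s
cycles_per_vertex = cpu_hz / vertices_per_sec           # ~111 cycles

print(f"~{cycles_per_vertex:.0f} CPU cycles per transformed and lit vertex")
```

~111 cycles per vertex sounds workable until you remember game logic, physics, audio, and display-list generation all come out of the same budget.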
and it was also a huge PITA for game developers, because besides T&L the main CPU had a lot of other tasks to do.
so, Hikaru was relevant only during 1999-2000, when Sega had no other modern "premium class" arcade platform with hardware T&L.
but it seems this platform was not good or powerful enough to meet the overheated expectations attached to the "Sega Model 4" name, so it was never officially called that, and was quickly retired.
then Naomi 2 was released, solving the T&L problem, and Hikaru was immediately abandoned.
and IMO not only because Hikaru was "more expensive" (the price of arcade PCBs was never a problem for Sega), but because it turned out weaker than Naomi 2, was harder to work with, lacked modern software tools and libraries, etc.
PS: as for the PVR2 CLX and its 100MHz - it is worth remembering that it had unified memory, actively shared by: the TA (Tile Accelerator), writing display-list data sent by the main CPU; the ISP (rasterizer), fetching the display list to do HSR and polygon sorting; the TSP (texture processor), fetching polygon parameters and texture data and writing out the rendered frame; and the CRTC, reading the framebuffer to display it onscreen.
so, those 100MHz of VRAM bandwidth were divided between all of these subsystems, while in Model 3 or Hikaru, for example, the texture RAM was accessed exclusively by the texture "processor".
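to put a rough number on that sharing: assuming the commonly cited 64-bit bus on the Dreamcast's 100MHz VRAM (an assumption, not from the post), peak bandwidth works out as below; of the clients listed, CRTC scanout is the easiest one to pin down exactly:

```python
# Peak bandwidth of 100 MHz unified VRAM, assuming a 64-bit bus
# (a commonly cited figure for Dreamcast; peak, not sustained).
bus_hz = 100_000_000
bus_bytes = 8                      # 64-bit bus width (assumption)
peak_bw = bus_hz * bus_bytes       # 800 MB/s shared by every client

# CRTC scanout of a 640x480, 16 bpp framebuffer at 60 Hz permanently
# consumes part of that budget, before TA/ISP/TSP traffic is counted.
scanout_bw = 640 * 480 * 2 * 60    # bytes/s

print(f"peak unified bandwidth : {peak_bw / 1e6:.0f} MB/s")
print(f"CRTC scanout alone     : {scanout_bw / 1e6:.1f} MB/s")
print(f"left for TA/ISP/TSP    : {(peak_bw - scanout_bw) / 1e6:.0f} MB/s")
```

on Model 3 or Hikaru that scanout and render traffic never touches the texture RAM, so the texture unit keeps its full dedicated bandwidth even at a lower clock.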